Introduction
In Tutorial 05, we built a REST API that stored data in a JSON file (items.json). That approach works for learning, but it has real limitations: no concurrent access safety, no way to query or filter data efficiently, and no data integrity guarantees. In this module, we move to PostgreSQL — a production-grade relational database — and connect to it from PHP using PDO (PHP Data Objects).
```
Module 05 (flat file):            Module 06 (database):

Browser --> Apache --> PHP        Browser --> Apache --> PHP
                |                                 |
           Read/write                      PDO connection
           items.json                             |
                |                            PostgreSQL
           JSON array                    +---------------+
           in a file                     | stories table |
                                         | id, title,    |
                                         | status, ...   |
                                         +---------------+
```
The API surface stays the same — clients still send the same HTTP requests and receive the same JSON responses. The only thing that changes is how we store and retrieve data on the server side.
Prerequisites
Before starting, make sure you have:
- PHP installed with the `pdo_pgsql` extension. Check with: `php -m | grep pdo_pgsql`
- PostgreSQL installed and running. Check with: `psql --version`
- A PostgreSQL user that can create databases (typically the `postgres` superuser)
Creating the Database
First, create the PostgreSQL database and load the schema:
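The exact commands depend on your setup; a typical sequence, assuming a local `postgres` superuser and a database named `tutorial06` (the name is illustrative), looks like this:

```shell
# Create the database (name is illustrative -- use your own)
createdb -U postgres tutorial06

# Load the table definition and sample rows
psql -U postgres -d tutorial06 -f schema.sql
```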
The schema.sql file creates a stories table and inserts three sample rows:
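A schema consistent with the constraints described below would look roughly like this; the exact `status` and `priority` value sets, and the sample titles, are assumptions:

```sql
-- stories table: columns follow the constraints described in the text
CREATE TABLE stories (
    id         SERIAL PRIMARY KEY,
    title      TEXT NOT NULL,
    status     TEXT NOT NULL DEFAULT 'todo'
               CHECK (status IN ('todo', 'in_progress', 'done')),
    priority   INTEGER NOT NULL DEFAULT 3
               CHECK (priority BETWEEN 1 AND 5),
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Three sample rows (titles are placeholders)
INSERT INTO stories (title, status, priority) VALUES
    ('First story',  'todo',        1),
    ('Second story', 'in_progress', 2),
    ('Third story',  'done',        3);
```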
Verify the data loaded correctly:
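For example, with `psql` (database name assumed from the step above):

```shell
# Should print the three sample rows
psql -U postgres -d tutorial06 -c "SELECT id, title, status FROM stories;"
```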
Key points about the schema:
- `SERIAL` auto-generates incrementing integer IDs
- `NOT NULL` on title enforces that every story must have a name
- `CHECK` constraints limit priority and status to valid values: the database itself rejects bad data
- `DEFAULT CURRENT_TIMESTAMP` auto-fills the creation time
PDO Connection
PHP Data Objects (PDO) is PHP's standard database abstraction layer. It provides a consistent interface for connecting to different databases (PostgreSQL, MySQL, SQLite, etc.). The connection requires a DSN (Data Source Name) string that specifies the driver, host, and database name:
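A minimal connection sketch, assuming a database named `tutorial06` and the `postgres` user (both illustrative), with the password read from the environment:

```php
<?php
// Connection parameters -- names are illustrative; adjust to your setup
$host = 'localhost';
$db   = 'tutorial06';
$user = 'postgres';
$pass = getenv('DB_PASSWORD'); // never hard-code credentials

// DSN: driver, host, and database name
$dsn = "pgsql:host=$host;dbname=$db";

$pdo = new PDO($dsn, $user, $pass, [
    PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
    PDO::ATTR_EMULATE_PREPARES   => false,
]);
```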
The three PDO options we set are important:
| Option | Value | Why |
|---|---|---|
| `ATTR_ERRMODE` | `ERRMODE_EXCEPTION` | Throws exceptions on errors instead of silently failing. This lets us use try/catch for proper error handling. |
| `ATTR_DEFAULT_FETCH_MODE` | `FETCH_ASSOC` | Returns rows as associative arrays (`$row['title']`) instead of both numeric and associative indexes. |
| `ATTR_EMULATE_PREPARES` | `false` | Uses real database-level prepared statements instead of PHP emulation. This is more secure and lets PostgreSQL optimize the query plan. |
Credentials such as the database password should come from environment variables rather than being hard-coded. Set them in your Apache config, a `.env` file, or your shell: `export DB_PASSWORD=secret`
CRUD Operations with Prepared Statements
PDO prepared statements separate SQL structure from data values. You write the query with named placeholders (:title, :id), then pass the actual values in a separate step. The database driver handles escaping and type safety.
INSERT — Create a Story
RETURNING * is a PostgreSQL feature that returns the inserted row, including the auto-generated id and created_at. This saves a second query.
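A sketch of the INSERT, assuming a connected PDO instance in `$pdo` and the decoded JSON request body in `$input` (the default `status` and `priority` values are assumptions):

```php
<?php
// $pdo: connected PDO instance; $input: decoded JSON request body
$stmt = $pdo->prepare(
    'INSERT INTO stories (title, status, priority)
     VALUES (:title, :status, :priority)
     RETURNING *'
);
$stmt->execute([
    ':title'    => $input['title'],
    ':status'   => $input['status']   ?? 'todo',
    ':priority' => $input['priority'] ?? 3,
]);
$story = $stmt->fetch(); // the full inserted row, including id and created_at

http_response_code(201);
echo json_encode($story);
```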
Compare to Tutorial 05 (JSON file):
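The flat-file version had to read and rewrite the entire file for a single insert; roughly (the exact Tutorial 05 code may differ):

```php
<?php
// Tutorial 05 equivalent: load the whole file, append, write it all back
$items = json_decode(file_get_contents('items.json'), true);
$input['id'] = count($items) ? max(array_column($items, 'id')) + 1 : 1;
$items[] = $input;
file_put_contents('items.json', json_encode($items, JSON_PRETTY_PRINT));
```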
SELECT — Read Stories
Notice: query() is used when there are no parameters. prepare() + execute() is used when there are user-supplied values (like :id). This distinction matters for security.
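A sketch of both read paths, assuming the `$pdo` connection and an id arriving via the query string:

```php
<?php
// No user input involved -> query() is fine
$stories = $pdo->query('SELECT * FROM stories ORDER BY id')->fetchAll();

// User-supplied id -> always prepare() + execute()
$stmt = $pdo->prepare('SELECT * FROM stories WHERE id = :id');
$stmt->execute([':id' => $_GET['id']]);
$story = $stmt->fetch(); // false if no row matched
```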
UPDATE — Modify a Story
The merge pattern (check existing, then apply partial updates) lets clients send only the fields they want to change — a partial PUT or PATCH-like behavior.
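One way to sketch the merge pattern, assuming `$pdo`, an `$id` from the query string, and `$input` as the decoded body (the updatable field list is an assumption):

```php
<?php
// 1. Fetch the existing row (404 if the id does not exist)
$stmt = $pdo->prepare('SELECT * FROM stories WHERE id = :id');
$stmt->execute([':id' => $id]);
$existing = $stmt->fetch();
if ($existing === false) {
    http_response_code(404);
    exit(json_encode(['error' => 'Not found']));
}

// 2. Merge: client-supplied fields override, everything else is kept
$merged = array_merge($existing, array_intersect_key(
    $input, array_flip(['title', 'status', 'priority'])
));

// 3. Write the merged row back
$stmt = $pdo->prepare(
    'UPDATE stories SET title = :title, status = :status, priority = :priority
     WHERE id = :id RETURNING *'
);
$stmt->execute([
    ':title'    => $merged['title'],
    ':status'   => $merged['status'],
    ':priority' => $merged['priority'],
    ':id'       => $id,
]);
echo json_encode($stmt->fetch());
```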
DELETE — Remove a Story
RETURNING * on DELETE gives us the deleted row, so we can confirm what was removed. If fetch() returns false, the ID didn't exist.
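A sketch, again assuming `$pdo` and an `$id` from the query string:

```php
<?php
$stmt = $pdo->prepare('DELETE FROM stories WHERE id = :id RETURNING *');
$stmt->execute([':id' => $id]);
$deleted = $stmt->fetch();

if ($deleted === false) {
    http_response_code(404);        // no row had that id
    echo json_encode(['error' => 'Not found']);
} else {
    echo json_encode(['deleted' => $deleted]); // confirm what was removed
}
```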
Building the API Endpoints
The full db-demo.php script ties the CRUD operations together with HTTP routing. The structure mirrors Tutorial 05 — a switch on the request method — but replaces file operations with PDO queries:
The key structural difference from Tutorial 05: the entire switch block is wrapped in a try/catch for PDOException. If any query fails (bad SQL, constraint violation, connection lost), PHP catches the exception and returns a 500 error with details instead of crashing.
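The overall shape of the script might look like this skeleton (handler bodies elided; the DSN is as described in the PDO Connection section):

```php
<?php
header('Content-Type: application/json');

try {
    $pdo = new PDO($dsn, $user, $pass, $options); // as in the PDO Connection section
    $input = json_decode(file_get_contents('php://input'), true);

    switch ($_SERVER['REQUEST_METHOD']) {
        case 'GET':
            // list all stories, or one story if an id was supplied
            break;
        case 'POST':
            // INSERT ... RETURNING *
            break;
        case 'PUT':
            // merge pattern, then UPDATE ... RETURNING *
            break;
        case 'DELETE':
            // DELETE ... RETURNING *
            break;
        default:
            http_response_code(405);
            echo json_encode(['error' => 'Method not allowed']);
    }
} catch (PDOException $e) {
    // Any failed query lands here instead of crashing the script
    http_response_code(500);
    echo json_encode(['error' => $e->getMessage()]);
}
```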
Running and Testing
Since db-demo.php is a regular PHP file, Apache serves it automatically if it's in your document root. For local development, you can also use PHP's built-in server:
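For example, from the directory containing db-demo.php (the port is arbitrary):

```shell
# Serve the current directory on port 8000
php -S localhost:8000
```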
Testing with curl
List all stories:
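Assuming the built-in server on port 8000:

```shell
curl http://localhost:8000/db-demo.php
```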
Expected response:
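The exact rows depend on your schema.sql seed data, but with the columns described earlier the body has roughly this shape (one object per row; values are illustrative):

```json
[
    {
        "id": 1,
        "title": "First story",
        "status": "todo",
        "priority": 1,
        "created_at": "2024-01-15 09:30:00"
    }
]
```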
Get a single story:
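Assuming the id is passed as a query parameter:

```shell
curl "http://localhost:8000/db-demo.php?id=1"
```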
Create a new story:
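Field names follow the schema described earlier; the title is a placeholder:

```shell
curl -X POST http://localhost:8000/db-demo.php \
  -H "Content-Type: application/json" \
  -d '{"title": "New story", "priority": 2}'
```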
Expected response (201 Created):
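The inserted row comes back with the auto-generated `id` and `created_at` filled in; assuming three seed rows, the new id would be 4 (values illustrative):

```json
{
    "id": 4,
    "title": "New story",
    "status": "todo",
    "priority": 2,
    "created_at": "2024-01-15 10:12:45"
}
```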
Update an existing story:
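Thanks to the merge pattern, only the changed field needs to be sent:

```shell
curl -X PUT "http://localhost:8000/db-demo.php?id=1" \
  -H "Content-Type: application/json" \
  -d '{"status": "done"}'
```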
Delete a story:
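Again assuming the id comes in as a query parameter:

```shell
curl -X DELETE "http://localhost:8000/db-demo.php?id=1"
```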
Expected response:
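One plausible shape, echoing the row that `RETURNING *` handed back (field values illustrative):

```json
{
    "deleted": {
        "id": 1,
        "title": "First story",
        "status": "todo",
        "priority": 1,
        "created_at": "2024-01-15 09:30:00"
    }
}
```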
SQL Injection Prevention
SQL injection is one of the most common and dangerous web vulnerabilities. It happens when user input is concatenated directly into SQL strings, allowing attackers to execute arbitrary SQL commands.
WRONG — String concatenation (vulnerable)
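A sketch of the vulnerable pattern (do not use this):

```php
<?php
// VULNERABLE -- user input becomes part of the SQL text itself
$id = $_GET['id'];
$result = $pdo->query("SELECT * FROM stories WHERE id = $id");

// An attacker can send:  ?id=1;DROP TABLE stories;--
// The query becomes:     SELECT * FROM stories WHERE id = 1;DROP TABLE stories;--
```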
RIGHT — Prepared statements (safe)
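The same lookup with a prepared statement; the value is bound separately and can never alter the SQL structure:

```php
<?php
// SAFE -- :id is a placeholder; the value is bound in a separate step
$stmt = $pdo->prepare('SELECT * FROM stories WHERE id = :id');
$stmt->execute([':id' => $_GET['id']]);
$story = $stmt->fetch();
```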
With prepared statements, the SQL structure and the data are sent to the database separately. The database parses the SQL first (with placeholders), then binds the values. There is no way for user input to alter the SQL structure.
Comparison: JSON File vs Database
Here's how Tutorial 05's flat-file approach compares to Tutorial 06's PostgreSQL approach:
| Aspect | JSON File (Tutorial 05) | PostgreSQL (Tutorial 06) |
|---|---|---|
| Data persistence | Stored in a `.json` file on disk. Entire file read/written on every request. | Stored in database tables with ACID guarantees. Only affected rows are read/written. |
| Concurrent access | Unsafe. Two simultaneous writes can corrupt the file or lose data (last-write-wins). | Safe. PostgreSQL handles concurrent reads/writes with row-level locking and MVCC. |
| Query capability | Must load all data into memory, then loop through arrays in PHP to search/filter. | Full SQL: `WHERE`, `ORDER BY`, `JOIN`, aggregations, indexes for fast lookups. |
| Data validation | Must validate in PHP code. Nothing stops bad data from being written to the file. | `NOT NULL`, `CHECK` constraints, foreign keys. Database rejects invalid data regardless of how it's inserted. |
| Setup complexity | Zero setup. Just create a `.json` file. No external services needed. | Requires PostgreSQL installed and running, database created, schema loaded, PHP extension enabled. |
| Scalability | Breaks down quickly. Reading a 10 MB JSON file on every request is slow. | Handles millions of rows with indexes. Connection pooling supports many concurrent users. |
| Production-ready | No. Suitable for prototypes and tutorials only. | Yes. PostgreSQL is used in production by companies of all sizes. |