Introduction
PostgreSQL is the world's most advanced open-source relational database. Whether you are debugging a slow query at 2 AM or setting up a brand-new schema, having the right syntax at your fingertips saves you from context-switching into documentation.
This cheat sheet is designed as a fast-lookup reference. Every snippet was tested against PostgreSQL 16 (most syntax works on 14+). Bookmark it, print it, or keep it open in a second tab — it will pay for itself the first time you forget the ON CONFLICT upsert syntax.
Who is this for?
Backend engineers, full-stack developers, data engineers, and anyone who works with PostgreSQL daily. If you can write a basic SELECT, this guide will level you up with CTEs, window functions, JSONB operators, and performance tuning patterns.
Command-Line (psql) Basics
Connecting to a Database
```bash
# Connect to a local database
psql -U postgres -d mydb

# Connect to a remote server
psql -h db.example.com -p 5432 -U app_user -d production

# Using a connection string
psql "postgresql://app_user:secret@db.example.com:5432/production?sslmode=require"
```

Essential Meta-Commands
| Command | Description |
|---|---|
| \l | List all databases |
| \c dbname | Switch to a different database |
| \dt | List tables in the current schema |
| \dt schema.* | List tables in a specific schema |
| \d tablename | Describe table (columns, types, indexes) |
| \di | List all indexes |
| \df | List all functions |
| \du | List all roles/users |
| \dn | List all schemas |
| \x | Toggle expanded (vertical) output |
| \timing | Toggle query execution time display |
| \e | Open last query in $EDITOR |
| \i file.sql | Execute SQL from a file |
| \o output.txt | Send query results to a file |
| \q | Quit psql |
Pro tip
Use \x auto so psql automatically switches to vertical output when rows are too wide for your terminal.
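To make this the default for every session, the setting can go in your ~/.psqlrc (a sketch; adjust to taste):

```sql
-- ~/.psqlrc: run automatically at psql startup
\x auto      -- expanded output only when rows are too wide
\timing on   -- always show query execution time
```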
Database & Schema Management
Creating and Dropping Databases
```sql
-- Create a new database
CREATE DATABASE myapp
    WITH OWNER = app_user
    ENCODING = 'UTF8'
    LC_COLLATE = 'en_US.UTF-8'
    LC_CTYPE = 'en_US.UTF-8'
    TEMPLATE = template0;

-- Drop a database (must not be connected to it)
DROP DATABASE IF EXISTS myapp;
```

Working with Schemas
```sql
-- Create a schema
CREATE SCHEMA IF NOT EXISTS billing;

-- Create a schema owned by a specific role
CREATE SCHEMA inventory AUTHORIZATION warehouse_user;

-- Drop a schema and everything inside it
DROP SCHEMA IF EXISTS billing CASCADE;

-- Set default search path for the session
SET search_path TO billing, public;

-- Set default search path permanently for a role
ALTER ROLE app_user SET search_path TO billing, public;
```

Table Operations (DDL)
Creating Tables
```sql
CREATE TABLE employees (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    first_name VARCHAR(100) NOT NULL,
    last_name VARCHAR(100) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    department VARCHAR(80) NOT NULL,
    salary NUMERIC(12,2) NOT NULL DEFAULT 0,
    metadata JSONB DEFAULT '{}',
    is_active BOOLEAN NOT NULL DEFAULT TRUE,
    hired_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```

Common Data Types
| Type | Use Case |
|---|---|
| UUID | Primary keys (distributed-safe) |
| SERIAL / BIGSERIAL | Auto-incrementing integer PKs |
| VARCHAR(n) | Variable-length string with max length |
| TEXT | Unlimited-length string |
| INTEGER / BIGINT | Whole numbers (32-bit / 64-bit) |
| NUMERIC(p,s) | Exact decimal (money, precision math) |
| BOOLEAN | true / false |
| TIMESTAMPTZ | Timestamp with time zone (always use this) |
| DATE | Date only (no time) |
| JSONB | Binary JSON (indexable, fast queries) |
| TEXT[] | Array of text values |
| INET | IPv4 or IPv6 address |
| TSQUERY / TSVECTOR | Full-text search |
Always use TIMESTAMPTZ
Never use TIMESTAMP without time zone. It silently drops timezone info, causing bugs when your servers or users span multiple time zones. Always use TIMESTAMPTZ.
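A quick sketch of the pitfall, using a hypothetical temp table (the rendered zoned value depends on your session's TimeZone setting):

```sql
-- Same literal, two column types
CREATE TEMP TABLE ts_demo (
    plain TIMESTAMP,     -- no time zone
    zoned TIMESTAMPTZ    -- with time zone
);
INSERT INTO ts_demo VALUES ('2026-01-01 10:00:00+08', '2026-01-01 10:00:00+08');

-- plain keeps 10:00 and silently discards the +08 offset;
-- zoned stores the UTC instant (02:00 UTC) and renders it
-- in the session's time zone
SELECT plain, zoned FROM ts_demo;
```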
Constraints
```sql
-- Primary key
ALTER TABLE orders ADD PRIMARY KEY (id);

-- Foreign key
ALTER TABLE orders
    ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers(id)
    ON DELETE CASCADE;

-- Unique constraint
ALTER TABLE employees ADD CONSTRAINT uq_email UNIQUE (email);

-- Check constraint
ALTER TABLE products
    ADD CONSTRAINT chk_price_positive CHECK (price > 0);

-- Not null
ALTER TABLE employees ALTER COLUMN first_name SET NOT NULL;
```

Altering Tables
```sql
-- Add a column
ALTER TABLE employees ADD COLUMN phone VARCHAR(20);

-- Drop a column
ALTER TABLE employees DROP COLUMN phone;

-- Rename a column
ALTER TABLE employees RENAME COLUMN first_name TO given_name;

-- Change column type
ALTER TABLE employees ALTER COLUMN salary TYPE NUMERIC(14,2);

-- Set / drop default
ALTER TABLE employees ALTER COLUMN is_active SET DEFAULT TRUE;
ALTER TABLE employees ALTER COLUMN is_active DROP DEFAULT;

-- Rename table
ALTER TABLE employees RENAME TO staff;
```

Dropping and Truncating
```sql
-- Drop a table (and all dependent objects)
DROP TABLE IF EXISTS employees CASCADE;

-- Truncate (delete all rows, reset sequences)
TRUNCATE TABLE employees RESTART IDENTITY CASCADE;
```

Data Manipulation (DML)
Inserting Data
```sql
-- Single row
INSERT INTO employees (first_name, last_name, email, department, salary)
VALUES ('Jose', 'Gayares', 'jose.gayares@example.com', 'Engineering', 95000);

-- Multiple rows
INSERT INTO employees (first_name, last_name, email, department, salary)
VALUES
    ('Anna', 'Reyes', 'anna.reyes@example.com', 'HR', 65000),
    ('Marco', 'Santos', 'marco.santos@example.com', 'Engineering', 88000),
    ('Lara', 'Cruz', 'lara.cruz@example.com', 'Design', 72000);

-- Insert from a SELECT
INSERT INTO employee_archive (id, first_name, last_name, email)
SELECT id, first_name, last_name, email
FROM employees
WHERE is_active = FALSE;
```

Upsert (INSERT ... ON CONFLICT)
```sql
-- Update on conflict (upsert)
INSERT INTO employees (email, first_name, last_name, department, salary)
VALUES ('jose.gayares@example.com', 'Jose', 'Gayares', 'Engineering', 105000)
ON CONFLICT (email)
DO UPDATE SET
    salary = EXCLUDED.salary,
    department = EXCLUDED.department,
    updated_at = NOW();

-- Do nothing on conflict (skip duplicates)
INSERT INTO employees (email, first_name, last_name, department, salary)
VALUES ('jose.gayares@example.com', 'Jose', 'Gayares', 'Engineering', 105000)
ON CONFLICT (email) DO NOTHING;
```

When to use upsert
Upserts are essential for idempotent data pipelines, syncing external data, and handling race conditions in concurrent inserts. The EXCLUDED keyword refers to the row that would have been inserted.
Updating Records
```sql
-- Basic update
UPDATE employees
SET salary = salary * 1.10, updated_at = NOW()
WHERE department = 'Engineering';

-- Update with a subquery
UPDATE orders
SET status = 'cancelled'
WHERE customer_id IN (
    SELECT id FROM customers WHERE is_banned = TRUE
);

-- Update with FROM (join-style update)
UPDATE inventory i
SET quantity = i.quantity - o.amount
FROM order_items o
WHERE o.product_id = i.product_id
    AND o.order_id = 42;
```

Deleting Records
```sql
-- Basic delete
DELETE FROM employees WHERE is_active = FALSE;

-- Delete with RETURNING (get deleted rows back)
DELETE FROM sessions
WHERE expires_at < NOW()
RETURNING id, user_id, expires_at;

-- Delete with a subquery
DELETE FROM order_items
WHERE order_id IN (
    SELECT id FROM orders WHERE status = 'cancelled'
);
```

RETURNING Clause
```sql
-- Works with INSERT, UPDATE, and DELETE
INSERT INTO employees (first_name, last_name, email, department, salary)
VALUES ('Pia', 'Mendoza', 'pia.mendoza@example.com', 'Marketing', 68000)
RETURNING id, email, created_at;

UPDATE employees SET salary = salary + 5000
WHERE department = 'Engineering'
RETURNING id, first_name, salary AS new_salary;
```

Querying Data (DQL)
Basic SELECT
```sql
-- Select specific columns
SELECT first_name, last_name, salary
FROM employees
WHERE department = 'Engineering'
    AND salary > 80000;

-- Select all columns (avoid in production code)
SELECT * FROM employees LIMIT 10;

-- Aliased columns
SELECT
    first_name || ' ' || last_name AS full_name,
    salary,
    ROUND(salary / 12, 2) AS monthly_pay
FROM employees;
```

Filtering
```sql
-- Pattern matching
SELECT * FROM employees WHERE email LIKE '%@gmail.com';
SELECT * FROM employees WHERE last_name ILIKE '%santos%'; -- case-insensitive

-- IN list
SELECT * FROM employees WHERE department IN ('Engineering', 'Design', 'Data');

-- BETWEEN (inclusive on both ends)
SELECT * FROM employees WHERE salary BETWEEN 60000 AND 100000;

-- NULL checks
SELECT * FROM employees WHERE phone IS NULL;
SELECT * FROM employees WHERE phone IS NOT NULL;

-- Combining conditions
SELECT * FROM employees
WHERE (department = 'Engineering' OR department = 'Data')
    AND salary >= 80000
    AND is_active = TRUE;
```

Sorting and Pagination
```sql
-- Sort ascending / descending
SELECT * FROM employees ORDER BY salary DESC, last_name ASC;

-- Nulls first / last
SELECT * FROM employees ORDER BY phone NULLS LAST;

-- Pagination with LIMIT and OFFSET
SELECT * FROM employees
ORDER BY created_at DESC
LIMIT 20 OFFSET 40; -- page 3 (20 per page)

-- Keyset pagination (better performance on large tables)
SELECT * FROM employees
WHERE created_at < '2026-01-01'
ORDER BY created_at DESC
LIMIT 20;
```

Avoid OFFSET on large tables
OFFSET 100000 still scans and discards 100k rows. For large datasets, use keyset (cursor-based) pagination with a WHERE clause on an indexed column instead.
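When the sort column is not unique, rows with equal values can be skipped or duplicated between pages. A sketch of keyset pagination with a unique tie-breaker, assuming the employees schema above (the literals are placeholders for the last row of the previous page):

```sql
-- Row-value comparison: "strictly before the last row we served",
-- matching the ORDER BY (created_at DESC, id DESC)
SELECT id, first_name, created_at
FROM employees
WHERE (created_at, id) < ('2026-01-01 00:00:00+00', 'a1b2c3d4-0000-0000-0000-000000000000')
ORDER BY created_at DESC, id DESC
LIMIT 20;
```

A composite index on (created_at DESC, id DESC) lets the planner satisfy this with an index scan.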
Grouping and Aggregation
```sql
-- Group by with aggregates
SELECT
    department,
    COUNT(*) AS headcount,
    ROUND(AVG(salary), 2) AS avg_salary,
    MAX(salary) AS max_salary,
    MIN(salary) AS min_salary,
    SUM(salary) AS total_payroll
FROM employees
WHERE is_active = TRUE
GROUP BY department
ORDER BY headcount DESC;

-- HAVING (filter groups)
SELECT department, COUNT(*) AS headcount
FROM employees
GROUP BY department
HAVING COUNT(*) >= 5;

-- Aggregate with FILTER
SELECT
    COUNT(*) FILTER (WHERE is_active) AS active_count,
    COUNT(*) FILTER (WHERE NOT is_active) AS inactive_count
FROM employees;
```

DISTINCT and DISTINCT ON
```sql
-- Remove duplicates
SELECT DISTINCT department FROM employees;

-- DISTINCT ON (PostgreSQL-specific, very useful)
-- Get the highest-paid employee per department
SELECT DISTINCT ON (department)
    department, first_name, last_name, salary
FROM employees
ORDER BY department, salary DESC;
```

Joins & Relationships
Join Types
```sql
-- INNER JOIN: only matching rows from both tables
SELECT o.id, c.name, o.total
FROM orders o
INNER JOIN customers c ON c.id = o.customer_id;

-- LEFT JOIN: all rows from left table + matches from right
SELECT e.first_name, d.name AS dept_name
FROM employees e
LEFT JOIN departments d ON d.id = e.department_id;

-- RIGHT JOIN: all rows from right table + matches from left
SELECT e.first_name, d.name AS dept_name
FROM employees e
RIGHT JOIN departments d ON d.id = e.department_id;

-- FULL OUTER JOIN: all rows from both, NULLs where no match
SELECT e.first_name, d.name
FROM employees e
FULL OUTER JOIN departments d ON d.id = e.department_id;
```

Cross Join and Self Join
```sql
-- CROSS JOIN: cartesian product (every combination)
SELECT s.size, c.color
FROM sizes s
CROSS JOIN colors c;

-- Self join: employees with their managers
SELECT
    e.first_name AS employee,
    m.first_name AS manager
FROM employees e
LEFT JOIN employees m ON m.id = e.manager_id;
```

LATERAL Join
```sql
-- LATERAL: subquery can reference columns from preceding tables
-- Get the 3 most recent orders per customer
SELECT c.name, recent.*
FROM customers c
CROSS JOIN LATERAL (
    SELECT id, total, created_at
    FROM orders
    WHERE customer_id = c.id
    ORDER BY created_at DESC
    LIMIT 3
) recent;
```

When to use LATERAL
Use LATERAL when you need a correlated subquery in the FROM clause — for example, "top N per group" queries. It is essentially a for-each loop at the SQL level.
Advanced Querying
Common Table Expressions (CTEs)
```sql
-- Standard CTE
WITH active_employees AS (
    SELECT * FROM employees WHERE is_active = TRUE
)
SELECT department, COUNT(*) AS headcount
FROM active_employees
GROUP BY department;

-- Multiple CTEs
WITH
dept_stats AS (
    SELECT department, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY department
),
high_earners AS (
    SELECT e.*, d.avg_salary
    FROM employees e
    JOIN dept_stats d ON d.department = e.department
    WHERE e.salary > d.avg_salary * 1.5
)
SELECT * FROM high_earners ORDER BY salary DESC;
```

Recursive CTEs
```sql
-- Recursive CTE: org chart traversal
WITH RECURSIVE org_tree AS (
    -- Base case: top-level managers (no manager)
    SELECT id, first_name, manager_id, 1 AS depth
    FROM employees
    WHERE manager_id IS NULL
    UNION ALL
    -- Recursive case: find reports
    SELECT e.id, e.first_name, e.manager_id, t.depth + 1
    FROM employees e
    INNER JOIN org_tree t ON t.id = e.manager_id
)
SELECT * FROM org_tree ORDER BY depth, first_name;

-- Generate a date series (useful for filling gaps)
WITH RECURSIVE dates AS (
    SELECT '2026-01-01'::DATE AS day
    UNION ALL
    SELECT day + 1 FROM dates WHERE day < '2026-01-31'
)
SELECT day FROM dates;
```

Window Functions
```sql
-- ROW_NUMBER: assign a unique row number per partition
SELECT
    department,
    first_name,
    salary,
    ROW_NUMBER() OVER (PARTITION BY department ORDER BY salary DESC) AS rank
FROM employees;

-- RANK and DENSE_RANK
SELECT
    first_name,
    salary,
    RANK() OVER (ORDER BY salary DESC) AS rank,
    DENSE_RANK() OVER (ORDER BY salary DESC) AS dense_rank
FROM employees;

-- Running total
SELECT
    id,
    created_at,
    amount,
    SUM(amount) OVER (ORDER BY created_at) AS running_total
FROM transactions;

-- Moving average (last 7 rows)
SELECT
    day,
    revenue,
    ROUND(AVG(revenue) OVER (
        ORDER BY day ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
    ), 2) AS moving_avg_7d
FROM daily_revenue;

-- LEAD and LAG: peek at next/previous rows
SELECT
    day,
    revenue,
    LAG(revenue, 1) OVER (ORDER BY day) AS prev_day,
    LEAD(revenue, 1) OVER (ORDER BY day) AS next_day,
    revenue - LAG(revenue, 1) OVER (ORDER BY day) AS daily_change
FROM daily_revenue;

-- NTILE: divide rows into N equal buckets
SELECT
    first_name,
    salary,
    NTILE(4) OVER (ORDER BY salary DESC) AS salary_quartile
FROM employees;
```

JSONB Operations
```sql
-- Extracting values
SELECT
    metadata->>'role' AS role,                  -- text
    metadata->'address'->>'city' AS city,       -- nested text
    metadata->'skills' AS skills,               -- jsonb array
    (metadata->>'experience')::INT AS years     -- cast to int
FROM employees;

-- Containment: does the JSONB contain this subset?
SELECT * FROM employees
WHERE metadata @> '{"role": "senior"}';

-- Key existence (three variants)
SELECT * FROM employees WHERE metadata ? 'certifications';      -- has key
SELECT * FROM employees WHERE metadata ?| ARRAY['aws', 'gcp'];  -- has any key
SELECT * FROM employees WHERE metadata ?& ARRAY['aws', 'gcp'];  -- has all keys

-- Path query (PostgreSQL 12+)
SELECT * FROM employees
WHERE metadata @@ '$.skills[*] == "PostgreSQL"';

-- Update JSONB field
UPDATE employees
SET metadata = jsonb_set(metadata, '{role}', '"lead"')
WHERE id = 'some-uuid';

-- Remove a key
UPDATE employees
SET metadata = metadata - 'temporary_flag';

-- Append to a JSONB array
UPDATE employees
SET metadata = jsonb_set(
    metadata,
    '{skills}',
    (metadata->'skills') || '"Kubernetes"'
)
WHERE id = 'some-uuid';

-- Aggregate rows into a JSON array
SELECT jsonb_agg(
    jsonb_build_object('name', first_name, 'salary', salary)
) AS employees_json
FROM employees
WHERE department = 'Engineering';
```

| Operator | Returns | Description |
|---|---|---|
| -> | jsonb | Get JSON object field by key |
| ->> | text | Get JSON object field as text |
| #> | jsonb | Get JSON object at path |
| #>> | text | Get JSON object at path as text |
| @> | boolean | Contains (left contains right) |
| <@ | boolean | Contained by |
| ? | boolean | Key exists |
| ?\| | boolean | Any key exists |
| ?& | boolean | All keys exist |
| \|\| | jsonb | Concatenate two JSONB values |
| - | jsonb | Delete key or array element |
Indexing & Performance
B-tree Indexes (Default)
```sql
-- Standard index
CREATE INDEX idx_employees_department ON employees (department);

-- Composite index (column order matters!)
CREATE INDEX idx_employees_dept_salary
    ON employees (department, salary DESC);

-- Unique index
CREATE UNIQUE INDEX idx_employees_email ON employees (email);
```

Partial and Expression Indexes
```sql
-- Partial index: only index rows that match a condition
CREATE INDEX idx_active_employees
    ON employees (department, salary)
    WHERE is_active = TRUE;

-- Expression index: index a computed value
CREATE INDEX idx_employees_lower_email
    ON employees (LOWER(email));

-- JSONB GIN index (for @>, ?, ?| operators)
CREATE INDEX idx_employees_metadata
    ON employees USING GIN (metadata);

-- Smaller, faster GIN index that supports only containment (@>)
-- and jsonpath operators
CREATE INDEX idx_employees_metadata_path
    ON employees USING GIN (metadata jsonb_path_ops);
```

Covering Indexes (INCLUDE)
```sql
-- Include columns in the index without indexing them
-- Enables index-only scans for common queries
CREATE INDEX idx_employees_dept_covering
    ON employees (department)
    INCLUDE (first_name, last_name, salary);
```

Concurrent Index Creation
```sql
-- Create without locking the table (production-safe)
CREATE INDEX CONCURRENTLY idx_orders_date
    ON orders (created_at);

-- Drop without locking
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_date;
```

CONCURRENTLY caveats
Concurrent index builds take longer and cannot run inside a transaction. If the build fails, it leaves an INVALID index you must drop manually. Check pg_index (indisvalid = false) after the build.
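A catalog query (a sketch against pg_index) to find indexes left invalid by a failed concurrent build:

```sql
-- List invalid indexes; drop and rebuild any that appear here
SELECT indexrelid::regclass AS index_name,
       indrelid::regclass  AS table_name
FROM pg_index
WHERE NOT indisvalid;
```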
EXPLAIN and EXPLAIN ANALYZE
```sql
-- Show the query plan without executing
EXPLAIN
SELECT * FROM employees WHERE department = 'Engineering';

-- Execute and show actual timings + row counts
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM employees WHERE department = 'Engineering';

-- JSON format (easier to parse programmatically)
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT * FROM employees WHERE department = 'Engineering';
```

Reading EXPLAIN output
Look for Seq Scan on large tables (you probably need an index), high actual rows vs estimated rows (stale statistics — run ANALYZE tablename), and Sort Method: external merge (needs more work_mem).
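To spot tables where sequential scans dominate (a starting point for missing-index hunting; the thresholds are up to you):

```sql
-- Tables read mostly by sequential scans, largest first
SELECT relname, seq_scan, idx_scan, n_live_tup
FROM pg_stat_user_tables
WHERE seq_scan > COALESCE(idx_scan, 0)
ORDER BY n_live_tup DESC
LIMIT 10;
```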
VACUUM and ANALYZE
```sql
-- Update planner statistics
ANALYZE employees;

-- Reclaim dead tuple space
VACUUM employees;

-- Full vacuum (rewrites table, locks it — use sparingly)
VACUUM FULL employees;

-- Check table bloat
SELECT
    relname,
    n_live_tup,
    n_dead_tup,
    ROUND(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

User & Role Management
Creating Roles and Users
```sql
-- Create a login role (user)
CREATE ROLE app_user WITH LOGIN PASSWORD 'strong_password_here';

-- Create a role with specific privileges
CREATE ROLE readonly_user WITH
    LOGIN
    PASSWORD 'readonly_pass'
    NOSUPERUSER
    NOCREATEDB
    NOCREATEROLE
    VALID UNTIL '2027-01-01';

-- Create a group role (no login)
CREATE ROLE engineering NOLOGIN;

-- Add a user to a group
GRANT engineering TO jose_gayares;
```

Granting and Revoking Permissions
```sql
-- Grant database-level access
GRANT CONNECT ON DATABASE myapp TO app_user;

-- Grant schema-level access
GRANT USAGE ON SCHEMA public TO app_user;
GRANT CREATE ON SCHEMA public TO app_user;

-- Grant table-level access
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;

-- Grant for future tables (so you don't have to re-grant)
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user;

-- Read-only access
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO readonly_user;

-- Revoke permissions
REVOKE INSERT, UPDATE, DELETE ON employees FROM intern_user;

-- Revoke all
REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM old_user;
```

Row-Level Security (RLS)
```sql
-- Enable RLS on a table
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- Create a policy: users can only see their own documents
CREATE POLICY user_documents ON documents
    FOR ALL
    USING (owner_id = current_setting('app.current_user_id')::UUID);

-- Create separate policies for SELECT and INSERT
CREATE POLICY select_own ON documents
    FOR SELECT
    USING (owner_id = current_setting('app.current_user_id')::UUID);

CREATE POLICY insert_own ON documents
    FOR INSERT
    WITH CHECK (owner_id = current_setting('app.current_user_id')::UUID);

-- Apply RLS even to the table owner, then give an admin role
-- full access through a permissive policy
ALTER TABLE documents FORCE ROW LEVEL SECURITY;
CREATE POLICY admin_all ON documents
    FOR ALL
    TO admin_role
    USING (TRUE)
    WITH CHECK (TRUE);

-- Set the user context in your application
SET app.current_user_id = 'a1b2c3d4-...';
```

Backup & Restore
pg_dump (Export)
```bash
# Dump entire database (custom format — best for pg_restore)
pg_dump -U postgres -Fc -f myapp.dump myapp

# Dump as plain SQL
pg_dump -U postgres -Fp -f myapp.sql myapp

# Dump specific tables only
pg_dump -U postgres -Fc -t employees -t departments -f tables.dump myapp

# Dump schema only (no data)
pg_dump -U postgres -Fc --schema-only -f schema.dump myapp

# Dump data only (no schema)
pg_dump -U postgres -Fc --data-only -f data.dump myapp

# Dump a specific schema
pg_dump -U postgres -Fc -n billing -f billing.dump myapp

# Compressed SQL dump
pg_dump -U postgres myapp | gzip > myapp_$(date +%Y%m%d).sql.gz
```

pg_restore (Import)
```bash
# Restore from custom format dump
pg_restore -U postgres -d myapp -Fc myapp.dump

# Restore with clean (drop objects first)
pg_restore -U postgres -d myapp --clean --if-exists -Fc myapp.dump

# Restore specific tables only
pg_restore -U postgres -d myapp -t employees -Fc myapp.dump

# Restore into a new database
createdb -U postgres myapp_staging
pg_restore -U postgres -d myapp_staging -Fc myapp.dump

# Restore a plain SQL dump
psql -U postgres -d myapp -f myapp.sql

# Parallel restore (faster for large databases)
pg_restore -U postgres -d myapp -j 4 -Fc myapp.dump
```

| Flag | Description |
|---|---|
| -Fc | Custom format (compressed, most flexible) |
| -Fp | Plain SQL format (human-readable) |
| -Fd | Directory format (parallel dump/restore) |
| -j N | Use N parallel jobs |
| -t table | Dump/restore specific table |
| -n schema | Dump/restore specific schema |
| --schema-only | Structure only, no data |
| --data-only | Data only, no structure |
| --clean | Drop objects before restoring |
| --if-exists | Don't error if objects don't exist |
Always test your backups
A backup you haven't restored is not a backup. Schedule periodic test restores to a staging database to verify your dumps are valid and complete.
Conclusion
PostgreSQL gives you more power than most developers ever use. This cheat sheet covers the patterns that account for 90% of daily database work — from basic CRUD to window functions, JSONB queries, performance tuning, and backup strategies.
The key to mastering PostgreSQL is not memorizing every function — it is knowing what is possible and having a reference ready when you need the exact syntax. Keep this page bookmarked, and next time you forget how ON CONFLICT works or need a quick EXPLAIN ANALYZE, you will know where to look.
Keep learning
- Official PostgreSQL Documentation: postgresql.org/docs/16
- Use pgBadger for log analysis and slow query detection
- Explore pg_stat_statements to find your heaviest queries
- Consider pgBouncer for connection pooling in production
Quick Reference Card
Most Used Commands
- \dt — List tables
- \d table — Describe table
- \x auto — Smart expanded output
- \timing — Show query time
Performance Checklist
- Index columns in WHERE / JOIN / ORDER BY
- Use EXPLAIN ANALYZE on slow queries
- Run ANALYZE after bulk inserts
- Use keyset pagination over OFFSET