A Migration from Postgres to Cloudflare D1
A quick note before you migrate: you don't have to move your data to Cloudflare to feel the speed of Cloudflare's network. Hyperdrive sits between your Worker and any Postgres or MySQL database you already have running (RDS, Supabase, PlanetScale, etc.). It keeps a warm pool of connections near the database, cuts handshake round trips, and can even cache popular queries.
Your code uses the same driver or ORM; you only swap the connection string. The free plan now includes Hyperdrive, so trying it costs nothing. If faster queries are all you need, start there. Point your Worker at Hyperdrive, watch the latency drop, and keep shipping. You can stay on Postgres forever if it's doing the job. The rest of this guide is for the day you want a serverless, edge‑native database too. That's where D1 shines.
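If that sounds appealing, here is a minimal sketch of a Worker reading Postgres through Hyperdrive. It assumes a Hyperdrive binding named HYPERDRIVE in your Wrangler config and the postgres.js driver; the posts table is just an example.
import postgres from 'postgres';

export default {
  async fetch(request, env) {
    // env.HYPERDRIVE is the Hyperdrive binding; it exposes a pooled connection string
    const sql = postgres(env.HYPERDRIVE.connectionString);
    const rows = await sql`SELECT id, title FROM posts LIMIT 10`;
    return Response.json(rows);
  }
}
The only Postgres-specific thing here is the driver itself; swapping the connection string for Hyperdrive's is the whole migration.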
Why you might want to move
Postgres is great, but it lives in one place. D1 lives on Cloudflare's edge. Your data sits close to every user, and you pay only for what you use.
Compatibility check
Size matters. Over 10 GB? D1 isn't for you (yet).
Features matter. If you lean on extensions, custom types, or heavy write concurrency, test first.
Run these queries in Postgres to see if your database is compatible:
-- 1. Is the DB over 10 GB?
SELECT pg_size_pretty(pg_database_size(current_database())) AS size,
CASE WHEN pg_database_size(current_database()) > 10737418240
THEN '❌ Over 10 GB' ELSE '✅ Under 10 GB' END AS fits_in_d1;
-- 2. Columns with data types that don't map cleanly to SQLite
SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
  AND data_type NOT IN (
    'smallint','integer','bigint','real','double precision','numeric',
    'text','character varying','character','bytea','boolean','date',
    'timestamp without time zone','timestamp with time zone'
  );
-- 3. Custom domain types
SELECT domain_schema, domain_name
FROM information_schema.domains
WHERE domain_schema NOT IN ('pg_catalog', 'information_schema');
-- 4. User‑defined types (composite, enum, range, domain)
SELECT n.nspname AS schema, t.typname, t.typtype
FROM pg_type t
JOIN pg_namespace n ON n.oid = t.typnamespace
WHERE t.typtype IN ('c','e','r','d')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  -- skip the composite type Postgres creates implicitly for every table
  AND (t.typrelid = 0 OR EXISTS (
        SELECT 1 FROM pg_class c WHERE c.oid = t.typrelid AND c.relkind = 'c'
      ));
-- 5. Generated columns
SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE is_generated = 'ALWAYS';
-- 6. Array columns
SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE data_type = 'ARRAY'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');
-- 7. Installed extensions (plpgsql ships by default and is harmless)
SELECT extname, extversion FROM pg_extension WHERE extname <> 'plpgsql';
If the size check fails or any query returns rows, you will need to refactor before moving on. That's outside the scope of this guide though.
Create a D1 database
npm install -g wrangler # Install Wrangler
wrangler d1 create mydb # Create the database
Wrangler prints a binding name and database ID; paste both into wrangler.json.
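For reference, the snippet looks roughly like this once it's in wrangler.json (the ID below is a placeholder):
{
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "mydb",
      "database_id": "<id printed by wrangler>"
    }
  ]
}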
Export from Postgres
pg_dump \
--data-only \
--column-inserts \
--inserts \
--no-owner \
--no-acl \
--file=pg.sql \
your_database
You now have a file full of simple INSERT statements.
Make the dump SQLite‑friendly
Search‑and‑replace by hand, or script it (a sketch follows this list):
- TRUE → 1, FALSE → 0
- SERIAL → INTEGER PRIMARY KEY AUTOINCREMENT
- Drop lines starting with SET or SELECT pg_catalog.*
For big dumps, tools like pg2sqlite or dbmate save time. Prefer CSV? Dump each table as CSV and import with SQLite's .import.
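If you'd rather automate the cleanup, a rough Node sketch of the replacements above could look like this. The file names and regexes are illustrative; review the output before importing.
// convert-dump.mjs: rough pg.sql -> d1.sql cleanup for SQLite/D1
import { readFileSync, writeFileSync } from 'node:fs';

const lines = readFileSync('pg.sql', 'utf8').split('\n');

const cleaned = lines
  // drop Postgres-only statements SQLite will reject
  .filter((line) => !/^\s*(SET\s|SELECT pg_catalog\.)/.test(line))
  .map((line) =>
    line
      .replace(/\bTRUE\b/g, '1')   // booleans become 0/1
      .replace(/\bFALSE\b/g, '0')
      .replace(/\b(BIG)?SERIAL\b/gi, 'INTEGER PRIMARY KEY AUTOINCREMENT')
  );

writeFileSync('d1.sql', cleaned.join('\n'));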
(Optional) Test locally
sqlite3 local.db < pg.sql
Run a few SELECTs. If they work here, they'll work on D1.
Create tables on D1
wrangler d1 migrations create mydb init
# open the generated file in migrations/ and paste your CREATE TABLE statements
wrangler d1 migrations apply mydb   # add --remote to apply to the hosted database
Migrations are versioned files you can track in Git.
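As an illustration, a first migration for the posts table used later in this guide might look like this (the columns are assumptions):
CREATE TABLE posts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  title TEXT NOT NULL,
  body TEXT,
  published INTEGER NOT NULL DEFAULT 0, -- boolean stored as 0/1
  created_at TEXT DEFAULT CURRENT_TIMESTAMP
);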
Load the data
wrangler d1 execute mydb --remote --file=pg.sql
Wrangler streams the file over HTTPS and loads it at the edge. Prefer an HTTP API? D1 exposes one; check the docs.
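If you go the HTTP route, the query endpoint takes a statement plus optional parameters. Here is a sketch; the account ID, database ID, and token are placeholders, and the exact request shape is best confirmed against the current API reference.
const ACCOUNT_ID = '<account id>';
const DATABASE_ID = '<database id>';
const API_TOKEN = '<api token>';

// run a single statement against D1 over Cloudflare's REST API
const resp = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/d1/database/${DATABASE_ID}/query`,
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      sql: 'INSERT INTO posts (title) VALUES (?)',
      params: ['Hello from the HTTP API'],
    }),
  }
);
console.log(await resp.json());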
Wire up your code
export default {
  async fetch(request, env) {
    const { results } = await env.DB
      .prepare('SELECT id, title FROM posts WHERE id = ?')
      .bind(42)
      .all();
    return Response.json(results);
  }
}
env.DB is the binding you added to wrangler.json.
Test locally
wrangler dev --local
Wrangler spins up a real D1 engine on your machine, so you can run the Worker and the database offline.
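You can also query the local copy directly; assuming current Wrangler flags, something like:
wrangler d1 execute mydb --local --command "SELECT count(*) FROM posts"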
Don't sleep on Time Travel
wrangler d1 time-travel restore mydb --timestamp "2025-04-12T10:15:00Z"
D1 keeps snapshots for the last 30 days. Roll back any bad migration in seconds.
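Before restoring, you can check where the database currently stands (again assuming current Wrangler syntax):
wrangler d1 time-travel info mydb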
A simple checklist
- Trim schema to SQLite‑friendly types.
- Dump Postgres data as SQL or CSV.
- Fix type or keyword mismatches.
- Create the D1 database and run migrations.
- Import the data.
- Point your app at the new endpoint.
- Test under load; watch writes.
- Lean on Time Travel and schedule regular exports.
Parting thoughts
Moving from Postgres to D1 isn't right for every app or team. But when you need a small, fast database that lives at the edge, D1 shines.
D1 is constantly evolving. What it can't do today, it hopefully will tomorrow. Cloudflare keeps adding features based on the feedback we get. In the end, simpler infrastructure means fewer things to manage and debug. When your database, compute, and network all live in the same place, you can focus on building what matters: your app.
If you do migrate, tell me about it. We are always trying to improve the D1 product and deliver the best developer experience possible.
You can reach me on x/twitter @burcs.