PostgreSQL Migration Guide
A 7-phase, step-by-step process for migrating from Oracle or SQL Server to PostgreSQL — covering schema conversion, data migration, rehearsal runs, zero-downtime cutover, and post-migration tuning.
Discovery & Assessment
Week 1–2
The success of your migration depends entirely on how thoroughly you understand the source database before touching anything. Skip this phase and you will hit surprises mid-cutover.
- Catalogue every table, view, stored procedure, function, trigger, and sequence
- Identify all Oracle/SQL Server-specific syntax: CONNECT BY, ROWNUM, TOP, identity columns, DB links
- Map data types with no direct PostgreSQL equivalent (Oracle NUMBER and VARCHAR2; SQL Server DATETIME and MONEY)
- Document all scheduled jobs and their business purpose
- Identify all applications, services, and scripts that connect to the database
- Review character set and collation — confirm Unicode (UTF-8) compatibility
- Estimate data volume per table and total size for transfer planning
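If the source is Oracle, most of the inventory above can be pulled straight from the data dictionary; a minimal sketch (SQL Server has equivalent catalog views such as sys.objects and sys.dm_db_partition_stats):

```sql
-- Oracle: count objects per type to scope the conversion effort
SELECT object_type, COUNT(*) AS object_count
FROM   user_objects
GROUP  BY object_type
ORDER  BY object_count DESC;

-- Oracle: estimate per-table data volume for transfer planning
SELECT segment_name AS table_name,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   user_segments
WHERE  segment_type = 'TABLE'
ORDER  BY bytes DESC;
```

Results depend on the schema you are connected as; for a multi-schema database, query the ALL_ or DBA_ variants instead.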
Schema Conversion
Week 2–4
Convert the schema first, with no data. Get all objects creating cleanly in PostgreSQL before any data movement begins.
Never auto-convert PL/SQL to PL/pgSQL without manual review. Auto-converters miss business logic nuances.
- Convert all tables with PostgreSQL-native data types (NUMERIC, VARCHAR, TIMESTAMPTZ, BOOLEAN)
- Replace Oracle sequences / SQL Server IDENTITY columns with PostgreSQL GENERATED ALWAYS AS IDENTITY (preferred) or SERIAL
- Rewrite all stored procedures and functions from PL/SQL / T-SQL to PL/pgSQL
- Convert CONNECT BY hierarchical queries to recursive CTEs (WITH RECURSIVE)
- Replace ROWNUM / TOP with LIMIT and OFFSET
- Rebuild all indexes, constraints, and foreign keys in PostgreSQL syntax
- Re-implement DBMS_SCHEDULER / SQL Server Agent jobs using pg_cron
- Rebuild audit triggers in PL/pgSQL with identical coverage
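As a concrete example of the CONNECT BY rewrite above, a sketch using a hypothetical employees(employee_id, manager_id) table — Oracle's LEVEL pseudo-column becomes an explicit counter carried through the recursion:

```sql
-- Oracle original:
--   SELECT employee_id, manager_id, LEVEL
--   FROM   employees
--   START WITH manager_id IS NULL
--   CONNECT BY PRIOR employee_id = manager_id;

-- PostgreSQL equivalent:
WITH RECURSIVE org AS (
    -- anchor: the START WITH rows (top of the hierarchy)
    SELECT employee_id, manager_id, 1 AS level
    FROM   employees
    WHERE  manager_id IS NULL
    UNION ALL
    -- recursive step: the CONNECT BY PRIOR join
    SELECT e.employee_id, e.manager_id, o.level + 1
    FROM   employees e
    JOIN   org o ON e.manager_id = o.employee_id
)
SELECT employee_id, manager_id, level
FROM   org;
```

Note that CONNECT BY's sibling ordering (ORDER SIBLINGS BY) and cycle detection (NOCYCLE) need explicit handling in the CTE — an ORDER BY on a path column and a visited-list check respectively.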
Initial Data Migration
Week 3–5
Move data in stages — start with reference/lookup tables, then historical data, then transactional data. Validate at every stage.
- Drop indexes and foreign key constraints before the bulk load (PostgreSQL cannot disable them in place) to maximise load speed
- Load reference tables first (they have no dependencies)
- Load historical/archived data (low-change risk)
- Load transactional tables with validation row counts after each table
- Recreate indexes and re-add constraints after the load
- Run ANALYZE on all tables after load to generate statistics
- Compare row counts and checksums between source and target
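The load-and-verify steps above can be done entirely in SQL; a minimal sketch, assuming a placeholder orders table and CSV staging files (run the matching count/checksum aggregates on the source and diff the results):

```sql
-- Bulk load via COPY (the fastest built-in path), then refresh statistics
COPY orders FROM '/staging/orders.csv' WITH (FORMAT csv, HEADER true);
ANALYZE orders;

-- Approximate post-load row counts for every table, largest first
SELECT relname, n_live_tup
FROM   pg_stat_user_tables
ORDER  BY n_live_tup DESC;

-- Exact count plus an order-independent checksum for one table
SELECT COUNT(*)                 AS row_count,
       SUM(hashtext(t::text))   AS checksum
FROM   orders t;
```

hashtext() is an internal PostgreSQL hash function, so the checksum only matches another PostgreSQL instance; when comparing against Oracle or SQL Server, compute a portable aggregate (e.g. MD5 over a canonical text rendering of each row) on both sides instead.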
Rehearsal Migrations
Week 4–6
Run at least three complete end-to-end rehearsals against a copy of production data before any live cutover. Each rehearsal reduces cutover time and surfaces issues.
If rehearsal 3 exceeds your maintenance window, you need a parallel-replication cutover strategy instead of a snapshot migration.
- Rehearsal 1: Full process run — expect issues, document every one
- Rehearsal 2: Fix issues from rehearsal 1, re-run, measure total time
- Rehearsal 3: Target cutover time under your maintenance window with a clean run
- Validate application connectivity against PostgreSQL after each rehearsal
- Run regression tests against all critical business functions
- Verify that all scheduled jobs run correctly in the new environment
- Measure and document actual cutover window duration
Parallel Operation & Replication
Week 5–7 (zero-downtime only)
For zero-downtime migrations, run both databases in parallel with continuous change replication until cutover. This is the most complex phase, but it eliminates downtime risk entirely.
- Set up logical replication from source to PostgreSQL for ongoing change capture
- Monitor replication lag — target under 1 second before cutover
- Run application in read-only shadow mode against PostgreSQL to verify query compatibility
- Write a cutover runbook with exact commands, timing, and rollback triggers
- Define rollback criteria: what conditions trigger aborting the cutover
- Confirm monitoring and alerting are live on the PostgreSQL instance
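When the source is itself PostgreSQL, native logical replication covers the change capture above; heterogeneous sources (Oracle, SQL Server) need a CDC tool such as Debezium instead. A sketch using hypothetical publication/subscription names and connection details:

```sql
-- On the PostgreSQL source (requires wal_level = logical):
CREATE PUBLICATION migration_pub FOR ALL TABLES;

-- On the PostgreSQL target:
CREATE SUBSCRIPTION migration_sub
    CONNECTION 'host=source-db dbname=app user=repl password=...'
    PUBLICATION migration_pub;

-- Monitor lag from the source side before cutover (target: near 0)
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
FROM   pg_stat_replication;
```

Logical replication does not carry DDL, sequences, or large-object changes — those still need the schema-conversion phase and a final sequence resync at cutover.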
Cutover & Validation
1 cutover window
Execute the cutover during your lowest-traffic window with the full team available. Follow the runbook exactly — no improvisation.
- Announce maintenance window to stakeholders
- Stop all application write traffic to the source database
- Wait for replication lag to reach zero (or complete final data sync for snapshot migrations)
- Update all application connection strings to point to PostgreSQL
- Run smoke tests: login, core CRUD operations, report generation
- Run automated validation suite: row counts, business logic outputs, report comparisons
- Confirm all scheduled jobs are running on schedule
- Monitor error rates for 30 minutes before declaring success
- Keep source database in read-only mode for 48–72 hours as rollback safety net
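For a PostgreSQL-to-PostgreSQL cutover, the catch-up check and the read-only safety net above look roughly like this (Oracle and SQL Server have their own read-only mechanisms); a sketch:

```sql
-- On the source, after stopping writes: confirm the target has fully
-- caught up — every subscriber should report zero lag
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) = 0 AS caught_up
FROM   pg_stat_replication;

-- Put the old source into read-only mode as the rollback safety net
ALTER SYSTEM SET default_transaction_read_only = on;
SELECT pg_reload_conf();
```

default_transaction_read_only blocks new write transactions without a restart; superusers can still override it per-session, which is useful if a rollback forces you to reopen the source for writes.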
Post-Migration Optimisation
Week 1–4 post-cutover
PostgreSQL's query planner may choose different execution plans than your source database. Monitor for slow queries in the first weeks and tune as needed.
- Enable pg_stat_statements and review top 20 queries by total time daily
- Run EXPLAIN ANALYZE on any new slow queries and add missing indexes
- Tune shared_buffers, work_mem, and max_connections for your workload
- Set up autovacuum tuning for your highest-write tables
- Configure connection pooling (PgBouncer) if not already in place
- Establish baseline metrics: query latency, cache hit ratio, replication lag
- Decommission source database after 30-day soak period
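The daily top-20 review above can be a single query; a sketch (column names assume PostgreSQL 13+, where total_time was renamed total_exec_time):

```sql
-- One-time setup: requires shared_preload_libraries = 'pg_stat_statements'
-- in postgresql.conf and a server restart
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 20 queries by cumulative execution time
SELECT round(total_exec_time::numeric, 1) AS total_ms,
       calls,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  20;
```

Reset the counters with SELECT pg_stat_statements_reset() after each tuning pass so the next day's review reflects the new plans rather than historical totals.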
Planning a PostgreSQL Migration?
We've delivered migrations from Oracle, SQL Server, and MySQL to PostgreSQL with zero data loss and zero downtime. Book a free assessment call.