
Database Migrations: How to Stop Losing Your Mind Over SQL Schema Changes

Introduction

Picture this. You decided to build a website. Nothing crazy — just a solid Node.js backend, a MariaDB database, and a simple frontend sitting on the same cloud VM. Your app supports the basics: login, signup, a user profile, maybe some orders and messages. It's clean, it works, and you're proud of it. You deployed the whole thing in a weekend.

There are one or two developers on the project. Everything is fast, everything is scrappy, and that's completely fine for where you are.

Then the app starts to grow.

This post is aimed at solo or small-team devs who already have a running app and are starting to feel the schema chaos — or at anyone who wants to learn the "why" and the "how exactly" of SQL migrations.

The Problem Nobody Warns You About Early Enough

You want to add a new feature. Let's say users can now upload a profile picture, so you need an `avatar_url` column in your `users` table. Easy enough. You open your local database client, run an `ALTER TABLE`, and everything works great on your machine.

Now you need to push it to production.

You SSH into your VM, you try to remember exactly what you changed three days ago, and you type it in. If you're lucky, you got it right. If you forgot a `DEFAULT NULL` or mistyped the column name, something breaks. Then you fix it. Then you wonder if there were other changes you made that you forgot about entirely.

A few months pass. Your schema has diverged from itself in ways you can't fully reconstruct. Your local database has columns that production doesn't. Your production database has entries in tables that you modified locally but never updated the structure for. A new developer joins — or it's just you returning after a vacation — and nobody knows what the "true" state of the database is.

The frustration is real. And the worst part is that the code side of your project is perfectly organized. You have Git. You have commits, branches, history. If something breaks in your JavaScript, you can roll back. You can see exactly what changed, when, and why. But the database? The database is a mystery that lives only in your memory and a couple of sticky notes.

At some point, you start daydreaming:

Pro Tip
"What if the database schema just... traveled with the code? What if when I did a `git pull` on the server, the schema updated itself too?"

That thought, right there, is the seed of database migrations.

The Homemade Solution (And Why It's Smarter Than It Sounds)

Before reaching for a specialized tool, a lot of developers — especially solo ones — arrive at the same intuitive solution: write a JavaScript file that connects to the database and runs the SQL for you.

It might start as something like this:

```javascript
const mysql = require('mysql2/promise');

async function runMigrations() {
  const connection = await mysql.createConnection({ /* your config */ });

  await connection.execute(`
    CREATE TABLE IF NOT EXISTS users (
      id INT AUTO_INCREMENT PRIMARY KEY,
      email VARCHAR(255) NOT NULL,
      password_hash VARCHAR(255) NOT NULL,
      created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
  `);

  await connection.execute(`
    ALTER TABLE users
    ADD COLUMN IF NOT EXISTS avatar_url VARCHAR(500) DEFAULT NULL
  `);

  console.log('Done.');
  await connection.end();
}

runMigrations();
```

You commit this file to your repo. When you `git pull` on your production VM, this file is right there. You run `node migrate.js`, and your database catches up to the new schema. No more sticky notes. No more SSH sessions trying to remember what you did two weeks ago.

This is genuinely clever. And it teaches you something important: **your database schema is code, and it deserves to be treated like code.**

The `CREATE TABLE IF NOT EXISTS` pattern is your best friend here — it means you can run the file over and over without it blowing up on you. The `ADD COLUMN IF NOT EXISTS` (supported in MariaDB) does the same for columns. The script becomes idempotent: you can run it ten times and it will always leave the database in the correct state.

You can go even further. Want to seed your database with reference data — say, a list of subscription plans or feature flags that should always exist? You can do that too:

```javascript
await connection.execute(`
  INSERT INTO features (slug, label) VALUES
    ('dark_mode', 'Dark Mode'),
    ('export_pdf', 'Export to PDF')
  ON DUPLICATE KEY UPDATE label = VALUES(label)
`);
```

Now your initial data is also version-controlled. When your teammate clones the project and sets up a fresh database, they get the same starting point you have.

And if you ever need to move data from one environment to another, `mysqldump` can export a full table — data and all — as a giant `INSERT` statement. Drop that into your migration file, and you've essentially got a portable snapshot of your data that lives in your repository. It sounds a bit wild at first, but it works, and it drives home a fundamental idea:

Pro Tip
Anything you can express in SQL can become part of your deployment process.

Someone Already Built a Tool for This

Here's the part where it gets interesting: you're not the first person to have this idea. The concept of managing schema changes as a sequence of versioned files has been around for decades, and the community gave it a name: database migrations.

The core idea is simple. Instead of one big script that tries to describe the entire current state of your database, you maintain a series of small, numbered files — each one describing a single change:

```text
migrations/
├── 001-create-users-table.sql
├── 002-add-avatar-to-users.sql
├── 003-create-orders-table.sql
└── 004-add-status-to-orders.sql
```

A migration runner keeps track of which files have already been applied (usually in a special table called something like `schema_migrations`). When you run it, it looks at what's been applied, finds the new ones, and runs them in order. It never runs the same migration twice.
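The tracking logic at the heart of such a runner is small. Here's a sketch of the core step (the function name and file layout are illustrative, not from any particular library): given the filenames on disk and the rows already recorded in `schema_migrations`, it returns the migrations that still need to run, in order.

```javascript
// Sketch of a migration runner's core: which files still need to run?
// `filesOnDisk` would come from fs.readdirSync('migrations'),
// `appliedFilenames` from SELECT filename FROM schema_migrations.
function pendingMigrations(filesOnDisk, appliedFilenames) {
  const applied = new Set(appliedFilenames);
  return filesOnDisk
    .filter((name) => name.endsWith('.sql'))
    .sort() // zero-padded prefixes (001-, 002-, ...) sort correctly as strings
    .filter((name) => !applied.has(name));
}
```

The runner would then execute each returned file and insert its name into `schema_migrations`, which is exactly why the same migration never runs twice.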

This solves a problem that the single-file approach starts to struggle with over time: as your "one big script" grows, it gets harder to reason about what changed and when. With numbered migration files, the history is built into the structure. You can see at a glance that the orders table was added after avatars. You can trace exactly when a column appeared.

Several mature tools exist for this in the Node.js ecosystem. db-migrate is one of the most established, supporting both SQL and JavaScript migration files with up/down rollback support. Knex.js includes a solid migration system as part of its query builder. If you're using an ORM, Prisma and TypeORM both have first-class migration support with schema diffing. For a more framework-oriented approach, Sequelize has its own migration runner as well.

A `db-migrate` migration file looks roughly like this:

```javascript
// migrations/20240209120000-add-avatar-to-users.js
exports.up = async function(db) {
  await db.addColumn('users', 'avatar_url', {
    type: 'string',
    length: 500,
    defaultValue: null,
    notNull: false
  });
};

exports.down = async function(db) {
  await db.removeColumn('users', 'avatar_url');
};
```

The `up` function applies the change. The `down` function reverses it. If something goes wrong after a deployment, you can roll back cleanly.

Pro Tip
Always write your `down` migrations, even when it feels tedious. You will not regret it at 2am when a deployment goes sideways and you need to roll back in a hurry.

A Pragmatic Middle Ground: Schema Sync

Not every project needs the full ceremony of a migration tool, especially in the early stages. Sometimes what you want is something simpler: a way to define what your tables *should* look like, and have the system figure out what's missing.

This is the idea behind schema sync. You keep one SQL file per table, written as a complete `CREATE TABLE IF NOT EXISTS` statement. When you run the sync script, it compares those definitions against what's actually in your database and adds anything that's missing.

```sql
-- schema/tables/001-users.sql
CREATE TABLE IF NOT EXISTS users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(255) NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    avatar_url VARCHAR(500) DEFAULT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```

If the `avatar_url` column doesn't exist yet, the sync script generates and runs this:

```sql
ALTER TABLE users ADD COLUMN avatar_url VARCHAR(500) DEFAULT NULL;
```

And if everything is already in sync, it does nothing. Safe to run anytime.

The key property that makes this work is that the sync is additive only — it creates tables and adds columns, but it never drops anything. This means you can run it in production without fear of accidentally deleting data. For the more destructive operations — renaming a column, changing a type, removing something — you handle those separately, as explicit migration files that get reviewed before they run.
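The diff step itself can be surprisingly small. Here's a sketch of the additive-only comparison (the function shape and column format are assumptions, not any real tool's API): given the columns your schema file wants and the column names the live table actually has, it emits `ADD COLUMN` statements for the missing ones and nothing else.

```javascript
// Additive-only schema diff: never emits DROP or MODIFY, only ADD COLUMN.
// `desiredColumns` maps column name -> SQL definition, in the order they
// appear in the schema file; `actualColumnNames` would come from
// INFORMATION_SCHEMA.COLUMNS for the live table.
function additiveAlters(table, desiredColumns, actualColumnNames) {
  const actual = new Set(actualColumnNames);
  return Object.entries(desiredColumns)
    .filter(([name]) => !actual.has(name))
    .map(([name, def]) => `ALTER TABLE ${table} ADD COLUMN ${name} ${def};`);
}
```

Applying the result is then a plain loop over `connection.execute`; when the list comes back empty, the sync is a no-op.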

Here's what a typical workflow looks like in practice:

```shell
# Add a new column to your table definition
# (just edit the SQL file and add the column)

# Preview what the sync will do — no changes yet
npm run schema:check

# Apply the changes
npm run schema:sync

# Commit the updated schema file
git add schema/tables/001-users.sql
git commit -m "Add avatar_url to users"
```

When a teammate pulls that commit and runs `npm run schema:sync`, their database gets the new column automatically. When you deploy to production, same thing. The schema travels with the code, exactly like you wished for.

Handling the Tricky Stuff: Renames and Type Changes

Adding columns is easy. The hairy operations are the ones that touch existing data: renaming a column, changing its type, shrinking a `VARCHAR`. These are where people lose data.

The safest pattern for a type change — especially one that involves incompatible types — is to never touch the original column until you're sure the new one is correct:

```sql
-- Step 1: Add a new column with the desired type
ALTER TABLE users ADD COLUMN age_new VARCHAR(10);

-- Step 2: Copy the data over
UPDATE users SET age_new = CAST(age AS CHAR);

-- Step 3: Verify the counts match
SELECT
  COUNT(*) AS total,
  COUNT(age) AS old_count,
  COUNT(age_new) AS new_count
FROM users;

-- Step 4: Only after verification, drop the old column
ALTER TABLE users DROP COLUMN age;

-- Step 5: Rename the new column
ALTER TABLE users CHANGE COLUMN age_new age VARCHAR(10);
```

This "create-copy-verify-swap" pattern means your original data is untouched right up until the moment you've confirmed the new column is correct. It's more steps, but each step is reversible until the very end.

For renames, MariaDB and MySQL both support `CHANGE COLUMN`, which handles it in a single statement (MySQL 8+ and MariaDB 10.5+ also offer the simpler `RENAME COLUMN`):

```sql
ALTER TABLE users
CHANGE COLUMN username user_name VARCHAR(255) NOT NULL;
```

Always include the full column definition when using `CHANGE COLUMN` — the engine needs it even if the only thing changing is the name.

Pro Tip
Before running any migration that modifies column types in production, run `mysqldump` first. It takes thirty seconds and has saved databases more times than anyone wants to admit.

The Moment Everything Clicks

The reason migrations feel like a revelation when you first really get them is that they close a gap that's been there the whole time. Your application code has always been versioned — every function, every route, every bug fix is tracked in Git. But the database, which is just as much a part of your application, has been living outside that system.

Migrations bring the database into the same model. A schema change becomes a commit. Deploying to a new environment becomes `git pull && npm run schema:sync`. Onboarding a new developer becomes `npm install && npm run schema:sync` rather than a two-hour session of "okay, now you need to manually add these twelve columns."

You don't need to start with the most sophisticated tool. A single `migrate.js` file with `CREATE TABLE IF NOT EXISTS` statements is a perfectly valid beginning. What matters is the habit: when the schema changes, the change lives in the repo. From that starting point, you can grow into more structured tooling as your project demands it.

The database is part of your codebase. Treat it like one.

Troubleshooting: The Greatest Hits

A few things you'll run into and how to handle them:

Foreign key errors during sync.

If a table with a foreign key is created before the table it references, the creation fails. Fix this by numbering your files so that referenced tables come first — `001-users.sql` before `005-orders.sql`.

"Column already exists" errors.

This usually means a migration was partially applied or the sync ran against a database that already has that column. With proper `ADD COLUMN IF NOT EXISTS` syntax (MariaDB) or by checking `INFORMATION_SCHEMA` first (MySQL), this is avoidable entirely. Most good migration tools handle this automatically.
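On MySQL, where `ADD COLUMN IF NOT EXISTS` isn't available, the `INFORMATION_SCHEMA` check might look like this (a hypothetical helper for a `mysql2/promise` connection):

```javascript
// Returns true if the column already exists in the current database.
// Use this to guard ALTER TABLE ... ADD COLUMN on MySQL, which lacks
// MariaDB's ADD COLUMN IF NOT EXISTS.
async function columnExists(connection, table, column) {
  const [rows] = await connection.execute(
    `SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
      WHERE TABLE_SCHEMA = DATABASE()
        AND TABLE_NAME = ?
        AND COLUMN_NAME = ?`,
    [table, column]
  );
  return rows.length > 0;
}
```

Call it before the `ALTER` and skip the statement when it returns `true`.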

Data truncation on type changes.

If you try to shrink a column and some rows have data that's too long, the ALTER will fail or silently truncate. Before any column shrink, run a quick check: `SELECT id FROM users WHERE LENGTH(email) > 100` — if it returns rows, deal with them before running the migration.

Migration ran twice.

This happens when the tracking table (`schema_migrations`) gets out of sync with reality — maybe someone ran the SQL manually and forgot to record it. The fix is to insert the migration's filename into the tracking table so the runner knows it's been applied:

```sql
INSERT INTO schema_migrations (filename, success) VALUES ('001-my-migration.sql', TRUE)
```

Final thoughts

Whether you build it yourself, bolt on a library, or implement a hybrid sync approach, the principle is the same. Keep your schema in version control. Make changes reproducible. Deploy with confidence.

Your future self — the one SSH'd into production at midnight — will thank you.

