MCP Servers for Node.js Developers: A Working Guide

April 30, 2026

This is the pillar post for our MCP series. It is the article we wanted to find when we first started building MCP servers and couldn't — not a quickstart (there are plenty of those, and they are mostly the same five copy-pasted snippets) and not a spec deep-dive (the spec is the spec; the official docs are well-written). Something in between. The article that maps "I'm a Node.js developer who can ship an Express API" to "I now have a working intuition for what an MCP server is, what the SDK is doing on my behalf, where the protocol's rough edges sit, and what I would write into a job spec if I needed one built."

The anchor for the build is a small but real Node.js MCP server we shipped recently called mcp-blog-publisher. It exposes three tools — listDrafts, readDraft, publishDraft — that allow Claude Code, running on a developer machine, to read markdown files from a blog-drafts/ directory, ask the human which one to publish, and insert the row into the production blog database. About 200 lines of code in total, two evenings of work, plus the half-day spent untangling the assumptions we'll walk through below.

If you want the conceptual scaffolding before the build, the sibling posts in this series cover it. The reading order we'd suggest:

  1. Everything we got wrong about MCP before building one — host-vs-server-vs-LLM, why the model has no idea MCP exists.
  2. What is a Node.js program when it isn't a server? — Node as a process, stdin/stdout as IPC, the reframe before MCP makes sense.
  3. MCP, agents, and LLMs: where each piece sits — terminology, the layered picture, where intelligence lives.

This post is the pillar that pulls those threads into a working build, and points forward to the deeper posts on tool descriptions, security, transports, and OAuth.

What MCP actually is, in one paragraph

The Model Context Protocol is a JSON-RPC 2.0 dialect that allows a host application — Claude Desktop, Claude Code, Cursor, Zed, Windsurf, any application that wants to drive an LLM with tools — to connect to a server (a process you write that exposes tools, resources, or prompts) without either side knowing the other's internal shape. The host asks the server "what can you do?" The server replies with a list of capabilities and their schemas. The host injects those into the LLM API call as standard function-calling tools. When the LLM decides to call a tool, the host routes the call to the right server, gets the result, and feeds it back to the LLM. That is the entire loop.

What MCP solved is not capability — it is interoperability. Before MCP, every host application invented its own format for tools. A tool you wrote for Cursor did not work in Claude Desktop. After MCP, you write the server once, configure it in any compliant host, done. This is structurally the same move the Language Server Protocol pulled for editors and language tooling. A standard plug shape, and an ecosystem that compounds because of it.

If you internalize that one paragraph, the rest of this post is detail.
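Concretely, the wire traffic in that loop is ordinary JSON-RPC 2.0. Here is a sketch of the two messages at the heart of it, a tools/call request from host to server and the result going back. The field names follow the MCP spec; the id value and the draft filename are made up for illustration:

```typescript
// What the host writes to the server when the LLM picks a tool.
const request = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "listDrafts",
    arguments: {}, // validated against the tool's inputSchema
  },
};

// What the server writes back. `content` is a list of blocks;
// plain text is the common case.
const response = {
  jsonrpc: "2.0",
  id: 7, // correlated to the request by id, not by message ordering
  result: {
    content: [{ type: "text", text: "2026-04-02-mcp-pillar.md" }],
  },
};
```

Everything else in the protocol (the handshake, the capability listing) is variations on this request/response shape.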

The shape of a minimal MCP server

Here is the smallest useful piece of an MCP server, using the official SDK. It is a stripped-down version of the entry point of an MCP server (call it mcp-blog-publisher):

```ts
// server.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "blog-publisher",
  version: "0.1.0",
});

server.registerTool(
  "listDrafts",
  {
    title: "List blog drafts",
    description:
      "Returns the filenames of every markdown draft in blog-drafts/. " +
      "Use this before reading a specific draft or before publishing one.",
    inputSchema: {},
  },
  async () => {
    const files = await listDrafts(); // your implementation
    return {
      content: [
        { type: "text", text: files.join("\n") },
      ],
    };
  },
);

const transport = new StdioServerTransport();
await server.connect(transport);
```

That is a complete, working MCP server. You can technically run it with node server.js, but you gain nothing that way: the host spawns it, manages it, and speaks to it over its pipes. More on that in a moment.

A few things deserve attention even at this size.

There is no app.listen(). There is no port. There is no HTTP. The StdioServerTransport connects to process.stdin and process.stdout, reads JSON-RPC messages from one and writes them to the other. The host is the parent process; the MCP server is its child; they communicate over a pipe. If you have never written a Node program shaped this way before, the post on what a Node.js program is when it isn't a server is the place to start.
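The stdio framing itself is simple: one JSON-RPC message per line, newline-delimited. A minimal sketch of what the transport is doing under the hood (helper names are ours; the real StdioServerTransport also deals with backpressure and errors, but the core is this):

```typescript
// Encode: one JSON-RPC message per line.
const encode = (msg: object): string => JSON.stringify(msg) + "\n";

// Decode: buffer incoming chunks and parse complete lines. Stdin delivers
// arbitrary chunk boundaries, so a partial line stays in the buffer until
// its terminating newline arrives.
function makeDecoder(onMessage: (msg: unknown) => void) {
  let buffer = "";
  return (chunk: string) => {
    buffer += chunk;
    let idx: number;
    while ((idx = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 1);
      if (line.trim()) onMessage(JSON.parse(line));
    }
  };
}
```

Seeing the framing spelled out also explains the console.log failure mode covered later: any non-JSON byte on stdout lands in this decoder on the host side and breaks the parse.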

There is no global registration step. Nowhere in this code do you tell Anthropic, OpenAI, or anyone else that the server exists. No manifest is uploaded anywhere. The model itself has no idea this server exists. The only thing that knows about it is the host's local config, which lives in a file on the developer's machine. The "model has no idea MCP exists" mental model is covered in detail in the post that opens this series; the version worth carrying with you is short — MCP is a host-side mechanism. The model just sees a list of tools at API call time.

The tool description is doing a lot of work. The schema is empty (the tool takes no arguments), the handler is a single line, and the description is already three sentences and could reasonably be longer. That is not an accident. The description is the entire interface the LLM ever sees. The handler is invisible to it. There is a dedicated post on writing tool descriptions well — for now, just notice that the description is doing more work than the code, and that ratio holds for almost every MCP tool you will ever write.

Wiring the server into a host

This is where the protocol stops being abstract.

Claude Desktop reads a config file (path varies by operating system, typically ~/Library/Application Support/Claude/claude_desktop_config.json on macOS). To register your server, you add an entry like this:

```json
{
  "mcpServers": {
    "blog-publisher": {
      "command": "node",
      "args": ["/Users/me/code/mcp-blog-publisher/dist/server.js"]
    }
  }
}
```

Claude Code uses a similar file at ~/.claude/mcp_servers.json (or per-project — check the docs for the version you are running). Cursor, Zed, Windsurf — same shape, different file. The configuration model is uniform across compliant hosts on purpose; it is part of what makes the ecosystem portable.

When the host starts, it reads that config, sees the entry, runs node /Users/me/.../server.js as a subprocess, and connects to the subprocess's stdin and stdout. From the server's perspective, it just gets spawned. It has no awareness of which host launched it. As long as the parent speaks the protocol, the server does not care.

The first message the host sends is an initialize request — the protocol's handshake. The server responds with its capabilities (which tools it has, whether it supports resources, whether it supports prompts, what protocol version it speaks). The host follows up with a tools/list request to enumerate the tools. From there, every API call the host makes to the LLM carries a tool list containing the descriptions you wrote.

When the LLM decides to call listDrafts, the API response contains a tool_use block. The host intercepts that block, sees that listDrafts belongs to the blog-publisher server, and dispatches a tools/call JSON-RPC request to that server's stdin. The server's handler runs, returns a result, the SDK serializes it into a JSON-RPC response on stdout, and the host feeds that result back to the LLM in the next API call. The conversation continues.
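That routing step, from tool name to owning server, is pure bookkeeping on the host side. A sketch under assumptions: the registry and function names here are hypothetical, standing in for state a host builds from each server's tools/list response at startup:

```typescript
type ToolUse = { name: string; input: Record<string, unknown> };

// Hypothetical host-side registry: which server owns which tool.
const toolOwners = new Map<string, string>([
  ["listDrafts", "blog-publisher"],
  ["publishDraft", "blog-publisher"],
]);

let nextId = 0;

// Turn an LLM tool_use block into the JSON-RPC request the host
// writes to the owning server's stdin.
function dispatch(toolUse: ToolUse) {
  const server = toolOwners.get(toolUse.name);
  if (!server) throw new Error(`No server registered for tool ${toolUse.name}`);
  return {
    server,
    request: {
      jsonrpc: "2.0" as const,
      id: ++nextId,
      method: "tools/call" as const,
      params: { name: toolUse.name, arguments: toolUse.input },
    },
  };
}
```

The point of the sketch is how little intelligence lives here: the host never interprets the tool, it only looks up the owner and forwards.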

That whole flow — handshake, list, call, response — is exactly what the SDK is wrapping. You can implement it from scratch in plain Node.js without the SDK; that is the next post in this series, where the SDK is peeled off and the protocol is exposed in its full simplicity. It is a useful exercise even if you intend to keep using the SDK for production work. The SDK saves a day, not a project.

What the SDK is actually doing

Before deciding to use or skip the SDK, it helps to know what it is buying you. The @modelcontextprotocol/sdk package handles:

  • JSON-RPC 2.0 framing — message IDs, request/response correlation, error envelopes, the usual hygiene.
  • The MCP lifecycle — initialize, capability negotiation, the initialized notification, shutdown.
  • Schema validation — the inputSchema you declare (with Zod, in TypeScript) is the same shape that gets shown to the LLM and the same shape that validates incoming arguments before your handler runs. One source of truth.
  • Transport abstraction — StdioServerTransport, StreamableHTTPServerTransport, the older HTTP+SSE one. Same handler code, different wire.
  • Type generation — in TypeScript, the SDK exposes types for tool definitions, content blocks, errors, and the rest of the surface area.

That is the whole list. The SDK is roughly the size of a well-designed Express middleware library. There is no magic, no registration with a remote service, no special permissions. Just protocol bookkeeping with reasonable ergonomics.

There are two reasons to know this even if you never plan to write a non-SDK server.

First, the SDK is going to change. It already has, once. Earlier versions used server.tool(name, description, schema, callback) — the form most online tutorials still show. Newer versions use server.registerTool(name, { title, description, inputSchema }, callback). Old code keeps working with deprecation warnings; old tutorials do not. If you understand what the SDK is doing, an API rename is half an hour of follow-the-deprecation-warning work, not an afternoon of "what does this even mean." The protocol itself moves more slowly than the SDK ergonomics on top of it; an investment in understanding the protocol pays back across SDK versions.
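If you are stuck reconciling an old tutorial with a current SDK, the rename is mechanical. Here is a hypothetical adapter, duck-typed against the one method it needs and not part of the SDK, that maps the old four-argument call onto the new shape:

```typescript
// Minimal duck type for the slice of McpServer we touch here.
interface RegistersTools {
  registerTool(
    name: string,
    config: { description: string; inputSchema: Record<string, unknown> },
    handler: (args: any) => Promise<unknown>,
  ): void;
}

// Old tutorials: server.tool(name, description, schema, callback).
// Current SDK:  server.registerTool(name, { description, inputSchema }, callback).
function legacyTool(
  server: RegistersTools,
  name: string,
  description: string,
  schema: Record<string, unknown>,
  callback: (args: any) => Promise<unknown>,
) {
  server.registerTool(name, { description, inputSchema: schema }, callback);
}
```

Half an hour of follow-the-deprecation-warning work, as promised: the arguments move, the semantics do not.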

Second, knowing the protocol means you can debug like an adult. When something goes wrong — and something will, eventually — you can reach for wireshark-style thinking ("what is actually on the wire?") instead of guessing at the SDK's intent. The next post in the series does this exercise from scratch, and we recommend reading it even if you never plan to write a server without the SDK. The exercise is the point. The artifact is incidental.

A real handler, end to end

Here is what a slightly less trivial tool looks like — publishDraft, the actual publish step from mcp-blog-publisher. This is the tool with security implications, because once it runs, the LLM has effectively written to a production database.

```ts
import path from "node:path";
import fs from "node:fs/promises";

const DRAFTS_ROOT = "/Users/me/code/blog-drafts";

server.registerTool(
  "publishDraft",
  {
    title: "Publish a blog draft",
    description:
      "Publishes a draft from blog-drafts/ into the live blog. " +
      "Only filenames returned by listDrafts are accepted. " +
      "WARNING: this is a write operation. " +
      "Confirm with the user before calling, every single time.",
    inputSchema: {
      filename: z
        .string()
        .describe(
          "Filename returned by listDrafts. Must end in .md. " +
          "Path traversal is rejected.",
        ),
    },
  },
  async ({ filename }) => {
    const target = path.resolve(DRAFTS_ROOT, filename);
    if (!target.startsWith(DRAFTS_ROOT + path.sep)) {
      throw new Error("Path escapes drafts root");
    }
    if (!target.endsWith(".md")) {
      throw new Error("Only .md files allowed");
    }

    const markdown = await fs.readFile(target, "utf8");
    const { id, slug } = await blogApi.publish({ markdown }); // blogApi: your data-access layer
    return {
      content: [
        {
          type: "text",
          text: `Published. id=${id}, slug=${slug}`,
        },
      ],
    };
  },
);
```

Twenty-something lines. The interesting parts are not the database call — that is whatever your normal data access looks like — but the two lines of validation in the middle. path.resolve plus the startsWith check is the path-traversal guard. The schema's .describe() annotation shapes the LLM's expectation; the runtime check enforces the boundary. Together they form the security wall. Neither alone is enough.

Three things deserve a second look here, because they generalize across almost every MCP tool worth writing in production.

The schema and the handler are both part of the security boundary. This is a different posture from a typical REST API, where validation often lives entirely in middleware. In MCP, the schema teaches the LLM what shape the tool expects; the handler enforces what the schema is unable to express. A schema that says filename: string and a handler that does no validation will eventually be passed ../../etc/passwd. A schema with a tight .describe() and a handler with explicit boundary checks turns that class of issue into a runtime error rather than a breach.
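The guard generalizes into a small helper worth unit-testing on its own. A sketch with the same logic as the handler above, factored out; resolveDraftPath is our name for it, not an SDK export:

```typescript
import path from "node:path";

// Resolve a filename inside a fixed root, rejecting anything that
// escapes it. Returns the absolute path, or throws.
function resolveDraftPath(root: string, filename: string): string {
  const target = path.resolve(root, filename);
  // path.resolve collapses ../ segments, so a prefix check on the
  // *resolved* path is what actually closes the traversal hole.
  if (!target.startsWith(root + path.sep)) {
    throw new Error("Path escapes drafts root");
  }
  if (!target.endsWith(".md")) {
    throw new Error("Only .md files allowed");
  }
  return target;
}
```

Factoring it out means the boundary check gets its own test file, which is where "../../etc/passwd" should be caught for the first and last time.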

The description includes a directive aimed at the LLM, not the developer. "Confirm with the user before calling, every single time" is not commentary. It is a prompt fragment that ends up in the tool list the LLM sees. Models read these instructions and act on them — not perfectly, but materially. Treating tool descriptions as a place to write prompt-engineered guardrails is one of the most undervalued patterns in MCP design. It does not replace human-in-the-loop confirmation in the host, and it does not replace runtime validation, but it changes the LLM's defaults in a way that compounds across every call.

The result is text, not a structured object. MCP tools return content blocks; the simplest is plain text. The LLM consumes text well — better, in many cases, than it consumes nested JSON. A returned string of "Published. id=... slug=..." is easier for the model to summarize back to the user than a { "id": ..., "slug": ... } payload. Designing tool outputs as if you were writing for a careful but text-native consumer is a useful frame.
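The content-block shape is small enough to wrap in a one-line convenience, ours, not an SDK export:

```typescript
// Wrap a plain string in the content-block shape MCP tools return.
const textResult = (text: string) => ({
  content: [{ type: "text" as const, text }],
});
```

Every tool in a server returning through one helper like this also keeps output formatting decisions in one place.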

This pattern — schema and handler both contributing to the security boundary, descriptions doing work as soft guardrails, outputs shaped for LLM consumption — is the heart of the security post in this series. MCP security is mostly a tool-design problem before it is an infrastructure problem. If your tool can only do things you can enumerate in advance ("publish this specific filename that already existed before the LLM started talking"), the blast radius collapses by an order of magnitude.

The friction nobody warns you about

Three things bit us during the first build. They will save anyone reading this an afternoon.

The module-system trap. A tutorial snippet had "type": "commonjs" in package.json but ESM import syntax in the code. Node crashed on line 2 with a syntax error that did not obviously point at package.json. Roughly ten minutes lost the first time. If your import lines are getting a syntax error, check "type" in package.json first, before anything else. Make it "module" for ESM, or rewrite the imports as require(). There is no neutral default in 2026; the file declares its module system, and the code has to match.
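For the ESM case, the fix is one field in package.json:

```json
{
  "type": "module"
}
```

With that set, .js files in the package parse as ES modules and the import syntax above is valid.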

Stdout is the wire. A console.log("server starting") at the top of server.ts, dropped in for a sanity check, caused Claude Desktop to drop the connection silently — no error, no log entry visible to the developer, just nothing. The reason is that stdout is reserved for JSON-RPC messages, and console.log writes to stdout. The first byte of "server starting" corrupted the JSON-RPC stream and the host hung up. The fix is one character of muscle memory: console.error instead of console.log. Stderr is fine; stdout is the protocol. Once you have been bitten by this, the habit is permanent.
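A tiny habit-enforcing helper, ours rather than anything the SDK provides, keeps diagnostics off the wire:

```typescript
// All diagnostics go to stderr. Stdout belongs to JSON-RPC.
const formatLog = (level: string, msg: string): string =>
  `[${level}] ${new Date().toISOString()} ${msg}`;

const log = (msg: string): void => {
  process.stderr.write(formatLog("info", msg) + "\n");
};
```

Routing every stray print through one function pinned to stderr means the mistake can only be made in one place, and grep finds it.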

SDK API drift. Mentioned above, but worth repeating: server.tool(...) is the old form, server.registerTool(...) is the current form, and the deprecation warning is not the most prominent thing the runtime tells you about. If your tutorial is more than six months old, cross-check it against the current SDK README before assuming the mismatch is something you did wrong. This will keep happening. The SDK ergonomics are still settling.

Each of these has a longer treatment elsewhere in the series. The point of mentioning them in the pillar is to flag them so they are recognizable when they bite, rather than mysterious.

Tools, resources, prompts — the three primitives

Almost everything above has been about tools, because tools are roughly 95% of the MCP work that gets done in the wild. But the protocol defines three primitives, and the other two are underused:

  • Tools — model-callable functions. The kind we just built.
  • Resources — read-only data the host can attach into context. "Here is the contents of README.md, in case the model wants to reference it." The host decides whether to attach; the server just exposes.
  • Prompts — user-invokable templates. The model does not choose to invoke these. The user does, often through a slash-command UI in the host.

We are not going to walk through resources and prompts in this article — there is a dedicated post on them — but it is worth knowing they exist and that most MCP servers in the wild today expose only tools. That means most MCP servers are using one third of the protocol. There are real categories of capability — large reference documents, structured templates, user-driven shortcuts — that are awkward to fit into the tools primitive but elegant to express as resources or prompts. Knowing the full surface area is part of being competent here.

Stdio vs Streamable HTTP

The entire post so far has used the stdio transport. There is a second transport — Streamable HTTP — for remote MCP servers. That is the transport you reach for when the MCP server needs to live on a VPS and be called by Claude.ai over the public internet, or by an enterprise's hosted agent, or anywhere the parent-spawns-child relationship does not fit naturally.

Streamable HTTP brings ports back. It also brings authentication back, which means OAuth 2.1 and PKCE, which is a whole cluster of follow-on questions. Doing the topic justice in the pillar is impossible. There is a transport post on when to pick which, an OAuth 2.1 walkthrough for the auth side, and a PKCE explainer for the cryptographic primitive that ties it together.

The summary, for now: most local-developer MCP servers should be stdio. Most enterprise integrations will need HTTP. The choice between them is a deployment-model decision, not a technical preference. Reaching for HTTP because it feels more "real" is a common error — it adds real engineering tax in exchange for capability that may not be needed. Reaching for stdio when the integration is genuinely remote is the opposite mistake, and it is harder to unwind once a few users are depending on the wrong shape.

What good MCP work actually looks like

If you came to this article to figure out whether to hire someone — or be hired — to build an MCP server, an honest profile of the work is useful.

The split is roughly:

  • 70% backend engineering you already know. Argument validation, error handling, talking to APIs and databases, writing tests, running in production, observability. This is the ordinary work of building a service.
  • 20% protocol-aware work. Getting the lifecycle right, getting the schemas right, getting the transport choice right, getting the auth right when the server is remote. This is the part most online MCP content focuses on, partly because it is the most novel and partly because it is the easiest to demo.
  • 10% writing. And it is the 10% that decides whether the LLM uses your tools correctly. Tool descriptions, schema annotations, error messages — all written for a consumer that is non-deterministic by design. This is the most undervalued skill in the MCP ecosystem, and it is the one that compounds the hardest. We have a whole post on it; it is worth your time even if you have no immediate plans to write a server.

If you are commissioning an MCP server — for a CRM, an internal admin, a support pipeline, a content workflow — the budget conversation should focus on which tools you want, how narrow each one needs to be, and what the security posture looks like. The protocol itself is the smallest part of the work. The judgment about what to expose, what not to expose, how to describe it, and how to guard it is what you are paying for.

This is the framing we use at Amazing Resources when scoping MCP work. We treat each MCP server as a small, well-bounded backend service — designed against the client's actual domain, with clear tool semantics, real schemas, real error handling, real audit logs, and real operational care. We borrow heavily from the same architectural toolkit that produces good APIs: bounded contexts from Domain-Driven Design, clean separation between domain and infrastructure, and an obsession with making the boundary contract precise. Those habits transfer directly. A well-designed MCP server reads, runs, and operates a lot like a well-designed REST service that happens to have a language model on the consumer side, with all the strengths and oddities that brings.

A diagnostic worth keeping

If you ship an MCP server and it works but feels wrong, there is a useful diagnostic.

Replace your handler bodies with stubs that return canned data. Ship the stubbed version. Then try to use the server through a real host with a real LLM at the wheel.

If the LLM can still figure out which tool to call, in what order, with which arguments, from the descriptions and schemas alone — the descriptions are doing their job. The handlers were not the bottleneck.

If the LLM cannot — calls the wrong tool, passes wrong arguments, gets confused about ordering — the code was never the problem. The writing was.

This diagnostic separates concerns in a way that is hard to do any other way, and it is the cheapest experiment in the entire MCP toolbox. Almost every team that runs it once, runs it permanently.
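The stubbing itself is a few lines. A sketch, assuming your handlers live in a plain record keyed by tool name (the types and names here are ours):

```typescript
type ToolResult = { content: { type: string; text: string }[] };
type Handler = (args: Record<string, unknown>) => Promise<ToolResult>;

// Replace every handler with one that returns canned text. Descriptions
// and schemas ship exactly as written; only the bodies go inert.
function stubHandlers(handlers: Record<string, Handler>): Record<string, Handler> {
  const stubbed: Record<string, Handler> = {};
  for (const name of Object.keys(handlers)) {
    stubbed[name] = async () => ({
      content: [{ type: "text", text: `stub: ${name} called` }],
    });
  }
  return stubbed;
}
```

Register the stubbed record instead of the real one, point the host at the server, and watch which tools the LLM reaches for.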

Where to go from here

A sensible reading order through the rest of the series:

  1. Build a minimal MCP server from scratch, no SDK — to demystify the protocol completely.
  2. Writing tool descriptions LLMs can reason about — the part that decides whether your server is actually useful.
  3. Securing your MCP server — auth, scope, the narrow-funnel principle, audit, and the discipline of shrinking tools until the LLM's range of motion only covers outcomes you are comfortable with.
  4. Stdio vs Streamable HTTP — the deployment-model choice and its consequences.
  5. Resources and prompts: the underused half — beyond tools.

The protocol is small. The runtime is mechanical. The judgment is everything. That is the trade you are making when you choose to do MCP work seriously, and the rest of this series is the ledger of decisions that judgment has to make.

Building an MCP server for your CRM, internal tool, or product? Get a quote →

Related Topics

mcp server nodejs · mcp typescript · model context protocol nodejs · mcp tutorial nodejs
