How We Built an MCP Server for Our SaaS
A walkthrough of the architecture decisions, the 16 tools we exposed, and what we learned shipping a Model Context Protocol server to npm. The TL;DR: it took less code than we expected, and the discovery surface was the harder problem.
What is MCP, in one paragraph
The Model Context Protocol (MCP) is an open standard for exposing tools and data sources to large language models. A model running inside Claude Desktop, Claude Code, Cursor, Cline, or Continue can call your API through a thin protocol layer. The model decides which tool to invoke; your server defines the schema and runs the actual work. The protocol is JSON-RPC over stdio (for local clients) or HTTP/SSE (for hosted servers). Think of it as “function calling, but standardized across vendors.”
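For a concrete feel, here is roughly what a tool invocation looks like on the wire, written as TypeScript object literals. The message shapes follow the MCP spec's `tools/call` method; the `domain` argument name is an assumption for illustration:

```typescript
// A JSON-RPC 2.0 request the client sends (over stdio) to invoke a tool.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "vs_dns_check",
    arguments: { domain: "example.com" }, // arg name is illustrative
  },
};

// The server's response: tool results come back as content blocks.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: '{"A":["93.184.216.34"]}' }],
  },
};

console.log(request.method, response.result.content[0].type);
```

The model never sees HTTP details; it only sees tool names, schemas, and these content blocks.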
Why we built one
Visual Sentinel is a website-monitoring SaaS. The core operations (creating monitors, listing incidents, acknowledging alerts, reading status pages) are repetitive, well-shaped, and naturally script-like. They are also exactly the kind of thing a developer running an AI assistant would want to do without leaving their editor: “add monitor for example.com,” “show me the incidents from the last hour,” “what is the SSL status of my checkout endpoint.”
These workflows already exist as REST endpoints in our public OpenAPI spec. An MCP server is mostly a translation layer: it wraps the existing API in a schema that an LLM can understand and dispatch against. We did not build new functionality. We built a discovery surface for the functionality we already had.
Architecture
The server is a single Node.js process that speaks JSON-RPC over stdio. It depends on the official @modelcontextprotocol/sdk for the protocol layer, plus a tiny custom HTTP client that calls the public Visual Sentinel API.
The codebase is three files:
- `src/index.ts` wires up the SDK, registers tools, and starts the stdio transport.
- `src/api-client.ts` is a 60-line typed wrapper over `fetch()` that handles base URL, API-key headers, and JSON parsing.
- `src/tools.ts` defines all 16 tools with JSON Schema input definitions and async handlers.
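A sketch of what the `fetch()` wrapper might look like. The class name, header scheme, and path prefix are assumptions for illustration, not the real API surface; the injectable `fetchFn` keeps the wrapper testable without network access:

```typescript
// Minimal typed wrapper over fetch(): base URL, API-key header, JSON parsing.
// The Bearer scheme and error format here are illustrative assumptions.
type FetchFn = typeof fetch;

class ApiClient {
  constructor(
    private baseUrl: string,
    private apiKey?: string,
    private fetchFn: FetchFn = fetch,
  ) {}

  async request<T>(method: string, path: string, body?: unknown): Promise<T> {
    const headers: Record<string, string> = { "Content-Type": "application/json" };
    if (this.apiKey) headers["Authorization"] = `Bearer ${this.apiKey}`;
    const res = await this.fetchFn(`${this.baseUrl}${path}`, {
      method,
      headers,
      body: body === undefined ? undefined : JSON.stringify(body),
    });
    if (!res.ok) {
      // Surface the API's error body verbatim so the caller (and the LLM)
      // sees exactly what went wrong.
      throw new Error(`${res.status}: ${await res.text()}`);
    }
    return (await res.json()) as T;
  }
}
```

Every tool handler then reduces to one `request()` call plus result formatting.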
The build is plain tsup producing ESM output. The published bundle is around 14kB, plus the SDK transitive dependencies. Total install time on a developer machine is under three seconds.
The 16 tools
We split tools into two groups: five that work without authentication (free tools that probe public endpoints), and eleven that need a personal API key from the dashboard.
Public (no API key)
- `vs_health`: verifies the server is reachable.
- `vs_dns_check`: A, AAAA, MX, CNAME, TXT lookup.
- `vs_ssl_check`: certificate validation, expiry, chain, protocol.
- `vs_speed_test`: Core Web Vitals (LCP, CLS, INP, TTFB).
- `vs_website_check`: single-shot HTTP status and load time.
Authenticated (personal API key)
- `vs_monitor_list`, `vs_monitor_get`, `vs_monitor_create`, `vs_monitor_update`, `vs_monitor_delete`, `vs_monitor_pause`, `vs_monitor_resume`: full CRUD over monitors.
- `vs_incident_list`, `vs_incident_acknowledge`: incident triage.
- `vs_alert_list`: recent alerts across all monitors.
- `vs_status_pages_list`: public status-page management.
We deliberately did NOT expose every API endpoint. Bulk import, billing-related operations, and team-management endpoints stayed out of scope. The principle: ship the operations a developer would script first, then expand based on actual usage.
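Each tool is declared the same way: a name, a description, a JSON Schema for inputs, and an async handler. A sketch of one entry (the field names mirror the MCP tool schema; the `interval_seconds` argument is an illustrative assumption):

```typescript
// One tool definition. The schema below is what the LLM actually reads
// when deciding whether and how to call the tool.
const vsMonitorCreate = {
  name: "vs_monitor_create",
  description: "Create a new uptime monitor for a URL.",
  inputSchema: {
    type: "object",
    properties: {
      url: {
        type: "string",
        description: "The URL to monitor, must include the protocol (https://...)",
      },
      interval_seconds: {
        type: "number",
        description: "How often to check, in seconds.", // illustrative arg
      },
    },
    required: ["url"],
  },
} as const;

console.log(vsMonitorCreate.name, vsMonitorCreate.inputSchema.required[0]);
```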
Schema design lessons
The schema you give the LLM IS the documentation. The model reads tool names and argument descriptions to decide what to call. Vague names produce vague behavior. Three concrete lessons:
- Prefix every tool name with a namespace. We used `vs_*` across all 16 tools. The model is more likely to pick the right tool when it can scope by prefix, especially in environments with multiple MCP servers loaded.
- Argument descriptions matter more than the name. A tool called `vs_monitor_create` with the arg description "The URL to monitor, must include the protocol (https://...)" performs noticeably better than the same tool with a bare arg `url`. The LLM uses the description to decide what to pass.
- Return structured errors. When the API rejects a request, return the validation error verbatim in the tool result, not a generic "something failed." The LLM will read it and self-correct on the next call.
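The third lesson, sketched as code: a shared wrapper that passes errors through verbatim instead of swallowing them. The `isError` flag follows the MCP tool-result convention; the error format is an assumption:

```typescript
// Run a tool handler and return the error text verbatim on failure, so
// the model can read the validation message and correct its next call.
async function runTool(
  handler: () => Promise<string>,
): Promise<{ content: { type: "text"; text: string }[]; isError?: boolean }> {
  try {
    const text = await handler();
    return { content: [{ type: "text", text }] };
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    // Verbatim, not "something failed": the message IS the feedback loop.
    return { content: [{ type: "text", text: message }], isError: true };
  }
}
```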
Distribution
Once the server worked, the next problem was: how do users find and install it? Three answers, in priority order:
- npm. Published as `@visualsentinel/mcp-server`. Run via `npx -y @visualsentinel/mcp-server`, no install step. This is the path most users take because it is the path most MCP server documentation shows.
- A landing page on our marketing site. Visit /mcp for install snippets covering five clients (Claude Desktop, Claude Code, Cursor, Cline, Continue). Copy-paste the JSON, restart the client, the tools appear. We also serve a machine-readable server card at `/.well-known/mcp/server-card.json` so future MCP catalogs can auto-discover it.
- Awesome lists and registries. Submitting to `awesome-mcp-servers` on GitHub and the official MCP Registry was the slowest, lowest-leverage step but the one with the longest tail. PRs sit for days before review. We did it anyway because that is where developers searching for MCP servers actually land.
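For reference, the copy-paste snippet for Claude Desktop looks roughly like this (the `VS_API_KEY` env-var name is an assumption; the five free tools work without it):

```json
{
  "mcpServers": {
    "visual-sentinel": {
      "command": "npx",
      "args": ["-y", "@visualsentinel/mcp-server"],
      "env": { "VS_API_KEY": "your-api-key" }
    }
  }
}
```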
Our Claude plugins marketplace (a separate repo, also under the VisualSentinel GitHub org) bundles the MCP server with three Claude Code skills (monitor-status, incident-triage, monitor-onboarding) and three slash commands. That is the one-command install path for Claude Code users specifically.
What we would do differently
Add the MCP Registry submission step earlier. The Registry asks for an mcpName field in your published package.json. We did not include it on first publish, so we needed a patch release to add it. Plan it into the initial publish.
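The field in question, in an abbreviated `package.json` (the namespace value shown is illustrative, following the registry's reverse-DNS naming):

```json
{
  "name": "@visualsentinel/mcp-server",
  "version": "1.0.1",
  "mcpName": "io.github.VisualSentinel/mcp-server"
}
```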
Lean harder on JSON Schema constraints. Several tools accept enum-shaped strings (severity levels, alert channel types). Encoding them as enums in the input schema gives the LLM a tighter set of valid values to pick from and reduces malformed calls.
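What that looks like in an input schema. The `severity` argument and its values are illustrative assumptions, not the real API's vocabulary:

```typescript
// Encoding an enum-shaped string as a JSON Schema enum gives the model a
// closed set of valid values instead of a free-form string to guess at.
const severityArg = {
  type: "string",
  description: "Incident severity to filter by.",
  enum: ["critical", "major", "minor"], // illustrative values
} as const;

console.log(severityArg.enum.length);
```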
Add streaming for long-running tools. A monitor-create plus immediate-check workflow takes a few seconds end to end. The MCP protocol supports progress notifications; we did not wire them up because the SDK examples skipped over them. Streaming progress feels nicer in interactive UIs and is on the next-version list.
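The notification the protocol defines for this, shown as an object literal (shape per the MCP spec's `notifications/progress`; the token value is arbitrary):

```typescript
// A progress notification a server can emit while a long-running tool call
// (e.g. monitor-create plus immediate check) is still working. The client
// supplies the progressToken in the original request's _meta field.
const progressNotification = {
  jsonrpc: "2.0",
  method: "notifications/progress",
  params: {
    progressToken: "tool-call-42",
    progress: 2,
    total: 3,
  },
};
```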
Try it
If you use Claude Desktop, Claude Code, Cursor, Cline, or Continue, you can install the Visual Sentinel MCP server in 60 seconds. The five free tools (DNS, SSL, speed test, website checker, health) work without an account. The other eleven tools use a personal API key from the dashboard.
Install snippets and tool reference

Source code
- github.com/VisualSentinel/mcp-server (TypeScript, MIT licensed, ESM, Node 18+).
- npm: @visualsentinel/mcp-server (published bundle).
- github.com/VisualSentinel/openapi (the underlying OpenAPI 3.1 spec the server wraps).
- github.com/VisualSentinel/claude-plugins (Claude Code plugin marketplace bundling the MCP server with skills and slash commands).