Building a Custom MCP Server: What Business and Development Teams Need to Know

The Model Context Protocol is everywhere. Anthropic released it in November 2024, and within months, it became the de facto standard for connecting AI assistants to external tools and data. OpenAI and Google DeepMind have adopted it. It's being called "USB-C for AI applications."

I built my own MCP server recently. Claude's built-in filesystem server was having reliability issues, and building a replacement was straightforward. Perhaps too straightforward.

When something that grants an AI access to your systems is that easy to build, it's worth stopping to think about the implications.

What Is MCP and Why Should Businesses Care?

If you're a business leader hearing "MCP" in technical discussions, here's what you need to know.

The Model Context Protocol is a standardised way for AI assistants to interact with external systems: your files, databases, APIs, development tools, and internal applications. Think of it as a universal adapter that lets AI reach beyond its training data and actually *do things* in your environment.

Before MCP, every AI integration was bespoke. Each tool needed custom code, authentication handling, and error management. MCP standardises this into a common protocol, enabling tools built once to work across different AI platforms.
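
Under the hood, MCP is JSON-RPC 2.0. As a rough sketch, a single tool invocation looks like the pair of messages below. The tool name and arguments are invented for illustration, but the `tools/call` method and the request/response envelope come from the MCP specification.

```python
# Sketch of the JSON-RPC 2.0 messages behind one MCP tool invocation.
# Field values are illustrative; the envelope follows the MCP spec.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",                      # which tool to invoke
        "arguments": {"path": "docs/status.md"},  # tool-specific inputs
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "# Project status\n..."}],
        "isError": False,
    },
}

print(json.dumps(request, indent=2))
```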

This matters because the real value of AI assistants lies not in answering questions but in taking action: drafting documents, querying databases, running analyses, managing workflows. MCP is the plumbing that enables this. If your organisation is serious about AI productivity, you'll encounter MCP whether you realise it or not.

Why I Built My Own

I use Claude extensively as a day-to-day productivity tool. Not just for development work, but for research, document drafting, analysis, and managing complex projects across multiple workstreams.

The real power comes from filesystem access. Instead of constantly re-explaining context, uploading documents, or referencing previous conversations, I point Claude at my working directories. It reads the relevant files, understands my current project status, and picks up where we left off. The AI becomes a genuine assistant with persistent context, rather than a stateless chatbot I have to brief from scratch every session.

The built-in filesystem MCP handles the basics, but I wanted something tailored to my workflow: .NET build and test integration, optimised file operations for large codebases, and better diagnostics when issues arise.
Building it was a standard process: get it working, then refine. The performance optimisations were MCP-specific, but the governance and security considerations are the same principles that should apply to any system integration. They often get overlooked in the rush to ship AI tooling.
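
For a sense of just how straightforward: my server is .NET, but the official Python SDK's FastMCP helper makes for the shortest illustration. This is a skeleton only; a real server needs the access controls discussed below.

```python
# Minimal MCP server sketch using the official Python SDK ("pip install mcp").
# Skeleton only: no allowlists, no logging, no confirmation steps yet.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-filesystem")

@mcp.tool()
def read_file(path: str) -> str:
    """Read a UTF-8 text file and return its contents."""
    return Path(path).read_text(encoding="utf-8")

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport desktop clients expect
```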

For Business: The Productivity Promise and the Security Reality

The Genuine Benefits

MCP-enabled AI assistants can genuinely transform productivity. Instead of switching between applications, copying data, and manually orchestrating workflows, you describe what you need, and the AI handles the execution. For knowledge workers managing complex projects across multiple systems, this is a significant step change.

The filesystem access I described earlier is a good example. The AI maintains context across sessions, references previous work, and builds on existing thinking. This isn't a gimmick; it fundamentally changes how you work with AI.

The Security Reality

But there's a problem. Security researchers have identified multiple serious vulnerabilities in the MCP ecosystem, and the protocol itself wasn't designed with security as a first principle.

In April 2025, security researchers released an analysis concluding there are "multiple outstanding security issues with MCP, including prompt injection, tool permissions that allow for combining tools to exfiltrate data, and lookalike tools that can silently replace trusted ones."[1]

A Bitdefender analysis noted: "It's surprising to see a new core protocol introduced in 2025 where security isn't 'secure by default'... This security oversight serves as a warning sign that even basic web security best practices were not consistently applied from the beginning."[2]

Red Hat's security analysis highlighted the "confused deputy" problem: "If it's not implemented correctly, a user could gain access to resources that should not be available to them but that are available to the MCP server, violating the principle of least privilege."[3]

The "Helpful Assistant" Problem

Here's the uncomfortable truth: AI assistants are designed to be helpful. That's the problem.

An AI with access to your systems will do exactly what it thinks you asked, even when that means exposing confidential data in a summary, including sensitive file contents in its reasoning, or executing a destructive operation because the request seemed legitimate.

As the Jit security analysis put it: "The model did exactly what it was designed to do, and that was the problem."[4]

The guardrails can't be behavioural ("please don't access sensitive files"). They must be architectural. The AI shouldn't be able to access what it shouldn't access, regardless of what anyone asks.

What Business Leaders Should Ask

Before any MCP implementation touches business data, these questions need answers:
1. What data can this AI access, and who approved that access?
2. How do we authenticate and authorise users of this system?
3. Where do AI interactions get logged, and for how long?
4. What happens to data after the AI processes it?
5. How do we revoke access if something goes wrong?
6. Does this integration comply with our regulatory obligations?
7. Who is accountable when the AI does something it shouldn't?

If your technical team can't answer these questions, the implementation isn't ready for production.

Regulated Environments: The Higher Bar

If you're in insurance, financial services, healthcare, or any regulated industry, the bar is higher still.

Data residency - Where is the AI processing your data? If you're using cloud-hosted AI services, does that data cross jurisdictional boundaries?

Explainability - Can you explain what the AI did and why? Regulators increasingly expect organisations to demonstrate they understand their AI systems.

Human oversight - Certain decisions in regulated environments require human approval. Your MCP implementation needs to support, not circumvent, those checkpoints.

Audit trails - If you're operating under GDPR, you need to demonstrate what personal data was accessed and why. Under FCA rules, you need complete audit trails for any system touching financial data. If you're building AI systems seriously, ISO 42001 (the AI management system standard) explicitly requires documented processes for AI system monitoring and logging. I implemented this at a previous organisation, and the audit trail requirements alone would rule out most naive MCP implementations.

For Developers: It's Not Just Another API

If you're building MCP servers, the technical implementation is straightforward. That's not the hard part.

The hard part is that MCP introduces two unpredictable elements that traditional API design doesn't account for: the LLM itself, and the human asking questions.

The Unpredictability Problem

With a traditional API, you control the inputs. You define the endpoints, validate the parameters, and handle the responses. The caller follows your specification or gets an error.

With MCP, the LLM decides which tools to call and in what order. It interprets natural language requests and translates them into tool invocations. A user asking "clean up my project" could result in file deletions you didn't anticipate.

The human adds another layer of unpredictability. They might ask ambiguous questions. They might not understand what the AI has access to. They might paste text from untrusted sources that contains hidden instructions (prompt injection attacks are real and documented [4]).

Security Is Harder, Not Easier

This unpredictability means security is harder with MCP than with traditional integrations:

Access scoping - Don't grant broad access and expect the AI to use it responsibly. Implement explicit allowlists. My server can only access directories I've specifically permitted. Everything else is denied by default.
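
A deny-by-default check takes only a few lines. The sketch below (the permitted root is a placeholder) resolves the requested path, which collapses `..` segments and symlinks, and refuses anything outside the configured roots.

```python
# Deny-by-default path check: resolve the requested path, then allow it
# only if it sits under an explicitly permitted root. The root directory
# here is a placeholder.
from pathlib import Path

ALLOWED_ROOTS = [Path("/home/chris/projects").resolve()]

def resolve_allowed(requested: str) -> Path:
    resolved = Path(requested).resolve()  # collapses ".." and symlinks
    for root in ALLOWED_ROOTS:
        if resolved == root or root in resolved.parents:
            return resolved
    raise PermissionError(f"Access denied: {requested}")
```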

Principle of least privilege - Don't build one MCP server with access to everything. Build focused servers with specific capabilities. A code analysis tool doesn't need access to HR files. A document summariser doesn't need write permissions.

Confirmation for destructive operations - Build friction into dangerous operations. My server requires two steps for deletion: request (returns a token), then confirm (requires the token). This creates a checkpoint where the human can verify intent.
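
A minimal sketch of that request/confirm pattern; token expiry and persistence are omitted for brevity.

```python
# Two-step delete: the first call returns a one-time token, the second
# call must present it. Expiry and persistence deliberately omitted.
import secrets
from pathlib import Path

_pending: dict[str, Path] = {}

def request_delete(path: str) -> str:
    token = secrets.token_hex(8)
    _pending[token] = Path(path)
    return f"Confirm deletion of {path} with token {token}"

def confirm_delete(token: str) -> str:
    target = _pending.pop(token, None)
    if target is None:
        return "Unknown or already-used token; nothing deleted."
    target.unlink()
    return f"Deleted {target}"
```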

Comprehensive logging - Log every operation with timestamps, parameters, and outcomes. When something goes wrong, you need to answer: What did the AI do? When? At whose request?
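
One structured record per operation is enough to answer all three questions later. This sketch uses only the Python standard library and writes to stderr, which matters for the transport reasons covered below.

```python
# One structured audit record per tool call: what ran, with which
# parameters, when, and how it ended.
import json
import logging
import sys
import time

logging.basicConfig(stream=sys.stderr, level=logging.INFO)  # never stdout
audit = logging.getLogger("audit")

def log_operation(tool: str, params: dict, outcome: str) -> None:
    audit.info(json.dumps({
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "tool": tool,
        "params": params,
        "outcome": outcome,
    }))
```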

MCP-Specific Considerations

Beyond standard security practices, MCP has some specific technical considerations:

Transport matters - MCP supports multiple transports; the most common for desktop integration is JSON-RPC over standard input/output (stdio). With this approach, any stray logging, startup message, or debug output corrupts the protocol stream and causes silent failures. HTTP-based transports don't have this problem, but desktop AI assistants typically use stdio.
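
In practice, that means every diagnostic goes to stderr, never stdout:

```python
# On the stdio transport, stdout *is* the protocol channel: anything
# printed there gets parsed as JSON-RPC and breaks the session.
import sys

print("starting up...")                    # WRONG: corrupts the protocol
print("starting up...", file=sys.stderr)   # safe: stderr is free for diagnostics
```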

Response size affects performance - The LLM has to process every character you return. Verbose output means slower responses and higher token costs. Parse and summarise. A successful build should return "OK", not 50 lines of compiler output.
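
A build tool illustrates the point: shell out, then return a verdict rather than the log. This sketch wraps `dotnet build` with deliberately simplified error matching.

```python
# Summarise a build instead of returning raw compiler output: the LLM
# pays (in tokens and latency) for every character it receives.
import subprocess

def run_build(project_dir: str) -> str:
    result = subprocess.run(
        ["dotnet", "build", project_dir],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        return "OK"
    # Return only the lines that matter, not the full log.
    errors = [ln for ln in result.stdout.splitlines() if ": error " in ln]
    return "\n".join(errors[:20]) or result.stdout[-2000:]
```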

Tool descriptions are prompts - The descriptions you write for your tools guide how the LLM uses them. Vague descriptions lead to misuse. Be explicit about what the tool does, what inputs it expects, and how output should be interpreted.
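
Concretely, here's what an explicit description looks like for a hypothetical search tool. In the Python SDK, the function's docstring becomes the tool description the LLM sees.

```python
# The description is effectively a prompt: it tells the model when to
# use the tool, what inputs mean, and how to read the output.
from pathlib import Path

def search_files(pattern: str, directory: str) -> str:
    """Search for a literal text pattern in files under one directory.

    Use this to locate content before reading a whole file. `pattern`
    is matched case-sensitively and is NOT a regex. Returns at most 50
    matches as 'path:line: text'. An empty result means no matches,
    not an error.
    """
    matches: list[str] = []
    for path in Path(directory).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(encoding="utf-8").splitlines()
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        for i, line in enumerate(lines, 1):
            if pattern in line:
                matches.append(f"{path}:{i}: {line.strip()}")
                if len(matches) >= 50:
                    return "\n".join(matches)
    return "\n".join(matches)
```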

Context enables action - When returning file content, include line numbers, file size, truncation markers, and guidance on which tools to use for different editing scenarios. The LLM can't ask clarifying questions mid-operation.
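
A sketch of what that looks like for file reads, with placeholder limits:

```python
# Return file content with the context the model needs to act on it:
# line numbers, total size, and an explicit truncation marker.
from pathlib import Path

MAX_LINES = 500  # placeholder limit

def read_file_annotated(path: str) -> str:
    text = Path(path).read_text(encoding="utf-8")
    lines = text.splitlines()
    body = "\n".join(f"{i:>5}: {line}" for i, line in enumerate(lines[:MAX_LINES], 1))
    header = f"{path} ({len(lines)} lines, {len(text)} chars)"
    if len(lines) > MAX_LINES:
        body += f"\n... truncated at line {MAX_LINES} of {len(lines)}; request a range for more."
    return f"{header}\n{body}"
```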

The Bottom Line

MCP is becoming infrastructure. The productivity benefits are real. But the security considerations are being systematically underweighted in the rush to ship.

For business leaders: Ask the hard questions before MCP touches your data. The answers matter more than the demo.

For developers: Yes, get it working. But don't treat it like another API. The unpredictable elements (the LLM and the human) change the security model entirely. The same principles that apply to any system integration still apply here. They're just harder to implement when you can't predict what questions will be asked.

The organisations that get this right will build AI integrations that actually deliver. Those that don't will find themselves explaining to regulators or customers why their AI had access to data it shouldn't have.

That's not a conversation anyone wants to have.

References
1. Wikipedia, "Model Context Protocol" - Security researchers' April 2025 analysis of MCP vulnerabilities.

2. Bitdefender, "Security Risks of Agentic AI: A Model Context Protocol (MCP) Introduction" - Analysis of MCP security maturity.

3. Red Hat, "Model Context Protocol (MCP): Understanding security risks and controls" - Analysis of confused deputy vulnerabilities.

4. Jit, "The Hidden Dangers of MCP: Emerging Threats for the Novel Protocol" - Detailed analysis of prompt injection and attack patterns.

5. Anthropic, "MCP Security Best Practices" - Official security guidance.

6. arXiv, "Securing the Model Context Protocol (MCP): Risks, Controls, and Governance" - Academic analysis mapping MCP to ISO 42001 and other frameworks.


Chris Brown is a fractional CTO and enterprise architect with 28 years of experience across insurance, emergency services, and enterprise software. He writes about the Build Paradox: the tension between shipping quickly and building systems that last.
