Hey everyone,
Today I want to talk about an interesting project that really made me think about what innovation and modernization actually mean for an application.
I recently attended Codemotion 2025 in Milan (a really well-organized tech event here in Italy) and caught a brief talk from OpenApi about the challenges they were facing while building MCP servers that exposed the same functionality as their REST APIs. On the surface, it doesn't sound that complex: if the business logic is already there, how hard can it be to spin up a new project at the API layer? Shouldn't be a big deal.
Shouldn’t be, right?

Objective
Wanting to replicate OpenApi's approach, I grabbed one of my own projects (a REST API for integrations) that I'd built for my HomeLab and added an MCP server running in parallel with the API.
I set myself the following constraints:
- Both the API project and the MCP project had to be maintained
- The two projects had to be independent of each other
- Both projects needed to work in “standalone” mode without requiring the other to be running
What is an MCP server
The concept of MCP servers should be familiar to every developer by now, but let's do a quick recap. Back in late 2024, Anthropic, the company behind Claude, introduced the Model Context Protocol (MCP), a "standard" for integrating any external application with the increasingly popular LLMs. Today, the protocol is recognized by the other major AI players, including OpenAI, Google, and Microsoft.
Unlike a REST API, which you reach via an address and port, an MCP server communicates with the MCP client over the JSON-RPC 2.0 protocol: in the standard stdio transport, this is a continuous exchange of JSON messages over the stdin and stdout channels.
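As a sketch of what flows over those channels: the first line below is a request the client writes to the server's stdin, the second is the server's reply on stdout (the `tools/list` method is from the MCP specification; the `get_status` tool is a made-up example):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
{"jsonrpc": "2.0", "id": 1, "result": {"tools": [{"name": "get_status", "description": "Read the current HomeLab status", "inputSchema": {"type": "object"}}]}}
```

Every message is a complete JSON-RPC 2.0 envelope; there's no URL, no HTTP verb, just a long-lived conversation over the two standard streams.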
Architecture matters
The project was written in .NET 10 and followed a clean architecture pattern:

Business logic lives in Application, Infrastructure contains the repositories and external service access modules, while Domain houses the domain entities.
The company I work for also has several services that could be used to build MCP servers, bringing core business functionality directly into an LLM, or embedding an agent into existing solutions. Everyone loved the idea, but as I started diving into the actual projects, I wanted to cry:
- No real architecture to speak of
- Business logic scattered across database, controllers, classes, and REST APIs
- Logic duplicated everywhere
- Implementations all over the place

What does implementing an MCP server even mean here? Which business logic should I use? How much time would I need to centralize the logic in a project with an architecture straight out of Harry Potter’s “Monster Book of Monsters”?
First major takeaway: architecture is crucial. It matters, and it pays back every single extra minute you invest instead of taking the “easy” and “fast” route.
The new architecture
Moving forward, let’s consider the API project built with clean architecture. Adding an MCP server means arriving at this model:

Implementing MCP functionality directly inside the API project is wrong for several reasons.
First, we need to maintain separation of concerns—each project should do one thing well. One handles APIs, the other handles MCP.
Second, in a production environment, I need to scale the two interfaces independently or even deploy just the APIs without the MCP server. The right choice is to have a dedicated project for each concern, allowing them to grow and depend only on what they actually need.
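To make the "dedicated project" idea concrete, here is a minimal sketch of what the MCP project's Program.cs could look like using the official ModelContextProtocol C# SDK (the SDK is still in preview, so package and method names may shift; AddApplication is a hypothetical extension standing in for the shared Application-layer registration):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Register the MCP server with the stdio transport:
// JSON-RPC messages flow over stdin/stdout, not HTTP
builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithToolsFromAssembly(); // discover [McpServerTool] methods in this assembly

// The business layers are wired exactly as in the API project,
// e.g. builder.Services.AddApplication(); (hypothetical helper)

await builder.Build().RunAsync();
```

The point is how little lives here: the project is a thin host, and everything of value comes from the Application and Infrastructure projects both interfaces share.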
Configuration
My project was written in .NET with centralized configuration in the API project. In .NET, the configuration file (appsettings.json) gets loaded by the API project, which also orchestrates service dependency injection. After adding the MCP project to the solution, we need to figure out how to centralize configuration—duplicating it is a non-starter.
For this project, I added a new shared project to the solution to centralize and share resources, including the appsettings:
IntegrationProject/
├── IntegrationProject.API/
├── IntegrationProject.MCP/
├── IntegrationProject.Application/
├── IntegrationProject.Domain/
├── IntegrationProject.Infrastructure/
├── IntegrationProject.Shared/ # <--- Shared configurations
└── IntegrationProject.Test/

The configuration file also needs to be split up: not all configuration applies to the core project elements; some settings are API-specific, while others are dedicated to the MCP project. We end up with two appsettings files: one centralized and one project-specific. If a third project comes along tomorrow (say, GraphQL), it'll also have its own base appsettings plus the shared one.
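As an illustration of the split (all keys here are made up), the shared file holds the cross-cutting settings while each project keeps only its own:

```json
{
  "_comment": "IntegrationProject.Shared/Configurations/appsettings.Shared.Development.json",
  "ConnectionStrings": { "Default": "Host=localhost;Database=integration" },
  "Cache": { "Redis": "localhost:6379" }
}
```

```json
{
  "_comment": "IntegrationProject.API/appsettings.Development.json",
  "Cors": { "AllowedOrigins": [ "http://localhost:3000" ] }
}
```

Because the project-specific file is loaded after the shared one, any key it repeats wins, which is exactly the override behavior we want.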
How do we load the configuration file? In development, we load both appsettings files:
// Build the path of the shared project folder
var sharedFolder = Path.Combine(Directory.GetCurrentDirectory(), "..", "IntegrationProject.Shared");

builder.Configuration
#if DEBUG
    // Shared settings first
    .AddJsonFile(Path.Combine(sharedFolder, "Configurations", "appsettings.Shared.Development.json"), optional: false, reloadOnChange: true)
    // Override with API-specific settings
    .AddJsonFile("appsettings.Development.json", optional: false, reloadOnChange: true)
#elif RELEASE
    //[...]
#endif
;

In production, we instead specify the path to a dedicated configuration file, passed as a parameter to our container via an environment variable:
#if RELEASE
if (string.IsNullOrEmpty(appsettingPath))
{
    throw new ArgumentException("Appsettings path missing! You must provide it with the --settingsPath argument");
}
#endif

#if DEBUG
    //[...]
#elif RELEASE
    .AddJsonFile(appsettingPath, optional: false, reloadOnChange: true)
#endif
;

Secrets
For managing keys and sensitive information, this project used Visual Studio-managed secrets during development and environment variables in production.
In this specific case, I had to duplicate the secrets since they’re managed at the project level, not the solution level. If this were a corporate project, I wouldn’t handle it this way—I’d probably use a KeyVault service instead.
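Under the hood, those Visual Studio secrets are the standard .NET user-secrets store, so duplicating them per project boils down to repeating the same CLI commands against each csproj (the key name here is just an example):

```shell
# Each project gets its own secrets store: project-level, not solution-level
dotnet user-secrets init --project IntegrationProject.API
dotnet user-secrets set "Redis:Password" "dev-only-value" --project IntegrationProject.API

dotnet user-secrets init --project IntegrationProject.MCP
dotnet user-secrets set "Redis:Password" "dev-only-value" --project IntegrationProject.MCP
```

It's mildly annoying, which is exactly why a centralized vault makes more sense once the project grows beyond a HomeLab.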
Docker
Before implementing the MCP project, the REST APIs were already published as a Docker container with the image pushed to my Docker Hub.
What changes with the MCP server? Not much—the new project gets its own dedicated Dockerfile for building the image.
For this project, I created 3 different docker-compose files:
- One to start both projects (MCP + API)
- One to start just the APIs
- One to start just the MCP server
IntegrationProject/
├── IntegrationProject.API/
│   ├── Dockerfile           # Build API container
│   └── docker-compose.yml   # Run only the API container
├── IntegrationProject.MCP/
│   ├── Dockerfile           # Build MCP server container
│   └── docker-compose.yml   # Run only the MCP server container
├── IntegrationProject.Application/
├── IntegrationProject.Domain/
├── IntegrationProject.Infrastructure/
├── IntegrationProject.Shared/
├── IntegrationProject.Test/
└── docker-compose.yml       # Run API + MCP server

The only thing to watch out for is adding the "stdin_open: true" and "tty: false" parameters in the docker-compose for the MCP server container, as they're required for MCP client communication.
services:
  the-integration-project-mcp:
    image: <image-name-mcp>:latest
    # [...]
    stdin_open: true
    tty: false
    # [...]

Cache
In an enterprise scenario, this project’s architecture would evolve further into a microservices system: one for REST APIs, one for the MCP server, and one with business logic for each domain area.
That was overkill for my HomeLab API service, but I didn’t like having both systems completely isolated without sharing any resources. The API service already used HybridCache (check out my dedicated article if you haven’t!) to store some information, so I thought: why not add a second-level cache?
The HybridCache library lets us manage a first-level cache (in-memory) and optionally a second-level one (like Redis). So I added a Redis container instance to the docker-compose to centralize caching across both services and prevent redundant reads from the source.
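Wiring the second level in (in both the API and MCP projects) is a couple of service registrations; a sketch, assuming a standard host builder and the Microsoft.Extensions.Caching.Hybrid and Microsoft.Extensions.Caching.StackExchangeRedis packages, with an example connection string:

```csharp
using Microsoft.Extensions.DependencyInjection;

// L1: in-process memory cache, managed by HybridCache itself
builder.Services.AddHybridCache();

// L2: Redis as the distributed backing store; HybridCache uses
// whatever IDistributedCache is registered as its second level
builder.Services.AddStackExchangeRedisCache(options =>
{
    // "redis" resolves via the docker-compose service name on the shared network
    options.Configuration = "redis:6379";
});
```

With this in place, a value cached by the API after an expensive read is immediately visible to the MCP server (and vice versa), without either project knowing the other exists.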

In the docker-compose (the one at the project root for starting both the MCP server and API together) I added Redis and a dedicated network for inter-container communication:
name: the-integration-project
services:
  ###################################
  # Rest API
  ###################################
  the-integration-project-api:
    image: <image-name-api>:latest
    # [...]
    depends_on:
      - redis
    networks:
      - the-integration-project-net

  ###################################
  # MCP Server
  ###################################
  the-integration-project-mcp:
    image: <image-name-mcp>:latest
    # [...]
    stdin_open: true
    tty: false
    depends_on:
      - redis
    networks:
      - the-integration-project-net

  ###################################
  # Shared cache
  ###################################
  redis:
    image: "redis:alpine"
    # [...]
    networks:
      - the-integration-project-net

###################################
# Network
###################################
networks:
  the-integration-project-net:
    driver: bridge

Versioning
Every time new code gets merged into the repository’s master branch, a DevOps pipeline automatically builds the Docker image with a new version and pushes it to my Docker Hub.
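As a sketch of such a pipeline (my actual setup may differ; image names and secret names are placeholders), a GitHub Actions workflow building and pushing the MCP image on every merge to master could look like this:

```yaml
name: build-and-push
on:
  push:
    branches: [ master ]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Docker Hub credentials stored as repository secrets
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Build from the MCP project's Dockerfile and tag with the run number
      - uses: docker/build-push-action@v6
        with:
          context: .
          file: IntegrationProject.MCP/Dockerfile
          push: true
          tags: <docker-hub-user>/<image-name-mcp>:${{ github.run_number }}
```

A twin job (or a matrix) does the same for the API image, so both containers are always versioned in lockstep with the repository.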
Integrations
As mentioned earlier, MCP servers communicate with clients via the JSON-RPC 2.0 protocol, and this works even when the solution runs in a container. Docker handles starting the solution while tools like n8n or Claude Desktop connect and start consuming tools, resources, and other functionality.
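As a quick preview, hooking a dockerized MCP server up to Claude Desktop is purely a matter of configuration: the client spawns the container itself and talks JSON-RPC over its stdin/stdout. An entry in claude_desktop_config.json could look like this (the image name is a placeholder; -i keeps stdin open, matching the stdin_open setting above):

```json
{
  "mcpServers": {
    "integration-project": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "<image-name-mcp>:latest"]
    }
  }
}
```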
I’ll be publishing a dedicated article in the next few days on how to integrate a dockerized MCP server with Claude Desktop.
Development time
The project I migrated exposed about ten REST endpoints, and in 3-4 hours I:
- Created the new MCP project
- Wrote the tools with method and parameter descriptions
- Updated the documentation
- Split the appsettings between shared and project-specific configurations
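"Tools with method and parameter descriptions" translates, in the C# MCP SDK, into attribute-decorated methods; one of mine might look roughly like this (names and behavior are illustrative, not the real HomeLab code):

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public static class IrrigationTools
{
    [McpServerTool, Description("Starts the garden irrigation for the given number of minutes.")]
    public static string StartIrrigation(
        [Description("Duration of the watering cycle, in minutes.")] int minutes)
    {
        // In the real project this delegates to the same Application-layer
        // service the REST endpoint uses; here we just echo the action
        return $"Irrigation started for {minutes} minutes";
    }
}
```

Those Description attributes are what the LLM reads to decide when and how to call the tool, which is why writing them carefully took a good chunk of those 3-4 hours.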
How long would this have taken in a project with the same endpoints but no clear structure and scattered, disorganized business logic? I’ll let you answer that one.
Conclusions
Adding MCP functionality to our projects will become increasingly necessary as AI continues its advance (it's already embedded in many tools and devices we use daily). I use the example project from this article at home, connected to an LLM. This way I've extended the model's capabilities with specific functions that I'd otherwise have to handle manually. Now I can talk to my "Jarvis" and ask it to perform specific actions (like activating the irrigation, controlling the house lights, etc.).
This is where project architecture becomes critical. Building an application “quickly” without proper structure and with poor architecture will come back to bite you when it’s time to extend functionality and evolve the product.
And let’s be clear—this applies way beyond just this context. As developers, we build “black box” products. End users will never see the code or appreciate how well it’s written. But developing with solid architecture and longevity in mind allows your product to adapt to our constantly evolving world in the shortest time possible.
Thanks for reading, catch you next time!