
Remember when APIs were the backbone of everything? They held the core business logic, acted as middleware for databases and file systems, and were the connective tissue of modern applications. We spent years perfecting them: stabilizing interfaces, versioning carefully, and building robust auth systems to protect them. With the advent of MCP servers, that connective tissue is being rebuilt around AI.
The Model Context Protocol (MCP) gives AI applications a standardized way to discover and call tools and to pull in the context and data they need, instead of every team hand-rolling its own integrations.
In the early days, most MCP implementations were built as local servers communicating over STDIO (standard input/output). Local MCP servers rely on users installing the latest package whenever the server team updates or upgrades the server's capabilities.
As AI-powered applications have evolved, teams are extending MCP servers beyond the local machine to support distributed, agent-based systems. Remote, HTTP-based MCP servers open up new possibilities: triggering actions in third-party APIs, automating workflows, and more.
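To make the transports concrete, here is a rough sketch of a minimal MCP server, assuming the official TypeScript SDK (@modelcontextprotocol/sdk) roughly as documented; the get_weather tool and its stubbed response are purely illustrative. Locally it speaks STDIO; a remote deployment would swap in the SDK's HTTP transport behind a web server.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A minimal local MCP server exposing a single, stubbed weather tool.
const server = new McpServer({ name: "weather", version: "1.0.0" });

server.tool(
  "get_weather",
  { city: z.string() },
  async ({ city }) => ({
    content: [{ type: "text", text: `Forecast for ${city}: sunny (stub)` }],
  })
);

// Local servers talk to the client over standard input/output.
// A remote deployment would swap this transport for the SDK's HTTP transport
// and sit behind a web server instead.
const transport = new StdioServerTransport();
await server.connect(transport);
```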
This transition, however, introduces significant security challenges. Unlike their predecessors operating within protected network boundaries, remote MCP servers are exposed to potential threats across networks. That exposure demands robust security measures, particularly around authentication and authorization. Resource servers, such as MCP servers, are responsible for enforcing access control and protecting sensitive business logic. They can advertise their metadata, including authorization server information, via a resource metadata URL, which helps clients discover server capabilities and facilitates secure OAuth flows.
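As an illustration of that discovery surface, here is a hedged sketch of an MCP server publishing OAuth Protected Resource Metadata (RFC 9728) with Express. The hostnames and scope names are placeholders, not a prescribed layout.

```typescript
import express from "express";

const app = express();

// OAuth Protected Resource Metadata (RFC 9728): the MCP server advertises
// which authorization server protects it, so clients can discover it.
app.get("/.well-known/oauth-protected-resource", (_req, res) => {
  res.json({
    resource: "https://mcp.example.com",                 // this MCP server (placeholder)
    authorization_servers: ["https://auth.example.com"], // who issues its tokens (placeholder)
    scopes_supported: ["mcp:exec:functions.weather", "mcp:read:models"],
  });
});

app.listen(3000);
```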
MCP servers are becoming the de facto standard for AI agent workflows. But there’s one glaring problem:
Almost all MCP servers are being shipped completely naked.
No auth. No identity layer. No idea who's calling them, what they're allowed to do, or how long they should be able to do it for. Authorization server discovery addresses part of this: it gives MCP clients a standard way to locate the right authorization server and kick off a proper OAuth flow.
If your MCP server is callable from an AI agent or a remote workflow…and there’s no authorization layer in front of it? That’s not just an oversight. That’s a security hole.
The MCP specification was updated in March 2025 to mandate OAuth as the mechanism for accessing remote MCP servers. Authorization server discovery is a key part of that mandate: a client has to be able to find the correct authorization server before it can obtain a token and talk to the server securely.
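One plausible shape of that discovery flow, sketched from the client's point of view: follow the 401 challenge to the resource metadata, then to the authorization server's own metadata (RFC 8414). The header format and well-known paths here are assumptions based on those RFCs, not a drop-in client.

```typescript
// Hypothetical client-side discovery: walk from the MCP server's 401 challenge
// to the authorization server's metadata.
async function discoverAuthServer(mcpUrl: string) {
  // 1. An unauthenticated request is rejected with a challenge.
  const challenge = await fetch(mcpUrl, { method: "POST" });
  if (challenge.status !== 401) throw new Error("expected a 401 challenge");

  // 2. The WWW-Authenticate header points at the resource metadata document.
  const header = challenge.headers.get("www-authenticate") ?? "";
  const match = header.match(/resource_metadata="([^"]+)"/);
  if (!match) throw new Error("no resource_metadata in challenge");

  // 3. The resource metadata names the authorization server(s).
  const resourceMeta = await (await fetch(match[1])).json();
  const authServer: string = resourceMeta.authorization_servers[0];

  // 4. The authorization server's own metadata holds the OAuth endpoints.
  const authMeta = await (
    await fetch(new URL("/.well-known/oauth-authorization-server", authServer))
  ).json();

  return {
    authorizationEndpoint: authMeta.authorization_endpoint,
    tokenEndpoint: authMeta.token_endpoint,
  };
}
```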
On the server side, remote MCP servers must enforce authorization so that only authenticated actors can reach sensitive tools and data. On the client side, MCP clients must be able to discover and interact with the authorization server to complete the OAuth flow. Get token validation or the integration with an external authorization server wrong, and you end up accepting invalid access tokens or introducing real security vulnerabilities.
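On the server side, token validation might look roughly like the Express middleware below, using the jose library to verify JWT access tokens against the authorization server's JWKS. The issuer, audience, and JWKS path are placeholder assumptions; in practice you would read them from the authorization server's metadata.

```typescript
import express from "express";
import { createRemoteJWKSet, jwtVerify } from "jose";

const app = express();

// Placeholder values; in practice these come from your authorization server's
// configuration (e.g. its metadata document and jwks_uri).
const ISSUER = "https://auth.example.com";
const AUDIENCE = "https://mcp.example.com";
const jwks = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`));

// Reject any MCP request that does not carry a valid access token.
app.use(async (req, res, next) => {
  const token = req.headers.authorization?.replace(/^Bearer /, "");
  if (!token) {
    // Point clients at the resource metadata so they can start the OAuth flow.
    res
      .status(401)
      .set(
        "WWW-Authenticate",
        'Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"'
      )
      .end();
    return;
  }
  try {
    const { payload } = await jwtVerify(token, jwks, {
      issuer: ISSUER,
      audience: AUDIENCE,
    });
    // Stash granted scopes for downstream handlers to check.
    (req as any).scopes = String(payload.scope ?? "").split(" ");
    next();
  } catch {
    res.status(401).json({ error: "invalid_token" });
  }
});
```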
Let's explore what implementing OAuth for MCP looks like in practice.
This means that if you are building a remote MCP server, you need an OAuth 2.1-based authorization server that is responsible for minting tokens and ensuring that only authorized actors can access the MCP server.
The practical approach is to separate your concerns: a dedicated authorization server handles identity, consent, and token issuance, while the MCP server focuses on validating tokens and serving requests.
Think of it like this: the authorization server is your nightclub bouncer; it checks IDs and issues wristbands (access tokens) to authorized clients. The MCP server is the venue; it only admits people with the right wristband.

Implementing scopes in the OAuth flow gives you critical control. During the OAuth consent process, users are presented with the requested scopes so they can review and understand the permissions being asked for before granting access:
mcp:exec:functions.weather: Can only call weather function
mcp:exec:functions.*: Can call any function
mcp:read:models: Can only read model information
Scopes give you granular permission management: each client is granted only what it needs, following the principle of least privilege. Without scopes, you're essentially giving all-or-nothing access to your entire MCP server, and by extension to every backend system it can reach.
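A small, hypothetical helper shows what enforcing those scopes inside a tool handler could look like. The scope strings mirror the examples above; how granted scopes reach the handler depends on your middleware.

```typescript
// Throw if the caller's granted scopes do not cover the required one.
// Treats the wildcard scope as covering any function call.
function requireScope(grantedScopes: string[], required: string): void {
  const allowed = grantedScopes.some(
    (s) => s === required || s === "mcp:exec:functions.*"
  );
  if (!allowed) {
    throw new Error(`missing required scope: ${required}`);
  }
}

// e.g. at the top of the weather tool handler:
// requireScope(request.scopes, "mcp:exec:functions.weather");
```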
Scope-level consent also improves both security and user experience, ensuring that users or enterprises approve only the access that is actually necessary.
Here’s what you’ll need to implement (and what to watch out for):
An OAuth 2.1 authorization server, or an integration with an existing one, to authenticate clients and mint tokens
Protected resource metadata, so clients can discover which authorization server protects your MCP server
Token validation on every request, including issuer, audience, and expiry checks
Scopes and consent mapped to least-privilege permissions
The usual pitfalls are sloppy token validation and all-or-nothing access.
The good news? You don't need to reinvent this wheel. At Scalekit, we are launching a drop-in OAuth authorization server that attaches to your MCP server without major rewrites or migrations.
Scalekit provides turnkey auth infrastructure for MCP servers. Implementation takes minutes, not weeks. (We actually built our own MCP server.)
Enterprise teams are rolling out MCPs into production pipelines—and the attack surface is expanding fast.
Stop shipping naked MCP servers. Check out Scalekit's MCP auth now.