Literally nothing here is specific to MCP - it all has to do with the fact that Chrome extensions can make HTTP connections to localhost ports, which could be running any kind of server. This is not an unrestricted backdoor either - Chrome extensions already need permissions in the manifest to talk to localhost, except via content scripts, which run in the context of the website, so the same requests could be made by the website itself without any extension installed.
fluffet 22 hours ago [-]
I take away that the combination is the problem. Bleach and ammonia aren't so bad on their own, but mixing the two is not a good idea. MCP would provide crazy attack vectors.
Especially if you could ask another AI "I have access to an MCP running on a victim's computer with these tools. What can you do with them?" => "Well, start by reading .ssh/id_rsa, and I'd look for any crypto wallets. Then you can move on to reading personal files for blackmail, or sniff passwords..." and just let it "do its thing" as an attacking agent. The fact that it could be fully automated is what creeps me out!
eMPee584 21 hours ago [-]
Don't you give THEM ideas!
im3w1l 20 hours ago [-]
My intuition tells me that blackmailing at scale has the potential to be quite terrifying if you ask for favors that each seem innocent enough on their own. E.g. one favor may be as simple as asking the guy walking his dog to delay it for half an hour. He will surely comply without hesitation. But the hidden reason was that he would otherwise have witnessed a murder.
rickandmortyy 13 hours ago [-]
[dead]
kypro 22 hours ago [-]
Yeah, that's exactly what I took away from this too... I get why it's worth noting MCP servers in the article since these could provide a large attack vector, but it seems odd to focus on that as if that is the core security vulnerability here.
I guess the bit I'm more surprised about is why Chrome extensions are even allowed to make localhost connections without requesting user approval? Is the assumption that everything running locally must be safe? What am I missing here?
nightpool 22 hours ago [-]
I mean, the core security vulnerability explained here is that MCP does not expose / allow for any kind of authentication or user consent before accessing your computer's most sensitive resources, like a terminal or list of private Slack messages. Spotify, 1Password, or other services on your computer that use `localhost` do not have the same issue.
This would be a non-issue if some kind of simple origin-authenticated token exchange was built into the protocol itself.
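A minimal sketch of what that could look like (hypothetical; nothing like this is in the MCP spec today): the server mints a random token at startup, shares it only with the process that launched it, and 401s everything else. A Chrome extension blindly probing localhost ports never learns the token.

```python
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sketch, not part of the MCP spec: a per-session secret
# that the launching process would hand to legitimate clients out of band.
TOKEN = secrets.token_urlsafe(32)

class AuthedHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Drain the request body so the connection stays well-behaved.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        if self.headers.get("Authorization") != "Bearer " + TOKEN:
            self.send_response(401)  # no token, no tools
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"ok": true}')

    def log_message(self, *args):
        pass  # keep the sketch quiet

# To run: HTTPServer(("127.0.0.1", 8765), AuthedHandler).serve_forever()
```

Anything fancier (origin binding, consent prompts) could layer on top, but even this would stop the drive-by probing described in the article.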
marcus_holmes 11 hours ago [-]
It's crazy that, after all our experience with this, we're implementing another protocol that doesn't have any auth built in.
You'd think the last 30+ years of regret and hacky attempts to add auth to email and HTTP (just the top two that come to mind) hadn't happened.
maple3142 10 hours ago [-]
I think the reason is that MCP also works over a pipe (stdio), which does not need authentication.
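A toy illustration of why (plain subprocess, not the actual MCP SDK): with stdio, the client launches the server itself, so the pipe ends exist only in those two processes. There is no port for a browser extension to probe.

```python
import json
import subprocess
import sys

# The "server" here is a throwaway child script that answers one
# JSON-RPC-style message over stdin/stdout.
CHILD = (
    "import json, sys\n"
    "req = json.loads(sys.stdin.readline())\n"
    "resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': 'pong'}\n"
    "sys.stdout.write(json.dumps(resp) + '\\n')\n"
)

def ping_over_stdio():
    # Only the parent holds these pipe handles; no auth handshake is
    # needed, because nothing else on the machine can write into them.
    proc = subprocess.Popen(
        [sys.executable, "-c", CHILD],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    msg = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
    out, _ = proc.communicate(json.dumps(msg) + "\n")
    return json.loads(out)
```

The trouble starts when the same server is re-exposed over HTTP/SSE, where that implicit trust no longer holds.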
cruffle_duffle 20 hours ago [-]
How could it? The agent calling into the MCP server is the one exposing an interface to the end user. It's the agent's job to prompt the user (and both Claude Desktop and Cursor do).
It's the "system administrator"'s job to make sure the MCP server is running at the right privilege level with the correct data-access levels. The MCP server can't stop somebody from running it as root, any more than any other program can.
At the end of the day, the MCP server should be treated as an extension of the user: whatever the user can do, so can the MCP server. (This isn't strictly true; you can run the MCP server under its own account or inside a sandbox, and that will probably start to happen soon enough.)
Dylan16807 14 hours ago [-]
The problem isn't the permissions the MCP has, it's about whose orders it obeys.
Many other programs on the system aren't an extension of the user. And they can access ports.
How could it do authentication? Easily. The most basic option is for the server to put a secret token in your user folder, so only code with access to that token can talk to it.
On Linux it can be even simpler. Don't attach the server to a port, attach it to a socket file.
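To make the socket-file idea concrete, a sketch (Linux/macOS only; the names are mine): bind to an AF_UNIX socket with owner-only permissions and ordinary filesystem access control does the gatekeeping. Browser pages and extensions can't speak to a unix socket at all.

```python
import os
import socket
import stat

def make_private_socket(path):
    """Listen on a unix domain socket only the owning user can reach."""
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(path)
    # 0o600: the owning user can connect; other local users (and anything
    # that can only speak HTTP to ports, like a browser) cannot.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    server.listen(1)
    return server
```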
fluffet 23 hours ago [-]
Woah, I had no idea. Thanks for the article.
I feel like we've come full circle here...
The first protocols of the internet were very naive. Why would you need to encrypt traffic? What do you mean, exploit DNS? Why would anyone do that?
Then people realised that the internet is a really, really wild place and that won't do.
I suddenly feel old, because this new AI tool era seems to have forgotten that lesson.
I feel it's like watching crypto learn, by any% speedrun, why regulations and oversight might be a good idea in the first place (FTX and such).
I hope the next generation of AI tech/protocols is more robust. Trust alone just doesn't cut it, or we'll see plenty of fingers burnt on the stove.
dowager_dan99 22 hours ago [-]
I did a presentation on AI agents from the perspective of an AI newbie, and one of my conclusions was that it felt like releasing a browser from 2000 into the middle of today's scary 2025 environment. MCP and similar are missing 20+ years of responding to new and emerging threats, and the hype men (executives everywhere) don't realize it, don't care, or lack the ability to respond.
outworlder 13 hours ago [-]
Is the presentation public?
esafak 22 hours ago [-]
It's a new technology so it is understandable that practitioners are not aware of the security best practices, like https://genai.owasp.org/
Also, the security tooling is still nascent.
deadbabe 22 hours ago [-]
In early days it's always best to push security risk onto users in a bid to gain as much market share as possible. By the time they realize they've been screwed, technology will have matured and you can hand wave those old criticisms away, and even trumpet them as new innovations and upgrades.
npace12 24 hours ago [-]
I built little-rat, a Chrome extension that can track and block traffic from other extensions, a couple of years ago: https://github.com/dnakov/little-rat
Wow thanks for building this! Any idea the effort it would take for someone to port this to Firefox?
npace12 21 hours ago [-]
it's not possible in firefox, that traffic is not visible (at least as of the last time I tried 1.5 years ago)
euazOn 23 hours ago [-]
Hey, thanks for that, Anon Kode, Anon Codex and other projects, very cool!
npace12 23 hours ago [-]
also check out the claude-mcp extension, very much related to this post :)
bhelx 21 hours ago [-]
This is the first I've heard of people using the SSE transport locally. What purpose would that serve? Is this by design, because the Chrome extension couldn't talk to it otherwise?
BTW, you should really run your MCP servers in a sandboxed environment, especially if they don't need to do things like `exec` or read from the filesystem. We do this with the https://mcp.run ecosystem by wrapping them in wasm. Because they are wasm, you could also run them right in the Chrome extension!
20 hours ago [-]
skybrian 11 hours ago [-]
I guess the security hole is that “allow connecting to localhost” might sound like an innocuous permission, but it becomes increasingly risky as you run more servers on local ports that have no other protection.
The permission itself doesn’t tell you anything about what powers it might grant. You need to know how all your local processes work to determine that, and most people have no idea.
It’s too generic for users to make reasonable decisions about. And that means that servers on localhost really should have authentication. Connecting client A to server B should be explicit.
zharknado 9 hours ago [-]
Great observation! The legibility of the permission grant matters a lot.
OsrsNeedsf2P 23 hours ago [-]
Lots of people think MCP is a case of "wow, how did we forget basic security", but I wonder if there were other competitors that MCP beat _because_ they had security friction.
happyopossum 13 hours ago [-]
This isn’t an MCP issue though - if you were running a webserver, or any application that listens on localhost that happens to have vulnerabilities, an extension could hit those too.
Literally nothing about MCP makes this easier or worse
fpoling 15 hours ago [-]
Any service running on localhost should reject HTTP requests that carry an Origin header, as those are generated from browser JS APIs. In addition, requests with a User-Agent header should typically also be rejected.
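The Origin half of that is a one-line check. A sketch with Python's stdlib server (a stand-in for any localhost service): browsers attach an Origin header to cross-origin fetch/XHR requests, so refusing it blocks web pages and extension content scripts while leaving curl and native clients alone.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoOriginHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Browsers add Origin to cross-origin requests; native local
        # clients normally don't send it at all.
        if self.headers.get("Origin") is not None:
            self.send_response(403)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the sketch quiet
```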
rafram 8 hours ago [-]
> In addition, requests with a User-Agent header should typically also be rejected.
No, all HTTP clients set User-Agent.
22 hours ago [-]
ttoinou 5 hours ago [-]
Sooo we just need a new standard MCPS (MCP Secure connections), right ?
babyshake 16 hours ago [-]
Is it correct that this exploit would not be possible with streamable HTTP MCP servers? I'd imagine that fairly soon every MCP server that does not need filesystem access will use this transport method unless there is some reason why STDIO/SSE would be needed instead. Can anyone confirm if this is the case and if they agree or disagree with this assessment?
olalonde 19 hours ago [-]
So do we add authentication to MCP servers or does Chrome fix this by restricting unauthorized calls to localhost?
brap 21 hours ago [-]
I still don’t understand why we even need a new protocol when we already have something like the OpenAPI spec, which can also be used to describe common authentication mechanisms like OAuth2. And it supports almost every existing API out of the box.
Granted, it doesn't distinguish between "resources", "tools" and "prompts", but I think the line is blurry anyway.
And yes it can be used locally.
cruffle_duffle 21 hours ago [-]
I think people who consider Open API to be a “competitor” to MCP haven’t really played with MCP.
MCP is a tool calling protocol. Models are trained on it as a way to do stuff outside their sandbox. OpenAPI isn’t a tool calling protocol but more of a schema to describe interfaces.
You could write an MCP that exposes an OpenAPI compatible set of interfaces, but you couldn’t write an OpenAPI thing to call… well… anything. OpenAPI doesn’t cover the actual tool calling.
In addition, even if OpenAPI would work, it's massive and contains a ton of extra "stuff" that would overwhelm the model's precious context window. Unless the OpenAPI schema was explicitly intended for LLM consumption, the results will be a mixed bag, as the LLM will have to spend half its time making sense of the schema. A well-designed MCP server might take an OpenAPI endpoint suite and wrap it in thoughtful tool calls so the LLM doesn't have to parse a giant schema doc (also, the LLM still needs to actually make the HTTP call, and guess how it does that? Through MCP, of course!)
By contrast, MCP tools expose a slender, LLM-optimized interface that requires little "thought" to call.
Honestly though, comparing OpenAPI to MCP is a bit like comparing an XML schema to curl. They are completely different. MCP is for tool calling: it's how you expose… well… anything, from calling into your shell to looking something up in your database. The only similarity is that MCP exposes a schema to the model to tell it what kinds of tool calls it can make. And if you read the spec, I'd imagine said schema looks a wee bit like OpenAPI (I wouldn't know, as I haven't looked).
Seriously. Go write an MCP server for something you think would be cool. Like one for Claude that connects to your logging and lets Claude search the logs in a more structured way. Make something like `find_request(request_id)`, then let your code do all the searching and have it return the relevant logs. Watch as the model doesn't have to spend a billion tokens figuring out your database schema, how to grep, etc. Good MCPs do all the grunt work so the LLM can focus on your task instead of spending tons of time bootstrapping. The entire exercise won't even take half a day, and you'll have yourself a cool new tool that saves you time.
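For a feel of the shape (a plain-Python sketch, not the real MCP SDK; the schema field names mimic MCP-style tool descriptions, and the in-memory "logs" are fake), the model only ever sees a small schema and the filtered result, never the raw log store:

```python
# Fake in-memory log store standing in for a real logging backend.
LOGS = [
    {"request_id": "abc123", "level": "ERROR", "msg": "upstream timeout"},
    {"request_id": "abc123", "level": "INFO", "msg": "retrying"},
    {"request_id": "def456", "level": "INFO", "msg": "ok"},
]

# What the model sees: a tiny description, not a giant OpenAPI document.
FIND_REQUEST_TOOL = {
    "name": "find_request",
    "description": "Return all log lines for a given request id.",
    "inputSchema": {
        "type": "object",
        "properties": {"request_id": {"type": "string"}},
        "required": ["request_id"],
    },
}

def find_request(request_id):
    # The tool does the grunt work (searching, filtering); the LLM just
    # calls it and reads back the relevant lines.
    return [line for line in LOGS if line["request_id"] == request_id]
```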
In short, MCP and OpenAPI are two entirely different concepts.
Also, credentials end up scattered in clear text inside the MCP configuration. They forgot how to do security!
rvz 22 hours ago [-]
Every time a startup uses an MCP server in their product offering, or even ships their own, I can only picture the security consultants lining up for a massive payout when an LLM causes a security incident.
T3RMINATED 5 minutes ago [-]
[dead]
gitroom 18 hours ago [-]
bruh this stuff honestly makes my head spin - feels like we're all relearning the same old security lessons