Aggregator
How Main Line Health Secures Devices With Microsegmentation
New European Emissions Regs Include Cybersecurity Rules
Automakers are generally on track to implement new EU cybersecurity requirements in tailpipe emissions regulations, rules instigated by the long shadow of Volkswagen's emissions scandal. But those new rules could clash with others intended to guarantee the right to repair.
Trump's Cyber Strategy Puts Private Sector on the Offensive
The Trump administration's national cyber strategy calls for a stronger partnership between the federal government and private companies, heralding a shift in the ways private enterprise could participate in offensive operations against nation-state adversaries, ransomware gangs and cybercriminals.
ISMG Editors: Iran Conflict Expands Into Cyber Warfare
In this week's panel, four ISMG editors discuss the cyber activity tied to the U.S.-Israel-Iran conflict, the Pentagon's standoff with AI firm Anthropic and a new report that reveals how document fraud reflects deeper weaknesses in verification systems.
Bold Launches With $40M to Target AI Risks on Endpoints
Bold Security exited stealth with $40 million to build an endpoint platform for the artificial intelligence era. CEO Nati Hazut said companies can no longer rely on older controls as employees and AI agents access data locally, creating new blind spots around apps, files and device activity.
D3 Morpheus for Your Microsoft Security Environment
You have Sentinel. You have Defender. Here is what fills the investigation gap between detection and autonomous resolution.
The post D3 Morpheus for Your Microsoft Security Environment appeared first on D3 Security.
CVE-2026-32635 | angular up to 19.2.19/20.3.17/21.2.3 cross site scripting
CVE-2026-32732 | leanprover vscode-lean4 up to 0.1.x cross site scripting (EUVD-2026-12181)
CVE-2026-32617 | Mintplex-Labs anything-llm up to 1.11.1 HTTP Endpoint cross-domain policy
CVE-2026-32614 | emmansun gmsm up to 0.41.0 SM2/SM3/SM4/SM9/ZUC signature verification
CVE-2026-32626 | Mintplex-Labs anything-llm up to 1.11.1 PromptReply markdown.js cross site scripting
CVE-2026-32720 | ctfer-io monitoring up to 0.2.0 access control
CVE-2026-32715 | Mintplex-Labs anything-llm up to 1.11.1 Generic Endpoint authorization
CVE-2026-32707 | PX4 PX4-Autopilot up to 1.17.0-rc1 CAN tattu_can stack-based overflow (EUVD-2026-12152)
CVE-2026-32706 | PX4 PX4-Autopilot up to 1.17.0-rc1 buffer overflow (EUVD-2026-12150)
CVE-2025-48865
CVE-2026-32704 | SiYuan up to 3.6.0 Custom Attributes renderSprig improper authorization
CVE-2026-32616 | kasuganosoras Pigeon up to 1.0.200 Message HTTP_HOST injection (EUVD-2026-12133)
An AI Agent Didn’t Hack McKinsey. Its Exposed APIs Did.
This week’s McKinsey incident should be a wake-up call for every enterprise moving fast to deploy AI.
Not because AI itself is inherently insecure.
But because too many organizations are still thinking about AI security at the model layer, while the real enterprise risk sits in the action layer: the APIs, MCP servers, internal services, and shadow integrations that AI agents can reach, invoke, and manipulate.
That is the part most companies still do not see.
The technical details matter here. Public reporting described an internal AI platform with a broad API footprint, including more than 200 documented endpoints and a set of unauthenticated APIs that could allegedly be reached externally. The same reporting described potential exposure paths to tens of millions of chat messages, hundreds of thousands of files, user accounts, and system prompts. Whether or not every possible impact was realized, the takeaway for security leaders is clear: when internal AI systems are wired into weakly governed APIs, the blast radius can become enormous very quickly.
And this is not an isolated case.
The McDonald’s AI hiring incident points to the same structural problem. Different companies. Different workflow. Same core mistake. Reporting on that case described exposed administrative access, weak authentication practices, and the potential exposure of a massive pool of applicant records. Again, the story was not just about the chatbot. It was about the application and API infrastructure around it.
That is the lesson the market needs to understand.
The real risk is not the LLM. It is what the agent can do.
A lot of the AI security market today is focused on prompts, model behavior, jailbreaks, and output controls.
Those matter.
But they are only one layer.
In the enterprise, AI agents do not create value by talking. They create value by taking action. They retrieve data, call APIs, invoke tools, access systems, trigger workflows, and increasingly operate through MCP servers and connected services.
That means the real blast radius of AI is determined by the action layer.
- If an internal API is left exposed without authentication, an agent can find it.
- If a shadow service is internet-accessible, an agent can reach it.
- If an MCP server is misconfigured, an agent can use it.
- If sensitive business logic is sitting behind undocumented or forgotten endpoints, an agent can chain those calls together at machine speed.
This is why the industry framing of “AI security” is still too narrow. The attack surface is no longer just the model. It is the full connected system around it.
The McKinsey and McDonald’s breaches are the same story
At first glance, these incidents look different. McKinsey was an internal AI platform. McDonald’s was an AI-powered hiring workflow.
But structurally they are the same. Both point to a growing enterprise reality: organizations are connecting AI systems to internal and external application infrastructure faster than they are securing that infrastructure.
And in many cases, the weakest point is not a sophisticated model exploit. It is a plain old exposed API, weak authentication, forgotten endpoint, misconfigured access control, or third-party integration that quietly became internet reachable.
That is exactly why I believe one of the most dangerous categories emerging right now is shadow APIs connected to agents.
These are internal or lightly governed APIs that were never meant to become part of an external attack surface, but once they are connected to copilots, workflows, MCP servers, browser agents, coding agents, or AI applications, they effectively become part of one.
The company still thinks of them as “internal.” The attacker does not.
The blind spot: shadow APIs plus agent connectivity
This is the gap I worry about most for enterprises today. Every company has APIs it knows about. Many also have APIs it has forgotten, never fully documented, or does not realize are externally reachable.
Now add AI. The moment an agent is connected to those systems, or an MCP server is exposed with access to them, the attack surface expands dramatically.
What used to be obscure, low traffic, and semi-internal becomes:
- Discoverable
- Callable
- Chainable
- Exploitable at machine speed
That is the shift. In the pre-agentic world, a hidden or weakly governed API might sit quietly for months or years. In the agentic world, it only needs to be reachable once.
The new security model enterprises need
If you are deploying AI, you need to stop asking only “Is the model safe?” and start asking:
- What can this agent reach?
- What APIs back this workflow?
- Which endpoints are exposed externally?
- Which MCP servers exist across the company?
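Those questions amount to an inventory plus an allowlist: an explicit record of which services and methods each agent may invoke, with everything else rejected. A minimal sketch of such a policy gate (the service and method names are hypothetical, not tied to any particular agent framework):

```python
# Allowlist policy for agent tool/API calls: the agent may only invoke
# (service, method) pairs an operator has explicitly granted.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    service: str   # e.g. "crm-api"
    method: str    # e.g. "GET /contacts"

# The operator-maintained inventory of what this agent is allowed to reach.
ALLOWED: set[tuple[str, str]] = {
    ("crm-api", "GET /contacts"),
    ("ticketing", "POST /tickets"),
}

def check_call(call: ToolCall) -> bool:
    """Default-deny: unknown services and unlisted methods are rejected."""
    return (call.service, call.method) in ALLOWED
```

The point of the sketch is the shape of the answer, not the code: if you cannot write down the `ALLOWED` set for an agent, you do not yet know what it can reach.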
The next generation of AI incidents will come from agents sitting on top of weak action layers: exposed APIs, unauthenticated services, forgotten integrations, and misconfigured MCP servers.
This is exactly why we built Salt Surface
At Salt, we have spent years helping enterprises discover and secure APIs they did not know were exposed. That problem now matters even more in the age of AI.
With Salt Surface, organizations can map their exposed API footprint, including AI-related APIs and externally reachable services, without deploying an agent or installing anything in their environment.
If you are building with AI, the first question should not be whether your prompt is protected. It should be whether your action layer is exposed.
The model is not the whole attack surface. The API layer is.
Get your free exposure scan
If you want to know whether your company has exposed APIs, AI-connected endpoints, or internet-reachable services, we will show you. No installation. No heavy lift. Just provide a domain, and you get visibility into the attack surface you need to understand now.
Request your free Salt Surface scan today.
Roey Eliyahu is the Co-Founder and CEO of Salt Security, the leader in Agentic Security.
The post An AI Agent Didn’t Hack McKinsey. Its Exposed APIs Did. appeared first on Security Boulevard.