Aggregator
石油能源危机推动向可再生能源的转型
CVE-2024-21320
CVE-2025-55184
Anvilogic’s Blueprints replaces SOAR complexity with natural language security automation
Anvilogic has launched Blueprints, a workflow automation capability that captures expert analyst practices and turns them into scalable, repeatable workflows across security teams. Instead of requiring specialized engineers to build and maintain code, Blueprints lets analysts author automation in natural language, deploy it the same day, and have it execute to automate processes across data onboarding, detection engineering, threat hunting, investigation and response. “Your best analyst, at infinite scale,” said Mackenzie Kyle, Chief Product Officer … More →
The post Anvilogic’s Blueprints replaces SOAR complexity with natural language security automation appeared first on Help Net Security.
Vorlon Survey: 99% of Organizations Got Hit by a SaaS or AI Security Incident in 2025
A survey of 500 U.S. CISOs published by Vorlon ahead of RSAC 2026 found that 99.4% of organizations experienced at least one SaaS or AI ecosystem security incident in 2025. Only three out of 500 reported zero incidents. The numbers get more uncomfortable from there. One in three enterprises dealt with a security incident involving..
The post Vorlon Survey: 99% of Organizations Got Hit by a SaaS or AI Security Incident in 2025 appeared first on Security Boulevard.
⚡ Weekly Recap: CI/CD Backdoor, FBI Buys Location Data, WhatsApp Ditches Numbers & More
Astrix advances AI agent security platform to govern shadow and enterprise agents
Astrix Security has revealed a major expansion of its AI agent security platform, covering every layer where AI agents operate in the enterprise: from managed AI platforms to shadow deployments running on managed devices, detecting both agent existence and unauthorized access to enterprise resources, and enforcing policy over what agents are allowed to do. AI governance programs are not built for the speed at which agents are being deployed. Like third-party risk committees that never … More →
The post Astrix advances AI agent security platform to govern shadow and enterprise agents appeared first on Help Net Security.
Ridge Security Brings Agentic AI Pentesting to SMBs With PurpleRidge 3.0
Ridge Security released PurpleRidge 3.0 at RSAC 2026, a self-service penetration testing platform that uses agentic AI to give small and mid-sized businesses the kind of offensive security validation that has traditionally required dedicated teams and six-figure budgets. The upgrade marks a shift from the platform’s earlier machine-learning architecture to one built on agentic AI,..
The post Ridge Security Brings Agentic AI Pentesting to SMBs With PurpleRidge 3.0 appeared first on Security Boulevard.
Straiker enables visibility and runtime protection for enterprise AI agents
Straiker has launched Discover AI and expanded Defend AI to secure coding agents, productivity agents, and custom-built agent platforms. Agents are operating across enterprise systems with broad access, growing autonomy, and zero security oversight. That’s why Straiker built Discover AI and Defend AI: to give security teams visibility into what agents are running and protection against what they might do. Coding agents like Cursor, Claude Code, and GitHub Copilot are transforming how enterprise software gets … More →
The post Straiker enables visibility and runtime protection for enterprise AI agents appeared first on Help Net Security.
RSA Launches ID Plus Sovereign Deployment for Organizations That Can’t Afford Identity Downtime
RSA opened RSAC 2026 with a new deployment model for its ID Plus identity platform, aimed squarely at government agencies, financial services firms, and critical infrastructure operators that need identity security to work even when everything else fails. RSA ID Plus Sovereign Deployment is a “deploy anywhere” identity and access management solution that gives organizations..
The post RSA Launches ID Plus Sovereign Deployment for Organizations That Can’t Afford Identity Downtime appeared first on Security Boulevard.
CVE-2026-1958 | BRI KlinikaXP Insertino/KlinikaXP hard-coded credentials (EUVD-2026-14411)
CVE-2026-31851 | Nexxt Solutions Nebula 300+ up to 12.01.01.37 Authentication Interface excessive authentication
Farming at the Edge: Where Autonomous Robots and Edge Compute Meet
Launching Cloudflare’s Gen 13 servers: trading cache for cores for 2x edge compute performance
Inside Gen 13: how we built our most powerful server yet
The hidden cost of AI speed: Unmanaged cyber risk
AI isn’t just moving fast. It’s creating new attack paths. Cyber teams must now manage vulnerabilities – and their ramifications throughout their IT environments – in AI tools deployed without enough governance guardrails. The answer for securing this new attack surface? Unified exposure management.
Key takeaways- AI as an attack vector: By connecting to core workflows and sensitive cloud data, AI models have transformed from isolated targets into active attack vehicles.
- The AI governance gap: Rapid deployment has outpaced security controls, resulting in high-risk ecosystems plagued by over-privileged access, inactive “ghost” identities, and vulnerable software supply chains.
- Unified exposure management is essential: Securing AI requires abandoning siloed security dashboards in favor of unified exposure management that maps the complex web of interactions, identities, and permissions surrounding the models.
Most organizations have focused their AI efforts on speed, prioritizing fast adoption and experimentation to boost productivity. However, a different reality is emerging for cybersecurity teams.
While AI creates rapid business value, it also introduces a new class of cyber exposure that extends far beyond large language models (LLMs) and expands the existing attack surface. The result? It’s now much harder to secure the organization’s entire IT environment.
Vulnerabilities in popular AI modelsCybersecurity teams are now fully aware that the AI tools their employees are using contain serious vulnerabilities.
Tenable Research recently uncovered seven vulnerabilities and attack techniques in OpenAI’s ChatGPT, including indirect prompt injection, persistence, evasion, bypassing of safety mechanisms, and the exfiltration of personal user information. These flaws could have allowed attackers to stealthily extract private information from users’ memories and chat history.
In a separate study, Tenable Research discovered three vulnerabilities in Google Gemini that exposed users to serious privacy risks across Cloud Assist, Search Personalization, and the Browsing Tool. Again, the significance of the discovery went beyond the specific vulnerabilities. It showed that AI itself can become the attack vector and not merely the target.
When AI systems connect to search functions, cloud logs, saved preferences, tools, and browsing services, the model can become part of the attack path to sensitive data. The risk is no longer confined to protecting the model from misuse. You must understand how threat actors can manipulate the model to reach into its surrounding environment.
Moreover, the risk does not begin and end with AI-native tools alone. Vulnerabilities in connected data platforms can also create downstream AI exposure, including the potential for data poisoning. Take Tenable Research’s recent discovery of nine novel flaws in Google Looker Studio, a popular business intelligence tool. Those vulnerabilities, which we collectively named “LeakyLooker,” could have allowed attackers to expose, alter, and delete sensitive cloud data.
AI tools are getting incorporated into orgs’ critical operationsAccording to the recent “Tenable Cloud and AI Security Risk Report 2026,” 70% of organizations now depend on AI and model context protocol packages as core components of their production cloud stack.
What’s more, Deloitte’s “State of AI in the Enterprise” report, published in January, suggests that this shift is happening fast. Employee access to AI rose by 50% in 2025, and the number of companies with 40% or more in-production AI projects is expected to double in the next six months, according to the Deloitte report.
In other words, the environments in which these risks could play out are no longer on the edge of the business as pilot programs or proof-of-concept trials. They are being embedded into live environments, connected to the core workflows, and increasingly relied upon to support daily operations.
Unfortunately, governance has not kept pace with deployment. In many organizations, AI is being operationalized faster than security teams can fully map, assess, or control the risk it introduces. It has gone from an experimental side project to mimicking the actions of a full-time employee before organizations have even set their security permissions. This gap is particularly critical in the case of agentic AI tools, which are designed to act autonomously.
The critical importance of AI governanceGoverning AI usage requires organizations to clearly define which AI services employees are allowed to use and to prevent sensitive data from being sent to external models. The following example, detected by Tenable AI Exposure, part of the Tenable One Exposure Management Platform, shows a real case of an employee attempting to access sensitive company information, followed by an attempt to exfiltrate that data through an external AI service.
This is more than a visibility problem. In some environments, it is already a problem of overprivileged identities. The “Tenable Cloud and AI Security Risk Report 2026” found that 18% of organizations have granted AI services administrative permissions that are rarely audited, creating a ready-made set of elevated privileges that attackers could exploit.
And that risk does not sit neatly within the model itself. Much of the danger lies in the surrounding infrastructure, such as the identities and permissions that support AI services behind the scenes.
More than half (52%) of non-human identities hold excessive permissions, while 37% are inactive “ghost” identities, according to the Tenable report. In AWS environments specifically, 73% of Amazon SageMaker roles and 70% of Bedrock agent roles were found to be inactive. On paper, those roles may look harmless or forgotten. In practice, they represent dormant access paths that can expand the blast radius of an incident.
This is what makes AI risk so difficult to contain. The exposure is often buried in the machine identities, inherited privileges, and back-end connections that most organizations are not closely watching.
The same pattern appears in the software supply chain. AI systems do not operate in isolation. They are built on layers of third-party code, open-source packages, external integrations, and access to providers. That creates another set of dependencies that can quietly widen risk.
In the Tenable report, we found that 86% of organizations have at least one third-party code package with a critical vulnerability, while 13% have deployed third-party packages with a known history of compromise.
Add to that the finding that 53% of organizations have granted third parties access to internal systems via external accounts with highly risky, excessive permissions, and the AI risk picture becomes clearer.
These findings show that AI exposure isn’t limited to what a model might do. It extends to the broader ecosystem, such as the code, access, and trust relationships that attackers may be able to exploit long before anyone notices.
Manage AI risk with exposure managementBecause AI security risks are often created through the interactions of AI tools with surrounding infrastructure and identities, managing these risks requires more than just siloed monitoring of AI tools. It demands a new approach, not another disconnected dashboard telling security teams that AI exists in their environment. That new approach is exposure management.
Exposure management gives CISOs the big picture. It enables their teams to proactively identify, prioritize, and close their organization's highest-risk cyber exposures — the vulnerabilities, misconfigurations, and identity weaknesses that combine to create attack paths leading to their organization's most sensitive systems and data. It gives them visibility and context across their entire attack surface — IT, OT, cloud, identities, and AI — so they can see AI security risks alongside IT, OT, cloud, and identity security risks and the ways in which they combine.
With exposure management, you’re able to not just detect the AI systems your employees are using, but also more deeply how they’re using them, and where AI workloads, agents and tools are running. Critically, exposure management also surfaces how these systems, via the interconnectedness with other assets, expand your cyber risk.
Here’s what we’ve found regarding the prevalence of AI in real-world scenarios. Among Tenable customers that had at least one security finding over the past year, 54% had AI tools in their environments. The most commonly detected tool was Google Gemini, followed by ChatGPT and Anthropic’s Claude. At least 1% of these customers had ClawdBot present in their environments, which could introduce disproportionate risk due to its access to sensitive data, and ability to take actions across systems.
But, as stated before, visibility alone is not enough, organizations need context. To effectively reduce risk, they need to understand what the workload can access, which permissions it requires, what external dependencies it introduces, and how attackers might chain those elements together into a real path to breach.
In short, the conversation should no longer focus on securing AI as a standalone innovation project. Security teams should zero in on securing the full web of relationships around it.
In this AI era, exposure does not start and end with an AI model. It sits in the identities attached to it, the code feeding it, the cloud services extending it, and the data it can quietly reach. As organizations adopt AI at full speed, cybersecurity teams must focus on detecting the exposures that come with it before attackers do.
Download the "Tenable Cloud and AI Security Risk Report 2026."