Randall Munroe’s XKCD ‘Ping’
via the comic artistry and dry wit of Randall Munroe, creator of XKCD
The post Randall Munroe’s XKCD ‘Ping’ appeared first on Security Boulevard.
The percentage of companies choosing to pay ransoms dropped significantly, while threat actors shifted their tactics in response to shrinking profits.
The post Insider Threats Loom while Ransom Payment Rates Plummet appeared first on Security Boulevard.
Authors, Creators & Presenters (Papers):
- "Understanding reCAPTCHAv2 via a Large-Scale Live User Study" by Andrew Searles, Renascence Tarafder Prapty, and Gene Tsudik (University of California, Irvine)
- "Modeling End-User Affective Discomfort With Mobile App Permissions Across Physical Contexts" by Yuxi Wu (Georgia Institute of Technology and Northeastern University), Jacob Logas (Georgia Institute of Technology), Devansh Ponda (Georgia Institute of Technology), Julia Haines (Google), Jiaming Li (Google), Jeffrey Nichols (Apple), W. Keith Edwards (Georgia Institute of Technology), and Sauvik Das (Carnegie Mellon University)
- "Understanding Influences on SMS Phishing Detection: User Behavior, Demographics, and Message Attributes" by Daniel Timko, Daniel Hernandez Castillo, and Muhammad Lutfor Rahman (California State University San Marcos)
- "Throwaway Accounts and Moderation on Reddit" by Cheng Guo and Kelly Caine (Clemson University)
- "A Field Study to Uncover and a Tool to Support the Alert Investigation Process of Tier-1 Analysts" by Leon Kersten, Kim Beelen, Emmanuele Zambon, Chris Snijders, and Luca Allodi (Eindhoven University of Technology)
- "Security Advice on Content Filtering and Circumvention for Parents and Children as Found on YouTube and TikTok" by Ran Elgedawy, John Sadik, Anuj Gautam, Trinity Bissahoyo, Christopher Childress, Jacob Leonard, Clay Shubert, and Scott Ruoti (The University of Tennessee, Knoxville)
- "Do We Call Them That? Absolutely Not.": Juxtaposing the Academic and Practical Understanding of Privacy-Enhancing Technologies" by Alexandra Klymenko, Stephen Meisenbacher, Luca Favaro, and Florian Matthes (Technical University of Munich)
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing its creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.
The post NDSS 2025 – Symposium on Usable Security and Privacy (USEC) 2025 Afternoon, Paper Session 2 appeared first on Security Boulevard.
Paris, France, 24th October 2025, CyberNewsWire
The post Arsen Launches Smishing Simulation to Help Companies Defend Against Mobile Phishing Threats appeared first on Security Boulevard.
As organizations eagerly adopt AI, cybersecurity teams are racing to protect these new systems. In this special edition of the Cybersecurity Snapshot, we round up some of the best recent guidance on how to fend off AI attacks, and on how to safeguard your AI systems.
Key takeaways

In case you missed it, here’s fresh guidance for protecting your organization against AI-boosted attacks, and for securing your AI systems and tools.
1- OWASP: How to safeguard agentic AI apps

Agentic AI apps are all the rage because they can act autonomously without human intervention. That’s also why they present a major security challenge. If an AI app can act on its own, how do you stop it from going rogue or getting hijacked?
If you’re building or deploying these “self-driving” AI apps, take a look at OWASP’s new “Securing Agentic Applications Guide.”
Published in August, this guide gives you “practical and actionable guidance for designing, developing, and deploying secure agentic applications powered by large language models (LLMs).”
It's a guide aimed at the folks in the trenches, including developers, AI/ML engineers, security architects, and security engineers. Topics include:
It even provides examples of how to apply security principles in different agentic architectures.
For more information about agentic AI security:
Think that OWASP guide is just theoretical? Think again. In a stark example of agentic AI's potential for misuse, AI vendor Anthropic recently revealed how a sophisticated cyber crook weaponized its Claude Code product to “an unprecedented degree” in a broad extortion and data-theft campaign.
It’s a remarkable story, even by the standards of the AI world. The hacker used this agentic AI coding tool to:
The incident takes AI-assisted cyber crime to another level.
“Agentic AI has been weaponized. AI models are now being used to perform sophisticated cyberattacks, not just advise on how to carry them out,” Anthropic wrote in an August blog post.
This new breed of agentic AI abuse makes security exponentially harder because the tool is autonomous, so it adapts to defenses in real time.
(Image generated by Tenable using Google Gemini)
By the time Anthropic shut the attacker down, at least 17 organizations had been hit, including healthcare, emergency services, government, and religious groups.
Anthropic says it has since built new classifiers – automated screening tools – and detection methods to catch these attacks faster.
This incident, which Anthropic labeled “vibe hacking,” is just one of 10 real-world use cases included in Anthropic’s “Threat Intelligence Report: August 2025” that detail abuses of the company’s AI tools.
Anthropic said it hopes the report helps the broader AI security community strengthen their own defenses.
“While specific to Claude, the case studies … likely reflect consistent patterns of behaviour across all frontier AI models. Collectively, they show how threat actors are adapting their operations to exploit today’s most advanced AI capabilities,” the report reads.
For more information about AI security, check out these Tenable Research blogs:
The Anthropic attack, in which an agentic AI tool stole credentials, highlights a fundamental vulnerability: managing identities for autonomous systems. What happens when you give these autonomous AI systems the keys to your organization’s digital identities?
It’s a question that led the Cloud Security Alliance (CSA) to develop a proposal for how to better protect digital identities in agentic AI tools.
In its new paper "Agentic AI Identity and Access Management: A New Approach," published in August, the CSA argues that traditional approaches for identity and access management (IAM) fall short when applied to agentic AI systems.
“Unlike conventional IAM protocols designed for predictable human users and static applications, agentic AI systems operate autonomously, make dynamic decisions, and require fine-grained access controls that adapt in real-time,” the CSA paper reads.
Their solution? A new, adaptive IAM framework that ditches old-school, predefined roles and permissions for a continuous, context-aware approach.
The framework is built on several core principles:
The CSA’s proposed framework is built on “rich, verifiable” identities that track an AI agent’s capabilities, origins, behavior, and security posture.
Key components of the framework include an agent naming service (ANS) and a unified global session-management and policy-enforcement layer.
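A continuous, context-aware access decision of the kind the CSA proposal describes can be sketched in a few lines. This is a hypothetical illustration, not the CSA's specification: the `AgentIdentity` class, the `decide` function, the 0.7 risk threshold, and the `session_valid` flag are all invented for the example.

```python
# Hypothetical sketch of a context-aware IAM check for an AI agent.
# All names and thresholds are illustrative, not from the CSA paper.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str                       # e.g., resolvable via an agent naming service (ANS)
    capabilities: set = field(default_factory=set)
    risk_score: float = 0.0         # updated continuously from observed behavior

def decide(agent: AgentIdentity, action: str, context: dict) -> bool:
    """Grant access only if the agent declares the capability, its
    behavioral risk score is low, and the session context checks out."""
    if action not in agent.capabilities:
        return False
    if agent.risk_score > 0.7:      # arbitrary example threshold
        return False
    # Context-aware check: deny outside a valid, policy-enforced session.
    return context.get("session_valid", False)

agent = AgentIdentity("billing-agent", {"read:invoices"}, risk_score=0.2)
print(decide(agent, "read:invoices", {"session_valid": True}))    # True
print(decide(agent, "delete:invoices", {"session_valid": True}))  # False
```

The point of the sketch is that the decision is re-evaluated per request against live context and behavior, rather than granted once from a static role.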
For more information about IAM in AI systems:
While agentic AI attacks illustrate novel AI-abuse methods, attackers are also misusing conventional AI chatbots for more pedestrian purposes.
For example, as OpenAI recently disclosed, attackers have attempted to use ChatGPT to refine malware, set up command-and-control hubs, write multi-language phishing emails, and run cyber scams.
In other words, these attackers weren’t trying to use ChatGPT to create sci-fi-level super-attacks, but mostly trying to amplify their classic scams, according to OpenAI’s report “Disrupting malicious uses of AI: an update.”
“We continue to see threat actors bolt AI onto old playbooks to move faster, not gain novel offensive capability from our models,” OpenAI wrote in the report, published in October.
The report identifies several key trends among threat actors:
Incidents detailed in the report include the malicious use of ChatGPT by:
“Our public reporting, policy enforcement, and collaboration with peers aim to raise awareness of abuse while improving protections for everyday users,” OpenAI wrote in the statement “Disrupting malicious uses of AI: October 2025.”
For more information about AI security, check out these Tenable resources:
Hackers aren't the only ones using AI to code. Your own developers are, too. But the productivity gain they get from AI coding assistants can be costly if they’re not careful.
To help developers with this issue, the Open Source Security Foundation (OpenSSF) published the “Security-Focused Guide for AI Code Assistant Instructions.”
“AI code assistants are powerful tools,” reads the OpenSSF blog “New OpenSSF Guidance on AI Code Assistant Instructions.” “But they also create security risks, because the results you get depend heavily on what you ask.”
The guide, published in September, provides developers tips and best practices on how to prompt these AI helpers to reduce the risk that they’ll generate unsafe code.
Specifically, the guide aims to ensure that AI coding assistants consider:
“In practice, this means fewer vulnerabilities making it into your codebase,” reads the guide.
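One concrete illustration of the risk class the guide targets: prompted without security context, an assistant will often emit string-built SQL, while an instruction like "always use parameterized queries" steers it to the safe form. A minimal sketch (the table, data, and function names are invented for illustration; this example is not from the OpenSSF guide):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # The pattern an unguided assistant may produce: string-built SQL,
    # injectable via a crafted name such as "' OR '1'='1".
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # The pattern security-focused instructions steer toward: a
    # parameterized query, where the driver handles quoting.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_safe("alice"))          # [('admin',)]
print(find_user_safe("' OR '1'='1"))    # [] -- injection attempt neutralized
print(find_user_unsafe("' OR '1'='1"))  # [('admin',)] -- injection succeeds
```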
For more information about the cyber risks of AI coding assistants:
Finally, here’s how organizations are fighting back. They’re leaning heavily into AI to strengthen their cyber defenses, including by prioritizing the use of defensive agentic AI tools.
That’s according to PwC’s new “2026 Global Digital Trust Insights: C-suite playbook and findings” report, based on a global survey of almost 4,000 business and technology executives.
“AI’s potential for transforming cyber capabilities is clear and far-reaching,” reads a PwC article with highlights from the report, published in October.
For example, organizations are prioritizing the use of AI to enhance how they allocate cyber budgets; use managed cybersecurity services; and address cyber skills gaps.
Regarding respondents’ priorities for AI cybersecurity capabilities in the coming year, threat hunting ranked first, followed by agentic AI. Other areas include identity and access management and vulnerability scanning / vulnerability assessments.
AI security capabilities organizations will prioritize over the next 12 months
Meanwhile, organizations plan to use agentic AI primarily to bolster cloud security, data protection, and security operations in the coming year. Other agentic AI priority areas include security testing; governance, risk and compliance; and identity and access management.
“Businesses are recognising that AI agents — autonomous, goal-directed systems capable of executing tasks with limited human intervention — have enormous potential to transform their cyber programmes,” reads the report.
Beyond AI, the report also urges cyber teams to prioritize prevention over reaction. Proactive work like monitoring, assessments, testing, and training is always cheaper than the crisis-mode alternative of incident response, remediation, litigation, and fines. Yet, only 24% of organizations said they spend “significantly more” on proactive measures.
Other topics covered in the report include geopolitical risk; cyber resilience; the quantum computing threat; and the cyber skills gap.
For more information about AI data security, check out these Tenable resources:
Check back here next Friday, when we’ll share some of the best AI risk-management and governance best practices from recent months.
The post Cybersecurity Snapshot: Top Advice for Detecting and Preventing AI Attacks, and for Securing AI Systems appeared first on Security Boulevard.
Oct 24, 2025 - Alan Fagan

Quick Facts: Shadow AI Detection
- Shadow AI often hides in day-to-day tools: chatbots, plug-ins, or automation apps.
- It rarely looks like a threat; it starts as convenience.
- The signs: odd data access, unknown app traffic, missing visibility.
- FireTail helps uncover hidden AI tools and activity before problems escalate.
- The earlier you detect Shadow AI, the easier it is to keep data secure and compliance intact.

The Quiet Spread of Shadow AI

Most companies don’t notice Shadow AI until someone asks a simple question: what AI tools are we actually using? That’s when it hits. Nobody really knows. Marketing’s testing one thing, HR’s using another, and IT can’t see half of it. These aren’t rogue employees - they’re just trying to get their work done faster. But every unsanctioned tool opens a small hole in your data perimeter.

Traditional monitoring doesn’t catch it. A chatbot that lives in a browser tab looks nothing like an installable app. A plug-in that “helps summarize reports” feels harmless - until you realize it’s been quietly sending data outside your environment for months. Shadow AI hides in plain sight, and often, by the time it’s discovered, it’s already part of daily workflows.

Signs Something’s Not Right

There’s no single way Shadow AI shows up. Sometimes it’s subtle - polished reports that appear too quickly, or “AI insights” popping up from a tool you didn’t approve. Sometimes it’s a quiet uptick in outbound traffic to unfamiliar domains. You might spot employees linking personal accounts to external AI platforms, or an app suddenly requesting access it never needed before. None of these things scream “breach,” but they whisper “risk.” If you’re finding it hard to answer where and how AI is being used across departments, that’s usually your biggest clue.

Finding What You Can’t See

The only way to manage Shadow AI is to make it visible. That starts with people, not just tools. Ask your teams what AI apps they’re experimenting with. You can get a lot of insight from five honest conversations. Then look at the data itself - where it moves, who accesses it, and which systems it touches on the way out. Log reviews, API monitoring, and identity checks can reveal AI-related activity that’s otherwise invisible. The goal isn’t to punish experimentation. It’s to understand it. Once you know what’s happening, you can separate harmless productivity boosts from genuine security concerns.

Why It’s Worth Catching Shadow AI Early

Shadow AI rarely stays small. One unapproved app in one department can multiply across the business within a quarter. By the time anyone notices, data has likely travelled far beyond your control. Early Shadow AI detection helps you:
- Stop sensitive data from leaking into external AI models.
- Meet compliance expectations before auditors come knocking.
- Steer employees toward safe, approved tools.
- Keep leadership confident that innovation isn’t coming at the cost of security.

You can’t ban curiosity, but you can channel it safely - if you catch it early enough.

How FireTail Brings Shadow AI Into View

FireTail was built for this exact blind spot. Most cybersecurity systems were never designed to recognize AI behavior. They track endpoints, not models. They see software, not what the software learns. FireTail changes that. It continuously scans networks, endpoints, and cloud environments to detect AI-related traffic and usage patterns. It flags unapproved tools, highlights risky data flows, and gives security teams a complete picture of how AI is operating inside the company. With that visibility, you can approve what’s useful, block what’s risky, and stay compliant without slowing innovation. FireTail integrates with the security stack you already use, so you get AI awareness without rebuilding your entire system. Think of it as turning on the lights in a room you didn’t realize was full of open laptops.

The Balance That Actually Works

AI isn’t the enemy. Employees will keep using it, whether you sanction it or not. The smart move isn’t to ban AI - it’s to detect, monitor, and guide how it’s used. That balance of innovation and control is exactly where FireTail fits. It gives leaders the clarity they need to let AI thrive safely. Because when you can finally see what’s happening, you can manage it on your terms.
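The "look at the data itself" step above can be sketched as a simple log heuristic. This is a minimal, hypothetical example - the domain list and log format are invented, and real detection tooling is far richer:

```python
# Hypothetical shadow-AI heuristic: flag outbound requests to known AI
# service domains in a proxy log. Domain list and log lines are invented.
AI_DOMAINS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}

def flag_ai_traffic(log_lines):
    """Return (user, domain) pairs for requests that hit an AI domain.
    Assumes whitespace-separated 'user domain bytes' log records."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            hits.append((parts[0], parts[1]))
    return hits

sample = [
    "jdoe api.openai.com 5120",
    "asmith intranet.example.com 900",
    "jdoe api.anthropic.com 2048",
]
print(flag_ai_traffic(sample))
# [('jdoe', 'api.openai.com'), ('jdoe', 'api.anthropic.com')]
```

A static allowlist like this only catches known services; the value of dedicated tooling is keeping that picture current and tied to identity and data flows.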
The post How to Detect Shadow AI in Your Organization – FireTail Blog appeared first on Security Boulevard.
Web applications are integral to modern business and online operations, but they can be vulnerable to security threats. Cross-Site Scripting (XSS) is a common vulnerability where attackers inject malicious scripts into trusted websites, compromising user data and website integrity. At StrongBox IT, we help organizations identify and mitigate such vulnerabilities, focusing on detecting and preventing […]
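A minimal illustration of the core mitigation class for XSS, contextual output encoding: `html.escape` is Python's standard-library escaper, and the payload is a textbook example (the `render_comment` helper is invented for illustration).

```python
from html import escape

def render_comment(user_input: str) -> str:
    # Escaping at output time turns markup characters into HTML entities,
    # so an injected <script> tag renders as inert text instead of executing.
    return f"<p>{escape(user_input)}</p>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

Escaping must match the output context (HTML body vs. attribute vs. JavaScript); this sketch covers only the HTML-body case.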
The post Cross Site Scripting first appeared on StrongBox IT.
The post Cross Site Scripting appeared first on Security Boulevard.
Not too long ago, the shimmering perimeter of enterprise networks was seen as an impregnable citadel, manned by fortresses of firewalls, bastions of secure gateways, and sentinels of intrusion prevention. Yet, in the cruel irony of our digital age, these sentinels themselves are now being subverted. When Defenses Become the Weapon Since the beginning of […]
The post The Enterprise Edge is Under Siege appeared first on ColorTokens.
The post The Enterprise Edge is Under Siege appeared first on Security Boulevard.
Discover top email deliverability solutions that help you improve inbox placement, monitor sender reputation, and fix authentication issues with tools like PowerDMARC.
The post Top Email Deliverability Solutions for Better Inbox Placement in 2025 appeared first on Security Boulevard.
Sharing ownership makes security stronger across the company. The CISO still sets the standards and executes the core program, but business owners, product teams, IT,...
The post Cybersecurity Accountability: Why CISOs Must Share Ownership Across the Enterprise appeared first on Strobes Security.
The post Cybersecurity Accountability: Why CISOs Must Share Ownership Across the Enterprise appeared first on Security Boulevard.
Explore essential factors for successful SSO implementation, including security, user experience, and integration. Guide for CTOs and engineering VPs.
The post Key Considerations for Implementing Single Sign-On Solutions appeared first on Security Boulevard.
In a significant development in one of the year’s largest fintech breaches, new reports released today confirm that Prosper Marketplace, the San Francisco–based peer-to-peer lending platform, suffered a data compromise affecting roughly 17.6 million people. The updated figure, first published by TechRadar and Tom’s Guide, sheds light on the scale of the incident and reveals […]
The post Prosper Marketplace Data Breach Expands: 17.6 Million Users Impacted in Database Intrusion appeared first on Centraleyes.
The post Prosper Marketplace Data Breach Expands: 17.6 Million Users Impacted in Database Intrusion appeared first on Security Boulevard.
Key Takeaways Strong governance depends on current, coherent, and well-implemented policies. They define how decisions are made, risks are managed, and accountability is enforced. Yet, policy management remains one of the least mature governance functions. Modern governance calls for a continuous, system-level approach to policy management that mirrors the way organizations manage other critical processes: […]
The post Blog: From Review to Rollout: Effective Strategies for Updating Policies and Procedures appeared first on Centraleyes.
The post Blog: From Review to Rollout: Effective Strategies for Updating Policies and Procedures appeared first on Security Boulevard.
The post What is an Autonomous SOC? The Future of Security Operations Centers appeared first on AI Security Automation.
The post What is an Autonomous SOC? The Future of Security Operations Centers appeared first on Security Boulevard.
PALO ALTO, Calif., Oct. 23, 2025, CyberNewswire: SquareX released critical research exposing a new class of attack targeting AI browsers.
The AI Sidebar Spoofing attack leverages malicious browser extensions to impersonate trusted AI sidebar interfaces, tricking … (more…)
The post News Alert: SquareX reveals new browser threat — AI sidebars cloned to exploit user trust first appeared on The Last Watchdog.
The post News Alert: SquareX reveals new browser threat — AI sidebars cloned to exploit user trust appeared first on Security Boulevard.
How Can Organizations Fortify Their Cybersecurity with Non-Human Identities? Where automation is ubiquitous, how can organizations ensure their systems remain secure against sophisticated threats? The answer lies in managing Non-Human Identities (NHIs) effectively. While digital ecosystems expand, the security of machine identities becomes a critical consideration for cybersecurity professionals, especially for organizations with robust cloud […]
The post Capable Defenses Against Advanced Threats appeared first on Entro.
The post Capable Defenses Against Advanced Threats appeared first on Security Boulevard.
Are Your Cybersecurity Investments Justified? Where organizations increasingly shift to cloud computing, the debate over justified spending on cybersecurity has never been more pertinent. With the rise of Non-Human Identities (NHIs) and Secrets Security Management, many companies are re-evaluating how they protect their digital assets. NHIs, often seen as machine identities in cybersecurity, represent unique […]
The post Justify Your Investment in Cybersecurity appeared first on Entro.
The post Justify Your Investment in Cybersecurity appeared first on Security Boulevard.
Security Information and Event Management (SIEM) has long been the backbone of enterprise security operations—centralizing log collection, enabling investigation, and supporting compliance. But traditional SIEM deployments are often expensive, noisy, and slow to deliver value. They rely heavily on manual rule-writing, produce overwhelming volumes of alerts, and demand teams of specialists to tune, triage, and
The post SIEM Solutions appeared first on Seceon Inc.
The post SIEM Solutions appeared first on Security Boulevard.
Learn how AI agents are redefining online fraud in 2025. Explore the 6 key takeaways from the Loyalty Security Alliance’s “Rise of AI Fraud” webinar.
The post 6 Takeaways from “The Rise of AI Fraud” Webinar: How AI Agents Are Rewriting Fraud Defense in 2025 appeared first on Security Boulevard.
Authors, Creators & Presenters (Papers):
- "Vision: Retiring Scenarios -- Enabling Ecologically Valid Measurement in Phishing Detection Research with PhishyMailbox" by Oliver D. Reithmaier (Leibniz University Hannover), Thorsten Thiel (Atmina Solutions), Anne Vonderheide (Leibniz University Hannover), and Markus Dürmuth (Leibniz University Hannover)
- "Vision: Towards True User-Centric Design for Digital Identity Wallets" by Yorick Last and Patricia Arias Cabarcos (Paderborn University)
- "Vision: The Price Should Be Right: Exploring User Perspectives on Data Sharing Negotiations" by Jacob Hopkins (Texas A&M University - Corpus Christi), Carlos Rubio-Medrano (Texas A&M University - Corpus Christi), and Cori Faklaris (University of North Carolina at Charlotte)
- "Vision: Comparison of AI-assisted Policy Development Between Professionals and Students" by Rishika Thorat and Tatiana Ringenberg (Purdue University)
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing its creators’, authors’, and presenters’ superb NDSS Symposium 2025 conference content on the organization’s YouTube channel.
The post NDSS 2025 – Symposium on Usable Security and Privacy (USEC) 2025, co-located with the Network and Distributed System Security (NDSS) Symposium 2025 Afternoon, Session 3 appeared first on Security Boulevard.