Aggregator
Nacos incident timeline: a SQL-to-RCE vulnerability that lay dormant for four years
Course 622: Cyber Security Risks for Travelling University Employees
Mythic 3.3 Beta: Rise of the Events
Looking at R&D security from a security perspective
Public Report - Security Risks of AI Hardware for Personal and Edge Computing Devices
Researchers: Weak Security Defaults Enabled Squarespace Domains Hijacks
How MITRE Engenuity ATT&CK Evaluates EDR
I'll make you an offer you can't refuse...
IDC releases the "China Industrial Control Security Audit Market Share, 2023" report; 威努特 ranks second nationwide!
Securing APIs While Navigating Today's Booming API Economy
15th July – Threat Intelligence Report
For the latest discoveries in cyber research for the week of 15th July, please download our Threat Intelligence Bulletin. TOP ATTACKS AND BREACHES American telecom giant AT&T has disclosed a massive data breach that exposed personal information of 110M of its customers. The data was stolen from the company’s workspace on a third-party cloud platform, […]
Uncoordinated Vulnerability Disclosure: The Continuing Issues with CVD
On Patch Tuesday last week, Microsoft released an update for CVE-2024-38112, which they said was being exploited in the wild. We at the Trend Micro Zero Day Initiative (ZDI) agree with them because that’s what we told them back in May when we detected this exploit in the wild and reported it to Microsoft. However, you may notice that no one from Trend or ZDI was acknowledged by Microsoft. This case has become a microcosm of the problems with coordinated vulnerability disclosure (CVD) as vendors push for coordinated disclosure from researchers but rarely practice any coordination regarding the fix. This lack of transparency from vendors often leaves researchers who practice CVD with more questions than answers.
If you’re just interested in the technical details of the bugs we discovered in the wild, my colleagues Peter and Ali have a fantastic write-up here, which includes IOCs and detection guidance.
What happened to transparency in communication?
We were surprised by the publication of CVE-2024-38112, and we weren’t the only ones. When an In-the-Wild (ITW) exploit goes unpatched for that long, it’s not unusual for someone else to discover it independently. In this instance, it was Haifei Li at Check Point who also detected this ITW exploit and reported it to Microsoft. He too was surprised by the patch and noted it wasn’t the first occurrence:
Figure 1 - Tweet from Haifei Li - https://x.com/HaifeiLi/status/1810743597127582135
That doesn’t seem to be the only communication problem from this release. Kẻ soi mói, a researcher from Dataflow Security, expressed his dismay that SharePoint bugs similar to ones he had submitted, and which were still unactioned, were being fixed:
Figure 2 - Tweets from Kẻ soi mói - https://x.com/testanull/status/1810837531770134709
His frustration with the disclosure process here is clear. Even when there are no obvious communication issues when reporting a bug, there can be issues when the fix is disclosed. That is what happened with Valentina Palmiotti, a researcher with IBM X-Force. She had a winning entry at Pwn2Own this year, but when the fix was released, despite her literally handing Microsoft a working exploit, Microsoft made an odd choice in the CVSS rating:
Figure 3 - Tweet from Valentina Palmiotti - https://x.com/chompie1337/status/1800691497949614135
CVD doesn’t work if the only ones coordinating are the researchers. While these are Microsoft examples, there are multiple occasions from various vendors where “coordination” simply means “You tell us everything you know about this bug, and maybe something will happen.”
How can we trust you if you aren’t acting trustworthy?
In the Fall of 2023, Microsoft launched its Secure Future Initiative (SFI). Many in the security community – me included – hoped this would be like Bill Gates’s Trustworthy Computing (TwC) memo and would result in a new age of security and transparency from the Redmond giant. Sadly, that has failed to materialize.
In May of this year, they updated their SFI site with additional information, including a whitepaper entitled, Setting a new standard for faster vulnerability response and security updates [PDF]. Interestingly, it included the following graphic defining the vulnerability lifecycle at Microsoft.
Figure 4 – Microsoft’s view of the vulnerability lifecycle
As a former Microsoft employee, I recognize this graphic; it’s similar to one I saw while I was there. At the bottom there is “Partner with Researcher”, which implies that open communication happens throughout the process. However, that clearly isn’t happening.
Vendors want researchers to trust them, but they aren’t taking the necessary steps to earn our trust. What’s sad is that we aren’t asking for a lot. Tell us you’ve received the report. Confirm or deny our findings. Tell us when a patch is coming. Acknowledge us appropriately (and spell our name right). And finally, once the patch is available, tell us where we can find the patch. Strangely, one of the biggest problems we have at the ZDI is just getting vendors to tell us when something is fixed.
Who arbitrates disagreements?
Here at the ZDI, one of the best things we offer researchers who sell bugs to us is that we handle the communications with the vendors. What happens when the vendor states the fix should be a defense-in-depth update rather than a full CVE? What happens when the vendor states the impact is spoofing but the bug results in remote code execution? If you’re a lone researcher, your voice might not be heard. The ZDI has existed since 2005, so we have a lot of experience in disclosing bugs and understand how to argue on behalf of the researcher.
There are times when even the ZDI is not heard in these disagreements. However, we have the advantage of blogging platforms, social media, PR agencies, and legal teams backing us up. The average researcher typically doesn’t have access to such resources, so the option to just 0-day the bug and release everything publicly becomes quite attractive. It’s not just about disagreements – it’s about network defenders and end users not having the right information to estimate the risk to their enterprises.
Let’s take CVSS ratings as an example. In the most recent Patch Tuesday release, Microsoft rated a RADIUS vulnerability as CVSS 7.5, but the researcher who discovered the bug rated it as a 9.0. Not only does that change the severity level from High to Critical, but it could also change how quickly a patch is deployed. Maybe your enterprise rolls out Critical-rated patches in 30 days but allows up to 90 for High-severity bugs. A simple disagreement could result in a drastically different security posture for millions of people. In one sense, the CVE program was supposed to solve this problem. There is a methodology for disputing CVEs [PDF], but the dispute process has not proved effective.
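To make the stakes concrete, here is a minimal sketch of a hypothetical patch-deployment policy like the one described above. The severity cutoffs follow the standard CVSS v3 qualitative bands, but the SLA day counts and the function itself are illustrative assumptions, not any vendor's actual policy.

```python
# Hypothetical patch-deployment SLA showing how a single disputed CVSS score
# (vendor: 7.5 High vs. researcher: 9.0 Critical) changes the rollout deadline.
# Severity cutoffs follow the CVSS v3 qualitative bands; day counts are illustrative.

def deployment_deadline_days(cvss_score: float) -> int:
    if cvss_score >= 9.0:      # Critical
        return 30
    if cvss_score >= 7.0:      # High
        return 90
    if cvss_score >= 4.0:      # Medium
        return 180
    return 365                 # Low

assert deployment_deadline_days(9.0) == 30   # researcher's rating
assert deployment_deadline_days(7.5) == 90   # vendor's rating
```

Under this hypothetical policy, the same bug sits unpatched for an extra 60 days simply because of how the score was assigned.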
Other than bounties, why would researchers report bugs to vendors who don’t coordinate?
If you don’t offer a bounty payout and don’t coordinate with researchers or properly credit them, why in the world would anyone report bugs to you? Let’s say you’ve had a bad experience with a vendor. Most researchers simply stop reporting bugs to that vendor. It’s not worth their time or effort, and I totally understand that feeling. However, that doesn’t mean that bugs in that vendor’s products simply disappear. Threat actors don’t care if a vendor is difficult to work with; they just keep exploiting the bugs until they can’t.
If researchers are actually worried about exploitation but don’t want to deal with the vendor, what’s stopping them from just dropping a 0-day? That way, they know they will be credited, and vendors typically move fast on publicly known bugs. Of course, vendors say that would be irresponsible of the researchers. And the researchers think it’s irresponsible for multi-billion dollar corporations to release shoddy code. That’s where the idea of “coordinated disclosure” came from. It took the good ideas of “responsible disclosure” and removed the moral judgments. CVD was also meant to acknowledge that the vendor has a responsibility to the researcher. This is the part of the CVD process that needs the most improvement. Researchers have responded by doing their part; now it’s time for vendors to step up and do theirs.
Quis custodiet ipsos custodes?
In April of 2024, the Cyber Safety Review Board (CSRB) released a report on Microsoft’s response to the Exchange Online intrusion from 2023. As a result of that report and the recommendations laid out in it, Microsoft president Brad Smith testified before Congress concerning the ongoing security improvements at Microsoft. Interestingly, on the same day, an article from ProPublica detailed how Microsoft ignored warnings from one of their engineers about a problem that led to the SolarWinds compromise.
While it’s good to see some official accountability somewhere, we as an industry must hold each other accountable for open and transparent communications as well. That’s one of the reasons the ZDI will be launching the Vanguard Awards at this year’s Black Hat. These awards are designed to highlight the best of both researchers and vendors. There won’t be a “failure” category because we’d rather reward outstanding work than highlight mistakes or miscalculations. We’ll be presenting vendor awards in five categories:
- Best Security Advisories
- Most Transparent Communication
- Most Collaborative
- Most Improved
- Fastest to Patch
The winners of some of these will likely turn some heads, but it’s our way of monitoring and rewarding those in the industry who go above and beyond to make CVD work for everyone.
Conclusion
Why is CVD not working? Has the number of bugs being disclosed increased to the point where vendors simply cannot cope with the level of coordination? Have budget cuts reduced the number of response personnel vendors employ? Has the rush to automation come at the expense of coordination? Are researchers just reporting to an API with no humans reviewing the reports? As I said, we’re left with more questions than answers.
The lack of coordination doesn’t just hurt the vendor/researcher relationship. It hurts the end users. It lowers their ability to gauge risk, and it lowers their trust in security patches as well. Hopefully, increased government scrutiny and industry programs like the Vanguard Awards will serve as a carrot-and-stick approach that results in positive changes. It is said that security is a journey rather than a destination. Improvement works the same way. We can always get better, but we have to agree that we need to.
You can find me on Twitter at @dustin_childs and on Mastodon at @TheDustinChilds, and follow the team on Twitter, Mastodon, LinkedIn, or Instagram for the latest in exploit techniques and security patches.
New BugSleep Backdoor Deployed in Recent MuddyWater Campaigns
Key Findings Introduction MuddyWater, an Iranian threat group affiliated with the Ministry of Intelligence and Security (MOIS), is known to be active since at least 2017. During the last year, MuddyWater engaged in widespread phishing campaigns targeting the Middle East, with a particular focus on Israel. Since October 2023, the actors’ activities have increased significantly. Their methods […]
Scaling Up Malware Analysis with Gemini 1.5 Flash
Written by:
Bernardo Quintero, Founder of VirusTotal and Security Director, Google Cloud Security
Alex Berry, Security Manager of the Mandiant FLARE Team, Google Cloud Security
Ilfak Guilfanov, author of IDA Pro and CTO, Hex-Rays
Vijay Bolina, Chief Information Security Officer & Head of Cybersecurity Research, Google DeepMind
Executive Summary
- Following up on our Gemini 1.5 Pro for malware analysis post, this time around we tested whether our lightweight Gemini 1.5 Flash model is capable of large-scale malware dissection.
- The Gemini 1.5 Flash model was created to optimize efficiency and speed while maintaining performance, which allows us to utilize Gemini 1.5 Flash to process up to 1,000 requests per minute and 4 million tokens per minute.
- To evaluate the real-world performance of our malware analysis pipeline, we analyzed 1,000 Windows executables and DLLs randomly selected from VirusTotal's incoming stream. The system effectively resolved cases of false positives, samples with obfuscated code, and malware with zero detections on VirusTotal.
- On average, Gemini 1.5 Flash processed each file in 12.72 seconds (excluding the unpacking and decompilation stages), providing accurate summary reports in human-readable language.
In our previous post, we explored how Gemini 1.5 Pro could be used to automate the reverse engineering and code analysis of malware binaries. Now, we're focusing on Gemini 1.5 Flash, Google's new lightweight and cost-effective model, to transition that analysis from the lab to a production-ready system capable of large-scale malware dissection. With the ability to handle 1 million tokens, Gemini 1.5 Flash offers impressive speed and can manage large workloads. To support this, we've built an infrastructure on Google Compute Engine, incorporating a multi-stage workflow that includes scaled unpacking and decompilation stages. While promising, this is just the first step on a long journey to overcome accuracy challenges and unlock AI's full potential in malware analysis.
VirusTotal analyzes an average of 1.2 million unique new files each day, ones that have never been seen before on the platform. Nearly half of these are binary files (PE_EXE, PE_DLL, ELF, MACH_O, APK, etc.) that could benefit from reverse engineering and code analysis. Traditional, manual methods simply cannot keep pace with this volume of new threats. Building a system to automatically unpack, decompile, and analyze this quantity of code in a timely and efficient manner is a significant challenge, one that Gemini 1.5 Flash is designed to help address.
Building on the extensive capabilities of Gemini 1.5 Pro, the Gemini 1.5 Flash model was created to optimize efficiency and speed while maintaining performance. Both models share the same robust, multimodal capabilities and are capable of handling a context window of over 1 million tokens; however, Gemini 1.5 Flash is particularly designed for rapid inference and cost-effective deployment. This is achieved through parallel computation of attention and feedforward components, as well as the use of online distillation techniques. The latter enables Flash to learn directly from the larger and more complex Pro model during training. These architectural optimizations allow us to utilize Gemini 1.5 Flash to process up to 1,000 requests per minute and 4 million tokens per minute.
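As a rough sketch of what staying inside those quotas could look like on the client side, the snippet below implements a simple sliding-window throttle. It is illustrative only: the limits are the ones quoted above, and the class and method names are our own assumptions, not part of any Google SDK.

```python
import time
from collections import deque

# Quotas quoted in the post: 1,000 requests/minute and 4 million tokens/minute.
MAX_REQUESTS_PER_MIN = 1_000
MAX_TOKENS_PER_MIN = 4_000_000

class SlidingWindowThrottle:
    """Tracks requests and token usage over the trailing 60 seconds."""

    def __init__(self):
        self.events = deque()  # (timestamp, token_count) pairs

    def wait_for_slot(self, token_count: int) -> None:
        """Block until a request of `token_count` tokens fits inside both quotas."""
        if token_count > MAX_TOKENS_PER_MIN:
            raise ValueError("request alone exceeds the per-minute token quota")
        while True:
            now = time.monotonic()
            # Drop events that have aged out of the 60-second window.
            while self.events and now - self.events[0][0] >= 60:
                self.events.popleft()
            used_requests = len(self.events)
            used_tokens = sum(tokens for _, tokens in self.events)
            if (used_requests < MAX_REQUESTS_PER_MIN
                    and used_tokens + token_count <= MAX_TOKENS_PER_MIN):
                self.events.append((now, token_count))
                return
            # Wait until the oldest event leaves the window, then re-check.
            time.sleep(max(0.05, 60 - (now - self.events[0][0])))
```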
To illustrate how this pipeline works, we'll first showcase examples of Gemini 1.5 Flash analyzing decompiled binaries. Then we'll briefly outline the preceding steps of unpacking and decompilation at scale.
Analysis Speed and Examples
To evaluate the real-world performance of our malware analysis pipeline, we analyzed 1,000 Windows executables and DLLs randomly selected from VirusTotal's incoming stream. This selection ensured a diverse range of samples, encompassing both legitimate software and various types of malware. The first thing that struck us was the speed of Gemini 1.5 Flash. This aligns with the performance benchmarks highlighted in the Google Gemini team's paper, where Gemini 1.5 Flash consistently outperformed other large language models in terms of text generation speed across multiple languages.
The fastest processing time we observed was a mere 1.51 seconds, while the slowest was 59.60 seconds. On average, Gemini 1.5 Flash processed each file in 12.72 seconds. It's important to note that these times exclude the unpacking and decompilation stages, which we'll explore further later in this blog post.
These processing times are influenced by factors such as the size and complexity of the input code, and the length of the resulting analysis. Importantly, these measurements encompass the entire end-to-end process: from sending the decompiled code to the Gemini 1.5 Flash API on Vertex AI, through the model's analysis, to receiving the complete response back on our Google Compute Engine instance. This end-to-end perspective highlights the low latency and speed achievable with Gemini 1.5 Flash in real-world production scenarios.
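For readers who want to reproduce this kind of end-to-end timing, here is a minimal sketch using the Vertex AI Python SDK. The project, location, prompt wording, and helper function are assumptions for illustration, and the exact model identifier and prompting used by the production pipeline may differ.

```python
import time

import vertexai
from vertexai.generative_models import GenerativeModel

# Project, location, and model name are assumptions for this sketch.
vertexai.init(project="my-gcp-project", location="us-central1")
model = GenerativeModel("gemini-1.5-flash-001")

def analyze_decompiled_code(pseudo_c: str) -> tuple[str, float]:
    """Send decompiled pseudo-C to Gemini 1.5 Flash and return (report, seconds)."""
    prompt = (
        "You are a malware analyst. Summarize what the following decompiled "
        "code does, state whether it appears malicious, and list any IOCs.\n\n"
        + pseudo_c
    )
    start = time.monotonic()
    response = model.generate_content(prompt)  # single end-to-end request
    elapsed = time.monotonic() - start
    return response.text, elapsed
```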
Example 1: Dispelling a False Positive in 1.51 Seconds
Out of the 1,000 binaries we analyzed, this one was processed the fastest, highlighting the remarkable speed of Gemini 1.5 Flash. The file goopdate.dll (103.52 KB) triggered a single anti-virus detection on VirusTotal, a common occurrence that often requires time-consuming manual review.
Imagine this file triggered an alert in your SIEM system and you need answers fast. Gemini 1.5 Flash delivers, analyzing the decompiled code in just 1.51 seconds and providing a clear explanation: the file is a simple executable launcher for the "BraveUpdate.exe" application, likely a web browser component. This rapid, code-level insight allows analysts to confidently dismiss the alert as a false positive, preventing unnecessary escalation and saving valuable time and resources.
Example 2: Resolving Another False Positive
In another example, the file BootstrapPackagedGame-Win64-Shipping.exe (302.50 KB) was flagged by two anti-virus engines on VirusTotal, again requiring further scrutiny.
Gemini 1.5 Flash analyzes the decompiled code in just 4.01 seconds, revealing that the file is a game launcher. Gemini details the sample's functionality, which includes checking for prerequisites like Microsoft Visual C++ Runtime and DirectX, locating and executing redistributable installers, and ultimately launching the main game executable. This level of understanding allows analysts to confidently categorize the file as legitimate, avoiding unnecessary time and effort spent on a potential false positive.
Example 3: Longest Processing with Obfuscated Code
The file svrwsc.exe (5.91 MB) stood out during our analysis for requiring the longest processing time: 59.60 seconds. Factors such as the size of the decompiled code and the presence of obfuscation techniques like XOR encryption likely contributed to the longer analysis time. Nevertheless, Gemini 1.5 Flash completed its analysis in less than a minute. This is a notable achievement, considering that manually reverse engineering such a binary could take a human analyst several hours.
Gemini correctly determined the sample to be malicious and pinpointed its backdoor functionality, which is designed to exfiltrate data and connect to command-and-control (C2) servers located on Russian domains. The analysis delivers a wealth of IOCs such as potential C2 server URLs, mutexes used for process synchronization, altered registry keys, and suspicious file names. This information enables security teams to swiftly investigate and respond to the threat.
Example 4: Cryptominer
This example shows Gemini 1.5 Flash analyzing the decompiled code of a cryptominer named colto.exe. It's important to note that the model only receives the decompiled code as input, with no additional metadata or context from VirusTotal. In just 12.95 seconds, Gemini 1.5 Flash delivered a comprehensive analysis, identifying the malware as a cryptominer, highlighting obfuscation techniques, and extracting key IOCs, such as the download URL, file path, mining pool, and wallet address.
Example 5: Understanding Legitimate Software with Agnostic Approach
This example showcases Gemini 1.5 Flash analyzing a legitimate 3D viewer application named 3DViewer2009.exe in 16.72 seconds. Even with goodware, understanding a program's functionality can be valuable for security purposes. It's important to highlight that, just like in the previous examples, the model only receives the decompiled code for analysis without any additional metadata from VirusTotal, such as whether the binary is digitally signed by a trusted entity. This information is often taken into account by traditional malware detection systems, but we are adopting a code-centric approach.
Gemini 1.5 Flash successfully identifies the core purpose of the application (loading and displaying 3D models) and even recognizes the specific type of 3D data it handles (DTM). The analysis highlights the use of OpenGL for rendering, configuration file loading, and custom file classes for data management. This level of understanding could help security teams differentiate between legitimate software and malware that might attempt to mimic its behavior.
This agnostic approach to code analysis that focuses solely on functionality could be particularly valuable for scrutinizing digitally signed binaries, which might not always receive the same level of security analysis as unsigned files. This opens up new possibilities for identifying potentially malicious behavior, even within supposedly trusted software.
Example 6: Unmasking a Zero-Hour Keylogger
This example showcases the true power of analyzing code for malicious behavior: detecting threats that traditional security solutions miss. The executable AdvProdTool.exe (87KB) was submitted to VirusTotal, where it evaded all anti-virus engines, sandboxes, and detection systems at the time of its initial upload and analysis. However, Gemini 1.5 Flash uncovers its true nature. In just 4.7 seconds, the model analyzes the decompiled code, identifies it as a keylogger, and even reveals the IP address and port where it exfiltrates stolen data.
The analysis highlights the code's use of OpenSSL to establish a secure TLS connection to the IP address on port 443. Crucially, Gemini points out the suspicious use of keyboard input capture functions (GetAsyncKeyState, GetKeyState) and their connection to data transmission over the secure channel (SSL_write).
This example underscores the potential of code analysis to identify zero-hour threats in early stages of development, as this keylogger appears to be. It also highlights a critical advantage of Gemini 1.5 Flash: analyzing the raw functionality of code can reveal malicious intent, even when disguised by metadata or detection evasion techniques.
Workflow Overview
Our malware analysis pipeline consists of three key stages: unpacking, decompilation, and code analysis with Gemini 1.5 Flash. Two critical processes drive the first two stages: automated unpacking and decompilation at scale. We leverage Mandiant Backscatter, our internal cloud-based malware analysis service, to dynamically unpack incoming binaries. The unpacked binaries are then processed by a cluster of Hex-Rays Decompilers running on Google Compute Engine. While Gemini is capable of analyzing both disassembled and decompiled code, we've opted for decompilation in our pipeline. The determining factor was decompiled code being 5–10 times more concise than disassembled code, making it a more efficient choice given the token window limitations of large language models. This decompiled code is ultimately fed to Gemini 1.5 Flash for analysis.
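In code terms, the workflow reduces to a simple three-stage chain. The sketch below is purely conceptual: each stage function is a placeholder standing in for Mandiant Backscatter, the Hex-Rays cluster, and the Gemini call respectively, not a real API.

```python
# Conceptual view of the three-stage pipeline. Each function is a placeholder
# for the corresponding production component, not a real API.

def unpack(sample: bytes) -> bytes:
    """Stage 1: dynamic unpacking (Mandiant Backscatter in the pipeline above)."""
    raise NotImplementedError("placeholder for the unpacking service")

def decompile(unpacked: bytes) -> str:
    """Stage 2: decompilation to pseudo-C (the Hex-Rays Decompiler cluster)."""
    raise NotImplementedError("placeholder for the decompilation cluster")

def analyze(pseudo_c: str) -> str:
    """Stage 3: code analysis with Gemini 1.5 Flash (see the earlier API sketch)."""
    raise NotImplementedError("placeholder for the model call")

def process_sample(sample: bytes) -> str:
    """Run one binary through unpack -> decompile -> analyze and return the report."""
    return analyze(decompile(unpack(sample)))
```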
By orchestrating this workflow on Google Cloud, we can process a massive number of binaries, including the entire daily influx of over 500,000 new binaries submitted to VirusTotal.
Mandiant Backscatter
Our internal Mandiant Malware Analysis Backscatter Service, hosted on Google Compute Engine, provides scalable malware configuration extraction. As part of extracting configurations, Backscatter also performs malware deobfuscation, decryption, and unpacking in line with our VirusTotal pipeline to decompose the malware into artifacts. From these artifacts, configurations are extracted and the resulting IOCs are used to identify and track malware threats and actors across hundreds of malware families in our Google Threat Intelligence platform. The artifacts, including unpacked binaries, are also resubmitted back into the pipeline, allowing tools such as Gemini 1.5 Flash to perform additional processing to extend our knowledge of what operations the malware is performing with the IOCs identified in previous stages.
Hex-Rays Decompiler
Our cluster of Hex-Rays IDA Pro Decompilers, hosted on Google Compute Engine, provides the scalable decompilation power necessary for this pipeline. We leverage the new IDA LIB, a headless version of IDA Pro designed for automated workflows, which is scheduled for release in Q3 2024. The cluster seamlessly integrates with our pipeline, reading unpacked binaries from a Google Cloud Pub/Sub queue fed by Mandiant Backscatter. The resulting decompiled pseudo-C code is then stored in a Google Cloud Storage bucket, ready for analysis by Gemini 1.5 Flash. Currently, each node in the cluster can decompile more than 3,000 files per hour, ensuring we can keep pace with the high volume of incoming binaries.
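To give a sense of how such a decompilation worker might plug into the queue and storage plumbing described here, below is a hedged sketch using the Google Cloud Pub/Sub and Cloud Storage Python clients. The project, subscription, and bucket names are made up, and the decompiler invocation is a placeholder rather than the real IDA integration.

```python
from google.cloud import pubsub_v1, storage

# Assumed names for illustration; the real queue, bucket, and decompiler
# integration are internal to the pipeline described above.
PROJECT_ID = "my-gcp-project"
SUBSCRIPTION = "unpacked-binaries-sub"
OUTPUT_BUCKET = "decompiled-pseudo-c"

storage_client = storage.Client(project=PROJECT_ID)
bucket = storage_client.bucket(OUTPUT_BUCKET)

def decompile_to_pseudo_c(binary: bytes) -> str:
    """Placeholder for invoking the headless Hex-Rays decompiler on one binary."""
    raise NotImplementedError

def handle_message(message: pubsub_v1.subscriber.message.Message) -> None:
    """Decompile one unpacked binary and store the pseudo-C for the analysis stage."""
    sample_sha256 = message.attributes.get("sha256", "unknown")
    try:
        pseudo_c = decompile_to_pseudo_c(message.data)
        bucket.blob(f"{sample_sha256}.c").upload_from_string(pseudo_c)
        message.ack()
    except Exception:
        message.nack()  # let Pub/Sub redeliver so another node can retry

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)
future = subscriber.subscribe(subscription_path, callback=handle_message)
future.result()  # block the worker and process messages as they arrive
```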
Challenges and Ongoing Development
As expected, our tests highlighted a crucial aspect of this pipeline: the performance of Gemini 1.5 Flash is heavily dependent on the quality of the preceding unpacking and decompilation stages. For instance, if the unpacking phase fails to fully unpack a new or unknown packer, the decompiler will only be able to extract the code of the packer itself, not the original program logic hidden within. In such cases, Gemini correctly reports that it's analyzing a program performing unpacking, decryption, or deobfuscation operations, and that it won't be able to analyze the true purpose of the code concealed by the packer.
Similarly, the quality of the decompiled code directly impacts Gemini's ability to understand and analyze the program's behavior. The decompiled code is the raw material for Gemini's analysis, so any errors or inconsistencies in this code will propagate to the final report. Moreover, Gemini must also contend with various code-level obfuscation methods, including new approaches employed by attackers, requiring it to continuously adapt and improve its analysis capabilities in this evolving landscape.
This interdependence underscores the importance of continuously improving all three stages of the pipeline. A weakness in any part of this sequential workflow will directly impact the performance of the subsequent phases. Improved outputs from these stages directly translate to more successful analysis by Gemini. Therefore, our ongoing development efforts focus not only on enhancing Gemini's analytical capabilities but also on refining the unpacking and decompilation stages to ensure they deliver the highest quality output for analysis.
On the decompilation side, we are working closely with Hex-Rays to enhance their decompiler, focusing on three key areas:
- Improved Language-Specific Structure Recognition: We aim to enhance the decompiler's ability to recognize structures unique to specific programming languages. This includes elements like try-catch statements or class member definitions within C++, Rust, and Golang code. By adding a new semantic layer to the decompiler, we can enable it to interpret the underlying code more effectively. This leads to more accurate and readable output, ultimately benefiting Gemini's analysis.
- More Meaningful Function and Variable Naming: Clear and descriptive names for functions and variables within the decompiled code significantly aid Gemini's analysis. We're exploring techniques to generate such names during the decompilation process, including the possibility of integrating Gemini for this purpose.
- Richer Contextual Information: Beyond improved decompiled code, we're investigating methods to provide the model with richer contextual data. This might include visual representations like data flow diagrams and control flow graphs, or even a complete export of IDA Pro's IDB. This additional information can provide valuable insights into the program's overall structure and logic, enabling a more thorough and accurate analysis.
This is just the beginning of our exploration into leveraging AI for large-scale threat analysis. We are excited to announce that these types of code analysis reports will soon be integrated into VirusTotal's Code Insight section. This integration will provide the VirusTotal community with valuable insights into the behavior of binary files, powered by the speed and scalability of Gemini 1.5 Flash.
For an even more powerful analysis experience, we are developing an advanced version of this pipeline within Google Threat Intelligence. This implementation will leverage the capabilities of Gemini 1.5 Pro enhanced by AI agents that can use specialized malware analysis tools and correlate threat information from across Google, Mandiant, and VirusTotal. This advanced analysis will be available within our Private Scanning service, ensuring the confidentiality of the content processed. Watch our recent webinar for more on Gemini in Google Threat Intelligence.
We will continue to share our progress and new advancements in AI-driven threat analysis as we strive to make the digital world a safer place. Here at GSEC Malaga, we are dedicated to pushing the boundaries of what's possible in cybersecurity and exploring new ways to apply AI to protect users from evolving threats.
Samples Details
The following table contains details on the binary samples discussed in this post.

| Filename | SHA-256 |
| --- | --- |
| goopdate.dll | 0d2115d3de900bcd5aeca87b9af0afac90f99c5a009db7c162101a200fbfeb2c |
| BootstrapPackagedGame-Win64-Shipping.exe | 07db922be22e4feedbacea7f92983f51404578bd0c495abaae3d4d6bf87ae6d0 |
| svrwsc.exe | 0cdb71e81b07247ee9d4ea1e1005c9454a5d3eb5f1078279a905f0095fd88566 |
| colto.exe | 091e505df4290f1244b3d9a75817bb1e7524ac346a2f28b0ef3c689c445beb45 |
| 3DViewer2009.exe | 08f20e0a2d30ba259cd3fe2a84ead6580b84e33abfcec4f151c5b2e454602f81 |
| AdvProdTool.exe | 04af0519d0dbe20bc8dc8ba4d97a791ae3e3474c6372de83087394d219babd47 |