Vista Normal

Hay nuevos artículos disponibles. Pincha para refrescar la página.
Ayer — 23 Mayo 2025Salida Principal

This Week in Security: Signal DRM, Modern Phone Phreaking, and the Impossible SSH RCE

23 Mayo 2025 at 14:00

Digital Rights Management (DRM) has been the bane of users since it was first introduced. Who remembers the battle it was getting Netflix running on Linux machines, or the literal legal fight over the DVD DRM decryption key? So the news from Signal, that DRM is finally being put to use to protect users is ironic.

The reason for this is Microsoft Recall — the AI powered feature that takes a snapshot of everything on the user’s desktop every few seconds. For whatever reason, you might want to exempt some windows from Recall’s memory window. It doesn’t speak well for Microsoft’s implementation that the easiest way for an application to opt out of the feature is to mark its window as containing DRM content. Signal, the private communications platform, is using this to hide from Recall and other screenshotting applications.

The Signal blogs warns that this may be just the start of agentic AI being rolled out with insufficient controls and permissions. The issue here isn’t the singularity or AI reaching sentience, it’s the same old security and privacy problems we’ve always had: Too much information being collected, data being shared without permission, and an untrusted actor having access to way more than it should.

Legacy Malware?

The last few stories we’ve covered about malicious code in open source repositories have featured how quickly the bad packages were caught. Then there’s this story about two-year-old malicious packages on NPM that are just now being found.

It may be that the reason these packages weren’t discovered until now, is that these packages aren’t looking to exfiltrate data, or steal bitcoin, or load other malware. Instead, these packages have a trigger date, and just sabotage the systems they’re installed on — sometimes in rather subtle ways. If a web application you were writing was experiencing intermittent failures, how long would it take you to suspect malware in one of your JavaScript libraries?

Where Are You Calling From?

Phone phreaking isn’t dead, it has just gone digital. One of the possibly apocryphal origins of phone phreaking was a toy bo’sun whistle in boxes of cereal, that just happened to play a 2600 Hz tone. More serious phreakers used more sophisticated, digital versions of the whistle, calling them blue boxes. In modern times, apparently, the equivalent of the blue box is a rooted Android phone. [Daniel Williams] has the story of playing with Voice over LTE (VoLTE) cell phone calls. A bug in the app he was using forced him to look at the raw network messages coming from O2 UK, his local carrier.

And those messages were weird. VoLTE is essentially using the Session Initiation Protocol (SIP) to handle cell phone calls as Voice over IP (VoIP) calls using the cellular data network. SIP is used in telephony all over the place, from desk phones to video conferencing solutions. SIP calls have headers that work to route the call, which can contain all sorts of metadata about the call. [Daniel] took a look at the SIP headers on a VoLTE call, and noticed some strange things. For one, the International Mobile Subscriber Identity (IMSI) and International Mobile Equipment Identity (IMEI) codes for both the sender and destination were available.

He also stumbled onto an interesting header, the Cellular-Network-Info header. This header encodes way too much data about the network the remote caller is connected to, including the exact tower being used. In an urban environment, that locates a cell phone to an area not much bigger than a city block. Together with leaking the IMSI and IMEI, this is a dangerous amount of information to leak to anyone on the network. [Daniel] attempted to report the issue to O2 in late March, and was met with complete silence. However, a mere two days after this write-up was published, on May 19th, O2 finally made contact, and confirmed that the issue had finally been resolved.

ARP Spoofing in Practice

TCP has an inherent security advantage, because it’s a stateful connection, it’s much harder to make a connection from a spoofed IP address. It’s harder, but it’s not impossible. One of the approaches that allows actual TCP connections from spoofed IPs is Address Resolution Protocol (ARP) poisoning. Ethernet switches don’t look at IP addresses, but instead route using MAC addresses. ARP is the protocol that distributes the MAC Address to IP mapping on the local network.

And like many protocols from early in the Internet’s history, ARP requests don’t include any cryptography and aren’t validated. Generally, whoever claims an IP address first wins, so the key is automating this process. And hence, enter NetImposter, a new tool specifically designed to automate this process, sending spoofed ARP packets, and establishing an “impossible” TCP connection.

Impossible RCE in SSH

Over two years ago, researchers at Qualsys discovered a pre-authentication double-free in OpenSSH server version 9.1. 9.2 was quickly released, and because none of the very major distributions had shipped 9.1 yet, what could have been a very nasty problem was patched pretty quietly. Because of the now-standard hardening features in modern Linux and BSD distributions, this vulnerability was thought to be impossible to actually leverage into Remote Code Execution (RCE).

If someone get a working OpenSSH exploit from this bug, I'm switching my main desktop to Windows 98 😂 (this bug was discovered by a Windows 98 user who noticed sshd was crashing when trying to login to a Linux server!)

— Tavis Ormandy (@taviso) February 14, 2023

The bug was famously discovered by attempting to SSH into a modern Linux machine from a Windows 98 machine, and Tavis Ormandy claimed he would switch to Windows 98 on his main machine if someone did actually manage to exploit it for RCE. [Perri Adams] thought this was a hilarious challenge, and started working an exploit. Now we have good and bad news about this effort. [Perri] is pretty sure it is actually possible, to groom the heap and with enough attempts, overwrite an interesting pointer, and leak enough information in the process to overcome address randomization, and get RCE. The bad news is that the reward of dooming [Tavis] to a Windows 98 machine for a while wasn’t quite enough to be worth the pain of turning the work into a fully functional exploit.

But that’s where [Perri’s] OffensiveCon keynote took an AI turn. How well would any of the cutting-edge AIs do at finding, understanding, fixing, and exploiting this vulnerability? As you probably already guessed, the results were mixed. Two of the three AIs thought the function just didn’t have any memory management problems at all. Once informed of the problem, the models had more useful analysis of the code, but they still couldn’t produce any remotely useful code for exploitation. [Perri’s] takeaway is that AI systems are approaching the threshold of being useful for defensive programming work. Distilling what code is doing, helping in reverse engineering, and working as a smarter sort of spell checker are all wins for programmers and security researchers. But fortunately, we’re not anywhere close to a world where AI is developing and deploying exploitations.

Bits and Bytes

There are a pair of new versions of reverse engineering/forensic tools released very recently. Up first is Frida, a runtime debugger on steroids, that is celebrating its 17th major version release. One of the major features is migrating to pluggable runtime bridges, and moving away from strictly bundling them. We also have Volatility 3, a memory forensics framework. This isn’t the first Volatility 3 release, but it is the release where version three officially has parity with the version two of the framework.

The Foscam X5 security camera has a pair of buffer overflows, each of which can be leveraged to acieve arbitrary RCE. One of the proof-of-concepts has a very impressive use of a write-null-anywhere primitive to corrupt a return pointer, and jump into a ROP gadget. The concerning element of this disclosure is that the vendor has been completely unresponsive, and the vulnerabilities are still unaddressed.

And finally, one of the themes that I’ve repeatedly revisited is that airtight attribution is really difficult. [Andy Gill] walks us through just one of the many reasons that’s difficult. Git cryptographically signs the contents of a commit, but not the timestamps. This came up when looking through the timestamps from “Jia Tan” in the XZ compromise. Git timestamps can be trivially rewritten. Attestation is hard.

AnteayerSalida Principal

This Week in Security: Lingering Spectre, Deep Fakes, and CoreAudio

16 Mayo 2025 at 14:00

Spectre lives. We’ve got two separate pieces of research, each finding new processor primitives that allow Spectre-style memory leaks. Before we dive into the details of the new techniques, let’s quickly remind ourselves what Spectre is. Modern CPUs use a variety of clever tricks to execute code faster, and one of the stumbling blocks is memory latency. When a program reaches a branch in execution, the program will proceed in one of two possible directions, and it’s often a value from memory that determines which branch is taken. Rather than wait for the memory to be fetched, modern CPUs will predict which branch execution will take, and speculatively execute the code down that branch. Once the memory is fetched and the branch is properly evaluated, the speculatively executed code is rewound if the guess was wrong, or made authoritative if the guess was correct. Spectre is the realization that incorrect branch prediction can change the contents of the CPU cache, and those changes can be detected through cache timing measurements. The end result is that arbitrary system memory can be leaked from a low privileged or even sandboxed user process.

In response to Spectre, OS developers and CPU designers have added domain isolation protections, that prevent branch prediction poisoning in an attack process from affecting the branch prediction in the kernel or another process. Training Solo is the clever idea from VUSec that branch prediction poisoning could just be done from within the kernel space, and avoid any domain switching at all. That can be done through cBPF, the classic Berkeley Packet Filter (BPF) kernel VM. By default, all users on a Linux system can run cBPF code, throwing the doors back open for Spectre shenanigans. There’s also an address collision attack where an unrelated branch can be used to train a target branch. Researchers also discovered a pair of CVEs in Intel’s CPUs, where prediction training was broken in specific cases, allowing for a wild 17 kB/sec memory leak.

Also revealed this week is the Branch Privilege Injection research from COMSEC. This is the realization that Intel Branch Prediction happens asynchronously, and in certain cases there is a race condition between the updates to the prediction engine, and the code being predicted. In short, user-mode branch prediction training can be used to poison kernel-mode prediction, due to the race condition.

(Editor’s note: Video seems down for the moment. Hopefully YouTube will get it cleared again soon. Something, something “hackers”.)

Both of these Spectre attacks have been patched by Intel with microcode, and the Linux kernel has integrated patches for the Training Solo issue. Training Solo may also impact some ARM processors, and ARM has issued guidance on the vulnerability. The real downside is that each fix seems to come with yet another performance hit.

Is That Real Cash? And What Does That Even Mean?

Over at the Something From Nothing blog, we have a surprisingly deep topic, in a teardown of banknote validators. For the younger in the audience, there was a time in years gone by where not every vending machine had a credit card reader built-in, and the only option was to carefully straighten a bill and feed it into the bill slot on the machine. Bow how do those machines know it’s really a bill, and not just the right sized piece of paper?

And that’s where this gets interesting. Modern currency has multiple security features in a single bill, like magnetic ink, micro printing, holograms, watermarks, and more. But how does a bill validator check for all those things? Mainly LEDs and photodetectors, it seems. With some machines including hall effect sensors, magnetic tape heads for detecting magnetic ink, and in rare cases a full linear CCD for scanning the bill as it’s inserted. Each of those detectors (except the CCD) produces a simple data stream from each bill that’s checked. Surely it would be easy enough to figure out the fingerprint of a real bill, and produce something that looks just like the real thing — but only to a validator?

In theory, probably, but the combination of sensors presents a real problem. It’s really the same problem with counterfeiting a bill in general: implementing a single security feature is doable, but getting them all right at the same time is nearly impossible. And so with the humble banknote validator.

Don’t Trust That Phone Call

There’s a scam that has risen to popularity with the advent of AI voice impersonation. It usually takes the form of a young person calling a parent or grandparent from jail or a hospital, asking for money to be wired to make it home. It sounds convincing, because it’s an AI deepfake of the target’s loved one. This is no longer just a technique to take advantage of loving grandparents. The FBI has issued a warning about an ongoing campaign using deepfakes of US officials. The aim of this malware campaign seems to be just getting the victim to click on a malicious link. This same technique was used in a LastPass attack last year, and the technique has become so convincing, it’s not likely to go away anytime soon.

AI Searching SharePoint

Microsoft has tried not to be left behind in the current flurry of AI rollouts that every tech company seems to be engaging in. Microsoft’s SharePoint is not immune, and the result is Microsoft Copilot for SharePoint. This gives an AI agent access to a company’s SharePoint knowledge base, allowing users to query it for information. It’s AI as a better search engine. This has some ramifications for security, as SharePoint installs tend to collect sensitive data.

The first ramification is the most straightforward. The AI can be used to search for that sensitive data. But Copilot pulling data from a SharePoint file doesn’t count as a view, making for a very stealthy way to pull data from those sensitive files. Pen Test Partners found something even better on a real assessment. A passwords file hosted on SharePoint was unavailable to view, but in an odd way. This file hadn’t been locked down using SharePoint permissions, but instead the file was restricted from previewing in the browser. This was likely an attempt to keep eyes off the contents of the file. And Copilot was willing to be super helpful, pasting the contents of that file right into a chat window. Whoops.

Fuzzing Apple’s CoreAudio

Googler [Dillon Franke] has the story of finding a type confusion flaw in Apple’s CoreAudio daemon, reachable via Mach Inter-Process Communication (IPC) messages, allowing for potential arbitrary code execution from within a sandboxed process. This is a really interesting fuzzing + reverse engineering journey, and it starts with imagining the attack he wanted to find: Something that could be launched from within a sandboxed browser, take advantage of already available IPC mechanisms, and exploit a complex process with elevated privileges.

Coreaudiod ticks all the boxes, but it’s a closed source daemon. How does one approach this problem? The easy option is to just fuzz over the IPC messages. It would be a perfectly viable strategy, to fuzz CoreAudio via Mach calls. The downside is that the fuzzer would run slower, and have much less visibility into what’s happening in the target process. A much more powerful approach is to build a fuzzing harness that allows hooking directly to the library in question. There is some definite library wizardry at play here, linking into a library function that hasn’t been exported.

The vulnerability that he found was type confusion, where the daemon expected an ioctl object, but could be supplied arbitrary data. As an ioctl object contains a pointer to a vtable, which is essentially a collection of function pointers. It then attempts to call a function from that table. It’s an ideal situation for exploitation. The fix from Apple is an explicit type check on the incoming objects.

Bits and Bytes

Asus publishes the DriverHub tool, a gui-less driver updater. It communicates with driverhub.asus.com using RPC calls. The problem is that it checks for the right web URL using a wildcard, and driverhub.asus.com.mrbruh.com was considered completely valid. Among the functions DriverHub can perform is to install drivers and updates. Chaining a couple of fake updates together results in relatively easy admin code execution on the local machine, with the only prerequisites being the DriverHub software being installed, and clicking a single malicious link. Ouch.

The VirtualBox VGA driver just patched a buffer overflow that could result in VM escape. The vmsvga3dSurfaceMipBufferSize call could be manipulated so no memory is actually allocated, but VirtualBox itself believes a buffer is there and writable. This memory write ability can be leveraged into arbitrary memory read and write capability on the host system.

And finally, what’s old is new again. APT28, a Russian state actor, has been using very old-school Cross Site Scripting (XSS) attacks to gain access to target’s webmail systems. The attack here is JavaScript in an email’s HTML code. That JS then used already known XSS exploits to exfiltrate emails and contacts. The worst part of this campaign is how low-effort it was. These aren’t cutting-edge 0-days. Instead, the target’s email servers just hadn’t been updated. Keep your webmail installs up to date!

This Week in Security: Encrypted Messaging, NSO’s Judgement, and AI CVE DDoS

9 Mayo 2025 at 14:00

Cryptographic messaging has been in the news a lot recently. Like the formal audit of WhatsApp (the actual PDF). And the results are good. There are some minor potential problems that the audit highlights, but they are of questionable real-world impact. The most consequential is how easy it is to add additional members to a group chat. Or to put it another way, there are no cryptographic guarantees associated with adding a new user to a group.

The good news is that WhatsApp groups don’t allow new members to read previous messages. So a user getting added to a group doesn’t reveal historic messages. But a user added without being noticed can snoop on future messages. There’s an obvious question, as to how this is a weakness. Isn’t it redundant, since anyone with the permission to add someone to a group, can already read the messages from that group?

That’s where the lack of cryptography comes in. To put it simply, the WhatsApp servers could add users to groups, even if none of the existing users actually requested the addition. It’s not a vulnerability per se, but definitely a design choice to keep in mind. Keep an eye on the members in your groups, just in case.

The Signal We Have at Home

The TeleMessage app has been pulled from availability, after it was used to compromise Signal communications of US government officials. There’s political hay to be made out of the current administration’s use and potential misuse of Signal, but the political angle isn’t what we’re here for. The TeleMessage client is Signal compatible, but adds message archiving features. Government officials and financial companies were using this alternative client, likely in order to comply with message retention laws.

While it’s possible to do long term message retention securely, TeleMessage was not doing this particularly well. The messages are stripped of their end-to-end encryption in the client, before being sent to the archiving server. It’s not clear exactly how, but those messages were accessed by a hacker. This nicely demonstrates the inherent tension between the need for transparent archiving as required by the US government for internal communications, and the need for end-to-end encryption.

The NSO Judgement

WhatsApp is in the news for another reason, this time winning a legal judgement against NSO Group for their Pegasus spyware. The $167 Million in damages casts real doubt on the idea that NSO has immunity to develop and deploy malware, simply because it’s doing so for governments. This case is likely to be appealed, and higher courts may have a different opinion on this key legal question, so hold on. Regardless, the era of NSO’s nearly unrestricted actions is probably over. They aren’t the only group operating in this grey legal space, and the other “legal” spyware/malware vendors are sure to be paying attention to this ruling as well.

The $5 Wrench

In reality, the weak point of any cryptography scheme is the humans using it. We’re beginning to see real world re-enactments of the famous XKCD $5 wrench, that can defeat even 4096-bit RSA encryption. In this case, it’s the application of old crime techniques to new technology like cryptocurrency. To quote Ars Technica:

We have reached the “severed fingers and abductions” stage of the crypto revolution

The flashy stories involve kidnapping and torture, but let’s not forget that the most common low-tech approach is simple deception. Whether you call it the art of the con, or social engineering, this is still the most likely way to lose your savings, whether it’s conventional or a cryptocurrency.

The SonicWall N-day

WatchTowr is back with yet another reverse-engineered vulnerability. More precisely, it’s two CVEs that are being chained together to achieve pre-auth Remote Code Execution (RCE) on SonicWall appliances. This exploit chain has been patched, but not everyone has updated, and the vulnerabilities are being exploited in the wild.

The first vulnerability at play is actually from last year, and is in Apache’s mod_rewrite module. This module is widely used to map URLs to source files, and it has a filename confusion issue where a url-encoded question mark in the path can break the mapping to the final filesystem path. A second issue is that when DocumentRoot is specified, instances of RewriteRule take on a weird dual-meaning. The filesystem target refers to the location inside DocumentRoot, but it first checks for that location in the filesystem root itself. This was fixed in Apache nearly a year ago, but it takes time for patches to roll out.

SonicWall was using a rewrite rule to serve CSS files, and the regex used to match those files is just flexible enough to be abused for arbitrary file read. /mnt/ram/var/log/httpd.log%3f.1.1.1.1a-1.css matches that rule, but includes the url-encoded question mark, and matches a location on the root filesystem. There are other, more interesting files to access, like the temp.db SQLite database, which contains session keys for the currently logged in users.

The other half of this attack is a really clever command injection using one of the diagnostic tools included in the SonicWall interface. Traceroute6 is straightforward, running a traceroute6 command and returning the results. It’s also got good data sanitization, blocking all of the easy ways to break out of the traceroute command and execute some arbitrary code. The weakness is that while this sanitization adds backslashes to escape quotes and other special symbols, it stores the result in a fixed-length result buffer. If the result of this escaping process overflows the result buffer, it writes over the null terminator and into the buffer that holds the original command before it’s sanitized. This overflow is repeated when the command is run, and with some careful crafting, this results in escaping the sanitization and including arbitrary commands. Clever.

The AI CVE DDoS

[Daniel Stenberg], lead developer of curl, is putting his foot down. We’ve talked about this before, even chatting with Daniel about the issue when we had him on FLOSS Weekly. Curl’s bug bounty project has attracted quite a few ambitious people, that don’t actually have the skills to find vulnerabilities in the curl codebase. Instead, these amateur security researchers are using LLMs to “find vulnerabilities”. Spoiler, LLMs aren’t yet capable of this task. But LLMs are capable of writing fake vulnerability reports that look very convincing at first read. The game is usually revealed when the project asks a question, and the fake researcher feeds the LLM response back into the bug report.

This trend hasn’t slowed, and the curl project is now viewing the AI generated vulnerability reports as a form of DDoS. In response, the curl Hackerone bounty program will soon ask a question with every entry: “Did you use an AI to find the problem or generate this submission?” An affirmative answer won’t automatically disqualify the report, but it definitely puts the burden on the reporter to demonstrate that the flaw is real and wasn’t hallucinated. Additionally, “AI slop” reports will result in permanent bans for the reporter.

It’s good to see that not all AI content is completely disallowed, as it’s very likely that LLMs will be involved in finding and describing vulnerabilities before long. Just not in this naive way, where a single prompt results in a vulnerability find and generates a patch that doesn’t even apply. Ironically, one of the tells of an AI generated report is that it’s too perfect, particularly for someone’s first report. AI is still the hot new thing, so this issue likely isn’t going away any time soon.

Bits and Bytes

A supply chain attack has been triggered against several hundred Magento e-commerce sites, via at least three software vendors distributing malicious code. One of the very odd elements to this story is that it appears this malicious code has been incubating for six years, and only recently invoked for malicious behavior.

On the WordPress side of the fence, the Ottokit plugin was updated last month to fix a critical vulnerability. That update was force pushed to the majority of WordPress sites running that plugin, but that hasn’t stopped threat actors from attempting to use the exploit, with the first attempts coming just an hour and a half after disclosure.

It turns out it’s probably not a great idea to allow control codes as part of file names. Portswigger has a report of a couple ways VS Code can do the wrong thing with such filenames.

And finally, this story comes with a disclaimer: Your author is part of Meshtastic Solutions and the Meshtastic project. We’ve talked about Meshtastic a few times here on Hackaday, and would be remiss not to point out CVE-2025-24797. This buffer overflow could theoretically result in RCE on the node itself. I’ve seen at least one suggestion that this is a wormable vulnerability, which may be technically true, but seems quite impractical in practice. Upgrade your nodes to at least release 2.6.2 to get the fix.

Rayhunter Sniffs Out Stingrays for $30

Por: Lewin Day
5 Mayo 2025 at 11:00

These days, if you’re walking around with a cellphone, you’ve basically fitted an always-on tracking device to your person. That’s even more the case if there happens to be an eavesdropping device in your vicinity. To combat this, the Electronic Frontier Foundation has created Rayhunter as a warning device.

Rayhunter is built to detect IMSI catchers, also known as Stingrays in the popular lexicon. These are devices that attempt to capture your phone’s IMSI (international mobile subscriber identity) number by pretending to be real cell towers. Information on these devices is tightly controlled by manufacturers, which largely market them for use by law enforcement and intelligence agencies.

Rayhunter in use.

To run Rayhunter, all you need is an Orbic RC400L mobile hotspot, which you can currently source for less than $30 USD online. Though experience tells us that could change as the project becomes more popular with hackers. The project offers an install script that will compile the latest version of the software and flash it to the device from a  computer running Linux or macOS — Windows users currently have to jump through a few extra hoops to get the same results.

Rayhunter works by analyzing the control traffic between the cell tower and the hotspot to look out for hints of IMSI-catcher activity. Common telltale signs are requests to switch a connection to less-secure 2G standards, or spurious queries for your device’s IMSI. If Rayhunter notes suspicious activity, it turns a line on the Orbic’s display red as a warning. The device’s web interface can then be accessed for more information.

While IMSI catchers really took off on less-secure 2G networks, there are developments that allow similar devices to work on newer cellular standards, too. Meanwhile, if you’ve got your own projects built around cellular security, don’t hesitate to notify the tipsline!

This Week in Security: AirBorne, EvilNotify, and Revoked RDP

2 Mayo 2025 at 14:00

This week, Oligo has announced the AirBorne series of vulnerabilities in the Apple Airdrop protocol and SDK. This is a particularly serious set of issues, and notably affects MacOS desktops and laptops, the iOS and iPadOS mobile devices, and many IoT devices that use the Apple SDK to provide AirPlay support. It’s a group of 16 CVEs based on 23 total reported issues, with the ramifications ranging from an authentication bypass, to local file reads, all the way to Remote Code Execution (RCE).

AirPlay is a WiFi based peer-to-peer protocol, used to share or stream media between devices. It uses port 7000, and a custom protocol that has elements of both HTTP and RTSP. This scheme makes heavy use of property lists (“plists”) for transferring serialized information. And as we well know, serialization and data parsing interfaces are great places to look for vulnerabilities. Oligo provides an example, where a plist is expected to contain a dictionary object, but was actually constructed with a simple string. De-serializing that plist results in a malformed dictionary, and attempting to access it will crash the process.

Another demo is using AirPlay to achieve an arbitrary memory write against a MacOS device. Because it’s such a powerful primative, this can be used for zero-click exploitation, though the actual demo uses the music app, and launches with a user click. Prior to the patch, this affected any MacOS device with AirPlay enabled, and set to either “Anyone on the same network” or “Everyone”. Because of the zero-click nature, this could be made into a wormable exploit.

Apple has released updates for their products for all of the CVEs, but what’s going to really take a long time to clean up is the IoT devices that were build with the vulnerable SDK. It’s likely that many of those devices will never receive updates.

EvilNotify

It’s apparently the week for Apple exploits, because here’s another one, this time from [Guilherme Rambo]. Apple has built multiple systems for doing Inter Process Communications (IPC), but the simplest is the Darwin Notification API. It’s part of the shared code that runs on all of Apple’s OSs, and this IPC has some quirks. Namely, there’s no verification system, and no restrictions on which processes can send or receive messages.

That led our researcher to ask what you may be asking: does this lack of authentication allow for any security violations? Among many novel notifications this technique can spoof, there’s one that’s particularly problematic: The device “restore in progress”. This locks the device, leaving only a reboot option. Annoying, but not a permanent problem.

The really nasty version of this trick is to put the code triggering a “restore in progress” message inside an app’s widget extension. iOS loads those automatically at boot, making for an infuriating bootloop. [Guilherme] reported the problem to Apple, made a very nice $17,500 in the progress. The fix from Apple is a welcome surprise, in that they added an authorization mechanism for sensitive notification endpoints. It’s very likely that there are other ways that this technique could have been abused, so the more comprehensive fix was the way to go.

Jenkins

Continuous Integration is one of the most powerful tools a software project can use to stay on top of code quality. Unfortunately as those CI toolchains get more complicated, they are more likely to be vulnerable, as [John Stawinski] from Praetorian has discovered. This attack chain would target the Node.js repository at Github via an outside pull request, and ends with code execution on the Jenkins host machines.

The trick to pulling this off is to spoof the timestamp on a Pull Request. The Node.js CI uses PR labels to control what CI will do with the incoming request. Tooling automatically adds the “needs-ci” label depending on what files are modified. A maintainer reviews the PR, and approves the CI run. A Jenkins runner will pick up the job, compare that the Git timestamp predated the maintainer’s approval, and then runs the CI job. Git timestamps are trivial to spoof, so it’s possible to load an additional commit to the target PR with a commit timestamp in the past. The runner doesn’t catch the deception, and runs the now-malicious code.

[John] reported the findings, and Node.js maintainers jumped into action right away. The primary fix was to do SHA sum comparisons to validate Jenkins runs, rather than just relying on timestamp. Out of an abundance of caution, the Jenkins runners were re-imaged, and then [John] was invited to try to recreate the exploit. The Node.js blog post has some additional thoughts on this exploit, like pointing out that it’s a Time-of-Check-Time-of-Use (TOCTOU) exploit. We don’t normally think of TOCTOU bugs where a human is the “check” part of the equation.

2024 in 0-days

Google has published an overview of the 75 zero-day vulnerabilities that were exploited in 2024. That’s down from the 98 vulnerabilities exploited in 2023, but the Threat Intelligence Group behind this report are of the opinion that we’re still on an upward trend for zero-day exploitation. Some platforms like mobile and web browsers have seen drastic improvements in zero-day prevention, while enterprise targets are on the rise. The real stand-out is the targeting of security appliances and other network devices, at more than 60% of the vulnerabilities tracked.

When it comes to the attackers behind exploitation, it’s a mix between state-sponsored attacks, legal commercial surveillance, and financially motivated attacks. It will be interesting to see how 2025 stacks up in comparison. But one thing is for certain: Zero-days aren’t going away any time soon.

Perplexing Passwords for RDP

The world of computer security just got an interesting surprise, as Microsoft declared it not-a-bug that Windows machines will continue to accept revoked credentials for Remote Desktop Protocol (RDP) logins. [Daniel Wade] discovered the issue and reported it to Microsoft, and then after being told it wasn’t a security vulnerability, shared his report with Ars Technica.

So what exactly is happening here? It’s the case of a Windows machine login via Azure or a Microsoft account. That account is used to enable RDP, and the machine caches the username and password so logins work even when the computer is “offline”. The problem really comes in how those cached passwords get evicted from the cache. When it comes to RDP logins, it seems they are simply never removed.

There is a stark disconnect between what [Wade] has observed, and what Microsoft has to say about it. It’s long been known that Windows machines will cache passwords, but that cache will get updated the next time the machine logs in to the domain controller. This is what Microsoft’s responses seem to be referencing. The actual report is that in the case of RDP, the cached passwords will never expire, regardless of changing that password in the cloud and logging on to the machine repeatedly.

Bits and Bytes

Samsung makes a digital signage line, powered by the MagicINFO server application. That server has an unauthenticated endpoint, accepting file uploads with insufficient filename sanitization. That combination leads to arbitrary pre-auth code execution. While that’s not great, what makes this a real problem is that the report was first sent to Samsung in January, no response was ever received, and it seems that no fixes have officially been published.

A series of Viasat modems have a buffer overflow in their SNORE web interface. This leads to unauthenticated, arbitrary code execution on the system, from either the LAN or OTA interface, but thankfully not from the public Internet itself. This one is interesting in that it was found via static code analysis.

IPv6 is the answer to all of our IPv4 induced woes, right? It has Stateless Address Autoconfiguration (SLAAC) to handle IP addressing without DHCP, and Router Advertisement (RA) to discover how to route packets. And now, taking advantage of that great functionality is Spellbinder, a malicious tool to pull off SLACC attacks and do DNS poisoning. It’s not entirely new, as we’ve seen Man in the Middle attacks on IPv4 networks for years. IPv6 just makes it so much easier.

This Week in Security: XRP Poisoned, MCP Bypassed, and More

25 Abril 2025 at 14:00

Researchers at Aikido run the Aikido Intel system, an LLM security monitor that ingests the feeds from public package repositories, and looks for anything unusual. In this case, the unusual activity was five rapid-fire releases of the xrpl package on NPM. That package is the XRP Ledger SDK from Ripple, used to manage keys and build crypto wallets. While quick point releases happen to the best of developers, these were odd, in that there were no matching releases in the source GitHub repository. What changed in the first of those fresh releases?

The most obvious change is the checkValidityOfSeed() function added to index.ts. That function takes a string, and sends a request to a rather odd URL, using the supplied string as the ad-referral header for the HTML request. The name of the function is intended to blend in, but knowing that the string parameter is sent to a remote web server is terrifying. The seed is usually the root of trust for an individual’s cryptocurrency wallet. Looking at the actual usage of the function confirms, that this code is stealing credentials and keys.

The releases were made by a Ripple developer’s account. It’s not clear exactly how the attack happened, though credential compromise of some sort is the most likely explanation. Each of those five releases added another bit of malicious code, demonstrating that there was someone with hands on keyboard, watching what data was coming in.

The good news is that the malicious releases only managed a total of 452 downloads for the few hours they were available. A legitimate update to the library, version 4.2.5, has been released. If you’re one of the unfortunate 452 downloads, it’s time to do an audit, and rotate the possibly affected keys.

Zyxel FLEX

More specifically, we’re talking about Zyxel’s USG FLEX H series of firewall/routers. This is Zyxel’s new Arm64 platform, running a Linux system they call Zyxel uOS. This series is for even higher data throughput, and given that it’s a new platform, there are some interesting security bugs to find, as discovered by [Marco Ivaldi] of hn Security and [Alessandro Sgreccia] at 0xdeadc0de. Together they discovered an exploit chain that allows an authenticated user with VPN access only to perform a complete device takeover, with root shell access.

The first bug is a wild one, and is definitely something for us Linux sysadmins to be aware of. How do you handle a user on a Linux system, that you don’t want to have SSH access to the system shell? I’ve faced this problem when a customer needed SFTP access to a web site, but definitely didn’t need to run bash commands on the server. The solution is to set the user’s shell to nologin, so when SSH connects and runs the shell, it prints a message, and ends the shell, terminating the SSH connection. Based on the code snippet, the FLEX is doing something similar, perhaps with -false set as the shell instead:

$ ssh user@192.168.169.1
(user@192.168.169.1) Password:
-false: unknown program '-false'
Try '-false --help' for more information.
Connection to 192.168.169.1 closed.

It’s slightly janky, but seems set up correctly, right? There’s one more step to do this completely: Add a Match entry to sshd_config, and disable some of the other SSH features you may not have thought about, like X11 forwarding, and TCP forwarding. This is the part that Zyxel forgot about. VPN-only users can successfully connect over SSH, and the connection terminates right away with the invalid shell, but in that brief moment, TCP traffic forwarding is enabled. This is an unintended security domain transverse, as it allows the SSH user to redirect traffic into internal-only ports.

Next question to ask, is there any service running inside the appliance that provides a pivot point? How about PostgreSQL? This service is set up to allow local connections on port 5432 — without a password. And PostgreSQL has a wonderful feature, allowing a COPY FROM command to specify a function to run using the system shell. It’s essentially arbitrary shell execution as a feature, but limited to the PostgreSQL user. It’s easy enough to launch a reverse shell to have ongoing shell access, but still limited to the PostgreSQL user account.

There are a couple directions exploitation can go from there. The /tmp/webcgi.log file is accessible, which allows for grabbing an access token from a logged-in admin. But there’s an even better approach, in that the unprivileged user can use the system’s Recovery Manager to download system settings, repack the resulting zip with a custom binary, re-upload the zip using Recovery Manager, and then interact with the uploaded files. A clever trick is to compile a custom binary that uses the setuid(0) system call, and because Recovery Manager writes it out as root, with the setuid bit set, it allows any user to execute it and jump straight to root. Impressive.

Power Glitching an STM32

Micro-controllers have a bit of a weird set of conflicting requirements. They need to be easily flashed, and easily debugged for development work. But once deployed, those same chips often need to be hardened against reading flash and memory contents. Chips like the STM32 series from ST Microelectronics have multiple settings to keep chip contents secure. And Anvil Secure has some research on how some of those protections could be defeated. Power Glitching.

The basic explanation is that these chips are only guaranteed to work when run inside their specified operating conditions. If the supply voltage is too low, be prepared for unforeseen consequences. Anvil tried this, and memory reads were indeed garbled. This is promising, as the memory protection settings are read from system memory during the boot process. In fact, one of the hardest challenges to this hack was determining the exact timing needed to glitch the right memory read. Once that was nailed down, it took about 6 hours of attempts and troubleshooting to actually put the embedded system into a state where firmware could be extracted.

MCP Line Jumping

Trail of Bits is starting a series on MCP security. This has echoes of the latest FLOSS Weekly episode, talking about agentic AI and how Model Context Protocol (MCP) is giving LLMs access to tools to interact with the outside world. The security issue covered in this first entry is Line Jumping, also known as tool poisoning.

It all boils down to the fact that MCPs advertise the tools that they make available. When an LLM client connects to that MCP, it ingests that description, to know how to use the tool. That description is an opportunity for prompt injection, one of the outstanding problems with LLMs.

Bits and Bytes

Korean SK Telecom has been hacked, though not much information is available yet. One of the notable statements is that SK Telecom is offering customers a free SIM swapping protection service, which implies that a customer database was captured, that could be used for SIM swapping attacks.

WatchTowr is back with a simple pre-auth RCE in Commvault using a malicious zip upload. It’s a familiar story, where an unauthenticated endpoint can trigger a file download from a remote server, and file traversal bugs allow unzipping it in an arbitrary location. Easy win.

SSD Disclosure has discovered a pair of Use After Free bugs in Google Chrome, and Chrome’s Miracleptr prevents them from becoming actual exploits. That technology is a object reference count, and “quarantining” deleted objects that still show active references. And for these particular bugs, it worked to prevent exploitation.

And finally, [Rohan] believes there’s an argument to be made, that the simplicity of ChaCha20 makes it a better choice as a symmetric encryption primitive than the venerable AES. Both are very well understood and vetted encryption standards, and ChaCha20 even manages to do it with better performance and efficiency. Is it time to hang up AES and embrace ChaCha20?

This Week in Security: No More CVEs, 4chan, and Recall Returns

18 Abril 2025 at 14:00

The sky is falling. Or more specifically, it was about to fall, according to the security community this week. The MITRE Corporation came within a hair’s breadth of running out of its contract to maintain the CVE database. And admittedly, it would be a bad thing if we suddenly lost updates to the central CVE database. What’s particularly interesting is how we knew about this possibility at all. An April 15 letter sent to the CVE board warned that the specific contract that funds MITRE’s CVE and CWE work was due to expire on the 16th. This was not an official release, and it’s not clear exactly how this document was leaked.

Many people made political hay out of the apparent imminent carnage. And while there’s always an element of political maneuvering when it comes to contract renewal, it’s worth noting that it’s not unheard of for MITRE’s CVE funding to go down to the wire like this. We don’t know how many times we’ve been in this position in years past. Regardless, MITRE has spun out another non-profit, The CVE Foundation, specifically to see to the continuation of the CVE database. And at the last possible moment, CISA has announced that it has invoked an option in the existing contract, funding MITRE’s CVE work for another 11 months.

Android Automatic Reboots

Mobile devices are in their most secure state right after boot, before the user password is entered to unlock the device for the first time. Tools like Cellebrite will often work once a device has been unlocked once, but just can’t exploit a device in the first booted state. This is why Google is rolling out a feature, where Android devices that haven’t been unlocked for three days will automatically reboot.

Once a phone is unlocked, the encryption keys are stored in memory, and it only takes a lock screen bypass to have full access to the device. But before the initial unlock, the device is still encrypted, and the keys are safely stored in the hardware security module. It’s interesting that this new feature isn’t delivered as an Android OS update, but as part of the Google Play Services — the closed source libraries that run on official Android phones.

4chan

4chan has been hacked. It turns out, running ancient PHP code and out-of-date libraries on a controversial site is not a great idea. A likely exploit chain has been described, though this should be considered very unofficial at this point: Some 4chan boards allow PDF uploads, but the server didn’t properly vet those files. A PostScript file can be uploaded instead of a PDF, and an old version of Ghostscript processes it. The malicious PostScript file triggers arbitrary code execution in Ghostscript, and a SUID binary is used to elevate privileges to root.

PHP source code of the site has been leaked, and the site is still down as of the time of writing. It’s unclear how long restoration will take. Part of the fallout from this attack is the capture and release of internal discussions, pictures of the administrative tools, and even email addresses from the site’s administration.

Recall is Back

Microsoft is back at it, working to release Recall in a future Windows 11 update. You may remember our coverage of this, castigating the security failings, and pointing out that Recall managed to come across as creepy. Microsoft wisely pulled the project before rolling it out as a full release.

If you’re not familiar with the Recall concept, it’s the automated screenshotting of your Windows machine every few seconds. The screenshots are then locally indexed with an LLM, allowing for future queries to be run against the data. And once the early reviewers got over the creepy factor, it turns out that’s genuinely useful sometimes.

On top of the security hardening Microsoft has already done, this iteration of Recall is an opt-in service, with an easy pause button to temporarily disable the snapshot captures. This is definitely an improvement. Critics are still sounding the alarm, but for a much narrower problem: Recall’s snapshots will automatically extract information from security focused applications. Think about Signal’s disappearing messages feature. If you send such a message to a desktop user, that has Recall enabled, the message is likely stored in that user’s Recall database.

It seems that Microsoft has done a reasonably good job of cleaning up the Recall feature, particularly by disabling it by default. It seems like the privacy issues could be furthered addressed by giving applications and even web pages a way to opt out of Recall captures, so private messages and data aren’t accidentally captured. As Recall rolls out, do keep in mind the potential extra risks.

16,000 Symlinks

It’s been recently discovered that over 16,000 Fortinet devices are compromised with a trivial backdoor, in the form of a symlink making the root filesystem available inside the web-accessible language folder. This technique is limited to devices that have the SSL VPN enabled. That system exposes a web interface, with multiple translation options. Those translation files live in a world-accessible folder on the web interface, and it makes for the perfect place to hide a backdoor like this one. It’s not a new attack, and Fortinet believes the exploited devices have harbored this backdoor since the 2023-2024 hacking spree.

Vibes

We’re a little skeptical on the whole vibe coding thing. Our own [Tyler August] covered one of the reasons why. LLMs are likely to hallucinate package names, and vibe coders may not check closely, leading to easy typosquatting (LLMsquatting?) attacks. Figure out the likely hallucinated names, register those packages, and profit.

But what about Vibe Detections? OK, we know, letting an LLM look at system logs for potentially malicious behavior isn’t a new idea. But [Claudio Contin] demonstrates just how easy it can be, with the new EDV tool. Formally not for production use, this new gadget makes it easy to take Windows system events, and feed them into Copilot, looking for potentially malicious activity. And while it’s not perfect, it did manage to detect about 40% of the malicious tests that Windows Defender missed. It seems like LLMs are going to stick around, and this might be one of the places they actually make sense.

Bits and Bytes

Apple has pushed updates to their entire line, fixing a pair of 0-day vulnerabilities. The first is a wild vulnerability in CoreAudio, in that playing audio from a malicious audio file can lead to arbitrary code execution. The chaser is the flaw in the Pointer Authentication scheme, that Apple uses to prevent memory-related vulnerabilities. Apple has acknowledged that these flaws were used in the wild, but no further details have been released.

The Gnome desktop has an interesting problem, where the yelp help browser can be tricked into reading the contents of arbitrary filesystem files. Combined with the possibility of browser links automatically opening in yelp, this makes for a much more severe problem than one might initially think.

And for those of us following along with Google Project Zero’s deep dive into the Windows Registry, part six of that series is now available. This installment dives into actual memory structures, as well as letting us in on the history of why the Windows registry is called the hive and uses the 0xBEE0BEE0 signature. It’s bee themed, because one developer hated bees, and another developer thought it would be hilarious.

❌
❌