This Week in Security: GhostWrite, Localhost, and More

You may have heard some scary news about RISC-V CPUs. There’s good news, and bad news, and the whole thing is a bit of a cautionary tale. GhostWrite is a devastating vulnerability in a pair of T-Head XuanTie RISC-V CPUs. There are also unexploitable crashes in another T-Head CPU and the QEMU soft core implementation. These findings come courtesy of a group of researchers at the CISPA Helmholtz Center for Information Security in Germany. They took a look at RISC-V cores and asked the question: do any of these instructions do anything unexpected? The answer, obviously, was “yes”.

Undocumented instructions have been around just about as long as we’ve had von Neumann architecture processors. The RISC-V ISA puts a lampshade on that reality, calling them “vendor specific custom ISA extensions”. The problem is that vendors are in a hurry, have limited resources, and deadlines wait for no one. So sometimes things make it out the door with problems. To find those problems, CISPA researchers put together a test framework called RISCVuzz, which is all about running each instruction on multiple chips and watching for oddball behavior. They found a couple of “halt-and-catch-fire” problems, but the real winner (loser) is GhostWrite.

Now, this isn’t a speculative attack like Meltdown or Spectre. It’s more accurate to say that it’s a memory mapping problem. Memory mapping helps the OS keep programs independent of each other by giving them a simplified memory layout, doing the mapping from each program to physical memory in the background. There are instructions that operate using these virtual addresses, and one such is vs128.v. That instruction is intended to manipulate vectors using virtual addressing. The problem is that it actually operates directly on physical memory addresses, bypassing even the cache. That covers not only RAM, but also hardware with memory-mapped addresses, entirely bypassing the OS. This instruction hands over the keys to the kingdom.

So yeah, that’s bad, for this one particular RISC-V model. The only known fix is to disable the vector extensions altogether, which comes with a massive performance penalty. One benchmark showed a 77% performance penalty, cutting the CPU’s throughput by well over half. The lessons here are that as exciting as RISC-V is, with its open ISA, individual chips aren’t necessarily completely Open Source, and implementation quality may vary wildly between vendors.

0.0.0.0 Day Vulnerability

We’ve come a long way since the days when the web was young, and the webcam was strictly for checking on how much coffee was left. Now we have cross-site scripting attacks and cross-site request forgeries to deal with. You might be tempted to think that we’ve got browser security down. You’d be wrong. But finally, a whole class of problems is getting cleaned up, along with a related problem you probably didn’t even realize you had. That last one is thanks to researchers at Oligo, who bring us this story.

The problem is that websites from the wider Internet are accessing resources on the local network or even the localhost. What happens if a website tries to load a script using the IP address of your router? Is there some clever way to change settings using nothing but a JS script load? In some cases, yes. Cross Origin Resource Sharing (CORS) fixes this, surely? CORS doesn’t prevent requests, it just limits what the browser can do after the request has been made. It’s a bit embarrassing how long this has been an issue, but Private Network Access (PNA) finally fixes it, available as an origin trial in Chrome 128. PNA divides the world into three networks, with the Internet as the least privileged layer, then the local network, and finally the local machine and localhost as the innermost, most protected layer. A page hosted on localhost can pull scripts from the Internet, but not the other way around.
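The three-tier model can be sketched in a few lines. This classification function is an illustration using Python’s ipaddress module, not the exact algorithm from the PNA spec (which also handles things like .local names and more nuanced address ranges):

```python
import ipaddress

# Sketch of the three PNA tiers: public internet < private network
# < local machine. The real spec is more involved; this is a rough
# approximation for illustration.
def pna_tier(host: str) -> str:
    if host in ("localhost", "127.0.0.1", "::1"):
        return "local machine"
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return "public internet"        # plain DNS name: assume public
    if ip.is_loopback:
        return "local machine"
    if ip.is_private or ip.is_link_local:
        return "private network"
    return "public internet"

print(pna_tier("localhost"))      # local machine
print(pna_tier("192.168.1.1"))    # private network
print(pna_tier("93.184.216.34"))  # public internet
```

Under PNA, a page in a less-privileged tier needs explicit permission before fetching from a more-privileged one.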

And this brings us to 0.0.0.0. What exactly is that IP address? Is it even an IP address? Sort of. In some cases, like in a daemon’s configuration file, it indicates all the network devices on the local machine. It also gets used in DHCP as the source IP address for DHCP requests before the machine has an IP address. But what happens when you use it in a browser? On Windows, nothing much. 0.0.0.0 is a Unixism that hasn’t (yet) made its way into Windows. But on Linux and MacOS machines, all the major browsers treat it as distinct from 127.0.0.1, but also as functionally equivalent to localhost. And that’s really not great, as evidenced by the list of vulnerabilities in various applications when a browser can pull this off. The good news is that it’s finally getting fixed.
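You can see the quirk for yourself with a few lines of standard-library Python. This is a demonstration sketch (Linux and macOS only; on Windows the connect() call fails instead):

```python
import socket

# A server bound to loopback only...
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))     # loopback, ephemeral port
server.listen(1)
port = server.getsockname()[1]

# ...is nevertheless reachable via the "non-address" 0.0.0.0, which
# the Linux/macOS kernel treats as "this host". Browsers inherited
# the same behavior, which is what the Oligo research abuses.
client = socket.create_connection(("0.0.0.0", port), timeout=2)
print("connected via 0.0.0.0")

client.close()
server.close()
```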

PLC Sleuthing

Researchers at Claroty have spent some time digging into Unitronics Programmable Logic Controllers (PLCs), as those were notably cracked in a hacking campaign last fall. This started with a very familiar story, of rigging up a serial connection to talk to the controller. There is an official tool to administer the controller over serial, so capturing that data stream seemed promising. This led to documenting the PCOM protocol, and eventually building a custom admin application. The goal here is to build tooling for forensics, to pull data off of one of those compromised devices.

You Don’t Need to See My JWT

Siemens had a bit of a problem with their AMA Cloud web application. According to researchers at Traceable ASPEN, it’s a surprisingly common problem with React web applications. The login flow here is that upon first visiting the page, the user is redirected to an external Single Sign On provider. What catches the eye is that the React application just about fully loads before that redirect fires. So what happens if that redirect JS code is disabled? There’s the web application, just waiting for data from the back end.

That would be enough to be interesting, but this goes a step further. After login, the authenticated session is handled with a JSON Web Token (JWT). The front-end code checked for that token, but never verified its signature. And then, most surprisingly, the APIs behind the service didn’t check for a JWT either. The authentication was all client-side, in the browser. Whoops. Now to their credit, Siemens pushed a fix within 48 hours of the report, and didn’t drop the ball on disclosure.
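To see why a client-side token check proves nothing, here’s a sketch of decoding a JWT payload without verifying the signature, using only the standard library. The token contents here are made up:

```python
import base64
import json

def decode_jwt_unverified(token: str) -> dict:
    """Decode a JWT's payload WITHOUT checking the signature.
    The payload is just base64url-encoded JSON, readable (and
    forgeable) by anyone -- only signature verification with the
    server's key makes it trustworthy."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A forged token with a garbage signature still "decodes" just fine:
header = base64.urlsafe_b64encode(b'{"alg":"HS256","typ":"JWT"}').rstrip(b"=")
payload = base64.urlsafe_b64encode(b'{"sub":"admin","role":"superuser"}').rstrip(b"=")
forged = b".".join([header, payload, b"bogus-signature"]).decode()

print(decode_jwt_unverified(forged))   # {'sub': 'admin', 'role': 'superuser'}
```

A check like this on the front end is a convenience, not a security boundary; the signature must be verified server-side.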

(Hackaday’s parent company, Supplyframe, is owned by Siemens.)

Bits and Bytes

If you run NeatVNC, 0.8.1 is a pretty important security update. Specifying the security type is left up to clients, and “none” is a valid option. That’s not great.

Apparently we owe Jia Tan a bit of our thanks, as the extra attention on SSH has shaken loose a few interesting findings. While there isn’t a single glaring vulnerability to cover, HD Moore and Rob King found a bunch of implementation problems, particularly in embedded devices. This was presented at Black Hat, so hopefully the presentation will eventually be made available. For now, we do have a nifty new tool, SSHamble, to play with.

In 2023, the Homebrew project undertook an audit by Trail of Bits. And while there weren’t any High severity problems found, there were a decent handful of medium and lower issues. Those have mostly been fixed, and the audit results have now been made public. Homebrew is the “missing package manager for MacOS”, and if that sounds interesting, be sure to watch for next week’s FLOSS Weekly episode, because we’re chatting with Homebrew about this, their new Workbrew announcement, and more!

FLOSS Weekly Episode 795: Liferay, Now We’re Thinking With Portals

This week Jonathan Bennett and Doc Searls chat with Olaf Kock and Dave Nebinger about Liferay! That’s a Java project that started as an implementation of a web portal, and has turned into a very flexible platform for any sort of web application. How has this Open Source project turned into a very successful business? And how is it connected to the most iconic children’s educational show of all time? Listen to find out!

Did you know you can watch the live recording of the show right on our YouTube channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: Echospoofing, Ransomware Records, and Github Attestations

It’s a bit of bitter irony, when a security product gets used maliciously, to pull off the exact attack it was designed to prevent. Enter Proofpoint, and the EchoSpoofing attack. Proofpoint offers an email security product, filtering spam and malicious incoming emails, and also handling SPF, DKIM, and DMARC headers on outgoing email. How does an external service provide those email authentication headers?

One of the cardinal sins of running an email server is to allow open relaying. That’s when anyone can forward email through an SMTP server without authentication. What we have here is two nearly open relays that wound up with spoofed emails getting authenticated just like the real thing. The first offender is Microsoft’s Office365, which seems to completely skip checking for email spoofing when using SMTP relaying from an allowed IP address. This means a valid Office365 account allows sending emails as any address. The other half relies on the way Proofpoint works normally, accepting SMTP traffic from certain IP addresses, and adding the authentication headers to those emails. There’s an option in Proofpoint to add the Microsoft Office 365 servers to that list, and apparently quite a few companies simply select that option.

The end result is that a clever spammer can send millions of completely legitimate-looking emails every day, convincing even to sophisticated users. Over six months of activity, averaging three million emails a day, this campaign managed just over half a billion malicious emails from multiple high-profile domains.

The good news here is that Proofpoint and Guardio discovered the scheme, and worked with Microsoft to develop the X-OriginatorOrg header that is now applied to every email sent from or through the Office365 servers. This header marks the account tenant the email belongs to, giving vendors like Proofpoint a simple way to determine email validity.
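As a sketch of the kind of filtering this enables, here’s a toy check against the X-OriginatorOrg header using the standard library. The header name is real; the tenant values and the policy itself are hypothetical:

```python
from email import message_from_string

# Hypothetical policy: only accept relayed Office365 mail whose tenant
# matches the customer's own tenant. The tenant names are made up.
EXPECTED_TENANT = "example-corp.onmicrosoft.com"

raw = """\
From: ceo@example.com
To: victim@example.net
X-OriginatorOrg: attacker-tenant.onmicrosoft.com
Subject: Urgent wire transfer

Please pay immediately.
"""

msg = message_from_string(raw)
tenant = msg.get("X-OriginatorOrg", "")
verdict = "pass" if tenant == EXPECTED_TENANT else "reject"
print(verdict)   # reject
```

The point is that the header pins each relayed message to a specific tenant, so spoofed mail relayed through someone else’s Office365 account no longer inherits the victim domain’s authentication for free.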

Ransomware Gets Bigger

It’s not just spam emails that posted eye-watering numbers this year. We’ve broken the record for the biggest ransomware payment, too. Zscaler is reporting a $75 million ransom payment from a “Fortune 50” company. Reading the tea leaves indicates Cencora as a likely candidate, as this pharmaceutical giant was hit by an attack in February, and none of the usual suspects ever claimed responsibility. This leads one to suspect that it was the Dark Angels ransomware operation.

The Linux CNA, with a Grain of Salt

One of the interesting recent security developments is the proliferation of CNAs, CVE Numbering Authorities, particularly among Open Source projects. Part of the reason is the growing prevalence of CVEs issued for bogus bugs, or with completely overblown CVSS scores. One of these new CNAs is the Linux kernel itself, which has taken an odd approach to handing out CVEs: every bug gets one. On one hand, the kernel developers make a valid point that in kernel land, basically any bug can be a vulnerability. And it’s certainly one way to put the pressure on vendors to use more up-to-date kernel releases. On the other hand, Linux is now the most prolific CNA measured in CVEs generated each year.

The situation does not sit well with everyone. Grsecurity has published a case study on CVE-2021-4440 that highlights some issues. The emphasis here seems to be that the kernel team uses a lot of automation tools to manage CVEs, and these tools aren’t always great at accuracy or clarity. Now one thing to keep in mind: grsecurity in particular has had a rocky relationship with the upstream kernel, so a grain of salt might be taken with this one. Regardless, it seems like the kernel is experiencing some growing pains as it comes into its new role as a CNA.

Github Adds Attestation

Github has added a new attestation feature, presumably spurred by the XZ attack. Github Attestations are cryptographic proof that a tarball or binary was built from the publicly available source code, and hasn’t been tampered with. This has been out for about a month now, and this week is joined by a quick starter guide to publishing attestations with everything a project releases. There’s also a Kubernetes project that only allows running images that have valid attestations in place, which is handy!

ZDI on Windows

ZDI has some interesting coverage of some recently discovered vulnerabilities in antivirus products, all around the idea of link following. Put simply, what if an antivirus detects a malicious file, but before it can delete that file, an attacker switches the file out with a filesystem link to a different file? In many cases, the antivirus follows the link and deletes something it shouldn’t. Part two is some more advanced ways to pull off this trick, like using NTFS Alternate Data Streams to trick the antivirus into action.

While we’re talking ZDI disclosures, there’s a Deep Sea Electronics communication module that had some problems, like a configuration backup endpoint that has no authentication check. There are a couple of other endpoints missing authentication, as well as trivial Denial of Service situations. Unfortunately this is a case where the vendor dropped the ball, and these vulnerabilities are assumed to be unpatched for this device.

Bits and Bytes

The unfortunate state of mobile applications is that just because it’s published on an official app store, there is no guarantee that an app is safe. Mandrake was first seen in 2016, with several waves of malicious activity, only to disappear for a couple of years. It’s back, and has eluded notice until very recently. I was intrigued by the idea that it excluded 90 countries from being targeted, and found this in the source document: “It avoids running in low income states, African nations, former Soviet Union countries or predominantly Arabic-speaking nations.”

Once again, in IoT the S stands for security. A WiFi security camera with a generic brand name shipped with a hard-coded root password. On this one, the journey is most of the fun. But really, WiFi cameras have bigger problems, and it’s apparently becoming common for thieves to use WiFi jammers to cover their tracks. Hardwire your cameras, and keep them away from the Internet.

While it didn’t rise to the Crowdstrike level, Microsoft had an Azure outage this week, that caused some headache. It turns out it was a DDoS attack, and Microsoft’s own Denial of Service mitigation tooling amplified the attack instead of mitigating it. Decidedly non-ideal.

This Week in Security: EvilVideo, Crowdstrike, and InSecure Boot

First up this week is the story of EvilVideo, a clever Telegram exploit that disguises an APK as a video file. The earliest record we have of this exploit is on June 6th, when it was advertised on a hacking forum.

Researchers at ESET discovered a demo of the exploit, and were able to disclose it to Telegram on June 26th. It was finally patched on July 11. While it was advertised as a “one-click” exploit, that’s being a bit generous, as the ESET demo video shows. But it was a clever exploit. The central trick is that an APK file can be sent in a Telegram chat, and it displays what looks like a video preview. Tap the “video” file to watch it, and Telegram prompts you to play it with an external player. But it turns out the external player in this case is Android itself, which prompts the target to install the APK. Sneaky.

Traffic Control

We briefly covered this story a couple months ago, focusing on how bad of an idea it is to threaten a good faith researcher with legal action. Well the details of this traffic controller hack are available, and it’s about what you’d expect. Part one is all about getting the hardware and finding a trivial security bypass. The “web security” tab in the user interface seems to be an iframe, and navigating directly to that iframe address simply doesn’t trigger a login prompt. That’s the issue that [Andrew Lemon] first disclosed to Q-Free, leading to the legal nastygram.

Well now we have part two of that research, and spoilers: it doesn’t get any better. A couple false starts led [Andrew] to a desperation move. He had a new box to test and no login for it, so he started at the basics with the Burp proxy. And lo and behold, in the request was an odd string: 1.3.6.1.4.1.1206.3.36.1.6.10.1*IDO_0=2&

That is an Object IDentifier (OID) for the Simple Network Management Protocol (SNMP). These things use a version of SNMP known as National Transportation Communications for Intelligent Transportation System Protocol, or NTCIP. And this device not only uses that protocol, it seems to do so without authentication. Among the fields that are readable and writable without auth are the system username and system password. No hashing in sight. Now we can only hope that this is ancient hardware that isn’t in use any longer, or at least no longer connected to the Internet. And we’ll also hope that vendors like Q-Free have learned their lessons since this software was written. Though given their response to the vulnerability disclosure, we’re not holding our breath.
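For illustration, here’s a rough parser for that request parameter. The field naming is a guess from the captured string, not vendor documentation; the 1.3.6.1.4.1.1206 prefix is NEMA’s private enterprise arc, which fits the NTCIP attribution:

```python
# Pull the OID and value out of a request parameter shaped like the
# one [Andrew] spotted in Burp. This is a guess at the structure based
# only on the captured string: "<OID>*<field>=<value>&".
def parse_ntcip_param(raw: str) -> tuple[str, str, int]:
    raw = raw.rstrip("&")
    oid, assignment = raw.split("*", 1)
    field, value = assignment.split("=", 1)
    return oid, field, int(value)

oid, field, value = parse_ntcip_param("1.3.6.1.4.1.1206.3.36.1.6.10.1*IDO_0=2&")
print(oid)    # 1.3.6.1.4.1.1206.3.36.1.6.10.1
print(field)  # IDO_0
print(value)  # 2
```

Spotting an enterprise OID in plain HTTP traffic like this was the tell that the device speaks SNMP-derived NTCIP under the hood.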

The Rest of the Crowdstrike Story

You may have noticed a bit of weirdness around the world last Friday. Early in the morning of the 18th, Crowdstrike pushed a rapid response content update to their Falcon antivirus platform. Rapid Response content does get tested, but does not get a staged rollout. And in this case, a bug in the testing platform led to the invalid file being pushed out, and because the rollout was not staged, it went everywhere all at once.

This bogus configuration data triggered an out-of-bounds memory read in the Falcon kernel driver, leading to system crashes. The particularly bitter context is that Crowdstrike had done the same thing to Linux machines a few months earlier. It’s beginning to seem that antivirus kernel drivers are a bad idea.

Microsoft has made it clear that this wasn’t a Microsoft incident. And the little known fact is that Microsoft tried to put an end to antivirus kernel drivers years ago, and was blocked by government regulators. And why didn’t Windows offer to boot without the crashing driver? The Crowdstrike kernel driver marks itself as a boot-start driver. The one ray of hope is that it’s possible for the system to stay up just long enough for Crowdstrike to pull an update before the system crash. It only takes something like 15 reboots.

This time it was Microsoft

There was, apparently, another Blue Screen crash this month. The July Patch Tuesday update dropped some computers into the BitLocker recovery screen, which just happens to be that same shade of blue. It’s not yet clear what about this set of fixes triggered the problem, but it seems that getting the recovery key does get these machines running again.

LetsKill OCSP

Let’s Encrypt surprised a few of us by announcing the end of OCSP this week. The Online Certificate Status Protocol is used to query whether a given certificate is still valid. One of the problems with that protocol is that it requests status updates per DNS address, effectively sending a running browsing history over the Internet. There’s a technical issue, in that the attacks that OCSP is designed to defend against also place the attacker in a position to block OCSP requests, and clients will silently ignore OCSP requests that time out.

The replacement is the Certificate Revocation List (CRL), which is a simple list of revoked certificates. The problem is that those lists can be huge. Mozilla and Google have rolled out a clever solution, that uses data compression and aggressive optimization to handle those CRLs like any other browser update. And hence, OCSP is destined to go away.
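A toy illustration of why shipping full revocation lists became practical: a list of serial numbers compresses extremely well, and membership checks against the unpacked set are trivial. Real deployments use far more aggressive structures than plain zlib; the serials here are fake:

```python
import zlib

# Fake revoked serial numbers, formatted as 32-hex-digit strings.
revoked = {f"{n:032x}" for n in range(0, 100_000, 7)}
plain = "\n".join(sorted(revoked))

# Compress the whole list like any other downloadable update blob.
blob = zlib.compress(plain.encode(), level=9)
print(len(blob) < len(plain))   # True: a big size reduction

# A client unpacks once, then revocation checks are set membership.
crl = set(zlib.decompress(blob).decode().split("\n"))
print(f"{7:032x}" in crl)    # True  (revoked)
print(f"{8:032x}" in crl)    # False (still valid)
```

Unlike OCSP, nothing about an individual browsing session leaves the machine, and a blocked download fails loudly rather than silently.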

InSecure Boot

Binarly is sounding the alarm on Secure Boot. The biggest problem is that at least five device manufacturers used demo keys in production. The master key predictably leaked, and as a result about 200 devices have broken Secure Boot protections. That key is literally labeled “DO NOT TRUST - AMI Test PK”? Perfect, ship it!

Bits and Bytes

Docker Engine had a nasty regression, where a flaw fixed in 2019 wasn’t properly forward-ported to later versions. CVE-2024-41110 is a CVSS 10.0 issue, where an API call with Content-Length of 0 is forwarded without any authentication.
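The flaw boils down to an authorization check that never fires for body-less requests. This is a simplified sketch of that class of logic error, not Docker’s actual code:

```python
# Simplified sketch of the CVE-2024-41110 failure mode (illustrative,
# not the real implementation): an authorization plugin that only
# inspects requests carrying a body lets a zero-length request
# through unchecked.
def flawed_authz(headers: dict, authorized: bool) -> str:
    body_len = int(headers.get("Content-Length", 0))
    if body_len > 0 and not authorized:
        return "denied"
    return "forwarded"        # Content-Length: 0 skips the auth check

print(flawed_authz({"Content-Length": "0"}, authorized=False))   # forwarded
print(flawed_authz({"Content-Length": "42"}, authorized=False))  # denied
```

The fix is the obvious one: authorization must gate every request, regardless of whether it carries a body.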

An interesting bug was just fixed in curl, where a TLS certificate could cause the curl ASN.1 parser to fail and return an error. When it did this, the function in question can call free() on a stack buffer, which is a particularly bad idea. This is notable as the curl developers refer to it as a “C mistake (likely to have been avoided had we not been using C)”. Time to add some Rust code to curl?

And finally, there’s something you should know about Github. Code is forever. This is all working as intended, but can catch you if you’re not aware. Namely, private or deleted commits that are attached to a public repo are still accessible, if you know or guess the short commit hash. This has some important ramifications for cleaning up data leaks, and developing private forks. Knowing is half the battle!

This Week in Security: Snowflake, The CVD Tension, and Kaspersky’s Exit — And Breaking BSOD

In the past week, AT&T has announced an absolutely massive data breach. This is sort of a multi-layered story, but it gives me an opportunity to use my favorite piece of snarky IT commentary: The cloud is a fancy way to talk about someone else’s servers. And when that provider has a security problem, chances are, so do you.

The provider in question is Snowflake, who first made the news in the Ticketmaster breach. As far as anyone can tell, Snowflake has not actually been directly breached, though it seems that researchers at Hudson Rock briefly reported otherwise. That post has not only been taken down, but also scrubbed from the wayback machine, apparently in response to a legal threat from Snowflake. Ironically, Snowflake has confirmed that one of their former employees was compromised, but Snowflake is certain that nothing sensitive was available from the compromised account.

At this point, it seems that the twin problems are that big organizations aren’t properly enforcing security policy like Two Factor Authentication, and Snowflake just doesn’t provide the tools to set effective security policy. The Mandiant report indicates that all the breaches were the result of credential stealers and other credential-based techniques like credential stuffing.

Cisco’s Easy Password Reset

Cisco has patched a vulnerability in the Smart Software Manager On-Prem utility, a tool that allows a business to manage their own Cisco licenses. The flaw was a pretty nasty one, where any user could change the password of any other user.

While there are no workarounds, an update with the fix has been released for free. As [Dan Goodin] at Ars speculates, full administrative access to this management console could provide unintended access to all the rest of the Cisco gear in a given organization. This seems like one to get patched right away.

Bye Bye Kaspersky

Kaspersky Labs has officially started winding down their US operations, as a direct result of the US Commerce Department ban. As a parting gift, anyone who wants it gets a free six-month subscription.

Just a reminder, any Kaspersky installs will stop getting updates at that six-month mark, so don’t forget to go on a Kaspersky uninstall spree at that time. We’ve got the twin dangers, that the out-of-date antivirus could prevent another solution like Windows Defender from running, and that security products without updates are a tempting target for escalation of privilege attacks.

Uncoordinated Vulnerability Disclosure

Let’s chat a bit about coordinated vulnerability disclosure. That’s the process when a researcher finds a vulnerability, privately reports it to the vendor, and together they pick a date to make the details public, usually somewhere around 90 or 120 days from disclosure. The researcher gets credit for the find, sometimes a bug bounty payout, and the vendor fixes their bug.

Things were not always this way. Certain vendors were once well known for ignoring these reports for multiple months at a time, only to rush out a fix if the bug was exploited in the wild. This slapdash habit led directly to our current 90-day industry standard. And in turn, a strict 90-day policy is usually enough to provoke responsible behaviors from vendors.

Usually, but not always. ZDI discovered the Internet Explorer technique that we discussed last week being used in the wild. Apparently [Haifei Li] at Check Point Research independently discovered the vulnerability, and it’s unclear which group actually reported it first. What is clear is that Microsoft dropped the ball on the patch, surprising both research teams and failing to credit the ZDI researcher at all. And as the ZDI post states, this isn’t an isolated incident:

microsoft: Exploit Code Unporoven

me: i literally gave you a compiled PoC and also exploit code

m$: No exploit code is available, or an exploit is theoretical.

me: pic.twitter.com/tIXJAbkRu4

— chompie (@chompie1337) June 12, 2024

While these are Microsoft examples, there are multiple occasions from various vendors where “coordination” simply means “You tell us everything you know about this bug, and maybe something will happen.”

Bits and Bytes

Claroty’s Team82 has documented their rather impressive entry in the 2023 Pwn2Own IoT contest. The two part series starts with a WAN side attack, targeting a router’s dynamic DNS. We briefly discussed that last week. This week is the juicy details of an unauthenticated buffer overflow, leading to RCE on the device. This demonstrates the clever and terrifying trick of attacking a network from the Internet and establishing presence on an internal device.

There are times when you really need to see into an SSL stream, like security research or auditing. Oftentimes that’s as easy as adding a custom SSL certificate to the machine’s root store, so the application sees your forced HTTPS proxy as legitimate. In the case of Go, applications verify certificates independently of the OS, making this inspection much more difficult. The solution? Just patch the program to turn on the InsecureSkipVerify feature. The folks at Cyberark have dialed in this procedure, and even have a handy Python script for ease of use. Neat!

Speaking of tools, we were just made aware of EMBA, the EMBedded Analyzer. That’s an Open Source tool to take a look into firmware images and automatically extract useful data.

Breaking BSOD

Just as we were wrapping this week’s column, a rash of Windows Blue Screens of Death, BSODs, started hitting various businesses around the world. The initial report suggests that it’s a Crowdstrike update gone wrong, and Crowdstrike seems to be investigating. It’s reported that renaming the C:\windows\system32\drivers\crowdstrike folder from within safe mode will get machines booting again, but note that this is not official guidance at this point.

Something super weird happening right now: just been called by several totally different media outlets in the last few minutes, all with Windows machines suddenly BSoD’ing (Blue Screen of Death). Anyone else seen this? Seems to be entering recovery mode: pic.twitter.com/DxdLyA9BLA

— Troy Hunt (@troyhunt) July 19, 2024

This Week in Security: Blast-RADIUS, Gitlab, and Plormbing

The RADIUS authentication scheme, short for “Remote Authentication Dial-In User Service”, has been widely deployed for user authentication in all sorts of scenarios. It’s a bit odd, in that individual users authenticate to a “RADIUS Client”, sometimes called a Network Access Server (NAS). In response to an authentication request, a NAS packages up the authentication details, and sends it to a central RADIUS server for verification. The server then sends back a judgement on the authentication request, and if successful the user is authenticated to the NAS/client.

The scheme was updated to its current form in 1994, back when MD5 was considered a cryptographically good hash. It’s been demonstrated that MD5 has problems, most notably a chosen-prefix collision attack demonstrated in 2007. The basis of this collision attack is that given two arbitrary messages, it is possible to find a pair of values that, when appended to the end of those messages, result in matching MD5 hashes for each combined message. It turns out this is directly applicable to RADIUS.
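For context, here is how a RADIUS server computes the Response Authenticator per RFC 2865: an MD5 hash over the response packet fields plus the shared secret. It’s this MD5 construction that the chosen-prefix attack targets. The values below are toy inputs, not a real exchange:

```python
import hashlib
import struct

def response_authenticator(code: int, pkt_id: int, length: int,
                           request_auth: bytes, attributes: bytes,
                           secret: bytes) -> bytes:
    """RFC 2865 Response Authenticator:
    MD5(Code + Identifier + Length + RequestAuth + Attributes + Secret)."""
    header = struct.pack("!BBH", code, pkt_id, length)  # 1+1+2 bytes
    return hashlib.md5(header + request_auth + attributes + secret).digest()

auth = response_authenticator(
    code=2,                       # 2 = Access-Accept
    pkt_id=1,
    length=20,
    request_auth=b"\x00" * 16,    # echo of the request's authenticator
    attributes=b"",
    secret=b"shared-secret",      # toy shared secret
)
print(auth.hex())                 # 16-byte MD5 digest
```

Because an attacker between the NAS and the server controls parts of both the request and the response, a chosen-prefix collision lets a forged Access-Accept carry a valid-looking authenticator.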

The attack is a man-in-the-middle, but not against an authenticating user. This attack is a man-in-the-middle between the NAS and the RADIUS server, and a real user isn’t even required. This elevated position does make an attack harder to achieve in some cases, but situations like RADIUS providing authentication for administrative access to a device are squarely in scope. Wrapping the RADIUS backend communications in a TLS layer does protect against the attack.

Gitlab

It’s once again time to go update your Gitlab instances, and this one sounds familiar. It’s another issue where an attacker could run pipeline jobs as an arbitrary user. This comes as one more of a series of problems in Gitlab, with at least one of them being exploited in the wild. It’s not surprising to see a high-visibility vulnerability leading to the discovery of several more similar problems. With this latest issue being so similar to the previous pipeline problem, it’s possible that the earlier patch was incomplete, or that researchers have found an additional way to exploit the same underlying issue.

Exim

There’s a bug in the Exim email server, that impacts the processing of attachment blocking rules. Specifically, the filename in the email header is broken into multiple parts, with some confusing extra bytes in between. It’s technically compliant with the right RFC, but Exim’s mime handling code gets confused, and misses the right message name.

An Exim server can be configured to block certain file types, and this vulnerability allows those blocked attachments through. The original CVSS of 9.1 is a tad insane. The latest update drops that to a 5.4, which seems much more appropriate.

Plormbing Your ORM

Prisma is a “Next Generation ORM” (Object Relational Mapper) that takes a database schema and maps it to code objects. In other words, it helps write code that interacts with a database. There are some potential problems there, like using filters on protected data to leak information one byte at a time, in a very Hollywood manner.

This brings us to a second approach, a time-based data leak. Here a SQL query will execute slowly or quickly depending on the data in the database. The plormber tool is designed to easily build attempts at time-based leaks. Hence the pun. If you have a leak in your ORM, call a plORMber. *sigh*
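The filter-based variant is easy to demonstrate with a toy oracle. Here the function stands in for a filtered API query that only reveals match/no-match; the secret value and the startsWith-style operator are hypothetical stand-ins for a real ORM filter:

```python
import string

SECRET = "hunter2"    # hypothetical protected column value

def any_match(prefix: str) -> bool:
    """Stand-in for a filtered query like
    {"password": {"startsWith": prefix}} that only returns
    whether any record matched."""
    return SECRET.startswith(prefix)

# Recover the protected value one character at a time.
recovered = ""
while True:
    for ch in string.printable:
        if any_match(recovered + ch):
            recovered += ch
            break
    else:
        break               # no character extends the prefix: done
print(recovered)            # hunter2
```

The time-based approach works the same way, except the oracle is query latency instead of an explicit match result.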

Internet Explorer Rises Again

When Microsoft finally obsoleted Internet Explorer in 2022, I had some hope that it wouldn’t be the cause of any more security issues. And yet here we are, in 2024, talking about an exploitation campaign that used a 0-day in Windows to launch Internet Explorer.

A very odd file extension, .pdf.url, manages to appear as a pdf file with the appropriate icon, and yet opens IE when executed. This finally got classified by Microsoft as a vulnerability and fixed.

Bits and Bytes

There’s another SSH issue, related to regreSSHion. This time a vendor patch makes a call to cleanup_exit() from a signal handler function, calling more async-unsafe code. If that doesn’t make any sense, circle back around to last week’s installment of the column for the details. This time it’s Fedora, Red Hat, and other distros that used the patch.

One of the security barriers that most of us rely on is that traffic originating from the WAN side of the router should stay there. When that paradigm breaks down, we have problems. And that’s exactly what the folks at Claroty are working to defeat. The trick this time is a vulnerability in a router’s Dynamic DNS service. Manage to spoof a DNS lookup or MitM that connection, and suddenly it’s RCE on the router.

And finally, we’ve covered a pair of outstanding stories this week here at Hackaday. You should go read about how Ticketmaster’s app was reverse engineered, followed by a brilliant and completely impractical scheme to get your Internet connection for free while flying.

This Week in Security: Hide Yo SSH, Polyfill, and Packing It Up

The big news this week was that OpenSSH has an unauthenticated Remote Code Execution exploit. Or more precisely, it had one that was fixed in 2006 and unintentionally re-introduced in version 8.5p1 from 2021. The flaw is a signal handler race condition, where async-unsafe code gets called from within the SIGALRM handler. What does that mean?

To understand, we have to dive into the world of Linux signal handling. Signals are sent by the operating system to individual processes to notify the process of a state change. For example, SIGHUP, or SIGnal HangUP, originally indicated the disconnect of the terminal’s serial line where a program was running. SIGALRM is the SIGnal ALaRM, which indicates that a timer has expired.

What’s interesting about signal handling in Unix is how it interrupts program execution. The OS has complete control over execution scheduling, so in response to a signal, the scheduler pauses execution and immediately handles the signal. If no signal handler function is defined, a default handler provided by the OS runs. But if a handler is set, that function is immediately run. And here’s the dangerous part: program execution can be anywhere in the program when it gets paused, the signal handler runs, and then execution continues. From Andries Brouwer in The Linux Kernel:

It is difficult to do interesting things in a signal handler, because the process can be interrupted in an arbitrary place, data structures can be in arbitrary state, etc. The three most common things to do in a signal handler are (i) set a flag variable and return immediately, and (ii) (messy) throw away all the program was doing, and restart at some convenient point, perhaps the main command loop or so, and (iii) clean up and exit.

The term async-signal-safe describes functions that have predictable behavior even when called from a signal handler, with execution paused at an arbitrary state. How can such a function be unsafe? Let’s consider the async-signal-unsafe free(). Here, sections of memory are marked free, and then pointers to that memory are added to the table of free memory. If program execution is interrupted between these points, we have an undefined state where memory is both free, and still allocated. A second call to free() during execution pause will corrupt the free memory data structure, as the code is not intended to be called in this reentrant manner.

So back to the OpenSSH flaw. The SSH daemon sets a timer when a new connection comes in, and if authentication hasn’t completed when the timer expires, the SIGALRM signal is generated. The problem is that this signal handler calls the syslog() library function, which is not async-signal-safe, due to its internal use of malloc() and free(). The trick is to start an SSH connection, wait for the timeout, and send the last bytes of a public-key packet just before the timeout signal fires. If the public-key handling function just happens to be at the correct point in a malloc() call when the SIGALRM handler reenters malloc(), the heap is corrupted. This corruption overwrites a function pointer. Replace the pointer with an address where the incoming key material was stored, and suddenly we have shellcode execution.

There are several problems with turning this into a functional exploit. The first is that it’s a race condition, requiring very tight timing to split program execution in just the right spot. The randomness of network timing makes this a high hurdle. Next, all major distros use Address Space Layout Randomization (ASLR), which should make that pointer overwrite very difficult. It turns out that on all the major distros, ASLR is somewhat broken. OK, on 32-bit installs, it’s completely broken. On the Debian system tested, there’s literally a single bit of ASLR in play for the glibc library: it can be loaded at one of two possible memory locations.

Assuming the default settings for max SSH connections and LoginGraceTime, it takes an average of 3-4 hours to win the race condition to trigger the bug, and then there’s a 50% chance of guessing the correct address on the first try. That seems to put the average time at five and a quarter hours to crack a 32-bit Debian machine. A 64-bit machine does have ASLR that works a bit better. A working exploit had not been demonstrated as of when the vulnerability write-up was published, but the authors suggest it could be achieved in the ballpark of a week of attacking.

So what systems should we really worry about? The regression was introduced in 8.5p1, and fixed in 9.8p1. That means Debian 11, RHEL 8, and their derivatives are in the clear, as they ship older OpenSSH versions. Debian 12 and RHEL 9 are in trouble, though both of those distros now have updates available that fix the issue. If you’re on one of those distros, particularly the 32-bit version, it’s time to update OpenSSH and restart the service. You can check the OpenSSH version by grabbing the banner with nc -w1 localhost 22 -i 1, to see if you’re possibly vulnerable.

Polyfill

The Polyfill service was once a useful tool for pulling in JavaScript functions to emulate newer browser features in browsers that weren’t quite up to the task. This worked by including the polyfill JS script from polyfill.io. The problem is that the Funnull company acquired the polyfill domain and GitHub account, and began serving malicious scripts instead of the legitimate polyfill functions.

The list of domains and companies caught in this supply chain attack is pretty extensive, with nearly 400,000 sites still trying to link to the domain as of July 3rd. We say “trying”, as providers have taken note of Sansec’s report breaking the story. Google has blocked the associated domains from its advertising, Cloudflare is rewriting calls to polyfill to a clean cache, and Namecheap has blackholed the domain, putting an end to the attack. It’s a reminder that just because a domain is trustworthy now, it may not be in the future. Be careful where you link to.

Pack It Up

We’re no strangers to CVE severity drama. There can be a desire to make a found vulnerability seem severe, and occasionally this results in a wild exaggeration of the impact of an issue. Case in point: the node-ip project has an issue, CVE-2023-42282, that originally scored a CVSS of 9.8. The node-ip author has taken the stance that it’s not a vulnerability at all, since it requires an untrusted input to be passed into node-ip and then used for an authorization check. It seems to be a reasonable objection: if an attacker can manipulate the source IP address in this way, the source IP is untrustworthy regardless of this issue in node-ip.

The maintainer, [Fedor], made the call to simply archive the node-ip project in response to the seemingly bogus CVE and the unending stream of unintentional harassment over the issue. Auditing tools started alerting developers about the issue, and they started pinging the project. With seemingly no way to fight back against the report, archiving the project seemed like the best solution. However, the bug has been fixed, and GitHub has reduced the severity to “low” in their advisory. As a result, [Fedor] did announce that the project is coming back, and indeed it is again an active project on GitHub.

Bits and Bytes

[sam4k] found a remote Use After Free (UAF) in the Linux Transparent Inter Process Communication (TIPC) service that may be exploitable to achieve RCE. This one is sort of a toy vulnerability, found while preparing a talk on bug hunting in the Linux kernel. It’s also not a protocol that’s even built into the kernel by default, so the potential fallout here is quite low. The problem is fragmentation handling: the error handling misses a check for the last fragment buffer, and tries to free it twice. It was fixed this May, in kernel version 6.8.
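The shape of that bug is a classic one. This is a generic C illustration, not the actual TIPC code: a reassembly context keeps both a fragment list and a separate aliasing pointer to the tail fragment, and an error path that frees the list must remember to clear the alias, or a later cleanup frees the same buffer twice.

```c
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical fragment-reassembly structures, for illustration only. */
struct frag {
    struct frag *next;
    char data[64];
};

struct reasm {
    struct frag *head;  /* list of all fragments received */
    struct frag *last;  /* also points at the tail of that list! */
};

/* Error path: tear down the whole list. The buggy pattern freed the
 * list and then separately freed ctx->last, which aliases the tail
 * node. The fix is to clear the aliasing pointer so no later cleanup
 * can free it a second time. */
static void reasm_abort(struct reasm *ctx) {
    for (struct frag *f = ctx->head; f != NULL; ) {
        struct frag *next = f->next;
        free(f);
        f = next;
    }
    ctx->head = NULL;
    ctx->last = NULL;  /* the missing step: without this, the tail
                          buffer gets freed twice */
}
```

In the kernel, that second free of a network buffer is exactly the kind of use-after-free that heap-grooming techniques can sometimes turn into code execution.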

CocoaPods is a dependency manager for Swift/Objective-C projects, and it had a trio of severe problems. The most interesting was the result of a migration, where many packages lost their connection to the correct maintainer account. Using the CocoaPods API and a maintainer email address, it was possible for arbitrary users to claim those packages and make changes. This and a couple of other issues were fixed late last year.

FLOSS Weekly Episode 790: Better Bash Scripting with Amber

This week Jonathan Bennett and Dan Lynch chat with Paweł Karaś about Amber, a modern scripting language that compiles into a Bash script. Want to write scripts with built-in error handling, or prefer strongly typed languages? Amber may be for you!

https://github.com/Ph0enixKM/Amber
https://amber-lang.com/
https://docs.amber-lang.com/

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:
