This Week in Security: Kaspersky Ban, Project Naptime, and More

28 June 2024 at 14:00

The hot news this week is that Kaspersky is banned in the USA. More specifically, Kaspersky products will be banned from sale in the US starting on September 29. This ban will extend to blocking software updates, though it’s unclear how that will actually be accomplished. It’s reasonable to assume that payment processors will block payments to Kaspersky, but will ISPs be required to block traffic that could contain antivirus updates?

WordPress Plugin Backdoor

A quintet of WordPress plugins has been found to have recently included backdoor code. It’s a collection of five Open Source plugins, seemingly developed by unrelated people. Malicious updates first showed up on June 21st, and it appears that all five plugins are shipping the same malicious code.

Rabbit AI API

The Rabbit R1 was released to less than thunderous applause. The idea is a personal AI device, but the execution has been disappointing, to the point of reviewers suggesting some of the earlier claims were fabricated. Now it seems there’s a serious security issue, in the form of exposed API keys that have *way* too many privileges.

The research seems to have been done by the rabbitude group, who found the keys back in May. Of the things allowed by access to the API keys, the most worrying for user privacy was access to every text-to-speech call. Rabbitude states in their June 25 post that “rabbit inc has known that we have had their elevenlabs (tts) api key for a month, but they have taken no action to rotate the api keys.” On the other hand, rabbit pushed a statement on the 26th, claiming they were just then made aware of the issue, and made the needed key rotations right away.

MOVEit is Back

A severe vulnerability in the MOVEit file transfer server led to some big-deal compromises in 2023 and 2024. MOVEit is back, this time disclosing an authentication bypass. The journey to finding this vulnerability starts with an exception, thrown whenever an SSH connection is attempted with a public key.

…the server is attempting to open the binary data representing our auth material, as a file path, on the server.

Uh-oh. There’s no way that’s good. What’s worse, that path can be an external SMB path. This behavior does depend on the incoming connection referencing a valid username, but it has the potential to enable password stealing, pass-the-hash attacks, and username mapping. So what’s actually going on here? The SSH server used here is IPWorks SSH, which has some useful additions to SSH. One of these additions seems to be an odd delegated authentication scheme that goes very wrong in this case.

The attack flow goes like this: Upload a public SSH key to any location on the MOVEit server, log in with any valid username signing the connection with the uploaded key, and send the file location of the uploaded key instead of an actual key. Server pulls the key, makes sure it matches, and lets you in. The only pesky bit is how to upload a key without an account. It turns out that the server supports PPK keys, and those survive getting written to and read from the system logs. Ouch.

The flaws got fixed months ago, and a serious effort has been carried out to warn MOVEit customers and get them patched. On the other hand, a full Proof of Concept (PoC) is now available, and Internet monitoring groups are starting to see the attack being attempted in the wild.

Cat File: Pop Calc

We all know not to trust files from the Internet. Don’t execute the script, don’t load the spreadsheet, and definitely don’t install the package. But what about running cat or strings on an untrusted file? Apparently the magic of escape sequences makes those dangerous too. The iTerm2 terminal was accidentally set to allow “window title reporting”, or copying the window title to the command line. Another escape code can set that value, making for an easy way to put an arbitrary command on the command line. One more quirk in the form of tmux integration allowed the injection of a newline — running the arbitrary command. Whoops. Versions 3.5.0 and 3.5.1 are the only iTerm2 versions that are vulnerable, with version 3.5.2 containing the fix.
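
To make the mechanism concrete, here’s a minimal sketch using the generic xterm-style control sequences, assuming a terminal that honors them; the exact sequences involved in the iTerm2 advisory may differ. The first printf stashes a payload in the window title, and the second asks the terminal to type the title back as input.

/* Illustrative only: run in a terminal you trust, ideally one that does NOT
 * honor title reporting. Nothing here supplies the newline needed to run it. */
#include <stdio.h>

int main(void) {
    /* OSC 2: set the window title to an attacker-chosen string. */
    printf("\033]2;echo pwned\007");
    /* CSI 21 t: ask the terminal to report its title back. A vulnerable setup
     * echoes the payload onto the command line, and a further quirk (the tmux
     * integration, per the write-up) supplies the newline that runs it. */
    printf("\033[21t");
    return 0;
}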

Putting LLM to Work During Naptime

There’s been a scourge of fake vulnerability reports, where someone has asked ChatGPT to find a vulnerability in a project with a bug bounty. First off, don’t do this. But second, it would be genuinely useful if a LLM could actually find vulnerabilities. This idea intrigued researchers at Google’s Project Zero, so they did some research, calling it “Project Naptime”, in a playful reference to napping while the LLM works.

The secret sauce seems to be in extending an LLM to look at real code, to run Python scripts in a sandbox, and have access to a debugger. The results were actually encouraging, suggesting that an LLM could eventually be a useful tool. It’s not gonna replace the researcher, but it won’t surprise me to cover vulnerabilities found by an LLM instead of a fuzzing tool. Or maybe that’s an LLM-guided fuzzer?

GitHub Dishes on Chrome RCE

Github’s [Man Yue Mo] discovered and reported CVE-2024-3833 in Chrome back in March, a fix was released in April, and it’s now time to get the details. This one is all about how object cloning and code caching interacts. Cloning an object in a particular circumstance ends up with an object that exists in a superposition between having unused property fields, and yet a full property array. Or put simply, the internal object state incorrectly indicates there is unused allocated memory. Try to write a new property, and it’s an out of bounds write.

The full exploit is involved, but the whole thing includes a sandbox escape as well, using overwritten WebAssembly functions. Impressive stuff.

Bits and Bytes

[Works By Design] is taking a second crack at building an unpickable lock. This one has some interesting features, like a ball-bearing spring system that should mean that levering one pin into place encourages the rest to drop out of position. A local locksmith wasn’t able to pick it, given just over half-an-hour. The real test will be what happens when [LockPickingLawyer] gets his hands on it, which is still to come.

GitLab just fixed a critical issue that threatened to let attackers run CI pipelines as arbitrary users. The full details aren’t out yet, but CVE-2024-5655 weighs in at a CVSS 9.6, and GitLab is “strongly recommending” immediate updates.

FLOSS Weekly Episode 789: You Can’t Eat the Boards

26 June 2024 at 23:00

This week Jonathan Bennett and Doc Searls chat with Igor Pecovnik and Ricardo Pardini about Armbian, the Debian-based distro tailor made for single-board computers. There’s more than just Raspberry Pi to talk about, with the crew griping about ancient vendor kernels, the less-than-easy ARM boot process, and more!

https://www.armbian.com/
https://github.com/armbian

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: Chat Control, Vulnerability Extortion, and Emoji Malware

21 June 2024 at 14:00

Way back in 2020, I actually read the proposed US legislation known as EARN IT, and with some controversy, concluded that much of the criticism of that bill was inaccurate. Well what’s old is new again, except this time it’s the European Union that’s wrestling with how to police online Child Sexual Abuse Material (CSAM). And from what I can tell of reading the actual legislation (pdf), this time it really is that bad.

The legislation lays out two primary goals, both of them problematic. The first is detection, or what some are calling “upload moderation”. The technical details are completely omitted here, simply stating that services “… take reasonable measures to mitigate the risk of their services being misused for such abuse …” The implication here is that providers would do some sort of automated scanning to detect illicit text or visuals, but exactly what constitutes “reasonable measures” is left unspecified.

The second goal is the detection order. It’s worth pointing out that interpersonal communication services are explicitly mentioned as required to implement these goals. From the bill:

Providers of hosting services and providers of interpersonal communications services that have received a detection order shall execute it by installing and operating technologies approved by the Commission to detect the dissemination of known or new child sexual abuse material or the solicitation of children…

This bill is careful not to prohibit end-to-end encryption, nor require that such encryption be backdoored. Instead, it requires that the apps themselves be backdoored, to spy on users before encryption happens. No wonder Meredith Whittaker has promised to pull the Signal app out of the EU if it becomes law. As this scanning is done prior to encryption, it’s technically not breaking end-to-end encryption.

You may wonder why that’s such a big deal. Why is it a non-negotiable for the Signal app to not look for CSAM in messages prior to encryption? For starters, it’s a violation of user trust and an intentional weakening of the security of the Signal system. But maybe most importantly, it puts a mechanism in place that will undoubtedly prove too tempting for future governments. If Signal can be forced into looking for CSAM in the EU, why not anti-government speech in China?

This story is ongoing, with the latest news that the EU has delayed the next step in attempting to ratify the proposal. It’s great news, but the future is still uncertain. For more background and analysis, see our conversation with the minds behind Matrix, on this very topic:

Bounty or Extortion?

A bit of drama played out over Twitter this week. The Kraken cryptocurrency exchange had a problem where a deposit could be interrupted, and funds added to the Kraken account without actually transferring funds to back the deposit. A security research group, which turned out to be the CertiK company, discovered and disclosed the flaw via email.

Kraken Security Update:

On June 9 2024, we received a Bug Bounty program alert from a security researcher. No specifics were initially disclosed, but their email claimed to find an “extremely critical” bug that allowed them to artificially inflate their balance on our platform.

— Nick Percoco (@c7five) June 19, 2024

All seemed well, and the Kraken team managed to roll a hotfix out in an impressive 47 minutes. But things got weird when they cross referenced the flaw to see if anyone had exploited it. Three accounts had used it to duplicate money. The first use was for all of four dollars, which is consistent with doing legitimate research. But additionally, there were more instances from two other users, totaling close to $3 million in faked transfers — not to mention transfers of *real* money back out of those accounts. Kraken asked for the details and the money back.

According to the Kraken account, the researchers refused, and instead wanted to arrange a call with their “business development team”. The implication is that the transferred money was serving as a bargaining chip to request a higher bug bounty payout. According to Kraken, that’s extortion.

There is a second side to this story, of course. CertiK has a response on their x.com account where they claim to have wanted to return the transferred money, but were just testing Kraken’s risk control system. There are things about this story that seem odd. At the very least, it’s unwise to transfer stolen currency in this way. At worst, this was an attempt at real theft that was thwarted. The end result is that the funds were eventually returned.

There are two fundamental problems with vuln disclosure/bounty:
#1 companies think security researchers are trying to extort them when they are not
#2 security researchers trying to extort companies https://t.co/I7vnk3oXi5

— Robert Graham 𝕏 (@ErrataRob) June 20, 2024

Report Bug, Get Nastygram

For the other side of the coin, [Lemon] found a trivial flaw in a traffic controller system. After turning it in, he was rewarded with an odd letter that was a combination of “thank you” and a warning that his work “may have constituted a violation of the Computer Fraud and Abuse Act”. This is not how you respond to responsible disclosure.

I received my first cease and desist for responsibly disclosing a critical vulnerability that gives a remote unauthenticated attacker full access to modify a traffic controller and change stoplights. Does this make me a Security Researcher now? pic.twitter.com/ftW35DxqeF

— Lemon (@Lemonitup) June 18, 2024

Emoji Malware

We don’t talk much about malware in South Asia, but this is an interesting one. DISGOMOJI is a malware attributed to a Pakistani group, mainly targeting government Linux machines in India. What really makes it notable is that the command and control system uses emoji in Discord channels. The camera emoji instructs the malware to take a screenshot. A fox triggers a hoovering of the Firefox profiles, and so on. Cute!

Using Roundcube to break PHP

This is a slow-moving vulnerability, given that the core is a 24-year-old buffer overflow in iconv() in glibc. [Charles Fol] found this issue, which can pop up when using iconv() to convert to the ISO-2022-CN-EXT character set, and has been working on how to actually trigger the bug in a useful way. Enter PHP. OK, that’s not entirely accurate, since the crash was originally found in PHP. It’s more like we’re giving up on finding something else, and going back to PHP.
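
For reference, the vulnerable code path is just ordinary iconv() usage. Here’s a minimal sketch of the call shape, assuming glibc; the placeholder input and generous buffer here don’t trigger anything. Per the write-up, the trouble starts when the converter needs to emit a charset-switching escape sequence that runs one to three bytes past the space the caller reserved.

#include <iconv.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    iconv_t cd = iconv_open("ISO-2022-CN-EXT", "UTF-8");
    if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }

    char in[] = "hello";                 /* placeholder input */
    char out[64];                        /* generously sized; the bug needs a
                                            buffer sized to just barely fit */
    char *inp = in, *outp = out;
    size_t inleft = strlen(in), outleft = sizeof(out);

    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
        perror("iconv");
    printf("wrote %zu bytes\n", sizeof(out) - outleft);
    iconv_close(cd);
    return 0;
}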

The core vulnerability can only overwrite one, two, or three bytes past the end of a buffer. To make use of that, the PHP bucket structure can be used. This is a growable doubly-linked list that is used for data handling. Chunked HTTP messages can be used to build a multi-bucket structure, and triggering the iconv() flaw overwrites one of the pointers in that structure. Bumping that pointer by a few bytes lands in attacker controlled data, which can land in a fake data structure, and continuing the dechunking procedure gives us an arbitrary memory write. At that point, a function pointer just has to be pointed at system() for code execution.

That’s a great theoretical attack chain, but actually getting there in the wild is less straightforward. There has been a notable web application identified that is vulnerable: Roundcube. Upon sending an email, the user can specify the addresses, as well as the character set parameter. Roundcube makes an iconv() call, triggering the core vulnerability. And thus an authenticated user has a path to remote code execution.

Bits and Bytes

Speaking of email, do you know the characters that are allowed in an email address? Did you know that the local user part of an email address can be a quoted string, with many special characters allowed? I wonder if every mail server and email security device realizes that quirk? Apparently not, at least in the case of MailCleaner, which had a set of flaws allowing such an email to lead to full appliance takeover. Keep an eye out for other devices and applications to fall to this same quirk.
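
If you’ve never seen one, a quoted local part looks strange enough that naive parsers get it wrong. A minimal sketch, using a made-up address purely for illustration: "a@b"@example.com is syntactically valid, but splitting on the first @ mangles it.

#include <stdio.h>
#include <string.h>

int main(void) {
    /* RFC 5321 allows a quoted local part, including an '@' inside the quotes. */
    const char *addr = "\"a@b\"@example.com";
    const char *at = strchr(addr, '@');   /* naive: split on the first '@' */
    printf("naive local part: %.*s\n", (int)(at - addr), addr);  /* "a   (wrong) */
    printf("naive domain:     %s\n", at + 1);                    /* b"@example.com (wrong) */
    return 0;
}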

Nextcloud has a pair of vulnerabilities to pay attention to, with the first being an issue where a user with read and share permissions to an object could reshare it with additional permissions. The second is more troubling, giving an attacker a potential method to bypass a two-factor authentication requirement. Fixes are available.

Pointed out by [Herr Brain] on Hackaday’s Discord, we have a bit of bad news about the Arm Memory Tagging Extensions (MTE) security feature. Namely, speculative execution can reveal the needed MTE tags about 95% of the time. While this is significant, there is a bit of a chicken-and-egg problem for attackers, as MTE is primarily useful to prevent running arbitrary code at all, which is the most straightforward way to achieve a speculative attack to start with.

And finally, over at Google Project Zero, [Seth Jenkins] has a report on a trio of Android devices, and finding vulnerabilities in their respective kernel drivers. In each case, the vulnerable drivers can be accessed from unprivileged applications. [Seth]’s opinion is that as the Android core code gets tighter and more secure, these third-party drivers of potentially questionable code quality will quickly become the target of choice for attack.

FLOSS Weekly Episode 788: Matrix, It’s Git, for Communications

19 June 2024 at 23:00

This week Jonathan Bennett and Simon Phipps chat with Matthew Hodgson and Josh Simmons about Matrix, the open source decentralized communications platform. How is Matrix a Git for Communications? Are the new EU and UK laws going to be a problem? And how is the Matrix project connected with the Element company?

https://matrix.org/blog
https://element.io/

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: Unicode Strikes Again, Trust No One (Redditor), and More

14 June 2024 at 14:00

There’s a popular Sysadmin meme that system problems are “always DNS”. In the realm of security, it seems like “it’s always Unicode“. And it’s not hard to see why. Unicode is the attempt to represent all of Earth’s languages with a single character set, and that means there’s a lot of very similar characters. The two broad issues are that human users can’t always see the difference between similar characters, and that libraries and applications sometimes automatically convert exotic Unicode characters into more traditional text.

This week we see the resurrection of an ancient vulnerability in PHP-CGI, that allows injecting command line switches when a web server launches an instance of PHP-CGI. The solution was to block some characters in specific places in query strings, like a query string starting with a dash.

The bypass is due to a Windows feature, “Best-Fit”, an automatic down-convert from certain Unicode characters. This feature works on a per-locale basis, which means that not every system language behaves the same. The exact bypass that has been found is the conversion of a soft hyphen, which doesn’t get blocked by PHP, into a regular hyphen, which can trigger the command injection. This quirk only happens when the Windows locale is set to Chinese or Japanese. Combined with the relative rarity of running PHP-CGI, and PHP on Windows, this is a pretty narrow problem. The XAMPP install does use this arrangement, so those installs are vulnerable, again if the locale is set to one of these specific languages. The other thing to keep in mind is that the Unicode character set is huge, and it’s very likely that there are other special characters in other locales that behave similarly.
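
The ordering is the whole problem: the dash check runs on the raw query string, and the locale-dependent down-conversion happens afterwards. Here’s a minimal sketch of that sequence; best_fit() below is a hypothetical stand-in for the Windows behavior, hard-coding only the soft hyphen mapping described above.

#include <stdio.h>

/* Hypothetical stand-in for Windows Best-Fit: map U+00AD (UTF-8 0xC2 0xAD)
 * down to a plain '-', copying everything else through unchanged. */
static void best_fit(const char *in, char *out) {
    while (*in) {
        if ((unsigned char)in[0] == 0xC2 && (unsigned char)in[1] == 0xAD) {
            *out++ = '-';
            in += 2;
        } else {
            *out++ = *in++;
        }
    }
    *out = '\0';
}

int main(void) {
    char query[] = "\xC2\xAD" "s";   /* starts with a soft hyphen, not a '-' */
    if (query[0] == '-') {           /* the filter only sees the raw bytes */
        puts("blocked");
        return 0;
    }
    char converted[sizeof(query)];
    best_fit(query, converted);      /* the down-conversion happens later, per-locale */
    printf("argument handed along: %s\n", converted);   /* prints "-s" */
    return 0;
}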

Downloader Beware

The ComfyUI project is a flowchart interface for doing AI image generation workflows. It’s an easy way to build complicated generation pipelines, and the community has stepped up to build custom plugins and nodes for generation. The thing is, it’s not always the best idea to download and run code from strangers on the Internet, as a group of ComfyUI users found out the hard way this week. The ComfyUI_LLMVISION node from u/AppleBotzz was malicious.

The node references a malicious Python package that grabs browser data and sends it all to a Discord or Pastebin. It appears that some additional malware gets installed, for continuing access to infected systems. It’s a rough way to learn.

PyTorch Scores a Dubious 10.0

CVE-2024-5480 is a PyTorch flaw that allows PyTorch worker nodes to trigger arbitrary eval() calls on the master node. No authentication is required to add a PyTorch worker, so this is technically an unauthenticated RCE, earning the CVSS of 10.0. Practically speaking it’s not that dire of a problem, as your PyTorch cluster shouldn’t be on the Internet to start with, and there’s no authentication as a design choice. It’s not clear that the PyTorch developers consider this a legitimate security vulnerability at all. It may or may not be fixed with version 2.3.

Next Level Smishing

My least favorite term in infosec has to be “smishing”, a frankenword for SMS phishing. Cell phone carriers around the world are working hard to block spam messages, making smishing a much harder task. And that’s why it’s particularly interesting to hear about a bypass that a pair of criminals were using in London. The technical details are light, but the police reported a “homemade mobile antenna”, “illegitimate telephone mast”, and “text message blaster” as part of the seized kit. The initial report sounds like it may be a sort of reverse stingray, where messages are skipping the regular cellular infrastructure and are getting sent directly to nearby cell phones. Hopefully more information will be forthcoming soon.

Zyxel’s NsaRescueAngel

The programmers at Zyxel apparently have a sense of humor, given the naming used for this mis-feature. Zyxel NAS units have a bit of magic code that writes a password for the new user, NsaRescueAngel, to the shadow password file. The SSH daemon is restarted, and upnp is fired off to request port forwarding from the outside world. One of the script names, possibly from a previous iteration, was open_back_door.sh, which seems to be sort of lampshading the whole thing.

It’s presumably intended to be a great troubleshooting tool, when a customer is stuck and needs help, to be able to visit a web url to enable remote access for a Zyxel tech. The problem is that the Zyxel NAS already has an authentication bypass flaw, and while it’s been patched, it wasn’t patched very well, making this whole scheme accessible without authentication, just by slapping /favicon.ico onto the url. The additional problems have been fixed in a more recent update.

Russian Secure Phablet?

A Twitter thread tells the story of a Russian secure device, left behind on the back of a bus in England. That’s an interesting premise. But the thread continues, that ‘conveniently the owner also left a briefcase with design notes, architecture, documentation, implementation, marketing material and internal Zoom demos about “trusted” devices too!’ OK, now this has to either be a fanfic, or a fell-off-the-back-of-a-truck story. There’s some convincing looking screenshots, and even rom dumps. What’s going on here?

Nobody knew how the devices worked, conveniently the owner also left a briefcase with design notes, architecture, documentation, implementation, marketing material and internal Zoom demos about "trusted" devices too! We'd all have been lost without those. https://t.co/LN7cTybxOV pic.twitter.com/j5OCHprSie

— hackerfantastic.x (@hackerfantastic) June 11, 2024

The most likely explanation is that somebody got their hands on a trove of data on these devices, and wanted to dump it online with a silly story. But fair warning, don’t trust any of the shared files. Who knows what’s actually in there. Taking a look at something untrusted like this is an art in itself, best done with isolated VMs and burner machines, maybe a Linux install you don’t mind wiping?

Bits and Bytes

Buskill just published their 8th warrant canary, a cryptographically signed statement attesting that they have not been served any secret warrants or national security letters that would undermine the trustworthiness of the Buskill project or code. In addition to a good cryptographic signature, this canary includes a handful of latest news headlines in the signed material, proving it is actually a recently generated document.

[Aethlios] has published Reset Tolkien, an open source tool for finding and attacking a very specific sort of weakness in time-based tokens. The targeted flaw is a token generated from an improper randomness source, like the current time. If the pattern can be found, a “sandwich attack” can narrow down the possible reset codes by requesting a reset code for a controlled account, requesting one for the target account, and then once again for the controlled account. The target code must come between the two known codes.
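
Here’s a minimal sketch of why the sandwich works, assuming the worst-case token generator the tool is designed to catch (one seeded purely by the request timestamp); real targets are messier, but the bracketing idea is the same.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* A deliberately weak reset token: fully determined by the request time. */
static unsigned weak_token(time_t t) {
    srand((unsigned)t);
    return (unsigned)rand();
}

int main(void) {
    time_t t1 = time(NULL);   /* attacker requests a reset for their own account */
    time_t tv = t1 + 1;       /* victim's reset is requested in between */
    time_t t2 = t1 + 2;       /* attacker requests a second reset for themselves */

    unsigned victim = weak_token(tv);   /* unknown to the attacker... */

    /* ...but it must come from a timestamp between the two known requests,
     * so only this tiny window needs to be searched. */
    for (time_t t = t1; t <= t2; t++)
        if (weak_token(t) == victim)
            printf("victim token %u recovered from timestamp %ld\n",
                   victim, (long)t);
    return 0;
}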

And finally, TPM security is hard. This time, the Trusted Platform Module can be reset by reclaiming the GPIO pins connected to it, and simulating a reboot by pulling the reset pin. This results in the TPM possibly talking to an application when it thinks it is talking to the CPU doing boot decryption. In short, it can result in compromised keys. Thanks to [char] from Discord for sending this one in!

FLOSS Weekly Episode 787: VDO Ninja — It’s a Little Bit Hacky

12 June 2024 at 23:00

This week Jonathan Bennett and Katherine Druckman chat with Steve Seguin about VDO.Ninja and Social Stream Ninja, tools for doing live WebRTC video calls, recording audio and video, wrangling comments on a bunch of platforms, and more!

https://docs.vdo.ninja/
https://docs.vdo.ninja/steves-helper-apps
https://docs.vdo.ninja/sponsor

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: Recall, Modem Mysteries, and Flipping Pages

7 June 2024 at 14:00

Microsoft is racing to get into the AI game as part of Windows 11 on ARM, calling it Copilot+. It’s an odd decision, but clearly aimed at competing with the Apple M series of MacBooks. Our focus of interest today is Recall, a Copilot+ feature that not only has some security problems, but also triggers a sort of visceral response from regular people: My computer is spying on me? Eww.

Yes, it really sort of is. Recall is a scheme to take screen shots of the computer display every few seconds, run them through character recognition, and store the screenshots and results in a database on the local machine’s hard drive. There are ways this could be useful. Can’t remember what website had that recipe you saw? Want to revisit a now-deleted tweet? Is your Google-fu failing to turn up a news story you read last week? Recall saw it, and Recall remembers. But what else did Recall see? Every video you watched, every website you visited, and probably some passwords and usernames you typed in.

Now to their credit, the folks at Microsoft knew this could be a problem, and took some steps to keep this data safe. The huge win here is that Windows 11 with Copilot+ will run an Azure AI instance right on the laptop, to do all the AI processing without sending any private data up to the cloud. And then on top of that, Recall data is encrypted at rest, which Microsoft claims is enough to keep attackers and other users out. The problem there is that encryption at rest only protects data from a physical, offline attack. And even that is incredibly hard to get right.

So let’s cut to the chase. How bad is it? [Kevin Beaumont] took a look, and the results aren’t pretty. The description sounded like Recall uses a per-user encryption system like EFS to keep the data safe. It’s not. Any admin user can access all the Recall databases on the machine. And of course, malware that gets installed can access it too. There’s already a tool available to decode the whole database, TotalRecall.

Recall is only planned to run on these Copilot+ devices, and can be turned off by the end user. Some of the security problem can be fixed, like the cross-user availability of the data. It’s going to be much harder to fix the privacy and malware issues.

Modem Mystery

This is sort of a two-part story, starting with a real mystery. [Sam Curry] was doing some research on a vulnerability, and noticed something odd when sending HTTP requests from his home network to a test server. Each HTTP request was sent a second time, from a separate IP address. That’s odd. A bit of investigation discovered that these were HTTP packets that were sent through his cable modem, and the mystery IP was a DigitalOcean VM. The culprit was a compromised cable modem, but it’s still an open mystery, what exactly the purpose was of mirroring HTTP traffic this way. [Sam] went to his cable company to request a new modem, and turned the compromised unit over in the exchange, ruining his chance to figure out exactly what was on it.

The second part of this story is that curiosity about exactly how malware ends up on a modem eventually led [Sam] down the rabbit hole of Cox APIs and TR-069, the protocol that allows an ISP to manage devices at scale. The Cox API used a reverse proxy that could be tricked into showing a Swagger-ui page, nicely documenting all the API endpoints available. That API had a quirk. Send the same request multiple times, and it’s eventually accepted without authorization. That was the mother lode, allowing for arbitrary access to customer devices via the TR-069 support.

So mystery solved? Was this how [Sam]’s modem was hacked? Cox responded very rapidly to the vulnerability report, closing the problematic APIs within hours. But the vulnerability just wasn’t old enough. The original modem malware was in 2021, and this API didn’t launch til 2023. The mystery continues.

Linux Flipping Pages in the Wild

CISA has added another two vulnerabilities to their list of known-exploited vulnerabilities. One is the Check Point arbitrary file leak that we covered last week, and the other is the Flipping Pages vulnerability in the Linux kernel, made public back in March, with the fix predating the announcement, in February.

The core bug itself is pretty simple. A NetFilter chain in the kernel can return one of multiple values, to indicate how to handle an incoming packet. The NF_DROP target drops the packet, frees the memory, and returns a user-supplied error value. The quirk here is that errors are negative values, and the rest of the NetFilter actions are positive values. And NetFilter allows a user to set that error value as a positive value, enabling an odd state where the packet is both dropped and accepted at the same time. The specific bug is a double free, which enables the Dirty Pagetable technique to overwrite arbitrary memory and trigger elevation.

That vulnerability became more important to get patched, once a Proof of Concept (PoC) was published, allowing for easy use. And it’s apparently getting used, given the CISA announcement.

Binding Android

Up next is a nice walk-through of an Android vulnerability making use of the Binder Inter-Process Communication (IPC) device. As all the apps on Android run sandboxed, Binder is both an important part of the OS, and very accessible to apps — and hence not a good place for a vulnerability.

On the other hand, Binder is fairly complicated. It does memory management, connects multiple processes, transfers arbitrary data, and just generally has a difficult dance to do. It’s not surprising that there are vulnerabilities in that code. This one is a logic flaw in error handling, where an error can trigger the cleanup function to clean up unallocated objects. That results in a dangling pointer, which can be used for all sorts of things.

The first step in actual exploitation is to use the dangling pointer to leak a few bytes from kernel heap memory. That data can be used to build a fake binder object in the space, and then a delete function called on that fake object results in an “unlink”, or a way to modify kernel pointers. That unlink can be abused to build an arbitrary read primitive, by unlinking a fake pointer. The last trick is a cross-cache attack, where multiple objects are created and freed, to trick the allocator into putting something important under the dangling pointer. Putting it together, it allows a process to overwrite its own credentials struct, setting its ID to root.

Make it a 9.8

When a company typos their latest CVE score, reporting it a full point worse than it is, what’s a researcher to do? In this case, put the time in to find a way to make the severity rating worth it. It’s a Remote Code Execution in the Progress Report server. The initial vulnerability report listed it as a post-authentication RCE.

The report server takes reports, and turns them into pretty graphs and charts. Those reports are in the form of a serialized stream. And yes, the flaw is a deserialization attack, a ridiculously deep chain that finally ends in loading an arbitrary .NET type, which leads easily to a process start command.

The vulnerability requires some sort of authenticated user to trigger. We’re looking for pre-auth exploitation here. How about a first-run endpoint that doesn’t have any authentication code applied, and doesn’t go away after the server is configured? It’s not the first software to fall to this trap, and won’t be the last.

Bits and Bytes

The Chrome Root Store is kicking out a trusted Certificate Authority. It doesn’t happen often, but one of the tools to keep CAs behaving is the threat of removing them from the browser certificate store. “e-commerce monitoring GmbH” has been trusted for right around three years, and was fraught with problems from the very beginning.

Tavis has the rest of the Libarchive story. Why does Libarchive implement the RarVM, and why did Rar use a bytecode VM? Historical reasons.

The libarchive e8 vulnerability is actually really cool, but the ZDI advisory doesn't explain why it's so wild lol. For some reason, I know about RAR filters, so let me provide the background. 🧵 1/n

— Tavis Ormandy (@taviso) June 6, 2024

The Internet Archive is under a Distributed Denial of Service (DDoS) attack. It’s unclear exactly where the attack is coming from, but it is making the archive and the Wayback Machine a bit spotty to access these days. And as the post says, it’s not just cyber-bullies trying to mess with our favorite library.

Extra Credit: Crypto is hard. This one takes a bit of time to work through and understand, but the gist is that one of NIST’s cryptography recommendations had a bit of an oversight in it. The scenario is that Alice and Bob both provide key material to produce an agreed-upon shared key. When one party gets to pick some of the initialization data, as well as one of the keys used for this multi-key system, careful selection can lead to way too much control over the final produced key. The example given is an encrypted messaging app that has a sneaky backdoor. This was discovered, never actually implemented that anyone knows of, and has been fixed in the NIST recommendation.

FLOSS Weekly Episode 786: What Easy Install Script?

5 June 2024 at 23:00

This week Jonathan Bennett and Rob Campbell chat with Brodie Robertson about Linux, Wayland, YouTube, Microsoft’s Windows Recall and more. Is Linux ready for new users? Is Recall going to kick off a migration? All this and more!

Main Channel: https://www.youtube.com/@BrodieRobertson

Podcast: https://www.youtube.com/@TechOverTea

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us!

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: Operation Endgame, Appliance Carnage, and Router Genocide

31 May 2024 at 14:00

This week saw an impressive pair of takedowns pulled off by law enforcement agencies around the world. The first was the 911 S5 botnet, which the FBI is calling “likely the world’s largest botnet ever”. Spreading via fake free VPN services, 911 was actually a massive proxy service for crooks. Most recently, this service was operating under the name “Cloud Router”. As of this week, the service is down, the web domain has been seized, and the alleged mastermind, YunHe Wang, is in custody.

The other takedown is interesting in its own right. Operation Endgame seems to be psychological warfare as well as actual arrests and seizures. The website features animated shorts, a big red countdown clock, and a promise that more is coming. The actual target was the ring that manages malware droppers — sort of the middlemen between an initial shellcode and doing something useful with a compromised machine. This initial volley includes four arrests, 100+ servers disrupted, and 2,000+ domains seized.

The arrests happened in Armenia and Ukraine. The messaging around this really seems to be aimed at the rest of the gang that’s out of reach of law enforcement for now. Those criminals may still be anonymous, or operating in places like Russia and China. The unmistakable message is that this operation is coming for the rest of them sooner or later.

Checkpoint CloudGuard

And now we turn to the massive number of security and VPN appliances that got detailed exploit write-ups this week. And up first is the Watchtowr treatment of the Check Point CloudGuard appliance, and the high-priority information exposure CVE. This vulnerability already has a patch, so the obvious starting point is patch diffing. Thanks to a new log message in the patch, it’s pretty clear that this is actually a path traversal attack.

The vulnerable endpoint is /clients/MyCRL, which is a file download endpoint used for fetching updates to the VPN client. Based on Check Point’s CVSS string regarding this vulnerability, that endpoint is accessible without any authentication. The thing about this endpoint is that it takes an argument, and returns the file requested based on that argument. There is a list of allowed files and folders, but the check on incoming requests uses the strstr() C function, which simply checks whether one string contains a second.
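
To see why strstr() is the wrong tool for an allow-list, here’s a minimal sketch; the function and file names are simplified stand-ins for the behavior described in the write-up, not Check Point’s actual code.

#include <stdio.h>
#include <string.h>

static const char *allowed[] = { "CSHELL/", "about.html", NULL };

static int is_allowed(const char *requested) {
    for (int i = 0; allowed[i]; i++)
        if (strstr(requested, allowed[i]))   /* "contains", not "is" or "starts with" */
            return 1;
    return 0;
}

int main(void) {
    /* Contains the substring "CSHELL/", so the check passes, but the path
     * it ultimately resolves to is /etc/shadow. */
    const char *req = "aCSHELL/../../../../../../../etc/shadow";
    printf("%s -> %s\n", req, is_allowed(req) ? "allowed" : "blocked");
    return 0;
}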

One of the entries on this list was the CSHELL/ directory, which is the last piece of the puzzle to make for a nasty exploit. Send a POST to /clients/MyCRL requesting aCSHELL/../../../../../../../etc/shadow and the shadow password file is returned. This gives essentially arbitrary file read due to path traversal on a public endpoint.

Interestingly, the vendor states that the issue only affects devices with username-and-password authentication enabled, and not with the (much stronger) certificate authentication enabled.

There’s some definite weirdness going on with how the CVSS score was calculated, and how Check Point opted to disclose this. Cross-referencing from another vendor’s statement, it becomes clear that the fastest way to turn this into a full exploit is by grabbing the password hashes of users, and any legacy local users with password-only accounts can be mined for weak passwords. But make no mistake, this is an unauthorized arbitrary file read vulnerability, and the hash capture is just one way to exploit it. Attacks are ongoing, and the fix is available.

Fortinet FortiSIEM

One of my most/least favorite things to cover is trivial vulnerability patch bypasses. There’s nothing that disturbs and amuses like knowing that a Fortinet command injection in the NFS IP address was rediscovered in the NFS mount point field of the exact same endpoint.

If the botched fix wasn’t bad enough, the public disclosure was almost worse. There was over a month of lag between the disclosure and reproduction of the reported issue. Then Fortinet silently rolled out patches a couple weeks later, with no disclosure at all. The CVEs were eventually released, but then claimed to be a duplicate, and published in error. And now finally the whole story is available.

Ivanti Landesk

And rounding out the appliance vulnerabilities is this one in the Ivanti Landesk, where a data flow can reach a strncpy() call that takes user-supplied input for the number of bytes to copy, with a fixed-size destination buffer. Overflowing that buffer allows for function pointer overwrite, and writing even more data into this area eventually reaches a read-only section of memory. The write attempt triggers an exception, which bounces through a few functions, and eventually calls a pointer that has already been overwritten in the attack. A bit of Return Oriented Programming (ROP) magic, and the shellcode is marked executable and jumped into, for arbitrary code execution.
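
The bug class is worth spelling out, since strncpy() looks safe at a glance. A minimal sketch, with names and struct layout that are purely illustrative rather than Ivanti’s actual code: the length argument has to reflect the destination’s size, and here it comes straight from the request.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

struct handler {
    char buf[64];
    void (*callback)(void);   /* sits after the buffer; an overflow can reach it */
};

static void handle_request(struct handler *h, const char *data, uint32_t user_len) {
    /* BUG: user_len is attacker-controlled and never clamped to sizeof(h->buf).
     * A request with user_len > 64 writes past buf, over callback and beyond. */
    strncpy(h->buf, data, user_len);
}

int main(void) {
    struct handler h = {0};
    handle_request(&h, "benign", 6);   /* fine here; an attacker sends far more */
    printf("%s\n", h.buf);
    return 0;
}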

The flaw does require a low-privilege user account, and the vulnerable code hasn’t been in the product since the 2021.1 release. Ivanti has issued a CVE, but since the last vulnerable release is outside its support window, there won’t be any patches published.

Bricking 600,000 Routers

This one is just odd. Last year, the US ISP Windstream had about 600,000 DSL routers crash and permanently die over three days. The theory at the time was that this was a flubbed firmware upgrade, but researchers from Lumen did some quick detective work, and managed to snag malicious binaries that were actively flowing to the Windstream network.

It turns out that those routers were infected by the Chalubo malware, although the initial infection vector is still unknown. Given the circumstances, it’s likely due to an internal breach at Windstream. Chalubo is designed to enable remote access, and can be used to launch DDoS attacks, among other capabilities. It’s not typical for this malware to immediately wipe devices, leading to the speculation that the malware was used for plausible deniability, to shield the actual perpetrators. This has the signs of an insider attack by a disgruntled admin at Windstream, though there isn’t any hard evidence at the moment.

Bits and Bytes

Like a bad penny, North Korea has come back up with the FakePenny malware campaign. In Microsoft’s fun APT naming scheme, this is the work of Moonstone Sleet, whose usual strategy is to backdoor popular software and spread it however they can. In a major ransomware deployment, Moonstone Sleet requested $6.6 million in Bitcoin, which is quite the step up from previous campaigns.

And lastly, Ticketmaster seems to have a 560 million user data breach on its hands. Data brokers on the Breach Forums claim to have this in a 1.3 terabyte database, and are willing to part with it for merely half-a-million dollars. There is a bit of a backstory here, as Breach Forums is run by ShinyHunters, and the whole operation was shut down by the FBI a couple weeks ago. That didn’t last long, and it looks like they’re back, and back in business.

FLOSS Weekly Episode 785: Designing GUIs and Building Instruments with EEZ

29 May 2024 at 23:00

This week Jonathan Bennett chats with Dennis and Goran about EEZ, the series of projects that started with an Open Source programmable power supply, continued with the BB3 modular test bench tool, and continues with EEZ Studio, a GUI design tool for embedded devices.

EEZ hardware:

https://www.envox.eu/bench-power-supply/introduction/
https://www.envox.eu/eez-bb3/

https://hackaday.io/projects/hacker/90785

Build Yourself an Awesome Modular Power Supply

EEZ software:

https://www.envox.eu/studio/studio-introduction/

Goran’s EEZ related work:

https://www.envox.eu/2021/12/22/production-testing-automation-for-open-hardware-ulx3s-board

https://intergalaktik.eu/projects/stm32-ulx3s-module

https://intergalaktik.eu/projects/bb3-cm4
https://intergalaktik.eu/news/bb3-cm4-emc-h7

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us!

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: Drama at the C-Level, Escape Injection, and Audits

24 May 2024 at 14:00

There was something of a mystery this week, with the c.root-servers.net root DNS server falling out of sync with its 12 siblings. That’s odd in itself, as these are the 13 servers that keep DNS working for the whole Internet. And yes, that’s a bit of a simplification, it’s not a single server for any of the 13 entities — the C “server” is actually 12 different machines. The intent is for all those hundreds of servers around the world to serve the same DNS information, but over several days this week, the “C” servers just stopped pulling updates.

The most amusing/worrying part of this story is how long it took for the problem to be discovered and addressed. One researcher cracked a ha-ha-only-serious sort of joke, that he had reported the problem to Cogent, the owners of the “C” servers, but they didn’t “seem to understand that they manage a root server”. The problem first started on Saturday, and wasn’t noticed til Tuesday, when the servers were behind by three days. Updates started trickling late Tuesday or early Wednesday, and by the end of Wednesday, the servers were back in sync.

Cogent gave a statement that an “unrelated routing policy change” both affected the zone updates, and the system that should have alerted them to the problem. It seems there might be room for an independent organization monitoring some of this critical Internet infrastructure.

ANSI Injection One

On to vulnerabilities, there were a pair of interesting ANSI escape sequence injection flaws discovered this week. ANSI escape codes are strings sent to the terminal that don’t get directly written to the screen, but instead instruct the terminal how to write to the screen.

Just for example, to get green text on the terminal, you can run:

printf 'Hello \033[32mTHIS IS GREEN\033[0m\007'

The first vulnerability was in WinRAR, in the handling of the comments field of a RAR file. You may already see where this is going, but the problem is that ANSI escape sequences were blindly passed through as part of a comment, when doing something like listing the contents of a directory. This would be particularly useful to overwrite the file name to be extracted, to hide an executable or even a path traversal attack. It’s worth noting that the rar and unrar utilities have had, and have patched, similar problems.

ANSI Injection Two

The second ANSI injection is a bit trickier. On the Mac, terminals like iTerm2 can register as the default handler for URIs, like x-man-page://. The issue here is that some of those URIs aren’t necessarily safe, like the man link above, which supports the -P pager option. That flag specifies which paging utility to use to show multiple pages of text, like less, more, etc. Opening that from a browser will at least show a warning before launch. ANSI codes let an attacker be quite sneaky, hiding the full text inside an in-terminal clickable link. The terminal won’t warn the user about what they’re about to do, so instant execution on click. Clever.

QNAPping At The Wheel

QNAP has had its share of problems over the years. The fine folks at Watchtowr decided to pitch in and try to find a few more, and then do a responsible disclosure to try to fix them the right way. And they didn’t disappoint. The unofficial audit found fifteen issues, but this write-up focuses on CVE-2024-27130, an unauthenticated overflow leading to Remote Code Execution (RCE).

Given the history of vulnerabilities, this shouldn’t be a big surprise, but the source of QNAP OS is a mess. The underpinnings are a Linux system, but the web interface on top of that is a tangle of a custom web server written in C, CGI scripts also written in C, strange leftover code bits in languages like PHP, and at least one code snippet that looks suspiciously like a backdoor.

And that’s all before we get to the real vulnerability. The cgi-bin/filemanager/share.cgi endpoint segfaults when providing a valid “ssid” and then an overlong file name. Inside the vulnerable code, it’s a simple strcpy() call, that copies an arbitrary, user-provided string into a fixed-length buffer. Write past the end of it, and you overwrite local variables, and then the return address, too. And because of how returns work, you also get to set some registers, like r0, the traditional first argument register. So… what if you just set the return address to the system() function, and put a pointer to shellcode in r0? It’s pretty much that easy, except a real exploit would also need to overcome Address Space Layout Randomization (ASLR). Watchtowr researchers opted to leave that step out, to hopefully give QNAP users a few extra days before attacks happen in the wild.

Boost Got Audited, Too

And in a win for the Open Source way, the Boost C++ library came through an audit with mostly flying colors. The most severe finding was a CRLF injection in HTTP Headers, that’s only ranked medium severity. There are four low severity flaws, and two that only rank as informational. For the breadth of code that Boost covers, that seems pretty impressive. The entire report is available.

Where’d that come from?

The Justice AV Solution Viewer is an interesting new target for malware. It was discovered that the official javs.com website was hosting a backdoored installer for this software. The installer was signed by another valid signing key, and included an fffmpeg.exe binary that gets up to no good on install.

The malware then proceeds to steal authentication cookies and passwords. As this software is primarily used in courtrooms, it’s unclear what the exact motivation is. One possibility is that the viewer software is used by lawyers outside the courtroom, and a law office could be a very interesting target. For any computers infected, the recommendation is to re-image, and then also do a mass password rotation, to invalidate any stolen credentials.

Phishing Fire Drills

[Matt Linton], a “Chaos Specialist” at Google, has some thoughts about phishing, specifically the style of phishing tests that get routinely aimed at users at larger companies. The TL;DR here is that phishing tests are a bad idea, and we should collectively stop doing them. A powerful argument he makes is that the federally mandated phishing tests require existing anti-phishing protections to be disabled. A real attack is guaranteed not to look like the tests. And the data bears this out. Phishing tests are measurably counterproductive.

His suggestion is to stop doing phishing tests, and start doing phishing drills. Just an email to remind users that phishing is a thing, with links to more information, and instructions on what to do when the real thing comes along. And just for fun, take a look at Google’s slick phishing quiz, and see how you score. Let us know in the comments!

Bits and Bytes

It’s time again to update your GitLab installs. There’s a handful of medium severity bugs, as well as one high severity fixed with this round of updates. That last one is a weakness in the GitLab VS code editor, that can enable Cross-Site Scripting attacks. It’s unclear if that results in information exfiltration, or full account compromise, or perhaps the information loss can lead to compromise. Regardless, it’s worth pulling out your console and running the update.

LastPass has finally fixed one of its longstanding weak points, now encrypting URLs in your secure vault. When the service first launched, URLs were deemed too computationally expensive to encrypt. In the handful of security breaches at LastPass since then, it’s become very clear that unencrypted URLs were a terrible choice, as they gave that much more information away about users. Good for LastPass for continuing to work to right the ship.

And finally, you should go check out the FLOSS Weekly interview from earlier this week! We interviewed François Proulx, and talked about Poutine, a project from Boost Security, that scans code bases for vulnerable CI pipelines. If you work with GitHub actions or GitLab pipelines, it’s worth checking out!

FLOSS Weekly Episode 784: I’ll Buy You A Poutine

22 May 2024 at 23:00

This week Jonathan Bennett and Dan Lynch talk with François Proulx about Poutine, the Open Source security scanner for build pipeline vulnerabilities. This class of vulnerability isn’t as well known as it should be, and threatens to steal secrets, or even allow for supply chain attacks in FLOSS software.

Poutine does a scan over an organization or individual repository, looking specifically for pipeline issues. It runs on both GitHub and GitLab, with more to come!

https://boostsecurity.io/blog/unveiling-poutine-an-open-source-build-pipelines-security-scanner
https://github.com/boostsecurityio/poutine/blob/main/README.md
https://www.youtube.com/watch?v=DyioLvIVur4

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us!

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: The Time Kernel.org Was Backdoored and Other Stories

17 May 2024 at 14:00

Researchers at ESET have published a huge report on the Ebury malware/botnet (pdf), and one of the high profile targets of this campaign was part of the kernel.org infrastructure. So on one hand, this isn’t new news, as the initial infection happened back in 2011, and was reported then. On the other hand, according to the new ESET report, four kernel.org servers were infected, with two of them possibly compromised for as long as two years. That compromise apparently included credential stealing or password cracking.

The Ebury attackers seem to gain initial access through credential stuffing — a huge list of previously captured credentials are tried one at a time. However, once the malware has a foothold in the network, a combination of automated and manual steps are taken to move laterally. The most obvious is to grab any private SSH keys from that system, and try using them to access other machines on the local network. Ebury also replaces a system library that gets called as a part of sshd, libkeyutils.so. This puts it in a position to quietly capture credentials.

For a targeted attack against a more important target, the people behind Ebury seem to go hands-on-keyboard, using techniques like Man-in-the-Middle attacks against SSH logins on the local network using ARP spoofing. In this case, someone was doing something nasty.

And that doesn’t even start to cover the actual payload. That’s nasty too, hooking into Apache to sniff for usernames and passwords in HTTP/S traffic, redirecting links to malicious sites, and more. And of course, the boring things you might expect, like sending spam, mining for Bitcoin, etc. Ebury isn’t exactly easy to notice, either, since it includes a rootkit module that hooks into system functions to hide itself. Thankfully there are a couple of ways to get a clean shell to look for the malware, like using systemd-run or launching a local shell on the system console.

And the multi-million dollar question: Who was behind this? Sadly we don’t know. A single arrest was made in 2014, and recovered files implicated another Russian citizen, but the latest work indicates this was yet another stolen identity. The rest of the actors behind Ebury have gone to great lengths to remain behind the curtain.

The Great 12 Second Ethereum Heist

Ethereum moved from a proof-of-work to proof-of-stake model back in 2022. That had some interesting ramifications, like making the previously random block times a predictable 12 second cadence. With a change that big, there were bound to be some bugs around the edges. In this case, the edge is the MEV-Boost algorithm, which a pair of brothers managed to manipulate into giving them $25 million worth of Ethereum. The DOJ went after the two for wire fraud, and made a point that the hack happened in just 12 seconds.

That seems to be the point. The trick here was to make a bunch of worthless transactions look really appealing to the MEV-Boost algorithm, while also running the validator that actually processed the block. Being in this privileged position on both sides of the block creation allowed for tampering with the transaction. It’s definitely not what the Ethereum network needed. And pro tip: If you’re going to commit wire fraud, try not to leave a search engine history detailing your money laundering and extradition avoidance plans.

WiFi and SSID Confusion

There’s a new WiFi issue that seems to be an actual vulnerability in the 802.11 spec itself. The key is a man-in-the-middle attack on the raw wireless traffic, which is fairly easy to pull off. Some of the negotiation process happens in the clear, so the attacker can manipulate that data to rewrite the SSID in the initial connection attempt. This results in the legitimate access point sending back a message suggesting that the victim connect to a different AP. There is one important point to make here: the “good” and “bad” networks can use different security types, but do need to use the same password. So this is of limited use, though there are certainly cases where it could be misused.

16 Year Long Tale

Turn back your clocks to 2008, and remember CVE-2008-0166, a vulnerability in the Debian OpenSSL packages. That was a Debian patch to fix a usage of uninitialized memory in the OpenSSL random number generator. Yes, it was accessing uninitialized memory — to use it as seed for the random number generator. The result is that Debian systems from 2006 to 2008 could only generate a few thousand distinct keys. What [Hanno Böck] realized was that the DKIM email specification was released in 2007. How many DKIM keys still in use today were generated on Debian before the bug was fixed? At least 855, or about a quarter of a percent of the domains checked. Oops. That’s a very long tail indeed.
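
For the curious, here's roughly what such a check looks like. This is not [Hanno]'s tooling, and the real check fingerprints the actual key material against the published openssl-blacklist lists; this sketch just fetches a domain's DKIM record with dnspython and compares a hash of the published key against a local blocklist, whose file format here is entirely hypothetical.

# dkim_weakkey_check.py: a rough sketch of the idea, not the real tooling.
import base64
import hashlib
import dns.resolver  # pip install dnspython

def dkim_key(selector, domain):
    name = selector + "._domainkey." + domain
    answer = dns.resolver.resolve(name, "TXT")
    record = b"".join(answer[0].strings).decode()
    # The record looks like: v=DKIM1; k=rsa; p=<base64 key>
    tags = dict(part.strip().split("=", 1) for part in record.split(";") if "=" in part)
    return base64.b64decode(tags["p"])

def is_known_weak(key, blocklist_path="weak_key_hashes.txt"):
    # Hypothetical blocklist: one SHA-256 hex digest of a weak key per line.
    digest = hashlib.sha256(key).hexdigest()
    with open(blocklist_path) as f:
        return digest in {line.strip() for line in f}

key = dkim_key("default", "example.com")
print("weak!" if is_known_weak(key) else "not in blocklist")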

Bits and Bytes

Care to shut down a website? If that site is protected by a vulnerable Web Application Firewall, it’s fairly easy to include a comment that looks suspicious enough to trick the WAF into blocking its own site. The rule that’s triggered is actually intended to keep internal error messages from spilling out onto the open web. It’s a somewhat tamer version of the database deletion trick we covered a few weeks ago.

Sprocket Security did the hard work for us, patch diffing the fix for CVE-2024-3400 in Palo Alto GlobalProtect. This write-up is more about the actual approach to getting a vulnerable copy of the software to make the comparison against. And if you wondered, it’s command injection, due to shell=True where it shouldn’t be.
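
That bug class is worth a ten-second illustration. The snippet below is not the GlobalProtect code, just the generic Python shape of the mistake: interpolating attacker-controlled input into a shell=True call, versus passing an argument list.

# shelltrue_demo.py: the generic bug class, not the GlobalProtect code.
import subprocess

session_id = "abc; id"  # imagine this value arriving from the attacker

# Vulnerable: the whole string goes to /bin/sh, so the ';' starts a second
# command and 'id' runs with whatever privileges the caller has.
subprocess.run("echo session " + session_id, shell=True)

# Safer: pass an argv list, so the value stays one literal argument.
subprocess.run(["echo", "session", session_id])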

And finally, there was a VirtualBox vulnerability used in the latest Pwn2Own competition, and the details are available. A bit clearing routine for the virtual VGA device has a memory handling error in it, allowing manipulation of memory outside the intended bounds. In the competition, it was used for privilege escalation inside the VM, but it has potential for full VM escape in some cases.

FLOSS Weekly Episode 783: Teaching Embedded with the Unphone

15 May 2024 at 23:00

This week Jonathan Bennett and Rob Campbell talk with Gareth Coleman and Hamish Cunningham! It’s all about the Unphone, an open source handset sporting an ESP32, color touchscreen, and LoRa radio. It’s open hardware, and used in a 3rd year university course to teach comp sci majors about hardware and embedded development.

https://unphone.net/

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us!

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: TunnelVision, Scarecrows, and Poutine

10 May 2024 at 14:00

There’s a clever “new” attack against VPNs, called TunnelVision, done by researchers at Leviathan Security. To explain why we put “new” in quotation marks, I’ll just share my note-to-self on this one written before reading the write-up: “Doesn’t using a more specific DHCP route do this already?” And indeed, that’s the secret here: in routing, the more specific route wins. I could not have told you that DHCP option 121 is used to set extra static routes, so that part was new to me. So let’s break this down a bit, for those that haven’t spent the last 20 years thinking about DHCP, networking, and VPNs.

So up first, a route is a collection of values that instruct your computer how to reach a given IP address, and the set of routes on a computer is the routing table. On one of my machines, the (slightly simplified) routing table looks like:

# ip route
default via 10.0.1.1 dev eth0
10.0.1.0/24 dev eth0

The first line there is the default route, where “default” is a short-hand for 0.0.0.0/0. That indicates a network using the Classless Inter-Domain Routing (CIDR) notation. When the Internet was first developed, it was segmented into networks using network classes A, B, and C. The problem there was that the world was limited to just over 2.1 million networks on the Internet, which has since proven to be not nearly enough. CIDR came along, eliminated the classes, and gave us subnets instead.

In CIDR notation, the value after the slash is commonly called the netmask, and indicates the number of bits that are dedicated to the network identifier, and how many bits are dedicated to the address on the network. Put more simply, the bigger the number after the slash, the fewer usable IP addresses on the network. In the context of a route, the IP address here is going to refer to a network identifier, and the whole CIDR string identifies that network and its size.

Back to my routing table, the two routes are a bit different. The first one uses the “via” term to indicate we use a gateway to reach the indicated network. That doesn’t make any sense on its own, as the 10.0.1.1 address is on the 0.0.0.0/0 network. The second route saves the day, indicating that the 10.0.1.0/24 network is directly reachable out the eth0 device. This works because the more specific route, the one with the bigger netmask value, takes precedence.
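
If you want to convince yourself that the most specific route really does win, Python's ipaddress module makes it easy to model. This toy lookup does roughly what the kernel's routing decision does with the table above: collect every route that contains the destination, then pick the one with the longest prefix.

# route_lookup.py: a toy longest-prefix-match over the table above.
import ipaddress

routes = {
    ipaddress.ip_network("0.0.0.0/0"): "via 10.0.1.1 dev eth0",  # default route
    ipaddress.ip_network("10.0.1.0/24"): "dev eth0",              # local network
}

def lookup(addr):
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific wins
    return addr + ": " + routes[best]

print(lookup("10.0.1.42"))  # hits the /24, goes straight out eth0
print(lookup("8.8.8.8"))    # only the default matches, so via the gateway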

The next piece to understand is DHCP, the Dynamic Host Configuration Protocol. That’s the way most machines get an IP address from the local network. DHCP not only assigns IP addresses, but it also sets additional information via numeric options. Option 1 is the subnet mask, option 6 advertises DNS servers, and option 3 sets the local router IP. That router is then generally used to construct the default route on the connecting machine — 0.0.0.0/0 via router_IP.

Remember the problem with the gateway IP address belonging to the default network? There’s a similar issue with VPNs. If you want all traffic to flow over the VPN device, tun0, how does the VPN traffic get routed across the Internet to the VPN server? And how does the VPN deal with the existence of the default route set by DHCP? By leaving those routes in place, and adding more specific routes. That’s usually 0.0.0.0/1 and 128.0.0.0/1, neatly slicing the entire Internet into two networks, and routing both through the VPN. These routes are more specific than the default route, but leave the router-provided routes in place to keep the VPN itself online.

And now enter TunnelVision. The key here is DHCP option 121, which sets additional CIDR notation routes. The very same trick a VPN uses to override the network’s default route can be used against it. Yep, DHCP can simply inform a client that networks 0.0.0.0/2, 64.0.0.0/2, 128.0.0.0/2, and 192.0.0.0/2 are routed through malicious_IP. You’d see it if you actually checked your routing table, but how often does anybody do that, when not working a problem?
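
To make the mechanism concrete, here's roughly how those routes get packed into option 121 on the wire, per RFC 3442: one byte of prefix length, then only the significant octets of the destination, then the four bytes of the router. This is a sketch of the encoding, not a working rogue DHCP server, and the attacker address is made up.

# option121.py: pack classless static routes as RFC 3442 describes.
import ipaddress

def encode_option_121(routes):
    out = bytearray()
    for destination, router in routes:
        net = ipaddress.ip_network(destination)
        significant = (net.prefixlen + 7) // 8       # only the octets that matter
        out.append(net.prefixlen)
        out += net.network_address.packed[:significant]
        out += ipaddress.ip_address(router).packed
    return bytes(out)

# The TunnelVision trick: four /2 routes, each more specific than both the
# default route and the VPN's /1 routes, all pointing at the attacker.
evil = [(str(n) + ".0.0.0/2", "192.168.1.254") for n in (0, 64, 128, 192)]
print(encode_option_121(evil).hex(" "))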

There is a CVE assigned, CVE-2024-3661, but there’s an interesting question raised: Is this a vulnerability, and in which component? And what’s the right solution? To the first question, everything is basically working the way it is supposed to. The flaw is that some VPNs make the assumption that a /1 route is a bulletproof way to override the default route. The solution is a bit trickier.

Wireguard on Linux already has a very robust solution that users can opt in to, the use of network namespaces to further isolate traffic inside and outside the VPN. Another approach is to simply ignore DHCP option 121 like Android does, making it one of the few unaffected platforms. It seems reasonable that platforms that do need option 121 could re-use the existing trusted vs untrusted designation for networks, only honoring option 121 for trusted networks.

And one final thought before moving on, this can really be a problem on semi-trusted networks, where an adversary could set up a malicious rogue DHCP server. Proper host isolation seems like it would make that a challenge, but not every network does so. The biggest threat I see is an informed attacker using TunnelVision to capture traffic meant for internal hosts. Internal IPs don’t have valid HTTPS certificates, so this seems like it could be used in a highly targeted campaign to capture data intended for such a device.

A Scarecrow Would Have Saved You

We’re going to take a quick look at a new way the zEus malware is getting distributed, and then chat about a new tool that may have helped prevent infection. So first, it was embedded in a Minecraft Source Pack. Now as far as we can tell, this isn’t a vulnerability in Minecraft, it’s just your normal self-extracting RAR that runs the malware while extracting a copy of the files you actually want. For our purposes, the interesting part is the anti-analysis component.

This malware has a list of computer names and running programs that it checks for before deploying. If your computer is named george, or you’re running Wireshark, the payload won’t trigger. That’s not unique to this particular malware, and it’s exactly the malware quirk that Cyber Scarecrow uses. This might be one of those “dumb ideas” that works, and therefore isn’t a dumb idea. Regardless, Cyber Scarecrow runs on Windows, and launches multiple fake analysis indicators, trying to trick malware into leaving your computer alone.
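
To give a flavor of what that check looks like from the malware's side, here's a toy version. This is not zEus's actual code, and most of the names are made up for illustration, but the technique really is this mundane: look at the hostname, look at the process list, and bail out if anything smells like an analyst. Cyber Scarecrow just makes everything smell like an analyst.

# analysis_check.py: a toy version of the environment check, not zEus's code.
import socket
import psutil  # pip install psutil

SUSPICIOUS_HOSTNAMES = {"george", "sandbox", "malware-lab"}
SUSPICIOUS_PROCESSES = {"wireshark.exe", "procmon.exe", "x64dbg.exe"}

def looks_like_an_analysis_box():
    if socket.gethostname().lower() in SUSPICIOUS_HOSTNAMES:
        return True
    names = {p.info["name"].lower() for p in psutil.process_iter(["name"]) if p.info["name"]}
    return bool(names & SUSPICIOUS_PROCESSES)

if looks_like_an_analysis_box():
    print("analysis environment suspected, staying quiet")  # malware bails here
else:
    print("coast looks clear")  # ...and deploys here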

Poutine

Boost Security has released poutine, a new Open Source security scanner for both GitHub and Gitlab actions/pipelines. The scanner looks for misconfigurations that allow things like arbitrary code execution from external contributors.

Running the tool appears to be pretty simple, and it’s available via docker, brew, or a binary download. On the roadmap is support for CircleCI and Azure, with more misconfigurations to be added.

Gitlab’s Deep Dive

Last for this week is Gitlab’s own coverage of a file write vulnerability. And it all starts with the Ruby gem devfile calling an external binary. That raised suspicions, as interfaces like those are prone to bugs. And there was indeed a bug at the interface: the parent key worked by copying files to the local directory. That’s not what you want.

The good news is that this was guarded against in the Ruby code. But there was a bypass: by specifying a binary sequence, the safeguard in the Ruby code doesn’t trigger, while the binary still sees the unsafe entry. The only trick left to find was how to do path traversal to put the payload where it needed to go. And the answer was to use a devfile registry to pull a tarfile. A tarfile that can technically include both dots and slashes in its filename. Yep, you can put the ../ directory traversal right in the filename, for ultimate ease of use. The fixes landed back in January, and surely we’ve all updated our Gitlab instances since then, right?
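
In case the tarfile trick sounds abstract, here's the generic hazard in miniature, nothing Gitlab-specific: an archive member whose name contains ../ walks right out of the extraction directory if you extract naively. Recent Python versions grew extraction filters for exactly this reason.

# tar_traversal.py: the generic hazard, not Gitlab's code.
import io
import tarfile

# Build an archive whose single member is named with a traversal.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"owned"
    info = tarfile.TarInfo(name="../../outside.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

with tarfile.open(fileobj=buf) as tar:
    print([m.name for m in tar.getmembers()])  # ['../../outside.txt']
    # tar.extractall("safe_dir")               # naive: writes outside safe_dir
    try:
        # The 'data' filter needs Python 3.12, or an older release with the backport.
        tar.extractall("safe_dir", filter="data")
    except tarfile.FilterError as err:
        print("blocked:", err)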

FLOSS Weekly Episode 782: Nitric — In Search of the Right Knob

8 May 2024 at 23:00

This week Jonathan Bennett and David Ruggles chat with Rak Siva and Steve Demchuck about Nitric! That’s the Infrastructure from Code framework that makes it easy to use a cloud back-end in your code, using any of multiple providers, in multiple programming languages.

The group chatted about the role and form of good documentation, as well as whether a Contributor License Agreement is ever appropriate, and what a good CLA would actually look like. Don’t miss it!

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us!

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: Default Passwords, Lock Slapping, and Mastodown

3 May 2024 at 14:00

The UK has the answer to all our IoT problems: banning bad default passwords. Additionally, the new UK law requires device makers to provide contact info for vulnerability disclosures, as well as a requirement to advertise vulnerability fix schedules. Is this going to help the security of routers, cameras, and other devices? Maybe a bit.

I would argue that default passwords are in themselves the problem, and complexity requirements only nominally help security. Why? Because a good default password becomes worthless once the password or the algorithm behind it leaks. Let’s lay out some scenarios here. First is the static default password. Manufacturer X makes device Y, and sets the devices to username/password admin/new_Complex_P@ssword1!. Those credentials make it onto a default password list, and any extra security is lost.

What about those devices that have a different, random-looking password for each device? Those use an algorithm to derive that password from the MAC address and/or serial number. That may help the situation, but the algorithm can be retrieved from the firmware, and most serial numbers are predictable in one way or another. This approach is better, but not a silver bullet.

So what would a real solution to the password problem look like? How about no default password at all, and no device functionality until the new password passes a cracklib complexity and uniqueness check? I have seen a few devices that do exactly this. The requirement for a disclosure address is a great idea, which we’ve talked about before regarding the similar EU legislation.
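
Here's a minimal sketch of what that first-boot flow could look like, using the Python cracklib bindings for the complexity check. The known-defaults list is a hypothetical extra, and this is an illustration of the idea rather than any particular vendor's implementation.

# first_boot_password.py: a minimal sketch of "no functionality until a real
# password is set". Not any vendor's actual implementation.
import cracklib  # pip install cracklib (uses the system cracklib dictionaries)

KNOWN_DEFAULTS = {"admin", "password", "new_Complex_P@ssword1!"}

def acceptable(candidate):
    if candidate in KNOWN_DEFAULTS:
        return False, "matches a known default password"
    try:
        cracklib.FascistCheck(candidate)  # raises ValueError with a reason if weak
    except ValueError as why:
        return False, str(why)
    return True, "ok"

while True:
    ok, reason = acceptable(input("Set a new admin password: "))
    if ok:
        break
    print("Rejected: " + reason)
print("Password accepted, enabling the device.")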

Lock Vulnerabilities

Vulnerabilities and bypasses aren’t unique to software. They are, however, much harder to patch in hardware. Take, for instance, the Mul-T-Lock SBNE12 padlock. This lock really looks like it was carefully made to be secure. If you’re not familiar with [LockPickingLawyer]’s videos, the two minutes it takes him to pick this lock is a ringing endorsement. However, the lock does have a weakness, and LPL challenges us to figure it out. I’ll give you a hint: the problem can be seen during the lock teardown at 2:25. See below the video for the explanation.

The problem here is demonstrated by [Trevor], AKA @McNallyOfficial. It’s the springs. The retainer pin is also the lock pin, and that pin is held in place by a pair of springs. The lock is probably designed such that you can lock the shackle without the key, and that means that there is enough play in those springs to slip the pin over the locking lip. Just a good smack in the right place uses the inertia of the locking pin to compress the springs and slip the shackle. It’s a bit disconcerting how many locks can be opened this way.

Fuzzing!

First up we have a walk-through of setting up a function fuzzing run with American Fuzzy Lop (AFL). First let’s cover *why* you might want to do this. We’re looking for vulnerabilities, and the scenario [Craig Young] lays out is one where we have a binary listening for HTTP calls on port 8080, and using some internal code to parse the request bodies. We want to throw a bunch of weird data at that parser to see how it breaks, and using real HTTP requests is way too slow when compared to direct function calls.

AFL is quite clever about its approach, particularly when you run it with “instrumentation”, or injected code that tracks what target code is being run in response to the AFL input. This allows AFL to track what fuzzing input resulted in exercising new target code paths. (It did something new, make a note!) That only works when re-compiling a program from source. The approach to use with a pre-compiled binary is to run it under QEMU so AFL can spy on execution. And in this case, that means turning the executable into a shared library to get at the target function directly.

To make that bit of magic work, the Library to Instrument Executable Formats (LIEF) is used. When given the function address, this spits out a cooked shared object .so file. The actual harness is a bit of trivial code to call into that function and capture the output. And with that, you can start throwing interesting fuzz data at compiled code.
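
If you're curious what that LIEF step looks like, here's a bare-bones sketch in the same spirit, though not [Craig]'s exact harness. The target address and prototype are placeholders that would come from your own reversing, and the binary needs to be position-independent for the library trick to work cleanly.

# make_harness.py: export an internal function from a binary with LIEF,
# then poke it through ctypes. Addresses and prototypes are placeholders.
import ctypes
import lief  # pip install lief

TARGET_ADDR = 0x401D30  # address of the internal parser (from your reversing)

binary = lief.parse("./httpd_target")
binary.add_exported_function(TARGET_ADDR, "fuzz_me")  # give it a symbol name
binary.write("./httpd_target.so")                     # now loadable as a library

lib = ctypes.CDLL("./httpd_target.so")
lib.fuzz_me.argtypes = [ctypes.c_char_p, ctypes.c_size_t]  # assumed prototype

data = b"POST /index HTTP/1.1\r\n\r\nAAAA"
lib.fuzz_me(data, len(data))  # the AFL harness does this in a loop with fuzz input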

Fuzzing the Kernel

The Linux kernel has support for NVMe-oF, or Non-Volatile Memory Express over Fabrics, a high speed data storage link that can run over fiber, Ethernet, or simple TCP. That TCP support is interesting, as it means the kernel itself is opening INET sockets directly, which is why [Alon Zahavi] found it a juicy target for finding vulnerabilities. Because we’re talking about the kernel, we can’t just trivially connect AFL like above. Thankfully there’s a fuzzer that’s written specifically for kernel fuzzing: syzkaller.

This one is a bit more complicated, and code has to be added for each subsystem that is supported. But NVMe-oF is a supported module. The kernel also has the KCOV subsystem, for collecting coverage data during a run. It took quite a bit of work to add all the necessary bits, but [Alon] put the time in, and came up with 5 nice bug finds in the targeted code. Nice!

Water Hacking

We’ve looked at reported hacks against water treatment plants in the past, and so far there has been lots of splash, and very little substance. That hasn’t kept government agencies from beating the proverbial war drum about attacks. That said, there is a lot of room for improvement in how these critical systems are secured. Apparently there was one actual breach that caused a tank to overflow, and a second attempt, where 37,000 credentials were stuffed into a public-facing firewall over a span of about four days. If our critical systems actually have Internet-facing login pages, then something has truly gone wrong in a fundamental way.

Windows Registry Continued

The ongoing dive into the Windows registry continues over at Google Project Zero. This time with a blast from the past: the registry as it was in Windows 3.1. Way back then it was strictly for file type associations and OLE/COM object handling. Then Windows NT 3.1 came along, and started stuffing more and more settings data into the registry. Today, we’re at a crazy 100,000 lines of registry code in the Windows kernel. It’s no wonder this struck Project Zero as a good place to look for Windows vulnerabilities.

Mastodown

We got a bit of a chuckle out of this one, as the folks at It’s FOSS are asking readers not to share links to itsfoss.com stories on Mastodon. Why? Because apparently posting a link to Mastodon triggers a micro-DDoS, as each of the federated Mastodon instances pulls a copy of the linked site to generate a preview. The Mastodon code currently uses a random delay of up to 60 seconds to mitigate the issue. But for some sites that’s just not enough, and the traffic spike from multiple servers pulling a preview copy can be enough to take the site temporarily offline.

A bit of research was done back in 2022, which found that a moderately well-connected Mastodon server would generate just shy of 400 page loads per minute, when a link was shared. It’s likely those numbers are higher now, but still unlikely to be the sort of volume that a post going viral on a link aggregator would generate. Put another way, this seems to be a smaller problem than the classic “Slashdot Effect”.

So on one hand, it would be nice if the Mastodon project could puzzle out a way to keep every federated server from having to pull an independent copy of the site just to generate a preview. But on the other hand, a site that is actively trying to attract attention and visitors needs to be big enough to handle this level of traffic. But for now, at least for “It’s FOSS”, if you want to post a link to Mastodon, please Mastodon’t.

[Editor’s note: Hackaday has a pretty robust CDN. Toot away!]

Bits and Bytes

Hopefully you’re aware that there are malicious images on Docker Hub. Some images mine cryptocurrency in the background, while others try to steal credentials. Researchers at Jfrog have found a class of repositories that plant malicious links in their descriptions. From phishing, to malware, to straight up spam, these repositories are a real pain, and make up nearly 20% of the Docker Hub library. That apparently doesn’t even include the cryptocurrency miners. Oof. It’s probably a good idea to stick to the “Trusted Content” section of Docker Hub.

Nettitude Labs got their hands on a Cisco C195 email security appliance, and went through the steps to make it fully their own. That includes BIOS modification to run arbitrary code, finding a command injection attack in the Cisco firmware, building a full exploit, and finally running Doom on the box. It’s an epic hack and a great write-up.

And finally, HPE Aruba has published fixes for four critical vulnerabilities that allow unauthenticated attackers to execute arbitrary code on affected devices. In a refreshing turn of events, these aren’t being used in-the-wild, and there hasn’t been any public Proof of Concept code published yet. The HPE advisory has a few more details. As always, expect these to eventually get exploited in the wild.

FLOSS Weekly Episode 781: Resistant To The Wrath Of God

1 May 2024 at 23:00

This week Jonathan Bennett and Doc Searls sit down with Mathias Buus Madsen and Paolo Ardoino of Holepunch, to talk about the Pear Runtime and the Keet serverless peer-to-peer platform. What happens when you take the technology built for BitTorrent, and apply it to a messaging app? What else does that allow you to do? And what’s the secret to keeping the service running even after the servers go down?

Holepunch (the company behind Pear Runtime): https://www.holepunch.to

Pear Runtime Website – https://pears.com/

Launch Press Release – https://pears.com/news/holepunch-unveils-groundbreaking-open-source-peer-to-peer-app-development-platform-pear-runtime/

Twitter – https://twitter.com/Pears_p2p

Documentation – https://docs.pears.com

Keet – http://www.keet.io

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us!

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

This Week in Security: Cisco, Mitel, and AI False Flags

26 April 2024 at 14:00

There’s a trend recently, of big-name security appliances getting used in state-sponsored attacks. It looks like Cisco is the latest victim, based on a report by their own Talos Intelligence.

This particular attack has a couple of components, and abuses a couple of vulnerabilities, though the odd thing about this one is that the initial access is still unknown. The first part of the infection is Line Dancer, a memory-only element that disables the system log, leaks the system config, captures packets and more. A couple of the more devious steps are taken, like replacing the crash dump process with a reboot, to keep the in-memory malware secret. And finally, the resident malware installs a backdoor in the VPN service.

There is a second element, Line Runner, that uses a vulnerability to run arbitrary code from disk on startup, and then installs itself onto the device. That one is a long term command and control element, and seems to only get installed on targeted devices. The Talos blog makes a rather vague mention of a 32-byte token that gets pattern-matched, to determine an extra infection step. It may be that Line Runner only gets permanently installed on certain units, or some other particularly fun action is taken.

Fixes for the vulnerabilities that allowed for persistence are available, but again, the initial vector is still unknown. There’s a vulnerability that just got fixed that could have served as that initial vector. CVE-2024-20295 allows an authenticated user with read-only privileges to perform a command injection as root. Proof of Concept code is out in the wild for this one, but so far there’s no evidence it was used in any attacks, including the one above.

Mitel Pop From the Front Panel

The good folks at Baldur decided to go hunting for bugs in Mitel VoIP phones. These are pretty commonly used in businesses and hotel back offices. And the first brilliant find was a system compromise just from punching buttons on the phone. Under diagnostics in the menu, the diagnostic server setting is used to upload logs and system information. That setting apparently gets passed into a shell command, as an ampersand is all it takes to execute commands. You can bet that the next time I’m around a Mitel phone, I’m trying &reboot;. That’s technically protected by an admin password — which is usually set to “1234”.

But wait, there’s more. The front panel hack was useful for getting a toehold to run a debugger and other tools, but we need to go deeper. There’s a webserver on port 80, for doing device configuration. It has GET requests locked down reasonably well, but there’s a really odd quirk: POST requests don’t have to be authorized, so long as a valid GET request has been made within the last 10 minutes. That would be something on its own, but even better is the fact that there are a few GET requests that trigger the timer, and don’t require authentication. The winner here is the humble favicon.
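
Put in request terms, the whole bypass is about two lines. The endpoints below are hypothetical stand-ins, since the write-up has the real paths, but the shape is: one unauthenticated GET that arms the ten minute window, then an unauthenticated POST that the server happily accepts.

# mitel_flow.py: the shape of the auth bypass, with hypothetical endpoints.
import requests

PHONE = "http://192.0.2.10"

# Step 1: an unauthenticated GET (the favicon qualifies) resets the timer.
requests.get(PHONE + "/favicon.ico", timeout=5)

# Step 2: within that window, POST requests skip the authorization check,
# so configuration changes land without credentials.
requests.post(PHONE + "/config/network", data={"hostname": "owned"}, timeout=5)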

The last step was finding a buffer overflow in a routine that sets the MAC address from within the web interface. The tricky thing here is that the overflow code first gets handled by a strcat and strcpy, meaning a NULL byte ends the exploit data. It took some doing, but the team found a gadget chain that got to shellcode while walking the tightrope. They celebrated with a bit of the Imperial march.

False Flag Malware

What happens when you have a database where a user can upload arbitrary data, and an over-zealous pattern-matching anti-malware engine is running? Database deletion wasn’t on my bingo card, but here we are. It’s a literal false flag: create a fake malicious signature, to trick the anti-malware into doing the malicious thing instead. Microsoft Defender and Kaspersky EDR are the two applications called out here, though it’s likely other anti-virus programs would be subject to similar tricks. Microsoft issued a CVE and has shipped a fix, and Kaspersky rolled out some mitigations as well.
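
The researchers planted real byte signatures lifted from the engines' own detections, but you can get a feel for the idea with the harmless EICAR test string, which nearly every engine flags on sight. The sketch below is illustration only: ordinary-looking user input lands in a database file, and the next scan condemns the whole file. Expect your own antivirus to pounce if you run it.

# false_flag_demo.py: illustration only, using the benign EICAR test string
# instead of a real engine signature. Most AV engines will flag the output file.
eicar = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

# Imagine this arriving as ordinary user data and being written into a table:
# when the AV scans the database file, the whole file gets flagged.
with open("user_comments.db", "a", encoding="ascii") as db:
    db.write("comment_id=42,body=" + eicar + "\n")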

False Flag Slander

And then there was this AI-enabled false flag. A school principal was “caught” on a hot mic, expressing some concerning and racially-charged opinions about students, community members, and other school staff. The audio was leaked, the student body got wind of it, and the principal’s scalp was metaphorically called for.

But it was the school’s athletic director, with a speech-cloning service, and he has been arrested, which is sure to lead to an interesting court case. And sadly, this isn’t an isolated incident, as hoaxes have become relatively common, and this isn’t the first time an AI voice has been used maliciously. As much as we hate to say it, look for more of this to come.

Zombie Worm

What happens when a self-propagating worm gets its head cut off? Apparently it turns into a zombie worm. A strain of PlugX malware gained the ability to hop a ride on USB drives a few years back, with all of those infected machines reporting to a single C&C server. That server went offline, and researchers managed to snag the IP address. That’s important to prevent someone else raising the zombies back to unlife, but it also gives us a really interesting look into the infected machine stats.

Nigeria seems to hold the crown for the most infected machines, with India holding down second place. Some researchers have seen a Chinese theme in the data, suggesting China was patient zero, the origin of the worm, or maybe both. With researchers in control of the C&C IP, there is the possibility of issuing remote uninstall commands, but there are both legal and logistical challenges to that idea.

PSA: phpecc

And here’s a PSA for you PHP programmers. (We know you’re out there!) The phpecc library appears to have been abandoned. Statistics suggest it’s still getting over a thousand downloads a day, which isn’t great given that there are some outstanding CVEs in the codebase.

The codebase has been forked by Paragon Initiative Enterprises, P.I.E., who warn against fully trusting the code until an audit has completed. This is one to watch for a while, and be aware of the potential faults of the older versions.

Bits and Bytes

Phylum is back, reporting more malicious packages in NPM. These seem to be coming from the same threat actors that have uploaded malware before, and are thought to be North Korean. It’s fairly straightforward, with a preinstall hook running obfuscated JS code. This one is interesting, as it seems to be going after MacOS systems. There’s also an interesting bashism that has sneaked into the malicious JS, using the logical OR || instead of an if statement: 'linux' === type || exec(). Though due to a typo, it looks like this particular sample will never deploy a payload on Linux. os.type() uses the uname output, which always capitalizes Linux. Your English teacher was right! Capitalization does matter.

Earlier this month a series of CVEs against the Robot Operating System (ROS) came across my desk. I opted not to cover them, as it was a wall of CVEs with hardly any detail in any of them. I filed it away mentally, to check back later. It’s later, and I was apparently not the only observer that thought the report was quite thin on substance. It’s beginning to look like the CVEs are bogus, and the “research paper” was a hastily reworded copy of the ROS beginner tutorial. The most convincing evidence of this is that the presumably fake researchers claimed that security updates were coming soon, while core ROS developers never received reports on the CVEs.

And finally, maybe ransomware is good for one thing — keeping the lights on? Oh, no. Those lights are supposed to turn off during the day. Leicester has had an attack of the ever-lit street lights, after a ransomware attack forced a shutdown a couple months back.
