Screens of Death: From Diagnostic Aids to a Sad Emoji

By: Maya Posch
5 June 2025 at 14:00

There comes a moment in the life of any operating system when an unforeseen event will tragically cut its uptime short. Whether it’s a sloppily written driver, a bug in the handling of an edge case or just dumb luck, suddenly there is nothing more that the OS’ kernel can do to salvage the situation. With its last few cycles it can still gather some diagnostic information, attempt to write this to a log or memory dump and then output a supportive message to the screen to let the user know that the kernel really did try its best.

This on-screen message is called many things, from a kernel panic message on Linux to a Blue Screen of Death (BSOD) on Windows since Windows 95, to a more contemplative message on AmigaOS and BeOS/Haiku. Over the decades these Screens of Death (SoD) have changed considerably, from the highly informative screens of Windows NT to the simplified BSOD of Windows 8 onwards with its prominent sad emoji that has drawn a modicum of ridicule.

Now it seems that the Windows BSOD is about to change again, and may not even be blue any more. So what's a user to make of these changes? What were we ever supposed to get out of these special screens?

Meditating On A Fatal Error

AmigaOS fatal Guru Meditation error screen.

More important than the color of a fatal system error screen is what information it displays. After all, this is the sole direct clue the dismayed user gets when things go south, before sighing and hitting the reset button, followed by staring forlornly at the boot screen. After making it back into the OS, one can dig through the system logs for hints, but some information will only end up on the screen, such as when there is a storage drive issue.

The exact format of the information on these SoDs changes per OS and over time, with AmigaOS’ Guru Meditation screen being rather well-known. Although the naming was the result of an inside joke related to how the developers dealt with frequent system crashes, it stuck around in the production releases.

Interestingly, both Windows 9x and ME as well as AmigaOS have fatal and non-fatal special screens. In the case of AmigaOS you got a screen similar to the Guru Meditation screen with its error code, except in green and with the optimistic notion that it might be possible to continue running after confirming the message. For Windows 9x/ME users this might be a familiar notion as well:

BSOD in Windows 95 after typing "C:\con\con" in the Run dialog.

In this series of OSes you’d get these screens, with mashing a key usually returning you to a slightly miffed but generally still running OS minus the misbehaving application or driver. It could of course happen that you’d get stuck in an endless loop of these screens until you gave up and gave the three-finger salute to put Windows out of its misery. This was an interesting design choice, which Microsoft’s Raymond Chen readily admits to being somewhat quaint. What it did do was abandon the current event and return to the event dispatcher to give things another shot.

Mac OS X 10.2 thru 10.2.8 kernel panic message.

A characteristic of these BSODs in Windows 9x/ME was also that they didn’t give you a massive amount of information to work with regarding the reason for the rude interruption. Incidentally, over on the Apple side of the fence things were not much more elaborate in this regard, with OS X’s kernel panic message getting plastered over with a ‘Nothing to see here, please restart’ message. This has been quite a constant ever since the ‘Sad Mac’ days of Apple, with friendly messages rather than any ‘technobabble’.

This contrasts sharply with the world of Windows NT, where even the already trimmed-down BSOD of Windows XP is roughly on the level of the business-focused Windows 2000 in terms of information. Also of note is that a BSOD on Windows NT-based OSes is a true 'Screen of Death', from which you are absolutely not returning.

A BSOD in Windows XP. A true game over, with no continues.

These BSODs provide a significant amount of information, including the faulting module, the fault type and some hexadecimal values that can conceivably help with narrowing down the fault. Compared to the absolute information overload in Windows NT 3.1 with a partial on-screen memory dump, the level of detail provided by Windows 2000 through Windows 7 is probably just enough for the average user to get started with.

It's interesting that more recent versions of Windows have opted to default to restarting automatically when a BSOD occurs, which renders whatever is displayed on it rather irrelevant. Maybe that's why Windows 8 began to simply omit that information and instead show a generic 'collecting information' progress counter before restarting.

Times Are Changing

People took the new BSOD screen in Windows 8 well.

Although nobody was complaining about the style of BSODs in Windows 7, somehow Windows 8 ended up with the massive sad emoji plastered on the top half of the screen and no hexadecimal values, which would now hopefully be found in the system log. Windows 10 also added a big QR code that leads to some troubleshooting instructions. This overly friendly and non-technical BSOD mostly bemused and annoyed the tech community, which proceeded to brutally make fun of it.

In this context it’s interesting to see these latest BSOD screen mockups from Microsoft that will purportedly make their way to Windows 11 soon.

These new BSOD screens seem to have a black background (perhaps a ‘Black Screen of Death’?), omit the sad emoji and reduce the text to an absolute minimum:

The new Windows 11 BSOD, as it'll likely appear in upcoming releases.

What's noticeable here is how it makes the stop code very small at the bottom of the screen, with the faulting module below it in an even smaller font. This is a big departure from the BSOD formats up to Windows 7, where such information was clearly printed on the screen, along with additional details that anyone could copy onto paper or snap a picture of for a quick diagnosis.

But Why

The BSODs in ReactOS keep the Windows 2000-style format.

The crux here is whether Microsoft expects their users to use these SoDs for informative purposes, or whether they would rather that they get quickly forgotten about, as something shameful that users shouldn’t concern themselves with. It’s possible that they expect that the diagnostics get left to paid professionals, who would have to dig into the memory dumps, the system logs, and further information.

Whatever the case may be, it seems that the era of blue SoDs is well and truly over now in Windows. Gone too are any embellishments, general advice, and more in-depth debug information. This means that the different causes behind a specific stop code, encoded in the hexadecimal parameters, can only be teased out of the system log entry in Event Viewer, assuming it did in fact get recorded and you're not dealing with a boot partition or similarly fundamental issue.
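
If you do need those hexadecimal parameters after the fact, they usually survive as a bugcheck entry in the System log. As a minimal sketch, here is one way to pull the most recent ones with Python and the built-in wevtutil tool (Event ID 1001 is where bugcheck records typically land; run it on the affected Windows machine):

```python
import subprocess

# Query the System event log for the newest bugcheck entries (Event ID 1001),
# formatted as plain text. Depending on log ACLs this may need an elevated prompt.
cmd = [
    "wevtutil", "qe", "System",
    "/q:*[System[(EventID=1001)]]",   # filter on the bugcheck event ID
    "/f:text",                        # human-readable output
    "/c:3",                           # only the three most recent matches
    "/rd:true",                       # newest first
]
print(subprocess.check_output(cmd, text=True))
```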

Although I’ll readily admit to not having seen many BSODs since probably Windows 2000 or XP — and those were on questionable hardware — the rarity of these events makes it in my view even more pertinent that these screens are as descriptive as possible, which is sadly not a feature that seems to be a priority for mainstream desktop OSes. Nor for niche OSes like Linux and BSD, tragically, where you have to know your way around the Systemd journalctl tool or equivalent to figure out where that kernel panic came from.

This is definitely a point where the SoD generated upon a fiery kernel explosion sets the tone for the user’s response.

Running FreeDOS and 8086tiny on the Game Boy Advance Because You Can

By: Maya Posch
5 June 2025 at 08:00

How many people haven’t looked at their Game Boy Advance (GBA) handheld gaming device and wondered how much better it might be if it could run FreeDOS. Inside an 8086 emulator. If you’re like [ZZAZZ] and similarly suffer intrusive project-related thoughts, then this might be a moment of clear recognition, somewhat like sharing one’s story at a Programmers Anonymous meeting, but we digress.

In the video, the basic premise of making even the 8086tiny emulator work on the GBA seemed improbable at the outset – courtesy of the rather limited memory environment provided by the GBA – before even daring to look at things like disk access.

However, letting silly things like segmented memory and mismatched memory addresses deter us from pleasing said intrusive thoughts would be beyond the pale. Ergo we get a shining example of how days of rewriting code, stripping code, debugging code, fixing alignment issues in code and writing work-arounds for newly discovered issues in code can ultimately lead to the proud moment where FreeDOS boots on the GBA.

Granted it takes over an hour to do so, and has to be started from a butchered Pokémon Emerald save file, courtesy of a well-known exploit in that game, thankfully preserved in counterfeit cartridges.

Admittedly we’re not sure what practical applications there are for FreeDOS on the GBA, but that’s never stopped hackers from taking on impossible projects before, so there’s no sense letting it get in the way now.

Thanks to [Jinxy] for the tip.


My Winter of ’99: The Year of the Linux Desktop is Always Next Year

By: Maya Posch
3 June 2025 at 14:00

Growing up as a kid in the 1990s was an almost magical time. We had the best game consoles, increasingly faster computers at a pace not seen before, the rise of the Internet and World Wide Web, as well as the best fashion and styles possible between neon and pastel colors, translucent plastic and also this little thing called Windows 95 that'd take the world by storm.

Yet as great as Windows 95 and its successor Windows 98 were, you had to be one of the lucky folks who ended up with a stable Windows 9x installation. The prebuilt (Daewoo) Intel Celeron 400 rig with 64 MB SDRAM that I had splurged on with money earned from summer jobs was not one of those lucky systems, resulting in regular Windows reinstalls.

As a relatively nerdy individual, I was aware of this little community-built operating system called ‘Linux’, with the online forums and the Dutch PC magazine that I read convincing me that it would be a superior alternative to this unstable ‘M$’ Windows 98 SE mess that I was dealing with. Thus it was in the Year of the Linux Desktop (1999) that I went into a computer store and bought a boxed disc set of SuSE 6.3 with included manual.

Fast-forward to 2025, and Windows is installed on all my primary desktop systems, raising the question of what went wrong in ’99. Wasn’t Linux the future of desktop operating systems?

Focus Groups

Boxed SuSE Linux 6.3 software. (Source: Archive.org)

Generally when companies gear up to produce something new, they will determine and investigate the target market, to make sure that the product is well-received. This way, when the customer purchases the item, it should meet their expectations and be easy to use for them.

This is where SuSE Linux 6.3 was an interesting experience for me. I’d definitely have classified myself in 1999 as your typical computer nerd who was all about the Pentiums and the MHz, so at the very least I should have had some overlap with the nerds who wrote this Linux OS thing.

The comforting marketing blurbs on the box promised an easy installation, bundled applications for everything, while suggesting that office and home users alike would be more than happy to use this operating system. Despite the warnings and notes in the installation section of the included manual, installation was fairly painless, with YAST (Yet Another Setup Tool) handling a lot of the tedium.

However, after logging into the new operating system and prodding and poking at it a bit over the course of a few days, reality began to set in. There was the rather rough-looking graphical interface, with what I am pretty sure was the FVWM window manager for XFree86, no font anti-aliasing and very crude widgets. I would try the IceWM window manager and a few others as well, but to say that I felt disappointed was an understatement. Although it generally worked, the whole experience felt unfinished and much closer to using CDE on Solaris than to Windows 98 or the BeOS Personal Edition 5 that I would be playing with around that time as well.

That’s when a friend of my older brother slipped me a completely legit copy of Windows 2000 plus license key. To my pleasant surprise, Windows 2000 ran smoothly, worked great and was stable as a rock even on my old Celeron 400 rig that Windows 98 SE had struggled with. I had found my new forever home, or so I thought.

Focus Shift

Start-up screen of FreeSCO. (Credit: Lewis "Lightning" Baughman, Wikimedia)

With Windows 2000, and later XP, being my primary desktop systems, my focus with Linux would shift away from the desktop experience and more towards other applications, such as the FreeSCO (en français) single-floppy router project, and the similar Smoothwall project. After upgrading to a self-built AMD Duron 600 rig, I'd use the Celeron 400 system to install various Linux distributions on, to keep tinkering with them. This led me down the path of trying out Wine to run Windows applications on Linux in the 2000s, along with some Windows games ported by Loki Entertainment, with mostly disappointing results. This also got me to compile kernel modules, to make the onboard sound work in Linux.

Over the subsequent years, my hobbies and professional career would take me down into the bowels of Linux and similar with mostly embedded (Yocto) development, so that by now I'm more familiar with Linux from the perspective of the command line and architectural level. Although I have many Linux installations kicking around with a perfectly fine X/Wayland installation on both real hardware and in virtual machines, generally the first thing I do after logging in is pop open a Bash terminal or two, or switch to a different TTY.

Yet now that the rainbows-and-sunshine era of Windows 2000 through Windows 7 has come to a fiery end amidst the dystopian landscape of Windows 10 and with Windows 11 looming over the horizon, it’s time to ask whether I would make the jump to the Linux desktop now.

Linux Non-Standard Base

Bringing things back to the ‘focus group’ aspect, perhaps one of the most off-putting elements of the Linux ecosystem is the completely bewildering explosion of distributions, desktop environments, window managers, package managers and ways of handling even basic tasks. All the skills that you learned while using Arch Linux or SuSE/Red Hat can be mostly tossed out the moment you are on a Debian system, never mind something like Alpine Linux. The differences can be as profound as when using Haiku, for instance.

Rather than Linux distributions focusing on a specific group of users, they seem to be primarily about doing what the people in charge want. This is illustrated by the demise of the Linux Standard Base (LSB) project, which was set up in 2001 by large Linux distributions in order to standardize various fundamentals between these distributions. The goals included a standard filesystem hierarchy, the use of the RPM package format and binary compatibility between distributions to help third-party developers.

By 2015 the project was effectively abandoned, and since then distributing software across Linux distributions has become if possible even more convoluted, with controversial ‘solutions’ like Canonical’s Snap, Flatpak, AppImage, Nix and others cluttering the landscape and sending developers scurrying back in a panic to compiling from source like it’s the 90s all over again.

Within an embedded development context this lack of standardization is also very noticeable, between differences in default compiler search paths, broken backwards compatibility — like the removal of ifconfig — and a host of minor and larger frustrations even before hitting big ticket items like service management flitting between SysV, Upstart, Systemd or having invented their own, even if possibly superior, alternatives like OpenRC in Alpine Linux.

Of note here is also that these system service managers generally do not work well with GUI-based applications, as CLI Linux and GUI Linux are still effectively two entirely different universes.

Wrong Security Model

For some inconceivable reason, Linux – despite not having UNIX roots like BSD – has opted to adopt the UNIX filesystem hierarchy and security model. While this is of no concern when you look at Linux as a wannabe-UNIX that will happily do the same multi-user server tasks, it's an absolutely awful choice for a desktop OS. Without knowledge of the permission levels on folders, basic things like SSH keys will not work, accessing network interfaces with Wireshark requires root-level access, and some parts of the filesystem, like devices, require the user to be in a specific group.
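
As a small illustration of the kind of friction this causes, here is a minimal Python sketch that checks the two usual suspects: private key permissions that OpenSSH will refuse, and the group memberships that typically gate device access (the key path and group names are just examples):

```python
import grp
import os
import stat

key = os.path.expanduser("~/.ssh/id_ed25519")  # example key path
if os.path.exists(key):
    mode = stat.S_IMODE(os.stat(key).st_mode)
    # OpenSSH ignores private keys that are readable by group or others.
    if mode & 0o077:
        print(f"{key} has mode {oct(mode)}; chmod 600 it or ssh will refuse to use it")
    else:
        print(f"{key} permissions look fine ({oct(mode)})")

# Device access is usually gated by group membership (e.g. 'dialout' for serial
# ports, 'wireshark' for packet capture on some distributions).
groups = [grp.getgrgid(g).gr_name for g in os.getgroups()]
print("current user is a member of:", ", ".join(groups))
```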

When the expectation of a user is that the OS behaves pretty much like Windows, then the continued fight against an overly restrictive security model is just one more item that is not necessarily a deal breaker, but definitely grates every time that you run into it. Having the user experience streamlined into a desktop-friendly experience would help a lot here.

Unstable Interfaces

Another really annoying thing with Linux is that there is no stable kernel driver API. This means that with every update to the kernel, each of the kernel drivers has to be recompiled to work. This tripped me up in the past with Realtek chipset drivers for WiFi and Bluetooth. Since these were too new to be included in the Realtek driver package, I had to find an online source version on GitHub, run through the whole string of commands to compile the kernel driver, and finally load it.

After running a system update a few days later and doing a restart, the system was no longer to be found on the LAN. This was because the WiFi driver could no longer be loaded, so I had to plug in Ethernet to regain remote access. With this experience in mind I switched to using Wireless-N WiFi dongles, as these are directly supported.
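
A cheap way to avoid that surprise is to check, before rebooting, whether the out-of-tree module actually exists for every installed kernel; a minimal sketch (the module name here is a made-up Realtek example, substitute whatever your driver build installs):

```python
import glob
import os

MODULE = "8821cu"  # hypothetical out-of-tree module name

# After an update, the freshly installed kernel is the one most likely to be
# missing the module, so check every directory under /lib/modules.
for kdir in sorted(glob.glob("/lib/modules/*")):
    hits = glob.glob(os.path.join(kdir, "**", MODULE + ".ko*"), recursive=True)
    status = "ok" if hits else "MISSING - rebuild/reinstall before rebooting"
    print(f"{os.path.basename(kdir):30s} {status}")
```

DKMS exists to automate exactly this rebuild step, when the driver packaging supports it.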

Experiences like this fortunately happen on non-primary systems, where a momentary glitch is of no real concern, especially since I made backups of configurations and such.

Convoluted Mess

This, in a nutshell, is why moving to Linux is something that I'm not seriously considering. Although I would be perfectly capable of using Linux as my desktop OS, I'm much happier on Windows — if you ignore Windows 11. I'd feel more at home on FreeBSD as well, as it is a far more coherent experience, not to mention BeOS' successor Haiku, which is becoming tantalizingly usable.

Secretly my favorite operating system to switch to after Windows 10 would be ReactOS, however. It would bring the best of Windows 2000 through Windows 7, be open-source like Linux, yet completely standardized and consistent, and come with all the creature comforts that one would expect from a desktop user experience.

One definitely can dream.

The Potential Big Boom In Every Dust Cloud

By: Maya Posch
2 June 2025 at 14:00

To the average person, walking into a flour- or sawmill and seeing dust swirling around is unlikely to evoke much of a response, but those in the know are quite likely to bolt for the nearest exit at this harrowing sight. For as harmless as a fine cloud of flour, sawdust or even coffee creamer may appear, each of these have the potential for a massive conflagration and even an earth-shattering detonation.

As for the ‘why’, the answer can be found in for example the working principle behind an internal combustion engine. While a puddle of gasoline is definitely flammable, the only thing that actually burns is the evaporated gaseous form above the liquid, ergo it’s a relatively slow process; in order to make petrol combust, it needs to be mixed in the right air-fuel ratio. If this mixture is then exposed to a spark, the fuel will nearly instantly burn, causing a detonation due to the sudden release of energy.
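
To put a number on 'the right air-fuel ratio', the stoichiometric mixture can be worked out from the balanced combustion reaction. A minimal sketch for methane, the simplest case (gasoline works out to the often-quoted roughly 14.7:1 by a similar calculation):

```python
# Stoichiometric air-fuel ratio (by mass) for methane: CH4 + 2 O2 -> CO2 + 2 H2O
M_CH4 = 12.011 + 4 * 1.008        # molar mass of the fuel, g/mol
M_O2 = 2 * 15.999                 # molar mass of O2, g/mol

o2_per_fuel = 2 * M_O2 / M_CH4    # mass of O2 needed per unit mass of fuel
afr = o2_per_fuel / 0.232         # air is roughly 23.2 % oxygen by mass
print(f"stoichiometric AFR for methane: {afr:.1f} : 1")  # prints about 17.2 : 1
```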

Similarly, flour, sawdust, and many other substances in powder form will burn only gradually as long as combustion stays confined to the interface between fuel and air. A bucket of sawdust burns slowly, but if you create a sawdust cloud, it might just blow up the room.

This raises the questions of how to recognize this danger and what to do about it.

Welcome To The Chemical Safety Board

In an industrial setting, people will generally acknowledge that oil refineries and chemical plants are dangerous and can occasionally go boom in rather violent ways. More surprising is that something as seemingly innocuous as a sugar refinery and packing plant can go from a light sprinkling of sugar dust to a violent and lethal explosion within a second. This is however what happened in 2008 at the Georgia Imperial Sugar refinery, which killed fourteen and injured thirty-six. During this disaster, a primary and multiple secondary explosions ripped through the building, completely destroying it.

Georgia Imperial Sugar Refinery aftermath in 2008. (Credit: USCSB)

As described in the US Chemical Safety Board (USCSB) report with accompanying summary video (embedded below), the biggest cause was a lack of ventilation and cleaning that allowed for a build-up of sugar dust, with an ignition source, likely an overheated bearing, setting off the primary explosion. This explosion then found subsequent fuel to ignite elsewhere in the building, setting off a chain reaction.

What is striking is just how simple and straightforward both the build-up towards the disaster and the means to prevent it were. Even without knowing the exact air-fuel ratio for the fuel in question, there are only two regions on the scale where a mixture will not violently explode in the presence of an ignition source.

These are either a heavily over-rich mixture (too much fuel, not enough air) or the inverse. Essentially, if the dust-collection systems at the Imperial Sugar plant had been up to the task, and expanded to all relevant areas, the possibility of an ignition event would have likely been reduced to zero.

Things Like To Burn

In the context of dust explosions, it’s somewhat discomforting to realize just how many things around us are rather excellent sources of fuel. The aforementioned sugar, for example, is a carbohydrate (Cm(H2O)n). This chemical group also includes cellulose, which is a major part of wood dust, explaining why reducing dust levels in a woodworking shop is about much more than just keeping one’s lungs happy. Nobody wants their backyard woodworking shop to turn into a mini-Imperial Sugar ground zero, after all.

Carbohydrates aren't far off from hydrocarbons, which include our old friend petrol, as well as methane (CH4), butane (C4H10), etc., which are all delightfully combustible. All that the carbohydrates have in addition to carbon and hydrogen atoms are a lot of oxygen atoms, which is an interesting addition in the context of them being potential fuel sources. It incidentally also illustrates how important carbon is for life on this planet, since it forms the literal backbone of its molecules.

Although one might conclude from this that only something which is a carbohydrate or hydrocarbon is highly flammable, there’s a whole other world out there of things that can burn. Case in point: metals.

Lit Metals

On December 9, 2010, workers were busy at the New Cumberland AL Solutions titanium plant in West Virginia, processing titanium powder. At this facility, scrap titanium and zirconium were milled and blended into a powder that got pressed into discs. Per the report, a malfunction inside one blender created a heat source that ignited the metal powder, killing three employees and injuring one contractor. As it turns out, no dust control methods were installed at the plant, allowing for uncontrolled dust build-up.

As pointed out in the USCSB report, both titanium and zirconium will readily ignite in particulate form, with zirconium capable of auto-igniting in air at room temperature. This is why the milling step at AL Solutions took place submerged in water. After ignition, titanium and zirconium require a Class D fire extinguisher, but it’s generally recommended to let large metal fires burn out by themselves. Using water on larger titanium fires can produce hydrogen, leading conceivably to even worse explosions.

The phenomenon of metal fires is probably best known from thermite. This is a mixture of a metal powder and a metal oxide. Once ignited by an initial source of heat, the redox process becomes self-sustaining, providing the fuel, oxygen, and heat. While generally iron(III) oxide and aluminium are used, many more metals and metal oxides can be combined, including a copper oxide for a very rapid burn.
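
For the classic iron oxide and aluminium mix, the balanced reaction is the standard textbook thermite equation (general chemistry, not something specific to this article):

Fe2O3 + 2 Al → Al2O3 + 2 Fe (plus a great deal of heat)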

While thermite is intentionally kept as a powder, and often in some kind of container to create a molten phase that sustains itself, it shouldn’t be hard to imagine what happens if the metal is ground into a fine powder, distributed as a fine dust cloud in a confined room and exposed to an ignition source. At that point the differences between carbohydrates, hydrocarbons and metals become mostly academic to any survivors of the resulting inferno.

Preventing Dust Explosions

As should be quite obvious at this point, there's no real way to fight a dust explosion, only to prevent it. Proper ventilation, preventing dust from building up, and having active dust extraction in place where possible are about the most minimal precautions one should take. Complacency, as happened at the Imperial Sugar plant, merely invites disaster: if you can see dust build-up on surfaces and dust in the air, you're already at least at DEFCON 2.

A demonstration of how easy it is to create a solid dust explosion came from the Mythbusters back in 2008 when they tested the ‘sawdust cannon’ myth. This involved blowing sawdust into a cloud and igniting it with a flare, creating a massive fireball. After nearly getting their facial hair singed off with this roaring success, they then tried the same with non-dairy coffee creamer, which created an even more massive fireball.

Fortunately the Mythbusters build team was supervised by adults on the bomb range for these experiments, which show just how incredibly dangerous dust explosions can be, even out in the open on a secure bomb range, never mind in an enclosed space, as hundreds have found out over the decades in the US alone. One only has to look at the USCSB's dust explosion statistics to learn to respect the dangers a bit more.

Testing Brick Layers in OrcaSlicer With Staggered Perimeters

By: Maya Posch
2 June 2025 at 05:00
The OrcaSlicer staggered perimeters in an FDM print, after slicing through the model. (Credit: CNC Kitchen)

The idea of staggered (or brick) layers in FDM prints has become very popular over the past few years, with nightly builds of OrcaSlicer now featuring a 'Stagger Perimeters' option to automate the process, as demonstrated by [Stefan] in a recent CNC Kitchen video. See the relevant OrcaSlicer GitHub thread for the exact details, and to obtain a build with this feature. After installing, enable the new parameter in the 'Strength' tab and slice the model as normal.

In the video, [Stefan] first tries out a regular and a staggered perimeter print without further adjustments. Perhaps surprisingly, the staggered version breaks before the regular print, which [Stefan] deduces to be the result of increased voids within the print. Increasing the extrusion rate to 110% to fill up said voids does indeed result in the staggered part showing a massive boost in strength.

What’s perhaps more telling is that a similar positive effect is observed when the flow is increased with the non-staggered part, albeit with the staggered part still showing more of a strength increase. This makes it obvious that just staggering layers isn’t enough, but that the flowrate and possibly other parameters have to be adjusted as well to fully realize the potential of brick layers. That said, it’s encouraging to see this moving forward despite questionable patent claims.

White LED Turning Purple: Analyzing a Phosphor Failure

By: Maya Posch
31 May 2025 at 02:00

White LED bulbs are commonplace in households by now, mostly due to their low power usage and high reliability. Crank up the light output enough and you do however get high temperatures and corresponding interesting failure modes. An example is the one demonstrated by the [electronupdate] channel on YouTube with a Philips MR16 LED spot that had developed a distinct purple light output.

The crumbling phosphor coating on top of the now exposed UV LEDs. (Credit: electronupdate, YouTube)

After popping off the front to expose the PCB with the LED packages, the fault seemed to be due to the phosphor on one of the four LEDs flaking off, exposing the individual blue/near-UV emitters underneath. Generally, white LEDs are blue or near-UV LEDs with a phosphor coating on top that converts this short-wavelength light into broad-band visible (white) light or a specific color, so this failure mode makes perfect sense.

After putting the PCB under a microscope and having a look at the failed package alongside the other LED packages, it became obvious that the crumbling phosphor wasn't limited to just the one package, as the remaining three showed clear cracks in their phosphor coating. Whether due to the heat in these high-intensity spot lamps or just age, clearly over time these white LED packages become just UV LEDs. Ideally you could dab on some fresh phosphor, but more likely the fix is to replace these LED packages every few years until the power supply in the bulb gives up the ghost.

Thanks to [ludek111] for the tip.

Forced E-Waste PCs and the Case of Windows 11’s Trusted Platform

By: Maya Posch
29 May 2025 at 14:00

Until the release of Windows 11, the upgrade proposition for Windows operating systems was rather straightforward: you considered whether the current version of Windows on your system still fulfilled your needs and if the answer was ‘no’, you’d buy an upgrade disc. Although system requirements slowly crept up over time, it was likely that your PC could still run the newest-and-greatest Windows version. Even Windows 7 had a graphical fallback mode, just in case your PC’s video card was a potato incapable of handling the GPU-accelerated Aero Glass UI.

This makes a lot of sense, as the most demanding software on a PC are the applications, not the OS. Yet with Windows 11 a new ‘hard’ requirement was added that would flip this on its head: the Trusted Platform Module (TPM) is a security feature that has been around for many years, but never saw much use outside of certain business and government applications. In addition to this, Windows 11 only officially supports a limited number of CPUs, which risks turning many still very capable PCs into expensive paperweights.

Although the TPM and CPU requirements can be circumvented with some effort, this is not supported by Microsoft and raises the specter of a wave of capable PCs being trashed when Windows 10 reaches EOL starting this year.

Not That Kind Of Trusted

Although ‘Trusted Platform’ and ‘security’ may sound like a positive thing for users, the opposite is really the case. The idea behind Trusted Computing (TC) is about consistent, verified behavior enforced by the hardware (and software). This means a computer system that’s not unlike a modern gaming console with a locked-down bootloader, with the TPM providing a unique key and secure means to validate that the hardware and software in the entire boot chain is the same as it was the last time. Effectively it’s an anti-tamper system in this use case that will just as happily lock out an intruder as the purported owner.

XKCD's take on encrypting drives.

In the case of Windows 11, the TPM is used for this boot validation (Secure Boot), as well as storing the (highly controversial) Windows Hello’s biometric data and Bitlocker whole-disk encryption keys. Important to note here is that a TPM is not an essential feature for this kind of functionality, but rather a potentially more secure way to prevent tampering, while also making data recovery more complicated for the owner. This makes Trusted Computing effectively more a kind of Paranoid Computing, where the assumption is made that beyond the TPM you cannot trust anything about the hardware or software on the system until verified, with the user not being a part of the validation chain.

Theoretically, validating the boot process can help detect boot viruses, but this comes with a range of complications, not the least of which is that this would at most allow you to boot into Windows safe mode, if at all. You’d still need a virus scanner to detect and remove the infection, so using TPM-enforced Secure Boot does not help you here and can even complicate troubleshooting.

Outside of a corporate or government environment where highly sensitive data is handled, the benefits of a TPM are questionable, and there have been cases of Windows users who got locked out of their own data by Bitlocker failing to decrypt the drive, for whatever reason. Expect support calls from family members on Windows 11 to become trickier as a result, also because firmware TPM (fTPM) bugs can cause big system issues like persistent stuttering.

Breaking The Rules

As much as Microsoft keeps trying to ram^Wgently convince us consumers to follow its ‘hard’ requirements, there are always ways to get around these. After all, software is just software, and thus Windows 11 can be installed on unsupported CPUs without a TPM or even an ‘unsupported’ version 1.2 TPM. Similarly, the ‘online Microsoft account’ requirement can be dodged with a few skillful tweaks and commands. The real question here is whether it makes sense to jump through these hoops to install Windows 11 on that first generation AMD Ryzen or Intel Core 2 Duo system from a support perspective.

Fortunately, one does not have to worry about losing access to Microsoft customer support here, because we all know that us computer peasants do not get that included with our Windows Home or Pro license. The worry is more about Windows Updates, especially security updates and updates that may break the OS installation by using CPU instructions unsupported by the local hardware.

Although Microsoft published a list of Windows 11 CPU requirements, it's not immediately obvious what they are based on. Clearly it's not about actual missing CPU instructions, or you wouldn't even be able to install and run the OS. The only true hard limit in Windows 11 (for now) appears to be the UEFI BIOS requirement, but dodging the TPM 2.0 & CPU requirements is as easy as a quick dive into the Windows Registry, adding an AllowUpgradesWithUnsupportedTPMOrCPU DWORD value (set to 1) under HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup. You still need at least a TPM 1.2 module in this case.
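
As a sketch of how that registry tweak might be scripted with Python's standard winreg module (run it elevated on the machine you're upgrading; whether Microsoft keeps honoring the value in future builds is anyone's guess):

```python
import winreg

# Create (or open) HKLM\SYSTEM\Setup\MoSetup and set the documented bypass value.
key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\Setup\MoSetup", 0, winreg.KEY_SET_VALUE
)
winreg.SetValueEx(key, "AllowUpgradesWithUnsupportedTPMOrCPU", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)
print("Bypass value set; Windows 11 setup should now skip the TPM 2.0/CPU check.")
```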

When you use a tool like Rufus to write the Windows 11 installer to a USB stick you can even toggle a few boxes to automatically have all of this done for you. This even includes the option to completely disable TPM as well as the Secure Boot and 8 GB of RAM requirements. Congratulations, your 4 GB RAM, TPM-less Core 2 Duo system now runs Windows 11.

Risk Management

It remains to be seen whether Microsoft will truly enforce the TPM and CPU requirements in the future, that is, requiring Secure Boot with Bitlocker. Over on the Apple side of the fence, the hardware has been performing system drive encryption along with other 'security' features since the appearance of the Apple T2 chip. It might be that Microsoft envisions a similar future for PCs, one in which even something as sacrilegious as dual-booting another OS becomes impossible.

Naturally, this raises the spectre of increasing hostility between users and their computer systems. Can you truly trust that Bitlocker won't suddenly decide that it doesn't want to unlock the boot drive any more? What if an fTPM issue bricks the system, or a sneaky Windows 11 update a few months or years from now prevents a 10th generation Intel CPU from running the OS without crashing due to missing instructions? Do you really trust Microsoft that far?

It does seem like there are only bad options if you want to stay in the Windows ecosystem.

Strategizing

Clearly, there are no good responses to what Microsoft is attempting here with its absolutely user-hostile actions that try to push a closed, ‘AI’-infused ecosystem on its victi^Wusers. As someone who uses Windows 10 on a daily basis, this came only after running Windows 7 for as long as application support remained in place, which was years after Windows 7 support officially ended.

Perhaps for Windows users, sticking to Windows 10 is the best strategy here, while pushing software and hardware developers to keep supporting it (and maybe Windows 7 again too…). Windows 11 came preinstalled on the system that I write this on, but I erased it with a Windows 10 installation and reused the same BIOS-embedded license key. I also disabled fTPM in the BIOS to prevent 'accidental upgrades', as Microsoft was so fond of doing back with Windows 7 when everyone absolutely had to use Windows 10.

I can hear the 'just use Linux/BSD/etc.' crowd already clamoring in the comments, and will preface this by saying that although I use Linux and BSD on a near-daily basis, I would not want to use either as my primary desktop system, for too many reasons to go into here. I'm still holding out some hope for ReactOS hitting its stride Any Day Now™, but it's tough to see a path forward beyond running Windows 10 into the ground, while holding only faint hope for Windows 12 becoming Microsoft's gigantic Mea Culpa.

After having used PCs and Windows since the Windows 3.x days, I can say that the situation for personal computers today is unprecedented, not unlike that for the World Wide Web. It seems less and less like companies are responding to customer demand, and more like an inversion where customers have become mere consumers: receptacles for the AI and marketing-induced slop of the day, whose purchases serve to make stock investors happy because Line Goes Up©.

The Cost of a Cheap UPS is 10 Hours and a Replacement PCB

By: Maya Posch
29 May 2025 at 08:00

Recently [Florin] was in the market for a basic uninterruptible power supply (UPS) to provide some peace of mind for the smart home equipment he had stashed around. Unfortunately, the cheap Serioux LD600LI unit he picked up left a bit to be desired and required some retrofitting.

To be fair, the problems that [Florin] ended up dealing with were less about the UPS' ability to handle power events, and more about the USB interface on the UPS. Initially the UPS seemed to communicate happily with HomeAssistant (HA) via Network UPS Tools over a generic USB protocol, after figuring out what device profile matched this re-branded generic UPS. That's when HA began to constantly lose the connection with the UPS, risking its integration in the smart home setup.

The old and new USB-serial boards side by side. (Credit: VoltLog, YouTube)

After tearing down the UPS to see what was going on, [Florin] found that it used a fairly generic USB-serial adapter featuring the common Cypress CY7C63310 family of low-speed USB controllers. Apparently the firmware on this controller was simply not up to the task or poorly implemented, so a replacement was needed.

The process and implementation are covered in detail in the video. It's quite straightforward, taking the 9600 baud serial link from the UPS' main board and using a Silabs CP2102N USB-to-UART controller to create a virtual serial port on the USB side. These conversion boards have to be fully isolated, of course, which is where the HopeRF CMT8120 dual-channel digital isolator comes into play.
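
With the CP2102N enumerating as an ordinary virtual serial port, polling the UPS from a script becomes trivial. A minimal pyserial sketch, assuming the UPS speaks the common Megatec-style 'Q1' text protocol (the exact protocol of this rebranded unit isn't stated, so treat the query string and port name as assumptions):

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # e.g. "COM5" on Windows

with serial.Serial(PORT, 9600, timeout=2) as ups:
    ups.write(b"Q1\r")  # Megatec/Voltronic status query (assumed protocol)
    reply = ups.read(128).decode(errors="replace").strip()
    print("UPS status:", reply if reply else "<no reply>")
```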

After assembly it almost fully worked, except that a Sonoff Zigbee controller in the smart home setup used the same Silabs controller, and thus the same USB VID/PID combo. Fortunately Silabs AN721 describes how to use an alternate PID (0xEA63), which fixed this issue until the next device with a CP2102N is installed.

As it turns out, the cost of a $40 UPS is actually 10 hours of work and $61 in parts, although one cannot put a value on all the lessons learned here.

Washington Consumers Gain Right to Repair for Cellphones and More

By: Maya Posch
28 May 2025 at 11:00

Starting January 1st, 2026, Washington state’s new Right to Repair law will come into effect. It requires manufacturers to make tools, parts and documentation available for diagnostics and repair of ‘digital electronics’, including cellphones, computers and similar appliances. The relevant House Bill 1483 was signed into law last week after years of fighting to make it a reality.

A similar bill in Oregon faced strong resistance from companies like Apple, despite the company having backed another Right to Repair bill in California. In the case of the Washington bill, there were positive noises from the side of Google and Microsoft, proclaiming themselves and their products to be in full compliance with such consumer laws.

Of course, the devil is always in the details, with Apple in particular being a good example of how to technically comply with the letter of the law while throwing up many (financial) roadblocks for anyone interested in obtaining said tools and components. Apple's penchant for part pairing is also a significant problem when it comes to repairing devices, even if these days it's somewhat less annoying than it used to be — assuming you're running iOS 18 or better.

That said, we always applaud these shifts in the right direction, where devices can actually be maintained and repaired without too much fuss, rather than e.g. cellphones being just disposable items that get tossed out after two years or less.

Thanks to [Robert Piston] for the tip.

NASA Is Shutting Down the International Space Station Sighting Website

By: Maya Posch
26 May 2025 at 11:00

Starting on June 12, 2025, the NASA Spot the Station website will no longer provide ISS sighting information, per a message recently sent out. This means no more sighting opportunities listed on the website, nor will users subscribed via the website receive email or text notifications. Instead, anyone interested in this kind of information will have to download the mobile app for iOS or Android.

Obviously this has people like [Keith Cowing] over at Nasa Watch rather disappointed, due to how the website has been an easy-to-use resource that anyone could access, even without a smartphone. Although the assumption is often made that everyone has their own personal iOS- or Android-powered glass slab with them, one can think of communal settings where an internet café is the sole form of internet access. There is also the consideration that a website like this is much easier for children to access; they would now see this opportunity vanish.
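
For those who would rather not install an app at all, the pass geometry can be recomputed locally from a published ISS TLE; a minimal sketch using the skyfield library (the observer coordinates are hypothetical, and this only finds passes above 10 degrees, it does not filter for the satellite being sunlit against a dark sky the way Spot the Station did):

```python
from datetime import timedelta
from skyfield.api import load, wgs84

ts = load.timescale()
# Fetch a current ISS TLE (25544 is the ISS catalog number); any fresh TLE source works.
sats = load.tle_file("https://celestrak.org/NORAD/elements/gp.php?CATNR=25544&FORMAT=tle")
iss = sats[0]

observer = wgs84.latlon(40.71, -74.01)  # hypothetical observer: New York City
t0 = ts.now()
t1 = ts.from_datetime(t0.utc_datetime() + timedelta(days=3))

times, events = iss.find_events(observer, t0, t1, altitude_degrees=10.0)
for t, event in zip(times, events):
    print(t.utc_strftime("%Y-%m-%d %H:%M UTC"), ("rise", "culminate", "set")[event])
```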

With smart phone apps hardly a replacement for a website of this type, it’s easy to see how the app-ification of the WWW continues, at the cost of us users.

Recovering Water From Cooling Tower Plumes With Plume Abatement

By: Maya Posch
23 May 2025 at 02:00
Electrostatic droplet capture system installed on an HVAC condenser. (Credit: Infinite Cooling)

As a common feature of thermal power plants, cooling towers enable major water savings compared to straight-through cooling methods. Even so, the big clouds of water vapor above them are a clear indication of how much cooling water is still effectively lost, with water vapor also having a negative impact on the environment. Using so-called plume abatement, the amount of water vapor making it into the environment can be reduced, and recently a trial of one such system took place at a French nuclear power plant.

This trial featured electrostatic droplet capture by US-based Infinite Cooling, which markets the technology as retrofittable to existing cooling towers and similar systems, including the condensers of office HVAC systems. The basic principle, as the name suggests, involves capturing the droplets that form as the heated, saturated air leaves the cooling tower, in this case with an electrostatic charge. The captured droplets are then led to a reservoir from which the water can be reused in the cooling system. This reduces both the visible plume and the amount of cooling water used.

A 2021 review article by [Shuo Li] and [M.R. Flynn] in Environmental Fluid Mechanics examines the different approaches to plume abatement. Traditional plume abatement designs use parallel streams of air, with the goal being to have condensation commence as early as possible rather than after being exhausted into the surrounding air. Some methods use a mesh cover to provide a surface to condense on, while condensing modules, which use counterflow in an air-to-air heat exchanger, are a commercially available technology.

Other commercial solutions include low-profile, forced-draft hybrid cooling towers, yet it seems that electrostatic droplet capture is a rather new addition here. With even purely passive systems already seeing ~10% recapturing of lost cooling water, these active methods may just be the ticket to significantly reduce cooling water needs without being forced to look at (expensive) dry cooling methods.

Top image: The French Chinon nuclear power plant with its low-profile, forced-draft cooling towers. (Credit: EDF/Marc Mourceau)

Gene Editing Spiders to Produce Red Fluorescent Silk

By: Maya Posch
22 May 2025 at 02:00
Regular vs gene-edited spider silk with a fluorescent gene added. (Credit: Santiago-Rivera et al. 2025, Angewandte Chemie)

Continuing the scientific theme of adding fluorescent proteins to everything that moves, this time spiders found themselves at the pointy end of the CRISPR-Cas9 injection needle. In a study by researchers at the University of Bayreuth, common house spiders (Parasteatoda tepidariorum) had a gene inserted for a red fluorescent protein in addition to having an existing gene for eye development disabled. This was the first time that spiders have been subjected to this kind of gene-editing study, mostly due to how fiddly they are to handle as well as their genome duplication characteristics.

In the research paper in Angewandte Chemie the methods and results are detailed, with the knock-out approach of the sine oculis (C1) gene being tried first as a proof of concept. The CRISPR solution was injected into the ovaries of female spiders, whose offspring then carried the mutation. With clear deficiencies in eye development observable in this offspring, the researchers moved on to adding the red fluorescent protein gene with another CRISPR solution, which targets the major ampullate gland where the silk is produced.

Ultimately, this research serves to demonstrate that it is not only possible to study spiders in more depth these days using tools like CRISPR-Cas9, but also to customize and study spider silk production.

Fault Analysis of a 120W Anker GaNPrime Charger

By: Maya Posch
21 May 2025 at 08:00

Taking a break from his usual prodding at suspicious AliExpress USB chargers, [DiodeGoneWild] recently had a gander at what used to be a good USB charger.

The Anker 737 USB charger prior to its autopsy. (Credit: DiodeGoneWild, YouTube)

Before it went completely dead, the Anker 737 GaNPrime USB charger which a viewer sent him was capable of up to 120 Watts combined across its two USB-C and one USB-A outputs. Naturally the charger’s enclosure couldn’t be opened non-destructively, and it turned out to have (soft) potting compound filling up the voids, making it a treat to diagnose. Suffice it to say that these devices are not designed to be repaired.

With it being an autopsy, the unit got broken down into the individual PCBs, with a short detected that eventually got traced down to an IC marked 'SW3536', which is one of the ICs that communicates with the connected USB device to negotiate the voltage. With that one IC having shorted, it appears to have rendered the entire charger into an expensive paperweight.

Since the charger was already in pieces, the rest of the circuit and its ICs were also analyzed. Here the gallium nitride (GaN) part was found in the Navitas GaNFast NV6136A FET with integrated gate driver, along with an Infineon CoolGaN IGI60F1414A1L integrated power stage. Unfortunately all of the cool technology was rendered useless by one component developing a short, even if it made for a fascinating look inside one of these very chonky USB chargers.

Plugging Plasma Leaks in Magnetic Confinement With New Guiding Center Model

By: Maya Posch
21 May 2025 at 02:00

Although the idea of containing a plasma within a magnetic field seems straightforward at first, plasmas are highly dynamic systems that will happily escape magnetic confinement if given half a chance. This poses a major problem in nuclear fusion reactors and similar, where escaping particles like alpha (helium) particles from the magnetic containment will erode the reactor wall, among other issues. For stellarators in particular the plasma dynamics are calculated as precisely as possible so that the magnetic field works with rather than against the plasma motion, with so far pretty good results.

Now researchers at the University of Texas reckon that they can improve on these plasma system calculations with a new, more precise and efficient method. Their suggested non-perturbative guiding center model is published in (paywalled) Physical Review Letters, with a preprint available on Arxiv.

The current perturbative guiding center model works well enough that even the article authors admit that e.g. Wendelstein 7-X is within a few % of being perfectly optimized. While we wouldn't dare to take a poke at what exactly this 'data-driven symmetry theory' approach does differently, it suggests the use of machine learning based on simulation data, which then presumably does a better job at describing the movement of alpha particles through the magnetic field than traditional simulations.

Top image: Interior of the Wendelstein 7-X stellarator during maintenance.

3D Printing Uranium-Carbide Structures for Nuclear Applications

By: Maya Posch
20 May 2025 at 02:00
Fabrication of uranium-based components via DLP. (Zanini et al., Advanced Functional Materials, 2024)

Within the nuclear sciences, including fuel production and nuclear medicine (radiopharmaceuticals), specific isotopes often have to be produced as efficiently as possible, or fuels have to allow for the formation of (gaseous) fission products and improved cooling without being compromised. Here, having the target material possess an optimized 3D shape to increase surface area and safely expel gases during nuclear fission can be hugely beneficial, but producing these shapes in an efficient way is complicated. Using photopolymer-based stereolithography (SLA), as recently demonstrated by [Alice Zanini] et al. in a research article in Advanced Functional Materials, provides an interesting new method to accomplish these goals.

In what is essentially the same process as a hobbyist resin-based SLA printer uses, the photopolymer here is composed of uranyl ions as the photoactive component along with carbon precursors, creating solid uranium dicarbide (UC2) structures upon exposure to UV light and subsequent sintering. Uranium carbide is one of the alternatives being considered for today's uranium ceramic fuels in fission reactors, with this approach possibly providing a practical manufacturing route.

Uranium carbide is also used as one of the target materials in ISOL (isotope separation on-line) facilities like CERN's ISOLDE, where having precise control over the molecular structure of the target could optimize isotope production. Ideally, photocatalysts equivalent to uranyl can be found to create optimized targets of other materials as well, but even as it stands this is a demonstration of how SLA (DLP or otherwise) stands to transform the nuclear sciences and industries.

The Lost 256 KB Japanese ROM for the Macintosh Plus Has Been Found

By: Maya Posch
18 May 2025 at 02:00
Mainboard with the two 128 kB EPROMs containing the special Macintosh Plus ROM image. (Credit: Pierre Dandumont)

The Apple Macintosh Plus was one of the most long-lived Apple computers and saw three revisions of its 128 kB ROMs during its lifetime, at least officially. There's a fourth ROM, sized 256 kB, that merges the Western ROMs with Japanese fonts. This would save a user of a Western Macintosh Plus precious start-up time and RAM when starting software using these fonts. Unfortunately, this ROM existed mostly as a kind of myth, until [Pierre Dandumont] uncovered one (machine-translated, French original).

The two 128 kB EPROMs containing the special Macintosh Plus ROM image. (Credit: Pierre Dandumont)

Since this particular ROM was rumored to exist somewhere in the Japanese market, [Pierre] went hunting for Japanese Macintosh Plus mainboards, hoping to find a board with this ROM. After finally getting lucky, the next task was to dump the two 128 kB EPROMs. An interesting side note here is that the Macintosh Plus' two ROM sockets use the typical programming voltage pin (Vpp) as an extra address line, enabling 256 kB of capacity across the two sockets.

This detail is probably why this special ROM wasn't verified before, as people tried to dump them without using that extra address line, i.e. as a typical 27C512 64 kB EPROM instead of this proprietary pinout, which would have resulted in the same 64 kB dump as from a standard ROM. Thanks to [Doc TB]'s help and his UCA device it was possible to dump the whole image, with the images available for download.
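
For anyone reassembling their own dumps, the merge itself is trivial; a small sketch assuming the usual 68000-style arrangement where one EPROM holds the high byte and the other the low byte of each 16-bit word (file names are hypothetical):

```python
def merge_rom_halves(hi_path: str, lo_path: str, out_path: str) -> None:
    """Interleave a high-byte and a low-byte EPROM dump into one big-endian ROM image."""
    with open(hi_path, "rb") as f:
        hi = f.read()
    with open(lo_path, "rb") as f:
        lo = f.read()
    assert len(hi) == len(lo), "both dumps should be the same size"

    image = bytearray()
    for h, l in zip(hi, lo):
        image += bytes((h, l))  # 68000 is big-endian: high byte first in each word

    with open(out_path, "wb") as f:
        f.write(image)

merge_rom_halves("rom_hi.bin", "rom_lo.bin", "macplus_256k.rom")
```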

Using this ROM image was the next interesting part, as [Pierre] initially didn’t have a system to test it with, and emulators assume the 128 kB ROM format. Fortunately these are all problems that can be solved, allowing the ROM images to be validated on real hardware as well as a modified MAME build. We were informed by [Pierre] that MAME releases will soon be getting support for this ROM as well.

Voyager 1’s Primary Thrusters Revived Before DSN Command Pause

By: Maya Posch
16 May 2025 at 02:00

As with all aging bodies, clogged tubes form an increasing issue. So too with the 47-year old Voyager 1 spacecraft and its hydrazine thrusters. Over the decades silicon dioxide from an aging rubber diaphragm in the fuel tank has been depositing on the inside of fuel tubes. By switching between primary, backup and trajectory thrusters the Voyager team has been managing this issue and kept the spacecraft oriented towards Earth. Now this team has performed another amazing feat by reviving the primary thrusters that had been deemed a loss since a heater failure back in 2004.

Unlike the backup thrusters, the trajectory thrusters do not provide roll control, so reviving the primary thrusters would buy the mission a precious Plan B if the backup thrusters were to fail. Back in 2004 engineers had determined that the heater failure was likely unfixable, but over twenty years later the team was willing to give it another shot. Analyzing the original failure data indicated that a glitch in the heater control circuit was likely to blame, so they might actually still work fine.

To test this theory, the team remotely jiggled the heater controls, enabled the primary thrusters and waited for the spacecraft's star tracker to drift off course so that the thrusters would be engaged by the onboard computer. Making this extra exciting was scheduled maintenance on the Deep Space Network coming up in a matter of weeks, which would make troubleshooting impossible for months.

To their relief the changes appear to have worked, with the heaters clearly working again, as are the primary thrusters. With this fix in place, it seems that Voyager 1 will be with us for a while longer, even as we face the inevitable end of the amazing Voyager program.

LACED: Peeling Back PCB Layers With Chemical Etching and a Laser

By: Maya Posch
15 May 2025 at 20:00

Once a printed circuit board (PCB) has been assembled it’s rather hard to look inside of it, which can be problematic when you have e.g. a multilayer PCB of an (old) system that you really would like to dissect to take a look at the copper layers and other details that may be hidden inside, such as Easter eggs on inner layers. [Lorentio Brodeso]’s ‘LACED’ project offers one such method, using both chemical etching and a 5 Watt diode engraving laser to remove the soldermask, copper and FR4 fiberglass layers.

This project uses sodium hydroxide (NaOH) to dissolve the solder mask, followed by hydrochloric acid (HCl) and hydrogen peroxide (H2O2) to dissolve the copper in each layer. The engraving laser is used for removing the FR4 material. Despite the 'LACED' acronym standing for Laser-Controlled Etching and Delayering, the chemical method(s) and laser steps are performed independently from each other.

In a way this makes it a variation on the more traditional CNC-based method, as demonstrated by [mikeselectricstuff] (shown in the top image) back in 2016 in a detailed setup video, where a multi-layer PCB was peeled back with enough resolution to make out each successive copper and fiberglass layer.

The term ‘laser-assisted etching’ is generally used for e.g. glass etching with HF or KOH in combination with a femtosecond laser to realize high-resolution optical features, ‘selective laser etching’ where the etchant is assisted by the laser-affected material, or the related laser-induced etching of hard & brittle materials. Beyond these there is a whole world of laser-induced or laser-activated etching or functionalized methods, all of which require that the chemical- and laser-based steps are used in unison.

Aside from this, the use of chemicals to etch away soldermask and copper does of course leave one with a similar messy clean-up as when etching new PCBs, but it can provide more control due to the selective etching, as a CNC’s carbide bit will just as happily chew through FR4 as copper. When reverse-engineering a PCB you will have to pick whatever method works best for you.

Top image: Exposed inner copper on multilayer PCB. (Credit: mikeselectricstuff, YouTube)

Turning a Chromebox Into a Proper Power-Efficient PC

By: Maya Posch
14 May 2025 at 08:00

Google's ChromeOS and associated hardware get a lot of praise for being easy to manage and for providing affordable hardware for school and other educational settings. It's also undeniable that their locked-down nature forms a major obstacle and limits reusability.

That is unless you don’t mind doing a bit of hacking. The Intel Core i3-8130U based Acer CXI3 Chromebox that the [Hardware Haven] YouTube channel got their mittens on is a perfect example.

The Acer CXI3 in all its 8th-gen Intel Core i3 glory. (Credit: Hardware Haven, YouTube)

This is a nice mini PC, with modular SODIMM RAM, an M.2 slot for NVMe storage, as well as a slot for the WiFi card (or a SATA adapter). After resetting the Chromebox to its default configuration and wiping the previous user, it ran at just a few Watts idle at the desktop. As this is just a standard x86_64 PC, the only thing holding it back from booting non-ChromeOS software is the BIOS, which is where [MrChromebox]'s exceedingly useful replacement BIOSes for supported systems come into play, with easy to follow instructions.

Reflashing the Acer CXI3 unit was as easy as removing the write-protect screw from the mainboard, running the Firmware Utility Script from a VT2 terminal (Ctrl+Alt+F2 on boot & chronos as login) and flashing either the RW_LEGACY or UEFI ROM depending on what is supported and desired. This particular Chromebox got the full UEFI treatment, and after upgrading the NVMe SSD, Debian-based Proxmox installed without a hitch. Interestingly, idle power dropped from 2.6 Watt under ChromeOS to 1.6 Watt under Proxmox.

If you have a Chromebox that's supported by [MrChromebox], it's worth taking a poke at, with some solutions allowing you to even dual-boot ChromeOS and another OS if that's your thing.
