
The Convoluted Way Intel’s 386 Implemented its Registers

The 386's main register bank, at the bottom of the datapath. The numbers show how many bits of the register can be accessed. (Credit: Ken Shirriff)

The fact that modern-day x86 processors still pretty much support the same operating systems and software as their ancestors did is quite a feat. Much of this effort had already been accomplished with the release of the 80386 (later 386) CPU in 1985, which was not only the first 32-bit x86 CPU, but was also backwards compatible with 8- and 16-bit software dating back to the 1970s. Making this work transparently was anything but straightforward, as [Ken Shirriff]’s recent analysis of the 80386’s main register file shows.

Labelled Intel 80386 die shot. (Credit: Ken Shirriff)

Using die shots of the 386’s registers and surrounding silicon, it’s possible to piece together how backwards compatibility was implemented. The storage cells of the registers are implemented using static memory (SRAM) as is typical, with much of the register file triple-ported (two read, one write).

Most interesting is the presence of six different circuits to support accessing the register file for 8-, 16- or 32-bit writes and reads. The ‘shuffle’ network, as [Ken] calls it, handles these distinct writes and reads, and it also leads to the finding that the bottom 16 bits of the registers are actually interleaved to make this process run more smoothly.

Fortunately for Intel (and AMD) engineers, this feat wouldn’t have to be repeated with the arrival of AMD64 and x86_64 many years later, when the 386’s mere 275,000 transistors on a 1 µm process would already be ancient history.

Want to dive even deeper into the 386? This isn’t the first time [Ken] has looked at the iconic chip.

Testing a Cheap Bench Power Supply Sold on Amazon

We’ve all seen those cheap bench power supply units (PSUs) for sale online, promising specifications that would cost at least a hundred dollars from a name-brand model. Just how much of a compromise are these (usually rebranded) PSUs, and should you trust them with your electronics? Recently [Denki Otaku] purchased a cheap unit off Amazon Japan for a closer look, and found it to be rather lacking.

Internals of the cheap bench PSU reviewed by Denki Otaku on YouTube.

Major compromises include the lack of an output power switch, no way to check the set current limit without shorting the output, a very slow drop in output voltage while adjusting due to the lack of a discharge circuit, and other usability concerns. Then the PSU’s electrical performance was put to the test.

Right off the bat a major issue with this cheap switch-mode PSU is clear, as it has 200 mV of peak-to-peak noise on its output, meaning very little output filtering. The maximum power output rating was also far too optimistic, with a large voltage drop observed. Despite this, it generally worked well, and the internals – with a big aluminium plate as heatsink – look pretty clean, with an interesting architecture.

The general advice is to get a bench PSU that has features like an output power button and an easy way to set the voltage and current limits. Also do not connect it to anything that cares about noise and ripple unless you know that it produces clean, filtered output voltages.

Neutron Flux Impact on Quartz Expansion Rate

Radiation-induced volumetric expansion (RIVE) is a concern for any concrete structure that is exposed to neutron flux and other types of radiation that affect crystalline structures within the aggregate. For research facilities and (commercial) nuclear reactors, RIVE is generally considered to be one of the factors that limits the lifespan of these structures, through the cracking that occurs as, for example, quartz within the concrete undergoes temporary amorphization with a corresponding volume increase. The significance of RIVE within the context of a nuclear power plant is, however, still poorly studied.

A recent study by [Ippei Maruyama] et al., as published in the Journal of Nuclear Materials, placed material samples in the LVR-15 research reactor in the Czech Republic to expose them to an equivalent neutron flux. Their results show that at the neutron flux levels expected at the biological shield of a nuclear power plant, the healing effect from recrystallization is highly likely to outweigh the damaging effects of amorphization, thus preventing RIVE damage.

This study follows earlier research on the topic at the University of Tokyo by [Kenta Murakami] et al., as well as by Chinese researchers, e.g. [Weiping Zhang] et al. in Nuclear Engineering and Technology. [Maruyama] et al. recommend that, to validate these findings, concrete samples from decommissioned nuclear plants be examined for signs of RIVE.

Heading image: SEM-EDS images of the pristine (left) and the irradiated (right) MC sample. (Credit: I. Maruyama et al., 2022)

A Gentle Introduction to COBOL

As the Common Business Oriented Language, COBOL has a long and storied history. To this day it’s quite literally the financial bedrock for banks, businesses and financial institutions, running largely unnoticed by the world on mainframes and similar high-reliability computer systems. That said, as a domain-specific language targeting boring business things, it doesn’t quite get the attention or hype that general-purpose programming or scripting languages enjoy. Its main characteristic in the public eye appears to be that it’s ‘boring’.

Despite this, COBOL is a very effective language for data transactions, report generation and related tasks. Due to its narrow focus on business applications, it gets one started with very little fuss and is highly self-documenting, while providing native support for decimal calculations and a range of I/O access methods and database types, even with mere files. Since the 2002 standard, COBOL has undergone a number of modernizations, such as free-form code, object-oriented programming and more.

Without further ado, let’s fetch an open-source COBOL toolchain and run it through its paces with a light COBOL tutorial.

Spoiled For Choice

It used to be that if you wanted to tinker with COBOL, you pretty much had to either have a mainframe system with OS/360 or similar kicking around, or, starting in 1999, hurl yourself at setting up a mainframe system using the Hercules mainframe emulator. Things got a lot more hobbyist & student friendly in 2002 with the release of GnuCOBOL, formerly OpenCOBOL, which translates COBOL into C code before compiling it into a binary.

While serviceable, GnuCOBOL is not a true compiler (it transpiles to C), and it does not claim any level of standard adherence, despite scoring quite high against the NIST test suite. Fortunately, the GNU Compiler Collection (GCC) just got updated with a brand-new COBOL frontend (gcobol) in the 15.1 release. The only negative is that for now it is Linux-only, but if your distribution of choice already has it in its repository, you can fetch it there easily. The same goes for Windows folks who have WSL set up, or who can use GnuCOBOL with MSYS2.

With either compiler installed, you are now ready to start writing COBOL. The best part of this is that we can completely skip talking about the Job Control Language (JCL), the eldritch horror that one would normally be exposed to on IBM OS/360 systems and kin. Instead we can just use GCC (or GnuCOBOL) any way we like, including calling it directly on the CLI, via a Makefile, or integrated into an IDE if that’s your thing.

Hello COBOL

As is typical, we start with the ‘Hello World’ example as a first look at a COBOL application:

IDENTIFICATION DIVISION.
    PROGRAM-ID. hello-world.
PROCEDURE DIVISION.
    DISPLAY "Hello, world!".
    STOP RUN.

Assuming we put this in a file called hello_world.cob, this can then be compiled with e.g. GnuCOBOL: cobc -x -free hello_world.cob.

The -x flag indicates that an executable binary is to be generated, and -free that the provided source uses free-format code, meaning that we aren’t bound to specific column use or sequence numbers. We’re also free to use lowercase for all the verbs, but having them in uppercase can make the code easier to read.

From this small example we can see the most important elements, starting with the identification division with the program ID and optionally elements like the author name, etc. The program code is found in the procedure division, which here contains a single display verb that outputs the example string. Of note is the use of the period (.) as a statement terminator.

The end of the application is indicated with stop run., which terminates the program even if it was called from a subprogram.

Hello Data

As fun as a ‘hello world’ example is, it doesn’t give a lot of details about COBOL, other than that it’s quite succinct and uses plain English words rather than symbols. Things get more interesting when we start looking at the aspects which define this domain specific language, and which make it so relevant today.

Few languages support decimal (fixed point) calculations, for example. In this COBOL Basics project I captured a number of examples of this and related features. The main change is the addition of the data division following the identification division:

DATA DIVISION.
WORKING-STORAGE SECTION.
01 A PIC 99V99 VALUE 10.11.
01 B PIC 99V99 VALUE 20.22.
01 C PIC 99V99 VALUE 00.00.
01 D PIC $ZZZZV99 VALUE 00.00.
01 ST PIC $*(5).99 VALUE 00.00.
01 CMP PIC S9(5)V99 USAGE COMP VALUE 04199.04.
01 NOW PIC 99/99/9(4) VALUE 04102034.

The data division is unsurprisingly where you define the data used by the program. All variables used are defined within this division, contained within the working-storage section. While seemingly overwhelming, it’s fairly easily explained, starting with the two digits in front of each variable name. This is the data level and is how COBOL structures data, with 01 being the highest (root) level, with up to 49 levels available to create hierarchical data.

This is followed by the variable name, up to 30 characters long, and then the PICTURE (or PIC) clause, which specifies the type and size of an elementary data item. If we wish to define a decimal value, we can do so as two numeric characters (represented by 9) followed by an implied decimal point V, with two more decimal digits (99). As shorthand we can use e.g. S9(5) to indicate a signed value with 5 numeric characters. There are a few more special characters, such as the asterisk, which replaces leading zeroes with asterisks, and Z, which suppresses leading zeroes.

The value clause does what it says on the tin: it assigns the value that follows it to the variable. There is, however, a gotcha here, as can be seen with the NOW variable: it gets a plain numeric value assigned, but due to the PIC format it is turned into a formatted date (04/10/2034).
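To make this concrete, here is a minimal sketch of my own (not taken from the COBOL Basics project), assuming GnuCOBOL and free format, that does nothing but display the edited field:

IDENTIFICATION DIVISION.
PROGRAM-ID. pic-demo.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 NOW PIC 99/99/9(4) VALUE 04102034.
PROCEDURE DIVISION.
    *> The slashes in the output come straight from the PIC clause.
    DISPLAY NOW.
    STOP RUN.

Compiled with cobc -x -free and run, this should print 04/10/2034, with the insertion characters applied as the value is stored.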

Within the procedure division these variables are subjected to addition (ADD A TO B GIVING C.), subtraction with rounding (SUBTRACT A FROM B GIVING C ROUNDED.), multiplication (MULTIPLY A BY CMP.) and division (DIVIDE CMP BY 20 GIVING ST.).
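Putting a couple of these verbs into a small runnable program makes the behavior easier to see. This is a minimal sketch of my own, again assuming GnuCOBOL and free format; the C-OUT edited field is my addition, purely so that the implied decimal point becomes visible in the output:

IDENTIFICATION DIVISION.
PROGRAM-ID. arithmetic-demo.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 A     PIC 99V99 VALUE 10.11.
01 B     PIC 99V99 VALUE 20.22.
01 C     PIC 99V99 VALUE 00.00.
01 C-OUT PIC Z9.99.
PROCEDURE DIVISION.
    ADD A TO B GIVING C.
    *> Move to the edited field so DISPLAY shows a decimal point.
    MOVE C TO C-OUT.
    DISPLAY "A + B = " C-OUT.
    SUBTRACT A FROM B GIVING C ROUNDED.
    MOVE C TO C-OUT.
    DISPLAY "B - A = " C-OUT.
    STOP RUN.

This should print 30.33 and 10.11; display the unedited C instead and you get the bare digits, since the V in its PIC clause is only an implied decimal point.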

Finally, there are a few different internal formats, as defined by USAGE: these are computational (COMP) and display (the default). Here COMP stores the data as binary, with a variable number of bytes occupied, somewhat similar to char, short and int types in C. These internal formats are mostly useful to save space and to speed up calculations.

Hello Business

In a previous article I went over the reasons why a domain-specific language like COBOL cannot realistically be replaced by a general-purpose language. In that same article I discussed the Hello Business project that I had written in COBOL as a way to gain some familiarity with the language. That particular project should be somewhat easy to follow with the information provided so far. What is new there is mostly file I/O, loops, the use of perform and of course the Report Writer, which is probably best understood by reading the IBM Report Writer Programmer’s Manual (PDF).

Going over the entire code line by line would take a whole article by itself, so I will leave it as an exercise for the reader unless there is somehow a strong demand by our esteemed readers for additional COBOL tutorial articles.

Suffice it to say that there is a lot more functionality in COBOL beyond these basics. The IBM ILE COBOL reference (PDF), the IBM Mainframer COBOL tutorial, the Wikipedia entry and others give a pretty good overview of many of these features, which includes object-oriented COBOL, database access, heap allocation, interaction with other languages and so on.

Despite being only a novice COBOL programmer at this point, I have found this DSL to be very easy to pick up once I understood some of the oddities about the syntax, such as the use of data levels and the PIC formats. It is my hope that with this article I was able to share some of the knowledge and experiences I gained over the past weeks during my COBOL crash course, and maybe inspire others to also give it a shot. Let us know if you do!

Building an nRF52840 and Battery-Powered Zigbee Gate Sensor

Recently [Glen Akins] reported on Bluesky that the Zigbee-based sensor he had made for his garden’s rear gate was still going strong after a summer and winter on the original 2450 lithium coin cell. The construction plans and design for the unit are detailed in a blog post. At its core is the MS88SF2 SoM by Minew, which features a Nordic Semiconductor nRF52840 SoC that provides the Zigbee RF feature as well as the usual MCU shenanigans.

Previously [Glen] had created a similar system that featured buttons to turn the garden lights on or off, as nobody likes stumbling blindly through a dark garden after returning home. Rather than having to fumble around for a button, the new system should detect when said rear gate is opened, sending a notification to [Glen]’s phone as well as activating the garden lights if it’s dark outside.

Although using a reed relay seemed like an obvious solution to replace the buttons, holding it closed turned out to require too much power. After looking at a few commercial examples, he settled on a Hall-effect sensor solution with the TI DRV5032FB in a TO-92 package.

Whereas the average person would just have put in a PIR sensor-based solution, this Zigbee solution comes with a lot more smart home cred, and does not require fumbling around with a smartphone or yelling at a voice assistant to turn the garden lights on.

Comparing ‘AI’ for Basic Plant Care With Human Brown Thumbs

The future of healthy indoor plants, courtesy of AI. (Credit: [Liam])
Like so many of us, [Liam] has a big problem. Whether it’s the curse of Brown Thumbs or something else, those darn houseplants just keep dying despite guides always telling you how incredibly easy it is to keep them from wilting with a modicum of care each day, even without opting for succulents or cactuses. In a fit of despair [Liam] decided to pin his hopes on what we have come to accept as the Savior of Humankind, namely ‘AI’, which can stand for a lot of things, but it’s definitely really smart and can even generate pretty pictures, which is something that the average human cannot. Hence it’s time to let an LLM do all the smart plant caring stuff with ‘PlantMom’.

Since LLMs (so far) don’t come with physical appendages by default, some hardware had to be plugged together to measure parameters like light, temperature and soil moisture. Add to this a grow light and a water pump, and all that remained was to tell the LLM, via an extensive prompt (containing Python code), what it should do (keep the plant alive) and what responses (Python methods) were available, then let the ‘AI’ (Google’s Gemma 3) handle it.

To say that this resulted in a dramatic failure along with what reads like an emotional breakdown (on the side of the LLM) would be an understatement. The LLM insisted on turning the grow light on when it should be off and had the most erratic watering responses imaginable based on absolutely incorrect interpretations of the ADC data (flipping dry vs wet). After this episode the poor chili plant’s soil was absolutely saturated and is still trying to dry out, while the ongoing LLM experiment (with empty water tank) has the grow light blasting more often than a weed farm.

So far it seems that the humble state machine’s job is still safe from being taken over by ‘AI’, and not even brown thumb folk can kill plants this efficiently.

ASUS GPU Uses Gyroscope to Warn for Sagging Cards

It’s no exaggeration to say that over the years video cards (GPUs) — much like CPU coolers — have become rather chonky. Unfortunately, the PCIe slots they plug into were never designed with multi-kilogram cards in mind. All this extra weight is of course happily affected by gravity.

The dialog in Asus' GPU Tweak software that shows the degrees of sag for your GPU. (Credit: Asus)

The problem has gotten to the point that the ASUS ROG Astral RTX 5090 card has added a Bosch Sensortec BMI323 inertial measurement unit (IMU) to provide accelerometer and angular rate (gyroscope) measurements, as reported by [Uniko’s Hardware] (in Chinese, see the English [Videocardz] article).

There are so-called anti-sag brackets that provide structural support to the top of the GPU where it isn’t normally secured. But since this card weighs in at over 6 pounds (3 kilograms) for the air-cooled model, it appears the bracket wasn’t enough, and active monitoring was necessary.

The software allows you to set a sag angle at which you receive a notification, which would presumably either allow you to turn off the system and readjust the GPU, or be forewarned when it is about to rip itself loose from the PCIe slot and crash to the bottom of the case.

YKK’s Self-Propelled Zipper: Less Crazy Than It Seems

The self-propelled zip fastener uses a worm gear to propel itself along the teeth. (Credit: YKK)

At first glance the very idea of a zipper that unzips and zips up by itself seems somewhat ridiculous. After all, these contraptions are mostly used on pieces of clothing and gear where handling a zipper isn’t really sped up by having an electric motor sluggishly move through the rows of interlocking teeth. Of course, that’s not the goal of YKK, which is the world’s largest manufacturer of zip fasteners. The demonstrated prototype (original PR in Japanese) shows this quite clearly, with a big tent and equally big zipper that you’d be hard pressed to zip up by hand.

The intended use is thus more in industrial settings and similar, with one of the videos, embedded below, showing a large ‘air tent’ being zipped up automatically after demonstrating why this would be an arduous task for a human worker. While this prototype appears to be externally powered, adding a battery or the like could make it fully wireless and potentially a real timesaver when setting up large structures such as these. Assuming the battery isn’t flat, of course.

It might conceivably be possible to miniaturize this technology to the point where it’d ensure that no fly is ever left unzipped, and school kids could show off their new self-zipping jackets to their friends. This would of course have to come with serious safety considerations, as anyone who has ever had a bit of their flesh caught in a zipper can attest.

https://www.theverge.com/news/656535/ykk-self-propelled-zipper-prototype

https://www.ykk.com/newsroom/g_news/2025/20250424.html

Abusing DuckDB-WASM To Create Doom In SQL

These days you can run Doom just about anywhere on just about anything, with things like porting Doom to JavaScript about as interesting as writing Snake in BASIC on one’s graphing calculator. In a twist, [Patrick Trainer] had the idea to use SQL instead of JS to do the heavy lifting of the Doom game loop. Backed by the WebAssembly version of the analytical DuckDB database software, a Doom-lite clone was coded that demonstrates the principle that anything in life can be captured in a spreadsheet or database application.

Rather than having the game world state implemented in JavaScript objects, or pixels drawn to a Canvas/WebGL surface, this implementation models the entire world state in the database. To render the player’s view, the SQL VIEW feature is used to perform raytracing (in SQL, of course). Any events are defined as SQL statements, including movement. Bullets hitting a wall or impacting an enemy result in the bullet and possibly the enemy getting DELETE-ed.

The role of JavaScript in this Doom clone is reduced to gluing the chunks of SQL together and handling sprite Z-buffer checks as well as keyboard input. The result is a glorious ASCII-based game of Doom which you can experience yourself with the DuckDB-Doom project on GitHub. While not very practical, it was absolutely educational, showing that not only is it fun to make domain-specific languages do things they were never designed for, but you also get to learn a lot about them along the way.

Thanks to [Particlem] for the tip.

How Supercritical CO2 Working Fluid Can Increase Power Plant Efficiency

Multi-stage steam turbine with turbo generator (rear, in red) at the German lignite plant Boxberg (Credit: Siemens AG)

Using steam to produce electricity or perform work via steam turbines has been a thing for a very long time. Today it is still exceedingly common to use steam in this manner, with said steam generated either by burning something (e.g. coal, wood), by using spicy rocks (nuclear fission) or from stored thermal energy (e.g. molten salt). That said, we no longer use steam in the same way as in the 19th century, with e.g. supercritical and pressurized loops allowing for far higher efficiencies. As covered in a recent video by [Ryan Inis], a more recent alternative to water is supercritical carbon dioxide (CO2), which could boost the thermal efficiency even further.

In the video [Ryan Inis] goes over the basics of what the supercritical fluid state of CO2 is, which occurs once the critical point is reached at 31°C and 73.8 bar (7.38 MPa). When used as a working fluid in a thermal power plant, this offers a number of potential advantages, such as the higher density requiring smaller turbine blades, and the potential for higher heat extraction. This is also seen with e.g. the shift from boiling to pressurized water loops in BWR & PWR nuclear plants, and in gas- and salt-cooled reactors that can reach far higher efficiencies, as in e.g. the HTR-PM and MSRs.

In a 2019 article in Power, the author goes over some of the details, including the different power cycles using this supercritical fluid, such as various Brayton cycles (some with extra energy recovery) and the Allam cycle. Of course, there is no such thing as a free lunch: corrosion issues are still being worked out, and despite the claims made in the video, erosion is also an issue with supercritical CO2 as a working fluid. That said, it’s in many ways less of an engineering challenge than supercritical steam generators, due to the far more extreme critical point parameters of water.

If these issues can be overcome, it could provide some interesting efficiency boosts for thermal plants. The caveats are that nobody is likely to retrofit existing plants, supercritical steam (coal) plants already exist, and new nuclear plant designs are increasingly moving towards gas, salt and even liquid metal coolants. That said, secondary coolant loops (following the typical steam generator) could conceivably use CO2 instead of water where appropriate.

Why Physical Media Deserved To Die

Over the course of more than a decade, physical media has gradually vanished from public view. Once nearly every computer short of an ultrabook had an optical drive, but these days computer cases that even support an internal optical drive are rare. Rather than manuals and drivers included on a data CD, you now get a QR code for an online download. In the home, DVD and Blu-ray (BD) players have given way to smart TVs with integrated content streaming apps for various services. Music and kin are enjoyed via smart speakers and smart phones that stream audio content from online services. Even books are now commonly read on screens rather than printed on paper.

With these changes, stores selling physical media have mostly shuttered, with much audiovisual and software content no longer pressed on discs or printed. This situation might lead one to believe that the end of physical media is nigh, but the contradiction here comes in the form of a strong revival of primarily what used to be considered firmly obsolete physical media formats. While CD, DVD and BD sales are plummeting off a cliff, vinyl records, cassette tapes and even media like 8-track tapes are undergoing a resurgence, in a process that feels hard to explain.

How big is this revival, truly? Are people tired of digital restrictions management (DRM), high service fees and/or content in their playlists getting vanished or altered? Perhaps it is out of a sense of (faux) nostalgia?

A Deserved End

Ask anyone who has ever had to use physical media and they’ll be able to provide a list of issues with the various formats. Vinyl was always cumbersome, with clicking and popping from dust in the grooves, and gradual degradation of the record limiting its lifespan to hundreds of plays. Audio cassettes were similar, with especially Type I cassettes having a lot of background hiss that even the best Dolby noise reduction (NR) systems like Dolby B, C and S only managed to tame to a certain extent.

Add to this issues like wow and flutter, and the joy of having a sticky capstan roller resulting in tape spaghetti when you open the tape deck, ruining that precious tape that you had only recently bought. These issues made CDs an obvious improvement over both audio formats, as they were fully digital and didn’t wear out from merely playing them hundreds of times.

Although audio CDs are better in many ways, unlike tape they do not lend themselves very well to portability, with anti-shock read buffers being an absolute necessity to make portable CD players at all feasible. The same issue made data CDs equally fraught, especially if you went into the business of writing your own (data or audio) CDs on CD-Rs. Burning coasters was exceedingly common for years. Yet the alternative was floppies – with LS-120 and Zip disks never really gaining much market share – or early Flash memory, whether USB sticks (MB-sized) or those inside MP3 players and early digital cameras. There were no good options, but we muddled on.

On the video side VHS had truly brought the theater into the home, even if it was at fuzzy NTSC or PAL quality with astounding color bleed and other artefacts. Much like audio cassette tapes, here too the tape would gradually wear out, with the analog video signal ensuring that making copies would result in an inferior copy.

Rewinding VHS tapes was the eternal curse, especially when popping in that tape from the rental store and finding that the previous person had been neither kind nor inclined to rewind. Even if being able to record TV shows to watch later was an absolute game changer, you’d better hope that you had managed to appease the VHS gods and that the recording started at the right time.

It could be argued that DVDs were mostly perfect, aside from a lack of recording functionality by default and pressed DVDs featuring unskippable trailers and similar nonsense. One can also easily argue that DVDs’ success was mostly due to their DRM getting cracked early on, when the CSS master key leaked. DVDs did also introduce region codes, which made the format less universal than VHS and made things like snapping up a movie during an overseas vacation effectively impossible.

This was a practice that BDs doubled down on, and with the encryption still intact to this day, it means that unlike with DVDs you must keep paying to be allowed to watch the BDs you previously bought, whether this cost is included in the dedicated BD player or in the license cost for BD video player software on the PC.

Thus, when streaming services gave access to a very large library for a (small) monthly fee, and cloud storage providers popped up everywhere, it seemed like a no-brainer. It was like paying to have the world’s largest rental store next door to your house, or a data storage center for all your data. All you had to do was create an account, whip out the credit card and no more worries.

Combined with increasingly faster and ubiquitous internet connections, the age of physical media seemed to have come to its natural end.

The Revival

US vinyl record sales 1995-2020. (Credit: Ippantekina with RIAA data)

Despite this perfect landscape where all content is available all the time via online services through your smart speakers, smart TVs, smart phones and so on, vinyl record sales have surged these past years, despite the format’s reported death in the early 2000s. In 2024 the vinyl record market grew by another few percent, with more and more new record pressing plants coming online. In addition to vinyl sales, UK cassette sales have also climbed, hitting 136,000 in 2023. CD sales meanwhile have kept plummeting, but not as steeply any more.

Perhaps the most interesting part is that most newly released vinyl records are new albums, by artists like Taylor Swift, yet even classics like Pink Floyd and Fleetwood Mac keep selling. As for the ‘why’, some suggest that it’s the social and physical experience of physical media and the associated interactions that is the driving factor. In this sense it’s more of a (cultural) statement, a rejection of the world of digital streaming. The sleeve of a vinyl record also provides a lot of space for art and other creative expressions, all of which adds collectible value.

Although so far CD sales haven’t really seen a revival, the much lower cost of producing these shiny discs could reinvigorate this market too for many of the same reasons. Who doesn’t remember hanging out with a buddy and reading the booklet of a CD album which they just put into the player after fetching it from their shelves? Maybe checking the lyrics, finding some fun Easter eggs or interesting factoids that the artists put in it, and having a good laugh about it with your buddy.

As some responded when asked, they like the more intimate experience of vinyl records along with having a physical item to own, while streaming music is fine for background music. The added value of physical media here is thus less about sound quality, and more about a (social) experience and collectibles.

On the video side of the fence there is no such cheerful news, however. In 2024 sales of DVDs, BDs and UHD (4K) BDs dropped by 23.4% year-over-year to below $1B in the US. This compares with a $16B market value in 2005, underlining a collapsing market amidst brick & mortar stores either entirely removing their DVD & BD section, or massively downsizing it. Recently Sony also announced the cessation of its recordable BD, MD and MiniDV media, as a further indication of where the market is heading.

Despite streaming services repeatedly bifurcating themselves and their libraries, raising prices and constantly pulling series and movies, this does not seem to hurt their revenue much, if at all. This is true both for audiovisual services like Netflix and for audio streaming services like Spotify, which are seeing increasing demand (per Billboard), even as digital track sales see a pretty big drop year-over-year (-17.9% for Week 16 of 2025).

Perhaps this latter statistic indicates that the idea of ‘buying’ a music album or film which – courtesy of DRM – you are technically only leasing, is falling out of favor. This is also illustrated by the end of Apple’s iPod personal music player in favor of its smart phones, which are better suited for streaming music on the go. Meanwhile many series and some movies are only released on certain streaming platforms with no physical media release, which incentivizes people to keep those subscriptions.

To continue the big next-door-rental-store analogy, in 2025 said single rental store has now turned into fifty stores, each carrying a different inventory that gets either shuffled between stores or tossed into a shredder from time to time. Yet one of them will have That New Series™, which makes it a great choice, unless you like rarer and older titles, in which case you get to hunt the dusty shelves over at eBay and kin.

It’s A Personal Thing

Humans aren’t automatons that have to adhere to rigid programming. They each have their own preferences, ideologies and wishes. While for some people the DRM that has crept into the audiovisual world since DVDs, Sony’s MiniDisc (with its initial ATRAC requirement), rootkits on audio CDs, and digital music sales continues to be a deal-breaker, others feel no need to own all the music and videos they like and to put them on their NAS for local streaming. For some the lower audio quality of Spotify and kin is no concern, much like for those who listened to 64 kbps WMA files in the early 2000s, while for others only FLACs ripped from a CD can begin to appease their tastes.

Reading through the many reports about ‘the physical media revival’, what jumps out is that on one hand it is about the exclusivity of releasing something on e.g. vinyl, which is also why sites like Bandcamp offer the purchase of a physical album, and mainstream artists more and more often opt for this. This ties into the other noticeable reason, which is the experience around physical media. Not just that of handling the physical album and operating the playback device, but also that of the offline experience, being able to share the experience with others without any screens or other distractions around. Call it touching grass in a socializing sense.

As I mentioned already in an earlier article on physical media and its purported revival, there is no reason why people cannot enjoy both physical media as well as online streaming. If one considers the rental store analogy, the former (physical media) is much the same as it always was, while online streaming merely replaces the brick & mortar rental store. Except that these new rental stores do not take requests for tapes or DVDs not in inventory and will instead tell you to subscribe to another store or use a VPN, but that’s another can of worms.

So far optical media seems to still be in freefall, and it’s not certain whether it will recover, or even whether there might be incentives in board rooms not to let DVDs and BDs simply die. Here the thought of having countless series and movies forever behind paywalls, with occasional ‘vanishings’, might be reason enough for more people to seek out a physical version they can own, or it may be that the feared erasure of so much media in this digital, DRM age is inevitable.

Running Up That Hill

Original Sony Walkman TPS-L2 from 1979.

The ironic thing about this revival is that it seems influenced very much by streaming services, such as with the appearance of a portable cassette player in Netflix’s Stranger Things, not to mention Star-Lord’s original Sony Walkman TPS-L2 in Marvel’s Guardians of the Galaxy.

After many saw Sony’s original Walkman in the latter movie, there was a sudden surge in eBay searches for this particular Walkman, as well as replicas being produced by the bucketload, including 3D-printed variants. This would seem to support the theory that the revival of vinyl and cassette tapes is more about the experiences surrounding these formats than anything inherent to the formats themselves, never mind the audio quality.

As we’re now well into 2025, we can quite confidently state that vinyl and cassette tape sales will keep growing this year. Whether new (and better) cassette mechanisms (with Dolby NR) will begin to be produced again along with Type II tapes remains to be seen, but there seems to be an inkling of hope there: it has also been reported that Dolby is licensing NR for new cassette mechanisms, so who knows.

Meanwhile CD sales may stabilize and perhaps even increase again, amid a still very uncertain future for optical media in general. Recordable optical media will likely continue its slow death, as in the PC space Flash storage has eaten its lunch and demanded seconds. Not only do PCs tend to no longer have 5.25″ bays for optical drives, but even a simple Flash thumb drive tends to be faster and more durable than a BD. Here the appeal of ‘cloud storage’ has also been reduced after multiple incidents of data loss and leaks, in favor of backing up to a local (SSD) drive.

Finally, as old-school physical audio formats experience a revival, there just remains the one question about whether movies and series will soon only be accessible via streaming services, alongside a veritable black market of illicit copies, or whether BD versions of movies and series will remain available for sale. With the way things are going, we may see future releases on VHS, to match the vibe of vinyl and cassette tapes.

In lieu of clear indications from the industry on what direction things will be heading into, any guess is probably valid at this point. The only thing that seems abundantly clear at this point is that physical media had to die first for us to learn to truly appreciate it.

PoX: Super-Fast Graphene-Based Flash Memory

Recently a team at Fudan University claimed to have developed a picosecond-level Flash memory device (called ‘PoX’) that has an access time of a mere 400 picoseconds. This is significantly faster than the millisecond level access times of NAND Flash memory, and more in the ballpark of DRAM, while still being non-volatile. Details on the device technology were published in Nature.

In the paper by [Yutong Xing] et al. they describe the memory device as using a two-dimensional Dirac graphene-channel Flash memory structure, with hot carrier injection of both electrons and holes, meaning that it is capable of both writing and erasing. Dirac graphene refers to the unusual electron transport properties of typical monolayer graphene sheets.

Demonstrated were a write speed of 400 picoseconds, non-volatile storage, and an endurance of 5.5 × 10⁶ cycles with a programming voltage of 5 V. It is the unique properties of a Dirac material like graphene that allow these writes to occur significantly faster than in a typical silicon transistor device.

What is still unknown is how well this technology scales, its power usage, durability and manufacturability.

Remembering UCSD p-System, the Pascal Virtual Machine

Long before the Java Virtual Machine (JVM) was said to take the world by storm, the p-System (pseudo-system, or virtual machine) developed at the University of California, San Diego (UCSD) provided a cross-platform environment for UCSD’s Pascal dialect. Later on, additional languages would also be made available for the UCSD p-System, such as Fortran (by Apple Computer) and Ada (by TeleSoft), not unlike the various languages targeting the JVM today in addition to Java. The p-System could be run on top of an existing OS or as its own OS directly on the hardware, which was extremely attractive in the fragmented home computer market of the 1980s.

After the final release of version IV of UCSD p-System (IV.2.2 R1.1) in 1987, the software died a slow death, but this doesn’t mean it is forgotten. People like [Hans Otten] have documented the history and technical details of the UCSD p-System, and the UCSD Pascal dialect went on to inspire Borland Pascal.

Recently [Mark Bessey] also reminisced about using the p-System in high school computer programming classes back in 1986. This inspired him to look at re-experiencing Apple Pascal as well as UCSD Pascal on the UCSD p-System, possibly by writing a p-System machine of his own. Even if it’s just for nostalgia’s sake, it’s pretty cool to tinker with what is effectively the Java Virtual Machine or Common Language Runtime of the 1970s, decades before either of those was a twinkle in a software developer’s eye.

Another common virtual runtime of the era was CHIP-8. It is also gone, but not quite forgotten.

Preventing Galvanic Corrosion in Water Cooling Loops

Water is an excellent coolant, but the flip side is that it is also an excellent solvent. This, in short, is why any water cooling loop is also a prime candidate for an interesting introduction to the galvanic series of metals, resulting in severe corrosion that commences immediately. In a recent video by [der8auer], this issue is demonstrated using a GPU cold plate. The part is made out of nickel-plated copper and features many small channels to increase the surface area in contact with the coolant.

The surface analysis of the sample cold plate after a brief exposure to distilled water shows the deposited copper atoms. (Credit: der8auer, YouTube)

Theoretically, if one were to use distilled water in a coolant loop that contains a single type of metal (like copper), there would be no issue. As [der8auer] points out, fittings, radiators, and the cooling block are nearly always made of various metals and alloys like brass, for example. This thus creates the setup for galvanic corrosion, whereby one metal acts as the anode and the other as a cathode. While this is desirable in batteries, for a cooling loop, this means that the water strips metal ions off the anode and deposits them on the cathode metal.

The nickel-plated cold plate should be immune to this if the plating were perfect. However, as demonstrated in the video, even a brief exposure to distilled water at 60°C induced strong galvanic corrosion. Analysis in an SEM showed that the imperfect nickel plating allowed copper ions to be dissolved into the water before being deposited on top of the nickel (cathode). In a comparison with another sample that had a coolant with corrosion inhibitor (DP Ultra) used, no such corrosion was observed, even after much longer exposure.

This DP Ultra coolant is mostly distilled water but has glycol added. The glycol improves the pH and coats surfaces to prevent galvanic corrosion. The other key ingredient is benzotriazole, which provides similar benefits. Of course, each corrosion inhibitor targets a specific environment, and there is also the issue of organic films forming, which may require biocides to be added. As usual, water cooling has more subtlety than you’d expect.

China’s TMSR-LF1 Molten Salt Thorium Reactor Begins Live Refueling Operations

The TMSR-LF1 building seen from the sky. (Credit: SINAP)

Although uranium-235 is the typical fuel for commercial fission reactors on account of it being fissile, it’s relatively rare compared to the fertile U-238 and thorium (Th-232). Using either of these fertile isotopes to breed new fuel is thus an attractive proposition. Despite this, only India and China have a strong focus on using Th-232 in reactors, the former using breeders (Th-232 to U-233) to create fissile uranium fuel. China has demonstrated its approach — including refueling a live reactor — using a fourth-generation molten salt reactor.

The original research comes from US scientists in the 1960s. While there were tests in the MSRE reactor, no follow-up studies were funded. The concept languished until recently, with Terrestrial Energy’s Integral MSR and construction on China’s 2 MW TMSR-LF1 experimental reactor commencing in 2018 before first criticality in 2023. One major advantage of an MSR with liquid fuel (the -LF part in the name) is that it can filter out contaminants and add fresh fuel while the reactor is running. With this successful demonstration, along with the breeding of uranium fuel from thorium last year, a larger, 10 MW design can now be tested.

Since the TMSR design doesn’t need cooling water, it is perfect for use in arid areas. In addition, China is working on using a TMSR-derived design in nuclear-powered container vessels. With enough thorium around for tens of thousands of years, these low-maintenance MSR designs could soon power much of modern society, along with high-temperature pebble bed reactors, another concept that China has recently managed to make work with the HTR-PM design.

Meanwhile, reactors are getting smaller in general.

Restoring an Abandoned Game Boy Kiosk

Back in the olden days, there existed physical game stores, which in addition to physical games would also have kiosks where you could try out the current game consoles and handhelds. Generally these kiosks held the console, a display and any controllers needed. After a while these kiosks would get scrapped, with only a very few ending up being rescued and restored. One of the lucky ones is a Game Boy kiosk, which [The Retro Future] managed to snag after it was found at a construction site. Sadly the thing was in very rough condition, with the particle board especially being mostly destroyed.

Display model Game Boy, safely secured into the demo kiosk. (Credit: The Retro Future, YouTube)

These Game Boy kiosks also featured a special display-model Game Boy which, despite being super rare, was also hunted down. This led to the restoration, which included recovering as much of the original particle board as possible, with a professional furniture restorer ([Don]) lending his expertise. This provides a master class in how to patch up damaged particle board, as maligned as this wood-dust-and-glue material is.

The boards were then reassembled more securely than with the wood screws used by the person who had found the destroyed kiosk, in a way that allows for easy disassembly if needed. Fortunately most of the plastic pieces were still intact, and the Game Boy grey paint was easily matched. Next came reproducing a missing piece of artwork, with existing versions fortunately available as reference. For a few missing metal bits that held the special Game Boy in place, another kiosk was used to provide measurements.

After all this, the kiosk was powered back on, and it was like 1990 was back once again, just in time for playing Tetris on a dim, green-and-black screen while hunched half into the kiosk at the game store.

Haircuts in Space: How to Keep Your Astronauts Looking Fresh

NASA astronaut Catherine Coleman gives ESA astronaut Paolo Nespoli a haircut in the Kibo laboratory on the ISS in 2011. (Credit: NASA)

Although we tend to see mostly the glorious and fun parts of hanging out in a space station, the human body will not cease to do its usual things, whether it involves the digestive system, or even something as mundane as the hair that sprouts from our heads. After all, we do not want our astronauts to return to Earth after a half-year stay on the ISS looking as if they got marooned on an uninhabited island. Enter the onboard barbershop on the ISS, and the engineering behind making sure that after a decade the ISS doesn’t positively look like it got the 1970s shaggy wall carpet treatment.

The basic solution is rather straightforward: an electric hair clipper attached to a vacuum that will whisk the clippings safely into a container rather than being allowed to drift around. In a way this is similar to the vacuums you find on routers and saws in a woodworking shop, just with more keratin rather than cellulose and lignin.

On the Chinese Tiangong space station they use a similar approach, with the video showing how simple the system is: little more than a small handheld vacuum cleaner attached to the clippers. Naturally, you cannot just tape the vacuum cleaner to some clippers and expect it to catch most of the clippings, which is why both the ISS and Tiangong solutions seem to have a carefully designed construction to maximize hair removal. You can see the ISS system in action in this 2019 video from the Canadian Space Agency.

Of course, this system is not perfect, but amidst the kilograms of shed skin particles from the crew, a few small hair clippings can likely be handled by the ISS’ air treatment systems just fine. The goal after all is to not have a massive expanding cloud of hair clippings filling up the space station.

Rise of the Robots: How Robots Are Changing Dairy Farms

Running a dairy farm used to be a rather hands-on experience, with the farmer required to be around every few hours to milk the cows, feed them, do all the veterinarian tasks that a farmer can do themselves, and so on. The introduction of milking machines in the early 20th century however began a trend of increasing automation, whereby by the end of the century a single farmer could handle a hundred cows instead of only a couple. A recent article in IEEE Spectrum covers the continued progress here, including cows milking themselves on demand, as shown in the top image.

The article focuses primarily on Dutch company Lely’s recent robots, which range from said self-milking robots to a manure-cleaning robot that looks like an oversized Roomba. With how labor-intensive (and low-margin) a dairy farm is, any level of automation that can improve matters will be welcomed, and so far Lely’s robots have received a mostly positive response. Since cows are pretty smart, they will happily guide themselves to a self-milking robot when they feel that their udders are full enough, which can save the farmer a few hours of work each day, as this robot handles every task, including cleaning the udders prior to milking and sanitizing itself before inviting the next cow into its loving embrace.

As for the other tasks, speaking as a genuine Dutch dairy farm girl who was born & raised around cattle (and sheep), the idea of e.g. mucking out stables being taken over by robots is something that raises a lot more skepticism. After all, a farmer’s children have to earn their pocket money somehow, which includes mucking, herding, farm maintenance and so on. Unless those robots get really cheap and low maintenance, the idea of fully automated dairy farms may still be a long while off, but reducing the workload and making cows happier are definitely lofty goals.

Top image: The milking robot that can automatically milk a cow without human assistance. (Credit: Lely)

GK STM32 MCU-Based Handheld Game System

These days even a lowly microcontroller can easily trade blows with – or surpass – desktop systems of yesteryear, so it is little wonder that DIY handheld gaming systems based around an MCU are more capable than ever. A case in point is the GK handheld gaming system by [John Cronin], which uses an MCU from the relatively new and very capable STM32H7S7 series, specifically the 225-pin STM32H7S7L8 in a TFBGA package with a single Cortex-M7 core clocked at 600 MHz and a 2D NeoChrom GPU.

Coupled with this MCU are 128 MB of XSPI (hexa-SPI) SDRAM, a 640×480 color touch screen, a gyrometer, WiFi network support and the custom gkOS firmware, which loads games off an internal SD card. A USB-C port is provided both to access said SD card’s contents and to recharge the internal Li-ion battery.

As can be seen in the demonstration video, it runs a wide variety of games, ranging from DOOM (of course) and Quake to Command & Conquer: Red Alert, plus emulators for many consoles, with the Mednafen project used to emulate the Game Boy, Super Nintendo and other systems at 20+ FPS. Although there aren’t a lot of details on how optimized the current firmware is, it seems to be pretty capable already.
