Using steam to produce electricity or perform work via steam turbines has been a thing for a very long time. Today it is still exceedingly common to use steam in this manner, with said steam generated either by burning something (e.g. coal, wood), by using spicy rocks (nuclear fission) or from stored thermal energy (e.g. molten salt). That said, we no longer use steam the same way as in the 19th century, with e.g. supercritical and pressurized loops allowing for far higher efficiencies. As covered in a recent video by [Ryan Inis], a more recent alternative to using water is supercritical carbon dioxide (CO2), which could boost the thermal efficiency even further.
In the video [Ryan Inis] goes over the basics of what the supercritical fluid state of CO2 is, which occurs once the critical point is reached at 31°C and 73.8 bar (7.38 MPa). When used as a working fluid in a thermal power plant, this offers a number of potential advantages, such as the higher density requiring smaller turbine blades, and the potential for higher heat extraction. This is also seen with e.g. the shift from boiling to pressurized water loops in BWR & PWR nuclear plants, and in gas- and salt-cooled reactors that can reach far higher efficiencies, as in e.g. the HTR-PM and MSRs.
In a 2019 article in Power the author goes over some of the details, including the different power cycles using this supercritical fluid, such as various Brayton cycles (some with extra energy recovery) and the Allam cycle. Of course, there is no such thing as a free lunch, with corrosion issues still being worked out, and despite the claims made in the video, erosion is also an issue with supercritical CO2 as a working fluid. That said, it’s in many ways less of an engineering issue than with supercritical steam generators, due to the far more extreme critical point of water (around 374°C and 22.1 MPa).
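To get a feel for why higher turbine inlet temperatures matter, a quick back-of-the-envelope comparison of ideal (Carnot) efficiencies helps. The temperatures below are illustrative assumptions, not figures from the video, and real Rankine or Brayton cycles only achieve a fraction of the Carnot limit:

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Ideal efficiency between two heat reservoirs, temperatures in Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Illustrative turbine inlet / heat rejection temperatures (assumed):
steam = carnot_efficiency(565, 32)  # supercritical steam plant, ~565 °C
sco2 = carnot_efficiency(700, 32)   # hotter sCO2 Brayton cycle, ~700 °C
print(f"steam: {steam:.1%}, sCO2: {sco2:.1%}")
```

Even in this idealized sketch, pushing the hot side up by a bit over a hundred degrees buys several percentage points of efficiency, which at grid scale is an enormous amount of energy.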
If these issues can be overcome, supercritical CO2 could provide some interesting efficiency boosts for thermal plants. The caveat is that likely nobody is going to retrofit existing plants: supercritical steam (coal) plants already exist, and new nuclear plant designs are increasingly moving towards gas, salt and even liquid metal coolants. That said, secondary coolant loops (following the typical steam generator) could conceivably use CO2 instead of water where appropriate.
Over the course of more than a decade, physical media has gradually vanished from public view. Once just about every computer short of an ultrabook had an optical drive, but these days computer cases that even support an internal optical drive are rare. Rather than manuals and drivers included on a data CD, you now get a QR code for an online download. In the home, DVD and Blu-ray (BD) players have given way to smart TVs with integrated content streaming apps for various services. Music and kin are enjoyed via smart speakers and smart phones that stream audio content from online services. Even books are now commonly read on screens rather than printed on paper.
With these changes, stores selling physical media have mostly shuttered, with much audiovisual and software content no longer pressed on discs or printed. This situation might lead one to believe that the end of physical media is nigh, but the contradiction here comes in the form of a strong revival of primarily what used to be considered firmly obsolete physical media formats. While CD, DVD and BD sales are plummeting off a cliff, vinyl records, cassette tapes and even media like 8-track tapes are undergoing a resurgence, in a process that feels hard to explain.
How big is this revival, truly? Are people tired of digital restrictions management (DRM), high service fees and/or content in their playlists getting vanished or altered? Perhaps it is out of a sense of (faux) nostalgia?
A Deserved End
Ask anyone who has ever had to use any type of physical media and they’ll be able to provide a list of its issues. Vinyl always was cumbersome, with clicking and popping from dust in the grooves, and gradual degradation of the record limiting its lifespan to hundreds of plays. Audio cassettes were similar, with especially Type I cassettes having a lot of background hiss that the best Dolby noise reduction (NR) systems like Dolby B, C and S only managed to tame to a certain extent.
Add to this issues like wow and flutter, and the joy of having a sticky capstan roller resulting in tape spaghetti when you open the tape deck, ruining that precious tape that you had only recently bought. These issues made CDs an obvious improvement over both audio formats, as they were fully digital and didn’t wear out from merely playing them hundreds of times.
Although audio CDs are better in many ways, unlike tape they do not lend themselves well to portability, with anti-shock read buffers being an absolute necessity to make portable CD players at all feasible. This same issue made data CDs equally fraught, especially if you went into the business of writing your own (data or audio) CDs on CD-Rs. Burning coasters was exceedingly common for years. Yet the alternative was floppies – with LS-120 and Zip disks never really gaining much market share – or early Flash memory, whether USB sticks (MB-sized) or those inside MP3 players and early digital cameras. There were no good options, but we muddled on.
On the video side VHS had truly brought the theater into the home, even if it was at fuzzy NTSC or PAL quality with astounding color bleed and other artefacts. Much like audio cassette tapes, here too the tape would gradually wear out, with the analog video signal ensuring that making copies would result in an inferior copy.
Rewinding VHS tapes was the eternal curse, especially when popping in that tape from the rental store and finding that the previous person had neither been kind, nor rewound. Even if being able to record TV shows to watch later was an absolute game changer, you better hope that you managed to appease the VHS gods and had it start at the right time.
It could be argued that DVDs were mostly perfect, aside from a lack of recording functionality by default and pressed DVDs featuring unskippable trailers and similar nonsense. One can also easily argue that DVDs’ success was mostly due to the format’s DRM getting cracked early on when the CSS master key leaked. DVDs also introduced region codes that made the format less universal than VHS and made things like snapping up a movie during an overseas vacation effectively impossible.
This was a practice that BDs doubled down on, and with the encryption still intact to this day, it means that unlike with DVDs you must keep paying to be allowed to watch the BDs you previously bought, whether this cost is included in the dedicated BD player or in the license for BD video player software on the PC.
Thus, when streaming services gave access to a very large library for a (small) monthly fee, and cloud storage providers popped up everywhere, it seemed like a no-brainer. It was like paying to have the world’s largest rental store next door to your house, or a data storage center for all your data. All you had to do was create an account, whip out the credit card and no more worries.
Combined with increasingly faster and ubiquitous internet connections, the age of physical media seemed to have come to its natural end.
The Revival
US vinyl record sales 1995-2020. (Credit: Ippantekina with RIAA data)
Despite this perfect landscape where all content is available all the time via online services through your smart speakers, smart TVs, smart phones and so on, vinyl record sales have surged over the past years despite the format’s reported death in the early 2000s. In 2024 the vinyl records market grew another few percent, with more and more new record pressing plants coming online. In addition to vinyl sales, UK cassette sales also climbed, hitting 136,000 in 2023. CD sales meanwhile have kept plummeting, but not as strongly any more.
Perhaps the most interesting part is that most newly released vinyl consists of new albums, by artists like Taylor Swift, yet even classics like Pink Floyd and Fleetwood Mac keep selling. As for the ‘why’, some suggest that it’s the social and physical experience of physical media and the associated interactions that is a driving factor. In this sense it’s more of a (cultural) statement, a rejection of the world of digital streaming. The sleeve of a vinyl record also provides a lot of space for art and other creative expression, all of which adds collectible value.
Although so far CD sales haven’t really seen a revival, the much lower cost of producing these shiny discs could reinvigorate this market too for many of the same reasons. Who doesn’t remember hanging out with a buddy and reading the booklet of a CD album which they just put into the player after fetching it from their shelves? Maybe checking the lyrics, finding some fun Easter eggs or interesting factoids that the artists put in it, and having a good laugh about it with your buddy.
As some responded when asked, they like the more intimate experience of vinyl records along with having a physical item to own, while streaming music is fine for background music. The added value of physical media here is thus less about sound quality, and more about a (social) experience and collectibles.
On the video side of the fence there is no such cheerful news, however. In 2024 sales of DVDs, BDs and UHD (4K) BDs dropped by 23.4% year-over-year to below $1B in the US. This compares with a $16B market value in 2005, underlining a collapsing market amidst brick & mortar stores either entirely removing their DVD & BD section, or massively downsizing it. Recently Sony also announced the cessation of its recordable BD, MD and MiniDV media, as a further indication of where the market is heading.
Despite streaming services repeatedly bifurcating themselves and their libraries, raising prices and constantly pulling series and movies, this does not seem to hurt their revenue much, if at all. This is true both for audiovisual services like Netflix and for audio streaming services like Spotify, which are seeing increasing demand (per Billboard), even as digital track sales see a pretty big drop year-over-year (-17.9% for Week 16 of 2025).
Perhaps this latter statistic is indicative that the idea of ‘buying’ a music album or film which – courtesy of DRM – is something that you’re technically only leasing, is falling out of favor. This is also illustrated by the end of Apple’s iPod personal music player in favor of its smart phones that are better suited for streaming music on the go. Meanwhile many series and some movies are only released on certain streaming platforms with no physical media release, which incentivizes people to keep those subscriptions.
To continue the big next-door-rental-store analogy, in 2025 said single rental store has now turned into fifty stores, each carrying a different inventory that gets either shuffled between stores or tossed into a shredder from time to time. Yet one of them will have That New Series, which makes them a great choice, unless you like rarer and older titles, in which case you get to hunt the dusty shelves over at eBay and kin.
It’s A Personal Thing
Humans aren’t automatons that have to adhere to rigid programming. They each have their own preferences, ideologies and wishes. While for some people the DRM that has crept into the audiovisual world since DVDs, Sony’s MiniDisc (with its initial ATRAC requirement), rootkits on audio CDs, and digital music sales continues to be a deal-breaker, others feel no need to own all the music and videos they like and put them on their NAS for local streaming. For some the lower audio quality of Spotify and kin is no concern, much like for those who listened to 64 kbit WMA files in the early 2000s, while for others only FLACs ripped from a CD can begin to appease their tastes.
Reading through the many reports about ‘the physical media revival’, what jumps out is that on one hand it is about the exclusivity of releasing something on e.g. vinyl, which is also why sites like Bandcamp offer the purchase of a physical album, and mainstream artists more and more often opt for this. This ties into the other noticeable reason, which is the experience around physical media. Not just that of handling the physical album and operating the playback device, but also that of the offline experience, being able to share the experience with others without any screens or other distractions around. Call it touching grass in a socializing sense.
As I mentioned already in an earlier article on physical media and its purported revival, there is no reason why people cannot enjoy both physical media as well as online streaming. If one considers the rental store analogy, the former (physical media) is much the same as it always was, while online streaming merely replaces the brick & mortar rental store. Except that these new rental stores do not take requests for tapes or DVDs not in inventory and will instead tell you to subscribe to another store or use a VPN, but that’s another can of worms.
So far optical media seems to be still in freefall, and it’s not certain whether it will recover, or even whether there might be incentives in board rooms to not have DVDs and BDs simply die. Here the thought of having countless series and movies forever behind paywalls, with occasional ‘vanishings’ might be reason enough for more people to seek out a physical version they can own, or it may be that the feared erasure of so much media in this digital, DRM age is inevitable.
Running Up That Hill
Original Sony Walkman TPS-L2 from 1979.
The ironic thing about this revival is that it seems influenced very much by streaming services, such as with the appearance of a portable cassette player in Netflix’s Stranger Things, not to mention Rocket Raccoon’s original Sony Walkman TPS-L2 in Marvel’s Guardians of the Galaxy.
After many saw Sony’s original Walkman in the latter movie, there was a sudden surge in eBay searches for this particular Walkman, as well as replicas being produced by the bucket load, including 3D-printed variants. This would seem to support the theory that the revival of vinyl and cassette tapes is more about the experiences surrounding these formats than anything inherent to the format itself, never mind the audio quality.
As we’re now well into 2025, we can quite confidently state that vinyl and cassette tape sales will keep growing this year. Whether or not new (and better) cassette mechanisms (with Dolby NR) will begin to be produced again along with Type II tapes remains to be seen, but there seems to be an inkling of hope there. It was also reported that Dolby is licensing new cassette mechanisms for NR, so who knows.
Meanwhile CD sales may stabilize and perhaps even increase again, amid a still very uncertain future for optical media in general. Recordable optical media will likely continue its slow death, as in the PC space Flash storage has eaten its lunch and demanded seconds. Even though PCs no longer tend to have 5.25″ bays for optical drives, even a simple Flash thumb drive tends to be faster and more durable than a BD. Here the appeal of ‘cloud storage’ has been reduced after multiple incidents of data loss & leaks, in favor of backing up to a local (SSD) drive.
Finally, as old-school physical audio formats experience a revival, there just remains the one question about whether movies and series will soon only be accessible via streaming services, alongside a veritable black market of illicit copies, or whether BD versions of movies and series will remain available for sale. With the way things are going, we may see future releases on VHS, to match the vibe of vinyl and cassette tapes.
In lieu of clear indications from the industry on what direction things will be heading into, any guess is probably valid at this point. The only thing that seems abundantly clear at this point is that physical media had to die first for us to learn to truly appreciate it.
Recently a team at Fudan University claimed to have developed a picosecond-level Flash memory device (called ‘PoX’) that has an access time of a mere 400 picoseconds. This is significantly faster than the microsecond-level access times of NAND Flash memory, and more in the ballpark of DRAM, while still being non-volatile. Details on the device technology were published in Nature.
In the paper by [Yutong Xing] et al. they describe the memory device as using a two-dimensional Dirac graphene-channel Flash memory structure, with hot carrier injection for both electron and hole injection, meaning that it is capable of both writing and erasing. Dirac graphene refers to the unusual electron transport properties of typical monolayer graphene sheets.
Demonstrated was a write speed of 400 picoseconds, non-volatile storage and a 5.5 × 10⁶ cycle endurance with a programming voltage of 5 V. It is the unique properties of a Dirac material like graphene that allow these writes to occur significantly faster than in a typical silicon transistor device.
What is still unknown is how well this technology scales, its power usage, durability and manufacturability.
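To put the claimed 400 ps figure in perspective, here is a small order-of-magnitude comparison. The DRAM and NAND numbers are rough, typical values assumed for illustration, not figures from the paper:

```python
# Order-of-magnitude comparison of memory access times.
pox_s = 400e-12   # 400 ps program time claimed for PoX
dram_s = 10e-9    # ~10 ns DRAM access time (assumed typical value)
nand_s = 25e-6    # ~25 us NAND Flash page read (assumed typical value)

print(f"PoX vs DRAM: {dram_s / pox_s:.0f}x faster")
print(f"PoX vs NAND: {nand_s / pox_s:.0f}x faster")
```

That puts PoX well over an order of magnitude ahead of typical DRAM and four to five orders of magnitude ahead of NAND, at least on paper.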
Long before the Java Virtual Machine (JVM) took the world by storm, the p-System (pseudo-system, or virtual machine) developed at the University of California, San Diego (UCSD) provided a cross-platform environment for UCSD’s Pascal dialect. Later on, additional languages would also be made available for the UCSD p-System, such as Fortran (by Apple Computer) and Ada (by TeleSoft), not unlike the various languages targeting the JVM today in addition to Java. The p-System could be run on an existing OS or as its own OS directly on the hardware. This was extremely attractive in the fragmented home computer market of the 1980s.
After the final release of version IV of UCSD p-System (IV.2.2 R1.1) in 1987, the software died a slow death, but this doesn’t mean it is forgotten. People like [Hans Otten] have documented the history and technical details of the UCSD p-System, and the UCSD Pascal dialect went on to inspire Borland Pascal.
Recently [Mark Bessey] also reminisced about using the p-System in High School with computer programming classes back in 1986. This inspired him to look at re-experiencing Apple Pascal as well as UCSD Pascal on the UCSD p-System, possibly writing a p-System machine. Even if it’s just for nostalgia’s sake, it’s pretty cool to tinker with what is effectively the Java Virtual Machine or Common Language Runtime of the 1970s, decades before either of those were a twinkle in a software developer’s eyes.
Another common virtual runtime of the era was CHIP-8. It is also gone, but not quite forgotten.
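For a feel of what a p-code-style machine does, here is a toy stack-machine interpreter in Python. The opcodes are invented for illustration and bear no relation to the real UCSD p-code instruction set, but the principle the p-System and the JVM share is the same: compile once to a simple bytecode, then interpret it anywhere:

```python
# A toy stack machine in the spirit of p-code; the opcodes here are
# made up for illustration, not the actual UCSD p-code instructions.
def run(program):
    stack = []
    for op, *arg in program:
        if op == "push":
            stack.append(arg[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 7 compiled to our toy bytecode:
print(run([("push", 2), ("push", 3), ("add",), ("push", 7), ("mul",)]))
```

Any machine that can run the dozen-line interpreter can run every program ever compiled to the bytecode, which is exactly the portability trick that made the p-System attractive on 1980s hardware.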
Water is an excellent coolant, but the flip side is that it is also an excellent solvent. This, in short, is why any water cooling loop is also a prime candidate for an interesting introduction to the galvanic metal series, resulting in severe corrosion that commences immediately. In a recent video by [der8auer], this issue is demonstrated using a GPU cold plate. The part is made out of nickel-plated copper and features many small channels to increase surface area with the coolant.
The surface analysis of the sample cold plate after a brief exposure to distilled water shows the deposited copper atoms. (Credit: der8auer, YouTube)
Theoretically, if one were to use distilled water in a coolant loop that contains a single type of metal (like copper), there would be no issue. As [der8auer] points out, fittings, radiators, and the cooling block are nearly always made of various metals and alloys like brass, for example. This thus creates the setup for galvanic corrosion, whereby one metal acts as the anode and the other as a cathode. While this is desirable in batteries, for a cooling loop, this means that the water strips metal ions off the anode and deposits them on the cathode metal.
The nickel-plated cold plate should be immune to this if the plating were perfect. However, as demonstrated in the video, even a brief exposure to distilled water at 60°C induced strong galvanic corrosion. Analysis in an SEM showed that the imperfect nickel plating allowed copper ions to be dissolved into the water before being deposited on top of the nickel (cathode). In a comparison with another sample that had been run with a coolant containing a corrosion inhibitor (DP Ultra), no such corrosion was observed, even after much longer exposure.
This DP Ultra coolant is mostly distilled water but has glycol added. The glycol improves the pH and coats surfaces to prevent galvanic corrosion. The other element is benzotriazole, which provides similar benefits. Of course, each corrosion inhibitor targets a specific environment, and there is also the issue with organic films forming, which may require biocides to be added. As usual, water cooling has more subtlety than you’d expect.
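The underlying rule of thumb can be sketched with standard reduction potentials from a chemistry textbook: of any two coupled metals, the one with the lower potential becomes the sacrificial anode. This is a simplification, though; as the video shows, passivation layers and the actual electrolyte chemistry can reorder the practical galvanic series (with nickel plating ending up more noble than copper, for instance):

```python
# Standard reduction potentials in volts (textbook values). Real loops
# are messier: passivation and electrolyte chemistry can reorder the
# practical galvanic series, as der8auer's copper-vs-nickel result shows.
potentials_v = {
    "zinc": -0.76,    # present in brass fittings
    "nickel": -0.25,
    "copper": +0.34,
}

def galvanic_pair(metal_a, metal_b):
    """Return (anode, cathode, driving voltage) for a coupled metal pair."""
    anode, cathode = sorted((metal_a, metal_b), key=potentials_v.get)
    return anode, cathode, potentials_v[cathode] - potentials_v[anode]

anode, cathode, dv = galvanic_pair("copper", "zinc")
print(f"{anode} corrodes, {cathode} is protected, {dv:.2f} V drives it")
```

Brass fittings are the worst offenders in this picture: the zinc in the alloy sits far below copper and nickel, giving the couple more than a volt of driving potential.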
Although uranium-235 is the typical fuel for commercial fission reactors on account of it being fissile, it’s relatively rare compared to the fertile U-238 and thorium (Th-232). Using either of these fertile isotopes to breed new fuel is thus an attractive proposition. Despite this, only India and China have a strong focus on using Th-232 for reactors, the former using breeders (Th-232 to U-233) to create fissile uranium fuel. China has demonstrated its approach — including refueling a live reactor — using a fourth-generation molten salt reactor.
The original research comes from US scientists in the 1960s. While there were tests in the MSRE reactor, no follow-up studies were funded. The concept languished until recently, with Terrestrial Energy’s Integral MSR and construction on China’s 2 MW TMSR-LF1 experimental reactor commencing in 2018 before first criticality in 2023. One major advantage of an MSR with liquid fuel (the -LF part in the name) is that it can filter out contaminants and add fresh fuel while the reactor is running. With this successful demonstration, along with the breeding of uranium fuel from thorium last year, a larger, 10 MW design can now be tested.
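The breeding chain itself is simple enough to sketch: Th-232 captures a neutron to become Th-233, which beta-decays (half-life about 22 minutes) to Pa-233, which in turn beta-decays (half-life about 27 days) to fissile U-233. The half-lives below are standard nuclide-table values; the snippet just applies the usual exponential decay law to show why the Pa-233 step is the slow one:

```python
import math

# Breeding chain: Th-232 + n -> Th-233 -> Pa-233 -> U-233.
# Half-lives are standard nuclide-table values.
TH233_HALF_LIFE_MIN = 21.8   # minutes, beta decay to Pa-233
PA233_HALF_LIFE_D = 27.0     # days, beta decay to U-233

def fraction_decayed(t, half_life):
    """Fraction of a nuclide decayed after time t (same unit as half_life)."""
    return 1.0 - math.exp(-math.log(2) * t / half_life)

# Pa-233 is the bottleneck: after a full month, only ~54% has become U-233.
print(f"{fraction_decayed(30, PA233_HALF_LIFE_D):.0%}")
```

That weeks-long Pa-233 bottleneck is one reason online processing of liquid fuel is so attractive: the Pa-233 can sit outside the neutron flux while it decays, rather than capturing another neutron and being lost as heavier isotopes.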
Since TMSR doesn’t need cooling water, it is perfect for use in arid areas. In addition, China is working on using a TMSR-derived design in nuclear-powered container vessels. With enough thorium around for tens of thousands of years, these low-maintenance MSR designs could soon power much of modern society, along with high-temperature pebble bed reactors, which is another concept that China has recently managed to make work with the HTR-PM design.
Meanwhile, reactors are getting smaller in general.
Back in the olden days, there existed physical game stores, which in addition to physical games would also have kiosks where you could try out the current game consoles and handhelds. Generally these kiosks held the console, a display and any controllers if needed. After a while these kiosks would get scrapped, with only a very few ending up being rescued and restored. One of the lucky ones is a Game Boy kiosk, which [The Retro Future] managed to snag after it was found in a construction site. Sadly the thing was in a very rough condition, with the particle board especially being mostly destroyed.
Display model Game Boy, safely secured into the demo kiosk. (Credit: The Retro Future, YouTube)
These Game Boy kiosks also featured a special Game Boy, which – despite being super rare – also was hunted down. This led to the restoration, which included recovering as much of the original particle board as possible, with a professional furniture restorer ([Don]) lending his expertise. This provides a master class in how to patch up damaged particle board, as maligned as this wood-dust-and-glue material is.
The boards were then reassembled more securely than the wood screws used by the person who had found the destroyed kiosk, in a way that allows for easy disassembly if needed. Fortunately most of the plastic pieces were still intact, and the Game Boy grey paint was easily matched. Next was reproducing a missing piece of artwork, with existing versions fortunately available as reference. For a few missing metal bits that held the special Game Boy in place, another kiosk was used to provide measurements.
After all this, the kiosk was powered back on, and it was like 1990 was back once again, just in time for playing Tetris on a dim, green-and-black screen while hunched half into the kiosk at the game store.
NASA astronaut Catherine Coleman gives ESA astronaut Paolo Nespoli a haircut in the Kibo laboratory on the ISS in 2011. (Credit: NASA)
Although we tend to see mostly the glorious and fun parts of hanging out in a space station, the human body will not cease to do its usual things, whether it involves the digestive system, or even something as mundane as the hair that sprouts from our heads. After all, we do not want our astronauts to return to Earth after a half-year stay in the ISS looking as if they got marooned on an uninhabited island. Introducing the onboard barbershop on the ISS, and the engineering behind making sure that after a decade the ISS doesn’t positively look like it got the 1970s shaggy wall carpet treatment.
The basic solution is rather straightforward: an electric hair clipper attached to a vacuum that will whisk the clippings safely into a container rather than being allowed to drift around. In a way this is similar to the vacuums you find on routers and saws in a woodworking shop, just with more keratin rather than cellulose and lignin.
On the Chinese Tiangong space station they use a similar approach, with the video showing how simple the system is: little more than a small handheld vacuum cleaner attached to the clippers. Naturally, you cannot just tape a vacuum cleaner to some clippers and expect it to catch most of the clippings, which is why both the ISS and Tiangong solutions seem to feature a carefully designed construction to maximize hair removal. You can see the ISS system in action in this 2019 video from the Canadian Space Agency.
Of course, this system is not perfect, but amidst the kilograms of shed skin particles from the crew, a few small hair clippings can likely be handled by the ISS’ air treatment systems just fine. The goal after all is to not have a massive expanding cloud of hair clippings filling up the space station.
Running a dairy farm used to be a rather hands-on experience, with the farmer required to be around every few hours to milk the cows, feed them, do all the veterinarian tasks that the farmer can do themselves, and so on. The introduction of milking machines in the early 20th century however began a trend of increased automation whereby a single farmer could handle a hundred cows by the end of the century instead of only a couple. A recent article in IEEE Spectrum covers the continued progress here, including cows milking themselves, on-demand style, as shown in the top image.
The article focuses primarily on Dutch company Lely’s recent robots, which range from said self-milking robots to a manure cleaning robot that looks like an oversized Roomba. With how labor-intensive (and low-margin) a dairy farm is, any level of automation that can improve matters will be welcomed, with so far Lely’s robots receiving a mostly positive response. Since cows are pretty smart, they will happily guide themselves to a self-milking robot when they feel that their udders are full enough, which can save the farmer a few hours of work each day, as this robot handles every task, including the cleaning of the udders prior to milking and sanitizing itself prior to inviting the next cow into its loving embrace.
As for the other tasks, speaking as a genuine Dutch dairy farm girl who was born & raised around cattle (and sheep), the idea of e.g. mucking out stables being taken over by robots is something that raises a lot more skepticism. After all, a farmer’s children have to earn their pocket money somehow, which includes mucking, herding, farm maintenance and so on. Unless those robots get really cheap and low maintenance, the idea of fully automated dairy farms may still be a long while off, but reducing the workload and making cows happier are definitely lofty goals.
Top image: The milking robot that can automatically milk a cow without human assistance. (Credit: Lely)
These days even a lowly microcontroller can easily trade blows with – or surpass – desktop systems of yesteryear, so it is little wonder that DIY handheld gaming systems based around an MCU are more capable than ever. A case in point is the GK handheld gaming system by [John Cronin], which uses an MCU from the relatively new and very capable STM32H7S7 series, specifically the 225-pin STM32H7S7L8 in TFBGA package with a single Cortex-M7 clocked at 600 MHz and a 2D NeoChrom GPU.
Coupled with this MCU are 128 MB of XSPI (hexa-SPI) SDRAM, a 640×480 color touch screen, gyrometer, WiFi network support and the custom gkOS in the firmware for loading games off an internal SD card. A USB-C port is provided to both access said SD card’s contents and for recharging the internal Li-ion battery.
As can be seen in the demonstration video, it runs a wide variety of games, ranging from DOOM (of course) and Quake to Command and Conquer: Red Alert, plus emulators for many consoles, with the Mednafen project used to emulate Game Boy, Super Nintendo and other systems at 20+ FPS. Although there aren’t a lot of details on how optimized the current firmware is, it seems to be pretty capable already.
Whenever the topic is raised in popular media about porting a codebase written in an ‘antiquated’ programming language like Fortran or COBOL, very few people tend to object to this notion. After all, what could be better than ditching decades of crusty old code in a language that only your grandparents can remember as being relevant? Surely a clean and fresh rewrite in a modern language like Java, Rust, Python, Zig, or NodeJS will fix all ailments and make future maintenance a snap?
For anyone who has ever had to actually port large codebases or dealt with ‘legacy’ systems, their reflexive response to such announcements most likely ranges from a shaking of one’s head to mad cackling as traumatic memories come flooding back. The old idiom of “if it ain’t broke, don’t fix it”, purportedly coined in 1977 by Bert Lance, is a feeling that has been shared by countless individuals over millennia. Even worse, how can you ‘fix’ something if you do not even fully understand the problem?
In the case of languages like COBOL this is doubly true, as it is a domain specific language (DSL). This is a very different category from general purpose system programming languages like the aforementioned ‘replacements’. The suggestion of porting the DSL codebase is thus to effectively reimplement all of COBOL’s functionality, which should seem like a very poorly thought out idea to any rational mind.
Sticking To A Domain
The term ‘domain specific language’ is pretty much what it says it is, and there are many such DSLs around, ranging from PostScript and SQL to the shader language GLSL. Although it is definitely possible to push DSLs into doing things which they were never designed for, the primary point of a DSL is to explicitly limit its functionality to that one specific domain. GLSL, for example, is based on C and could be considered a very restricted version of that language, which raises the question of why one should not just write shaders in C?
Similarly, Fortran (Formula translating system) was designed as a DSL targeting scientific and high-performance computation. First used in 1957, it still ranks in the top 10 of the TIOBE index, and just about any code that has to do with high-performance computation (HPC) in science and engineering will be written in Fortran or strongly relies on libraries written in Fortran. The reason for this is simple: from the beginning Fortran was designed to make such computations as easy as possible, with subsequent updates to the language standard adding updates where needed.
Fortran’s latest standard update was published in November 2023, joining the COBOL 2023 standard – two DSLs that are both still very much alive and very current today.
The strength of a DSL is often underestimated, as the whole point of a DSL is that you can teach this simpler, focused language to someone who can then become fluent in it, without requiring them to become fluent in a generic programming language and all the libraries and other baggage that entails. For those of us who already speak C, C++, or Java, it may seem appealing to write everything in that language, but not to those who have no interest in learning a whole generic language.
There are effectively two major reasons why a DSL is the better choice for said domain:
Easy to learn and teach, because it’s a much smaller language
Far fewer edge cases and simpler tooling
In the case of COBOL and Fortran this means only a fraction of the keywords (‘verbs’ for COBOL) to learn, and a language that’s streamlined for a specific task, whether it’s to allow a physicist to do some fluid-dynamics modelling, or a staff member at a bank or the social security offices to write a data processing application that churns through database data to create a nicely formatted report. Surely one could force both of these people to learn C++, Java, Rust or Node.js instead, but this may backfire in many ways, with the resulting code quality being just one of them.
Tangentially, this is also one of the amazing things in the hardware design language (HDL) domain, where rather than using (System)Verilog or VHDL, there’s an amazing growth of alternative HDLs, many of them implemented in generic scripting and programming languages. That this prevents any kind of skill and code sharing, and repeatedly (and often poorly) reinvents the wheel, seems to be of little concern to many.
Non-Broken Code
A very nice aspect of these existing COBOL codebases is that they generally have been around for decades, during which time they have been carefully pruned, trimmed and debugged, requiring only minimal maintenance and updates while they happily keep purring along on mainframes as they process banking and government data.
One argument that has been made in favor of porting from COBOL to a generic programming language is ‘ease of maintenance’, pointing out that COBOL is supposedly very hard to read and write and thus maintaining it would be far too cumbersome.
Since it’s easy to philosophize about such matters from a position of ignorance and/or conviction, I recently decided to take up some COBOL programming from the position of both a COBOL newbie as well as an experienced C++ (and other language) developer. Cue the ‘Hello Business’ playground project.
For the tooling I used the GnuCOBOL transpiler, which converts the COBOL code to C before compiling it to a binary, but in a few weeks the GCC 15.1 release will bring a brand new COBOL frontend (gcobol) that I’m dying to try out. As language reference I used a combination of the Wikipedia entry for COBOL, the IBM ILE COBOL language reference (PDF) and the IBM COBOL Report Writer Programmer’s Manual (PDF).
My goal for this ‘Hello Business’ project was to create something that did actual practical work. I took the FileHandling.cob example from the COBOL tutorial by Armin Afazeli as a starting point, which I modified and extended to read in records from a file, employees.dat, before using the standard Report Writer feature to create a report file in which the employees with their salaries are listed, with page numbering and the total salary value summed in a report footing entry.
My impression was that although it takes a moment to learn the various divisions that the variables, files, I/O, and procedures are put into, it’s all extremely orderly and predictable. The compiler will also helpfully tell you if you did anything out of order or forgot something. While using data level numbers to indicate data associations is somewhat quaint, after a while I didn’t mind at all, especially since this provides a whole range of meta-information that other languages do not have.
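To give an idea of this structure, here is a minimal free-form sketch showing the divisions and the 01/05 level numbering; the record layout is invented for illustration and is not taken from the actual project:

```cobol
IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO-BUSINESS.

DATA DIVISION.
WORKING-STORAGE SECTION.
01 EMPLOYEE-REC.
   05 EMP-NAME    PIC X(20).
   05 EMP-SALARY  PIC 9(7)V99.

PROCEDURE DIVISION.
    MOVE "ADA LOVELACE" TO EMP-NAME
    MOVE 1815.12 TO EMP-SALARY
    DISPLAY EMP-NAME " earns " EMP-SALARY
    STOP RUN.
```

The 01 level introduces the record and the 05 levels associate its fields with it – the meta-information mentioned above – while the single period after STOP RUN closes the scope.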
The lack of semicolons everywhere is nice, with only a single period indicating the end of a scope, even if it concerns an entire loop (perform). I used the modern free-form style of COBOL, which removes the need to put specific parts of the code in specific columns, which no doubt made things a lot easier. In total it only took me a few hours to create a semi-useful COBOL application.
Would I opt to write a more extensive business application in C++ if I were put on a tight deadline? I don’t think so. If I had to do COBOL-like things in C++, I would be hunting for various libraries, getting stuck up to my gills in complex configurations, and scrambling to find replacements for things like Report Writer, or be forced to write my own. Meanwhile in COBOL everything is already there, because it’s what the DSL is designed for. Replacing C++ with Java or the like wouldn’t help either, as you would end up doing just as much boilerplate work and dependency wrangling.
A Modern DSL
Perhaps the funniest thing about COBOL is that since the 2002 standard it has gained a whole range of features that push it closer to generic languages like Java, including object-oriented programming, bit and boolean types, heap-based memory allocation, method overloading and asynchronous messaging. Meanwhile the simple, case-insensitive, English-like syntax – with allowance for various spellings and acronyms – means that you can rapidly type code without adding symbol soup, and reading it is straightforward even as a beginner, as the code literally does what it says it does.
True, the syntax and naming feels a bit quaint at first, but that is easily explained by the fact that when COBOL appeared on the scene, ALGOL was still highly relevant and the C programming language wasn’t even a glimmer in Dennis Ritchie’s eyes yet. If anything, COBOL has proven itself – much like Fortran and others – to be a time-tested DSL that is truly a testament to Grace Hopper and everyone else involved in its creation.
The General Instrument AY-3-8910 was a quite popular Programmable Sound Generator (PSG) that saw use in a wide variety of systems, including Apple II sound cards such as the Mockingboard and various arcade systems. In addition to the Yamaha variants (e.g. YM2149), two cut-down versions were created by GI: the AY-3-8912 and the AY-3-8913, which should have differed only in the number of GPIO banks broken out in the IC package (one or zero, respectively). However, research by [fenarinarsa] and others has shown that the AY-3-8913 variant has some actual hardware issues as a PSG.
With only 24 pins, the AY-3-8913 is significantly easier to integrate than the 40-pin AY-3-8910, at the cost of the (rarely used) GPIO functionality, but as it turns out also with a few gotchas in terms of timing and register access. Although the Mockingboard originally used the AY-3-8910, later revisions would use two AY-3-8913s instead, including the MS revision that was the Mac version of the Mindscape Music Board for IBM PCs.
The first hint that something was off with the AY-3-8913 came when [fenarinarsa] was experimenting with effect composition on an Apple II and noticed very poor sound quality, as demonstrated in an example comparison video (also embedded below). The issue was very pronounced in bass envelopes, with an oscilloscope capture showing a heavily distorted output compared to a YM2149. Why this was not noticed decades ago can likely be explained by the current chiptune scene pushing the hardware in very different ways than back then.
As for potential solutions, the [French Touch] project has created an adapter to allow an AY-3-8910 (or YM2149) to be used in place of an AY-3-8913.
Top image: Revision D PCB of Mockingboard with GI AY-3-8913 PSGs.
A major bottleneck with high-frequency wireless communications is the conversion from radio frequencies to optical signals and vice versa. This is performed by an electro-optic modulator (EOM), which is generally limited to GHz-level signals. To reach THz speeds a new approach was needed, which researchers at ETH Zurich in Switzerland claim to have found in the form of a plasmonic phase modulator.
Although sounding like something from a Star Trek episode, plasmonics is a very real field, which involves the interaction between optical frequencies along metal-dielectric interfaces. The original 2015 paper by [Yannick Salamin] et al. as published in Nano Letters provides the foundations of the achievement, with the recent paper in Optica by [Yannik Horst] et al. covering the THz plasmonic EOM demonstration.
The demonstrated prototype can achieve 1.14 THz, though signal degradation begins to occur around 1 THz. This is achieved by using plasmons (quanta of electron oscillations) generated on a gold surface, which affect the optical beam as it passes small slots in the gold surface containing a nonlinear organic electro-optic material that ‘writes’ the original wireless signal onto the optical beam.
The PET opened, showing the motherboard. (Credit: Ken Shirriff)
An unavoidable part of old home computer systems and kin like the Commodore PET is that due to the age of their components they will develop issues that go far beyond what was covered in the official repair manual, not to mention require unconventional repairs. A case in point is the 2001 series Commodore PET that [Ken Shirriff] recently repaired.
The initial diagnosis was quite straightforward: it did turn on, but only displayed random symbols on the CRT, so obviously the ICs weren’t entirely happy, but at least the power supply and the basic display routines seemed to be more or less functional. Surely this meant that only a few bad ICs and maybe a few capacitors had to be replaced, and everything would be fully functional again.
Initially two bad MOS MPS6540 ROM chips had to be replaced with 2716 EPROMs using an adapter, but this did not fix the original symptom. After a logic analyzer session three bad RAM ICs were identified, which mostly fixed the display issue, aside from a quaint 2×2 checkerboard pattern and completely bizarre behavior upon running BASIC programs.
The logic analyzer capture then identified the 6502 MPU as writing to the wrong addresses. Ironically, this turned out to be due to a wrong byte in one of the replacement 2716 EPROMs, as the programmer used wasn’t quite capable of hitting the right programming voltage. Using a better programmer fixed this, but on the next boot another RAM IC turned out to have failed, upping the total of failed silicon to four RAM & two ROM ICs, as pictured above, and teaching the important lesson to test replacement ROMs before you stick them into a system.
Whilst recently perusing the fine wares for sale at the Vintage Computer Festival East, [Action Retro] ended up adopting a 1995 ProStar laptop. Unlike most laptops of the era, however, this one didn’t just have the typical trackpad and clicky mouse buttons, but also a D-pad and four suspiciously game-controller-looking buttons. This makes it rather like the 2002 Sony VAIO PCG-U subnotebook, or the 2018 GPD Win 2, except that the manufacturer has inexplicably opted to put these (serial-connected) game controls on the laptop’s palm rest.
Sony VAIO PCG-U101. (Credit: Sony)
Though branded ProStar, this laptop was manufactured by Clevo, who to this day produces generic laptops that are rebranded by everyone & their dog. This particular laptop is your typical (120 MHz) Pentium-based unit, with two additional PCBs for the D-pad and buttons wired into the mainboard.
Unlike the sleek and elegant VAIO PCG-U and its successors, this Clevo laptop is a veritable brick, as was typical for the era, which makes the ergonomics of the game controls truly questionable. Although the controls totally work, as demonstrated in the video, you won’t be holding this laptop in your hands, meaning that using the D-pad with your thumb is basically impossible unless you perch the laptop on a stand.
We’re not sure what the Clevo designers were thinking when they dreamed up this beauty, but it definitely makes this laptop stand out from the crowd. As would you, if you were using this as a portable gaming system back in the late 90s.
Have you ever felt the urge to make your own private binary format for use in Linux? Perhaps you have looked at creating the smallest possible binary when compiling a project, and felt disgusted with how bloated the ELF format is? If you are like [Brian Raiter], then this has led you down many rabbit holes, with the conclusion being that flat binary formats are the way to go if you want sleek, streamlined binaries. These are formats like COM, which many know from MS-DOS, but which was already around in the CP/M days. Here ‘flat’ means that the entire binary is loaded into RAM without any fuss or foreplay.
Although Linux does not (yet) support this binary format, the good news is that you can learn how to write kernel modules by implementing COM support for the Linux kernel. In the article [Brian] takes us down this COM rabbit hole, which involves setting up a kernel module development environment and exploring how to implement a binary file format. This leads us past familiar paths for those who have looked at e.g. how the Linux kernel handles the shebang (#!) and ‘misc’ formats.
On Windows, the kernel identifies the COM file by its extension, after which it gives it 640 kB & an interrupt table to play with. The kernel module does pretty much the same, which still involves a lot of code.
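For comparison, extension-based dispatch is something the stock Linux kernel can already do without a custom module via binfmt_misc, by registering a rule that hands matching files to a user-space loader. A hypothetical rule for .com files would look like this (the rule name and loader path are invented for illustration):

```
# One line, written to /proc/sys/fs/binfmt_misc/register
# Fields: :name:type:offset:magic-or-extension:mask:interpreter:flags
# Type 'E' matches on the file extension, here ".com"
:dos-com:E::com::/usr/local/bin/comrun:
```

Of course this merely forwards the file to an interpreter program; actually loading a flat binary into memory and jumping into it is exactly the part that requires the kernel module described in the article.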
Of course, this particular rabbit hole wasn’t deep enough yet, so the COM format was extended into the .♚ (Unicode U+265A) format, because this is 2025 and we have to use all those Unicode glyphs for something. This format extension allows for amazing things like automatically exiting after finishing execution (like crashing).
At the end of all these efforts we have not only learned how to write kernel modules and add new binary file formats to Linux, we have also learned to embrace the freedom of accepting the richness of the Unicode glyph space, rather than remain confined by ASCII. All of which is perfectly fine.
Top image: Illustration of [Brian Raiter] surveying the fruits of his labor by [Bomberanian]
The “USB C” cable that comes with the Inaya Portable Rechargeable Lamp. (Credit: The Stock Pot, YouTube)
Recently [Dillan Stock] over at The Stock Pot YouTube channel bought a $17 ‘mushroom’ lamp from his local Kmart that listed ‘USB-C rechargeable’ as one of its features, the only problem being that although this is technically true, there’s a major asterisk. This Inaya-branded lamp namely comes with a USB-C cable with a rather prominent label attached to it that tells you that this lamp requires that specific cable. After trying with a regular USB-C cable, [Dillan] indeed confirmed that the lamp does not charge from a standard USB-C cable. So he did what any reasonable person would do: he bought a second unit and set about hacking it.
[Dillan] also dug more into what’s so unusual about this cable and the connector inside the lamp. As it turns out, while GND & Vcc are connected as normal, the two data lines (D+, D-) are also connected to Vcc. Presumably on the lamp side this is the expected configuration, while using a regular USB-C cable causes issues. Vice versa, this cable’s configuration may actually be harmful to compliant USB-C devices, though [Dillan] did not try this.
With the second unit in hand, he then started hacking it, with the full plans and schematic available on his website.
The changes include a regular USB-C port for charging, an ESP32 board with integrated battery charger for the 18650 Li-ion cell of the lamp, and an N-channel MOSFET to switch the power to the lamp’s LED. With all of the raw power from the ESP32 available, the two lamps got integrated into the Home Assistant network which enables features such as turning the lamps on when the alarm goes off in the morning. All of this took about $7 in parts and a few hours of work.
Although we can commend [Dillan] on this creative hack rather than returning the item, it’s worrying that apparently there’s now a flood of ‘USB C-powered’ devices out there that come with non-compliant cables that are somehow worse than ‘power-only’ USB cables. It brings back fond memories of hunting down proprietary charging cables, which was the issue that USB power was supposed to fix.
One of the delights in Bash, zsh, or whichever shell tickles your fancy in your OSS distribution of choice, is the ease with which you can use scripts. These can be shell scripts, or use Perl, Python, or another interpreter, as defined by the shebang (#!) at the beginning of the script. This signature is followed by the path to the interpreter, which can be /bin/sh for maximum compatibility across OSes, but how does this actually work? As [Bruno Croci] found while digging into this question, it is not the shell that interprets the shebang, but the kernel.
It’s easy enough to find out the basic execution sequence using strace after you run an executable shell script with said shebang in place. The first point is in execve, a syscall that gets one straight into the Linux kernel (fs/exec.c). Here the ‘binary program’ is analyzed for its executable format, which for the shell script gets us to binfmt_script.c. Incidentally the binfmt_misc.c source file provides an interesting detour, as it concerns magic byte sequences used to do something similar to a shebang.
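The kernel’s role is easy to demonstrate from the command line: point the shebang at a program that is obviously not a shell, such as /bin/cat, and execve() will still happily invoke it with the script’s path as its argument (the file name below is chosen arbitrarily):

```shell
# Create a script whose "interpreter" is /bin/cat; since the kernel
# appends the script path to the interpreter's argv, running it makes
# cat print the script's own contents instead of executing commands.
cat > /tmp/selfprint.sh <<'EOF'
#!/bin/cat
hello from the script body
EOF
chmod +x /tmp/selfprint.sh
/tmp/selfprint.sh
```

Running this prints the file itself, shebang line included – clear evidence that no shell ever looked at the #! line.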
As a bonus [Bruno] also digs into the difference between executing a script with shebang or running it in a shell (e.g. sh script.sh), before wrapping up with a look at where the execute permission on a shebang-ed shell script is checked.
Human biology is very much like that of other mammals, and yet so very different in the areas where it matters. One of these is human neurology, with aspects like the human brain and the somatosensory pathways (i.e. touch etc.) being not only hard to study in non-human animal analogs, but also (genetically) different enough that a human test subject is required. Over the past years human organoids have come into use; these are (parts of) organs grown from human pluripotent stem cells, which thus allow for ethical human experimentation.
For studying aspects like the somatosensory pathways, multiple such organoids must be combined, with [Ji-il Kim] et al. recently demonstrating the creation of a so-called assembloid, as published in Nature. This four-part assembloid contains somatosensory, spinal, thalamic and cortical organoids, covering the entirety of such a pathway from e.g. one’s skin to the brain’s cortex where the sensory information is received.
Such assembloids are – much like organoids – extremely useful not only for studying biological and biochemical processes, but also for researching diseases and disorders. These include tactile deficits caused by certain genetic mutations in Mecp2 and other genes, as previously studied in mouse models by e.g. [Lauren L. Orefice] et al., as well as conditions linked to genes like SCN9A, mutations in which can cause a clinical absence of pain perception.
Using these assembloids the development of these pathways can be studied in great detail and therapies developed and tested.