Preventing Galvanic Corrosion in Water Cooling Loops

By: Maya Posch
21 April 2025 at 05:00

Water is an excellent coolant, but the flip side is that it is also an excellent solvent. This, in short, is why any water cooling loop is also a prime candidate for an interesting introduction to the galvanic metal series, with severe corrosion commencing immediately. In a recent video by [der8auer], this issue is demonstrated using a GPU cold plate. The part is made out of nickel-plated copper and features many small channels to increase the surface area in contact with the coolant.

The surface analysis of the sample cold plate after a brief exposure to distilled water shows the deposited copper atoms. (Credit: der8auer, YouTube)

Theoretically, if one were to use distilled water in a coolant loop that contains a single type of metal (like copper), there would be no issue. As [der8auer] points out, however, fittings, radiators, and the cooling block are nearly always made of various metals and alloys, brass being a common example. This creates the setup for galvanic corrosion, whereby one metal acts as the anode and the other as the cathode. While this is desirable in batteries, for a cooling loop it means that the water strips metal ions off the anode and deposits them on the cathode metal.

The nickel-plated cold plate should be immune to this if the plating were perfect. However, as demonstrated in the video, even a brief exposure to distilled water at 60°C induced strong galvanic corrosion. Analysis in an SEM showed that the imperfect nickel plating allowed copper ions to dissolve into the water before being deposited on top of the nickel (the cathode). In a comparison with another sample that used a coolant with a corrosion inhibitor (DP Ultra), no such corrosion was observed, even after much longer exposure.

This DP Ultra coolant is mostly distilled water but has glycol added. The glycol improves the pH and coats surfaces to prevent galvanic corrosion. The other key additive is benzotriazole, which provides similar benefits. Of course, each corrosion inhibitor targets a specific environment, and there is also the issue of organic films forming, which may require biocides to be added. As usual, water cooling has more subtlety than you’d expect.

China’s TMSR-LF1 Molten Salt Thorium Reactor Begins Live Refueling Operations

By: Maya Posch
19 April 2025 at 23:00
The TMSR-LF1 building seen from the sky. (Credit: SINAP)

Although uranium-235 is the typical fuel for commercial fission reactors on account of it being fissile, it is relatively rare compared to the fertile U-238 and thorium (Th-232). Using either of these fertile isotopes to breed new fuel is thus an attractive proposition. Despite this, only India and China have a strong focus on using Th-232 for reactors, the former using breeders (Th-232 to U-233) to create fissile uranium fuel. China has demonstrated its approach – including refueling a live reactor – using a fourth-generation molten salt reactor.
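For reference, thorium breeding is a single neutron capture followed by two beta decays, with the fissile U-233 as the end product:

$$
{}^{232}\mathrm{Th} + n \;\longrightarrow\; {}^{233}\mathrm{Th} \;\xrightarrow{\beta^-,\ t_{1/2}\approx 22\ \mathrm{min}}\; {}^{233}\mathrm{Pa} \;\xrightarrow{\beta^-,\ t_{1/2}\approx 27\ \mathrm{d}}\; {}^{233}\mathrm{U}
$$

The weeks-long protactinium step is one reason liquid fuel is attractive: Pa-233 can be taken out of the neutron flux while it decays, rather than sitting in the core capturing further neutrons.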

The original research comes from US scientists in the 1960s. While there were tests with the MSRE reactor, no follow-up studies were funded. The concept languished until recently, with designs like Terrestrial Energy’s Integral MSR, and construction of China’s 2 MW TMSR-LF1 experimental reactor commencing in 2018 before first criticality in 2023. One major advantage of an MSR with liquid fuel (the -LF part of the name) is that contaminants can be filtered out and fresh fuel added while the reactor is running. With this successful demonstration, along with the breeding of uranium fuel from thorium last year, a larger, 10 MW design can now be tested.

Since the TMSR doesn’t need cooling water, it is perfect for use in arid areas. In addition, China is working on using a TMSR-derived design in nuclear-powered container vessels. With enough thorium around for tens of thousands of years, these low-maintenance MSR designs could soon power much of modern society, alongside high-temperature pebble bed reactors, another concept that China has recently made work with its HTR-PM design.

Meanwhile, reactors are getting smaller in general.

Restoring an Abandoned Game Boy Kiosk

By: Maya Posch
19 April 2025 at 05:00

Back in the olden days, there existed physical game stores, which in addition to physical games would also have kiosks where you could try out the current game consoles and handhelds. Generally these kiosks held the console, a display, and any controllers if needed. After a while these kiosks would get scrapped, with only a very few being rescued and restored. One of the lucky ones is a Game Boy kiosk, which [The Retro Future] managed to snag after it was found at a construction site. Sadly the thing was in very rough condition, with the particle board in particular mostly destroyed.

Display model Game Boy, safely secured into the demo kiosk. (Credit: The Retro Future, YouTube)

These Game Boy kiosks also featured a special Game Boy, which – despite being super rare – was also hunted down. This led to the restoration, which included recovering as much of the original particle board as possible, with a professional furniture restorer ([Don]) lending his expertise. It provides a master class in how to patch up damaged particle board, as maligned as this wood-dust-and-glue material is.

The boards were then reassembled, more securely than with the wood screws used by the person who had found the destroyed kiosk, and in a way that allows for easy disassembly if needed. Fortunately most of the plastic pieces were still intact, and the Game Boy grey paint was easily matched. Next was reproducing a missing piece of artwork, fortunately with existing versions available as reference. For a few missing metal bits that held the special Game Boy in place, another kiosk was used to provide measurements.

After all this, the kiosk was powered back on, and it was like 1990 was back once again, just in time for playing Tetris on a dim, green-and-black screen while hunched half into the kiosk at the game store.

Haircuts in Space: How to Keep Your Astronauts Looking Fresh

By: Maya Posch
19 April 2025 at 02:00
NASA astronaut Catherine Coleman gives ESA astronaut Paolo Nespoli a haircut in the Kibo laboratory on the ISS in 2011. (Credit: NASA)

Although we tend to see mostly the glorious and fun parts of hanging out in a space station, the human body will not cease to do its usual things, whether it involves the digestive system, or even something as mundane as the hair that sprouts from our heads. After all, we do not want our astronauts to return to Earth after a half-year stay in the ISS looking as if they got marooned on an uninhabited island. Introducing the onboard barbershop on the ISS, and the engineering behind making sure that after a decade the ISS doesn’t positively look like it got the 1970s shaggy wall carpet treatment.

The basic solution is rather straightforward: an electric hair clipper attached to a vacuum that whisks the clippings safely into a container rather than letting them drift around. In a way this is similar to the vacuums you find on routers and saws in a woodworking shop, just with keratin rather than cellulose and lignin.

On the Chinese Tiangong space station they use a similar approach, with the video showing how simple the system is: little more than a small handheld vacuum cleaner attached to the clippers. Naturally, you cannot just tape a vacuum cleaner to some clippers and expect it to catch most of the clippings, which is why both the ISS and Tiangong solutions seem to use a carefully designed construction to maximize hair removal. You can see the ISS system in action in this 2019 video from the Canadian Space Agency.

Of course, this system is not perfect, but amidst the kilograms of shed skin particles from the crew, a few small hair clippings can likely be handled by the ISS’ air treatment systems just fine. The goal after all is to not have a massive expanding cloud of hair clippings filling up the space station.

Rise of the Robots: How Robots Are Changing Dairy Farms

By: Maya Posch
18 April 2025 at 02:00

Running a dairy farm used to be a rather hands-on experience, with the farmer required to be around every few hours to milk the cows, feed them, do all the veterinary tasks that the farmer can do themselves, and so on. The introduction of milking machines in the early 20th century, however, began a trend of increased automation whereby a single farmer could handle a hundred cows by the end of the century instead of only a couple. A recent article in IEEE Spectrum covers the continued progress here, including cows milking themselves on demand, as shown in the top image.

The article focuses primarily on Dutch company Lely’s recent robots, which range from said self-milking robots to a manure cleaning robot that looks like an oversized Roomba. With how labor-intensive (and low-margin) a dairy farm is, any level of automation that can improve matters will be welcomed, and so far Lely’s robots have received a mostly positive response. Since cows are pretty smart, they will happily guide themselves to a self-milking robot when they feel that their udders are full enough. This can save the farmer a few hours of work each day, as the robot handles every task, including cleaning the udders prior to milking and sanitizing itself before inviting the next cow into its loving embrace.

As for the other tasks, speaking as a genuine Dutch dairy farm girl who was born & raised around cattle (and sheep), the idea of e.g. mucking out stables being taken over by robots is something that raises a lot more skepticism. After all, a farmer’s children have to earn their pocket money somehow, which includes mucking, herding, farm maintenance and so on. Unless those robots get really cheap and low maintenance, the idea of fully automated dairy farms may still be a long while off, but reducing the workload and making cows happier are definitely lofty goals.

Top image: The milking robot that can automatically milk a cow without human assistance. (Credit: Lely)

GK STM32 MCU-Based Handheld Game System

By: Maya Posch
16 April 2025 at 23:00

These days even a lowly microcontroller can easily trade blows with – or surpass – desktop systems of yesteryear, so it is little wonder that DIY handheld gaming systems based around an MCU are more capable than ever. A case in point is the GK handheld gaming system by [John Cronin], which uses an MCU from the relatively new and very capable STM32H7S7 series, specifically the 225-pin STM32H7S7L8 in a TFBGA package, with a single Cortex-M7 clocked at 600 MHz and a 2D NeoChrom GPU.

Coupled with this MCU are 128 MB of XSPI (hexa-SPI) SDRAM, a 640×480 color touch screen, a gyrometer, WiFi network support, and the custom gkOS firmware for loading games off an internal SD card. A USB-C port is provided both to access said SD card’s contents and to recharge the internal Li-ion battery.

As can be seen in the demonstration video, it runs a wide variety of games, ranging from DOOM (of course) and Quake to Command and Conquer: Red Alert, plus emulators for many consoles, with the Mednafen project used to emulate Game Boy, Super Nintendo, and other systems at 20+ FPS. Although there aren’t a lot of details on how optimized the current firmware is, it seems to be pretty capable already.

Porting COBOL Code and the Trouble With Ditching Domain Specific Languages

By: Maya Posch
16 April 2025 at 14:00

Whenever the topic is raised in popular media about porting a codebase written in an ‘antiquated’ programming language like Fortran or COBOL, very few people tend to object to this notion. After all, what could be better than ditching decades of crusty old code in a language that only your grandparents can remember as being relevant? Surely a clean and fresh rewrite in a modern language like Java, Rust, Python, Zig, or NodeJS will fix all ailments and make future maintenance a snap?

For anyone who has ever had to actually port large codebases or dealt with ‘legacy’ systems, their reflexive response to such announcements most likely ranges from a shaking of one’s head to mad cackling as traumatic memories come flooding back. The old idiom of “if it ain’t broke, don’t fix it”, purportedly coined in 1977 by Bert Lance, is a feeling that has been shared by countless individuals over millennia. Even worse, how can you ‘fix’ something if you do not even fully understand the problem?

In the case of languages like COBOL this is doubly true, as it is a domain specific language (DSL). This is a very different category from general purpose system programming languages like the aforementioned ‘replacements’. The suggestion of porting the DSL codebase is thus to effectively reimplement all of COBOL’s functionality, which should seem like a very poorly thought out idea to any rational mind.

Sticking To A Domain

The term ‘domain specific language’ is pretty much what it says it is, and there are many such DSLs around, ranging from PostScript and SQL to the shader language GLSL. Although it is definitely possible to push DSLs into doing things which they were never designed for, the primary point of a DSL is to explicitly limit its functionality to that one specific domain. GLSL, for example, is based on C and could be considered a very restricted version of that language, which raises the question of why one should not just write shaders in C.

Similarly, Fortran (Formula Translating System) was designed as a DSL targeting scientific and high-performance computation. First used in 1957, it still ranks in the top 10 of the TIOBE index, and just about any code that has to do with high-performance computing (HPC) in science and engineering will be written in Fortran or strongly rely on libraries written in Fortran. The reason for this is simple: from the beginning Fortran was designed to make such computations as easy as possible, with subsequent revisions of the language standard adding features where needed.

Fortran’s latest standard update was published in November 2023, joining the COBOL 2023 standard; both DSLs are still very much alive and current today.

The strength of a DSL is often underestimated, as the whole point of a DSL is that you can teach this simpler, focused language to someone who can then become fluent in it, without requiring them to become fluent in a generic programming language and all the libraries and other baggage that entails. For those of us who already speak C, C++, or Java, it may seem appealing to write everything in that language, but not to those who have no interest in learning a whole generic language.

There are effectively two major reasons why a DSL is the better choice for said domain:

  • Easy to learn and teach, because it’s a much smaller language
  • Far fewer edge cases and simpler tooling

In the case of COBOL and Fortran this means only a fraction of the keywords (‘verbs’ for COBOL) to learn, and a language that’s streamlined for a specific task, whether it’s to allow a physicist to do some fluid-dynamic modelling, or a staff member at a bank or the social security offices to write a data processing application that churns through database data in order to create a nicely formatted report. Surely one could force both of these people to learn C++, Java, Rust or NodeJS, but this may backfire in many ways, the resulting code quality being one of them.

Tangentially, this is also one of the amazing things in the hardware design language (HDL) domain, where rather than using (System)Verilog or VHDL, there is an amazing growth of alternative HDLs, many of them implemented in generic scripting and programming languages. That this prevents any kind of skill and code sharing, and repeatedly (and often poorly) reinvents the wheel, seems to be of little concern to many.

Non-Broken Code

A very nice aspect of these existing COBOL codebases is that they generally have been around for decades, during which time they have been carefully pruned, trimmed and debugged, requiring only minimal maintenance and updates while they happily keep purring along on mainframes as they process banking and government data.

One argument that has been made in favor of porting from COBOL to a generic programming language is ‘ease of maintenance’, pointing out that COBOL is supposedly very hard to read and write and thus maintaining it would be far too cumbersome.

Since it’s easy to philosophize about such matters from a position of ignorance and/or conviction, I recently decided to take up some COBOL programming from the position of both a COBOL newbie as well as an experienced C++ (and other language) developer. Cue the ‘Hello Business’ playground project.

For the tooling I used the GnuCOBOL transpiler, which converts the COBOL code to C before compiling it to a binary, but in a few weeks the GCC 15.1 release will bring a brand new COBOL frontend (gcobol) that I’m dying to try out. As language reference I used a combination of the Wikipedia entry for COBOL, the IBM ILE COBOL language reference (PDF) and the IBM COBOL Report Writer Programmer’s Manual (PDF).

My goal for this ‘Hello Business’ project was to create something that did actual practical work. I took the FileHandling.cob example from the COBOL tutorial by Armin Afazeli as a starting point, which I modified and extended to read in records from a file, employees.dat, before using the standard Report Writer feature to create a report file in which the employees and their salaries are listed, with page numbering and the total salary value summed in a report footing entry.
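For contrast, here is a minimal C sketch of the same chore, assuming a hypothetical whitespace-separated employees.dat layout. Everything in it – the page headings, the column formatting, the running total – is what Report Writer describes declaratively in a few DATA DIVISION entries:

/* Hand-rolled analogue of the COBOL Report Writer job: read employee
   records and print a salary report with page numbers and a total. */
#include <stdio.h>

int main(void)
{
    FILE *in = fopen("employees.dat", "r");
    if (!in) { perror("employees.dat"); return 1; }

    char name[31];
    double salary, total = 0.0;
    int row = 0, page = 1;

    while (fscanf(in, "%30s %lf", name, &salary) == 2) {
        if (row++ % 40 == 0)  /* manual page breaks and headings */
            printf("\fSalary Report                    Page %d\n\n", page++);
        printf("%-30s %12.2f\n", name, salary);
        total += salary;
    }
    printf("\n%-30s %12.2f\n", "TOTAL", total);  /* the report footing */

    fclose(in);
    return 0;
}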

My impression was that although it takes a moment to learn the various divisions that the variables, files, I/O, and procedures are put into, it’s all extremely orderly and predictable. The compiler will also helpfully tell you if you did anything out of order or forgot something. While data level numbering to indicate data associations is somewhat quaint, after a while I didn’t mind at all, especially since it provides a whole range of meta information that other languages do not have.

The lack of semi-colons everywhere is nice, with only a single period indicating the end of a scope, even if it concerns an entire loop (perform). I used the modern free style form of COBOL, which removes the need to use specific columns for parts of the code, which no doubt made things a lot easier. In total it only took me a few hours to create a semi-useful COBOL application.

Would I opt to write a more extensive business application in C++ if I were put on a tight deadline? I don’t think so. If I had to do COBOL-like things in C++, I would be hunting for various libraries, get stuck up to my gills in complex configurations, and be scrambling to find replacements for things like Report Writer, or be forced to write my own. Meanwhile in COBOL everything is already there, because it is what that DSL is designed for. Replacing C++ with Java or the like wouldn’t help either, as you end up doing just as much boilerplate work and dependency wrangling.

A Modern DSL

Perhaps the funniest thing about COBOL is that since the 2002 standard it has gained a whole range of features that push it closer to generic languages like Java. These include object-oriented programming, bit and boolean types, heap-based memory allocation, method overloading, and asynchronous messaging. Meanwhile the simple, case-insensitive, English-like syntax – with allowance for various spellings and acronyms – means that you can rapidly type code without adding symbol soup, and reading it is obvious even to a beginner, as the code literally does what it says it does.

True, the syntax and naming feels a bit quaint at first, but that is easily explained by the fact that when COBOL appeared on the scene, ALGOL was still highly relevant and the C programming language wasn’t even a glimmer in Dennis Ritchie’s eyes yet. If anything, COBOL has proven itself – much like Fortran and others – to be a time-tested DSL that is truly a testament to Grace Hopper and everyone else involved in its creation.

Something is Very Wrong With the AY-3-8913 Sound Generator

By: Maya Posch
16 April 2025 at 05:00
Revision D PCB of Mockingboard with GI AY-3-8913 PSGs.

The General Instruments AY-3-8910 was a quite popular Programmable Sound Generator (PSG) that saw itself used in a wide variety of systems, including Apple II soundcards such as the Mockingboard and various arcade systems. In addition to the Yamaha variants (e.g. the YM2149), two cut-down versions were created by GI: the AY-3-8912 and the AY-3-8913, which should have been differentiated only by the number of GPIO banks broken out in the IC package (one or zero, respectively). However, research by [fenarinarsa] and others has shown that the AY-3-8913 variant has some actual hardware issues as a PSG.

With only 24 pins, the AY-3-8913 is significantly easier to integrate than the 40-pin AY-3-8910, at the cost of the (rarely used) GPIO functionality, but as it turns out also with a few gotchas in terms of timing and register access. Although the Mockingboard originally used the AY-3-8910, later revisions would use two AY-3-8913s instead, including the MS revision that was the Mac version of the Mindscape Music Board for IBM PCs.

The first hint that something was off with the AY-3-8913 came when [fenarinarsa] was experimenting with effect composition on an Apple II and noticed very poor sound quality, as demonstrated in an example comparison video (also embedded below). The issue was very pronounced in bass envelopes, with an oscilloscope capture showing a very distorted output compared to a YM2149. As for why this was not noticed decades ago, the likely explanation is that the current chiptune scene is pushing the hardware in very different ways than it was pushed back then.

As for potential solutions, the [French Touch] project has created an adapter to allow an AY-3-8910 (or YM2149) to be used in place of an AY-3-8913.

Top image: Revision D PCB of Mockingboard with GI AY-3-8913 PSGs.

Plasmonic Modulators Directly Convert Terahertz Waves to Optical Signals

By: Maya Posch
15 April 2025 at 02:00

A major bottleneck with high-frequency wireless communications is the conversion from radio frequencies to optical signals and vice versa. This is performed by an electro-optic modulator (EOM), which is generally limited to GHz-level signals. To reach THz speeds, a new approach was needed, which researchers at ETH Zurich in Switzerland claim to have found in the form of a plasmonic phase modulator.

Although it sounds like something from a Star Trek episode, plasmonics is a very real field, which involves the interaction between light and electrons along metal-dielectric interfaces. The original 2015 paper by [Yannick Salamin] et al. as published in Nano Letters provides the foundations of the achievement, with the recent paper in Optica by [Yannik Horst] et al. covering the THz plasmonic EOM demonstration.

The demonstrated prototype can achieve 1.14 THz, though signal degradation begins to occur around 1 THz. This is achieved by using plasmons (quanta of electron oscillations) generated on the gold surface, which affect the optical beam as it passes through small slots in the gold surface. These slots contain a nonlinear organic electro-optic material that ‘writes’ the original wireless signal onto the optical beam.

A Tricky Commodore PET Repair and a Lesson About Assumptions

By: Maya Posch
14 April 2025 at 08:00
The PET opened, showing the motherboard. (Credit: Ken Shirriff)

An unavoidable part of old home computer systems and kin like the Commodore PET is that due to the age of their components they will develop issues that go far beyond what was covered in the official repair manual, not to mention require unconventional repairs. A case in point is the 2001 series Commodore PET that [Ken Shirriff] recently repaired.

The initial diagnosis was quite straightforward: it did turn on, but only displayed random symbols on the CRT, so obviously the ICs weren’t entirely happy, but at least the power supply and the basic display routines seemed to be more or less functional. Surely this meant that only a few bad ICs and maybe a few capacitors had to be replaced, and everything would be fully functional again.

Initially two bad MOS MPS6540 ROM chips had to be replaced with 2716 EPROMs using an adapter, but this did not fix the original symptom. After a logic analyzer session, three bad RAM ICs were identified, which mostly fixed the display issue, aside from a quaint 2×2 checkerboard pattern and completely bizarre behavior when running BASIC programs.

Using the logic analyzer capture, the 6502 MPU was identified as writing to the wrong addresses. Ironically, this turned out to be due to a wrong byte in one of the replacement 2716 EPROMs, as the programmer used wasn’t quite capable of hitting the right programming voltage. Using a better programmer fixed this, but on the next boot another RAM IC turned out to have failed, upping the total of failed silicon to four RAM and two ROM ICs, as pictured above, and teaching the important lesson to test replacement ROMs before you stick them into a system.

The ProStar: the Portable Gaming System and Laptop From 1995

By: Maya Posch
14 April 2025 at 02:00

Whilst recently perusing the fine wares for sale at the Vintage Computer Festival East, [Action Retro] ended up adopting a 1995 ProStar laptop. Unlike most laptops of the era, however, this one didn’t just have the typical trackpad and clicky mouse buttons, but also a D-pad and four suspiciously game-controller-looking buttons. This makes it rather like the 2002 Sony VAIO PCG-U subnotebook, or the 2018 GPD Win 2, except that the manufacturer inexplicably opted to put these (serial-connected) game controls on the laptop’s palm rest.

Sony VAIO PCG-U101. (Credit: Sony)

Though branded ProStar, this laptop was manufactured by Clevo, who to this day produces generic laptops that are rebranded by everyone & their dog. This particular laptop is your typical (120 MHz) Pentium-based unit, with two additional PCBs for the D-pad and buttons wired into the mainboard.

Unlike the sleek and elegant VAIO PCG-U and its successors, this Clevo laptop is a veritable brick, as was typical for the era, which makes the ergonomics of the game controls truly questionable. Although the controls totally work, as demonstrated in the video, you won’t be holding this laptop in your hands like a handheld, meaning that using the D-pad with your thumb is basically impossible unless you perch the laptop on a stand.

We’re not sure what the Clevo designers were thinking when they dreamed up this beauty, but it definitely makes this laptop stand out from the crowd. As would you, if you were using this as a portable gaming system back in the late 90s.

Our own [Adam Fabio] was at VCF East this year as well, and was impressed by an expansive exhibit dedicated to Windows 95.

Learning Linux Kernel Modules Using COM Binary Support

By: Maya Posch
13 April 2025 at 08:00
Illustration of author surveying the fruits of his labor by Bomberanian

Have you ever felt the urge to make your own private binary format for use in Linux? Perhaps you have looked at creating the smallest possible binary when compiling a project, and felt disgusted with how bloated the ELF format is? If you are like [Brian Raiter], then this has led you down many rabbit holes, with the conclusion being that flat binary formats are the way to go if you want sleek, streamlined binaries. These are formats like COM, which many know from MS-DOS, but which was already around in the CP/M days. Here ‘flat’ means that the entire binary is loaded into RAM without any fuss or foreplay.

Although Linux does not (yet) support this binary format, the good news is that you can learn how to write kernel modules by implementing COM support for the Linux kernel. In the article [Brian] takes us down this COM rabbit hole, which involves setting up a kernel module development environment and exploring how to implement a binary file format. This leads us past familiar paths for those who have looked at e.g. how the Linux kernel handles the shebang (#!) and ‘misc’ formats.

On Windows, the kernel identifies the COM file by its extension, after which it gives it 640 kB & an interrupt table to play with. The kernel module does pretty much the same, which still involves a lot of code.
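To get a feel for the mechanics, here is a bare-bones sketch of how any binary format handler hooks into the kernel – not [Brian]’s actual module, just the standard binfmt registration boilerplate. All of the real work (validating the file, setting up the memory map, starting execution) happens inside the load_binary() callback:

/* com_binfmt.c – skeleton kernel module registering a binary format */
#include <linux/binfmts.h>
#include <linux/errno.h>
#include <linux/module.h>

static int load_com_binary(struct linux_binprm *bprm)
{
    /* Inspect bprm (file contents, filename) to decide whether this
       is our format; set up the process image and start it if so. */
    return -ENOEXEC;  /* placeholder: decline, let other handlers try */
}

static struct linux_binfmt com_format = {
    .module      = THIS_MODULE,
    .load_binary = load_com_binary,
};

static int __init com_init(void)
{
    register_binfmt(&com_format);
    return 0;
}

static void __exit com_exit(void)
{
    unregister_binfmt(&com_format);
}

module_init(com_init);
module_exit(com_exit);
MODULE_LICENSE("GPL");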

Of course, this particular rabbit hole wasn’t deep enough yet, so the COM format was extended into the .♚ (Unicode U+265A) format, because this is 2025 and we have to use all those Unicode glyphs for something. This format extension allows for amazing things like automatically exiting after finishing execution (like crashing).

At the end of all these efforts we have not only learned how to write kernel modules and add new binary file formats to Linux, we have also learned to embrace the freedom of accepting the richness of the Unicode glyph space, rather than remain confined by ASCII. All of which is perfectly fine.

Top image: Illustration of [Brian Raiter] surveying the fruits of his labor by [Bomberanian]

Hacking a Cheap Rechargeable Lamp With Non-Standard USB-C Connector

By: Maya Posch
12 April 2025 at 23:00
The "USB C" cable that comes with the Inaya Portable Rechargeable Lamp. (Credit: The Stock Pot, YouTube)
The “USB C” cable that comes with the Inaya Portable Rechargeable Lamp. (Credit: The Stock Pot, YouTube)

Recently [Dillan Stock] over at The Stock Pot YouTube channel bought a $17 ‘mushroom’ lamp from his local Kmart that listed ‘USB-C rechargeable’ as one of its features, the only problem being that although this is technically true, there is a major asterisk. This Inaya-branded lamp comes with a USB-C cable bearing a rather prominent label telling you that the lamp requires that specific cable. After trying a regular USB-C cable, [Dillan] indeed confirmed that the lamp does not charge from it. So he did what any reasonable person would do: he bought a second unit and set about hacking it.

[Dillan] also dug more into what’s so unusual about this cable and the connector inside the lamp. As it turns out, while GND & Vcc are connected as normal, the two data lines (D+, D-) are also connected to Vcc. Presumably on the lamp side this is the expected configuration, while using a regular USB-C cable causes issues. Vice versa, this cable’s configuration may actually be harmful to compliant USB-C devices, though [Dillan] did not try this.

With the second unit in hand, he then started hacking it, with the full plans and schematic available on his website.

The changes include a regular USB-C port for charging, an ESP32 board with an integrated battery charger for the lamp’s 18650 Li-ion cell, and an N-channel MOSFET to switch power to the lamp’s LED. With all the raw power of the ESP32 available, the two lamps got integrated into the Home Assistant network, which enables features such as turning the lamps on when the alarm goes off in the morning. All of this took about $7 in parts and a few hours of work.

Although we can commend [Dillan] on this creative hack rather than returning the item, it’s worrying that apparently there’s now a flood of ‘USB C-powered’ devices out there that come with non-compliant cables that are somehow worse than ‘power-only’ USB cables. It brings back fond memories of hunting down proprietary charging cables, which was the issue that USB power was supposed to fix.

Tracing the #!: How the Linux Kernel Handles the Shebang

By: Maya Posch
12 April 2025 at 05:00

One of the delights in Bash, zsh, or whichever shell tickles your fancy in your OSS distribution of choice, is the ease with which you can use scripts. These can be shell scripts, or use the Perl, Python, or another interpreter, as defined by the shebang (#!) at the beginning of the script. This signature is followed by the path to the interpreter, which can be /bin/sh for maximum compatibility across OSes. But how does this actually work? As [Bruno Croci] found while digging into this question, it is not the shell that interprets the shebang, but the kernel.

It’s easy enough to find out the basic execution sequence using strace after you run an executable shell script with said shebang in place. The first point is execve, a syscall that gets one straight into the Linux kernel (fs/exec.c). Here the ‘binary program’ is analyzed for its executable format, which for the shell script gets us to binfmt_script.c. Incidentally, the binfmt_misc.c source file provides an interesting detour, as it concerns magic byte sequences used to do something similar to a shebang.
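You can observe this division of labor from userspace with a minimal C probe (assuming an executable ./script.sh with a shebang line sits in the working directory). No shell is involved anywhere, yet the interpreter named after the #! still runs:

/* exec_probe.c – hand a script directly to the kernel via execve() */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char *argv[] = { "./script.sh", NULL };
    char *envp[] = { NULL };

    /* The kernel (fs/binfmt_script.c) parses the #! line and rewrites
       argv so that the interpreter runs with the script as argument. */
    execve("./script.sh", argv, envp);

    /* Only reached on failure; without a shebang (and without ELF or
       binfmt_misc matching) execve() typically fails with ENOEXEC. */
    fprintf(stderr, "execve: %s\n", strerror(errno));
    return 1;
}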

As a bonus [Bruno] also digs into the difference between executing a script with shebang or running it in a shell (e.g. sh script.sh), before wrapping up with a look at where the execute permission on a shebang-ed shell script is checked.

Creating a Somatosensory Pathway From Human Stem Cells

By: Maya Posch
12 April 2025 at 02:00

Human biology is very much like that of other mammals, and yet so very different in areas where it matters. One of these areas is human neurology, with aspects like the human brain and the somatosensory pathways (i.e. touch etc.) being not only hard to study in non-human animal analogs, but also (genetically) different enough that a human test subject is required. Over the past years, human organoids have come into use; these are (parts of) organs grown from human pluripotent stem cells, which thus allow for ethical human experimentation.

For studying aspects like the somatosensory pathways, multiple such organoids must be combined, with [Ji-il Kim] et al. recently demonstrating the creation of a so-called assembloid, as published in Nature. This four-part assembloid contains somatosensory, spinal, thalamic, and cortical organoids, covering the entirety of such a pathway from e.g. one’s skin to the brain’s cortex where the sensory information is received.

Such assembloids are – much like organoids – extremely useful not only for studying biological and biochemical processes, but also for researching diseases and disorders. These include tactile deficits caused by certain genetic mutations in Mecp2 and other genes, as previously studied in mouse models by e.g. [Lauren L. Orefice] et al., as well as mutations in genes like SCN9A that can cause clinical absence of pain perception.

Using these assembloids the development of these pathways can be studied in great detail and therapies developed and tested.

Using Integer Addition to Approximate Float Multiplication

By: Maya Posch
11 April 2025 at 02:00

Once the domain of esoteric scientific and business computing, floating point calculations are now practically everywhere. From video games to large language models and kin, it would seem that a processor without floating point capabilities is pretty much a brick at this point. Yet the truth is that integer-based approximations can be good enough to hit the required accuracy. For example, approximating floating point multiplication with integer addition, as [Malte Skarupke] recently had a poke at, based on an integer addition-only LLM approach suggested by [Hongyin Luo] and [Wei Sun].

As for the way this works, it does pretty much what it says on the tin: adding the two floating point inputs as integer values, followed by adjusting the exponent. This adjustment factor is what gets you close to the answer, but as the article and comments to it illustrate, there are plenty of issues and edge cases you have to concern yourself with. These include under- and overflow, but also specific floating point inputs.
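A minimal C sketch of the trick, assuming IEEE-754 binary32 and positive, normal inputs – the sign, zero, infinity, NaN, and overflow handling that the article discusses is deliberately left out:

/* Approximate a * b by adding the raw bit patterns and subtracting the
   offset 0x3F800000 (the encoding of 1.0f). The exponents add up
   correctly, while the mantissa addition approximates the mantissa
   product, logarithm-style. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static float approx_mul(float a, float b)
{
    uint32_t ia, ib, ir;
    float r;

    memcpy(&ia, &a, sizeof ia);
    memcpy(&ib, &b, sizeof ib);
    ir = ia + ib - 0x3F800000u;  /* the exponent adjustment */
    memcpy(&r, &ir, sizeof r);
    return r;
}

int main(void)
{
    /* Exact when one mantissa is 1.0 (powers of two)... */
    printf("3.5 * 2.0 ~ %f (exact %f)\n", approx_mul(3.5f, 2.0f), 3.5f * 2.0f);
    /* ...and low otherwise: about 11%% here, near the worst case. */
    printf("1.5 * 1.5 ~ %f (exact %f)\n", approx_mul(1.5f, 1.5f), 1.5f * 1.5f);
    return 0;
}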

Unlike in scientific calculations, where even minor inaccuracies tend to propagate and cause much larger errors down the line, graphics and LLMs do not care that much about floating point precision, so the ~7.5% error of the integer approach is good enough. The question is whether it’s truly more efficient as the paper suggests, rather than a fallback as seen with e.g. integer-only audio decoders for platforms without an FPU.

Since one of the nice things about FP-focused vector processors like GPUs and derivatives (tensor, ‘neural’, etc.) is that they can churn through a lot of data quite efficiently, expecting (energy) improvements from shifting this work to the ALU of a CPU seems quite optimistic.

Why USB-C Splitters Can Cause Magic Smoke Release

By: Maya Posch
7 April 2025 at 11:00

Using USB for powering devices is wonderful, as it frees us from a tangle of incompatible barrel & TRS connectors, not to mention a veritable gaggle of proprietary power connectors. The unfortunate side-effect of this is that the obvious thing to do with power connectors is to introduce splitters, which can backfire horribly, especially since USB-C and USB Power Delivery (USB-PD) were introduced. The [Quiescent Current] channel on YouTube recently went over the ways in which these handy gadgets can literally turn your USB-powered devices into a smoldering pile of ashes.

Much like Qualcomm’s Quick Charge protocols, USB-PD negotiates higher voltages with the power supply, after which this same voltage will be provided to any device that’s tapped into the power supply lines of the USB connector. Since USB-C has now also taken over duties like those of analog audio jacks, the demand for splitters has increased, but these introduce many risks. Unless you know how these splitters are wired inside, your spiffy smartphone may happily negotiate 20 V, which will subsequently fry a USB-powered speaker that was charging off the same splitter.

In the video only a resistor and an LED were sacrificed to make the point, but in a real-life scenario the damage would probably be significantly more expensive.

Teardown of a Scam Ultrasonic Cleaner

By: Maya Posch
3 April 2025 at 20:00

Everyone knows that ultrasonic cleaners are great, but not every device that’s marketed as an ultrasonic cleaner is necessarily such a device. In a recent video on the Cheap & Cheerful YouTube channel the difference is explored, starting with a teardown of a fake one. The first hint comes with the use of the description ‘Multifunction cleaner’ on the packaging, and the second in the form of it being powered by two AAA batteries.

Unsurprisingly, inside you find not the ultrasonic transducer that you’d expect in an actual ultrasonic cleaner, but rather a vibration motor. In the demonstration prior to the teardown you can see that although the device makes a similarly annoying buzzing noise, the effect is very different. Subsequently the video looks at a small, real ultrasonic cleaner and compares the two.

Among the obvious differences are that the ultrasonic cleaner is made out of metal and AC-powered, and does a much better job at cleaning things like rusty parts. The annoying thing is that although the cleaners with a vibration motor will also clean things, they rely on agitating the water in a far less aggressive way than an ultrasonic cleaner does, so marketing them as something which they’re not is rather deceptive.

In the video the argument is also made that you do not want to clean PCBs with an ultrasonic cleaner, but we think that people here may have different views on that aspect.

Remembering Betty Webb: Bletchley Park & Pentagon Code Breaker

By: Maya Posch
3 April 2025 at 14:00
S/Sgt Betty Vine-Stevens, Washington DC, May 1945.

On 31 March of this year we had to bid farewell to Charlotte Elizabeth “Betty” Webb (née Vine-Stevens) at the age of 101. She was one of the cryptanalysts who worked at Bletchley Park during World War 2, and one of the few women to work there in that role. At the time, existing societal biases held that women were not interested in ‘intellectual work’, but as manpower was short due to wartime mobilization, more and more women found themselves working at places like Bletchley Park in a wide variety of roles, shattering these preconceived notions.

Betty Webb had originally signed up with the Auxiliary Territorial Service (ATS), her reasoning, per a 2012 interview, being that she and a couple of like-minded students felt that they ought to be serving their country, ‘rather than just making sausage rolls’. After volunteering for the ATS, she found herself being interviewed at Bletchley Park in 1941. This interview resulted in a years-long career that saw her working on German and Japanese encrypted communications, all of which had to be kept secret from the then-18-year-old Betty’s parents.

Until secrecy was lifted, all those around her knew was that she was a ‘secretary’ at Bletchley Park. In reality, she was fighting on the front lines of cryptanalysis, work that was acknowledged by both the UK and French governments years later.

Writing The Rulebook

Enigma machine

Although encrypted communications had been a part of warfare for centuries, the level and scale were vastly different during World War 2, which spurred the development of mechanical and electronic computer systems. At Bletchley Park these were the Bombe and Colossus computers. The former was an electro-mechanical system used for deciphering German Enigma machine encrypted messages, while the tube-based Colossus, operational from 1943, was aimed at the Lorenz cipher used for German high-command teleprinter traffic.

After enemy messages were intercepted, it was the task of these systems and the cryptanalysis experts to decipher them as quickly as possible. With the introduction of the Enigma machine by the Axis, this had become a major challenge. Since each message was likely to relate to a current event and thus be time-sensitive, any delay in decrypting it would render the result less useful. Along with the hands-on decrypting work, there were many related tasks to make this process run as smoothly and securely as possible.

Betty’s first task at Bletchley was registering incoming messages, which she began as soon as she had been made subject to the Official Secrets Act. This forbade her from disclosing even the slightest detail of what she did or saw at Bletchley, at the risk of severe punishment.

As was typical at Bletchley Park, each member of the staff was kept as much in the dark about the whole operation as possible, for operational security reasons. This meant that of the thousands of incoming messages per day, each had to be carefully kept in order and marked with a date and an obfuscated location. She did see a Colossus computer once when it was moved into one of the buildings, but this was not one of her tasks, and snooping around Bletchley was discouraged for obvious reasons.

Paraphrasing

The Bletchley Park Mansion where Betty Webb worked initially before moving to Block F, which is now demolished. (Credit: DeFacto, Wikimedia)

Although Betty’s German language skills were pretty good, thanks to her mother’s insistence that she be able to take care of herself whilst travelling on the continent, the requirements for the translators at Bletchley were much stricter, and thus she eventually ended up working in the Japanese section located in Block F. After decrypting and translating the enemy messages, the texts were not simply sent off to military headquarters or similar, but had to be paraphrased first.

The paraphrasing task entails pretty much what it says: taking the original translated message and paraphrasing it so that the meaning is retained, but any clues about the original message from which it was derived are erased. If such a message then falls into enemy hands, via a spy at HQ for example, it is much harder to determine where this particular information was intercepted.

Betty was deemed to be very good at this task, which she attributed to her mother, who encouraged her to relate stories in her own words. As she did this paraphrasing work, the looming threat of the Official Secrets Act encouraged those involved with the work to not dwell on or remember much of the texts they read.

In May of 1945 with the war in Europe winding down, Betty was transferred to the Pentagon in the USA to continue her paraphrasing work on translated Japanese messages. Here she was the sole ATS girl, but met up with a girl from Hull with whom she had to share a room, and bed, in the rundown Cairo hotel.

With the surrender of Japan the war officially came to an end, and Betty made her way back to the UK.

Secrecy’s Long Shadow

When the work at Bletchley Park was finally made public in 1975, Betty’s parents had sadly already passed away, so she was never able to tell them the truth of what she had been doing during the war. Her father had known that she was keeping a secret, but because of his own experiences during World War 1, he had shown great understanding and appreciation of his daughter’s work.

After keeping her secrets along with everyone else at Bletchley, the Pentagon, and elsewhere, Betty wasn’t about to change anything about this. Her husband had never indicated any interest in talking about it either. In her eyes she had just done her duty and that was good enough, but when she was asked to talk about her experiences in 1990, this began a period in which she would not only give talks, but also write about her experiences. In 2015 Betty was appointed a Member of the Order of the British Empire (MBE) and in 2021 a Chevalier de la Légion d’Honneur (Knight of the Legion of Honour) in France.

Today, as more and more voices of those who experienced World War 2 and were involved in the heroic efforts to stop the Axis forces fall silent, it is more important than ever to recognize their sacrifices and ingenuity. Even if Betty Webb didn’t save the UK by her lonesome, it was the combined effort of thousands of individuals like her that cracked the Enigma encryption and provided a constant flow of intel to military command, saving countless lives in the process and enabling operations that may have significantly shortened the war.

Top image: A Colossus Mark 2 computer being operated by Dorothy Du Boisson (left) and Elsie Booker (right), 1943 (Credit: The National Archives, UK)
