Using USB to power devices is wonderful, as it frees us from a tangle of incompatible barrel & TRS connectors, not to mention a veritable gaggle of proprietary power connectors. The unfortunate side effect is that the obvious thing to do with any power connector is to add a splitter, which can backfire horribly, especially now that USB-C and USB Power Delivery (USB-PD) are in the picture. The [Quiescent Current] channel on YouTube recently went over the ways in which these handy gadgets can quite literally turn your USB-powered devices into a smoldering pile of ashes.
Much like Qualcomm’s Quick Charge protocols, USB-PD negotiates a higher voltage with the power supply, after which that same voltage is presented to any device tapped into the power lines of the USB connector. Since USB-C has also taken over duties like analog audio output, demand for splitters has only increased, but they introduce real risks. Unless you know how a splitter is wired inside, your spiffy smartphone may happily negotiate 20 V that subsequently fries a USB-powered speaker charging off the same splitter.
In the video only a resistor and an LED were sacrificed to make the point, but in a real-life scenario the damage would probably be significantly more expensive.
Everyone knows that ultrasonic cleaners are great, but not every device that’s marketed as an ultrasonic cleaner is necessarily such a device. In a recent video on the Cheap & Cheerful YouTube channel the difference is explored, starting with a teardown of a fake one. The first hint comes with the use of the description ‘Multifunction cleaner’ on the packaging, and the second in the form of it being powered by two AAA batteries.
Unsurprisingly, inside you find not the ultrasonic transducer that you’d expect to find in an actual ultrasonic cleaner, but rather a vibration motor. In the demonstration prior to the teardown you can see that although the device makes a similar annoying buzzing noise, it’s very different. Subsequently the video looks at a small ultrasonic cleaner and compares the two.
Among the obvious differences are that the ultrasonic cleaner is made of metal, is AC-powered, and does a much better job at cleaning things like rusty parts. The annoying thing is that although the cleaners with a vibration motor will also clean things, they merely agitate the water in a far less aggressive way than an ultrasonic cleaner does, so marketing them as something they’re not is deceptive at best.
In the video the argument is also made that you do not want to clean PCBs with an ultrasonic cleaner, but we think that people here may have different views on that aspect.
S/Sgt Betty Vine-Stevens, Washington DC, May 1945.
On 31 March of this year we had to bid farewell to Charlotte Elizabeth “Betty” Webb (née Vine-Stevens) at the age of 101. She was one of the cryptanalysts who worked at Bletchley Park during World War 2, and one of the few women employed there in that role. At the time, prevailing societal biases held that women were not interested in ‘intellectual work’, but as manpower ran short due to wartime mobilization, more and more women found themselves working at places like Bletchley Park in a wide variety of roles, shattering these preconceived notions.
Betty Webb had originally signed up with the Auxiliary Territorial Service (ATS); her reasoning, per a 2012 interview, was that she and a couple of like-minded students felt that they ought to be serving their country, ‘rather than just making sausage rolls’. After volunteering for the ATS, she found herself being interviewed at Bletchley Park in 1941. This interview resulted in a years-long career that saw her working on German and Japanese encrypted communications, all of which had to be kept secret from the then 18-year-old Betty’s parents.
Until secrecy was lifted, all those around her knew was that she was a ‘secretary’ at Bletchley Park. In reality she was fighting on the front lines of cryptanalysis, work that was acknowledged by both the UK and French governments years later.
Writing The Rulebook
Enigma machine
Although encrypted communications had been a part of warfare for centuries, the level and scale were vastly different during World War 2, which spurred the development of mechanical and electronic computer systems. At Bletchley Park these were the Bombe and Colossus, the former an electro-mechanical system used to decipher German Enigma machine messages, while the tube-based Colossus, which came online at the end of 1943, was aimed at the Lorenz teleprinter cipher.
After enemy messages were intercepted, it was the task of these systems and the cryptanalysts to decipher them as quickly as possible. With the introduction of the Enigma machine by the Axis, this had become a major challenge. Since each message was likely to relate to a current event and was thus time-sensitive, any delay in decrypting it made the result less useful. Along with the hands-on decrypting work, there were many supporting tasks needed to make this process run as smoothly and securely as possible.
Betty’s first task at Bletchley was registering incoming messages, which she began as soon as she had signed the Official Secrets Act. This forbade her from disclosing even the slightest detail of what she did or saw at Bletchley, under threat of severe punishment.
As was typical at Bletchley Park, each member of staff was kept as much in the dark about the wider operation as possible, for operational security reasons. Of the thousands of incoming messages per day, each had to be carefully kept in order and marked with a date and an obfuscated location. She did see a Colossus computer once when it was moved into one of the buildings, but working with it was not among her tasks, and snooping around Bletchley was discouraged for obvious reasons.
Paraphrasing
The Bletchley Park Mansion where Betty Webb worked initially before moving to Block F, which is now demolished. (Credit: DeFacto, Wikimedia)
Although Betty’s German language skills were pretty good, thanks to her mother’s insistence that she be able to take care of herself whilst travelling on the continent, the requirements for translators at Bletchley were much stricter, and so she eventually ended up working in the Japanese section located in Block F. After the enemy messages had been decrypted and translated, the texts were not simply sent to military headquarters or similar, but had to be paraphrased first.
The paraphrasing task entails pretty much what it says: taking the original translated message and rewording it so that the meaning is retained, but any clues about the exact wording of the original message are erased. Should such a message then fall into enemy hands, via a spy at HQ for example, it would be much harder to determine where this particular information had been intercepted.
Betty was deemed to be very good at this task, which she attributed to her mother, who had encouraged her to relate stories in her own words. Meanwhile, the looming threat of the Official Secrets Act encouraged those doing this paraphrasing work not to dwell on or remember much of the texts they read.
In May of 1945, with the war in Europe winding down, Betty was transferred to the Pentagon in the USA to continue her paraphrasing work on translated Japanese messages. Here she was the sole ATS girl, but she met a girl from Hull with whom she had to share a room, and a bed, in the rundown Cairo Hotel.
With the surrender of Japan the war officially came to an end, and Betty made her way back to the UK.
Secrecy’s Long Shadow
When the work at Bletchley Park was finally made public in 1975, Betty’s parents had sadly already passed away, so she was never able to tell them the truth of what she had been doing during the war. Her father had known that she was keeping a secret, but because of his own experiences during World War 1, he had shown great understanding and appreciation of his daughter’s work.
Like everyone else from Bletchley, the Pentagon and elsewhere, Betty kept her secrets, and she wasn’t about to change that. Her husband had never indicated any interest in talking about it either. In her eyes she had simply done her duty and that was good enough, but when she was asked to talk about her experiences in 1990, it began a period in which she would not only give talks, but also write about her time at Bletchley. In 2015 Betty was appointed a Member of the Order of the British Empire (MBE), and in 2021 she was made a Chevalier de la Légion d’Honneur (Knight of the Legion of Honour) in France.
Today, as more and more of the voices of those who experienced World War 2 and were involved in the heroic efforts to stop the Axis forces fall silent, it is more important than ever to recognize their sacrifices and ingenuity. Even if Betty Webb didn’t save the UK by her lonesome, it was the combined effort of thousands of individuals like her that cracked the Enigma encryption and provided a constant flow of intel to military command, saving countless lives in the process and enabling operations that may have significantly shortened the war.
Top image: A Colossus Mark 2 computer being operated by Dorothy Du Boisson (left) and Elsie Booker (right), 1943 (Credit: The National Archives, UK)
History is rather dull to most people, which naturally invites exciting flights of fancy that can range from the innocent to outright conspiracies. Nobody truly believes that the astounding finds and (fully functioning) ancient mechanisms in the Indiana Jones & Uncharted franchises are real, with mostly intact ancient cities waiting for intrepid explorers along with whatever mystical sources of power, wealth or influence formed the civilization’s foundations before its tragic demise. Yet somehow Plato’s fictional Atlantis has taken on a life of its own, along with many other ‘lost’ civilizations, whether real or imagined.
Of course, if these aforementioned movies and video games were realistic, they would center around a big archaeological dig and thrilling finds like pot shards and cuneiform clay tablets, not ways to smite enemies and gain immortality. Nor would it involve solving complex mechanical puzzles to gain access to the big secret chamber, prior to walking out of the readily accessible backdoor. Reality is boring like that, which is why there’s a major temptation to spruce things up. With the Egyptian pyramids as well as similar structures around the world speaking to the human imagination, this has led to centuries of half-baked ideas and outright conspiracies.
Most recently, a questionable 2022 paper hinting at structures underneath the Pyramid of Khafre in Egypt was used for a fresh boost to old ideas involving pyramid power stations, underground cities and other fanciful conspiracies. Although we can all agree that the ancient pyramids in Egypt are true marvels of engineering, are we really on the cusp of discovering that the ancient Egyptians were actually provided with Forerunner technology by extraterrestrials?
The Science of Being Tragically Wrong
A section of the ‘runes’ at Runamo. (Credit: Entheta, Wikimedia)
In defense of fanciful theories regarding the Actual Truth about Ancient Egypt and kin, archaeology as we know it today didn’t really develop until the latter half of the 20th century, with the field being mostly a hobbyist thing that people did out of curiosity as well as a desire for riches. Along the way many comical blunders were made, such as the Runamo runes in Sweden that turned out to be just random cracks in dolerite.
Less funny were the attempts to erase the history of Great Zimbabwe (11th – ~17th century CE) and the Kingdom of Zimbabwe after the ruins of the abandoned capital were discovered by European colonists and explored in earnest by the 19th century. Much like the wanton destruction of local cultures in the Americas by European colonists and explorers who considered their own culture, religion and technology to be clearly superior, the history of Great Zimbabwe was initially rewritten to claim that no thriving African society had formed there on its own, and that the ruins were instead the result of outside influences.
In this regard it’s interesting how many harebrained ideas about archaeological sites have now effectively flipped, with mystical and mythical properties being assigned and these ‘Ancients’ being almost worshipped. Clearly, aliens visited Earth and that led to pyramids being constructed all around the globe. These would also have been the same aliens or lost civilizations that had technology far beyond today’s cutting edge, putting Europe’s fledgling civilization to shame.
Hence people keep dogpiling especially on the pyramids of Giza and their surrounding complex, assigning mystical properties to their ventilation shafts and expecting hidden chambers stuffed with technology and treasure throughout and below the structures.
Lost Technology
The Giant’s Causeway in Northern Ireland. (Credit: code poet, Wikimedia)
The idea of ‘lost technology’ is a pervasive one, mostly buoyed by the fact that you cannot prove a negative, only point to the absence of evidence. Much like the possibility of a teapot being in orbit around the Sun right now, you cannot conclusively prove that the Ancient Egyptians did not have hyper-advanced power plants running on zero-point energy back around 3,600 BCE. This ties in with the idea of ‘lost civilizations’, which really caught on around the Victorian era.
Such romanticism for a non-existent past led to the idea of Atlantis as a real, lost civilization becoming pervasive, with the 1960s seeing significant hype around the Bimini Road. This undersea rock formation in the Bahamas was said to have been part of Atlantis, but is actually a perfectly cromulent geological formation. More recently a couple of German tourists got into legal trouble while trying to prove a connection between Egypt’s pyramids and Atlantis, a theory that refuses to die, along with the notion that Atlantis was some kind of hyper-advanced civilization and not just a fictional society that Plato concocted to illustrate the folly of man.
Admittedly there is a lot of poetry in all of this when you consider it from that angle.
Welcome to Shangri-La… or rather Shambhala as portrayed in Uncharted 3.
People have spent decades of their lives and countless sums of money trying to find Atlantis, Shangri-La (possibly inspired by Shambhala), El Dorado and similar fictional locations. The Iram of the Pillars, which featured in Uncharted 3: Drake’s Deception, is one of the lost cities mentioned in the Qur’an, and is incidentally another great civilization said to have met a grim end through divine punishment. Iram is often identified with Ubar, commonly known as the Atlantis of the Sands.
All of this is reminiscent of the Giant’s Causeway in Northern Ireland and the corresponding formation at Fingal’s Cave on the Scottish isle of Staffa, where eons ago molten basalt cooled and contracted into columns, much in the way that drying mud cracks into semi-regular patterns. This particular natural formation did lead to many local myths, including one about a giant building a causeway across the North Channel, hence the name.
Fortunately for this location, no ‘lost civilization’ tag has become attached to it, and thus it remains a curious demonstration of how purely natural processes can create structures that one might assume to have required intelligence, thereby providing fuel for conspiracies. So far only ‘Young Earth’ conspiracy folk have laid a claim on this particular site.
What we can conclude is that, much like the Victorian age that spawned countless works of fiction on the topic, many of these modern-day stories appear to be rooted in a kind of romanticism for a past that never existed, with those affected interpreting natural patterns as something more, in a sure sign of confirmation bias.
Tourist Traps
Tomb of the First Emperor Qin Shi Huang Di, Xi’an, China (Credit: Aaron Zhu)
One can roughly map the number of tourist visits to a site onto the likelihood of wild theories being dreamed up about it. This includes the Egyptian pyramids, but also similar structures at the former heart of the Aztec and Maya civilizations. Similarly, the absolutely massive mausoleum of Qin Shi Huang in China, with its world-famous Terracotta Army, has led to incredible speculation about what might still be hidden inside the unexcavated tomb mound: entire seas and rivers of mercury moved mechanically to simulate real bodies of water, a simulated starry sky, crossbows set to take out trespassers, and incredible riches.
Many of these features were described by Sima Qian in the first century BCE, who may or may not have been truthful in his biography of Qin Shi Huang. Meanwhile, China’s authorities have wisely put further excavations on hold, as they have found that many of the recovered artefacts degrade very quickly once exposed to air. The paint on the terracotta figures began to flake off rapidly after excavation, for example, reducing them to the plain figures which we are familiar with.
Tourism can be as damaging as careless excavation. As popular as the pyramids at Giza are, centuries of tourism have taken their toll, with vandalism, graffiti and theft increasing rapidly since the 20th century. The Great Pyramid of Khufu had already been pilfered for building materials over the course of millennia by the local population, but due to tourism some of its remaining top stones were unceremoniously tipped over the side to make a larger platform where tourists could have some tea while gazing out over the Giza Plateau, as detailed in a recent video on the History for Granite channel:
The recycling of building materials from antique structures was also the cause of the demise of the Labyrinth at the foot of the pyramid of Amenemhat III at Hawara. Once an architectural marvel, reportedly with twelve roofed courts and spanning a total of 28,000 m2, today only fragments of it remain. This sadly is how most marvels of the Ancient World end up: looted ruins, ashes and shards left in the sand and mud or reclaimed by nature, from which, with a lot of patience and the occasional stroke of fortune, we can piece together a picture of what they once may have looked like.
Pyramid Power
Cover of The Giza Power Plant book. (Credit: Christopher Dunn)
When in light of all this we look at the claims made about the Pyramid of Khafre and the persistent conspiracies regarding this and other pyramids hiding great secrets, we can begin to see something of a pattern. Some people have really bought into these fantasies, while for others it’s just another way to embellish a location, to attract more gullible tourists and sell more copies of their latest book on the extraterrestrial nature of pyramids and how they are actually amazing lost technologies. This latter category is called pseudoarcheology.
Pyramids, of course, have always been ascribed magical powers, but the idea that they are literal power plants seems to have originated with one Christopher Dunn, with the publication of his pseudo-archeological book The Giza Power Plant in 1998. The claim that there are further structures underneath the Pyramid of Khafre is a more recent invention, however. Feeding this particular flight of fancy appears to be a 2022 paper by Filippo Biondi and Corrado Malanga, in which synthetic aperture radar (SAR) was used to examine said pyramid’s interior and subsurface features.
Somehow this got turned into claims about multiple deep vertical wells descending 648 meters, along with other structures. Shared mostly via conspiracy channels, these claims wildly extrapolate from the paper by Biondi et al., whose SAR-based conclusions have never been peer-reviewed or independently corroborated. The RationalWiki entry on the Giza pyramids savagely files these and other such claims under the category of ‘pyramidiots’.
The art that conspiracy nuts produce when provided with generative AI tools. (Source: Twitter)
Back in the real world, archaeologists have found a curious L-shaped area underneath a royal graveyard near Khufu’s pyramid that was apparently later filled in, but which seems to lead to a deeper structure. This is likely to be part of the graveyard, but may also have been a feature that was abandoned during construction. Currently this area is being excavated, so we’re likely to figure out more details after archaeologists have finished gently sifting through tons of sand and gravel.
There is also the ScanPyramids project, which uses non-destructive and non-invasive techniques to scan Old Kingdom-era pyramids, such as muon tomography and infrared thermography. This way the internal structure of these pyramids can be examined in-depth. One finding was that of a number of ‘voids’, which could mean any of a number of things, but most likely do not contain world-changing secrets.
To this day the most credible view is still that the pyramids of the Old Kingdom were used as tombs, though unlike the mastabas and similar tombs, there is a good argument to be made that rather than being designed to be hidden away, these pyramids were meant as eternal monuments to the pharaoh. They would be open for worship of the pharaoh, hence the relative ease of getting inside them. Ironically this would have made them more secure against graverobbers, which was a great idea right up until the demise of the Ancient Egyptian civilization.
This is a point that’s made succinctly on the History for Granite channel, with the conclusion being that this goal of ‘inspiring awe’ in worshippers is still effective today, judging by the millions of tourists who visit these monuments each year, and the tall tales that they’ve inspired.
Split keyboards are becoming more popular, but because they’re still relatively niche, they can be rather expensive if you want to buy one. So why not make your own? Sure, you could assemble one from a kit, but why not take a cheap mechanical keyboard, slice it in half and just (waves hands) connect the two halves back together? If this thought appeals to you, then [nomolk]’s literal hackjob video should not be ignored. Make sure to enable English subtitles for the Japanese-language video.
Easy split keyboard tip: just reconnect both halves… (Credit: nomolk, YouTube)
In it, a fancy (but cheap) mechanical keyboard with full RGB functionality is purchased and tested prior to meeting its demise. Although the left side with the cable and controller still works after the cut, the right side now needs to be reconnected, which is where a lot of tedious wiring has to be soldered in to repair the severed traces.
Naturally this goes wrong, so it’s important to take a (sushi) break and admire the sunset before hurling oneself at tracking down the faulty wiring. The process and the keyboard matrix are further detailed in the accompanying blog entry (in Japanese).
Although this was perhaps easier than the other split keyboard hack involving a membrane keyboard, this tongue-in-cheek project demonstrates the limits of practicality of the approach, even if it could be cleaned up with fancier wiring.
We give it full points for going the whole way, however, and making the keyboard work again in the end.
When the AMSAT-OSCAR 7 (AO-7) amateur radio satellite was launched in 1974, its expected lifespan was about five years. The plucky little satellite made it to 1981, when a battery failure caused it to be written off as dead. Then, in 2002, it came back to life. The prevailing theory is that one of the cells in the satellite’s NiCd battery pack, in an extremely rare event, failed open — thus allowing the satellite to run (intermittently) off its solar panels.
A recent video by [Ben] on the AE4JC Amateur Radio YouTube channel goes over the construction of AO-7, its operation, death and subsequent revival, as well as a recent QSO (direct contact).
The battery is made up of multiple individual cells.
The solar panels covering this satellite provided a grand total of 14 watts at maximum illumination, which later dropped to 10 watts, making for a pretty small power budget. The entire satellite was assembled in a ‘clean room’ consisting of a sectioned-off part of a basement, with components produced by enthusiasts associated with AMSAT around the world. Onboard are two radio transponders, Mode A at 2 meters and Mode B at 10 meters, as well as four beacons, only three of which are active due to an international treaty affecting the 13 cm beacon.
Positioned in a geocentric LEO (1,447 – 1,465 km) orbit, the satellite is, quite amazingly, still mostly operational after 50 years. Much of this is due to how it smartly uses magnets to align itself with the Earth’s magnetic field, as well as the impact of photons to maintain its spin. This passive attitude control, combined with the relatively high altitude, should allow AO-7 to function pretty much indefinitely as long as the PV panels keep producing enough power. All because a NiCd battery failed in a very unusual way.
Fake WiFi repeater with a cheap real one behind it. (Credit: Big Clive, YouTube)
Over the years we have seen a lot of fake electronics, ranging from fake power-saving devices that you plug into an outlet to fake car ECU optimizers that you stick into the OBD port. These are all similar in that they fake functionality while happily lighting up an LED or two to indicate that they’re doing ‘something’. Less expected was that we’d be seeing fake WiFi repeaters, but recently [Big Clive] got his hands on one and undertook the arduous task of reverse-engineering it.
The simple cardboard box it comes in claims that it’s a 2.4 GHz unit that operates at 300 Mbps, which is about what you’d expect at this price. [Clive] had previously obtained a genuine WiFi repeater boasting similar specifications that did indeed work. The dead giveaway that this one is a fake is its clearly fake antennae, along with the fact that once you plug it in, no new WiFi network pops up, nor anything else.
Inside the case – which looks very similar to that of the genuine repeater – there is just a small PCB attached to the USB connector. On the PCB are a 20 Ohm resistor and a blue LED, which means that the LED is being completely overdriven and is likely to die quite rapidly. Considering that a WiFi repeater is supposed to require a setup procedure, it’s possible that these fake repeaters target an audience that does not quite understand what these devices are supposed to do, but they can also catch more informed buyers unawares who thought they were buying one of the cheap real ones. Caveat emptor, indeed.
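As a quick sanity check on that ‘completely overdriven’ claim, here is a rough estimate of our own, assuming the 5 V USB rail and a typical blue LED forward voltage of about 3 V (neither figure is measured in the video):

$$ I \approx \frac{5\ \text{V} - 3\ \text{V}}{20\ \Omega} = 100\ \text{mA} $$

That is roughly five times the 20 mA a typical indicator LED is rated for, so even the one thing this gadget actually does, it won’t do for long.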
It’s rarely appreciated just how much more complicated nuclear fusion is than nuclear fission. Whereas the latter involves a process that happens all around us without any human involvement, and where the main challenge is to keep the nuclear chain reaction within safe bounds, nuclear fusion means making atoms do something that goes against their very nature, outside of a star’s interior.
Fusing hydrogen isotopes can be done on Earth fairly readily these days, but doing it in a way that’s repeatable — bombs don’t count — and in a way that makes economic sense is trickier. As covered previously, plasma stability is a problem with the popular approach of tokamak-based magnetic confinement fusion (MCF). Although this core problem has now been largely addressed, and stellarators are mostly unbothered by it, a Canadian start-up figures that it can do even better, in the form of a nuclear fusion reactor based around the principle of magnetized target fusion (MTF).
Although General Fusion’s piston-based fusion reactor leaves many people rather confused, MTF is based on real physics, and with GF’s current LM26 prototype having recently achieved first plasma, this seems like an excellent time to ask what MTF actually is, and whether it can truly compete with billion-dollar tokamak-based projects.
Squishing Plasma Toroids
Lawson criterion of important magnetic confinement fusion experiments (Credit: Horvath, A., 2016)
In general, to achieve nuclear fusion the target nuclei have to be pushed past the Coulomb barrier, the electrostatic repulsion that normally prevents them from getting close enough to each other to fuse. In stars, the process of nucleosynthesis is enabled by the intense pressure due to the star’s mass, which overcomes this electrostatic force.
Replicating the nuclear fusion process requires a similar way to overcome the Coulomb barrier, but in lieu of even a small star like our Sun, we need other means: much higher temperatures, alternative ways to provide pressure, and longer confinement times. The efficiency of each approach was originally captured in the Lawson criterion, developed by John D. Lawson in a (then classified) 1955 paper (PDF on Archive.org).
In order to achieve a self-sustaining fusion reaction, the energy losses should be less than the energy produced by the reaction. The break-even point is expressed as a Q (energy gain factor) of 1, where the fusion power produced matches the heating power put in. For sustained fusion with excess energy generation, the Q value should be well above 1, typically around 5 for contemporary fuels and fusion technology.
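In symbols, and using commonly quoted textbook figures for deuterium-tritium fuel rather than anything specific to General Fusion’s approach, the bookkeeping looks roughly like this:

$$ Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}}, \qquad Q = 1 \;\text{(scientific breakeven)}, \qquad Q \gtrsim 5 \;\text{(self-heating becomes significant)} $$

while the Lawson triple product gives a ballpark threshold for D-T ignition:

$$ n \, T \, \tau_E \gtrsim 3 \times 10^{21}\ \text{keV·s·m}^{-3} $$

with \(n\) the plasma density, \(T\) its temperature and \(\tau_E\) the energy confinement time.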
In the slow march towards ignition, we have seen many reports in the popular media that turn out to be rather meaningless, such as the horrendous inefficiency demonstrated by the laser-based inertial confinement fusion (ICF) at the National Ignition Facility (NIF). This makes it rather fascinating that what General Fusion is attempting is closer to ICF, just without the lasers and artisan Hohlraum-based fuel pellets.
Instead they use a plasma injector, a type of plasma railgun called a Marshall gun, that produces a hydrogen isotope plasma, which is subsequently contained in a magnetic field as a self-stable compact toroid. This toroid is then squished by a mechanical system in a matter of milliseconds, with the resulting compression inducing fusion. Creating this toroid is the feat that was recently demonstrated in the current Lawson Machine 26 (LM26) prototype reactor, with its first plasma in the target chamber.
Magneto-Inertial Fusion
Whereas magnetic confinement fusion does effectively what it says on the tin, magnetized target fusion is pretty much a hybrid of magnetic confinement fusion and laser-based inertial confinement fusion. Because the magnetic confinement is only there to keep the plasma in a nice stable toroid, it doesn’t have nearly the same requirements as in a tokamak or stellarator. Yet rather than using complex and power-hungry lasers, MTF applies mechanical energy using an impulse driver — the liner — that rapidly compresses the low-density plasma toroid.
Schematic of the Lawson Machine 26 MTF reactor. (Credit: General Fusion)
The juiciest parts of General Fusion’s experimental setup can be found in the Research Library on the GF website. The above graphic was copied from the LM26 poster (PDF), which provides a lot of in-depth information on the components of the device and its operation, as well as the experiments that informed its construction.
The next step will be to test the ring compressor that is designed to collapse the lithium liner around the plasma toroid, compressing it and achieving fusion.
Long Road Ahead
Interpretation of General Fusion’s commercial MTF reactor design. (Credit: Evan Mason)
As promising as this may sound, there is still a lot of work to do before MTF can be considered a viable option for commercial fusion. As summarized on the Wikipedia entry for General Fusion, the goal is to have a liquid liner rather than the solid lithium liner of LM26. This liquid lithium liner would both breed new tritium fuel through neutron exposure and provide the liner that compresses the deuterium-tritium fuel.
This liquid liner would also provide cooling, linked to a heat exchanger or steam generator to produce electricity. Because the liquid liner would be continuously renewed, it should allow for about one cycle per second. To keep the liquid liner in place on the inside of the sphere, it would need to be constantly spun, further complicating the design.
Although getting plasma in the reaction chamber where it can be squished by the ring compressor’s lithium liner is a major step, the real challenge will be in moving from a one-cycle-a-day MTF prototype to something that can integrate not only the aforementioned features, but also run one cycle per second, while being more economical to run than tokamaks, stellarators, or even regular nuclear fission plants, especially Gen IV fast neutron reactors.
That said, there is a strong argument to be made that MTF is significantly more practical for commercial power generation than ICF. And regardless, it is just really cool science and engineering.
Top image: General Fusion’s Lawson Machine 26. (Credit: General Fusion)
Every year, USB flash drives get cheaper and hold more data. Unfortunately, they don’t always get faster. The reality is, many USB 3.0 flash drives aren’t noticeably faster than their USB 2.0 cousins, as [Chase Fournier] found with the ultra-cheap specimens purchased over at his local Micro Center store.
Although these all have USB 3.0 interfaces, they transfer at less than 30 MB/s, but why exactly? After popping open a few of these drives the answer appears to be that they use the old-style Phison controller (PS2251-09-V) and NAND flash packages that you’d expect to find in a USB 2.0 drive.
Across the 32, 64, and 256 GB variants the same Phison controller is used, but the PCB has provisions for either twin TSOP packages or a single BGA package. The latter package turned out to be identical to the flash found in the iPhone 8. Also interesting was that the two 256 GB drives [Chase] bought had differently packaged Phison chips, one BGA and the other QFP. Meanwhile some flash drives use eMMC chips instead, which are significantly faster, as demonstrated in the video.
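Some quick arithmetic shows what those speeds mean in practice for the largest drive, assuming sustained transfer rates; the 150 MB/s figure for a decent USB 3.0 stick is our own ballpark, not a number from [Chase]’s testing:

$$ \frac{256{,}000\ \text{MB}}{30\ \text{MB/s}} \approx 8{,}500\ \text{s} \approx 2.4\ \text{hours}, \qquad \frac{256{,}000\ \text{MB}}{150\ \text{MB/s}} \approx 28\ \text{minutes} $$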
It would seem that you really do get what you pay for, with $3 “USB 3.0” flash drives providing the advertised storage, but you really need to budget in the extra time that you’ll be waiting for transfers.
Recently Raspberry Pi publicly announced the release of their new rpi-image-gen tool, which is advertised as making custom Raspberry Pi OS (i.e. Debian for specific Broadcom SoCs) images in a much more streamlined fashion than the existing pi-gen tool or third-party solutions. The general idea seems to be that the user fetches the tool from the GitHub project page, before running the build.sh script with parameters defining the configuration file and other options.
The main advantage of this tool is said to be that it uses binary packages rather than (cross-)compiling, while providing a range of profiles and configuration layers to target specific hardware & requirements. Two examples are provided in the GitHub project: one for a ‘slim’ project, the other for a ‘webkiosk’ configuration that runs a browser in a restricted (Cage) environment, with the required packages installed in the final image.
Looking at the basic ‘slim’ example, it defines an INI-style configuration in config/pi5-slim.cfg, but even after browsing through the main README it’s still somewhat obtuse. Under device it references the mypi5 subfolder, which contains its own shell script plus a cmdline.txt and an fstab file. Under image it references the compact subfolder with another bunch of files in it. Although this will no doubt make a lot more sense after taking a few days to prod & poke at it, it’s clear that this is not a tool for casual users who just want to quickly put a custom image together.
This is also reflected in the Raspberry Pi blog post, which strongly suggests that this tool is aimed at commercial & industrial customers rather than hobbyists.
Although NVidia’s current disastrous RTX 50-series is getting all the attention right now, this wasn’t the company’s first misstep. Back in 2014, when NVidia released the GTX 970, users were quickly dismayed to find that their ‘4 GB VRAM’ GPU actually had just 3.5 GB of full-speed VRAM, with the remaining 512 MB accessed in a much slower way, at just 1/7th of the normal speed. Back then NVidia ended up paying a $30-per-card settlement to disgruntled customers, but there’s a way to at least partially fix these GPUs, as demonstrated by a group of Brazilian modders (original video with horrid English auto-dub).
The mod itself is quite straightforward, with the original 512 MB, 7 Gbps GDDR5 memory modules replaced with 1 GB, 8 Gbps chips, and a resistor added on the PCB to make the GPU recognize the higher-density VRAM ICs. Although this doesn’t fix the fundamental split-VRAM issue of the ASIC, it does give it access to 7 GB of faster, higher-density VRAM. In benchmarks performance increased massively, with Unigine Superposition showing nearly a doubling of the score.
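For a rough, back-of-the-envelope sense of what the faster chips buy you, assuming the card’s full 256-bit memory bus and ignoring the slower 0.5 GB segment (these are datasheet-style numbers of our own, not measurements from the video):

$$ \frac{7\ \text{Gb/s/pin} \times 256\ \text{pins}}{8\ \text{bit/byte}} = 224\ \text{GB/s} \quad\longrightarrow\quad \frac{8\ \text{Gb/s/pin} \times 256\ \text{pins}}{8\ \text{bit/byte}} = 256\ \text{GB/s} $$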
In addition to giving this GTX 970 a new lease on life, it also shows just how important having more VRAM on a GPU is, which is ironic in this era where somehow GPU manufacturers deem 8 GB of VRAM to be acceptable in 2025.
Recently the ReactOS project released the much anticipated 0.4.15 update, making it the first major release since 2020. Despite what might seem like a minor version bump from the previous 0.4.14 release, the update introduces sweeping changes to everything from the kernel to the user interface and aspects like the audio system and driver support. Those who have used the nightly builds over the past years will likely have noticed a lot of these changes already.
Japanese input with MZ-IME and CJK font (Credit: ReactOS project)
A notable change is to plug-and-play support, which enables more third-party drivers and booting from USB storage devices. The Microsoft FAT filesystem driver from the Windows Driver Kit can now be used courtesy of better compatibility, there is now registry healing, and caching and kernel access checks have been implemented. The latter improvement means that many ReactOS modules can now work in Windows too.
On the UI side there is a much improved IME (input method editor) feature, along with native ZIP archive support and various graphical tweaks.
Meanwhile, since 0.4.15 branched off the master branch six months ago, the latter has seen even more features added, including SMP improvements, UEFI support, a new NTFS driver and improvements to power management and application support. All of this is accompanied by many bug fixes, which makes it well worth it to regularly check out the nightly builds.
Who hasn’t dreamed of serving on the bridge of a Star Trek starship? Although the EmptyEpsilon project isn’t adorned with the Universe-famous LCARS user interface, it does provide a comprehensive bridge simulation in a multiplayer setting. Designed as a LAN or WAN multiplayer game hosted by a server that also drives the main screen, it requires four to six additional devices to handle the non-captain stations: helm, weapons, engineering, science and relay, the latter of which includes comms.
Scenarios are created by the game master, not unlike a D&D game, with the site providing a reference and various examples of how to go about this.
The free and open source game’s binaries can be obtained directly from the site, but it’s also available on Steam. The game isn’t limited to just Trek either, but scenarios can be crafted to fit whatever franchise or creative impulse feels right for that LAN party.
Sound cards on PC-compatible computer systems have a rather involved and convoluted history, with not only a wide diversity of proprietary standards, but also a collection of sound cards that were never advertised as such. Case in point: the 1985 Mindscape Music Board, an add-on ISA card that came bundled with [Glen Clancy]’s Bank Street Music Writer software for the IBM PC. This contrasted with the Commodore 64 version, which used the Commodore SID sound chip. Recently both [Tales of Weird Stuff] and [The Oldskool PC] on YouTube decided to cover this very rare soundcard.
Based around two General Instruments AY-3-8913 programmable sound generators, it enabled the output of six voices, mapped to six instruments in the Bank Street Music Writer software. Outside of this use the card saw no adoption, however, and it faded into obscurity along with the software that it was originally bundled with. Only four cards are said to still exist, with [Tales of Weird Stuff] getting their grubby mitts on one.
As a rare slice of history, it is good to see this particular card getting some more love and attention, as it was, and still is, quite capable. [The Oldskool PC] notes that because the GI chip used is well-known and used everywhere, adding support for it in software and emulators is trivial, and efforts to reproduce the board are already underway.
Top image: Mindscape Music Board (Credit: Ian Romanick)
Until the late 1990s, the concept of a 3D accelerator card was something generally associated with high-end workstations. Video games and kin would run happily on the CPU in one’s desktop system, with later extensions like MMX, 3DNow! and SSE providing a significant performance boost for games that supported them. As 3D accelerator cards (colloquially called graphics processing units, or GPUs) became prevalent, they took over almost all SIMD vector tasks, but one thing they’re not good at is being a general parallel computer. While working on a software project this really ticked [Raph Levien] off and inspired him to air his grievances.
Although the interaction between CPUs and GPUs has become tighter over the decades, with PCIe in particular being a big improvement over AGP & PCI, GPUs are still terrible at running arbitrary computing tasks, and PCIe links remain glacial compared to communication within the GPU & CPU dies. With the introduction of asynchronous graphics APIs this divide became even more pronounced. The proposal, then, is to invert this relationship.
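To put some rough numbers on ‘glacial’ (ballpark figures of our own, not from [Raph]’s post, assuming a PCIe 5.0 x16 link and a current HBM3-equipped accelerator):

$$ 16\ \text{lanes} \times 32\ \text{Gb/s} = 512\ \text{Gb/s} \approx 64\ \text{GB/s per direction} \qquad \text{vs.} \qquad \sim 3\text{–}5\ \text{TB/s for on-package HBM3} $$

That is a gap of well over an order of magnitude in bandwidth, before latency even enters the picture.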
There’s precedent for such an inversion already, with Intel’s Larrabee and IBM’s Cell processor merging CPU and GPU characteristics on a single die, though both struggled with the difficulty of developing for such a new kind of architecture. Sony’s PlayStation 3 was ultimately forced to add a discrete GPU due to these issues. There is also the DirectStorage API in DirectX, which bypasses the CPU when loading assets from storage, effectively adding CPU features to GPUs.
As [Raph] notes, so-called AI accelerators also have these characteristics, with often multiple SIMD-capable, CPU-like cores. Maybe the future is Cell after all.
After [Andy]’s discovery of an old ISA soundcard at his parents’ place that once was inside the family PC, the onset of a wave of nostalgia for those old-school sounds drove him off the deep end. This is how we get [Andy] building the fastest MS-DOS gaming system ever, with an ISA slot and full hardware compatibility. After some digging around, the fastest CPU for an Intel platform that still retained ISA compatibility turned out to be Intel’s 4th-generation Core i7-4790K, paired with an H81 chipset-based Mini-ITX mainboard.
Of note is that ISA slots on these newer boards are basically unheard of outside of niche industrial applications, ergo [Andy] had to tap into the LPC (Low Pin Count) debug port and hunt down the LDRQ signal on the mainboard. LPC is essentially a serialized, low-pin-count replacement for the ISA bus, which means it works great with ISA adapter boards, especially an LPC-to-ISA adapter like [Andy]’s dISAppointment board as used here.
A PCIe graphics card (NVidia 7600 GT, 256 MB VRAM), the ISA soundcard, a dodgy PSU and a SATA SSD were added to a period-correct case. After this, Windows 98 was installed from a USB stick within a minute using [Eric Voirin]’s Windows 98 Quick Install. This gave access to MS-DOS and enabled the first tests, followed by benchmarking.
Benchmarking MS-DOS on a system this fast turned out to be somewhat messy, with puzzling results. The reason was that the default BIOS settings limited the CPU to non-turbo speeds under MS-DOS. With that fixed, the system turned out to be really quite fast at MS-DOS (and Windows 98) games, to nobody’s surprise.
Polymers are one of the most important elements of modern-day society, particularly in the form of plastics. Unfortunately most common polymers are derived from fossil resources, which not only makes them a finite resource, but is also problematic from a pollution perspective. A potential alternative being researched is that of biopolymers, in particular those produced by microorganisms such as everyone’s favorite bacterium Escherichia coli (E. coli).
These bacteria were the subject of a recent biopolymer study by [Tong Un Chae] et al., as published in Nature Chemical Biology (paywalled; there is a breakdown on Ars Technica).
By genetically engineering E. coli bacteria to use one of their survival energy-storage pathways to instead synthesize long chains of polyester amides (PEAs), the researchers were able to make the bacteria create long chains of mostly pure PEA. A complication is that this modified pathway is not exactly picky about which amino acid monomers it sticks onto the chain next, metabolic byproducts included.
Although using genetically engineered bacteria for the synthesis of products on an industrial scale isn’t uncommon (see e.g. the synthesis of insulin), it would seem that biosynthesis of plastics using our prokaryotic friends isn’t quite ready yet to graduate from laboratory experiments.
The prototype DACCU device for producing syngas from air. (Credit: Sayan Kar, University of Cambridge)
There is more carbon dioxide (CO2) in the atmosphere these days than ever before in human history, and while it would be marvelous to use these carbon atoms for something useful, capturing CO2 directly from the air isn’t that easy. After capture, it would also be great if you could do something more with it than stuff it into a big hole. Something like producing syngas (CO + H2), for example, as demonstrated by researchers at the University of Cambridge.
Among the improvements claimed in the paper as published in Nature Energy for this direct air capture and utilization (DACCU) approach is that it does not require a pure CO2 feedstock, but instead adsorbs CO2 directly from air passing over a bed of solid silica-amine. After adsorption, the CO2 can be released again by exposure to concentrated light. Following this, the conversion to syngas is accomplished by passing it over a second bed consisting of silica/alumina-titania-cobalt bis(terpyridine), which acts as a photocatalyst.
The envisioned usage scenario would be CO2 adsorption during the night, with concentrated solar power releasing it during the day and subsequent production of syngas. Inlet air would be passed over the adsorption section only, with the inlet switched off during the syngas-generating phase. As a lab proof-of-concept it seems to work well, with the outlet air stripped of virtually all CO2 and a very high conversion ratio from CO2 to syngas.
Syngas has historically been used as a replacement for gasoline, but it is also used as a source of hydrogen (produced today mostly via steam methane reforming (SMR) of natural gas), for the reduction of iron ore, and for the production of methanol as a precursor to many industrial processes. Whether this DACCU approach provides a viable alternative to SMR and other existing technologies will become clear once the technology moves from the lab into the real world.
Over the decades there have been many denominations coined to classify computer systems, usually when they got used in new fields or when technological improvements caused significant shifts. While the very first electronic computers were very limited and often not programmable, they would soon morph into something that we’d recognize today as a computer, starting with World War 2’s Colossus and ENIAC, which saw use in cryptanalysis and military weapons programs, respectively.
The first commercial digital electronic computer wouldn’t appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5 ton systems mostly found their way to universities and kin, where they’d find welcome use in engineering, architecture and scientific calculations. This became the focus of new computer systems, effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.
A few decades later, more computing power could be crammed into less space than ever before, including ever higher-density storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper- and mainframe computer systems, one could be excused for asking the question: what even is a supercomputer?
Today’s Supercomputers
ORNL’s Summit supercomputer, fastest until 2020 (Credit: ORNL)
Perhaps a fair way to classify supercomputers is to say that ‘supercomputer’ is a highly time-limited label. During the 1940s, Colossus and ENIAC were without question the supercomputers of their era, while 1976’s Cray-1 wiped the floor with everything that came before, yet all of these are archaic curiosities next to today’s top two supercomputers. Both the El Capitan and Frontier supercomputers are exascale (1+ exaFLOPS in double-precision IEEE 754 calculations) machines, based around commodity x86_64 CPUs in a massively parallel configuration.
Taking up 700 m2 of floor space at the Lawrence Livermore National Laboratory (LLNL) and drawing 30 MW of power, El Capitan’s 43,808 AMD EPYC CPUs are paired with the same number of AMD Instinct MI300A accelerators, each containing 24 Zen 4 cores plus CDNA3 GPU and 128 GB of HBM3 RAM. Unlike the monolithic ENIAC, El Capitan’s 11,136 nodes, containing four MI300As each, rely on a number of high-speed interconnects to distribute computing work across all cores.
At LLNL, El Capitan is used for effectively the same top secret government things as ENIAC was, while Frontier at Oak Ridge National Laboratory (ORNL) was the fastest supercomputer before El Capitan came online about three years later. Although currently LLNL and ORNL have the fastest supercomputers, there are many more of these systems in use around the world, even for innocent scientific research.
Looking at the current list of supercomputers, such as today’s Top 9, it’s clear that not only can supercomputers perform a lot more operations per second, they are also invariably massively parallel computing clusters. This wasn’t a change that came easily, as parallel computing brings with it a whole stack of complications and problems.
The Parallel Computing Shift
ILLIAC IV massively parallel computer’s Control Unit (CU). (Credit: Steve Jurvetson, Wikimedia)
The first massively parallel computer was the ILLIAC IV, conceptualized by Daniel Slotnick in 1952 and first successfully put into operation in 1975 when it was connected to ARPANET. Although only one quadrant was fully constructed, it produced 50 MFLOPS compared to the Cray-1’s 160 MFLOPS a year later. Despite the immense construction costs and spotty operational history, it provided a most useful testbed for developing parallel computation methods and algorithms until the system was decommissioned in 1981.
There was a lot of pushback against the idea of massively parallel computation, however, with Seymour Cray famously comparing the use of many parallel vector processors instead of a single large one to ‘plowing a field with 1024 chickens instead of two oxen’.
Ultimately there is only so far you can scale a single vector processor, of course, while parallel computing promised much better scaling, as well as the use of commodity hardware. A good example of this is the so-called Beowulf cluster, named after the original 1994 parallel computer built by Thomas Sterling and Donald Becker at NASA. Such a cluster can use plain desktop computers, wired together using Ethernet for example, with open-source libraries like Open MPI enabling massively parallel computing without a lot of effort.
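To give a flavor of how little code such a cluster needs, here is a minimal illustrative sketch of our own using the mpi4py Python bindings; it assumes an MPI implementation such as Open MPI is installed on every node, and that the (hypothetically named) script is launched with something like mpirun -np 8 python pi_mpi.py:

```python
# Minimal Beowulf-style example: estimate pi across all MPI processes.
# Each process handles an interleaved slice of the midpoint-rule integral
# of 4/(1+x^2) over [0, 1], and rank 0 collects the partial sums.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes across all nodes

N = 1_000_000            # number of integration intervals
h = 1.0 / N
local_sum = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)
                for i in range(rank, N, size))

# Sum the partial results on rank 0; MPI handles the network traffic.
pi_estimate = comm.reduce(local_sum * h, op=MPI.SUM, root=0)

if rank == 0:
    print(f"pi ≈ {pi_estimate} (computed by {size} processes)")
```

Run on a single machine it behaves like ordinary multi-process Python; point the same command at a rack of old desktops via a host file and it becomes a tiny Beowulf cluster.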
Not only does this approach enable the assembly of a ‘supercomputer’ using cheap-ish, off-the-shelf components, it’s also effectively the approach used for LLNL’s El Capitan, just with not-so-cheap hardware and not-so-cheap interconnects, yet still cheaper than trying to build a monolithic vector processor with the same raw processing power once the messaging overhead of a cluster is taken into account.
Mini And Maxi
David Lovett of Usagi Electric fame sitting among his FPS minisupercomputer hardware. (Credit: David Lovett, YouTube)
One way to look at supercomputers is that it’s not about the scale, but what you do with it. Much like how government, large businesses and universities would end up with ‘Big Iron’ in the form of mainframes and supercomputers, there was a big market for minicomputers too. Here ‘mini’ meant something like a PDP-11 that’d comfortably fit in the corner of an average room at an office or university.
The high-end versions of minicomputers were called ‘superminicomputers’, not to be confused with minisupercomputers, which are another class entirely. During the 1980s there was a brief surge in this latter class of supercomputers, which were designed to bring solid vector computing and similar supercomputer feats down to a size and price tag that might entice departments and other customers who’d otherwise not even begin to consider such an investment.
The manufacturers of these ‘budget-sized supercomputers’ were generally not the typical big computer manufacturers, but smaller companies and start-ups like Floating Point Systems (later acquired by Cray), who sold array processors and similar parallel vector computing hardware.
Recently David Lovett (AKA Mr. Usagi Electric) embarked on a quest to recover and reverse-engineer as much FPS hardware as possible, with one of the goals being to build a full minisupercomputer system as companies and universities might have used in the 1980s. This would involve attaching such an array processor to a PDP-11/44 system.
Speed Versus Reliability
Amidst all of these definitions, the distinction between a mainframe and a supercomputer is at least much more straightforward. A mainframe is a computer system designed for bulk data processing with as much built-in reliability and redundancy as the price tag allows. A modern example is IBM’s Z series of mainframes, with the ‘Z’ standing for ‘zero downtime’. These kinds of systems are used by financial institutions and anywhere else where downtime is counted in millions of dollars going up in (literal) flames every second.
This means hot-swappable processor modules, hot-swappable and redundant power supplies, not to mention hot spares and a strong focus on fault tolerant computing. All of these features are less relevant for a supercomputer, where raw performance is the defining factor when running days-long simulations and when other ways to detect flaws exist without requiring hardware-level redundancy.
Considering the brief lifespan of supercomputers (currently on the order of a few years) compared to mainframes (decades) and the many years that the microcomputers on our desks can last, the life of a supercomputer seems like that of a bright and very brief flame, indeed.
Top image: Marlyn Wescoff and Betty Jean Jennings configuring plugboards on the ENIAC computer (Source: US National Archives)