
On Egyptian Pyramids and Why It’s Definitely Aliens

By: Maya Posch
1 April 2025 at 14:00

History is rather dull and unexciting to most people, which naturally invites exciting flights of fancy that can range from the innocent to outright conspiracies. Nobody truly believes that the astounding finds and (fully functioning) ancient mechanisms in the Indiana Jones & Uncharted franchises are real, with mostly intact ancient cities waiting for intrepid explorers along with whatever mystical sources of power, wealth or influence formed the civilization’s foundations before its tragic demise. Yet somehow Plato’s fictive Atlantis has taken on a life of its own, along with many other ‘lost’ civilizations, whether real or imagined.

Of course, if these aforementioned movies and video games were realistic, they would center around a big archaeological dig and thrilling finds like pot shards and cuneiform clay tablets, not ways to smite enemies and gain immortality. Nor would it involve solving complex mechanical puzzles to gain access to the big secret chamber, prior to walking out of the readily accessible backdoor. Reality is boring like that, which is why there’s a major temptation to spruce things up. With the Egyptian pyramids as well as similar structures around the world speaking to the human imagination, this has led to centuries of half-baked ideas and outright conspiracies.

Most recently, a questionable 2022 paper hinting at structures underneath the Pyramid of Khafre in Egypt was used for a fresh boost to old ideas involving pyramid power stations, underground cities and other fanciful conspiracies. Although we can all agree that the ancient pyramids in Egypt are true marvels of engineering, are we really on the cusp of discovering that the ancient Egyptians were actually provided with Forerunner technology by extraterrestrials?

The Science of Being Tragically Wrong

A section of the ‘runes’ at Runamo. (Credit: Entheta, Wikimedia)

In defense of fanciful theories regarding the Actual Truth™ about Ancient Egypt and kin, archaeology as we know it today didn’t really develop until the latter half of the 20th century, with the field being mostly a hobbyist thing that people did out of curiosity as well as a desire for riches. Along the way many comical blunders were made, such as the Runamo runes in Sweden that turned out to be just random cracks in dolerite.

Less funny were attempts to erase Great Zimbabwe (11th – ~17th century CE) and the Kingdom of Zimbabwe from history after the ruins of the abandoned capital were discovered and explored in earnest by European colonists in the 19th century. Much like the wanton destruction of local cultures in the Americas by European colonists and explorers who considered their own culture, religion and technology to be clearly superior, the history of Great Zimbabwe was initially rewritten to claim that no thriving African society had ever formed there on its own, and that the ruins had to be the result of outside influences.

In this regard it’s interesting how many harebrained ideas about archaeological sites have now effectively flipped, with mystical and mythical properties being assigned and these ‘Ancients’ being almost worshipped. Clearly, aliens visited Earth and that led to pyramids being constructed all around the globe. These would also have been the same aliens or lost civilizations that had technology far beyond today’s cutting edge, putting Europe’s fledgling civilization to shame.

Hence people keep dogpiling especially on the pyramids of Giza and their surrounding complex, assigning mystical properties to their ventilation shafts and expecting hidden chambers filled with technology and treasures interspersed throughout and below the structures.

Lost Technology

The Giant’s Causeway in Northern Ireland. (Credit: code poet, Wikimedia)

The idea of ‘lost technology’ is a pervasive one, mostly buoyed by the fact that you cannot prove a negative, only point to an absence of evidence. Much like the possibility of a teapot being in orbit around the Sun right now, you cannot disprove that the Ancient Egyptians had hyper-advanced power plants using zero point energy back around 3,600 BCE. This ties in with the idea of ‘lost civilizations’, which really caught on around the Victorian era.

Such romanticism for a non-existent past led to the idea of Atlantis being a real, lost civilization becoming pervasive, with the 1960s seeing significant hype around the Bimini Road. This undersea rock formation in the Bahamas was said to have been part of Atlantis, but is actually a perfectly cromulent geological formation. More recently a couple of German tourists got into legal trouble while trying to prove a connection between Egypt’s pyramids and Atlantis, a theory that refuses to die, along with the notion that Atlantis was some kind of hyper-advanced civilization and not just a fictional society that Plato concocted to illustrate the folly of man.

Admittedly there is a lot of poetry in all of this when you consider it from that angle.

Welcome to Shangri-La… or rather Shambhala as portrayed in Uncharted 3.

People have spent decades of their lives and countless sums of money trying to find Atlantis, Shangri-La (possibly inspired by Shambhala), El Dorado and similar fictional locations. Iram of the Pillars, which featured in Uncharted 3: Drake’s Deception, is one of the lost cities mentioned in the Qur’an, and incidentally another great civilization said to have met a grim end through divine punishment. Iram is often identified with Ubar, commonly known as the Atlantis of the Sands.

All of this is reminiscent of the Giant’s Causeway in Northern Ireland, and the corresponding formation at Fingal’s Cave on the Scottish isle of Staffa, where eons ago molten basalt cooled and contracted into columns, much like how drying mud cracks into semi-regular patterns. This particular natural formation did lead to many local myths, including one about a giant building a causeway across the North Channel, hence the name.

Fortunately for this location, no ‘lost civilization’ tag got attached, and so it remains a curious demonstration of how purely natural processes can create structures that one might assume required intelligence, thereby providing fuel for conspiracies. So far only ‘Young Earth’ conspiracy folk have put a claim on this particular site.

What we can conclude is that much like the Victorian age that spawned countless works of fiction on the topic, many of these modern-day stories appear to be rooted in a kind of romanticism for a past that never existed, with those affected interpreting natural patterns as something more in a sure sign of confirmation bias.

Tourist Traps

Tomb of the First Emperor Qin Shi Huang Di, Xi’an, China (Credit: Aaron Zhu)

One can roughly map the number of tourist visits onto the likelihood of wild theories being dreamed up. These include the Egyptian pyramids, but also similar structures at what used to be the sites of the Aztec and Maya civilizations. Similarly, the absolutely massive mausoleum of Qin Shi Huang in China, with its world-famous Terracotta Army, has led to incredible speculation about what might still be hidden inside the unexcavated tomb mound, such as entire seas and rivers of mercury mechanically moved to simulate real bodies of water, a simulated starry sky, crossbows set to take out trespassers, and incredible riches.

Many of these features were described by Sima Qian in the first century BCE, who may or may not have been truthful in his biography of Qin Shi Huang. Meanwhile, China’s authorities have wisely put further excavations on hold, as they have found that many of the recovered artefacts degrade very quickly once exposed to air. The paint on the terracotta figures began to flake off rapidly after excavation, for example, reducing them to the plain figures which we are familiar with.

Tourism can be as damaging as careless excavation. As popular as the pyramids at Giza are, centuries of tourism have taken their toll, with vandalism, graffiti and theft increasing rapidly since the 20th century. The Great Pyramid of Khufu had already been pilfered for building materials over the course of millennia by the local population, but due to tourism part of its remaining top stones were unceremoniously tipped over the side to make a larger platform where tourists could have some tea while gazing out over the Giza Plateau, as detailed in a recent video on the History for Granite channel:

The recycling of building materials from antique structures was also the cause of the demise of the Labyrinth at the foot of the pyramid of Amenemhat III at Hawara. Once an architectural marvel, reportedly with twelve roofed courts and spanning a total of 28,000 m2, today only fragments of it remain. This sadly is how most marvels of the Ancient World end up: looted ruins, ashes and shards, left in the sand or mud, or reclaimed by nature, from which, with a lot of patience and the occasional stroke of fortune, we can piece together a picture of what they once may have looked like.

Pyramid Power

Cover of The Giza Power Plant book. (Credit: Christopher Dunn)

When, in light of all this, we look at the claims made about the Pyramid of Khafre and the persistent conspiracies regarding this and other pyramids hiding great secrets, we can begin to see something of a pattern. Some people have really bought into these fantasies, while for others it’s just another way to embellish a location, to attract more tourists and sell more copies of their latest book on the extraterrestrial nature of pyramids and how they are actually amazing lost technologies. This latter category is called pseudoarcheology.

Pyramids, of course, have always held magical powers, but the idea that they are literal power plants seems to have been coined by one Christopher Dunn, with the publication of his pseudo-archeological book The Giza Power Plant in 1998. That there would be more structures underneath the Pyramid of Khafre is a more recent invention, however. Feeding this particular flight of fancy appears to be a 2022 paper by Filippo Biondi and Corrado Malanga, in which synthetic aperture radar (SAR) was used to examine said pyramid’s interior and subsurface features.

Somehow this got turned into claims about multiple deep vertical wells descending 648 meters, along with other structures. Shared mostly via conspiracy channels, this wildly extrapolates from the claims made in the paper by Biondi et al., with said SAR-based claims never having been peer-reviewed or independently corroborated. The RationalWiki entry on the Giza pyramids savagely tosses these and other claims under the category of ‘pyramidiots’.

The art that conspiracy nuts produce when provided with generative AI tools. (Source: Twitter)

Back in the real world, archaeologists have found a curious L-shaped area underneath a royal graveyard near Khufu’s pyramid that was apparently later filled in, but which seems to lead to a deeper structure. This is likely to be part of the graveyard, but may also have been a feature that was abandoned during construction. Currently this area is being excavated, so we’re likely to figure out more details after archaeologists have finished gently sifting through tons of sand and gravel.

There is also the ScanPyramids project, which uses non-destructive and non-invasive techniques to scan Old Kingdom-era pyramids, such as muon tomography and infrared thermography. This way the internal structure of these pyramids can be examined in-depth. One finding was that of a number of ‘voids’, which could mean any of a number of things, but most likely do not contain world-changing secrets.

To this day the most credible view is still that the pyramids of the Old Kingdom were used as tombs, though unlike the mastabas and similar tombs, there is a solid argument to be made that rather than being designed to be hidden away, these pyramids were meant as eternal monuments to the pharaoh. They would be open for worship of the pharaoh, hence the relative ease of getting inside them. Ironically this would have made them more secure against graverobbers, which was a great idea right up until the demise of the Ancient Egyptian civilization.

This is a point that’s made succinctly on the History for Granite channel, with the conclusion being that this goal of ‘inspiring awe’ in worshippers is still effective today, judging by the millions of tourists who visit these monuments each year, and the tall tales that they’ve inspired.


DIY Split Keyboard Made with a Saw

By: Maya Posch
30 March 2025 at 02:00

Split keyboards are becoming more popular, but because they’re still relatively niche, they can be rather expensive if you want to buy one. So why not make your own? Sure, you could assemble one from a kit, but why not take a cheap mechanical keyboard, slice it in half and just (waves hands) connect the two halves back together? If this thought appeals to you, then [nomolk]’s literal hackjob video should not be ignored. Make sure to enable English subtitles for the Japanese-language video.

Easy split keyboard tip: just reconnect both halves… (Credit: nomolk, YouTube)

In it, the fancy (but cheap) mechanical keyboard with Full RGB™ functionality is purchased and tested prior to meeting its demise. Although the left side with the cable and controller still works after the cut, the right side now needs to be reconnected, which is where a lot of tedious soldering comes in to repair the severed traces.

Naturally this will go wrong, so it’s important to take a (sushi) break and admire the sunset before hurling oneself at tracking down the faulty wiring. This process and the keyboard matrix are further detailed in the accompanying blog entry (in Japanese).

Although this was perhaps easier than the other split keyboard project involving a membrane keyboard, this tongue-in-cheek project demonstrates the limits of practicality with this approach even if it could be cleaned up more with fancier wiring.

We give it full points for going the whole way, however, and making the keyboard work again in the end.

AMSAT-OSCAR 7: the Ham Satellite That Refused to Die

By: Maya Posch
29 March 2025 at 20:00

When the AMSAT-OSCAR 7 (AO-7) amateur radio satellite was launched in 1974, its expected lifespan was about five years. The plucky little satellite made it to 1981, when a battery failure caused it to be written off as dead. Then, in 2002, it came back to life. The prevailing theory is that one of the cells in the satellite’s NiCd battery pack, in an extremely rare event, failed open — thus allowing the satellite to run (intermittently) off its solar panels.

A recent video by [Ben] on the AE4JC Amateur Radio YouTube channel goes over the construction of AO-7, its operation, death and subsequent revival, as well as a recent QSO (direct contact).

The battery is made up of multiple individual cells.

The solar panels covering this satellite provided a grand total of 14 watts at maximum illumination, which later dropped to 10 watts, making for a pretty small power budget. The entire satellite was assembled in a ‘clean room’ consisting of a sectioned-off part of a basement, with components produced by enthusiasts associated with AMSAT around the world. Onboard are two radio transponders: Mode A at 2 meters and Mode B at 10 meters, as well as four beacons, only three of which are active due to an international treaty affecting the 13 cm beacon.

Positioned in a geocentric LEO (1,447 – 1,465 km), the satellite is, quite amazingly, still mostly operational after 50 years. Most of this is due to how it smartly uses magnets to align itself with the Earth’s magnetic field, as well as the impact of photons to maintain its spin. This passive control combined with the relatively high altitude should allow AO-7 to function pretty much indefinitely while the PV panels keep producing enough power. All because a NiCd battery failed in a very unusual way.

Inside a Fake WiFi Repeater

By: Maya Posch
27 March 2025 at 20:00
Fake WiFi repeater with a cheap real one behind it. (Credit: Big Clive, YouTube)

Over the years we have seen a lot of fake electronics, ranging from fake power-saving devices that you plug into an outlet, to fake car ECU optimizers that you stick into the OBD port. These are all similar in that they fake functionality while happily lighting up an LED or two to indicate that they’re doing ‘something’. Less expected was that we’d be seeing fake WiFi repeaters, but recently [Big Clive] got his hands on one and undertook the arduous task of reverse-engineering it.

The simple cardboard box it comes in claims that it’s a 2.4 GHz unit that operates at 300 Mbps, which would be quite expected for the price. [Clive] had previously obtained a real WiFi repeater that boasted similar specifications and did indeed work. The dead giveaway that this one is a fake is the clearly fake antennae, along with the fact that once you plug it in, no new WiFi network pops up, nor anything else.

Inside the case – which looks very similar to the genuine repeater – there is just a small PCB attached to the USB connector. On the PCB are a 20 Ohm resistor and a blue LED, which means that the LED is being completely overdriven and is likely to die quite rapidly. Considering that a WiFi repeater is supposed to require a setup procedure, it’s possible that these fake repeaters target an audience which does not quite understand what these devices are supposed to do, but they can also catch out more informed buyers who thought they were buying one of the cheap real ones. Caveat emptor, indeed.
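
As a quick sanity check on the ‘completely overdriven’ remark (assuming the usual 5 V USB rail and a forward drop of roughly 3 V for a blue LED, neither of which is actually measured in the video):

    I \approx \frac{V_{\mathrm{USB}} - V_f}{R} = \frac{5\,\mathrm{V} - 3\,\mathrm{V}}{20\,\Omega} = 100\,\mathrm{mA}

which is around five times the 20 mA that a garden-variety indicator LED is rated for, so the rapid-death prediction is not hyperbole.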

General Fusion Claims Success with Magnetized Target Fusion

By: Maya Posch
27 March 2025 at 14:00

It’s rarely appreciated just how much more complicated nuclear fusion is than nuclear fission. Whereas the latter involves a process that happens all around us without any human involvement, and where the main challenge is to keep the nuclear chain reaction within safe bounds, nuclear fusion means making atoms do something that goes against their very nature, outside of a star’s interior.

Fusing hydrogen isotopes can be done on Earth fairly readily these days, but doing it in a way that’s repeatable — bombs don’t count — and in a way that makes economical sense is trickier. As covered previously, plasma stability is a problem with the popular approach of tokamak-based magnetic confinement fusion (MCF). Although this core problem has now been largely addressed, and stellarators are mostly unbothered by this particular problem, a Canadian start-up figures that it can do even better, in the form of a nuclear fusion reactor based around the principle of magnetized target fusion (MTF).

Although General Fusion’s piston-based fusion reactor has left many people rather confused, MTF is based on real physics, and with GF’s current LM26 prototype having recently achieved first plasma, this seems like an excellent time to ask what MTF is, and whether it can truly compete with billion-dollar tokamak-based projects.

Squishing Plasma Toroids

Lawson criterion of important magnetic confinement fusion experiments (Credit: Horvath, A., 2016)

In general, to achieve nuclear fusion, the target atoms have to be pushed past the Coulomb barrier, the electrostatic interaction that normally prevents atomic nuclei from approaching each other, let alone spontaneously fusing. In stars, the process of nucleosynthesis is enabled by the intense pressure due to the star’s mass, which overcomes this electrostatic force.

Replicating the nuclear fusion process requires a similar way to overcome the Coulomb barrier, but in lieu of even a small-sized star like our Sun, we need alternate means such as much higher temperatures, alternative ways to provide pressure and longer confinement times. The efficiency of each approach was originally captured in the Lawson criterion, which was developed by John D. Lawson in a (then classified) 1955 paper (PDF on Archive.org).

In order to achieve a self-sustaining fusion reaction, the energy losses should be less than the energy produced by the reaction. The break-even point here is expressed as having a Q (energy gain factor) of 1, where the added energy and losses within the fusion process are in balance. For sustained fusion with excess energy generation, the Q value should be higher than 1, typically around 5 for contemporary fuels and fusion technology.
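
In symbols, and glossing over the details that the Lawson criterion formalizes (the D-T threshold below is the usual ballpark figure for the triple-product form, not a number taken from General Fusion’s own material):

    Q = \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}}, \qquad n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21}\ \mathrm{keV\,s\,m^{-3}} \quad (\text{D-T})

Here n is the plasma density, T its temperature and τ_E the energy confinement time; MCF aims for modest density and long confinement, while ICF and MTF aim for very high density over a very short confinement time.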

In the slow march towards ignition, we have seen many reports in the popular media that turn out to be rather meaningless, such as the horrendous inefficiency demonstrated by the laser-based inertial confinement fusion (ICF) at the National Ignition Facility (NIF). This makes it rather fascinating that what General Fusion is attempting is closer to ICF, just without the lasers and artisan Hohlraum-based fuel pellets.

Instead they use a plasma injector, a type of plasma railgun called a Marshall gun, that produces hydrogen isotope plasma, which is subsequently contained in a magnetic field as a self-stable compact toroid. This toroid is then squished by a mechanical system in a matter of milliseconds, with the resulting compression inducing fusion. Creating this toroid is the feat that was recently demonstrated in the current Lawson Machine 26 (LM26) prototype reactor with its first plasma in the target chamber.

Magneto-Inertial Fusion

Whereas magnetic confinement fusion does effectively what it says on the tin, magnetized target fusion is pretty much a hybrid of magnetic confinement fusion and laser-based inertial confinement fusion. Because the magnetic containment is only there to essentially keep the plasma in a nice stable toroid, it doesn’t have nearly the same requirements as in a tokamak or stellarator. Yet rather than using complex and power-hungry lasers, MTF applies mechanical energy using an impulse driver — the liner — that rapidly compresses the low-density plasma toroid.

Schematic of the Lawson Machine 26 MTF reactor. (Credit: General Fusion)

The juiciest parts of General Fusion’s experimental setup can be found in the Research Library on the GF website. The above graphic was copied from the LM26 poster (PDF), which provides a lot of in-depth information on the components of the device and its operation, as well as the experiments that informed its construction.

The next step will be to test the ring compressor that is designed to collapse the lithium liner around the plasma toroid, compressing it and achieving fusion.

Long Road Ahead

Interpretation of General Fusion’s commercial MTF reactor design. (Credit: Evan Mason)

As promising as this may sound, there is still a lot of work to do before MTF can be considered a viable option for commercial fusion. As summarized on the Wikipedia entry for General Fusion, the goal is to have a liquid liner rather than the solid lithium liner of LM26. This liquid lithium liner would both breed new tritium fuel from neutron exposure and provide the liner that compresses the deuterium-tritium fuel.

This liquid liner would also provide cooling, linked with a heat exchanger or steam generator to generate electricity. Because the liquid liner would be infinitely renewable, it should allow for about 1 cycle per second. To keep the liquid liner in place on the inside of the sphere, it would need to be constantly spun, further complicating the design.

Although getting plasma in the reaction chamber where it can be squished by the ring compressor’s lithium liner is a major step, the real challenge will be in moving from a one-cycle-a-day MTF prototype to something that can integrate not only the aforementioned features, but also run one cycle per second, while being more economical to run than tokamaks, stellarators, or even regular nuclear fission plants, especially Gen IV fast neutron reactors.

That said, there is a strong argument to be made that MTF is significantly more practical for commercial power generation than ICF. And regardless, it is just really cool science and engineering.

Top image: General Fusion’s Lawson Machine 26. (Credit: General Fusion)

Why are Micro Center Flash Drives so Slow?

By: Maya Posch
27 March 2025 at 05:00

Every year, USB flash drives get cheaper and hold more data. Unfortunately, they don’t always get faster. The reality is, many USB 3.0 flash drives aren’t noticeably faster than their USB 2.0 cousins, as [Chase Fournier] found with the ultra-cheap specimens purchased over at his local Micro Center store.

Although these all have USB 3.0 interfaces, they transfer at less than 30 MB/s, but why exactly? After popping open a few of these drives the answer appears to be that they use the old-style Phison controller (PS2251-09-V) and NAND flash packages that you’d expect to find in a USB 2.0 drive.
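
For anyone who wants to reproduce the ‘less than 30 MB/s’ figure at home, a rough sequential-write test is only a few lines of C. This is a sketch rather than a proper benchmark (the file name is arbitrary, and the OS write cache will flatter the number unless you write well past the cache size or sync explicitly):

    /* Rough sequential-write test: writes 256 MiB to a file on the drive under
     * test and reports the average rate.  POSIX clock_gettime() is used for
     * timing; the OS write cache will still flatter the result somewhat. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : "testfile.bin"; /* file on the drive */
        const size_t chunk = 4u * 1024 * 1024;      /* 4 MiB per write  */
        const size_t total = 256u * 1024 * 1024;    /* 256 MiB in total */

        char *buf = calloc(1, chunk);
        FILE *f = fopen(path, "wb");
        if (!buf || !f) { perror("setup"); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t written = 0; written < total; written += chunk)
            fwrite(buf, 1, chunk, f);
        fclose(f);                                  /* flushes stdio buffers */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.1f MB/s\n", (total / 1e6) / secs);
        free(buf);
        return 0;
    }

Dedicated tools like fio or CrystalDiskMark do the same job while controlling caching and queue depth far more carefully, which is what you would want for numbers worth publishing.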

Across the 32, 64, and 256 GB variants the same Phison controller is used, but the PCB has provisions for either twin TSOP packages or a single BGA package. The latter package turned out to be identical to those found in the iPhone 8. Also interesting was that the two 256 GB drives [Chase] bought had different Phison chips, with one being BGA and the other QFP. Meanwhile some flash drives use eMMC chips, which are significantly faster, as demonstrated in the video.

It would seem that you really do get what you pay for, with $3 “USB 3.0” flash drives providing the advertised storage, but you really need to budget in the extra time that you’ll be waiting for transfers.

Build Customized Raspberry Pi OS Images With rpi-image-gen

By: Maya Posch
26 March 2025 at 11:00

Recently Raspberry Pi publicly announced the release of their new rpi-image-gen tool, which is advertised as making custom Raspberry Pi OS (i.e. Debian for specific Broadcom SoCs) images in a much more streamlined fashion than with the existing rpi-gen tool, or with third-party solutions. The general idea seems to be that the user fetches the tool from the GitHub project page, before running the build.sh script with parameters defining the configuration file and other options.

The main advantage of this tool is said to be that it uses binary packages rather than (cross-)compiling, while providing a range of profiles and configuration layers to target specific hardware & requirements. Two examples are provided in the GitHub project, one for a ‘slim’ project, the other for a ‘webkiosk‘ configuration that runs a browser in a restricted (Cage) environment, with required packages installed in the final image.

Looking at the basic ‘slim’ example, it defines the INI-style configuration in config/pi5-slim.cfg, but even when browsing through the main README it’s still somewhat obtuse. Under device it references the mypi5 subfolder which contains its own shell script, plus a cmdline.txt and fstab file. Under image it references the compact subfolder with another bunch of files in it. Although this will no doubt make a lot more sense after taking a few days to prod & poke at this, it’s clear that this is not a tool for casual users who just want to quickly put a custom image together.

This is also reflected in the Raspberry Pi blog post, which strongly insinuates that this is targeting commercial & industrial customers, rather than hobbyists.

Brazilian Modders Upgrade NVidia Geforce GTX 970 to 8 GB of VRAM

By: Maya Posch
25 March 2025 at 23:00

Although NVidia’s current disastrous RTX 50-series is getting all the attention right now, it isn’t the company’s first misstep. Back in 2014, when NVidia released the GTX 970, users were quickly dismayed to find that their ‘4 GB VRAM’ GPU actually had just 3.5 GB of full-speed memory, with the remaining 512 MB accessed in a much slower way, at just 1/7th of the normal speed. Back then NVidia was subject to a $30-per-card settlement with disgruntled customers, but there’s a way to at least partially fix these GPUs, as demonstrated by a group of Brazilian modders (original video with horrid English auto-dub).

The mod itself is quite straightforward: the original 512 MB, 7 Gbps GDDR5 memory modules are replaced with 1 GB, 8 Gbps chips, and a resistor is added on the PCB to make the GPU recognize the higher-density VRAM ICs. Although this doesn’t fix the fundamental split-VRAM issue of the ASIC, it does give it access to 7 GB of faster, higher-density VRAM. In benchmarks performance was massively increased, with Unigine Superposition showing nearly a doubling of the score.
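
The arithmetic behind the headline numbers, assuming the original 3.5 GB + 0.5 GB partitioning simply scales with the doubled chip density (the video does not spell this out):

    8 \times 1\,\mathrm{GB} = 8\,\mathrm{GB}\ \text{total}, \qquad 7\,\mathrm{GB}\ \text{at full speed} + 1\,\mathrm{GB}\ \text{at 1/7th speed}

so the card now advertises 8 GB, while the practically useful fast partition is the 7 GB mentioned above.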

In addition to giving this GTX 970 a new lease on life, it also shows just how important having more VRAM on a GPU is, which is ironic in this era where somehow GPU manufacturers deem 8 GB of VRAM to be acceptable in 2025.

ReactOS 0.4.15 Released With Major Improvements

By: Maya Posch
25 March 2025 at 11:00

Recently the ReactOS project released the much anticipated 0.4.15 update, making it the first major release since 2020. Despite what might seem like a minor version bump from the previous 0.4.14 release, the update introduces sweeping changes to everything from the kernel to the user interface and aspects like the audio system and driver support. Those who have used the nightly builds over the past years will likely have noticed a lot of these changes already.

Japanese input with MZ-IME and CJK font (Credit: ReactOS project)

A notable change is to plug-and-play support, which now enables more third-party drivers and booting from USB storage devices. The Microsoft FAT filesystem driver from the Windows Driver Kit can now be used courtesy of better compatibility, there is now registry healing, and caching and kernel access checks are implemented. The latter improvement means that many ReactOS modules can now work in Windows too.

On the UI side there is a much improved IME (input method editor) feature, along with native ZIP archive support and various graphical tweaks.

Meanwhile, since 0.4.15 branched off the master branch six months ago, the latter has seen even more features added, including SMP improvements, UEFI support, a new NTFS driver and improvements to power management and application support. All of this is accompanied by many bug fixes, which makes it totally worth it to regularly check out the nightly builds.

Build a Starship Bridge Simulator With EmptyEpsilon

By: Maya Posch
24 March 2025 at 05:00
Next time on Star Trek: EmptyEpsilon... (Credit: EmptyEpsilon project)

Who hasn’t dreamed of serving on the bridge of a Star Trek starship? Although the EmptyEpsilon project isn’t adorned with the Universe-famous LCARS user interface, it does provide a comprehensive simulation scenario in a multiplayer setting. Designed as a LAN or WAN multiplayer game hosted by a server that also drives the main screen, it requires four to six additional devices to handle the non-captain stations: helm, weapons, engineering, science and relay, which includes comms.

Scenarios are created by the game master, not unlike a D&D game, with the site providing a reference and various examples of how to go about this.

The free and open source game’s binaries can be obtained directly from the site, but it’s also available on Steam. The game isn’t limited to just Trek either, but scenarios can be crafted to fit whatever franchise or creative impulse feels right for that LAN party.

Obviously building the whole thing into a realistic starship bridge is optional, but it certainly looks like more fun that way.

The Mysterious Mindscape Music Board

By: Maya Posch
23 March 2025 at 14:00

Sound cards on PC-compatible computer systems have a rather involved and convoluted history, with not only a wide diversity of proprietary standards, but also a collection of sound cards that were never advertised as such. Case in point: the 1985 Mindscape Music Board, an add-on ISA card that came bundled with [Glen Clancy]’s Bank Street Music Writer software for the IBM PC. This contrasted with the Commodore 64 version, which used the Commodore SID sound chip. Recently [Tales of Weird Stuff] and [The Oldskool PC] on YouTube both decided to cover this very rare soundcard.

Based around two General Instruments AY-3-8913 programmable sound generators, it enabled the output of six voices, mapped to six instruments in the Bank Street Music Writer software. Outside of this software the card saw no use, however, and it would fade into obscurity along with the title that it was originally bundled with. Only four cards are said to still exist, with [Tales of Weird Stuff] getting their grubby mitts on one.

As a rare slice of history, it is good to see this particular card getting some more love and attention, as it was, and still is, quite capable. [The Oldskool PC] notes that because the GI chip used is well-known and used everywhere, adding support for it in software and emulators is trivial, and efforts to reproduce the board are already underway.

Top image: Mindscape Music Board (Credit: Ian Romanick)

Musings on a Good Parallel Computer

By: Maya Posch
23 March 2025 at 08:00

Until the late 1990s, the concept of a 3D accelerator card was something generally associated with high-end workstations. Video games and kin would run happily on the CPU in one’s desktop system, with later extensions like MMX, 3DNow! and SSE providing a significant performance boost for games that supported them. As 3D accelerator cards (colloquially called graphics processing units, or GPUs) became prevalent, they took over almost all SIMD vector tasks, but one thing they’re not good at is being a general parallel computer. This really ticked off [Raph Levien] while working on a software project, and inspired him to write up his grievances.

Although the interaction between CPUs and GPUs has become tighter over the decades, with PCIe in particular being a big improvement over AGP & PCI, GPUs are still terrible at running arbitrary computing tasks and PCIe links are still glacial compared to communication within the GPU & CPU dies. With the introduction of asynchronous graphic APIs this divide became even more intense. The proposal thus is to invert this relationship.

There’s precedent for this already, with Intel’s Larrabee and IBM’s Cell processor merging CPU and GPU characteristics on a single die, though both struggled with developing for such a new kind of architecture. Sony’s PlayStation 3 was forced to add a GPU due to these issues. There is also the DirectStorage API in DirectX which bypasses the CPU when loading assets from storage, effectively adding CPU features to GPUs.

As [Raph] notes, so-called AI accelerators also have these characteristics, with often multiple SIMD-capable, CPU-like cores. Maybe the future is Cell after all.

The Fastest MS-DOS Gaming PC Ever

By: Maya Posch
22 March 2025 at 11:00

After [Andy]’s discovery of an old ISA soundcard at his parents’ place that once was inside the family PC, the onset of a wave of nostalgia for those old-school sounds drove him off the deep end. This is how we get [Andy] building the fastest MS-DOS gaming system ever, with ISA slot and full hardware compatibility. After some digging around, the fastest CPU for an Intel platform that still retained ISA compatibility turned out to be Intel’s 4th generation Core series i7-4790K CPU, along with an H81 chipset-based MiniITX mainboard.

Of note is that ISA slots on these newer boards are basically unheard of outside of niche industrial applications, ergo [Andy] had to tap into the LPC (low pin count) debug port and hunt down the LDRQ signal on the mainboard. LPC is a very compact version of a PCI bus that works great with ISA adapter boards, especially an LPC-to-ISA adapter like [Andy]’s dISAppointment board as used here.

A PCIe graphics card (NVidia 7600 GT, 256 MB VRAM), ISA soundcard, dodgy PSU and a SATA SSD were added into a period-correct case. After this Windows 98 was installed from a USB stick within a minute using [Eric Voirin]’s Windows 98 Quick Install. This gave access to MS-DOS and enabled the first tests, followed by benchmarking.

Benchmarking MS-DOS on a system this fast turned out to be somewhat messy, with puzzling results. The reason for this was that the default BIOS settings limited the CPU to non-turbo speeds under MS-DOS. With that fixed, the system turned out to be really quite fast at MS-DOS (and Windows 98) games, to nobody’s surprise.

If you’d like to run MS-DOS on relatively modern hardware with a little less effort, you could always pick up a second-hand ThinkPad and rip through some Descent.

Biosynthesis of Polyester Amides in Engineered Escherichia Coli

By: Maya Posch
22 March 2025 at 08:00

Polymers are one of the most important elements of modern-day society, particularly in the form of plastics. Unfortunately most common polymers are derived from fossil resources, which not only makes them a finite resource, but is also problematic from a pollution perspective. A potential alternative being researched is that of biopolymers, in particular those produced by microorganisms such as everyone’s favorite bacterium Escherichia coli (E. coli).

These bacteria were the subject of a recent biopolymer study by [Tong Un Chae] et al., as published in Nature Chemical Biology (paywalled, break-down on Arstechnica).

By genetically engineering E. coli bacteria to repurpose one of their survival energy-storage pathways for synthesizing polyester amides (PEAs), the researchers were able to make the bacteria create long chains of mostly pure PEA. A complication here is that this modified pathway is not exactly picky about which amino acid monomers it sticks onto the chain next, and will also incorporate metabolic byproducts.

Although using genetically engineered bacteria for the synthesis of products on an industrial scale isn’t uncommon (see e.g. the synthesis of insulin), it would seem that biosynthesis of plastics using our prokaryotic friends isn’t quite ready yet to graduate from laboratory experiments.

Producing Syngas From CO2 and Sunlight With Direct Air Capture

By: Maya Posch
22 March 2025 at 02:00
The prototype DACCU device for producing syngas from air. (Credit: Sayan Kar, University of Cambridge)

There is more carbon dioxide (CO2) in the atmosphere these days than ever before in human history, and while it would be marvelous to use these carbon atoms for something more useful, capturing CO2 directly from the air isn’t that easy. After capturing it, it would also be great if you could do something more with it than stuff it into a big hole. Something like producing syngas (CO + H2), for example, as demonstrated by researchers at the University of Cambridge.

Among the improvements claimed in the paper as published in Nature Energy for this direct air capture and utilization (DACCU) approach are that it does not require pure CO2 feedstock, but will adsorb it directly from the air passing over a bed of solid silica-amine. After adsorption, the CO2 can be released again by exposure to concentrated light. Following this the conversion to syngas is accomplished by passing it over a second bed consisting of silica/alumina-titania-cobalt bis(terpyridine), that acts as a photocatalyst.

The envisioned usage scenario would be CO2 adsorption during the night, with concentrated solar power releasing it during the day, followed by the production of syngas. Inlet air would be passed over the adsorption section only, before switching the inlet off during the syngas-generating phase. As a lab proof-of-concept it seems to work well, with the outlet air stripped of virtually all CO2 and a very high conversion ratio from CO2 to syngas.

Syngas has historically been used as a replacement for gasoline, but is also a source of hydrogen (e.g. via steam methane reforming (SMR) of natural gas), used for the reduction of iron ore as well as the production of methanol as a precursor to many industrial processes. Whether this DACCU approach provides a viable alternative to SMR and other existing technologies will become clear once this technology moves from the lab into the real world.

Thanks to [Dan] for the tip.

So What is a Supercomputer Anyway?

By: Maya Posch
19 March 2025 at 14:00

Over the decades many designations have been coined to classify computer systems, usually when they came into use in different fields or when technological improvements caused significant shifts. While the very first electronic computers were very limited and often not programmable, they would soon morph into something that we’d recognize today as a computer, starting with World War 2’s Colossus and ENIAC, which saw use in cryptanalysis and military weapons programs, respectively.

The first commercial digital electronic computer wouldn’t appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5 ton systems mostly found their way to universities and kin, where they’d find welcome use in engineering, architecture and scientific calculations. This became the focus of new computer systems, effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.

A few decades later, more computing power could be crammed into less space than ever before, including ever-higher-density storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper- and mainframe computer systems, one could be excused for asking the question: what even is a supercomputer?

Today’s Supercomputers

ORNL’s Summit supercomputer, fastest until 2020 (Credit: ORNL)

Perhaps a fair way to classify supercomputers  is that the ‘supercomputer’ aspect is a highly time-limited property. During the 1940s, Colossus and ENIAC were without question the supercomputers of their era, while 1976’s Cray-1 wiped the floor with everything that came before, yet all of these are archaic curiosities next to today’s top two supercomputers. Both the El Capitan and Frontier supercomputers are exascale (1+ exaFLOPS in double precision IEEE 754 calculations) level machines, based around commodity x86_64 CPUs in a massively parallel configuration.

Taking up 700 m2 of floor space at the Lawrence Livermore National Laboratory (LLNL) and drawing 30 MW of power, El Capitan’s 43,808 AMD EPYC CPUs are paired with the same number of AMD Instinct MI300A accelerators, each containing 24 Zen 4 cores plus CDNA3 GPU and 128 GB of HBM3 RAM. Unlike the monolithic ENIAC, El Capitan’s 11,136 nodes, containing four MI300As each, rely on a number of high-speed interconnects to distribute computing work across all cores.

At LLNL, El Capitan is used for effectively the same top secret government things as ENIAC was, while Frontier at Oak Ridge National Laboratory (ORNL) was the fastest supercomputer before El Capitan came online about three years later. Although currently LLNL and ORNL have the fastest supercomputers, there are many more of these systems in use around the world, even for innocent scientific research.

Looking at the current list of supercomputers, such as today’s Top 9, it’s clear that not only can supercomputers perform a lot more operations per second, they also are invariably massively parallel computing clusters. This wasn’t a change that was made easily, as parallel computing comes with a whole stack of complications and problems.

The Parallel Computing Shift

ILLIAC IV massively parallel computer’s Control Unit (CU). (Credit: Steve Jurvetson, Wikimedia)

The first massively parallel computer was the ILLIAC IV, conceptualized by Daniel Slotnick in 1952 and first successfully put into operation in 1975 when it was connected to ARPANET. Although only one quadrant was fully constructed, it produced 50 MFLOPS compared to the Cray-1’s 160 MFLOPS a year later. Despite the immense construction costs and spotty operational history, it provided a most useful testbed for developing parallel computation methods and algorithms until the system was decommissioned in 1981.

There was a lot of pushback against the idea of massively parallel computation, however, with Seymour Cray famously likening the use of many parallel vector processors instead of a single large one to ‘plowing a field with 1024 chickens instead of two oxen’.

Ultimately there is only so far you can scale a singular vector processor, of course, while parallel computing promised much better scaling, as well as the use of commodity hardware. A good example of this is a so-called Beowulf cluster, named after the original 1994 parallel computer built by Thomas Sterling and Donald Becker at NASA. This can use plain desktop computers, wired together using Ethernet for example, with open source libraries like Open MPI enabling massively parallel computing without a lot of effort.
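
To give an idea of how little code such a cluster needs to do useful parallel work, here is a minimal sketch in C using standard Open MPI calls; the program name, rank count and hostfile in the comments are just examples, not anything from a specific cluster:

    /* sum_pi.c: toy Open MPI example that splits a midpoint-rule estimate of
     * pi across every rank in the cluster.
     * Build:  mpicc sum_pi.c -o sum_pi
     * Run:    mpirun -np 8 --hostfile nodes ./sum_pi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many are we in total? */

        const long steps = 100000000L;
        double local = 0.0;

        /* Each rank handles an interleaved subset of the samples of 4/(1+x^2). */
        for (long i = rank; i < steps; i += size) {
            double x = (i + 0.5) / steps;
            local += 4.0 / (1.0 + x * x);
        }
        local /= steps;

        double pi = 0.0;
        MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("pi ~= %.12f (computed on %d ranks)\n", pi, size);

        MPI_Finalize();
        return 0;
    }

Each rank works on its own share of the samples and MPI_Reduce gathers the partial sums on rank 0; the same pattern scales from a couple of salvaged desktops to a machine like El Capitan, just with vastly fancier interconnects.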

Not only does this approach enable the assembly of a ‘supercomputer’ using cheap-ish, off-the-shelf components, it’s also effectively the approach used for LLNL’s El Capitan, just with not very cheap hardware, and not very cheap interconnect hardware, but still cheaper than if one were to try to build a monolithic vector processor with the same raw processing power after taking the messaging overhead of a cluster into account.

Mini And Maxi

David Lovett of Usagi Electric fame sitting among his FPS minisupercomputer hardware. (Credit: David Lovett, YouTube)

One way to look at supercomputers is that it’s not about the scale, but what you do with it. Much like how government, large businesses and universities would end up with ‘Big Iron’ in the form of mainframes and supercomputers, there was a big market for minicomputers too. Here ‘mini’ meant something like a PDP-11 that’d comfortably fit in the corner of an average room at an office or university.

The high-end versions of minicomputers were called ‘superminicomputers’, not to be confused with minisupercomputers, which are another class entirely. During the 1980s there was a brief surge in this latter class of supercomputers, which were designed to bring solid vector computing and similar supercomputer feats down to a size and price tag that might entice departments and other customers who’d otherwise not even begin to consider such an investment.

The manufacturers of these ‘budget-sized supercomputers’ were generally not the typical big computer manufacturers, but instead smaller companies and start-ups like Floating Point Systems (later acquired by Cray) who sold array processors and similar parallel, vector computing hardware.

Recently David Lovett (AKA Mr. Usagi Electric) embarked on a quest to recover and reverse-engineer as much FPS hardware as possible, with one of the goals being to build a full minisupercomputer system such as companies and universities might have used in the 1980s. This would involve attaching such an array processor to a PDP-11/44 system.

Speed Versus Reliability

Amidst all of these definitions, the distinction between a mainframe and a supercomputer is much easier and more straightforward at least. A mainframe is a computer system that’s designed for bulk data processing with as much built-in reliability and redundancy as the price tag allows for. A modern example is IBM’s Z-series of mainframes, with the ‘Z’ standing for ‘zero downtime’. These kind of systems are used by financial institutions and anywhere else where downtime is counted in millions of dollars going up in (literal) flames every second.

This means hot-swappable processor modules, hot-swappable and redundant power supplies, not to mention hot spares and a strong focus on fault tolerant computing. All of these features are less relevant for a supercomputer, where raw performance is the defining factor when running days-long simulations and when other ways to detect flaws exist without requiring hardware-level redundancy.

Considering the brief lifespan of supercomputers (currently in the order of a few years) compared to mainframes (decades) and the many years that the microcomputers which we have on our desks can last, the life of a supercomputer seems like that of a bright and very brief flame, indeed.

Top image: Marlyn Wescoff and Betty Jean Jennings configuring plugboards on the ENIAC computer (Source: US National Archives)

Checking In On the ISA Wars and Its Impact on CPU Architectures

By: Maya Posch
18 March 2025 at 14:00

An Instruction Set Architecture (ISA) defines the software interface through which, for example, a central processing unit (CPU) is controlled. Unlike early computer systems, which didn’t define a standard ISA as such, over time the compatibility and portability benefits of having a standard ISA became obvious. But of course the best part about standards is that there are so many of them, and thus every CPU manufacturer came up with their own.

Throughout the 1980s and 1990s, the number of mainstream ISAs dropped sharply as the computer industry coalesced around a few major ones in each type of application. Intel’s x86 won out on desktop and smaller servers while ARM proclaimed victory in low-power and portable devices, and for Big Iron you always had IBM’s Power ISA. Since we last covered the ISA Wars in 2019, quite a lot of things have changed, including Apple shifting its desktop systems to ARM from x86 with Apple Silicon and finally MIPS experiencing an afterlife in  the form of LoongArch.

Meanwhile, six years after the aforementioned ISA Wars article in which newcomer RISC-V was covered, this ISA seems to have not made the splash some had expected. This raises questions about what we can expect from RISC-V and other ISAs in the future, as well as how relevant having different ISAs is when it comes to aspects like CPU performance and their microarchitecture.

RISC Everywhere

Unlike in the past when CPU microarchitectures were still rather in flux, these days they all seem to coalesce around a similar set of features, including out-of-order execution, prefetching, superscalar parallelism, speculative execution, branch prediction and multi-core designs. Most of the performance gains these days come from addressing specific bottlenecks and optimizing for specific usage scenarios, which has resulted in things like simultaneous multithreading (SMT) and various pipelining and instruction decoder designs.

CPUs today are almost all what in the olden days would have been called RISC (reduced instruction set computer) architectures, with a relatively small number of heavily optimized instructions. Using approaches like register renaming, CPUs can handle many simultaneous threads of execution, which for the software side that talks to the ISA is completely invisible. For the software, there is just the one register file, and unless something breaks the illusion, like when speculative execution has a bad day, each thread of execution is only aware of its own context and nothing else.

So if CPU microarchitectures have pretty much merged at this point, what difference does the ISA make?

Instruction Set Nitpicking

Within the world of ISA flamewars, the battle lines have currently mostly coalesced around topics like the pros and cons of delay slots, as well as those of compressed instructions, and setting status flags versus checking results in a branch. It is incredibly hard to compare ISAs in an apple-vs-apples fashion, as the underlying microarchitecture of a commercially available ARMv8-based CPU will differ from a similar x86_64- or RV64I- or RV64IMAC-based CPU. Here the highly modular nature of RISC-V adds significant complications as well.

If we look at where RISC-V is being used today in a commercial setting, it is primarily as simple embedded controllers where this modularity is an advantage, and compatibility with the zillion other possible RISC-V extension combinations is of no concern. Here, using RISC-V has an obvious advantage over in-house proprietary ISAs, due to the savings from outsourcing it to an open standard project. This is however also one of the major weaknesses of this ISA, as the lack of a fixed ISA along the pattern of ARMv8 and x86_64 makes tasks like supporting a Linux kernel for it much more complicated than it should be.

This has led Google to pull initial RISC-V support from Android due to the ballooning support complexity. Since every RISC-V-based CPU is only required to support the base integer instruction set, and so many things are left optional, from integer multiplication (M), atomics (A), bit manipulation (B), and beyond, all software targeting RISC-V has to explicitly test that the required instructions and functionality is present, or use a fallback.

Tempers are also running hot when it comes to RISC-V’s lack of integer overflow traps and carry instructions. As for whether compressed instructions are a good idea, the ARMv8 camp does not see any need for them, while the RISC-V camp is happy to defend them, and meanwhile x86_64 still happily uses double the number of instruction lengths courtesy of its CISC legacy, which would make x86_64 twice as bad or twice as good as RISC-V depending on who you ask.

Meanwhile an engineer with strong experience on the ARM side of things wrote a lengthy dissertation a while back on the pros and cons of these three ISAs. Their conclusion is that RISC-V is ‘minimalist to a fault’, with overlapping instructions and no condition codes or flags, instead requiring compare-and-branch instructions. This latter point cascades into a number of compromises, which is one of the major reasons why RISC-V is seen as problematic by many.
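
To make the compare-and-branch point concrete, here is a small C sketch (not taken from that write-up) of how a carry is recovered on a flagless ISA: with no carry flag or add-with-carry instruction, the compiler has to synthesize the carry from a comparison, typically a sltu on RISC-V, whereas ARMv8 and x86_64 get it from their flags for free.

    #include <stdint.h>

    /* 128-bit addition built from 64-bit halves.  On an ISA without condition
     * codes the carry out of the low word has to be recomputed explicitly:
     * unsigned addition wrapped around exactly when the result is smaller
     * than one of the operands. */
    typedef struct { uint64_t lo, hi; } u128;

    static u128 add128(u128 a, u128 b)
    {
        u128 r;
        r.lo = a.lo + b.lo;
        uint64_t carry = (r.lo < a.lo);   /* lowers to sltu (or a branch) on RISC-V */
        r.hi = a.hi + b.hi + carry;
        return r;
    }

Whether the extra instruction costs anything in practice on a wide out-of-order core is exactly the kind of microarchitecture-dependent question that keeps these flamewars going.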

In summary, in lieu of clear advantages of RISC-V against fields where other ISAs are already established, its strong points seem to be mostly where its extreme modularity and lack of licensing requirements are seen as convincing arguments, which should not keep anyone from enjoying a good flame war now and then.

The China Angle

The Loongson 3A6000 (LS3A6000) CPU. (Credit: Geekerwan, Wikimedia)

Although everywhere that is not China has pretty much coalesced around the three ISAs already described, there are always exceptions. Unlike Russia’s ill-fated very-large-instruction-word Elbrus architecture, China’s CPU-related efforts have borne significantly more fruit. Starting with the Loongson CPUs, China’s home-grown microprocessor architecture scene began to take on real shape.

Originally these were MIPS-compatible CPUs. But starting with the 3A5000 in 2021, Chinese CPUs began to use the new LoongArch ISA. Described as being a ‘bit like MIPS or RISC-V’ in the Linux kernel documentation on this ISA, it features three variants, ranging from a reduced 32-bit version (LA32R) and standard 32-bit (LA32S) to a 64-bit version (LA64). In the current LS3A6000 CPU there are 16 cores with SMT support. In reviews these chips are shown to be rapidly catching up to modern x86_64 CPUs, including when it comes to overclocking.

Of course, these being China-only hardware, few Western reviewers have subjected the LS3A6000, or its upcoming successor the LS3A7000, to an independent test.

In addition to LoongArch, other Chinese companies are using RISC-V for their own microprocessors, such as SpacemiT, an AI-focused company whose products also include more generic processors. This includes the K1 octa-core CPU which saw use in the MuseBook laptop. As with all commercial RISC-V-based cores out today, these are no speed monsters, and even the SiFive Premier P550 SoC gets soundly beaten by a Raspberry Pi 4’s already rather long-in-the-tooth ARM-based SoC.

Perhaps the most successful use of RISC-V in China are the cores in Espressif’s popular ESP32-C range of MCUs, although here too they are the lower-end designs relative to the Xtensa Lx6 and Lx7 cores that power Espressif’s higher-end MCUs.

Considering all this, it wouldn’t be surprising if China’s ISA scene outside of embedded will feature mostly LoongArch, a lot of ARM, some x86_64 and a sprinkling of RISC-V to round it all out.

It’s All About The IP

The distinction between ISAs and microarchitecture can be clearly seen by contrasting Apple Silicon with other ARMv8-based CPUs. Although these all support a version of the same ARMv8 ISA, the magic sauce is in the intellectual property (IP) blocks that are integrated into the chip. These range from memory controllers, PCIe SerDes blocks, and integrated graphics (iGPU), to encryption and security features. Unless you are an Apple or Intel with your own GPU-solution, you will be licensing the iGPU block along with other IP blocks from IP vendors.

These IP blocks offer the benefit of being able to use off-the-shelf functionality with known performance characteristics, but they are also where much of the cost of a microprocessor design ends up going. Developing such functionality from scratch can pay for itself if you reuse the same blocks over and over like Apple or Qualcomm do. For a start-up hardware company this is one of the biggest investments, which is why they tend to license a fully manufacturable design from Arm.

The actual cost of the ISA in terms of licensing is effectively a rounding error, while the benefit of being able to leverage existing software and tooling is the main driver. This is why a new ISA like LoongArch may very well pose a real challenge to established ISAs in the long run, because it is being given a chance to develop in a very large market with guaranteed demand.

Spoiled For Choice

Meanwhile, the Power ISA is also freely available for anyone to use without licensing costs; the only major requirement is compliance with the Power ISA. The OpenPOWER Foundation is now also part of the Linux Foundation, with a range of IBM Power cores open sourced. These include the A2O core that’s based on the A2I core which powered the XBox 360 and the PlayStation 3’s Cell processor, as well as the Microwatt reference design that’s based on the much newer Power ISA 3.0.

Whatever your fancy is, and regardless of whether you’re just tinkering on a hobby or commercial project, it would seem that there is plenty of diversity in the ISA space to go around. Although it’s only human to pick a favorite and favor it, there’s something to be said for each ISA. Whether it’s a better teaching tool, more suitable for highly customized embedded designs, or simply because it runs decades worth of software without fuss, they all have their place.

Blue Ghost Watches Lunar Eclipse from the Lunar Surface

By: Maya Posch
16 March 2025 at 20:00
Firefly’s Blue Ghost lander’s first look at the solar eclipse as it began to emerge from its Mare Crisium landing site on March 14 at 5:30 AM UTC. (Credit: Firefly Aerospace)

After recently landing at the Moon’s Mare Crisium, Firefly’s Blue Ghost lunar lander craft was treated to a spectacle that’s rarely observed: a total solar eclipse as seen from the surface of the Moon. This entire experience was detailed on the Blue Ghost Mission 1 live blog. As the company notes, this is the first time that a commercial entity has been able to observe this phenomenon.

During this event, the Earth gradually moved in front of the Sun, as observed from the lunar surface. The Blue Ghost lander had to rely on its batteries during this time, as it captured the solar eclipse with a wide-angle camera on its top deck.

Unlike the Blood Moon seen from the Earth, there was no such cool effect observed from the lunar surface. The Sun simply vanished, leaving a narrow ring of light around the Earth. The reason for the Blood Moon becomes obvious, however, as the sunlight refracting through Earth’s atmosphere shifts from its normal white-ish color to an ominous red.

The entire sequence of images captured can be observed in the video embedded on the live blog and below, giving a truly unique view of something that few humans (and robots) have so far been able to observe.

You can make your own lunar eclipse. Or, make your own solar eclipse, at least once a day.

Transmitting Wireless Power Over Longer Distances

By: Maya Posch
16 March 2025 at 14:00
Proof-of-concept of the inductive coupling transmitter with the 12V version of the circuitry (Credit: Hyperspace Pirate, YouTube)

Everyone loves wireless power these days, almost vindicating [Tesla’s] push for it. One reason why transmitting electricity this way is a terrible idea is the massive losses involved once you increase the distance between transmitter and receiver. That said, there are ways to optimize wireless power transfer using inductive coupling, as [Hyperspace Pirate] demonstrates in a recent video.

Starting with small-scale proof of concept coils, the final version of the transmitter is powered off 120 VAC. The system has 10 kV on the coil and uses a half-bridge driver to oscillate at 145 kHz. The receiver matches this frequency precisely for optimal efficiency. The transmitting antenna is a 4.6-meter hexagon with eight turns of 14 AWG wire. During tests, a receiver of similar size could light an LED at a distance of 40 meters with an open circuit voltage of 2.6 V.
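
The 145 kHz figure is simply the frequency both LC tanks are tuned to, and the link behaves according to the usual resonant inductive coupling relations; the inductance, capacitance and coil-current symbols below are placeholders, since the exact component values aren't quoted here:

    f_0 = \frac{1}{2\pi\sqrt{LC}}, \qquad k = \frac{M}{\sqrt{L_1 L_2}}, \qquad V_{\mathrm{oc}} \approx \omega M I_{\mathrm{TX}}

With the coils tens of meters apart, the coupling coefficient k and mutual inductance M become tiny, which is why a 10 kV drive on a multi-meter transmitter coil yields only a 2.6 V open-circuit voltage at 40 meters.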

Although it’s also an excellent example of why air core transformers like this are lousy for efficient remote power transfer, a fascinating finding is that intermediate (unpowered) coils between the transmitter and receiver can help to boost the range due to coupling effects. Even if it’s not a practical technology (sorry, [Tesla]), it’s undeniable that it makes for a great science demonstration.

Of course, people do charge phones wirelessly. It works, but it trades efficiency for convenience. Modern attempts at beaming power around seem to focus more on microwaves or lasers.
