
Australia’s Steady March Towards Space

By: Lewin Day
2 April 2025 at 14:00

The list of countries to achieve their own successful orbital space launch is a short one, almost as small as the exclusive club of states that possess nuclear weapons. The Soviet Union was first off the rank in 1957, with the United States close behind in 1958, and a gaggle of other aerospace-adept states followed in the 1960s, 1970s, and 1980s. Italy, Iran, North Korea and South Korea have all joined the list since the dawn of the new millennium.

Absent from the list stands Australia. The proud island nation has never stood out as a player in the field of space exploration, despite offering ground station assistance to many missions from other nations over the years. However, the country has continued to inch its way to the top of the atmosphere, establishing its own space agency in 2018. Since then, development has continued apace, and the country’s first orbital launch appears to be just around the corner.

Space, Down Under

The Australian Space Agency has played an important role in supporting domestic space projects, like the ELO2 lunar rover (also known as “Roo-ver”). Credit: ASA

The establishment of the Australian Space Agency (ASA) took place relatively recently. The matter was seen as long overdue for an OECD member country: after its previous space authorities were disbanded in 1996, Australia was, by 2008, the only OECD member left without a national space agency. This was despite many facilities across the country contributing to international missions, providing critical radio downlink services and even welcoming JAXA’s Hayabusa2 sample capsule back to Earth.

Eventually, a groundswell grew, pressuring the government to put Australia on the right footing to seize growing opportunities in the space arena. Things came to a head in 2018, when the government established ASA to “support the growth and transformation of Australia’s space industry.”

ASA would serve a somewhat different role compared to organizations like NASA (USA) and ESA (EU). Many space agencies in other nations focus on developing launch vehicles and missions in-house, collaborating with international partners and aerospace companies to do so. ASA, by contrast, is focused on supporting and developing the local space industry rather than doing the engineering work of getting to space itself.

Orbital Upstarts

Just because the government isn’t building its own rockets doesn’t mean that Australia isn’t trying to get to orbit. That goal is the diehard mission of Gilmour Space Technologies. The space startup was founded in 2013, established its rocketry program in 2015, and has been marching towards orbit ever since. As is often the way, the journey has been challenging, but the payoff of genuine space flight is growing ever closer.

Gilmour Space moved fast, launching its first hybrid rocket back in 2016. The successful suborbital launch proved a useful demonstration of the company’s efforts to produce a rocket that used 3D-printed fuel. This early milestone helped the company secure investment that would support its push towards grander launches at greater scale. The company’s next major launch was planned for 2019, but frustration struck when the larger One Vision rocket suffered a failure just seven seconds before liftoff. Undeterred, the company continued development of a larger rocket, taking on further investment and signing contracts to launch payloads to orbit in the ensuing years.

Gilmour Space has worked hard to develop its hybrid rocket engines in-house. 

With orbital launches and commercial payload deliveries the ultimate goal, it wasn’t enough to just develop a rocket. Working with the Australian government, Gilmour Space established the Bowen Orbital Spaceport in early 2024, a launchpad suitable for the scale of its intended space missions. Located at Bowen in North Queensland, it sits just 20 degrees south of the equator, closer to it than Cape Canaveral, and useful for accessing low- and mid-inclination orbits. The hope was to gain approval to launch later that year, but thus far, no test flights have taken place. Licensing issues around the launch have meant the company has had to hold back on shooting for orbit.
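The latitude advantage can be made concrete with a little geometry: an eastward launch picks up the Earth’s rotational surface speed, which scales with the cosine of latitude. A quick sketch, using the standard equatorial figure of about 465 m/s and approximate site latitudes:

```python
import math

EQUATORIAL_SPEED = 465.1  # m/s, Earth's surface rotation speed at the equator

def rotation_boost(lat_deg: float) -> float:
    """Free eastward velocity (m/s) an eastward launch gains at a given latitude."""
    return EQUATORIAL_SPEED * math.cos(math.radians(lat_deg))

# Bowen sits near 20 deg S; Cape Canaveral near 28.5 deg N (approximate values)
print(f"Bowen:          {rotation_boost(20.0):.0f} m/s")
print(f"Cape Canaveral: {rotation_boost(28.5):.0f} m/s")
```

A direct eastward launch also cannot reach an orbital inclination lower than the launch site’s latitude, which is why near-equatorial sites suit low-inclination orbits.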

The rocket with which Gilmour Space intends to get there is called Eris. In Block 1 configuration, it stands 25 meters tall and is intended to launch payloads of up to 300 kg into low-Earth orbit. It’s a three-stage design: four of Gilmour’s Sirius hybrid rocket motors power the first stage, a single Sirius powers the second, and the third stage uses a smaller liquid rocket engine of Gilmour’s design, named Phoenix. The rocket was first raised vertically on the launch pad in early 2024, and a later “dress rehearsal” for launch was performed in September, with the rocket fully fueled. However, flight did not take place, as launch permits were still pending from Australia’s Civil Aviation Safety Authority (CASA).
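The logic of staging can be sketched with the ideal rocket equation, where each stage contributes Isp · g₀ · ln(m₀/mf) of delta-v. The specific impulses and masses below are invented placeholders for illustration, not published Eris figures:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_dv(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Ideal delta-v (m/s) of one stage, from the Tsiolkovsky rocket equation."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)

# Hypothetical Isp and stage masses -- for illustration only
stages = [(250.0, 30_000, 12_000),   # first stage (four hybrid motors)
          (280.0, 10_000, 4_000),    # second stage (single motor)
          (300.0, 2_000, 900)]       # liquid upper stage
total = sum(stage_dv(*s) for s in stages)
print(f"total ideal delta-v: {total:.0f} m/s")
```

Splitting the climb across stages is what lets a small vehicle accumulate the roughly 9 km/s (including losses) that orbit demands.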

The Eris rocket was first vertically erected on the launchpad in 2024, but progress towards launch has been slow since then. 

After a number of regulatory issues, the company’s first launch of Eris was slated for March 15, 2025. However, that day came and passed, even with CASA approval, as the required sign-off was still not forthcoming from the Australian Space Agency. Delays have hurt the company’s finances, hampering its ability to raise further funds. As for the rocket itself, expectations for Eris’s performance at this stage remain modest, even among those at Gilmour Space. Earlier this month, founder Adam Gilmour spoke to the Sydney Morning Herald about his expectations for the initial launch. Realistic about the odds of hitting orbit on the company’s first attempt, he expects it to take several launches, with some teething problems along the way. “It’s very hard to test an orbital rocket without just flying it,” he told the Herald. “We don’t have high expectations we’ll get to orbit… I’d personally be happy to get off the pad.”

Despite the trepidation, Eris stands as Australia’s closest shot at hitting the big time outside the atmosphere. Government approvals and technical hurdles still need to be overcome, with the Australian Space Agency noting that the company has licence conditions to meet before a full launch is approved. Still, before the year is out, Australia might join that vaunted list of nations that have leapt beyond the ground to circle the Earth from above. It will be a proud day when that comes to pass.


On Egyptian Pyramids and Why It’s Definitely Aliens

By: Maya Posch
1 April 2025 at 14:00

History is rather dull and unexciting to most people, which naturally invites exciting flights of fancy that can range from the innocent to outright conspiracies. Nobody truly believes that the astounding finds and (fully functioning) ancient mechanisms in the Indiana Jones & Uncharted franchises are real, with mostly intact ancient cities waiting for intrepid explorers along with whatever mystical sources of power, wealth or influence formed the civilization’s foundations before its tragic demise. Yet somehow Plato’s fictional Atlantis has taken on a life of its own, along with many other ‘lost’ civilizations, whether real or imagined.

Of course, if these aforementioned movies and video games were realistic, they would center around a big archaeological dig and thrilling finds like potsherds and cuneiform clay tablets, not ways to smite enemies and gain immortality. Nor would they involve solving complex mechanical puzzles to gain access to the big secret chamber, prior to walking out of the readily accessible back door. Reality is boring like that, which is why there’s a major temptation to spruce things up. With the Egyptian pyramids, as well as similar structures around the world, speaking to the human imagination, this has led to centuries of half-baked ideas and outright conspiracies.

Most recently, a questionable 2022 paper hinting at structures underneath the Pyramid of Khafre in Egypt was used for a fresh boost to old ideas involving pyramid power stations, underground cities and other fanciful conspiracies. Although we can all agree that the ancient pyramids in Egypt are true marvels of engineering, are we really on the cusp of discovering that the ancient Egyptians were actually provided with Forerunner technology by extraterrestrials?

The Science of Being Tragically Wrong

A section of the ‘runes’ at Runamo. (Credit: Entheta, Wikimedia)

In defense of fanciful theories regarding the Actual Truth™ about Ancient Egypt and kin, archaeology as we know it today didn’t really develop until the latter half of the 20th century; before then, the field was mostly a hobbyist pursuit, driven by curiosity as well as a desire for riches. Along the way many comical blunders were made, such as the Runamo runes in Sweden that turned out to be just random cracks in dolerite.

Less funny were attempts by colonists to erase the history of Great Zimbabwe (11th – ~17th century CE) and the Kingdom of Zimbabwe after the ruins of the abandoned capital were discovered by European colonists and explored in earnest in the 19th century. Much like the wanton destruction of local cultures in the Americas by European colonists and explorers who considered their own culture, religion and technology to be clearly superior, the history of Great Zimbabwe was initially rewritten so that no thriving African society could have formed it on its own; it had to be the result of outside influences.

In this regard it’s interesting how many harebrained ideas about archaeological sites have now effectively flipped, with mystical and mythical properties being assigned and these ‘Ancients’ being almost worshipped. Clearly, aliens visited Earth and that led to pyramids being constructed all around the globe. These would also have been the same aliens or lost civilizations that had technology far beyond today’s cutting edge, putting Europe’s fledgling civilization to shame.

Hence people keep dogpiling on especially the pyramids of Giza and its surrounding complex, assigning mystical properties to their ventilation shafts and expecting hidden chambers with technology and treasures interspersed throughout and below the structures.

Lost Technology

The Giant’s Causeway in Northern Ireland. (Credit: code poet, Wikimedia)

The idea of ‘lost technology’ is a pervasive one, mostly buoyed by the fact that you cannot prove a negative, only note an absence of evidence. Much like the possibility of a teapot being in orbit around the Sun right now, you cannot disprove that the Ancient Egyptians had hyper-advanced power plants using zero point energy back around 3,600 BCE. This ties in with the idea of ‘lost civilizations‘, which really caught on around the Victorian era.

Such romanticism for a non-existent past led to the idea of Atlantis as a real, lost civilization becoming pervasive, with the 1960s seeing significant hype around the Bimini Road. This undersea rock formation in the Bahamas was said to have been part of Atlantis, but is actually a perfectly cromulent geological formation. More recently a couple of German tourists got into legal trouble while trying to prove a connection between Egypt’s pyramids and Atlantis, a theory that refuses to die along with the notion that Atlantis was some kind of hyper-advanced civilization and not just a fictional society that Plato concocted to illustrate the folly of man.

Admittedly there is a lot of poetry in all of this when you consider it from that angle.

Welcome to Shangri-La… or rather Shambhala as portrayed in Uncharted 3.

People have spent decades of their lives and countless sums of money on trying to find Atlantis, Shangri-La (possibly inspired by Shambhala), El Dorado and similar fictional locations. The Iram of the Pillars which featured in Uncharted 3: Drake’s Deception is one of the lost cities mentioned in the Qur’an, and is incidentally another great civilization that saw itself meet a grim end through divine punishment. Iram is often said to be Ubar, which is commonly known as the Atlantis of the Sands.


All of this is reminiscent of the Giant’s Causeway in Northern Ireland, and the corresponding formation at Fingal’s Cave on the Scottish isle of Staffa, where eons ago molten basalt cooled and contracted into columns, much as drying mud cracks into semi-regular patterns. This particular natural formation did lead to many local myths, including one in which a giant built a causeway across the North Channel, hence the name.

Fortunately for this location, no ‘lost civilization’ tag was attached, and thus it remains a curious demonstration of how purely natural processes can create structures that one might assume to have required intelligence, thus providing fuel for conspiracies. So far only ‘Young Earth’ conspiracy folk have laid a claim on this particular site.

What we can conclude is that much like the Victorian age that spawned countless works of fiction on the topic, many of these modern-day stories appear to be rooted in a kind of romanticism for a past that never existed, with those affected interpreting natural patterns as something more in a sure sign of confirmation bias.

Tourist Traps

Tomb of the First Emperor Qin Shi Huang Di, Xi’an, China (Credit: Aaron Zhu)

One can roughly correlate the number of tourist visits with the likelihood of wild theories being dreamed up. These include the Egyptian pyramids, but also similar structures at the former sites of the Aztec and Maya civilizations. Similarly, the absolutely massive mausoleum of Qin Shi Huang in China, with its world-famous Terracotta Army, has led to incredible speculation on what might still be hidden inside the unexcavated tomb mound: entire seas and rivers of mercury that moved mechanically to simulate real bodies of water, a simulated starry sky, crossbows set to take out trespassers, and incredible riches.

Many of these features were described by Sima Qian in the first century BCE, who may or may not have been truthful in his biography of Qin Shi Huang. Meanwhile, China’s authorities have wisely put further excavations on hold, as they have found that many of the recovered artefacts degrade very quickly once exposed to air. The paint on the terracotta figures began to flake off rapidly after excavation, for example, reducing them to the plain figures which we are familiar with.

Tourism can be as damaging as careless excavation. As popular as the pyramids at Giza are, centuries of tourism have taken their toll, with vandalism, graffiti and theft increasing rapidly since the 20th century. The Great Pyramid of Khufu had already been pilfered for building materials over the course of millennia by the local population, but due to tourism some of its remaining top stones were unceremoniously tipped over the side to make a larger platform where tourists could have some tea while gazing out over the Giza Plateau, as detailed in a recent video on the History for Granite channel:

The recycling of building materials from antique structures was also the cause of the demise of the Labyrinth at the foot of the pyramid of Amenemhat III at Hawara. Once an architectural marvel, with reportedly twelve roofed courts spanning a total of 28,000 m2, today only fragments remain of its existence. This sadly is how most marvels of the Ancient World end up: looted ruins, ashes and shards left in the sand or mud, or reclaimed by nature, from which, with a lot of patience and the occasional stroke of fortune, we can piece together a picture of what it all once may have looked like.

Pyramid Power

Cover of The Giza Power Plant book. (Credit: Christopher Dunn)

When, in light of all this, we look at the claims made about the Pyramid of Khafre and the persistent conspiracies regarding this and other pyramids hiding great secrets, we can begin to see something of a pattern. Some people have really bought into these fantasies, while for others it’s just another way to embellish a location, to attract more tourists and sell more copies of their latest book on the extraterrestrial nature of pyramids and how they are actually amazing lost technologies. This latter category is called pseudoarchaeology.

Pyramids, of course, have always held magical powers, but the idea that they are literal power plants seems to have been coined by one Christopher Dunn, with the publication of his pseudo-archeological book The Giza Power Plant in 1998. That there would be more structures underneath the Pyramid of Khafre is a more recent invention, however. Feeding this particular flight of fancy appears to be a 2022 paper by Filippo Biondi and Corrado Malanga, in which synthetic aperture radar (SAR) was used to examine said pyramid’s interior and subsurface features.

Somehow this got turned into claims about multiple deep vertical wells descending 648 meters, along with other structures. Shared mostly via conspiracy channels, the story wildly extrapolates from claims made in the paper by Biondi et al., with said SAR-based claims never having been peer-reviewed or independently corroborated. On the RationalWiki entry, these and other claims related to the Giza pyramids are savagely tossed under the category of ‘pyramidiots’.

The art that conspiracy nuts produce when provided with generative AI tools. (Source: Twitter)

Back in the real world, archaeologists have found a curious L-shaped area underneath a royal graveyard near Khufu’s pyramid that was apparently later filled in, but which seems to lead to a deeper structure. This is likely to be part of the graveyard, but may also have been a feature that was abandoned during construction. Currently this area is being excavated, so we’re likely to figure out more details after archaeologists have finished gently sifting through tons of sand and gravel.

There is also the ScanPyramids project, which uses non-destructive and non-invasive techniques to scan Old Kingdom-era pyramids, such as muon tomography and infrared thermography. This way the internal structure of these pyramids can be examined in-depth. One finding was that of a number of ‘voids’, which could mean any of a number of things, but most likely do not contain world-changing secrets.

To this day the most credible view is still that the pyramids of the Old Kingdom were used as tombs, though unlike the mastabas and similar tombs, there is a credible argument to be made that rather than being designed to be hidden away, these pyramids would be eternal monuments to the pharaoh. They would be open for worship of the pharaoh, hence the ease of getting inside them. Ironically this would make them more secure from graverobbers, which was a great idea until the demise of the Ancient Egyptian civilization.

This is a point that’s made succinctly on the History for Granite channel, with the conclusion being that this goal of ‘inspiring awe’ to worshippers is still effective today, simply judging by the millions of tourists each year to these monuments, and the tall tales that they’ve inspired.

General Fusion Claims Success with Magnetized Target Fusion

By: Maya Posch
27 March 2025 at 14:00

It’s rarely appreciated just how much more complicated nuclear fusion is than nuclear fission. Whereas the latter involves a process that happens all around us without any human involvement, and where the main challenge is to keep the nuclear chain reaction within safe bounds, nuclear fusion means making atoms do something that goes against their very nature, outside of a star’s interior.

Fusing hydrogen isotopes can be done on Earth fairly readily these days, but doing it in a way that’s repeatable — bombs don’t count — and in a way that makes economic sense is trickier. As covered previously, plasma stability is a problem with the popular approach of tokamak-based magnetic confinement fusion (MCF). Although this core problem has now been largely addressed, and stellarators are mostly unbothered by this particular problem, a Canadian start-up figures that it can do even better, in the form of a nuclear fusion reactor based around the principle of magnetized target fusion (MTF).

Although General Fusion’s piston-based fusion reactor has left many people confused, MTF is based on real physics, and with GF’s current LM26 prototype having recently achieved first plasma, this seems like an excellent time to ask what MTF is, and whether it can truly compete with billion-dollar tokamak-based projects.

Squishing Plasma Toroids

Lawson criterion of important magnetic confinement fusion experiments (Credit: Horvath, A., 2016)

In general, to achieve nuclear fusion, the target atoms have to be pushed past the Coulomb barrier: the electrostatic repulsion that normally prevents nuclei from approaching each other closely enough to fuse. In stars, the process of nucleosynthesis is enabled by the intense pressure of the star’s own mass, which overcomes this electrostatic force.

Replicating the nuclear fusion process requires a similar way to overcome the Coulomb barrier, but in lieu of even a small-sized star like our Sun, we need alternate means such as much higher temperatures, alternative ways to provide pressure and longer confinement times. The efficiency of each approach was originally captured in the Lawson criterion, which was developed by John D. Lawson in a (then classified) 1955 paper (PDF on Archive.org).

In order to achieve a self-sustaining fusion reaction, the energy losses should be less than the energy produced by the reaction. The break-even point here is expressed as having a Q (energy gain factor) of 1, where the added energy and losses within the fusion process are in balance. For sustained fusion with excess energy generation, the Q value should be higher than 1, typically around 5 for contemporary fuels and fusion technology.
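In code, the gain factor is just a ratio of power out to power in; the sketch below only illustrates the bookkeeping, and the power figures are made up for the example:

```python
def q_factor(fusion_power_mw: float, heating_power_mw: float) -> float:
    """Scientific gain Q: fusion power released per unit of external heating power."""
    return fusion_power_mw / heating_power_mw

# Hypothetical operating points, in megawatts
breakeven = q_factor(100.0, 100.0)   # Q = 1: output merely balances input
target = q_factor(500.0, 100.0)      # Q = 5: the sort of regime a plant needs
print(breakeven, target)  # 1.0 5.0
```

Note that Q = 1 is only scientific breakeven; once the inefficiencies of heating systems and electricity generation are counted, a commercially useful plant needs considerably more.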

In the slow march towards ignition, we have seen many reports in the popular media that turn out to be rather meaningless, such as the horrendous inefficiency demonstrated by the laser-based inertial confinement fusion (ICF) at the National Ignition Facility (NIF). This makes it rather fascinating that what General Fusion is attempting is closer to ICF, just without the lasers and artisan Hohlraum-based fuel pellets.

Instead they use a plasma injector, a type of plasma railgun called a Marshall gun, which produces hydrogen isotope plasma that is subsequently contained in a magnetic field as a self-stable compact toroid. This toroid is then squished by a mechanical system in a matter of milliseconds, with the resulting compression inducing fusion. Creating this toroid is the feat that was recently demonstrated in the current Lawson Machine 26 (LM26) prototype reactor with its first plasma in the target chamber.

Magneto-Inertial Fusion

Whereas magnetic confinement fusion does effectively what it says on the tin, magnetized target fusion is pretty much a hybrid of magnetic confinement fusion and laser-based inertial confinement fusion. Because the magnetic containment is only there to keep the plasma in a nice stable toroid, it doesn’t have nearly the same requirements as in a tokamak or stellarator. Yet rather than using complex and power-hungry lasers, MTF applies mechanical energy using an impulse driver — the liner — that rapidly compresses the low-density plasma toroid.

Schematic of the Lawson Machine 26 MTF reactor. (Credit: General Fusion)

The juiciest parts of General Fusion’s experimental setup can be found in the Research Library on the GF website. The above graphic was copied from the LM26 poster (PDF), which provides a lot of in-depth information on the components of the device and its operation, as well as the experiments that informed its construction.

The next step will be to test the ring compressor that is designed to collapse the lithium liner around the plasma toroid, compressing it and achieving fusion.

Long Road Ahead

Interpretation of General Fusion’s commercial MTF reactor design. (Credit: Evan Mason)

As promising as this may sound, there is still a lot of work to do before MTF can be considered a viable option for commercial fusion. As summarized on the Wikipedia entry for General Fusion, the goal is to have a liquid liner rather than the solid lithium liner of LM26. This liquid lithium liner would both breed new tritium fuel from neutron exposure and provide the liner that compresses the deuterium-tritium fuel.

This liquid liner would also provide cooling, linked with a heat exchanger or steam generator to generate electricity. Because the liquid liner would be infinitely renewable, it should allow for about 1 cycle per second. To keep the liquid liner in place on the inside of the sphere, it would need to be constantly spun, further complicating the design.

Although getting plasma in the reaction chamber where it can be squished by the ring compressor’s lithium liner is a major step, the real challenge will be in moving from a one-cycle-a-day MTF prototype to something that can integrate not only the aforementioned features, but also run one cycle per second, while being more economical to run than tokamaks, stellarators, or even regular nuclear fission plants, especially Gen IV fast neutron reactors.

That said, there is a strong argument to be made that MTF is significantly more practical for commercial power generation than ICF. And regardless, it is just really cool science and engineering.

Top image: General Fusion’s Lawson Machine 26. (Credit: General Fusion)

2024 Hackaday Supercon Talk: Killing Mosquitoes with Freaking Drones, and Sonar

25 March 2025 at 14:00

Suppose that you want to get rid of a whole lot of mosquitoes with a quadcopter drone by chopping them up in the rotor blades. If you had really good eyesight and pretty amazing piloting skills, you could maybe fly the drone yourself, but honestly this looks like it should be automated. [Alex Toussaint] took us on a tour of how far he has gotten toward that goal in his amazingly broad-ranging 2024 Superconference talk. (Embedded below.)

The end result is an amazing 380-element phased sonar array that allows him to detect the location of mosquitoes in mid-air, identifying them by their particular micro-doppler return signature. It’s a gadget called LeSonar2, which he has open-sourced, and which doubtless has many other applications at the tweak of an algorithm.

Rolling back in time a little bit, the talk starts off with [Alex]’s thoughts about self-guiding drones in general. For obstacle avoidance, you might think of using a camera, but cameras can be heavy and require a lot of expensive computation. [Alex] favored ultrasonic range finding. An array of ultrasonic range finders can locate smaller objects, and more precisely, than the single ranger you probably have in mind. This got [Alex] into beamforming, and he built an early prototype, which we’ve actually covered in the past. If you’re into this sort of thing, the talk contains a very nice description of the necessary DSP.

[Alex]’s big breakthrough, though, came with shrinking down the ultrasonic receivers. The angular resolution that you can resolve with a beamforming array is limited by the distance between the microphone elements, and traditional ultrasonic devices like those we use in cars are kinda bulky. So here comes a hack: the TDK T3902 MEMS microphones work just fine up into the ultrasound range, even though they’re designed for human hearing. Combining 380 of these in a very tightly packed array, and pushing all of their parallel data into an FPGA for computation, led to the LeSonar2. Bigger transducers put out ultrasound pulses, the FPGA does some very intense filtering and combining of the output of each microphone, and the resulting 3D range data is sent out over USB.
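The packing requirement follows from array sampling theory: to steer a beam without grating lobes, element spacing should stay at or below half a wavelength. A quick check, assuming a 40 kHz operating frequency (an illustrative value; the talk’s exact frequency isn’t given here):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def wavelength(freq_hz: float) -> float:
    """Acoustic wavelength (m) at a given frequency."""
    return SPEED_OF_SOUND / freq_hz

def max_spacing(freq_hz: float) -> float:
    """Half-wavelength element spacing (m) needed to steer without grating lobes."""
    return wavelength(freq_hz) / 2.0

# At 40 kHz, elements must sit within about 4.3 mm of each other:
print(f"{max_spacing(40_000) * 1000:.2f} mm")  # 4.29 mm
```

That millimeter-scale pitch is exactly why tiny MEMS microphones make such a dense array practical where automotive-style transducers cannot.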

After a marvelous demo of the device, we get to the end-game application: finding and identifying mosquitoes in mid-air. If you don’t want to kill flies, wasps, bees, or other useful pollinators while eradicating the tiny little bloodsuckers that are the drone’s target, you need to be able to not only locate bugs, but discriminate mosquitoes from the others.

For this, he uses the micro-doppler signatures that the different wing beats of the various insects put out. Wasps have a very wide-band doppler echo – their relatively long and thin wings are moving slower at the roots than at the tips. Flies, on the other hand, have stubbier wings, and emit a tighter echo signal. The mosquito signal is even tighter.
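A toy version of that discrimination idea: estimate how spread out an echo’s spectrum is, and treat narrow returns as mosquito-like. The signals and the comparison here are invented for illustration and bear no relation to the real classifier:

```python
import numpy as np

def doppler_bandwidth(echo: np.ndarray, fs: float) -> float:
    """RMS bandwidth (Hz) of an echo's power spectrum around its centroid."""
    spec = np.abs(np.fft.rfft(echo)) ** 2
    freqs = np.fft.rfftfreq(len(echo), 1.0 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)
    return float(np.sqrt(np.sum((freqs - centroid) ** 2 * spec) / np.sum(spec)))

# Toy echoes: a chirp-like, broadband return (wasp-like) vs a
# near-pure tone (mosquito-like)
fs = 8000.0
t = np.arange(0, 0.1, 1.0 / fs)
wasp = np.sin(2 * np.pi * (500 + 2000 * t) * t)  # frequency sweeps upward
mosquito = np.sin(2 * np.pi * 600 * t)           # single tight tone
print(doppler_bandwidth(wasp, fs) > doppler_bandwidth(mosquito, fs))  # True
```

The real system works on ultrasound echoes rather than audio, but the principle is the same: spectral width of the return separates long flexible wings from short stubby ones.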

If you told us that you could use sonar to detect mosquitoes at a distance of a few meters, much less locate them and differentiate them from their other insect brethren, we would have thought that it was impossible. But [Alex] and his team are building these devices, and you can even build one yourself if you want. So watch the talk, learn about phased arrays, and start daydreaming about what you would use something like this for.


So What is a Supercomputer Anyway?

By: Maya Posch
19 March 2025 at 14:00

Over the decades, many labels have been coined to classify computer systems, usually when they got used in new fields or when technological improvements caused significant shifts. While the very first electronic computers were very limited and often not programmable, they soon morphed into something we’d recognize today as a computer, starting with World War 2’s Colossus and ENIAC, which saw use in cryptanalysis and military weapons programs, respectively.

The first commercial digital electronic computer wouldn’t appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5 ton systems mostly found their way to universities and kin, where they’d find welcome use in engineering, architecture and scientific calculations. This became the focus of new computer systems, effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.

A few decades later, more computer power could be crammed into less space than ever before including ever higher density storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper- and mainframe computer systems, one could be excused for asking the question: what even is a supercomputer?

Today’s Supercomputers

ORNL’s Summit supercomputer, fastest until 2020 (Credit: ORNL)

Perhaps a fair way to classify supercomputers is to note that being a ‘supercomputer’ is a highly time-limited property. During the 1940s, Colossus and ENIAC were without question the supercomputers of their era, while 1976’s Cray-1 wiped the floor with everything that came before, yet all of these are archaic curiosities next to today’s top two supercomputers. Both the El Capitan and Frontier supercomputers are exascale (1+ exaFLOPS in double precision IEEE 754 calculations) machines, based around commodity x86_64 CPUs in a massively parallel configuration.

Taking up 700 m2 of floor space at the Lawrence Livermore National Laboratory (LLNL) and drawing 30 MW of power, El Capitan’s 43,808 AMD EPYC CPUs are paired with the same number of AMD Instinct MI300A accelerators, each containing 24 Zen 4 cores plus CDNA3 GPU and 128 GB of HBM3 RAM. Unlike the monolithic ENIAC, El Capitan’s 11,136 nodes, containing four MI300As each, rely on a number of high-speed interconnects to distribute computing work across all cores.
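The exascale threshold is easy to sanity-check with a back-of-envelope calculation. The node and accelerator counts below come from the figures above; the per-accelerator FP64 throughput is an assumed round figure for illustration, not an official spec.

```python
# Back-of-envelope peak throughput for a massively parallel machine:
# total FLOPS = nodes x accelerators per node x per-accelerator FLOPS.

def peak_flops(nodes: int, accels_per_node: int, flops_per_accel: float) -> float:
    """Theoretical (not sustained) peak for a homogeneous cluster."""
    return nodes * accels_per_node * flops_per_accel

# 11,136 nodes with four MI300As each, per the text above; the ~61 TFLOPS
# FP64 figure per accelerator is assumed purely for illustration.
estimate = peak_flops(nodes=11_136, accels_per_node=4, flops_per_accel=61e12)
print(f"{estimate / 1e18:.2f} exaFLOPS")  # ~2.7: comfortably past exascale
```

Sustained benchmark numbers (what the rankings actually report) land well below such a theoretical peak, since real workloads never keep every core busy every cycle.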

At LLNL, El Capitan is used for effectively the same top-secret government work as ENIAC was, while Frontier at Oak Ridge National Laboratory (ORNL) held the title of fastest supercomputer until El Capitan came online about three years later. Although LLNL and ORNL currently have the fastest supercomputers, there are many more of these systems in use around the world, including for entirely innocent scientific research.

Looking at the current list of supercomputers, such as today’s Top 9, it’s clear that not only can supercomputers perform far more operations per second than their predecessors, they are also invariably massively parallel computing clusters. This wasn’t an easy change to make, as parallel computing brings a whole stack of complications and problems.
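One of those complications is captured by Amdahl's law: whatever fraction of a job remains serial caps the achievable speedup, no matter how many nodes you add. A quick sketch (the 95% parallelizable figure is an arbitrary example):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work and n the number of workers.

def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# With 95% of the work parallelizable, even a million cores can
# never deliver more than a 20x speedup over a single core.
for n in (2, 1024, 1_000_000):
    print(f"{n:>9} workers: {amdahl_speedup(0.95, n):.2f}x")
```

This is why supercomputer workloads are chosen and structured to keep the serial fraction as close to zero as possible.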

The Parallel Computing Shift

ILLIAC IV massively parallel computer’s Control Unit (CU). (Credit: Steve Jurvetson, Wikimedia)

The first massively parallel computer was the ILLIAC IV, conceptualized by Daniel Slotnick in 1952 and first successfully put into operation in 1975 when it was connected to ARPANET. Although only one quadrant was fully constructed, it produced 50 MFLOPS compared to the Cray-1’s 160 MFLOPS a year later. Despite the immense construction costs and spotty operational history, it provided a most useful testbed for developing parallel computation methods and algorithms until the system was decommissioned in 1981.

There was plenty of pushback against the idea of massively parallel computation, however, with Seymour Cray famously likening the use of many parallel processors instead of a single fast one to ‘plowing a field with 1024 chickens instead of two oxen’.

Ultimately there is only so far you can scale a single vector processor, of course, while parallel computing promised much better scaling, as well as the use of commodity hardware. A good example of this is the so-called Beowulf cluster, named after the original parallel computer built in 1994 by Thomas Sterling and Donald Becker at NASA. Such a cluster can be built from plain desktop computers, wired together using for example Ethernet, with open-source libraries like Open MPI enabling massively parallel computing without a lot of effort.
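Open MPI itself needs a cluster and an MPI runtime, but the scatter/compute/gather pattern it enables can be sketched with nothing more than Python's standard library, with local worker threads standing in for cluster nodes:

```python
# The scatter/compute/gather pattern at the heart of a Beowulf-style
# cluster, sketched with Python's standard library. In a real cluster
# each chunk would be shipped to a different node over the network by a
# library like Open MPI; here local worker threads stand in for nodes.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Each "node" works on its own slice of the data independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]        # scatter
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)    # compute
    return sum(partials)                                       # gather/reduce

assert parallel_sum_of_squares(range(1000)) == sum(x * x for x in range(1000))
```

The same three phases map directly onto MPI_Scatter, per-rank computation, and MPI_Reduce in real MPI code; the hard part on a real cluster is that the "scatter" and "gather" steps cost network time, which is why interconnect bandwidth matters so much.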

Not only does this approach enable the assembly of a ‘supercomputer’ from cheap-ish, off-the-shelf components, it’s also effectively the approach used for LLNL’s El Capitan, just with decidedly not-cheap compute hardware and interconnects. Even so, it’s cheaper than trying to build a monolithic vector processor with the same raw processing power, even after taking the messaging overhead of a cluster into account.

Mini And Maxi

David Lovett of Usagi Electric fame sitting among his FPS minisupercomputer hardware. (Credit: David Lovett, YouTube)

One way to look at supercomputers is that it’s not about the scale, but about what you do with it. Much like how governments, large businesses, and universities ended up with ‘Big Iron’ in the form of mainframes and supercomputers, there was a big market for minicomputers too. Here ‘mini’ meant something like a PDP-11 that would comfortably fit in the corner of an average office or university room.

The high-end versions of minicomputers were called ‘superminicomputers’, not to be confused with minisupercomputers, which are another class entirely. During the 1980s there was a brief surge in this latter class of supercomputers, designed to bring solid vector computing and similar supercomputer feats down to a size and price tag that might entice departments and other customers who would otherwise never consider such an investment.

The manufacturers of these ‘budget-sized supercomputers’ were generally not the typical big computer manufacturers, but smaller companies and start-ups like Floating Point Systems (later acquired by Cray) that sold array processors and similar parallel vector computing hardware.

Recently David Lovett (AKA Mr. Usagi Electric) embarked on a quest to recover and reverse-engineer as much FPS hardware as possible, with one of the goals being to build a full minisupercomputer system like those companies and universities might have used in the 1980s. This would involve attaching such an array processor to a PDP-11/44 system.

Speed Versus Reliability

Amidst all of these definitions, the distinction between a mainframe and a supercomputer is at least much more straightforward. A mainframe is a computer system designed for bulk data processing with as much built-in reliability and redundancy as the price tag allows. A modern example is IBM’s Z series of mainframes, with the ‘Z’ standing for ‘zero downtime’. These kinds of systems are used by financial institutions and anywhere else where every second of downtime means millions of dollars going up in flames.

This means hot-swappable processor modules, hot-swappable and redundant power supplies, not to mention hot spares and a strong focus on fault tolerant computing. All of these features are less relevant for a supercomputer, where raw performance is the defining factor when running days-long simulations and when other ways to detect flaws exist without requiring hardware-level redundancy.
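In practice, one of those "other ways to detect flaws" is software-level checkpoint/restart: a long-running simulation periodically saves its state so that a node failure only costs the work done since the last checkpoint, rather than demanding redundant hardware. A toy sketch follows; the file name, interval, and the "simulation" itself are invented for illustration.

```python
# Toy checkpoint/restart loop: instead of redundant hardware, a long
# simulation periodically saves its state, so a crash only loses the
# work since the last checkpoint.
import json
import os

CHECKPOINT = "sim_state.json"  # hypothetical checkpoint file

def load_state():
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "value": 0.0}

def save_state(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic: never leaves a half-written file

def run(total_steps=100, checkpoint_every=10):
    state = load_state()
    for step in range(state["step"], total_steps):
        state["value"] += 0.5        # stand-in for one simulation step
        state["step"] = step + 1
        if state["step"] % checkpoint_every == 0:
            save_state(state)
    return state

if __name__ == "__main__":
    print(run())
```

Real HPC checkpointing works on a grander scale, streaming many terabytes of node memory to a parallel filesystem, but the resume-from-last-known-good logic is the same.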

Considering the brief lifespan of supercomputers (currently on the order of a few years) compared to mainframes (decades), and the many years that the microcomputers on our desks can last, the life of a supercomputer seems like that of a bright and very brief flame, indeed.

Top image: Marlyn Wescoff and Betty Jean Jennings configuring plugboards on the ENIAC computer (Source: US National Archives)

Relativity Space Changes Course on Path to Orbit

By: Tom Nardi
17 March 2025 at 14:00

In 2015, Tim Ellis and Jordan Noone founded Relativity Space around an ambitious goal: to be the first company to put a 3D printed rocket into orbit. While additive manufacturing was already becoming an increasingly important tool in the aerospace industry, the duo believed it could be pushed further than anyone had yet realized.

Rather than assembling a rocket out of smaller printed parts, they imagined the entire rocket being produced on a huge printer. Once the methodology was perfected, they believed rockets could be printed faster and cheaper than they could be traditionally assembled. What’s more, in the far future, Relativity might even be able to produce rockets off-world in fully automated factories. It was a bold idea, to be sure. But then, landing rockets on a barge in the middle of the ocean once seemed pretty far fetched as well.

An early printed propellant tank.

Of course, printing something the size of an orbital rocket requires an exceptionally large 3D printer, so Relativity Space had to build one. It wasn’t long before the company had successfully tested their printed rocket engine and was scaling up its processes to print the vehicle’s propellant tanks. In 2018 Bryce Salmi, then an avionics hardware engineer at Relativity Space, gave a talk at Hackaday Supercon detailing the rapid progress the company had made so far.

Just a few years later, in March of 2023, Relativity’s first completed rocket sat fueled and ready to fly on the launch pad. The Terran 1 rocket wasn’t the entirely printed vehicle that Ellis and Noone had imagined, but with approximately 85% of the booster’s mass being made up of printed parts, it was as close as anyone had ever gotten before.

The launch of Terran 1 was a huge milestone for the company, and even though a problem in the second stage engine prevented the rocket from reaching orbit, the flight proved to critics that a 3D printed rocket could fly and that their manufacturing techniques were sound. Almost immediately, Relativity Space announced they would begin work on a larger and more powerful successor to the Terran 1 which would be more competitive with SpaceX’s Falcon 9.

Now, after an administrative shakeup that saw Tim Ellis replaced as CEO, the company has released a nearly 45 minute long video detailing their plans for the next Terran rocket — and explaining why they won’t be 3D printing it.

Meet the New Boss

For the mainstream press, the biggest story has been that former Google chief Eric Schmidt would be taking over as Relativity’s CEO. Tim Ellis will remain on the company’s board, but likely won’t have much involvement in the day-to-day operation of the company. Similarly, co-founder Jordan Noone stepped down from chief technology officer to take on an advisory role back in 2020.

Eric Schmidt

With the two founders of the company now sidelined, and despite the success of the largely 3D printed Terran 1, the video makes it clear that they’re pursuing a more traditional approach for the new Terran R rocket. At several points in the presentation, senior Relativity staffers explain the importance of remaining agile in the competitive launch market, and caution against letting the company’s historic goals hinder their path forward. They aren’t abandoning additive manufacturing, but it’s no longer the driving force behind the program.

For his part, Schmidt made what The New York Times reports was a “significant investment” in Relativity Space to secure a controlling interest in the company and his new position as CEO, although the details of the arrangement have so far not been made public. One could easily dismiss this move as Schmidt’s attempt to buy into the so-called “billionaire space race”, but it’s more likely he simply sees it as an investment in a rapidly growing industry.

Even before he came onboard, Relativity Space had amassed nearly $3 billion in launch contracts. Between his considerable contacts in Washington and his time as the chair of the DoD’s Defense Innovation Advisory Board, it’s likely Schmidt will attempt to put Relativity in the running for lucrative government launches as well.

All they need is a reliable rocket, and they’ll have a revenue stream for years.

Outsourcing Your Way to Space

In general, New Space companies like SpaceX and Rocket Lab have been far more open about their design and manufacturing processes than the legacy aerospace players. But even still, the video released by Relativity Space offers an incredibly transparent look at how the company is approaching the design of Terran R.

One of the most interesting aspects of the rocket’s construction is how many key components are being outsourced to vendors. According to the video, Relativity Space has contracted out the manufacturing of the aluminum “domes” that cap off the propellant tanks, the composite overwrapped pressure vessels (COPVs) that hold high pressure helium at cryogenic temperatures, and even the payload fairings.

This isn’t like handing the construction of some minor assemblies off to a local shop — these components are about as flight-critical as you can possibly get. In 2017, SpaceX famously lost one of their Falcon 9 rockets (and its payload) in an explosion on the launch pad due to a flaw in one of the booster’s COPVs. It’s believed the company ultimately brought production of COPVs in-house so they could have complete control of their design and fabrication.

Unpacking a shipment of composite overwrapped pressure vessels (COPVs) for Terran R

Farming out key components of Terran R to other, more established, aerospace companies is a calculated risk. On one hand, it will allow Relativity Space to accelerate the booster’s development time, and in this case time is very literally money. The sooner Terran R is flying, the sooner it can start bringing in revenue. The trade-off is that their launch operations will become dependent on the performance of said companies. If the vendor producing their fairings runs into a production bottleneck, there’s little Relativity Space can do but wait. Similarly, if the company producing the propellant tank domes decides to raise their prices, that eats into profits.

For the long term security of the project, it would make the most sense for Relativity to produce all of Terran R’s major components themselves. But at least for now, the company is more concerned with getting the vehicle up and running in the most expedient manner possible.

Printing Where it Counts

Currently, 3D printing a tank dome simply takes too long.

This is one area where Relativity is still banking on 3D printing in the long term. As Chief Technology Officer Kevin Wu explains in the video, they initially planned on printing the propellant tank domes out of aluminum, but found that they couldn’t produce them at a fast enough rate to support their targeted launch cadence.

At the same time, the video notes that the state-of-the-art in metal printing is a moving target (in part thanks to their own research and development), and that they are continuing to improve their techniques in parallel to the development of Terran R. It’s not hard to imagine a point in the future where Relativity perfects printing the tank domes and no longer needs to outsource them.

While printing the structural components of the rocket hasn’t exactly worked out as Relativity hoped, they are still fully committed to printing the booster’s Aeon R engines. Printing the engine not only allows for rapid design iteration, but the nature of additive manufacturing makes it easy to implement features such as integrated fluid channels which would be difficult and expensive to produce traditionally.

Printing an Aeon R engine

Of course, Relativity isn’t alone in this regard. Nearly every modern rocket engine is using at least some 3D printed components for precisely the same reasons, and they have been for some time now.

Which in the end, is really the major takeaway from Relativity’s update video. Though the company started out with an audacious goal, and got very close to reaching it, in the end they’ve more or less ended up where everyone else in aerospace finds themselves in 2025. They’ll use additive manufacturing where it makes sense, partner with outside firms when necessary, and use traditional manufacturing methods where they’ve proven to be the most efficient.

It’s not as exciting as saying you’ll put the world’s first 3D printed rocket into space, to be sure. But it’s the path that’s the most likely to get Terran R on the launch pad within the next few years, which is where they desperately need to be if they’ll have any chance of catching up to the commercial launch providers that are already gobbling up large swaths of the market.

Inexpensive Repairable Laptops, With Apple Style

10 March 2025 at 14:00

Despite a general lack of real-world experience, many teenagers are overly confident in their opinions, often to the point of brashness and arrogance. In the late 90s and early 00s I was no different, firmly entrenched in a clichéd belief that Apple computers weren’t worth the silicon they were etched onto—even though I’d never actually used one. Eventually, thanks to a very good friend in college, a bit of Linux knowledge, and Apple’s switch to Intel processors, I finally abandoned this one irrational belief. Now, I maintain an array of Apple laptops for my own personal use that are not only surprisingly repairable and hacker-friendly but also serve as excellent, inexpensive Linux machines.

Of course, I will have ruffled a few feathers suggesting Apple laptops are repairable and inexpensive. This is certainly not true of their phones or their newer computers, but there was a time before 2016 when Apple built some impressively high quality, robust laptops that use standard parts, have removable batteries, and, thanks to Apple dropping support for these older machines in their latest operating systems, can also be found for sale for next to nothing. In a way that’s similar to buying a luxury car that’s only a few years old and letting someone else eat the bulk of the depreciation, a high quality laptop from this era is only one Linux install away from being a usable and relatively powerful machine at an excellent bargain.

The History Lesson

To be fair to my teenage self though, Apple used to use less-mainstream PowerPC processors, which meant there was very little software cross-compatibility with x86 PCs. It was also an era before broadband meant that most people could move their work into the cloud and the browser, allowing them to be more agnostic about their operating system. Using an Apple when I was a teenager was therefore a much different experience than it is today. My first Apple was from this PowerPC era though; my ThinkPad T43 broke mid-way through college and a friend of mine gave me an old PowerBook G4 that had stopped working for her. Rather than have no computer at all, I swallowed my pride and was able to get the laptop working well enough to finish college with it. Part of the reason this repair was even possible is a major hacker-friendly aspect of Apple computers: they run Unix. (Note for commenters: technically Apple’s OS is Unix-like, but they have carried a UNIX certification since 2007.)

I had used Unix somewhat in Solaris-based labs in college but, as I mentioned in a piece about installing Gentoo on one of my MacBooks, I was also getting pretty deep into the Linux world at the time as well. Linux was also designed to be Unix-like, so most of the basic commands and tools available for it have nearly one-to-one analogs in Unix. The PowerBook’s main problem, along with a battery that needed a warranty replacement, was a corrupted filesystem and disk drive that I was able to repair using my new Linux knowledge. This realization marked a major turning point for me which helped tear down most of my biases against Apple computers.

MacBooks through the ages

Over the next few years or so I grew quite fond of the PowerBook, partially because I liked its 12″, netbook-like form factor and also because the operating system never seemed to crash. As a Linux user, my system crashes were mostly self-inflicted, but they did happen. As a former Windows user as well, the fact that it wouldn’t randomly bluescreen itself through no fault of my own was quite a revelation. Apple was a few years into their Intel years at this point as well, and seeing how easily these computers did things my PowerBook could never do, including running Windows, I saved up enough money to buy my first MacBook Pro, a mid-2009 model which I still use to this day. Since then I’ve acquired four other Apple laptops, most of which run Linux or a patched version of macOS that lets older, unsupported machines run modern versions of Apple’s operating system.

So if you’ve slogged through my coming-of-age story and are still curious about picking up an old Mac for whatever reason—a friend or family member has one gathering dust, you’re tired of looking at the bland styling of older ThinkPads while simultaneously growing frustrated with the declining quality of their newer ones, or just want to go against the grain a bit and do something different—I’ll try and help by sharing some tips and guidelines I’ve picked up through the years.

What to Avoid

Starting with broad categories of older Apple laptops to avoid, the first major red flag is any model with the butterfly keyboard, which Apple put on various laptops from 2015 to 2019 and which was so bad that a number of lawsuits were filed over it. Apple eventually relented and instituted a replacement program, but it has since expired, and repairs can otherwise cost hundreds of dollars. The second red flag is models with the T2 security chip. It’s not a complete dealbreaker, but it does add a lot of hassle if the end goal is a working Linux machine.

Additionally, pay close attention to any laptops with discrete graphics cards. Some older MacBooks have Nvidia graphics, which will almost always provide a below-average experience for a Linux user, especially on Apple laptops of this vintage. Others have AMD graphics, which enjoy better Linux support, but there were severe problems with the discrete GPUs in the 15″ and 17″ models from around 2011. Discrete graphics isn’t something to avoid outright like the butterfly keyboard, but it’s worth investigating the specific model year for problems if a graphics card is included. A final note: be aware of “Staingate”, a problem that affected some Retina displays between 2012 and 2015. This is of course not an exhaustive list, but it covers the major difficult-to-solve problems of this era of Apple laptop.

What to Look For

As for what specific computers are the best from this era for a bit of refurbishment and use, in my opinion the best mix of performance, hackability, and Linux-ability will be from the 2009-2012 Unibody era. These machines come in all sizes and are surprisingly upgradable, with standard SODIMM slots for RAM, 2.5″ laptop drives, an optical drive (which can be changed out for a second hard drive), easily replaceable batteries if you can unscrew the back cover, and plenty of ports. Some older models from this era have Core 2 Duo processors and should be avoided if you have the choice, but there are plenty of others from this era with much more powerful Core i5 or Core i7 processors.

After 2012, though, Apple started making some less-desirable changes for those looking to maintain their computers long-term, like switching to a proprietary M.2-like connector for storage and soldering down otherwise non-upgradable RAM. These machines can still be worthwhile, though, as many had Core i7 processors and at least 8 GB of RAM, and can still run Linux and even modern macOS versions quite capably. The batteries can still be replaced without too much hassle as well.

Inside the 2012 MacBook Pro. Visible here are the 2.5″ SSD, removable battery, standard SODIMM RAM slots, optical drive, and cooling fan.

Of course, a major problem with these computers is that they all have processors that have the Intel Management Engine coprocessor installed, so they’re not the most privacy-oriented machines in existence even if Linux is the chosen operating system. It’s worth noting, though, that some MacBooks from before the unibody era can run the open-source bootloader Libreboot but the tradeoff, as with any system capable of running Libreboot, is that they’re a bit limited in performance even compared to the computers from just a few years later.

Out of the five laptops I own, four are from the pre-butterfly era including my two favorites. Topping the list is a mid-2012 13″ MacBook Pro with Intel graphics that’s a beast of a Debian machine thanks to upgrades to a solid state drive and to 16 GB of RAM. It also has one of the best-feeling laptop keyboards I’ve ever used to write with, and is also the computer I used to experiment with Gentoo.

Second place goes to a 2015 11″ MacBook Air, a netbook-style Apple that I like for its exceptional portability even though it’s not as upgradable as I might otherwise like. It will have 4 GB of RAM forever, but this is not much of a problem for Debian. I also still have my 2009 MacBook Pro, which runs macOS Sonoma thanks to OpenCore Legacy Patcher. This computer’s major weakness is its Nvidia graphics card, so it isn’t as good a Linux machine as the others and occasionally locks up when running Debian for this reason. But it has also been upgraded with an SSD and 8 GB of RAM, so Sonoma still runs pretty well on it despite its age. Sequoia, on the other hand, dropped support for dual-core machines, so I’m not sure what I will do with it after Sonoma is no longer supported.

A 13″ MacBook Air from 2013. Not quite as upgradable as the 2012 MacBook Pro but still has a removable battery and a heat sink which can be re-pasted much more easily.

My newest Apple laptop is an M1 MacBook Air, which I was excited about when it launched because I’m a huge fan of ARM-based personal computers for more reasons than one. Although the M1 does have essentially no user-repairability unless you want to go to extremes, I have some hope that this will last me as long as my MacBook Pros have thanks to a complete lack of moving parts and also because of Asahi Linux, a version of Fedora which is built for Apple silicon. Whenever Apple stops providing security patches for this machine, I plan to switch it over to this specialized Linux distribution.

Why Bother?

But why spend all this effort keeping these old machines running at all? If repairability is a major concern, laptops from companies like System76 or Framework are arguably a much better option. Not to mention that, at least according to the best Internet commenters out there, Apple computers aren’t supposed to be fixable, repairable, or upgradable at all. They’re supposed to slowly die as upgrades force them to be less useful.

While this is certainly true for their phones and their more modern machines to some extent, part of the reason I keep these older machines running is to go against the grain and do something different, like a classic car enthusiast who picks a 70s era Volkswagen to drive to and from the office every day instead of a modern Lexus. It’s also because at times I still feel a bit like that teenager I was. While I might be a little wiser now from some life experiences, I believe some amount of teenage rebellion can be put to use stubbornly refusing to buy the latest products year after year from a trillion-dollar company which has become synonymous with planned obsolescence. Take that, Apple!

