Alternatives Don’t Need to be Bashed

By default, bash is the most popular command language simply because it’s included in most *nix operating systems. Additionally, people don’t tend to spend a lot of time thinking about whatever their computer uses for scripting as they might for other pieces of software like a word processor or browser. If you are so inclined to take a closer look at this tool that’s often taken for granted, there are a number of alternatives to bash and [monzool] wanted to investigate them closely.

Unlike similar write-ups [monzool] has come across, where the authors didn’t actually use the scripting languages under investigation, [monzool] plans to use each one to accomplish specific objectives. This will allow them to get a feel for the languages and whether or not they are acceptable alternatives to bash. Moving through directories, passing commands back and forth, manipulating strings, searching for files, and changing terminal display settings are all included in this task list. A few languages are tossed out before initial testing even begins for not meeting certain requirements, one example being that they aren’t particularly useful in [monzool]’s preferred embedded environments. Even so, there are enough bash alternatives left to test out ten separate languages.

Unfortunately, at the end of the day, none of the ten selected would make a true replacement for bash, at least for [monzool]’s use case, but there were a few standouts nonetheless. Nushell was interesting for being a more modern, advanced take on the shell, and [monzool] found Janet to be a fun and interesting project, though it had limitations with cross-compiling. All in all, though, this seemed to be an enjoyable exercise that we’d recommend if you actually want to get into the weeds on what scripting languages are capable of. Another interesting one we featured a while back attempts to perform as a shell and a programming language simultaneously.

Linux Fu: Audio Network Pipes

Life was simpler when everything your computer did was text-based. It is easy enough to shove data into one end of a pipe and take it out of the other. Sure, if the pipe extends across the network, you might have to call it a socket and take some special care. But how do you pipe all the data we care about these days? In particular, I found I wanted to transport audio from the output of one program to the input of another. Like most things in Linux, there are many ways you can get this done and — like most things in Linux — only some of those ways will work depending on your setup.

Why?

There are many reasons you might want to take an audio output and process it through a program that expects audio input. In my case, it was ham radio software. I’ve been working on making it possible to operate my station remotely. If all you want to do is talk, it is easy to find software that will connect you over the network.

However, if you want to do digital modes like PSK31, RTTY, or FT8, you may have a problem. The software to handle those modes all expect audio from a soundcard. They also want to send audio to a soundcard. But, in this case, the data is coming from a program.

Of course, one answer is to remote desktop into the computer directly connected to the radio. However, most remote desktop solutions aren’t made for high-fidelity and low-latency audio. Plus, it is nice to have apps running directly on your computer.

I’ll talk about how I’ve remoted my station in a future post, but for right now, just assume we want to get a program’s audio output into another program’s audio input.

Sound System Overview

Someone once said, “The nice thing about standards is there are so many of them.” This is true for Linux sound, too. The most common way to access a soundcard is via ALSA, short for Advanced Linux Sound Architecture. There are other methods, but this is more or less the lowest common denominator on most modern systems.

However, most modern systems add one or more layers so you can do things like easily redirect sound from a speaker to a headphone, for example. Or ship audio over the network.

The most common layer over ALSA is PulseAudio, which was the de facto standard for many years. These days, you see many distros moving to PipeWire.

PipeWire is newer and has a lot of features, but perhaps the best one is that it is easy to set up to look like PulseAudio. Software that understands PipeWire can use it natively, and programs that don’t can simply treat it as PulseAudio.

There are other systems, too, and they all interoperate in some way. While OSS is not as common as it once was, JACK is still found in certain applications. Many choices!

One Way

There are many ways you can accomplish what I was after. Since I am running PipeWire, I elected to use qpwgraph, which is a GUI that shows you all the sound devices on the system and lets you drag lines between them.

It is super powerful but also super cranky. As things change, it tends to want to redraw the “graph,” and it often does it in a strange and ugly way. If you name a block to help you remember what it is and then disconnect it, the name usually goes back to the default. But these are small problems, and you can work around them.

In theory, you should be able to just grab the output and “wire” it to the other program’s input. In fact, that works, but there is one small problem: both PipeWire and PulseAudio only show a program’s stream while it is actually making sound, and when it stops, the source vanishes.

This makes it very hard to set up what I wanted. I wound up using a loopback device so there was something permanent for the receiving side to connect to, with the transient sending stream feeding the other end.

Here’s the graph I wound up with:

A partial display of the PipeWire configuration

I omitted some of the devices and streams that didn’t matter, so it looks pretty simple. The box near the bottom right represents my main speakers. Note that the radio speaker device (far left) has outputs to the speaker and to the JTDX in box.

This lets me hear the audio from the radio and allows JTDX to decode the FT8 traffic. Sending is a little more complicated.

The radio-in boxes are the loopback device. You can see it hooked to the  JTDX out box because when I took the screenshot, I was transmitting. If I were not transmitting, the out box would vanish, and only the pipe would be there.

Everything that goes to the pipe’s input also shows up as the pipe’s output and that’s connected directly to the radio input. I left that box marked with the default name instead of renaming it so you can see why it is worth renaming these boxes! If you hover over the box, you’ll see the full name which does have the application name in it.

That means JTDX has to be set to listen and send to the streams in question. The radio also has to be set to the correct input and output. Usually, setting them to Pulse will work, although you might have better luck with the actual pipe or sink/source name.

In order to make this work, though, I had to create the loopback device:

pw-loopback -n radio-in -m '[FL FR]' --capture-props='[media.class=Audio/Sink]' --playback-props='[media.class=Audio/Source]' &

This creates the device as a sink with stereo channels that connect to nothing by default. Sometimes, I only connect the left channels since that’s all I need, but you may need something different.

Other Ways

There are many ways to accomplish this, including using the pw-link utility or setting up special configurations. The PipeWire documentation has a page that covers at least most of the scenarios.
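If you’d rather script the wiring than drag lines around in qpwgraph, pw-link can make the same connections from the command line. Here’s a minimal sketch; the port names are invented for illustration, so list your own first and substitute them:

# list the available output and input ports
pw-link --output
pw-link --input

# connect the loopback's output to the radio's input (example names, left channel only)
pw-link "radio-in:capture_FL" "alsa_output.usb-RadioCodec:playback_FL"

# inspect or remove links later
pw-link --links
pw-link --disconnect "radio-in:capture_FL" "alsa_output.usb-RadioCodec:playback_FL"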

You can also create this kind of virtual device and wiring with PulseAudio. If you need to do that, investigate the pactl command and use it to load the module-loopback module.
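As a rough sketch of that approach (the sink names here are only examples), you would create a virtual sink and then loop its monitor into a real output:

# create a virtual sink to act as the stand-in device
pactl load-module module-null-sink sink_name=radio-in

# feed whatever lands in that sink to a real output device
pactl load-module module-loopback source=radio-in.monitor sink=alsa_output.pci-0000_00_1f.3.analog-stereo latency_msec=20

# undo it later by unloading the modules
pactl unload-module module-loopback
pactl unload-module module-null-sink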

It is even possible to use the snd-aloop module to create loopback devices. However, PipeWire seems to be the future, so unless you are on an older system, it is probably better to stick to that method.
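For completeness, the ALSA route is a one-liner, after which a new “Loopback” card shows up next to your real hardware:

sudo modprobe snd-aloop
aplay -l   # the Loopback card should now be listed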

Sound Off!

What’s your favorite way to route audio? Why do you do it? What will you do with it? I’ll have a post detailing how this works to allow remote access to a ham transceiver, although this is just a part of the equation. It would be easy enough to use something like this and socat to stream audio around the network in fun ways.
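As a very rough sketch of that idea (the port number, the radio-host name, and the radio-in.monitor source are all placeholders), you could ship raw PCM over TCP with ffmpeg and socat, playing it back into PulseAudio or PipeWire’s Pulse shim on the far end:

# on the machine attached to the radio: capture the loopback's monitor and send it
ffmpeg -f pulse -i radio-in.monitor -ac 2 -ar 48000 -f s16le - | socat - TCP-LISTEN:7355,reuseaddr

# on the remote machine: receive and play
socat TCP:radio-host:7355 - | ffmpeg -f s16le -ac 2 -ar 48000 -i - -f pulse "remote radio audio"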

We’ve talked about PipeWire for audio and video before. Of course, connecting blocks for audio processing makes us want to do more GNU Radio.

Recreating Unobtainium Weather Station Sensors

Imagine you own a weather station. Then imagine that after some years have passed, you’ve had to replace one of the sensors multiple times. Your new problem is that the sensor is no longer available. What does a hacker like [Luca] do? Build a custom solution, of course!

[Luca]’s work concerns the La Crosse WS-9257F-IT weather station, and the repeated failures of the TX44DTH-IT external sensor. Thankfully, [Luca] found that the weather station’s communication protocol had been thoroughly reverse-engineered by [Fred], among others. He then set about creating a bridge to take humidity and temperature data from Zigbee sensors hooked up to his Home Assistant hub, and send it to the La Crosse weather station. This was achieved with the aid of an SX1276 LoRa module on a TTGO LoRa board. Details are on GitHub for the curious.

[Luca] didn’t just work on the Home Assistant integration, though. A standalone sensor was also developed, based on the Xiao SAMD21 microcontroller board and a BME280 temperature, pressure, and humidity sensor. It too can integrate with the La Crosse weather station, and proved useful for one of [Luca]’s friends who was in the same boat.

Ultimately, it sucks when a manufacturer no longer supports hardware that you love and use every day. However, the hacking community has a way of working around such trifling limitations. It’s something to be proud of—as the corporate world leaves hardware behind, the hackers pick up the slack!

Humans Can Learn Echolocation Too

Most of us associate echolocation with bats. These amazing creatures are able to chirp at frequencies beyond the limit of our hearing, and they use the reflected sound to map the world around them. It’s the perfect technology for navigating pitch-dark cave systems, so it’s understandable why evolution drove down this innovative path.

Humans, on the other hand, have far more limited hearing, and we’re not great chirpers, either. And yet, it turns out we can learn this remarkable skill, too. In fact, research suggests it’s far more achievable than you might think—for the sighted and vision impaired alike!

Bounce That Sound

Bats are the most famous biological users of echolocation. Credit: Petteri Aimonen

Before we talk about humans using echolocation, let’s examine how the pros do it. Bats are nature’s acoustic engineers, emitting rapid-fire ultrasonic pulses from their larynx that can range from 11 kHz to over 200 kHz. Much of that range is far beyond human hearing, which tops out at under 20 kHz. As these sound waves bounce off objects in their environment, the bat’s specialized ultrasonic-capable ears capture the returning echoes. Their brain then processes these echoes in real-time, comparing the outgoing and incoming signals to construct a detailed 3D map of their surroundings. The differences in echo timing tell them how far away objects are, while variations in frequency and amplitude reveal information about size, texture, and even movement. Bats will vary between constant-frequency chirps and frequency-modulated tones depending on where they’re flying and what they’re trying to achieve, such as navigating a dark cavern or chasing prey.  This biological sonar is so precise that bats can use it to track tiny insects while flying at speed.

Humans can’t naturally produce sounds in the ultrasonic frequency range. Nor could we hear them if we did. That doesn’t mean we can’t echolocate, though—it just means we don’t have quite the same level of equipment as the average bat. Instead, humans can achieve relatively basic echolocation using simple tongue clicks. In fact, a research paper from 2021 outlined that skills in this area can be developed with as little as a 10-week training program. Over this period, researchers successfully taught echolocation to both sighted and blind participants using a combination of practical exercises and virtual training. A group of 14 sighted and 12 blind participants took part, with the former using blindfolds to negate their vision.

The aim of the research was to investigate click-based echolocation in humans. When a person makes a sharp click with their tongue, they’re essentially launching a sonic probe into their environment. As these sound waves radiate outward, they reflect off surfaces and return to the ears with subtle changes. A flat wall creates a different echo signature than a rounded pole, while soft materials absorb more sound than hard surfaces. The timing between click and echo precisely encodes distance, while differences between the echoes reaching each ear allow for direction finding.
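The arithmetic behind the distance cue is straightforward. Sound travels at roughly 343 m/s in air, and the echo makes a round trip, so as a rule of thumb:

distance ≈ (speed of sound × echo delay) / 2

For example, an object about a meter away returns its echo in roughly 6 ms: 343 m/s × 0.006 s / 2 ≈ 1 m.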

The orientation task involved asking participants to use mouth clicks to determine the way a rectangular object was oriented in front of them. Credit: research paper
The size discrimination task asked participants to determine which disc was bigger solely using echolocation. Credit: research paper 

The training regime consisted of a variety of simple tasks. The researchers aimed to train participants on size discrimination, with participants facing two foam board disks mounted on metal poles. They had to effectively determine which foam disc was larger using only their mouth clicks and their hearing. The program also included an orientation challenge, which used a single rectangular board that could be rotated to different angles. The participants had to again use clicks and their hearing to determine the orientation of the board. These basic tools allowed participants to develop increasingly refined echo-sensing abilities in a controlled environment.

Perhaps the most intriguing part of the training involved a navigation task in a virtually simulated maze. Researchers first created special binaural recordings of a mannequin moving through a real-world maze, making clicks as it went. They then created virtual mazes that participants could navigate using keyboard controls. As they navigated through the virtual maze, without vision, the participants would hear the relevant echo signature recorded in the real maze. The idea was to allow participants to build mental maps of virtual spaces using only acoustic information. This provided a safe, controlled environment for developing advanced navigation skills before applying them in the real world. Participants also practiced echolocating in the real world, navigating freely with experimenters on hand to guide them if needed.

Participants were trained to navigate a virtual maze using audio cues only. Credit: research paper

The most surprising finding wasn’t that people could learn echolocation – it was how accessible the skill proved to be. Previous assumptions about age and visual status being major factors in learning echolocation turned out to be largely unfounded. While younger participants showed some advantages in the computer-based exercises, the core skill of practical echolocation was  accessible to all participants. After 10 weeks of training, participants were able to correctly answer the size discrimination task over 75% of the time, and at increased range compared to when they began. Orientation discrimination also improved greatly over the test period to a success rate over 60% for the cohort. Virtual maze completion times also dropped by over 50%.

Over time, participants improved in all tasks—particularly the size discrimination task, as seen in the results on this graph. The difficulty level of the tasks was also scaled over time, presenting a greater challenge as participants improved their echolocation skills. Credit: research paper

The study also involved a follow-up three months later with the blind members of the cohort. Participants credited the training with improving their spatial awareness, and some noted they had begun to use the technique to find doors or exits, or to make their way through strange places.

What’s particularly fascinating is how this challenges our understanding of basic human sensory capabilities. Echolocation doesn’t involve adding new sensors or augmenting existing ones—it’s just about training the brain to extract more information from signals it already receives. It’s a reminder that human perception is far more plastic than we often assume.

The researchers suggest that echolocation training should be integrated into standard mobility training for visually impaired individuals. Given the relatively short training period needed to develop functional echo-sensing abilities, it’s hard to argue against its inclusion. We might be standing at the threshold of a broader acceptance of human echolocation, not as an exotic capability, but as a practical skill that anyone can learn.

The Junk Machine Prints Corrupted Advertising On Demand

[ClownVamp]’s art project The Junk Machine is an interactive and eye-catching machine that, on demand, prints out an equally eye-catching and unique yet completely meaningless (one may even say corrupted) AI-generated advertisement for nothing in particular.

The machine is an artistic statement on how powerful software tools with genuine promise and usefulness to creative types are finding their way into marketers’ hands, resulting in a deluge of, well, junk. This machine simplifies and magnifies that in a physical way.

We can’t help but think that The Junk Machine is in a way highlighting Sturgeon’s Law (paraphrased as ‘ninety percent of everything is crud’) which happens to be particularly applicable to the current AI landscape. In short, the ease of use of these tools means that crud is also being effortlessly generated at an unprecedented scale, swamping any positive elements.

As for the hardware and software, we’re very interested in what’s inside. Unfortunately there are no deep technical details, but the broad strokes are that The Junk Machine uses an embedded NVIDIA Jetson loaded up with SDXL Turbo, an open-source AI image generator based on Stable Diffusion that can be installed and run locally. When a user mashes the large red button, the machine generates a piece of AI junk mail in real time without any need for a network connection of any kind, and prints it on an embedded printer.

Watch it in action in the video embedded below, just under the page break. There are a few more photos on [ClownVamp]’s X account.

Electric Motors Run Continuously at Near-Peak Power

For a lot of electrical and mechanical machines, there are nominal and peak ratings for energy output or input. If you’re in marketing or advertising, you’ll typically look at the peak rating and move on with your day. But engineers need to know that most things can only operate long term at a fraction of this peak rating, whether it’s a power supply in a computer, a controller on an ebike, or the converter on a wind turbine. But this electric motor system has a unique cooling setup allowing it to function at nearly full peak rating for an unlimited amount of time.

The motor, called the Super Continuous Torque motor and built by German automotive manufacturer Mahle, is capable of delivering 92% of its peak output power continuously thanks to a unique oil cooling system which is able to remove heat at a rapid rate. Heat is the major limiter for machines like this; typically, when operating at a peak rating, a motor would need to reduce power output to cool down so that major components don’t start melting or otherwise failing. Given that the largest of these motors have output power ratings of around 700 horsepower, that’s quite an impressive benchmark.

The motor is meant for use in passenger vehicles but also tractor-trailer style trucks, where a motor able to operate at its peak rating would mean a smaller motor, less weight, or both, making it easier to fit into the space available as well as more economically viable. Mahle reports that these motors are ready for production, so we should be seeing them help ease the transportation industry into electrification. If you’re more concerned about range than output power, though, there’s a solution there as well, so you don’t have to be stuck behind the times with fossil fuels forever.

Thanks to [john] for the tip!

Building A Pi-Powered LED Chess Board

If you live near Central Park or some other local chess hub, you’re likely never short of opponents for a good game. If you find yourself looking for a computer opponent, or you just prefer playing online, you might like this LED chessboard from [DIY Machines] instead.

At heart, it’s basically a regular chessboard with addressable LEDs of the WS2812B variety under each square. The lights are under the command of an Arduino Nano, which is also tasked with reading button inputs from the board’s side panel. The Nano is interfaced with a Raspberry Pi, which is the true brains of the operation. The Pi handles chess tasks—checking the validity of moves, acting as a computer opponent, and connecting online for games against other humans if so desired. Everything is wrapped up with 3D printed parts, making this an easy project to build for the average DIY maker.

The video tutorial does a great job of covering the design. It’s a relatively simple project at heart, but the presentation is great and it looks awfully fun to play with. We’ve featured some other great builds from [DIY Machines] before, too. Video after the break.

Solar Orbiter Takes Amazing Solar Pictures

There’s an old joke that they want to send an exploratory mission to the sun, but to save money, they are going at night. The European Space Agency’s Solar Orbiter has gotten closer to our star than almost anything else we’ve sent to study it on purpose, and the pictures it took last year were from less than 46 million miles away. That sounds far away, but in space terms, that’s awfully close to the nuclear furnace. The pictures are amazing, and the video below is also worth watching.

Because the craft was so close, each picture it took was just a small part of the sun’s surface. ESA stitched together multiple images to form the final picture, which shows the entire sun as 8,000 pixels across. We’ll save you the math. We figure each pixel is worth about 174 kilometers or 108 miles, more or less.
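For the curious, the arithmetic is simple. Taking the sun’s diameter as roughly 1.39 million kilometers:

1,392,000 km / 8,000 pixels ≈ 174 km per pixel, or about 108 miles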

The stunning images used the Polarimetric and Helioseismic Imager and the Extreme Ultraviolet Imager. The first instrument captured the visible-light view along with the magnetic field lines, and it also provided a velocity map. The UV instrument took pictures of the corona.

Understanding the sun is important because it greatly impacts our life on Earth. Technology is especially sensitive, and, lest we forget, massive solar disruptions have happened before.

An Over-Engineered Basement Monitor

[Stephen] has a basement that depends on a sump pump. What that means is if the pump fails or the power goes out, the basement floods—which is rather undesirable. Not wanting to rely on a single point of failure, [Stephen] decided to build a monitor for the basement situation, which quickly spiralled to a greater degree of complexity than he initially expected.

The initial plan was just to have water level sensors reporting data over a modified CATS packet radio transmitter. On the other end, the plan was to capture the feed via a CATS receiver, pipe the data to the internet via FELINET, and then have the data displayed on a Grafana dashboard. Simple enough. From there, though, [Stephen] started musing on the possibilities. He thought about capturing humidity data to verify the dehumidifier was working. Plus, temperature would be handy to get early warning before any pipes were frozen in colder times. Achieving those aims would be easy enough with a BME280 sensor, though hacking it into the CATS rig was a little challenging.

The results are pretty neat, though. [Stephen] can now track all the vital signs of his basement remotely, with all the data displayed elegantly on a nice Grafana dashboard. If you’re looking to get started on a similar project, we’ve featured a great Grafana guide at a previous Supercon, just by the by. All in all, [Stephen’s] project may have a touch of the old overkill, but sometimes, the most rewarding projects are the ones you pour your heart and soul into!

E-Ink Screen Combined With Analog Dial Is Epic Win

Analog dials used to be a pretty common way of displaying information on test equipment and in industrial applications. They fell out of favor as more advanced display technologies became cheaper. However, if you combine an analog dial with a modern e-ink display, it turns out you get something truly fantastic indeed.

This build comes to us from [Arne]. The concept is simple—get an e-ink display, and draw a dial on it using whatever graphics and scale you choose. Then, put it behind the needle of a traditional coil-driven meter movement, in place of the usual paper scale. Now, you have an analog dial that can display any quantity you desire. Just update the screen to show a different scale as needed. Meanwhile, if you don’t need to change the scale, the e-ink panel draws zero power and keeps showing the same thing.

[Arne] explains how it all works in the writeup. It’s basically a LilyGo T5 ESP32 board with an e-ink screen attached, and it’s combined with a MF-110A multimeter. It’s super easy to buy that stuff and start tinkering with the concept yourself. [Arne] uses it with Home Assistant, which is as good an idea as any.

You get all the benefits of a redrawable display, with the wonderful visual tactility of a real analog dial. It’s a build that smashes old and new together in the best way possible. It doesn’t hurt that [Arne] chose a great retro font for the dial, either. Applause all around!

Square Roots 1800s Style — No, the Other 1800s

[MindYourDecisions] presents a Babylonian tablet dating back to around 1800 BC that shows that the diagonal of a unit square is the square root of two, or about 1.41421. How did they know that? We don’t know for sure how they computed it, but experts think it is the same as the ancient Greek method written down by Hero. It is a specialized form of Newton’s method. You can follow along and learn how it works in the video below.

The method is simple. You guess the answer first, then you compute the difference and use that to adjust your estimate. You keep repeating the process until the error becomes small enough for your purposes.

For example, suppose you wanted to take the square root of 85. You can observe that 9 squared is 81, so the answer is sort of 9, right? But that’s off by 4 (85-81=4). So you take that number and divide it by the current answer (9) multiplied by two. In other words, the adjustment is 4/18 or 0.2222. Putting it together, our first answer is 9.2222.

If you square that, you get about 85.05, which is not too bad, but if you wanted to get closer, you could repeat the process using 9.2222 in place of the 9. Repeat until the error is as low as you like. Our calculator tells us the real answer is 9.2195, so that first result is not bad. A second pass gives about 9.2195, already matching to four decimal places. You could keep going, but that’s close enough for almost any purpose.

The video shows a geometric representation, and if you are a visual thinker, that might help you. We prefer to think of it algebraically: you are essentially forming each new estimate by averaging the current guess with the original number divided by that guess.
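Written out, the iteration for the square root of S is just repeated averaging, shown here with the 85 example (plus a quick sanity check you can run if bc happens to be installed):

x_next = (x + S / x) / 2

x1 = (9 + 85/9) / 2 ≈ 9.2222
x2 = (9.2222 + 85/9.2222) / 2 ≈ 9.2195

echo 'scale=6; x=9; x=(x+85/x)/2; x=(x+85/x)/2; x' | bc   # prints roughly 9.219544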

The ancients loved to estimate numbers. And Hero was into a lot of different things.

The Lancaster ASCII Keyboard Recreated

It is hard to imagine that there was a time when having a keyboard and screen readily available was a real problem for people who wanted to experiment with computers. In the 1970s, if you wanted a terminal, you might well have built a [Don Lancaster] “TV Typewriter” and the companion “low cost keyboard.” [Artem Kalinchuk] wanted to recreate this historic keyboard and, you know what? He did! Take a look at the video below.

The first task was to create a PCB from the old artwork from Radio Electronics magazine. [Artem] did the hard work but discovered that the original board expected a very specific kind of key. So, he created a variant that takes modern MX keyboard switches, which is nice. He does sell the PCBs, but you can also find the design files on GitHub.

Not only were the TV typewriters and related projects popular, but they also inspired many similar projects and products from early computer companies.

The board is really just a holder for keys, some jumper wires, and an edge connector. You still need an ASCII encoder board, which [Artem] also recreates. That board is simple, using diodes, a few transistors, and a small number of simple ICs.

If you weren’t there, part of installing old software was writing the code needed to read and write to your terminal. No kidding. We miss [Don Lancaster]. We wonder how many TV typewriters were built, especially if you include modern recreations.

Hacking Global Positioning Systems Onto 16th-Century Maps

Historical map of The Netherlands overlaid with clouds

What if GPS had existed in 1565? No satellites or microelectronics, sure—but let’s play along. Imagine the bustling streets of Antwerp, where merchants navigated the sprawling city with woodcut maps. Or sailors plotting Atlantic crossings with accuracy unheard of for the time. This whimsical intersection of history and tech was recently featured in a blog post by [Jan Adriaenssens], and comes alive with Bert Spaan’s Allmaps Here: a delightful web app that overlays your GPS location onto georeferenced historical maps.

Take Antwerp’s 1565 city map by Virgilius Bononiensis, a massive 120×265 cm woodcut. With Allmaps Here, you’re a pink dot navigating this masterpiece. Plantin-Moretus Museum? Nailed it. Kasteelpleinstraat? A shadow of the old citadel it bordered. Let’s not forget how life might’ve been back then. A merchant could’ve avoided morning traffic and collapsing bridges en route to the market, while a farmer relocating his herd could’ve found fertile pastures minus the swamp detour.

Unlike today’s turn-by-turn navigation, a 16th-century GPS might have been all about survival: avoiding bandit-prone roads, timing tides for river crossings, or tracking stars as backup. Imagine explorers fine-tuning their Atlantic crossings with trade winds mapped to the mile. Georeferenced maps like these let us re-imagine the practical genius of our ancestors while enjoying a modern hack on a centuries-old problem.

Although sites like OldMapsOnline, Google Earth Timelapse (and for the Dutch: TopoTijdreis) have been around for a while, this new match of technology and historical detail is a true gem. Curious to map your own world on antique charts? Navigate to Allmaps and start georeferencing!

Programmable Zener is Really an IC

[Kevin] doesn’t stock zener diodes anymore. Why? Because for everything he used to use zeners, he now uses TL431 bandgap voltage references. These look like zener diodes but have an extra terminal. That extra terminal allows you to set the threshold to any value you want (within specifications, of course). Have a look at the video below for an introduction to these devices and a practical circuit on a breadboard.

Inside, there’s a voltage reference, an op-amp, and a transistor, so these are tiny 3-terminal ICs. The chip powers itself from the load, so there are no separate power supply pins.
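In the usual arrangement, a resistor divider from the cathode to the REF pin sets the regulation point. As a rough sketch, with R1 from cathode to REF and R2 from REF to anode:

V_KA = V_ref × (1 + R1 / R2), where V_ref ≈ 2.495 V

So equal resistors give you a “zener” of roughly 5 V, and tying REF straight to the cathode gets you the minimum, about 2.5 V.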

Note that just before the five-minute mark, he had a typo on the part number, but he corrected that in the comments. He goes on to put a demonstration schematic in KiCad. Once it was all worked out, it was breadboard time.

As always, there were a few real-world things to resolve, but the circuit worked as expected. As [Kevin] points out, the faux-zeners are about four for a dollar and even less in quantity. A zener might be a few pennies cheaper, but unless you are making thousands of copies of your circuit, who cares?

We don’t see zeners as often as we used to. As for the TL431, we’ve seen one torn apart for your amusement.

Aftershock II: How Students Shattered 20-Year Amateur Rocket Records

Student-built rocket launch in Black Rock Desert, Nevada

When it comes to space exploration, we often think of billion-dollar projects—NASA’s Artemis missions, ESA’s Mars rovers, or China’s Tiangong station. Yet, a group of U.S. students at USC’s Rocket Propulsion Lab (RPL) has achieved something truly extraordinary—a reminder that groundbreaking work doesn’t always require government budgets. On October 20, their homemade rocket, Aftershock II, soared to an altitude of 470,000 feet, smashing the amateur spaceflight altitude and speed records held for over two decades. Intrigued? Check out the full article here.

The 14-foot, 330-pound rocket broke the sound barrier within two seconds, reaching hypersonic speeds of Mach 5.5—around 3,600 mph. But Aftershock II didn’t just go fast; it climbed higher than any amateur spacecraft ever before, surpassing the 2004 GoFast rocket’s record by 90,000 feet. Even NASA-level challenges like thermal protection at hypersonic speeds were tackled using clever tricks. Titanium-coated fins, specially engineered heat-resistant paint, and a custom telemetry module ensured the rocket not only flew but returned largely intact.

This achievement feels straight out of a Commander Keen adventure—scrappy explorers, daring designs, and groundbreaking success against all odds. The full story is a must-read for anyone dreaming of building their own rocket.

Hackaday Links: November 24, 2024


We received belated word this week of the passage of Ward Christensen, who died unexpectedly back in October at the age of 78. If the name doesn’t ring a bell, that’s understandable, because the man behind the first computer BBS wasn’t much for the spotlight. Along with Randy Suess and in response to the Blizzard of ’78, which kept their Chicago computer club from meeting in person, Christensen created an electronic version of a community corkboard. Suess worked on the hardware while Christensen provided the software, leveraging his XMODEM file-sharing protocol. They dubbed their creation a “bulletin board system” and when the idea caught on, they happily shared their work so that other enthusiasts could build their own systems.

BBSs were the only show in town for a long time, and the happy little modem negotiation tones were like a doorbell you rang to get into a club where people understood your obsession. Perhaps it’s just the BBS nostalgia talking, but despite the functional similarities to today’s social media, the BBS experience seemed a lot more civilized. It’s not that people were much better behaved back then; any BBS regular can tell you there were plenty of jerks online then, too. But the general tone of BBS life was a little more sedate, probably due in part to the glacial pace of dial-up connections. Even at a screaming 2,400 baud, characters scrolled across your screen slower than you could read them, and that seemed to have a sedating effect on your passions. By the time someone’s opinion on the burning issues of the day had finally been painted on your monitor, you’d had a bit of time to digest it and perhaps cool down a bit before composing a reply. We still had our flame wars, of course, but it was like watching slow-motion warfare and the dynamic was completely different from today’s Matrix.

Speaking of yearning for a probably mythical Golden Age, Casio has announced a smart ring that looks like a miniature version of their classic sports chronograph wristwatch. The ring celebrates Casio’s 50th anniversary of making watches, and features a stainless steel case made by metal injection molding. The six-digit LCD is pretty limited in what it can display, and the ring doesn’t do much other than tell the time and date and sound alarms. So we’re not sure where the smarts are here, except for the looks, of course.

We got a tip recently on a series of really interesting videos that you might want to check out, especially if you’re into EMC simulations. Panire’s channel is chock full of videos showing how to use openEMS, the open-source electromagnetic field solver, with KiCad EDA software to simulate the RF properties of high-speed circuits. He’s got some in-depth videos on getting things set up plus some great tutorials on creating simulations that let you see how your PCB designs are radiating, allowing you to make changes and see the results right away. Very useful stuff, and pretty fun to look at, too.

Here at Hackaday, we get a surprisingly and disappointingly regular stream of projects that claim to have finally beaten the laws of thermodynamics. So the words “Perpetual Motion” are especially triggering to us, but we instantly put that aside when we saw the title card on this video about the Atmos Clock. No, it’s not perpetual motion, but since, as the name suggests, it’s powered by atmospheric pressure and temperature changes, it’s about as close as you can get. We remember one of these beautiful timepieces on the mantel in our grandparents’ house, gifted to “Grampy” for years of faithful service by his employer. It was a delicate machine and fascinating to watch work, which it only briefly did once we grandkids got near it. Still, watching how the mechanism worked is pretty interesting stuff.

And finally, if you haven’t checked out The Analog, you really should. It’s a weekly newsletter written by our friend Mihir Shah and is full of interesting tidbits from the world of electronics and technology. This time around, he gifted us with a video that looks inside optical sorting in food processing. You’ve probably seen these in action before, where cascades of objects — grapes in this case, obviously in a winery — are spread out on a high-speed conveyor belt under the watchful gaze of a computer vision system, which spots the bad grapes and yeets them into oblivion with a precisely controlled jet of compressed air. The mind boggles at the control loops needed to get the jet and the bad grape to meet up at just the right time so that good grapes stay in the game.

Double Your Analog Oscilloscope Fun with this Retro Beam Splitter

These days, oscilloscope hacking is all about enabling features that the manufacturer baked into the hardware but locked out in the firmware. Those hacks are cool, of course, but back in the days of analog scopes, unlocking new features required a decidedly more hardware-based approach.

For an example of this, take a look at this oscilloscope beam splitter by [Lockdown Electronics]. It’s a simple way to turn a single-channel scope into a dual-channel scope using what amounts to time-division multiplexing. A 555 timer is set up as an astable oscillator generating a 2.5-kHz square wave. That’s fed into the bases of a pair of transistors, one NPN and the other PNP. The collectors of each transistor are connected to the two input signals, each biased to either the positive or negative rail of the power supply. As the 555 swings back and forth it alternately applies each input signal to the output of the beam splitter, which goes to the scope. The result is two independent traces on the analog scope, like magic.

More after the break…

If you’re wondering how this would work on a modern digital scope, so was [Lockdown Electronics]. He gave it a go with his little handheld scope meter and the results were surprisingly good and illustrative of how the thing works. You can clearly see the 555’s square wave on the digital scope sandwiched between the two different input sine waves. Analog scopes always have trouble showing these rising and falling edges, which explains why the beam splitter looks so good on the CRT versus the LCD.

Does this circuit serve any practical purpose these days? Probably not, although you could conceivably use the same principle to double the number of channels on your digital scope. Eight channels on a four-channel scope for the price of a 555? Sounds like a bargain to us.

Flyback, Done Right

A common part used to create high voltages is the CRT flyback transformer, long a ubiquitous junk-pile component. So many attempts to use one rely on brute force, with power transistors in simple feedback oscillators dumping high currents into hand-wound primaries, so it’s refreshing to see a much more nuanced approach from [Alex Lungu]. His flyback driver board drives the transformer as it’s meant to be used, in flyback mode, relying on the sudden collapse of a magnetic field to generate an output voltage pulse rather than simply trying to create as strong a field as possible. It’s thus far more efficient than all those free-running oscillators.

On the PCB is a UC3844 switch-mode power supply controller driving the transformer at about 25 kHz through an IGBT. We’d be curious to know how closely the spec of the transformer is tied to the roughly 15 kHz it would have been run at in a typical TV, and thus what frequency would be the most efficient for it. The result, as far as we can see, is a stable and adjustable high-voltage source without all the high current and overheating, something of which we approve.

Need to understand more about free running versus flyback? Read on.

Modular Multi-Rotor Flies Up To Two Hours

Flight time remains the Achilles’ heel of electric multi-rotor drones, with even high-end commercial units struggling to stay airborne for an hour. Enter Modovolo, a startup that’s shattered this limitation with their modular drone system achieving flights exceeding two hours.

The secret? Lightweight modular “lift pods” inspired by bicycle wheels, using tensioned lines similar to spokes. The lines suspend the hub and rotor within a duct. It’s all much lighter than traditional rigid framing. The pods can be configured into quad-, hex-, or octocopter arrangements, featuring large 671 mm propellers. Despite their size, the quad configuration weighs a mere 3.5 kg with batteries installed. From the demo-day video, it appears the frame, hub, and propeller are all FDM 3D printed. The internal structure of the propeller looks very similar to other 3D-printed RC aircraft.

The propulsion system operates at just 1000 RPM – far slower than conventional drones. The custom propellers feature internal ring gears driven by small brushless motors through a ~20:1 reduction. This design allows each motor to draw a mere 60 W in hover, enabling the use of high-density lithium-ion cells typically unsuitable for drone applications. The rest of the electronics are off-the-shelf, with the flight controller running ArduPilot. Due to the unconventional powertrain and large size, the PID tuning was very challenging.
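A quick back-of-the-envelope check shows why the endurance claim is plausible; note that the cell energy density below is our assumption, not a Modovolo figure:

4 motors × 60 W ≈ 240 W to hover
240 W × 2 h ≈ 480 Wh of battery
480 Wh at an assumed ~250 Wh/kg ≈ 1.9 kg of cells, which fits within the 3.5 kg all-up mass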

We like the fact that this drone doesn’t require fancy materials or electronics; it just uses existing tech creatively. The combination of extended flight times, rapid charging, and modular construction opens new possibilities for applications like surveying, delivery, and emergency response where endurance is critical.
