
Remotely Interesting: Stream Gages

28 May 2025 at 14:00

Near my childhood home was a small river. It wasn’t much more than a creek at the best of times, and in dry summers it would sometimes almost dry up completely. But snowmelt revived it each Spring, and the remains of tropical storms in late Summer and early Fall often transformed it into a raging torrent if only briefly before the flood waters receded and the river returned to its lazy ways.

Other than to those of us who used it as a playground, the river seemed of little consequence. But it did matter enough that a mile or so downstream was some sort of instrumentation, obviously meant to monitor the river. It was — and still is — visible from the road, a tall corrugated pipe standing next to the river, topped with a box bearing the logo of the US Geological Survey. On occasion, someone would visit and open the box to do mysterious things, which suggested the river was interesting beyond our fishing and adventuring needs.

Although I learned quite early that this device was a streamgage, and that it was part of a large network of monitoring instruments the USGS used to monitor the nation’s waterways, it wasn’t until quite recently — OK, this week — that I learned how streamgages work, or how extensive the network is. A lot of effort goes into installing and maintaining this far-flung network, and it’s worth looking at how these instruments work and their impact on everyday life.

Inventing Hydrography

First, to address the elephant in the room, “gage” is a rarely used but accepted alternative spelling of “gauge.” In general, gage tends to be used in technical contexts, which certainly seems to be the case here, as opposed to a non-technical context such as “A gauge of public opinion.” Moreover, the USGS itself uses that spelling, for interesting historical reasons that they’ve apparently had to address often enough that they wrote an FAQ on the subject. So I’ll stick with the USGS terminology in this article, even if I really don’t like it that much.

With that out of the way, the USGS has a long history of monitoring the nation’s rivers. The first streamgaging station was established in 1889 along the Rio Grande River at a railroad station in Embudo, New Mexico. Measurements were entirely manual in those days, performed by crews trained on-site in the nascent field of hydrography. Many of the tools and methods that would be used through the rest of the 19th century to measure the flow of rivers throughout the West and later the rest of the nation were invented at Embudo.

Then as now, river monitoring boils down to one critical measurement: discharge rate, or the volume of water passing a certain point in a fixed amount of time. In the US, discharge rate is measured in cubic feet per second, or cfs. The range over which discharge rate is measured can be huge, from streams that trickle a few dozen cubic feet of water every second to the over one million cfs discharge routinely measured at the mouth of the mighty Mississippi each Spring.

Measurements over such a wide dynamic range would seem to be an engineering challenge, but hydrographers have simplified the problem by cheating a little. While volumetric flow in a closed container like a pipe is relatively easy — flowmeters using paddlewheels or turbines are commonly used for such a task — direct measurement of flow rates in natural watercourses is much harder, especially in navigable rivers where such measuring instruments would pose a hazard to navigation. Instead, the USGS calculates the discharge rate indirectly using stream height, often referred to as flood stage.

Beside Still Waters

Schematic of a USGS stilling well. The water level in the well tracks the height of the stream, with a bit of lag. The height of the water column in the well is easier to read than the surface of the river. Source: USGS, public domain.

The height of a river at any given point is much easier to measure, with the bonus that the tools used for this task lend themselves to continuous measurements. Stream height is the primary data point of each streamgage in the USGS network, which uses several different techniques based on the specific requirements of each site.

A float-tape gage, with a counterweighted float attached to an encoder by a stainless steel tape. The encoder sends the height of the water column in the stilling well to the data logger. Source: USGS, public domain.

The most common is based on a stilling well. Stilling wells are vertical shafts dug into the bank adjacent to a river. The well is generally large enough for a technician to enter, and is typically lined with either concrete or steel conduit, such as the streamgage described earlier. The bottom of the shaft, which is also lined with an impervious material such as concrete, lies below the bottom of the river bed, while the height of the well is determined by the highest expected flood stage for the river. The lumen of the well is connected to the river via a pair of pipes, which terminate in the water above the surface of the riverbed. Water fills the well via these input pipes, with the level inside the well matching the level of the water in the river.

As the name implies, the stilling well performs the important job of damping any turbulence in the river, allowing for a stable column of water whose height can be easily measured. Most stilling wells measure the height of the water column with a float connected to a shaft encoder by a counterweighted stainless steel tape. Other stilling wells are measured using ultrasonic transducers, radar, or even lidar scanners located in the instrument shelter on the top of the well, which translate time-of-flight to the height of the water column.
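Just to put numbers on the non-contact methods, converting a sensor's time-of-flight reading into a stage value is a one-liner. Here's a quick Python sketch of the idea; the sensor elevation and the example reading are invented for illustration, and a real station would reference everything to its surveyed datum.

```python
# Minimal sketch: convert a downward-looking radar sensor's time-of-flight
# into a stage reading. Sensor elevation and the example timing are invented
# for illustration, not taken from any real USGS installation.

SPEED_OF_LIGHT_M_S = 299_792_458.0   # approximate propagation speed in air
SENSOR_ELEVATION_M = 12.50           # hypothetical sensor height above the station datum

def stage_from_time_of_flight(round_trip_s: float) -> float:
    """Return the water-surface elevation above the datum, in meters."""
    distance_to_water = SPEED_OF_LIGHT_M_S * round_trip_s / 2.0  # one-way distance
    return SENSOR_ELEVATION_M - distance_to_water

# Example: a radar echo arriving 55.6 ns after the pulse went out
print(round(stage_from_time_of_flight(55.6e-9), 2))  # ~4.17 m above datum
```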

While stilling well gages are cheap and effective, they are not without their problems. Chief among these is dealing with silt and debris. Even though intakes are placed above the bottom of the river, silt enters the stilling well and settles into the sump. This necessitates frequent maintenance, usually by flushing the sump and the intake lines using water from a flushing tank located within the stilling well. In rivers with a particularly high silt load, there may be a silt trap between the intakes and the stilling well. Essentially a concrete box with a series of vertical baffles, the silt trap allows silt to settle out of the river water before it enters the stilling well, and must be cleaned out periodically.

Bubbles, Bubbles

Bubble gages often live on pilings or other structures within the watercourse.

Making up for some of the deficiencies of the stilling well is the bubble gage, which measures river stage using gas pressure. A bubble gage typically consists of a small air pump or gas cylinders inside the instrument shelter, plumbed to a pipe that comes out below the surface of the river. As with stilling wells, the tube is fixed at a known point relative to a datum, which is the reference height for that station. The end of the pipe in the water has an orifice of known size, while the supply side has regulators and valves to control the flow of gas. River stage can be measured by sensing the gas pressure in the system, which will increase as the water column above the orifice gets higher.
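The underlying physics is just hydrostatics: the gas pressure needed to push bubbles out of the orifice is roughly the pressure of the water column above it, P = ρgh. A minimal Python sketch, with an invented orifice elevation, shows the conversion:

```python
# Minimal sketch: back out stage from the line pressure of a bubble gage.
# The water column above the orifice exerts P = rho * g * h, so h = P / (rho * g).
# The orifice elevation below is an invented value for illustration.

RHO_WATER = 1000.0           # kg/m^3, fresh water (ignores temperature and sediment)
G = 9.81                     # m/s^2
ORIFICE_ELEVATION_M = 1.20   # hypothetical orifice height above the station datum

def stage_from_pressure(pressure_pa: float) -> float:
    """Return the water-surface elevation above the datum, in meters."""
    depth_above_orifice = pressure_pa / (RHO_WATER * G)
    return ORIFICE_ELEVATION_M + depth_above_orifice

# Example: 24.5 kPa of line pressure is about 2.5 m of water over the orifice
print(round(stage_from_pressure(24_500), 2))  # ~3.70 m above datum
```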

Bubble gages have a distinct advantage over stilling wells in rivers with a high silt load, since the positive pressure through the orifice tends to keep silt out of the works. However, bubble gages tend to need a steady supply of electricity to power their air pump continuously, or for gages using bottled gas, frequent site visits for replenishment. Also, the pipe run to the orifice needs to be kept fairly short, meaning that bubble gage instrument shelters are often located on pilings within the river course or on bridge abutments, which can make maintenance tricky and pose a hazard to navigation.

While bubble gages and stilling wells are the two main types of gaging stations for fixed installations, the USGS also maintains a selection of temporary gaging instruments for tactical use, often for response to natural disasters. These Rapid Deployment Gages (RDGs) are compact units designed to affix to the rail of a bridge or some other structure across the river. Most RDGs use radar to sense the water level, but some use sonar.

Go With the Flow

No matter what method is used to determine the stage of a river, calculating the discharge rate is the next step. To do that, hydrographers have to head to the field and make flow measurements. By measuring the flow rates at intervals across the river, preferably as close as possible to the gaging station, the total flow through the channel at that point can be estimated, and a calibration curve relating flow rate to stage can be developed. The discharge rate can then be estimated from just the stage reading.
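That stage-to-discharge relationship is the station's rating curve, and a power-law form is a common way to express it. The coefficients in this Python sketch are invented; in practice each station's curve is fitted to years of field measurements and revised as the channel changes.

```python
# Minimal sketch: estimate discharge from stage with a power-law rating curve,
# Q = C * (h - e)^b. The coefficients here are invented for illustration; real
# USGS rating curves are fitted empirically to repeated field flow measurements.

def discharge_cfs(stage_ft: float, C: float = 85.0, e: float = 1.2, b: float = 2.1) -> float:
    """Return estimated discharge in cubic feet per second for a stage in feet."""
    effective_head = max(stage_ft - e, 0.0)  # e is the stage of zero flow
    return C * effective_head ** b

for stage in (2.0, 5.0, 10.0):
    print(f"stage {stage:4.1f} ft -> ~{discharge_cfs(stage):7.0f} cfs")
```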

Flow readings are taken using a variety of tools, depending on the size of the river and the speed of the current. Current meters with bucket wheels can be lowered into a river on a pole; the flow rotates the bucket wheel and closes electrical contacts that can be counted on an electromagnetic totalizer. More recently, Acoustic Doppler Current Profilers (ADCPs) have come into use. These use ultrasound to measure the velocity of particulates in the water by their Doppler shift.

Crews can survey the entire width of a small stream by wading, from boats, or by making measurements from a convenient bridge. In some remote locations where the river is especially swift, the USGS may erect a cableway across the river, so that measurements can be taken at intervals from a cable car.
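Turning those point measurements into a total discharge is mostly bookkeeping. A common approach is the midsection method: each vertical gets credited with half the distance to its neighbors, and its depth times mean velocity times that width is its slice of the total. The numbers in this Python sketch are invented just to show the arithmetic.

```python
# Minimal sketch of the midsection method: each measurement vertical contributes
# (width it represents) x (depth) x (mean velocity) to the total discharge.
# The station spacing, depths, and velocities below are invented for illustration.

# (distance from bank in ft, depth in ft, mean velocity in ft/s)
stations = [
    (0.0,  0.0, 0.0),   # water's edge
    (5.0,  1.8, 0.9),
    (10.0, 3.2, 1.6),
    (15.0, 2.6, 1.3),
    (20.0, 0.0, 0.0),   # opposite edge
]

total_q = 0.0
for i in range(1, len(stations) - 1):
    x_prev = stations[i - 1][0]
    x, depth, velocity = stations[i]
    x_next = stations[i + 1][0]
    width = (x_next - x_prev) / 2.0      # half the span to each neighbor
    total_q += width * depth * velocity  # partial discharge, ft^3/s

print(f"estimated discharge: {total_q:.1f} cfs")  # ~50.6 cfs for these numbers
```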

Nice work if you can get it. USGS crew making flow measurements from a cableway over the American River in California using an Acoustic Doppler Current Profiler. Source: USGS, public domain.

From Paper to Satellites

In the earliest days of streamgaging, recording data was strictly a pen-on-paper process. Station log books were updated by hydrographers for every observation, with results transmitted by mail or telegraph. Later, stations were equipped with paper chart recorders using a long-duration clockwork mechanism. The pen on the chart recorder was mechanically linked to the float in a stilling well, deflecting it as the river stage changed and leaving a record on the chart. Electrical chart recorders came next, with the position of the pen changing based on the voltage through a potentiometer linked to the float.

Chart recorders, while reliable, have the twin disadvantages of needing a site visit to retrieve the data and requiring a tedious manual transcription of the chart data to tabular form. To solve the latter problem, analog-digital recorders (ADRs) were introduced in the 1960s. These recorded stage data on paper tape as four binary-coded decimal (BCD) digits. The time of each stage reading was inferred from its position on the tape, given a known starting time and reading interval. Tapes still had to be retrieved from each station, but at least reading the data back at the office could be automated with a paper tape reader.
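BCD is about as simple as data encodings get, which is part of why it suited electromechanical tape punches. Just for fun, here's a Python sketch of decoding a four-digit reading; the assumption that the value is in hundredths of a foot is ours, for illustration.

```python
# Minimal sketch: decode a four-digit binary-coded-decimal (BCD) stage reading,
# one 4-bit group per decimal digit, most significant digit first. The scaling
# to hundredths of a foot is an assumption made for this example.

def decode_bcd_stage(nibbles: list[int]) -> float:
    """Turn four BCD digits into a stage value in feet."""
    value = 0
    for nibble in nibbles:
        digit = nibble & 0x0F
        if digit > 9:
            raise ValueError(f"invalid BCD digit: {digit}")
        value = value * 10 + digit
    return value / 100.0  # interpret the reading as hundredths of a foot

# Example: groups 0000 0100 0010 0111 -> digits 0, 4, 2, 7 -> 4.27 ft
print(decode_bcd_stage([0b0000, 0b0100, 0b0010, 0b0111]))
```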

In the 1980s and 1990s, gaging stations were upgraded to electronic data loggers, with small solar panels and batteries where grid power wasn’t available. Data was stored locally in the logger between maintenance visits by a hydrographer, who would download the data. Alternatively, gaging stations located close to public rights of way sometimes had leased telephone lines for transmitting data at intervals via modem. Later, gaging stations started sprouting cross-polarized Yagi antennas, aimed at one of the Geostationary Operational Environmental Satellites (GOES). Initially, gaging stations used one of the GOES low data rate telemetry channels with a 100 to 300 bps connection. This gave hydrologists near-real-time access to gaging data for the first time. Since 2013, all stations have been upgraded to a high data rate channel that allows up to 1,200 bps telemetry.

Gage data is normally collected every 15 minutes, although the interval can be increased to every 5 minutes at times of peak flow. Data is buffered locally between GOES uplinks, which happen about every hour or so, or as often as every 15 minutes during peak flow or emergencies. The uplink frequencies and intervals are well documented on the USGS site, so you can easily pick them up with an SDR and see if the creek is rising from the comfort of your own shack.

Hackaday Links: May 25, 2025

25 May 2025 at 23:00

Have you heard that author Andy Weir has a new book coming out? Very exciting, we know, and according to a syndicated reading list for Summer 2025, it’s called The Last Algorithm, and it’s a tale of a programmer who discovers a dark and dangerous secret about artificial intelligence. If that seems a little out of sync with his usual space-hacking fare such as The Martian and Project Hail Mary, that’s because the book doesn’t exist, and neither do most of the other books on the list.

The list was published in a 64-page supplement that ran in major US newspapers like the Chicago Sun-Times and the Philadelphia Inquirer. The feature listed fifteen must-read books, only five of which exist, and it’s no surprise that AI is behind the muck-up. Writer Marco Buscaglia took the blame, saying that he used an LLM to produce the list without checking the results. Nobody else in the editorial chain appears to have reviewed the list either, resulting in the hallucination getting published. Readers are understandably upset about this, but for our part, we’re just bummed that Andy doesn’t have a new book coming out.

In equally exciting but ultimately fake news, we had more than a few stories pop up in our feed about NASA’s recent discovery of urban lights on an exoplanet. AI isn’t to blame for this one, though, at least not directly. Ironically, the rumor started with a TikTok video debunking a claim of city lights on a distant planet. Social media did what social media does, though, sharing only the parts that summarized the false claim and turning a debunking into a bunking. This is why we can’t have nice things.

That wasn’t the only story about distant lights, though, with this report of unexplained signals from two nearby stars. This one is far more believable, coming as it does from retired JPL scientist Richard H. Stanton, who has been using a 30″ telescope to systematically search for optical SETI signals for the past few years. These searches turned up two rapid pulses of light from HD 89389, an F-type star located in the constellation Ursa Major. The star rapidly brightened, dimmed, brightened again, then returned to baseline over a fraction of a second; the same pattern repeated itself about 4.4 seconds later.

Intrigued, he looked back through his observations and found a similar event from a different star, HD 217014 in Pegasus, four years previously. Interestingly, this G-type star is known to have at least one exoplanet. Stanton made the first observation in 2023, and he’s spent much of the last two years ruling out things like meteor flashes or birds passing through his field of view. More study is needed to figure out what this means, and while it’s clearly not aliens, it’s fun to imagine it could be some kind of technosignature.

And one last space story, this time with the first observation of extra-solar ice. The discovery comes from the James Webb Space Telescope, which caught the telltale signature of ice crystals in a debris ring circling HD 181327, a very young star only 155 light-years away. Water vapor had been detected plenty of times outside our solar system, but not actual ice crystals until now. The ice crystals seem to be coming from collisions between icy bodies in the debris field, an observation that has interesting implications for planetary evolution.

And finally, if like us you’re impressed anytime someone busts out a project with a six-layer PCB design, wait till you get a load of this 124-layer beast. The board comes from OKI Circuit Technologies and is intended for high-bandwidth memory for AI accelerators. The dielectric for each layer is only 125-μm thick, and the board is still only 7.6 mm thick overall. At $4,800 per square meter, it’s not likely we’ll see our friends at JLC PCB offering these anytime soon, but it’s still some pretty cool engineering.

Big Chemistry: Fuel Ethanol

21 May 2025 at 14:00

If legend is to be believed, three disparate social forces in early 20th-century America – the temperance movement, the rise of car culture, and the Scots-Irish culture of the South – collided with unexpected results. The temperance movement managed to get Prohibition written into the Constitution, which rankled the rebellious spirit of the descendants of the Scots-Irish who settled the South. In response, some of them took to the backwoods with stills and sacks of corn, creating moonshine by the barrel for personal use and profit. And to avoid the consequences of this, they used their mechanical ingenuity to modify their Fords, Chevrolets, and Dodges to provide the speed needed to outrun the law.

Though that story may be somewhat apocryphal, at least one of those threads is still woven into the American story. The moonshiner’s hotrod morphed into NASCAR, one of the nation’s most-watched spectator sports, and informed much of the car culture of the 20th century in general. Unfortunately, that led in part to our current fossil fuel predicament and its attendant environmental consequences, which are now being addressed by replacing at least some of the gasoline we burn with the same “white lightning” those old moonshiners made. The cost-benefit analysis of ethanol as a fuel is open to debate, as is the wisdom of using food for motor fuel, but one thing’s for sure: turning corn into ethanol in industrially useful quantities isn’t easy, and it requires some Big Chemistry to get it done.

Heavy on the Starch

As with fossil fuels, manufacturing ethanol for motor fuel starts with a steady supply of an appropriate feedstock. But unlike the drilling rigs and pump jacks that pull the geochemically modified remains of half-billion-year-old phytoplankton from deep within the Earth, ethanol’s feedstock is almost entirely harvested from the vast swathes of corn that carpet the Midwest US. (Other grains and even non-grain plants are used as feedstock in other parts of the world, but we’re going to stick with corn for this discussion. Also, other parts of the world refer to any grain crop as corn, but in this case, corn refers specifically to maize.)

Don’t try to eat it — you’ll break your teeth. Yellow dent corn is harvested when full of starch and hard as a rock. Credit: Marjhan Ramboyong.

The corn used for ethanol production is not the same as the corn-on-the-cob at a summer barbecue or that comes in plastic bags of frozen Niblets. Those products use sweet corn bred specifically to pack extra simple sugars and less starch into their kernels, which is harvested while the corn plant is still alive and the kernels are still tender. Field corn, on the other hand, is bred to produce as much starch as possible, and is left in the field until the stalks are dead and the kernels have converted almost all of their sugar into starch. This leaves the kernels dry and hard as a rock, and often with a dimple in their top face that gives them their other name, dent corn.

Each kernel of corn is a fruit, at least botanically, with all the genetic information needed to create a new corn plant. That’s carried in the germ of the kernel, a relatively small part of the kernel that contains the embryo, a bit of oil, and some enzymes. The bulk of the kernel is taken up by the endosperm, the energy reserve used by the embryo to germinate, and as a food source until photosynthesis kicks in. That energy reserve is mainly composed of starch, which will power the fermentation process to come.

Starch is mainly composed of two different but related polysaccharides, amylose and amylopectin. Both are polymers of the simple six-carbon sugar glucose, but with slightly different arrangements. Amylose is composed of long, straight chains of glucose molecules bound together in what’s called an α-1,4 glycosidic bond, which just means that the hydroxyl group on the first carbon of the first glucose is bound to the hydroxyl on the fourth carbon of the second glucose through an oxygen atom:

Amylose, one of the main polysaccharides in starch. The glucose subunits are connected in long, unbranched chains up to 500 or so residues long. The oxygen atom binding each glucose together comes from a reaction between the OH radicals on the 1 and 4 carbons, with one oxygen and two hydrogens leaving in the form of water.

Amylose chains can be up to about 500 or so glucose subunits long. Amylopectin, on the other hand, has shorter straight chains but also branches formed between the number one and number six carbon, an α-1,6 glycosidic bond. The branches appear about every 25 residues or so, making amylopectin much more tangled and complex than amylose. Amylopectin makes up about 75% of the starch in a kernel.

Slurry Time

Ethanol production begins with harvesting corn using combine harvesters. These massive machines cut down dozens of rows of corn at a time, separating the ears from the stalks and feeding them into a threshing drum, where the kernels are freed from the cob. Winnowing fans and sieves separate the chaff and debris from the kernels, which are stored in a tank onboard the combine until they can be transferred to a grain truck for transport to a grain bin for storage and further drying.

Corn harvest in progress. You’ve got to burn a lot of diesel to make ethanol. Credit: dvande – stock.adobe.com

Once the corn is properly dried, open-top hopper trucks or train cars transport it to the distillery. The first stop is the scale house, where the cargo is weighed and a small sample of grain is taken from deep within the hopper by a remote-controlled vacuum arm. The sample is transported directly to the scale house for a quick quality assessment, mainly based on moisture content but also the physical state of the kernels. Loads that are too wet, too dirty, or have too many fractured kernels are rejected.

Loads that pass QC are dumped through gates at the bottom of the hoppers into a pit that connects to storage silos via a series of augers and conveyors. Most ethanol plants keep a substantial stock of corn, enough to run the plant for several days in case of any supply disruption. Ethanol plants operate mainly in batch mode, with each batch taking several days to complete, so a large stock ensures the efficiency of continuous operation.

The Lakota Green Plains ethanol plant in Iowa. Ethanol plants look a lot like small petroleum refineries and share some of the same equipment. Source: MsEuphonic, CC BY-SA 3.0.

To start a batch of ethanol, corn kernels need to be milled into a fine flour. Corn is fed to a hammer mill, where large steel weights swinging on a flywheel smash the tough pericarp that protects the endosperm and the germ. The starch granules are also smashed to bits, exposing as much surface area as possible. The milled corn is then mixed with clean water to form a slurry, which can be pumped around the plant easily.

The first stop for the slurry is large cooking vats, which use steam to gently heat the mixture and break the starch into smaller chains. The heat also gelatinizes the starch, in a process that’s similar to what happens when a sauce is thickened with a corn starch slurry in the kitchen. The gelatinized starch undergoes liquefaction under heat and mildly acidic conditions, maintained by injecting sulfuric acid or ammonia as needed. These conditions begin hydrolysis of some of the α-1,4 glycosidic bonds, breaking the amylose and amylopectin chains down into shorter fragments called dextrins. An enzyme, α-amylase, is also added at this point to catalyze hydrolysis of the α-1,4 bonds and create free glucose monomers. The α-1,6 bonds are cleaved by another enzyme, α-amyloglucosidase.

The Yeast Get Busy

The result of all this chemical and enzymatic action is a glucose-rich mixture ready for fermentation. The slurry is pumped to large reactor vessels where a combination of yeasts is added. Saccharomyces cerevisiae, or brewer’s yeast, is the most common, but other organisms can be used too. The culture is supplemented with ammonium sulfate or urea to provide the nitrogen the growing yeast requires, along with antibiotics to prevent bacterial overgrowth of the culture.

Fermentation occurs at around 30 degrees C over two to three days, while the yeast gorge themselves on the glucose-rich slurry. The glucose is transported into the yeast, where each glucose molecule is enzymatically split into two three-carbon pyruvate molecules. The pyruvates are then broken down into two molecules of acetaldehyde and two of CO2. The two acetaldehyde molecules then undergo a reduction reaction that creates two ethanol molecules. The yeast benefits from all this work by converting two molecules of ADP into two molecules of ATP, which captures the chemical energy in the glucose molecule into a form that can be used to power its metabolic processes, including making more yeast to take advantage of the bounty of glucose.

Anaerobic fermentation of one mole of glucose yields two moles of ethanol and two moles of CO2.
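Written out, that overall reaction is:

C_{6}H_{12}O_{6} \rightarrow 2C_{2}H_{5}OH + 2CO_{2}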

After the population of yeast grows to the point where they use up all the glucose, the mix in the reactors, which contains about 12-15% ethanol and is referred to as beer, is pumped into a series of three distillation towers. The beer is carefully heated to the boiling point of ethanol, 78 °C. The ethanol vapors rise through the tower to a condenser, where they change back into the liquid phase and trickle down into collecting trays lining the tower. The liquid distillate is piped to the next two towers, where the same process occurs and the distillate becomes increasingly purer. At the end of the final distillation, the mixture is about 95% pure ethanol, or 190 proof.

That’s the limit of purity for fractional distillation, thanks to the tendency of water and ethanol to form an azeotrope, a mixture of two or more liquids that boils at a constant temperature. To drive off the rest of the water, the distillate is pumped into large tanks containing zeolite, a molecular sieve. The zeolite beads have pores large enough to admit water molecules, but too small to admit ethanol. The water partitions into the zeolite, leaving 99% to 100% pure (198 to 200 proof) ethanol behind. The ethanol is mixed with a denaturant, usually 5% gasoline, to make it undrinkable, and pumped into storage tanks to await shipping.

Nothing Goes to Waste

The muck at the bottom of the distillation towers, referred to as whole stillage, still has a lot of valuable material and does not go to waste. The liquid is first pumped into centrifuges to separate the remaining grain solids from the liquid. The solids, called wet distiller’s grain or WDG, go to a rotary dryer, where hot air drives off most of the remaining moisture. The final product is dried distiller’s grain with solubles, or DDGS, a high-protein product used to enrich animal feed. The liquid phase from the centrifuge is called thin stillage, which contains the valuable corn oil from the germ. That’s recovered and sold as an animal feed additive, too.

Ethanol fermentation produces mountains of DDGS, or dried distiller’s grain solubles. This valuable byproduct can account for 20% of an ethanol plant’s income. Source: Inside an Ethanol Plant (YouTube).

The final valuable product that’s recovered is the carbon dioxide. Fermentation produces a lot of CO2, about 17 pounds per bushel of feedstock. The gas is tapped off the tops of the fermentation vessels by CO2 scrubbers and run through a series of compressors and coolers, which turn it into liquid carbon dioxide. This is sold off by the tanker-full to chemical companies; to food and beverage manufacturers, who use it to carbonate soft drinks; and to municipal water treatment plants, where it’s used to balance the pH of wastewater.

There are currently 187 fuel ethanol plants in the United States, most of which are located in the Midwest’s corn belt, for obvious reasons. Together, these plants produced more than 16 billion gallons of ethanol in 2024. Since each bushel of corn yields about 3 gallons of ethanol, that translates to an astonishing 5 billion bushels of corn used for fuel production, or about a third of the total US corn production.

Hackaday Links: May 18, 2025

18 May 2025 at 23:00

Say what you want about the wisdom of keeping a 50-year-old space mission going, but the dozen or so people still tasked with keeping the Voyager mission running are some major studs. That’s our conclusion anyway, after reading about the latest heroics that revived a set of thrusters on Voyager 1 that had been offline for over twenty years. The engineering aspects of this feat are interesting enough, but we’re more interested in the social engineering aspects of this exploit, which The Register goes into a bit. First of all, even though both Voyagers are long past their best-by dates, they are our only interstellar assets, and likely will be for centuries to come, or perhaps forever. Sure, the rigors of space travel and the ravages of time have slowly chipped away at what these machines can do, but while they’re still operating, they’re irreplaceable assets.

That makes the fix to the thruster problem all the more ballsy, since the Voyager team couldn’t be 100% sure about the status of the primary thrusters, which were shut down back in 2004. They had reason to believe the fuel line heaters were still good, but if they had actually gone bad, trying to switch the primary thrusters back on with frozen fuel lines could have resulted in an explosion when Voyager tried to fire them, likely ending in the loss of the spacecraft. So the decision to try this had to be a difficult one, to say the least. Add in an impending shutdown of the only DSN antenna capable of communicating with the spacecraft and a two-day communications round trip, and the pressure must have been unbearable. But they did it, and Voyager successfully navigated yet another crisis. What we’re especially excited about, though, is discovering a 2023 documentary about the current Voyager mission team called “It’s Quieter in the Twilight.” We know what we’ll be watching this weekend.

Speaking of space exploration, one thing you don’t want to do is send anything off into space bearing Earth microbes. That would be a Very Bad Thing™, especially for missions designed to look for life anywhere else but here. But, it turns out that just building spacecraft in cleanrooms might not be enough, with the discovery of 26 novel species of bacteria growing in the cleanroom used to assemble a Mars lander. The mission in question was Phoenix, which landed on Mars in 2008 to learn more about the planet’s water. In 2007, while the lander was in the Payload Hazardous Servicing Facility at Kennedy Space Center, biosurveillance teams collected samples from the cleanroom floor. Apparently, it wasn’t very clean, with 215 bacterial strains isolated, 26 of which were novel. What’s more, genomic analysis of the new bugs suggests they have genes that make them especially tough, both in their resistance to decontamination efforts on Earth and in their ability to survive the rigors of life in space. We’re not really sure if these results say more about NASA’s cleanliness than they do about the selective pressure that an extreme environment like a cleanroom exerts on fast-growing organisms like bacteria. Either way, it doesn’t bode well for our planetary protection measures.

Closer to home but more terrifying is video from an earthquake in Myanmar that has to be seen to be believed. And even then, what’s happening in the video is hard to wrap your head around. It’s not your typical stuff-falling-off-the-shelf video; rather, the footage is from an outdoor security camera that shows the ground outside of a gate literally ripping apart during the 7.7 magnitude quake in March. The ground just past the fence settles a bit while moving away from the camera a little, but the real action is the linear motion — easily three meters in about two seconds. The motion leaves the gate and landscaping quivering but largely intact; sadly, the same can’t be said for a power pylon in the distance, which crumples as if it were made from toothpicks.

And finally, “Can it run DOOM?” has become a bit of a meme in our community, a benchmark against which hacking chops can be measured. If it has a microprocessor in it, chances are someone has tried to make it run the classic first-person shooter video game. We’ve covered dozens of these hacks before, everything from a diagnostic ultrasound machine to a custom keyboard keycap, while recent examples tend away from hardware ports to software platforms such as a PDF file, Microsoft Word, and even SQL. Honestly, we’ve lost count of the ways to DOOM, which is where Can It Run Doom? comes in handy. It lists all the unique platforms that hackers have tortured into playing the game, as well as links to source code and any relevant video proof of the exploit. Check it out the next time you get the urge to port DOOM to something cool; you wouldn’t want to go through all the work to find out it’s already been done, would you?

Radio Apocalypse: Meteor Burst Communications

12 May 2025 at 14:00

The world’s militaries have always been at the forefront of communications technology. From trumpets and drums to signal flags and semaphores, anything that allows a military commander to relay orders to troops in the field quickly or call for reinforcements was quickly seized upon and optimized. So once radio was invented, it’s little wonder how quickly military commanders capitalized on it for field communications.

Radiotelegraph systems began showing up as early as the First World War, but World War II was the first real radio war, with every belligerent taking full advantage of the latest radio technology. Chief among these developments was the ability of signals in the high-frequency (HF) bands to reflect off the ionosphere and propagate around the world, an important capability when prosecuting a global war.

But not long after, in the less kinetic but equally dangerous Cold War period, military planners began to see the need to move more information around than HF radio could support while still being able to do it over the horizon. What they needed was the higher bandwidth of the higher frequencies, but to somehow bend the signals around the curvature of the Earth. What they came up with was a fascinating application of practical physics: meteor burst communications.

Blame It on Shannon

In practical terms, a radio signal that can carry enough information to be useful for digital communications while still being able to propagate long distances is a bit of a paradox. You can thank Claude Shannon for that, after he developed the idea of channel capacity from the earlier work of Harry Nyquist and Ralph Hartley. The resulting Hartley-Shannon Theorem states that the bit rate of a channel in a noisy environment is directly related to the bandwidth of the channel. In other words, the more data you want to stuff down a channel, the higher the frequency needs to be.
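In its usual form, the theorem gives the capacity C of a channel, in bits per second, in terms of its bandwidth B and signal-to-noise ratio S/N:

C = B\log_{2}\left(1 + \frac{S}{N}\right)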

Unfortunately, that runs afoul of the physics of ionospheric propagation. Thanks to the physics of the interaction between radio waves and the charged particles between about 50 km and 600 km above the ground, the maximum frequency that can be reflected back toward the ground is about 30 MHz, which is the upper end of the HF band. Beyond that is the very-high frequency (VHF) band from 30 MHz to 300 MHz, which has enough bandwidth for an effective data channel but to which the ionosphere is essentially transparent.

Luckily, the ionosphere isn’t the only thing capable of redirecting radio waves. Back in the 1920s, Japanese physicist Hantaro Nagaoka observed that the ionospheric propagation of shortwave radio signals would change a bit during periods of high meteoric activity. That discovery largely remained dormant until after World War II, when researchers picked up on Nagaoka’s work and looked into the mechanism behind his observations.

Every day, the Earth sweeps up a huge number of meteoroids; estimates range from a million to ten billion. Most of those are very small, on the order of a few nanograms, with a few good-sized chunks in the tens of kilograms range mixed in. But the ones that end up being most interesting for communications purposes are the particles in the milligram range, in part because there are about 100 million such collisions on average every day, but also because they tend to vaporize in the E-level of the ionosphere, between 80 and 120 km above the surface. The air at that altitude is dense enough to turn the incoming cosmic debris into a long, skinny trail of ions, but thin enough that the free electrons take a while to recombine into neutral atoms. It’s a short time — anywhere between 500 milliseconds to a few seconds — but it’s long enough to be useful.

A meteor trail from the annual Perseid shower, which peaks in early August. This is probably a bit larger than the optimum for MBC, but beautiful nonetheless. Source: John Flannery, CC BY-ND 2.0.

The other aspect of meteor trails formed at these altitudes that makes them useful for communications is their relative reflectivity. The E-layer of the ionosphere normally has on the order of 10⁷ electrons per cubic meter, a density that tends to refract radio waves below about 20 MHz. But meteor trails at this altitude can have densities as high as 10¹¹ to 10¹² electrons/m³. This makes the trails highly reflective to radio waves, especially at the higher frequencies of the VHF band.

In addition to the short-lived nature of meteor trails, daily and seasonal variations in the number of meteors complicate their utility for communications. The rotation of the Earth on its axis accounts for the diurnal variation, which tends to peak around dawn local time every day as the planet’s rotation and orbit are going in the same direction and the number of collisions increases. Seasonal variations occur because of the tilt of Earth’s axis relative to the plane of the ecliptic, where most meteoroids are concentrated. More collisions occur when the Earth’s axis is pointed in the direction of travel around the Sun, which is the second half of the year for the northern hemisphere.

Learning to Burst

Building a practical system that leverages these highly reflective but short-lived and variable mirrors in the sky isn’t easy, as shown by several post-war experimental systems. The first of these was attempted by the National Bureau of Standards in 1951, with a link between Cedar Rapids, Iowa, and Sterling, Virginia, a path length of about 1,250 km. Originally built to study propagation phenomena such as forward scatter and sporadic E, the link showed such significant effects from meteor trails that the researchers switched their focus to them, which caught the attention of the US Air Force. The Air Force was in the market for a four-channel continuous teletype link to their base in Thule, Greenland, and they got it, but only just barely, thanks to the limited technology of the time. The NBS also used the Iowa-to-Virginia link to study higher data rates by pointing highly directional rhombic antennas at each end of the connection at the same small patch of sky. They managed a whopping data rate of 3,200 bits per second with this setup, but only for the second or so that a meteor trail happened to appear.

The successes and failures of the NBS system made it clear that a useful system based on meteor trails would need to operate in burst mode, to jam data through the link for as long as it existed and wait for the next one. The NBS tested a burst-mode system in 1958 that used the 50-MHz band and offered a full-duplex link at 2,400 bits per second. The system used magnetic tape loops to buffer data and transmitters at both ends of the link that operated continually to probe for a path. Whenever the receiver at one end detected a sufficiently strong probe signal from the other end, the transmitter would start sending data. The Canadians got in on the MBC action with their JANET system, which had a similar dedicated probing channel and tape buffer. In 1954, they established a full-duplex teletype link between Ottawa and Nova Scotia at 1,300 bits per second with an error rate of only 1.5%.

In the late 1950s, Hughes developed a single-channel air-to-ground MBC system. This was a significant development, not only because the equipment had gotten small enough to install on an airplane, but also because it really refined the burst-mode technology. The ground stations in the Hughes system periodically transmitted a 100-bit interrogation signal to probe for a path to the aircraft. The receiver on the ground listened for an acknowledgement from the plane, which turned the channel around and allowed the airborne transmitter to send a 100-bit data burst. The system managed a respectable 2,400 bps data rate, but suffered greatly from ground-based interference from TV stations and automotive ignition noise.
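Stripped of the RF details, the burst-mode logic shared by these systems is simple: probe constantly, and only transmit while a trail is open. Here's a schematic Python sketch of that loop; the packet size, timing, and the radio's send/listen calls are all invented for illustration and stand in for whatever hardware a real system would use.

```python
# Schematic sketch of meteor-burst "probe and burst" logic, loosely patterned on
# the systems described above. The probe/ack handshake, frame size, timing, and
# the radio object's send()/listen() methods are invented for illustration.
import collections
import time

PROBE = b"PROBE"     # short interrogation signal
ACK = b"ACK"         # reply that tells us a reflective trail currently exists

outbound = collections.deque()   # buffered frames waiting for a usable trail

def link_open(radio) -> bool:
    """Send a probe and listen briefly for an acknowledgement from the far end."""
    radio.send(PROBE)
    return radio.listen(timeout_s=0.05) == ACK

def run(radio):
    while True:
        if not outbound:
            time.sleep(0.1)          # nothing to send; keep buffering
            continue
        # Jam frames through for as long as the trail keeps answering probes.
        while outbound and link_open(radio):
            radio.send(outbound.popleft())
        time.sleep(0.1)              # trail decayed or never formed; try again
```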

The SHAPE of Things to Come

Supreme HQ Allied Powers Europe (SHAPE), NATO’s European headquarters in the mid-60s. The COMET meteor-bounce system kept NATO commanders in touch with member-nation HQs via teletype. Source: NATO

The first major MBC system fielded during the Cold War was the Communications by Meteor Trails system, or COMET. It was used by the North Atlantic Treaty Organization (NATO) to link its far-flung outposts in member nations with Supreme Headquarters Allied Powers Europe, or SHAPE, located in Belgium. COMET took cues from the Hughes system, especially its error detection and correction scheme. COMET was a robust and effective MBC system that provided between four and eight teletype circuits depending on daily and seasonal conditions, each handling 60 words per minute.

COMET was in continuous use from the mid-1960s until well after the official end of the Cold War. By that point, secure satellite communications were nowhere near as prohibitively expensive as they had been at the beginning of the Space Age, and MBC systems became less critical to NATO. They weren’t retired, though, and COMET actually still exists, although rebranded as “Compact Over-the-Horizon Mobile Expeditionary Terminal.” These man-portable systems don’t use MBC; rather, they use high-power UHF and microwave transmitters to scatter signals off the troposphere. A small amount of the signal is reflected back to the ground, where high-gain antennas pick up the vanishingly weak signals.

Although not directly related to Cold War communications, it’s worth noting that there was a very successful MBC system fielded in the civilian space in the United States: SNOTEL. We’ve covered this system in some depth already, but briefly, it’s a network of stations in the western part of the USA with the critical job of monitoring the snowpack. A commercial MBC system connected the solar-powered monitoring stations, often in remote and rugged locations, to two different central bases. Taking advantage of diurnal meteor variations, each morning the master station would send a polling signal out to every remote, which would then send back the previous day’s data once a return path was opened. The system could collect data from 180 remote sites in just 20 minutes. It operated successfully from the mid-1970s until just recently, when pervasive cell technology and cheap satellite modems made the system obsolete.

Hackaday Links: May 11, 2025

11 May 2025 at 23:00

Did artificial intelligence just jump the shark? Maybe so, and it came from the legal world of all places, with this report of an AI-generated victim impact statement. In an apparent first, the family of an Arizona man killed in a road rage incident in 2021 used AI to bring the victim back to life to testify during the sentencing phase of his killer’s trial. The video was created by the sister and brother-in-law of the 37-year-old victim using old photos and videos, and was quite well done, despite the normal uncanny valley stuff around lip-syncing that seems to be the fatal flaw for every deep-fake video we’ve seen so far. The victim’s beard is also strangely immobile, which we found off-putting.

In the video, the victim expresses forgiveness toward his killer and addresses his family members directly, talking about things like what he would have looked like if he’d gotten the chance to grow old. That seemed incredibly inflammatory to us, but according to Arizona law, victims and their families get to say pretty much whatever they want in their impact statements. While this appears to be legal, we wouldn’t be surprised to see it appealed, since the judge tacked an extra year onto the killer’s sentence over what the prosecution sought based on the power of the AI statement. If this tactic withstands the legal tests it’ll no doubt face, we could see an entire industry built around this concept.

Last week, we warned about the impending return of Kosmos 482, a Soviet probe that was supposed to go to Venus when it was launched in 1972. It never quite chooched, though, and ended up circling the Earth for the last 53 years. The satellite made its final orbit on Saturday morning, ending up in the drink in the Indian Ocean, far from land. Alas, the faint hope that it would have a soft landing thanks to the probe’s parachute having apparently been deployed at some point in the last five decades didn’t come to pass. That’s a bit of a disappointment to space fans, who’d love to get a peek inside this priceless bit of space memorabilia. Roscosmos says they monitored the descent, so presumably they know more or less where the debris rests. Whether it’s worth an expedition to retrieve it remains to be seen.

Are we really at the point where we have to worry about counterfeit thermal paste? Apparently, yes, judging by the effort Arctic Cooling is putting into authenticity verification of its MX brand pastes. To make sure you’re getting the real deal, boxes will come with seals that rival those found on over-the-counter medications and scratch-off QR codes that can be scanned and cross-referenced to an online authentication site. We suppose it makes sense; chip counterfeiting is a very real thing, after all, and it’s probably as easy to put a random glob of goo into a syringe as it is to laser new markings onto a chip package. And Arctic compound commands a pretty penny, so the incentive is obvious. But still, something about this just bothers us.

Another very cool astrophotography shot this week, this time a breathtaking collection of galaxies. Taken from the Near Infrared camera on the James Webb Space Telescope with help from the Hubble Space Telescope and the XMM-Newton X-ray space observatory, the image shows thousands of galaxies of all shapes and sizes, along with the background X-ray glow emitted by all the clouds of superheated dust and gas between them. The stars with the characteristic six-pointed diffraction spikes are all located within our galaxy, but everything else is a galaxy. The variety is fascinating, and the scale of the image is mind-boggling. It’s galactic eye candy!

And finally, if you’ve ever wondered about what happens when a nuclear reactor melts down, you’re in luck with this interesting animagraphic on the process. It’s not a detailed 3D render of any particular nuclear power plant and doesn’t have a specific meltdown event in mind, although it does mention both Chernobyl and Fukushima. Rather, it’s a general look at pressurized water reactors and what can go wrong when the cooling water stops flowing. It also touches on potentially safer designs with passive safety systems that rely on natural convection to keep cooling water circulating in the event of disaster, along with gravity-fed deluge systems to cool the containment vessel if things get out of hand. It’s a good overview of how reactors work and where they can go wrong. Enjoy.

Big Chemistry: Cement and Concrete

7 May 2025 at 14:00

Not too long ago, I was searching for ideas for the next installment of the “Big Chemistry” series when I found an article that discussed the world’s most-produced chemicals. It was an interesting article, right up my alley, and helpfully contained a top-ten list that I could use as a crib sheet for future articles, at least for the ones I hadn’t covered already, like the Haber-Bosch process for ammonia.

Number one on the list surprised me, though: sulfuric acid. The article stated that it was far and away the most produced chemical in the world, with 36 million tons produced every year in the United States alone, out of something like 265 million tons a year globally. It’s used in a vast number of industrial processes, and pretty much everywhere you need something cleaned or dissolved or oxidized, you’ll find sulfuric acid.

Staggering numbers, to be sure, but is it really the most produced chemical on Earth? I’d argue not by a long shot, when there’s a chemical that we make 4.4 billion tons of every year: Portland cement. It might not seem like a chemical in the traditional sense of the word, but once you get a look at what it takes to make the stuff, how finely tuned it can be for specific uses, and how when mixed with sand, gravel, and water it becomes the stuff that holds our world together, you might agree that cement and concrete fit the bill of “Big Chemistry.”

Rock Glue

To kick things off, it might be helpful to define some basic terms. Despite the tendency to use them as synonyms among laypeople, “cement” and “concrete” are entirely different things. Concrete is the finished building material of which cement is only one part, albeit a critical part. Cement is, for lack of a better term, the glue that binds gravel and sand together into a coherent mass, allowing it to be used as a building material.

What did the Romans ever do for us? The concrete dome of the Pantheon is still standing after 2,000 years. Source: Image by Sean O’Neill from Flickr via Monolithic Dome Institute (CC BY-ND 2.0)

It’s not entirely clear who first discovered that calcium oxide, or lime, mixed with certain silicate materials would form a binder strong enough to stick rocks together, but it certainly goes back into antiquity. The Romans get an outsized but well-deserved portion of the credit thanks to their use of pozzolana, a silicate-rich volcanic ash, to make the concrete that held the aqueducts together and built such amazing structures as the dome of the Pantheon. But the use of cement in one form or another can be traced back at least to ancient Egypt, and probably beyond.

Although there are many kinds of cement, we’ll limit our discussion to Portland cement, mainly because it’s what is almost exclusively manufactured today. (The “Portland” name was a bit of branding by its inventor, Joseph Aspdin, who thought the cured product resembled the famous limestone from the Isle of Portland off the coast of Dorset in the English Channel.)

Portland cement manufacturing begins with harvesting its primary raw material, limestone. Limestone is a sedimentary rock rich in carbonates, especially calcium carbonate (CaCO3), which tends to be found in areas once covered by warm, shallow inland seas. Along with the fact that limestone forms between 20% and 25% of all sedimentary rocks on Earth, that makes limestone deposits pretty easy to find and exploit.

Cement production begins with quarrying and crushing vast amounts of limestone. Cement plants are usually built alongside the quarries that produce the limestone or even right within them, to reduce transportation costs. Crushed limestone can be moved around the plant on conveyor belts or using powerful fans to blow the crushed rock through large pipes. Smaller plants might simply move raw materials around using haul trucks and front-end loaders. Along with the other primary ingredient, clay, limestone is stored in large silos located close to the star of the show: the rotary kiln.

Turning and Burning

A rotary kiln is an enormous tube, up to seven meters in diameter and perhaps 80 m long, set on a slight angle from the horizontal by a series of supports along its length. The supports have bearings built into them that allow the whole assembly to turn slowly, hence the name. The kiln is lined with refractory materials to resist the flames of a burner set in the lower end of the tube. Exhaust gases exit the kiln from the upper end through a riser pipe, which directs the hot gas through a series of preheaters that slowly raise the temperature of the entering raw materials, known as rawmix.

The rotary kiln is the centerpiece of Portland cement production. While hard to see in this photo, the body of the kiln tilts slightly down toward the structure on the left, where the burner enters and finished clinker exits. Source: by nordroden, via Adobe Stock (licensed).

Preheating the rawmix drives off any remaining water before it enters the kiln, and begins the decomposition of limestone into lime, or calcium oxide:

CaCO_{3} \rightarrow CaO + CO_{2}

The rotation of the kiln along with its slight slope results in a slow migration of rawmix down the length of the kiln and into increasingly hotter regions. Different reactions occur as the temperature increases. At the top of the kiln, the 500 °C heat decomposes the clay into silicate and aluminum oxide. Further down, as the heat reaches the 800 °C range, calcium oxide reacts with silicate to form the calcium silicate mineral known as belite:

2CaO + SiO_{2} \rightarrow 2CaO\cdot SiO_{2}

Finally, near the bottom of the kiln, belite and calcium oxide react to form another calcium silicate, alite:

2CaO\cdot SiO_{2} + CaO \rightarrow 3CaO\cdot SiO_{2}

It’s worth noting that cement chemists have a specialized nomenclature for alite, belite, and all the other intermediary phases of Portland cement production. It’s a shorthand that looks similar to standard chemical nomenclature, and while we’re sure it makes things easier for them, it’s somewhat infuriating to outsiders. We’ll stick to standard notation here to make things simpler. It’s also important to note that the aluminates that decomposed from the clay are still present in the rawmix. Even though they’re not shown in these reactions, they’re still critical to the proper curing of the cement.

Portland cement clinker. Each ball is just a couple of centimeters in diameter. Source: مرتضا, Public domain

The final section of the kiln is the hottest, at 1,500 °C. The extreme heat causes the material to sinter, a physical change that partially melts the particles and adheres them together into small, gray lumps called clinker. When the clinker pellets drop from the bottom of the kiln, they are still incandescently hot. Blasts of air rapidly bring the clinker down to around 100 °C. The exhaust from the clinker cooler joins the kiln exhaust and helps preheat the incoming rawmix charge, while the cooled clinker is mixed with a small amount of gypsum and ground in a ball mill. The fine gray powder is either bagged or piped into bulk containers for shipment by road, rail, or bulk cargo ship.

The Cure

Most cement is shipped to concrete plants, which tend to be much more widely distributed than cement plants due to the perishable nature of the product they produce. True, both plants rely on nearby deposits of easily accessible rock, but where cement requires limestone, the gravel and sand that go into concrete can come from a wide variety of rock types.

Concrete plants quarry massive amounts of rock, crush it to specifications, and stockpile the material until needed. Orders for concrete are fulfilled by mixing gravel and sand in the proper proportions in a mixer housed in a batch house, which is elevated above the ground to allow space for mixer trucks to drive underneath. The batch house operators mix aggregate, sand, and any other admixtures the customer might require, such as plasticizers, retarders, accelerants, or reinforcers like chopped fiberglass, before adding the prescribed amount of cement from storage silos. Water may or may not be added to the mix at this point. If the distance from the concrete plant to the job site is far enough, it may make sense to load the dry mix into the mixer truck and add the water later. But once the water goes into the mix, the clock starts ticking, because the cement begins to cure.
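The batching itself is straightforward arithmetic once a mix design is in hand. The Python sketch below uses a generic textbook-style 1:2:3 cement:sand:gravel ratio and a 0.5 water-cement ratio purely for illustration; a real batch house works from an engineered mix design, not these numbers.

```python
# Rough sketch of batch-house arithmetic for a nominal mix. The 1:2:3
# cement:sand:gravel weight ratios and the 0.5 water-cement ratio are generic
# illustrative values, not an engineered mix design.

def batch_weights(cement_kg: float, sand_ratio: float = 2.0,
                  gravel_ratio: float = 3.0, w_c_ratio: float = 0.5) -> dict:
    """Return the weight of each component for a given amount of cement."""
    return {
        "cement_kg": cement_kg,
        "sand_kg": cement_kg * sand_ratio,
        "gravel_kg": cement_kg * gravel_ratio,
        "water_kg": cement_kg * w_c_ratio,  # added at the plant or later in the truck
    }

# Example: a small batch built around 350 kg of cement
for component, kg in batch_weights(350.0).items():
    print(f"{component:>10}: {kg:7.1f}")
```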

Cement curing is a complex process involving the calcium silicates (alite and belite) in the cement, as well as the aluminate phases. Overall, the calcium silicates react with water to form a gel-like calcium silicate hydrate, with calcium hydroxide as a byproduct. For alite, the reaction is approximately:

2Ca_{3}SiO_{5} + 7H_{2}O \rightarrow 3CaO\cdot 2SiO_{2}\cdot 4H_{2}O + 3Ca(OH)_{2}

Scanning electron micrograph of cured Portland cement, showing needle-like ettringite and plate-like calcium hydroxide. Source: US Department of Transportation, Public domain

At the same time, the aluminate phases in the cement are being hydrated and interacting with the gypsum, which prevents early setting by forming a mineral known as ettringite. Without the needle-like ettringite crystals, aluminate ions would adsorb onto alite and block it from hydrating, which would quickly reduce the plasticity of the mix. Ideally, the ettringite crystals interlock with the calcium silicate gel, which binds to the surface of the sand and gravel and locks it into a solid.

Depending on which admixtures were added to the mix, most concretes begin to lose workability within a few hours of hydration. Initial curing is generally complete within about 24 hours, but the curing process continues long after the material has solidified. Concrete in this state is referred to as “green,” and it continues to gain strength over a period of weeks or even months.

Hackaday Links: May 4, 2025

4 May 2025 at 23:00
Hackaday Links Column Banner

By now, you’ve probably heard about Kosmos 482, a Soviet probe destined for Venus in 1972 that fell a bit short of the mark and stayed in Earth orbit for the last 53 years. Soon enough, though, the lander will make its fiery return; exactly where and when remain a mystery, but it should be sometime in the coming week. We talked about the return of Kosmos briefly on this week’s podcast and even joked a bit about how cool it would be if the parachute that would have been used for the descent to Venus had somehow deployed over its half-century in space. We might have been onto something, as astrophotographer Ralf Vandebergh has taken some pictures of the spacecraft that seem to show a structure connected to and trailing behind it. The chute is probably in pretty bad shape after 50 years of UV torture, but how cool is that?

Parachute or not, chances are good that the 495-kilogram spacecraft, built to not only land on Venus but to survive the heat, pressure, and corrosive effects of the hellish planet’s atmosphere, will at least partially survive reentry into Earth’s more welcoming environs. That’s a good news, bad news thing: good news that we might be able to recover a priceless artifact of late-Cold War space technology, bad news to anyone on the surface near where this thing lands. If Kosmos 482 does manage to do some damage, it won’t be the first time. Shortly after launch, pieces of titanium rained down on New Zealand after the probe’s booster failed to send it on its way to Venus, damaging crops and starting some fires. The Soviets, ever secretive about their space exploits until they could claim complete success, disavowed the debris and denied responsibility for it. That made the farmers whose fields they fell in the rightful owners, which is also pretty cool. We doubt that the long-lost Kosmos lander will get the same treatment, but it would be nice if it did.

Also of note in the news this week is a brief clip of a Unitree humanoid robot going absolutely ham during a demonstration — demo-hell, amiright? Potential danger to the nearby engineers notwithstanding, the footage is pretty hilarious. The demo, with a robot hanging from a hoist in a crowded lab, starts out calmly enough, but goes downhill quickly as the robot starts flailing its arms around. We’d say the movements were uncontrolled, but there are points where the robot really seems to be chasing the engineer and taking deliberate swipes at the poor guy, who was probably just trying to get to the e-stop switch. We know that’s probably just the anthropomorphization talking, but it sure looks like the bot had a beef to settle.  You be the judge.

Also from China comes a report of “reverse ATMs” that accept gold and turn it into cash on the spot (apologies for yet another social media link, but that’s where the stories are these days). The machine shown has a hopper into which customers can load their unwanted jewelry, after which it is reportedly melted down and assayed for purity. The funds are then directly credited to the customer’s account electronically. We’re not sure we fully believe this — thinking about the various failure modes of one of those fresh-brewed coffee machines, we shudder to think about the consequences of a machine with a 1,000°C furnace built into it. We also can’t help but wonder how the machine assays the scrap gold — X-ray fluorescence? Raman spectroscopy? Also, what happens to the unlucky customer who puts some jewelry in that they thought was real gold, only to be told by the machine that it wasn’t? Do they just get their stuff back as a molten blob? The mind boggles.

And finally, the European Space Agency has released a stunning new image of the Sun. Captured by their Solar Orbiter spacecraft in March from about 77 million kilometers away, the mosaic is composed of about 200 images from the Extreme Ultraviolet Imager. The Sun was looking particularly good that day, with filaments, active regions, prominences, and coronal loops in evidence, along with the ethereal beauty of the Sun’s atmosphere. The image is said to be the most detailed view of the Sun yet taken, and needs to be seen in full resolution to be appreciated. Click on the image below and zoom to your heart’s content.

Look! It’s a Knob! It’s a Jack! It’s Euroknob!

28 April 2025 at 08:00

Are your Eurorack modules too crowded? Sick of your patch cables making it hard to twiddle your knobs? Then you might be very interested in the new Euroknob, the knob that sports a hidden patch cable jack.

Honestly, when we first saw the Euroknob demo board, we thought [Mitxela] had gone a little off the rails. It looks like nothing more than a PCB-mount potentiometer or perhaps an encoder with a knob attached. Twist the knob and a row of LEDs on the board light up in sequence. Nice, but not exactly what we’re used to seeing from him. But then he popped the knob off the board, revealing that what we thought was the pot body is actually a 3.5-mm audio jack, and that the knob was attached to a mating plug that acts as an axle.

The kicker is that underneath the audio jack is an AS5600 magnetic encoder, and hidden in a slot milled in the tip of the audio jack is a tiny magnet. Pop the knob into the jack, give it a twist, and you’ve got manual control of your module. Take the knob out, plug in a patch cable, and you can let a control voltage from another module do the job. Genius!

To make it all work mechanically, [Mitxela] had to sandwich a spacer board on top of the main PCB. The spacer has a large cutout to make room for the sensor chip so the magnet can rotate without hitting anything. He also added a CH32V003 to run the encoder and drive the LEDs to provide feedback for the knob-jack. The video below has a brief demo.
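
[Mitxela]’s firmware is C on the CH32V003, but the sensing side is simple enough to sketch from a Linux single-board computer in Python: the AS5600 sits at I2C address 0x36 and serves a 12-bit raw angle from registers 0x0C and 0x0D. The LED mapping below is our own stand-in, not the demo board’s actual feedback scheme.

    # Sketch of reading an AS5600 magnetic encoder over I2C (e.g. on a Raspberry Pi).
    # This is not [Mitxela]'s CH32V003 firmware, just the same idea in Python.
    from smbus2 import SMBus

    AS5600_ADDR = 0x36       # fixed I2C address of the AS5600
    RAW_ANGLE_REG = 0x0C     # 12-bit raw angle: high byte at 0x0C, low byte at 0x0D

    def read_angle_degrees(bus):
        """Return the magnet angle in degrees (0-360)."""
        hi, lo = bus.read_i2c_block_data(AS5600_ADDR, RAW_ANGLE_REG, 2)
        raw = ((hi << 8) | lo) & 0x0FFF      # 12-bit value, 0..4095
        return raw * 360.0 / 4096.0

    def leds_to_light(angle_deg, num_leds=8):
        """Map the knob angle onto a simple bar of LEDs (our own feedback scheme)."""
        return int(angle_deg / 360.0 * num_leds)

    if __name__ == "__main__":
        with SMBus(1) as bus:                # I2C bus 1 on a Raspberry Pi
            angle = read_angle_degrees(bus)
            print(f"knob at {angle:.1f} deg -> light {leds_to_light(angle)} LEDs")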

This is just a proof of concept, to be sure, but it’s still pretty slick. Almost as slick as [Mitxela]’s recent fluid-motion simulation pendant, or his dual-wielding soldering irons.

Hackaday Links: April 27, 2025

27 April 2025 at 23:00
Hackaday Links Column Banner

Looks like the Simpsons had it right again, now that an Australian radio station has been caught using an AI-generated DJ for their midday slot. Station CADA, a Sydney-based broadcaster that’s part of the Australian Radio Network, revealed that “Workdays with Thy” isn’t actually hosted by a person; rather, “Thy” is a generative AI text-to-speech system that has been on the air since November. An actual employee of the ARN finance department was used for Thy’s voice model and her headshot, which adds a bit to the creepy factor.

The discovery that they’ve been listening to a bot for months apparently has Thy’s fans in an uproar, although we suspect that the media doing the reporting is probably more exercised about this than the general public. Radio stations have used robo-jocks for the midday slot for ages, albeit using actual human DJs to record patter to play between tunes and commercials. Anyone paying attention over the last few years probably shouldn’t be surprised by this development, and we suspect similar disclosures will be forthcoming across the industry now that the cat’s out of the bag.

Also from the world of robotics, albeit the hardware kind, is this excellent essay from Brian Potter over at Construction Physics about the sad state of manual dexterity in humanoid robots. The whole article is worth reading, not least for the link to a rogue’s gallery of the current crop of humanoid robots, but briefly, the essay contends that while humanoid robots do a pretty good job of navigating in the world, their ability to do even the simplest tasks is somewhat wanting.

Brian’s example of unwrapping and applying a Band-Aid, a task that any toddler can handle but one that remains unimaginably difficult for any current robot, is quite apt. He attributes the gap in abilities between gross movements and fine motor control partly to hardware and partly to software. We think the blame skews more to the hardware side; while the legs and torso of the typical humanoid robot offer a lot of real estate for powerful actuators, squeezing that much equipment into a hand approximately the size of a human’s is a tall order. These problems will likely be overcome, of course, and when they are, Brian’s helpful list of “Dexterity Evals” or something similar will act as a sort of Turing test for robot dexterity. Although the day a humanoid robot can start a new roll of toilet paper without tearing the first sheet is the day we head for the woods.

We recently did a story on the use of nitrogen-vacancy diamonds as magnetic sensors, which we found really exciting because it’s about the simplest way we’ve seen to play with quantum physics at home. After that story ran, eagle-eyed reader Kealan noticed that Brian over at the “Real Engineering” channel on YouTube had recently run a video on anti-submarine warfare, which includes the uses of similar quantum magnetometers to detect submarines. The magnetometers in the video are based on the Zeeman effect and use laser-pumped helium atoms to detect tiny variations in the Earth’s magnetic field due to large ferrous objects like submarines. Pretty cool video; check it out.

And finally, if you have the slightest interest in civil engineering, you’ve got to check out Animagraffs’ recent 3D tour of the insides of Hoover Dam. If you thought a dam was just a big, boring block of concrete dumped in the middle of a river, think again. The video is incredibly detailed and starts with accurate 3D models of Black Canyon before the dam was built. Every single detail of the dam is shown, with the “X-ray views” of the dam with the surrounding rock taken away being our favorite bit — reminds us a bit of the book Underground by David Macaulay. But at the end of the day, it’s the sheer scale of Hoover Dam that really comes across in this video. The way that the structure dwarfs the human-for-scale included in almost every sequence is hard to express — megalophobics, beware. We were also floored by just how much machinery is buried in all that concrete. Sure, we knew about the generators, but the gates on the intake towers and the way the spillways work were news to us. Highly recommended.

Clickspring’s Experimental Archaeology: Concentric Thin-Walled Tubing

25 April 2025 at 08:00

It’s human nature to look at the technological achievements of the ancients — you know, anything before the 1990s — and marvel at how they were able to achieve precision results in such benighted times. How could anyone create a complicated mechanism without the aid of CNC machining and computer-aided design tools? Clearly, it was aliens.

Or, as [Chris] from Clickspring demonstrates by creating precision nesting thin-wall tubing, it was human beings running the same wetware we have between our ears, but with a lot more patience and ingenuity. It’s part of his series of experiments into how the craftsmen of antiquity made complicated devices like the Antikythera mechanism with simple tools. He starts by cleaning up roughly wrought brass rods on his hand-powered lathe, followed by drilling and reaming to create three tubes with precisely stepped bores. He then creates matching pistons for each tube, with a nearly gas-tight fit right off the lathe.

Getting the piston fit to true gas-tight precision came next, by lapping with a jeweler’s rouge made from iron swarf recovered from the bench. Allowed to rust and ground to a paste using a mortar and pestle, the red iron oxide mixed with olive oil made a dandy fine abrasive, perfect for polishing the metal to a high gloss finish. Making the set of tubes concentric required truing up the bores on the lathe, starting with the inner-most tube and adding the next-largest tube once the outer diameter was lapped to spec.

Easy? Not by a long shot! It looks like a tedious job that we suspect was given to the apprentice while the master worked on more interesting chores. But clearly, it was possible to achieve precision that challenges even today’s most exacting needs with nothing but the simplest tools and plenty of skill.

To See Within: Detecting X-Rays

23 April 2025 at 14:00

It’s amazing how quickly medical science made radiography one of its main diagnostic tools. Medicine had barely emerged from its Dark Age of bloodletting and the four humours when X-rays were discovered, and the realization that the internal structure of our bodies could cast shadows of this mysterious “X-Light” opened up diagnostic possibilities that went far beyond the educated guesswork and exploratory surgery doctors had relied on for centuries.

The problem is, X-rays are one of those things that you can’t see, feel, or smell, at least mostly; X-rays cause visible artifacts in some people’s eyes, and the pencil-thin beam of a CT scanner can create a distinct smell of ozone when it passes through the nasal cavity — ask me how I know. But to be diagnostically useful, the varying intensities created by X-rays passing through living tissue need to be translated into an image. We’ve already looked at how X-rays are produced, so now it’s time to take a look at how X-rays are detected and turned into medical miracles.

Taking Pictures

For over a century, photographic film was the dominant way to detect medical X-rays. In fact, years before Wilhelm Conrad Röntgen’s first systematic study of X-rays in 1895, fogged photographic plates during experiments with a Crookes tube were among the first indications of their existence. But it wasn’t until Röntgen convinced his wife to hold her hand between one of his tubes and a photographic plate to create the first intentional medical X-ray that the full potential of radiography could be realized.

“Hand mit Ringen” by W. Röntgen, December 1895. Public domain.

The chemical mechanism that makes photographic film sensitive to X-rays is essentially the same as the process that makes light photography possible. X-ray film is made by depositing a thin layer of photographic emulsion on a transparent substrate, originally celluloid but later polyester. The emulsion is a mixture of high-grade gelatin, a natural polymer derived from animal connective tissue, and silver halide crystals. Incident X-ray photons ionize the halide ions, freeing electrons within the crystals that reduce silver ions to atomic silver. This creates a latent image on the film that is developed by chemically converting sensitized silver halide crystals to metallic silver grains and removing all the unsensitized crystals.

Other than in the earliest days of medical radiography, direct X-ray imaging onto photographic emulsions was rare. While photographic emulsions can be exposed by X-rays, it takes a lot of energy to get a good image with proper contrast, especially on soft tissues. This became a problem as more was learned about the dangers of exposure to ionizing radiation, leading to the development of screen-film radiography.

In screen-film radiography, X-rays passing through the patient’s tissues are converted to light by one or more intensifying screens. These screens are made from plastic sheets coated with a phosphorescent material that glows when exposed to X-rays. Calcium tungstate was common back in the day, but rare earth phosphors like gadolinium oxysulfide became more popular over time. Intensifying screens were attached to the front and back covers of light-proof cassettes, with double-emulsion film sandwiched between them; when exposed to X-rays, the screens would glow briefly and expose the film.

By turning one incident X-ray photon into thousands or millions of visible light photons, intensifying screens greatly reduce the dose of radiation needed to create diagnostically useful images. That’s not without its costs, though, as the phosphors tend to spread out each X-ray photon across a physically larger area. This results in a loss of resolution in the image, which in most cases is an acceptable trade-off. When more resolution is needed, single-screen cassettes can be used with one-sided emulsion films, at the cost of increasing the X-ray dose.

Wiggle Those Toes

Intensifying screens aren’t the only place where phosphors are used to detect X-rays. Early on in the history of radiography, doctors realized that while static images were useful, continuous images of body structures in action would be a fantastic diagnostic tool. Originally, fluoroscopy was performed directly, with the radiologist viewing images created by X-rays passing through the patient onto a phosphor-covered glass screen. This required an X-ray tube engineered to operate with a higher duty cycle than radiographic tubes and had the dual disadvantages of much higher doses for the patient and the need for the doctor to be directly in the line of fire of the X-rays. Cataracts were enough of an occupational hazard for radiologists that safety glasses using leaded glass lenses were a common accessory.

How not to test your portable fluoroscope. The X-ray tube is located in the upper housing, while the image intensifier and camera are below. The machine is generally referred to as a “C-arm” and is used in the surgery suite and for bedside pacemaker placements. Source: Nightryder84, CC BY-SA 3.0.

One ill-advised spin-off of medical fluoroscopy was the shoe-fitting fluoroscopes that started popping up in shoe stores in the 1920s. Customers would stick their feet inside the machine and peer at a fluorescent screen to see how well their new shoes fit. It was probably not terribly dangerous for the once-a-year shoe shopper, but pity the shoe salesman who had to peer directly into a poorly regulated X-ray beam eight hours a day to show every Little Johnny’s mother how well his new Buster Browns fit.

As technology improved, image intensifiers replaced direct screens in fluoroscopy suites. Image intensifiers were vacuum tubes with a large input window coated with a fluorescent material such as zinc-cadmium sulfide or sodium-doped cesium iodide. The phosphors convert X-rays passing through the patient to visible light photons, which are immediately converted to photoelectrons by a photocathode made of cesium and antimony. The electrons are focused by electrostatic electrodes and accelerated across the image intensifier tube by a high-voltage field on a cylindrical anode. The electrons pass through the anode and strike a phosphor-covered output screen, which is much smaller in diameter than the input screen. Incident X-ray photons are greatly amplified by the image intensifier, making a brighter image with a lower dose of radiation.

Originally, the radiologist viewed the output screen using a microscope, which at least put a little more hardware between his or her eyeball and the X-ray source. Later, mirrors and lenses were added to project the image onto a screen, moving the doctor’s head out of the direct line of fire. Later still, analog TV cameras were added to the optical path so the images could be displayed on high-resolution CRT monitors in the fluoroscopy suite. Eventually, digital cameras and advanced digital signal processing were introduced, greatly streamlining the workflow for the radiologist and technologists alike.

Get To The Point

So far, all the detection methods we’ve discussed fall under the general category of planar detectors, in that they capture an entire 2D shadow of the X-ray beam after having passed through the patient. While that’s certainly useful, there are cases where the dose from a single, well-defined volume of tissue is needed. This is where point detectors come into play.

Nuclear medicine image, or scintigraph, of metastatic cancer. 99mTc accumulates in lesions in the ribs and elbows (A), which are mostly resolved after chemotherapy (B). Note the normal accumulation of isotope in the kidneys and bladder. Kazunari Mado, Yukimoto Ishii, Takero Mazaki, Masaya Ushio, Hideki Masuda and Tadatoshi Takayama, CC BY-SA 2.0.

In medical X-ray equipment, point detectors often rely on some of the same gas-discharge technology that DIYers use to build radiation detectors at home. Geiger tubes and ionization chambers measure the current created when X-rays ionize a low-pressure gas inside an electric field. Geiger tubes generally use a much higher voltage than ionization chambers, and tend to be used more for radiological safety, especially in nuclear medicine applications, where radioisotopes are used to diagnose and treat diseases. Ionization chambers, on the other hand, were often used as a sort of autoexposure control for conventional radiography. Tubes were placed behind the film cassette holders in the exam tables of X-ray suites and wired into the control panels of the X-ray generators. When enough radiation had passed through the patient, the film, and the cassette into the ion chamber to yield a correct exposure, the generator would shut off the X-ray beam.
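
In software terms, that automatic exposure control is just an integrator with a threshold: accumulate the chamber signal until a preset charge is reached, then drop the beam. A toy sketch of the idea, with entirely made-up numbers:

    # Toy model of film-era automatic exposure control (AEC): integrate the
    # ion-chamber signal and cut the beam when a preset charge is reached.
    # All numbers are illustrative, not from any real X-ray generator.

    def run_exposure(read_chamber_current, target_charge_nc, dt_ms=1.0, max_ms=2000):
        """Integrate chamber current (nA) over time until the target charge (nC) is hit.
        Returns the exposure time in milliseconds."""
        charge_nc = 0.0
        t_ms = 0.0
        while charge_nc < target_charge_nc and t_ms < max_ms:
            charge_nc += read_chamber_current() * dt_ms / 1000.0  # nA * s = nC
            t_ms += dt_ms
        return t_ms  # the generator drops the high voltage here (backup timer at max_ms)

    if __name__ == "__main__":
        # Pretend the chamber sees a steady 500 nA behind an average patient.
        time_ms = run_exposure(lambda: 500.0, target_charge_nc=50.0)
        print(f"beam terminated after {time_ms:.0f} ms")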

Another kind of point detector for X-rays and other kinds of radiation is the scintillation counter. These use a crystal, often cesium iodide or sodium iodide doped with thallium, that releases a few visible light photons when it absorbs ionizing radiation. The faint pulse of light is greatly amplified by one or more photomultiplier tubes, creating a pulse of current proportional to the amount of radiation. Nuclear medicine studies use a device called a gamma camera, which has a hexagonal array of PM tubes positioned behind a single large crystal. A patient is injected with a radioisotope such as the gamma-emitting technetium-99m, which accumulates mainly in the bones. The emitted gamma rays are collected by the gamma camera, which derives positional information from the relative intensities of the light pulse seen by each PM tube, slowly building a ghostly skeletal map of the patient by measuring where the 99mTc accumulated.
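
That position estimate is classic Anger logic: each event’s location is the intensity-weighted centroid of the photomultiplier signals. A minimal sketch with an invented tube layout:

    # Minimal sketch of Anger-camera position logic: an event's (x, y) is the
    # intensity-weighted centroid of the photomultiplier-tube signals.
    # Tube positions and signals below are invented for illustration.

    def anger_centroid(tube_positions, tube_signals):
        """Estimate event position from PM-tube coordinates and their pulse heights."""
        total = sum(tube_signals)
        x = sum(px * s for (px, _), s in zip(tube_positions, tube_signals)) / total
        y = sum(py * s for (_, py), s in zip(tube_positions, tube_signals)) / total
        return x, y, total   # the summed pulse height also gates on the 140 keV photopeak

    if __name__ == "__main__":
        tubes = [(-1, 0), (0, 0), (1, 0), (0, 1)]        # toy 4-tube layout (cm)
        signals = [0.2, 1.0, 0.4, 0.3]                   # relative pulse heights
        x, y, e = anger_centroid(tubes, signals)
        print(f"event at ({x:.2f}, {y:.2f}) cm, energy sum {e:.2f}")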

Going Digital

Despite dominating the industry for so long, the days of traditional film-based radiography were clearly numbered once solid-state image sensors began appearing in the 1980s. While it was reliable and gave excellent results, film development required a lot of infrastructure and expense, and resulted in bulky films that required a lot of space to store. The savings from doing away with all the trappings of film-based radiography, including the darkrooms, automatic film processors, chemicals, silver recycling, and often hundreds of expensive film cassettes, is largely what drove the move to digital radiography.

After briefly flirting with phosphor plate radiography, where a sensitized phosphor-coated plate was exposed to X-rays and then “developed” by a special scanner before being recharged for the next use, radiology departments embraced solid-state sensors and fully digital image capture and storage. Solid-state sensors come in two flavors: indirect and direct. Indirect sensor systems use a large matrix of photodiodes on amorphous silicon to measure the light given off by a scintillation layer directly above it. It’s basically the same thing as a film cassette with intensifying screens, but without the film.

Direct sensors, on the other hand, don’t rely on converting the X-ray into light. Rather, a large flat selenium photoconductor is used; X-rays absorbed by the selenium cause electron-hole pairs to form, which migrate to a matrix of fine electrodes on the underside of the sensor. The charge collected at each pixel is proportional to the amount of radiation received, and can be read out pixel-by-pixel to build up a digital image.

A Scratch-Built Commodore 64, Turing Style

23 April 2025 at 08:00

Building a Commodore 64 is among the easier projects for retrocomputing fans to tackle. That’s because the C64’s core chipset does most of the heavy lifting; source those and you’re probably 80% of the way there. But what if you can’t find those chips, or if you want more of a challenge than plugging and chugging? Are you out of luck?

Hardly. The video below from [DrMattRegan] is the first in a series on his scratch-built C64 that doesn’t use the core chipset, and it looks pretty promising. This video concentrates on building a replacement for the 6502 microprocessor — actually the 6510, but close enough — using just a couple of EPROMs, some SRAM chips, and a few standard logic chips to glue everything together. He uses the EPROMs as a “rulebook” that contains the code to emulate the 6502 — derived from his earlier Turing 6502 project — and the SRAM chips as a “notebook” for scratch memory and registers to make a Turing-complete random access machine.
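
The rulebook-and-notebook split is essentially a table-driven state machine: a ROM lookup decides what each opcode does, while RAM holds the registers and scratch state. The toy Python below illustrates the shape of that idea with three real 6502 opcodes; it is emphatically not [DrMattRegan]’s microcode.

    # Toy illustration of the "rulebook / notebook" idea behind a table-driven CPU:
    # a ROM-style lookup table maps an opcode to a micro-operation, and a small RAM
    # dictionary holds registers and scratch state. Not the real 6502 microcode.

    RULEBOOK = {
        0xA9: ("LDA #imm", lambda nb, arg: nb.update(A=arg)),                    # load accumulator
        0x69: ("ADC #imm", lambda nb, arg: nb.update(A=(nb["A"] + arg) & 0xFF)), # add, ignoring the carry flag
        0x00: ("BRK",      lambda nb, arg: nb.update(halted=True)),              # stop the toy machine
    }

    def run(program):
        notebook = {"A": 0, "PC": 0, "halted": False}    # the "notebook": registers and scratch
        while not notebook["halted"]:
            opcode = program[notebook["PC"]]
            name, micro_op = RULEBOOK[opcode]            # the "rulebook": a pure table lookup
            arg = program[notebook["PC"] + 1] if name != "BRK" else None
            micro_op(notebook, arg)
            notebook["PC"] += 1 if name == "BRK" else 2
        return notebook

    if __name__ == "__main__":
        # LDA #$05 ; ADC #$03 ; BRK  -> accumulator ends up holding 8
        print(run([0xA9, 0x05, 0x69, 0x03, 0x00]))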

[DrMatt] has made good progress so far, with the core 6502 CPU built on a PCB and able to run the Apple II version of Pac-Man as a benchmark. We’re looking forward to the rest of this series, but in the meantime, a look back at his VIC-less VIC-20 project might be informative.

Thanks to [Clint] for the tip.

Hackaday Links: April 20, 2025

20 April 2025 at 23:00
Hackaday Links Column Banner

We appear to be edging ever closer to a solid statement of “We are not alone” in the universe with this week’s announcement of the detection of biosignatures in the atmosphere of exoplanet K2-18b. The planet, which is 124 light-years away, has been the focus of much attention since it was discovered in 2015 using the Kepler space telescope because it lies in the habitable zone around its red-dwarf star. Initial observations with Hubble indicated the presence of water vapor, and follow-up investigations using the James Webb Space Telescope detected all sorts of goodies in the atmosphere, including carbon dioxide and methane. But more recently, JWST saw signs of dimethyl sulfide (DMS) and dimethyl disulfide (DMDS), organic molecules which, on Earth, are strongly associated with biological processes in marine bacteria and phytoplankton.

The team analyzing the JWST data says that the data is currently pretty good, with a statistical significance of 99.7%. That’s a three-sigma result, and while it’s promising, it’s not quite good enough to seal the deal that life evolved more than once in the universe. If further JWST observations manage to firm that up to five sigma, it’ll be the most important scientific result of all time. To our way of thinking, it would be much more significant than finding evidence of ancient or even current life in our solar system, since cross-contamination is so easy in the relatively cozy confines of the Sun’s gravity well. K2-18b is far enough away from our system as to make that virtually impossible, and that would say a lot about the universality of biochemical evolution. It could also provide an answer to the Fermi Paradox, since it could indicate that the galaxy is actually teeming with life but under conditions that make it difficult to evolve into species capable of making detectable techno-signatures. It’s hard to build a radio or a rocket when you live on a high-g water world, after all.
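
For anyone wanting to translate between the confidence figures and the sigmas being thrown around, the conversion is just a normal-distribution tail calculation; a quick sketch using scipy, with the usual two-sided convention:

    # Converting between confidence level and "sigma" (two-sided, normal distribution).
    # 99.7% is the familiar three-sigma level; discovery-grade five sigma is far stricter.
    from scipy.stats import norm

    def confidence_to_sigma(confidence):
        return norm.ppf(1 - (1 - confidence) / 2)

    def sigma_to_confidence(sigma):
        return 1 - 2 * (1 - norm.cdf(sigma))

    print(f"99.7% confidence ~ {confidence_to_sigma(0.997):.2f} sigma")
    print(f"5 sigma          ~ {sigma_to_confidence(5) * 100:.5f}% confidence")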

Closer to home, there’s speculation that the famous Antikythera mechanism may not have worked at all in its heyday. According to researchers from Universidad Nacional de Mar del Plata in Argentina, “the world’s first analog computer” could not have worked due to the accumulated mechanical error of its gears. They blame this on the shape of the gear teeth, which appear triangular on CT scans of the mechanism, and which they seem to attribute to manufacturing defects. Given the 20-odd centuries the brass-and-iron device spent at the bottom of the Aegean Sea and the potential for artifacts in CT scans, we’re not sure it’s safe to pin the suboptimal shape of the gear teeth on the maker of the mechanism. They also seem to call into question the ability of 1st-century BCE craftsmen to construct a mechanism with sufficient precision to serve as a useful astronomical calculator, a position that Chris from Clickspring has been putting the lie to with his ongoing effort to reproduce the Antikythera mechanism using ancient tools and materials. We’re keen to hear what he has to say about this issue.

Speaking of questionable scientific papers, have you heard about “vegetative electron microscopy”? It’s all the rage, having been mentioned in at least 22 scientific papers recently, even though no such technique exists. Or rather, it didn’t exist until around 2017, when it popped up in a couple of Iranian scientific papers. How it came into being is a bit of a mystery, but it may have started with faulty scans of a paper from the 1950s, which had the terms “vegetative” and “electron microscopy” printed in different columns but directly across from each other. That somehow led to the terms getting glued together, possibly in one of those Iranian papers because the Farsi spelling of “vegetative” is very similar to “scanning,” a much more sensible prefix to “electron microscopy.” Once the nonsense term was created, it propagated into subsequent papers of dubious scientific provenance by authors who didn’t bother to check their references, or perhaps never existed in the first place. The wonders of our AI world never cease to amaze.

And finally, from the heart of Silicon Valley comes a tale of cyber hijinks as several crosswalks were hacked to taunt everyone’s favorite billionaires. Twelve Palo Alto crosswalks were targeted by persons unknown, who somehow managed to gain access to the voice announcement system in the crosswalks and replaced the normally helpful voice messages with deep-fake audio of Elon Musk and Mark Zuckerberg saying ridiculous but plausible things. Redwood City and Menlo Park crosswalks may have also been attacked, and soulless city officials responded by disabling the voice feature. We get why they had to do it, but as cyberattacks go, this one seems pretty harmless.

Designing an FM Drum Synth from Scratch

17 April 2025 at 20:00

How it started: a simple repair job on a Roland drum machine. How it ended: a scratch-built FM drum synth module that’s completely analog, and completely cool.

[Moritz Klein]’s journey down the analog drum machine rabbit hole started with a Roland TR-909, a hybrid drum machine from the mid-80s that combined sampled sounds with analog synthesis. The unit [Moritz] picked up was having trouble with the decay on the kick drum, so he spread out the gloriously detailed schematic and got to work. He breadboarded a few sections of the kick drum circuit to aid troubleshooting, but one thing led to another and he was soon in new territory.

The video below is on the longish side, with the first third or so dedicated to recreating the circuits used to create the 909’s iconic sound, slightly modifying some of them to simplify construction. Like the schematic that started the whole thing, this section of the video is jam-packed with goodness, too much to detail here. But a few of the gems that caught our eye were the voltage-controlled amplifier (VCA) circuit that seems to make appearances in multiple places in the circuit, and the dead-simple wave-shaper circuit, which takes some of the harmonics out of the triangle wave oscillator’s output with just a couple of diodes and some resistors.

Once the 909’s kick and toms section had been breadboarded, [Moritz] turned his attention to adding something Roland hadn’t included: frequency modulation. He did this by adding a second, lower-frequency voltage-controlled oscillator (VCO) and using that to modulate the drum section. That resulted in a weird, metallic sound that can be tuned to imitate anything from a steel drum to a bell. He also added a hi-hat and cymbal section by mixing the square wave outputs on the VCOs through a funky XOR gate made from discrete components and a high-pass filter.
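
The FM trick is easy to play with digitally, too. The sketch below is our own software stand-in for the analog circuit, not [Moritz]’s design: a decaying sine “drum” whose pitch is wobbled by a slower modulating oscillator, written out as a WAV file. The frequencies and envelope times are arbitrary.

    # Digital stand-in for the FM drum idea: a decaying sine "drum" oscillator
    # whose frequency is modulated by a second, slower oscillator.
    # Frequencies and envelopes are arbitrary choices.
    import math, struct, wave

    RATE = 44100

    def fm_drum(seconds=1.0, carrier_hz=80.0, mod_hz=35.0, mod_depth_hz=60.0, decay=6.0):
        samples = []
        phase = 0.0
        for n in range(int(seconds * RATE)):
            t = n / RATE
            env = math.exp(-decay * t)                      # amplitude decay envelope
            inst_freq = carrier_hz + mod_depth_hz * math.sin(2 * math.pi * mod_hz * t)
            phase += 2 * math.pi * inst_freq / RATE         # integrate frequency to get phase
            samples.append(env * math.sin(phase))
        return samples

    if __name__ == "__main__":
        with wave.open("fm_drum.wav", "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(RATE)
            w.writeframes(b"".join(struct.pack("<h", int(32767 * s)) for s in fm_drum()))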

There’s a lot of information packed into this video, and by breaking everything down into small, simple blocks, [Moritz] makes it easy to understand analog synths and the circuits behind them.

An Absolute Zero of a Project

17 April 2025 at 02:00

How would you go about determining absolute zero? Intuitively, it seems like you’d need some complicated physics setup with lasers and maybe some liquid helium. But as it turns out, all you need is some simple lab glassware and a heat gun. And a laser, of course.

To be clear, the method that [Markus Bindhammer] describes in the video below is only an estimation of absolute zero via Charles’s Law, which describes how gases expand when heated. To gather the needed data, [Marb] used a 50-ml glass syringe mounted horizontally on a stand and fitted with a thermocouple. Across from the plunger of the syringe he placed a VL6180 laser time-of-flight sensor, to measure the displacement of the plunger as the air within it expands.

Data from the TOF sensor and the thermocouple were recorded by a microcontroller as the air inside the syringe was gently heated. Plotting the volume of the gas versus the temperature shows a nicely linear relationship, and the linear regression can be used to calculate the temperature at which the volume of the gas would be zero. The result: -268.82°C, or only about four degrees off from the accepted value of -273.15°C. Not too shabby.
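
Reproducing the math takes only a linear fit: regress volume against temperature and solve for the temperature at which the fitted volume reaches zero. The readings below are invented to show the shape of the calculation, not [Marb]’s actual data.

    # Charles's-law estimate of absolute zero: fit volume vs. temperature and
    # extrapolate to zero volume. The data points are invented, not [Marb]'s readings.
    import numpy as np

    temps_c = np.array([20.0, 35.0, 50.0, 65.0, 80.0])      # thermocouple readings, deg C
    volumes_ml = np.array([50.0, 52.6, 55.1, 57.7, 60.2])   # syringe volume from the TOF sensor

    slope, intercept = np.polyfit(temps_c, volumes_ml, 1)   # V = slope * T + intercept
    absolute_zero_c = -intercept / slope                    # temperature where V = 0

    print(f"estimated absolute zero: {absolute_zero_c:.1f} deg C")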

[Marb] has been on a tear lately with science projects like these; check out his open-source blood glucose measurement method or his all-in-one electrochemistry lab.

Homemade VNA Delivers High-Frequency Performance on a Budget

16 April 2025 at 11:00

With vector network analyzers, the commercial offerings seem to come in two flavors: relatively inexpensive but limited in capability, and full-featured but scary expensive. There doesn’t seem to be much middle ground, especially if you want something that performs well in the microwave bands.

Unless, of course, you build your own vector network analyzer (VNA). That’s what [Henrik Forsten] did, and we’ve got to say we’re even more impressed by the results than we were with his earlier effort. That version was not without its problems, and fixing them was very much on the list of goals for this build. Keeping the build affordable was also key, which resulted in some design compromises while still meeting [Henrik]’s measurement requirements.

The Bill of Materials includes dual-channel broadband RF mixer chips, high-speed 12-bit ADCs, and a fast FPGA to handle the torrent of data and run the digital signal processing functions. The custom six-layer PCB is on the large side and includes large cutouts for the directional couplers, which use short lengths of stripped coaxial cable lined with ferrite rings. To properly isolate signals between stages, [Henrik] sandwiched the PCB between a two-piece aluminum enclosure. Wisely, he printed a prototype enclosure and lined it with aluminum foil to test for fit and function before committing to milling the final version. He did note some leakage around the SMA connectors, but a few RF gaskets made from scraps of foil and solder braid did the trick.
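
The downstream math a VNA performs on those coupler signals is refreshingly simple. As a toy example (with invented values, and in no way [Henrik]’s FPGA code), here’s how a complex ratio of reflected to incident voltage becomes S11, return loss, and SWR:

    # Toy example of VNA post-processing: the complex ratio of reflected to
    # incident coupler voltages gives S11, from which return loss and SWR follow.
    # Sample values are invented.
    import math

    def s11_metrics(v_incident, v_reflected):
        gamma = v_reflected / v_incident              # complex reflection coefficient, S11
        mag = abs(gamma)
        return_loss_db = -20 * math.log10(mag) if mag > 0 else float("inf")
        swr = (1 + mag) / (1 - mag) if mag < 1 else float("inf")
        return gamma, return_loss_db, swr

    if __name__ == "__main__":
        gamma, rl, swr = s11_metrics(1.0 + 0j, 0.1 + 0.05j)
        print(f"S11 = {gamma:.3f}, return loss = {rl:.1f} dB, SWR = {swr:.2f}")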

This is a pretty slick build, especially considering he managed to keep the price tag at a very reasonable $300. It’s more expensive than the popular NanoVNA or its clones, but it seems like quite a bargain considering its capabilities.
