
The Potential Big Boom In Every Dust Cloud

By: Maya Posch
2 June 2025 at 14:00

To the average person, walking into a flour- or sawmill and seeing dust swirling around is unlikely to evoke much of a response, but those in the know are quite likely to bolt for the nearest exit at this harrowing sight. For as harmless as a fine cloud of flour, sawdust or even coffee creamer may appear, each of these have the potential for a massive conflagration and even an earth-shattering detonation.

As for the ‘why’, the answer can be found in, for example, the working principle behind an internal combustion engine. While a puddle of gasoline is definitely flammable, the only thing that actually burns is the evaporated gaseous form above the liquid, so it’s a relatively slow process; to make petrol combust, it needs to be mixed with air in the right air-fuel ratio. If this mixture is then exposed to a spark, the fuel burns nearly instantly, causing a detonation due to the sudden release of energy.

Similarly, flour, sawdust, and many other substances in powder form burn only gradually as long as combustion is confined to the interface between the bulk material and the air. A bucket of sawdust burns slowly, but disperse that same sawdust into a cloud and it might just blow up the room.

This raises the questions of how to recognize this danger and what to do about it.

Welcome To The Chemical Safety Board

In an industrial setting, people will generally acknowledge that oil refineries and chemical plants are dangerous and can occasionally go boom in rather violent ways. More surprising is that something as seemingly innocuous as a sugar refinery and packing plant can go from a light sprinkling of sugar dust to a violent and lethal explosion within a second. This is, however, exactly what happened in 2008 at the Imperial Sugar refinery in Georgia, killing fourteen and injuring thirty-six. During this disaster, a primary and multiple secondary explosions ripped through the building, completely destroying it.

Georgia Imperial Sugar Refinery aftermath in 2008. (Credit: USCSB)

As described in the US Chemical Safety Board (USCSB) report with accompanying summary video (embedded below), the biggest cause was a lack of ventilation and cleaning that allowed for a build-up of sugar dust, with an ignition source, likely an overheated bearing, setting off the primary explosion. This explosion then found subsequent fuel to ignite elsewhere in the building, setting off a chain reaction.

What is striking is just how simple and straightforward both the build-up towards the disaster and the means to prevent it were. Even without knowing the exact air-fuel ratio for the fuel in question, there are only two regions on the scale where a mixture will not violently explode in the presence of an ignition source.

These are either far too rich — too much fuel, not enough air — or far too lean, the inverse. Essentially, if the dust-collection systems at the Imperial Sugar plant had been up to the task, and expanded to all relevant areas, the possibility of an ignition event would have likely been reduced to zero.
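To put rough numbers on that idea, dust explosibility is usually characterized by a minimum explosible concentration (MEC), below which a cloud is too lean to propagate a flame; very heavy loadings can likewise be too rich. The sketch below is purely illustrative and not a safety tool, and the limit values in it are hypothetical placeholders rather than data for any real dust:

```python
# Illustrative sketch only -- NOT a safety tool. The limit values below are
# hypothetical placeholders; real minimum explosible concentrations (MECs)
# are measured experimentally and vary with material and particle size.

def dust_hazard(dust_mass_g: float, room_volume_m3: float,
                mec_g_per_m3: float = 50.0,      # assumed lower (lean) limit
                uec_g_per_m3: float = 2000.0):   # assumed upper (rich) limit
    """Classify a suspended dust loading relative to an assumed explosible range."""
    concentration = dust_mass_g / room_volume_m3
    if concentration < mec_g_per_m3:
        verdict = "too lean to propagate a flame"
    elif concentration > uec_g_per_m3:
        verdict = "too rich to propagate a flame"
    else:
        verdict = "within the explosible range"
    return concentration, verdict

# Example: 800 g of flour dispersed as a cloud in a 10 m^3 room
conc, verdict = dust_hazard(800, 10)
print(f"{conc:.0f} g/m^3: {verdict}")
```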

Things Like To Burn

In the context of dust explosions, it’s somewhat discomforting to realize just how many things around us are rather excellent sources of fuel. The aforementioned sugar, for example, is a carbohydrate (Cm(H2O)n). This chemical group also includes cellulose, which is a major part of wood dust, explaining why reducing dust levels in a woodworking shop is about much more than just keeping one’s lungs happy. Nobody wants their backyard woodworking shop to turn into a mini-Imperial Sugar ground zero, after all.

Carbohydrates aren’t far off from hydrocarbons, which include our old friend petrol, as well as methane (CH4), butane (C4H10), etc., all of which are delightfully combustible. All that the carbohydrates have in addition to carbon and hydrogen atoms is a lot of oxygen atoms, which is an interesting addition in the context of them being potential fuel sources. It incidentally also illustrates how important carbon is for life on this planet, since it forms the literal backbone of life’s molecules.

Although one might conclude from this that only something which is a carbohydrate or hydrocarbon is highly flammable, there’s a whole other world out there of things that can burn. Case in point: metals.

Lit Metals

On December 9, 2010, workers were busy at the New Cumberland AL Solutions titanium plant in West Virginia, processing titanium powder. At this facility, scrap titanium and zirconium were milled and blended into a powder that got pressed into discs. Per the report, a malfunction inside one blender created a heat source that ignited the metal powder, killing three employees and injuring one contractor. As it turns out, no dust control methods were installed at the plant, allowing for uncontrolled dust build-up.

As pointed out in the USCSB report, both titanium and zirconium will readily ignite in particulate form, with zirconium capable of auto-igniting in air at room temperature. This is why the milling step at AL Solutions took place submerged in water. After ignition, titanium and zirconium require a Class D fire extinguisher, but it’s generally recommended to let large metal fires burn out by themselves. Using water on larger titanium fires can produce hydrogen, leading conceivably to even worse explosions.
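The reason water makes things worse is the high-temperature reaction between hot titanium and water, shown here in simplified form, which liberates flammable hydrogen gas:

Ti + 2H_{2}O \rightarrow TiO_{2} + 2H_{2}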

The phenomenon of metal fires is probably best known from thermite, a mixture of a metal powder and a metal oxide. Once ignited by an initial source of heat, the redox reaction becomes self-sustaining, providing its own fuel, oxygen, and heat. While iron(III) oxide and aluminium are generally used, many more metals and metal oxides can be combined, including copper oxide for a very rapid burn.

While thermite is intentionally kept as a powder, and often in some kind of container to create a molten phase that sustains itself, it shouldn’t be hard to imagine what happens if the metal is ground into a fine powder, distributed as a fine dust cloud in a confined room and exposed to an ignition source. At that point the differences between carbohydrates, hydrocarbons and metals become mostly academic to any survivors of the resulting inferno.

Preventing Dust Explosions

As should be quite obvious at this point, there’s no real way to fight a dust explosion, only ways to prevent it. Proper ventilation, preventing dust from building up, and having active dust extraction in place where possible are about the most minimal precautions one should take. Complacency of the kind seen at the Imperial Sugar plant merely invites disaster: if you can see dust built up on surfaces and dust hanging in the air, you’re already at least at DEFCON 2.

A demonstration of how easy it is to create a solid dust explosion came from the Mythbusters back in 2008 when they tested the ‘sawdust cannon’ myth. This involved blowing sawdust into a cloud and igniting it with a flare, creating a massive fireball. After nearly getting their facial hair singed off with this roaring success, they then tried the same with non-dairy coffee creamer, which created an even more massive fireball.

Fortunately the Mythbusters build team was supervised by adults on the bomb range for these experiments, because they show just how incredibly dangerous dust explosions can be, even out in the open on a secure bomb range, never mind in an enclosed space, as hundreds have found out over the decades in the US alone. One only has to look at the USCSB’s dust explosion statistics to learn to respect the dangers a bit more.

The Cost of a Cheap UPS is 10 Hours and a Replacement PCB

By: Maya Posch
29 May 2025 at 08:00

Recently [Florin] was in the market for a basic uninterruptible power supply (UPS) to provide some peace of mind for the smart home equipment he had stashed around. Unfortunately, the cheap Serioux LD600LI unit he picked up left a bit to be desired, and required a bit of retrofitting.

To be fair, the issues that [Florin] ended up dealing with were less about the UPS’ ability to handle power interruptions, and more about the USB interface on the UPS. Initially the UPS seemed to communicate happily with HomeAssistant (HA) via Network UPS Tools over a generic USB protocol, after figuring out which device profile matched this re-branded generic UPS. That’s when HA began to constantly lose the connection with the UPS, putting its integration in the smart home setup at risk.

The old and new USB-serial boards side by side. (Credit: VoltLog, YouTube)

After tearing down the UPS to see what was going on, [Florin] found that it used a fairly generic USB-serial adapter built around a low-speed USB controller from the common Cypress CY7C63310 family. Apparently the firmware on this controller was simply not up to the task or poorly implemented, so a replacement was needed.

The process and implementation are covered in detail in the video. It’s quite straightforward: the 9600 baud serial link from the UPS’ main board is picked up by a Silabs CP2102N USB-to-UART controller, which creates a virtual serial port on the USB side. These conversion boards have to be fully isolated, of course, which is where the HopeRF CMT8120 dual-channel digital isolator comes into play.
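Cheap UPSes of this class commonly speak a simple ASCII status protocol (often a Megatec/Q1 variant) over such a serial link. The snippet below is a hypothetical sketch of polling that link through the CP2102N’s virtual serial port, not [Florin]’s actual code; the “Q1” query, the reply layout, and the /dev/ttyUSB0 device path are all assumptions to check against your own hardware:

```python
# Hypothetical sketch: poll a UPS status string over the CP2102N virtual serial port.
# The "Q1" query and reply layout follow the common Megatec-style protocol, which is
# an assumption here -- verify what your UPS actually speaks before relying on this.
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2) as port:
    port.write(b"Q1\r")  # status query, terminated by a carriage return
    reply = port.read_until(b"\r").decode(errors="replace").strip()
    # A Megatec-style reply looks like:
    # (226.0 226.0 226.0 010 50.0 27.4 35.0 00001001
    if reply.startswith("("):
        fields = reply[1:].split()
        print("Input voltage  :", fields[0], "V")
        print("Battery voltage:", fields[5], "V")
        print("Status bits    :", fields[7])
    else:
        print("Unexpected reply:", repr(reply))
```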

After assembly it almost fully worked, except that a Sonoff Zigbee controller in the smart home setup used the same Silabs controller, and thus the same USB VID/PID combination. Fortunately, Silabs application note AN721 describes how to use an alternate PID (0xEA63), which fixed this issue, at least until the next device with a CP2102N is installed.

As it turns out, the cost of a $40 UPS is actually 10 hours of work and $61 in parts, although one cannot put a value on all the lessons learned here.

Remotely Interesting: Stream Gages

28 May 2025 at 14:00

Near my childhood home was a small river. It wasn’t much more than a creek at the best of times, and in dry summers it would sometimes almost dry up completely. But snowmelt revived it each Spring, and the remains of tropical storms in late Summer and early Fall often transformed it into a raging torrent if only briefly before the flood waters receded and the river returned to its lazy ways.

Other than to those of us who used it as a playground, the river seemed of little consequence. But it did matter enough that a mile or so downstream was some sort of instrumentation, obviously meant to monitor the river. It was — and still is — visible from the road, a tall corrugated pipe standing next to the river, topped with a box bearing the logo of the US Geological Survey. On occasion, someone would visit and open the box to do mysterious things, which suggested the river was interesting beyond our fishing and adventuring needs.

Although I learned quite early that this device was a streamgage, and that it was part of a large network of monitoring instruments the USGS used to monitor the nation’s waterways, it wasn’t until quite recently — OK, this week — that I learned how streamgages work, or how extensive the network is. A lot of effort goes into installing and maintaining this far-flung network, and it’s worth looking at how these instruments work and their impact on everyday life.

Inventing Hydrography

First, to address the elephant in the room, “gage” is a rarely used but accepted alternative spelling of “gauge.” In general, gage tends to be used in technical contexts, which certainly seems to be the case here, as opposed to a non-technical context such as “A gauge of public opinion.” Moreover, the USGS itself uses that spelling, for interesting historical reasons that they’ve apparently had to address often enough that they wrote an FAQ on the subject. So I’ll stick with the USGS terminology in this article, even if I really don’t like it that much.

With that out of the way, the USGS has a long history of monitoring the nation’s rivers. The first streamgaging station was established in 1889 along the Rio Grande River at a railroad station in Embudo, New Mexico. Measurements were entirely manual in those days, performed by crews trained on-site in the nascent field of hydrography. Many of the tools and methods that would be used through the rest of the 19th century to measure the flow of rivers throughout the West and later the rest of the nation were invented at Embudo.

Then as now, river monitoring boils down to one critical measurement: discharge rate, or the volume of water passing a certain point in a fixed amount of time. In the US, discharge rate is measured in cubic feet per second, or cfs. The range over which discharge rate is measured can be huge, from streams that trickle a few dozen cubic feet of water every second to the over one million cfs discharge routinely measured at the mouth of the mighty Mississippi each Spring.

Measurements over such a wide dynamic range would seem to be an engineering challenge, but hydrographers have simplified the problem by cheating a little. While volumetric flow in a closed container like a pipe is relatively easy — flowmeters using paddlewheels or turbines are commonly used for such a task — direct measurement of flow rates in natural watercourses is much harder, especially in navigable rivers where such measuring instruments would pose a hazard to navigation. Instead, the USGS calculates the discharge rate indirectly from stream height, referred to as the stage.

Beside Still Waters

Schematic of a USGS stilling well. The water level in the well tracks the height of the stream, with a bit of lag. The height of the water column in the well is easier to read than the surface of the river. Source: USGS, public domain.

The height of a river at any given point is much easier to measure, with the bonus that the tools used for this task lend themselves to continuous measurements. Stream height is the primary data point of each streamgage in the USGS network, which uses several different techniques based on the specific requirements of each site.

A float-tape gage, with a counterweighted float attached to an encoder by a stainless steel tape. The encoder sends the height of the water column in the stilling well to the data logger. Source: USGS, public domain.

The most common is based on a stilling well. Stilling wells are vertical shafts dug into the bank adjacent to a river. The well is generally large enough for a technician to enter, and is typically lined with either concrete or steel conduit, such as the streamgage described earlier. The bottom of the shaft, which is also lined with an impervious material such as concrete, lies below the bottom of the river bed, while the height of the well is determined by the highest expected flood stage for the river. The lumen of the well is connected to the river via a pair of pipes, which terminate in the water above the surface of the riverbed. Water fills the well via these input pipes, with the level inside the well matching the level of the water in the river.

As the name implies, the stilling well performs the important job of damping any turbulence in the river, allowing for a stable column of water whose height can be easily measured. Most stilling wells measure the height of the water column with a float connected to a shaft encoder by a counterweighted stainless steel tape. Other stilling wells are measured using ultrasonic transducers, radar, or even lidar scanners located in the instrument shelter on the top of the well, which translate time-of-flight to the height of the water column.

While stilling well gages are cheap and effective, they are not without their problems. Chief among these is dealing with silt and debris. Even though intakes are placed above the bottom of the river, silt enters the stilling well and settles into the sump. This necessitates frequent maintenance, usually by flushing the sump and the intake lines using water from a flushing tank located within the stilling well. In rivers with a particularly high silt load, there may be a silt trap between the intakes and the stilling well. Essentially a concrete box with a series of vertical baffles, the silt trap allows silt to settle out of the river water before it enters the stilling well, and must be cleaned out periodically.

Bubbles, Bubbles

Bubble gages often live on pilings or other structures within the watercourse.

Making up for some of the deficiencies of the stilling well is the bubble gage, which measures river stage using gas pressure. A bubble gage typically consists of a small air pump or gas cylinders inside the instrument shelter, plumbed to a pipe that comes out below the surface of the river. As with stilling wells, the tube is fixed at a known point relative to a datum, which is the reference height for that station. The end of the pipe in the water has an orifice of known size, while the supply side has regulators and valves to control the flow of gas. River stage can be measured by sensing the gas pressure in the system, which will increase as the water column above the orifice gets higher.
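The underlying physics is simple hydrostatics: the gauge pressure at the orifice equals ρgh, so the height of water above the orifice falls directly out of the measured pressure. A minimal sketch of that conversion, with a made-up orifice offset, looks like this:

```python
# Convert bubble-gage line pressure to river stage via hydrostatics: P = rho * g * h.
RHO_WATER = 1000.0   # kg/m^3, fresh water
G = 9.81             # m/s^2

def stage_from_pressure(gauge_pressure_pa: float, orifice_height_m: float) -> float:
    """River stage relative to the station datum, given gauge pressure at the orifice."""
    depth_above_orifice = gauge_pressure_pa / (RHO_WATER * G)
    return orifice_height_m + depth_above_orifice

# Example: 14.7 kPa of back-pressure with the orifice fixed 0.30 m above the datum
print(f"Stage: {stage_from_pressure(14_700, 0.30):.2f} m")  # about 1.80 m
```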

Bubble gages have a distinct advantage over stilling wells in rivers with a high silt load, since the positive pressure through the orifice tends to keep silt out of the works. However, bubble gages tend to need a steady supply of electricity to power their air pump continuously, or for gages using bottled gas, frequent site visits for replenishment. Also, the pipe run to the orifice needs to be kept fairly short, meaning that bubble gage instrument shelters are often located on pilings within the river course or on bridge abutments, which can make maintenance tricky and pose a hazard to navigation.

While bubble gages and stilling wells are the two main types of gaging stations for fixed installations, the USGS also maintains a selection of temporary gaging instruments for tactical use, often for response to natural disasters. These Rapid Deployment Gages (RDGs) are compact units designed to affix to the rail of a bridge or some other structure across the river. Most RDGs use radar to sense the water level, but some use sonar.

Go With the Flow

No matter what method is used to determine the stage of a river, calculating the discharge rate is the next step. To do that, hydrographers have to head to the field and make flow measurements. By measuring the flow rates at intervals across the river, preferably as close as possible to the gaging station, the total flow through the channel at that point can be estimated, and a calibration curve relating flow rate to stage can be developed. The discharge rate can then be estimated from just the stage reading.
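In practice the stage-discharge relationship is often approximated by a power-law rating curve of the form Q = a(h - h0)^b, fitted to those field measurements. The toy sketch below fits and applies such a curve; the sample data and the gage height of zero flow are invented purely for illustration:

```python
# Toy example: fit a power-law rating curve Q = a * (h - h0)^b to field measurements,
# then estimate discharge from a stage reading alone. All values are invented.
import numpy as np

h0 = 0.5  # assumed gage height of zero flow, in feet
stage = np.array([1.2, 2.0, 3.1, 4.5, 6.0])        # ft, from the gage
discharge = np.array([35, 180, 560, 1500, 3200])   # cfs, from field flow measurements

# Straight-line fit in log space: log Q = log a + b * log(h - h0)
b, log_a = np.polyfit(np.log(stage - h0), np.log(discharge), 1)
a = np.exp(log_a)

def rated_discharge(h_ft: float) -> float:
    """Estimate discharge (cfs) from stage (ft) using the fitted rating curve."""
    return a * (h_ft - h0) ** b

print(f"Q = {a:.1f} * (h - {h0})^{b:.2f}")
print(f"Estimated discharge at a 5.0 ft stage: {rated_discharge(5.0):.0f} cfs")
```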

Flow readings are taken using a variety of tools, depending on the size of the river and the speed of the current. Current meters with bucket wheels can be lowered into a river on a pole; the flow rotates the bucket wheel and closes electrical contacts that can be counted on an electromagnetic totalizer. More recently, Acoustic Doppler Current Profilers (ADCPs) have come into use. These use ultrasound to measure the velocity of particulates in the water by their Doppler shift.

Crews can survey the entire width of a small stream by wading, from boats, or by making measurements from a convenient bridge. In some remote locations where the river is especially swift, the USGS may erect a cableway across the river, so that measurements can be taken at intervals from a cable car.

Nice work if you can get it. USGS crew making flow measurements from a cableway over the American River in California using an Acoustic Doppler Current Profiler. Source: USGS, public domain.

From Paper to Satellites

In the earliest days of streamgaging, recording data was strictly a pen-on-paper process. Station log books were updated by hydrographers for every observation, with results transmitted by mail or telegraph. Later, stations were equipped with paper chart recorders using a long-duration clockwork mechanism. The pen on the chart recorder was mechanically linked to the float in a stilling well, deflecting it as the river stage changed and leaving a record on the chart. Electrical chart recorders came next, with the position of the pen changing based on the voltage through a potentiometer linked to the float.

Chart recorders, while reliable, have the twin disadvantages of needing a site visit to retrieve the data and requiring a tedious manual transcription of the chart data to tabular form. To solve the latter problem, analog-digital recorders (ADRs) were introduced in the 1960s. These recorded stage data on paper tape as four binary-coded decimal (BCD) digits. The time of each stage reading was inferred from its position on the tape, given a known starting time and reading interval. Tapes still had to be retrieved from each station, but at least reading the data back at the office could be automated with a paper tape reader.

In the 1980s and 1990s, gaging stations were upgraded to electronic data loggers, with small solar panels and batteries where grid power wasn’t available. Data was stored locally in the logger between maintenance visits by a hydrographer, who would download the data. Alternately, gaging stations located close to public rights of way sometimes had leased telephone lines for transmitting data at intervals via modem. Later, gaging stations started sprouting cross-polarized Yagi antennas, aimed at one of the Geostationary Operational Environmental Satellites (GOES). Initially, gaging stations used one of the GOES low data rate telemetry channels with a 100 to 300 bps connection. This gave hydrologists near-real-time access to gaging data for the first time. Since 2013, all stations have been upgraded to a high data rate channel that allows up to 1,200 bps telemetry.

Currently, gage data is normally collected every 15 minutes, although this can be shortened to every 5 minutes at times of peak flow. Data is buffered locally before a GOES uplink, which happens about every hour or so, or as often as every 15 minutes during peak flow or emergencies. The uplink frequencies and intervals are very well documented on the USGS site, so you can easily pick them up with an SDR and see if the creek is rising from the comfort of your own shack.

Reverse Engineering LEGO Island

24 May 2025 at 23:00

While LEGO-themed video games have become something of a staple, in 1997 they were something of an oddity. LEGO Island became the first LEGO video game released outside of Japan in 1997 and became something of a hit, with over one million copies sold. The game was beloved among fans and set the stage for more LEGO video games to come. In a labor of love, [MattKC] put together a team to reverse engineer the game.

The team set out with the intent to create a near-perfect recreation of the codebase, relying on custom-made tools to run byte checks between the rewrite’s compiled output and the original binary. While the project is functionally complete, [MattKC] believes it is impossible to get a byte-accurate codebase. This is because of what the team called “compiler entropy.” Strange behaviors exist inside of Microsoft’s Visual C++ compiler of the era, and small changes in the code have seemingly random effects on unrelated parts of the binary. Mitigating this issue would likely require either partially reverse engineering Visual C++ or brute forcing the code, both of which would take a large amount of effort and time for no real benefit.
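The byte-check idea itself is simple to picture. Here is a minimal sketch (not the team’s actual tooling) that reports how closely a recompiled binary matches the original; the file names are hypothetical:

```python
# Minimal sketch of a byte-accuracy check between an original binary and a recompiled
# one. This is an illustration, not the LEGO Island team's actual comparison tool.
from itertools import zip_longest

def byte_accuracy(original_path: str, rebuilt_path: str) -> float:
    """Return the percentage of byte positions that match between the two files."""
    with open(original_path, "rb") as f:
        original = f.read()
    with open(rebuilt_path, "rb") as f:
        rebuilt = f.read()
    total = max(len(original), len(rebuilt))
    matches = sum(a == b for a, b in zip_longest(original, rebuilt))
    return 100.0 * matches / total

# Hypothetical file names
print(f"Match: {byte_accuracy('LEGO1.DLL', 'LEGO1_rebuilt.DLL'):.2f}%")
```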

Another interesting step the team had to work out was how the game handled graphics. In the version of DirectX used, developers could choose between immediate mode and retained mode. The difference largely boils down to how models and assets are handled. In immediate mode, DirectX is largely just a render engine and everything else is handled by the developer. With retained mode, DirectX works more like a game engine, where all the model and asset management is handled by DirectX. Almost all developers ended up using immediate mode, to the point that Microsoft deprecated support for retained mode. For this reason, if you were to download and run LEGO Island on a modern Windows PC, it would yell at you for not having the proper libraries. There is debate about how best to handle this moving forward: the team could rely on an unsupported library from Microsoft, reverse engineer that library while implementing only the functions needed, or use leaked source code.

With the reverse engineering complete, re-engineering can commence. For example, an annoying and persistent bug caused the game to crash if you tried to exit. While it was effective at closing the game, it also caused progress to be lost. That particular bug was fixed simply by initializing a variable in the game’s frontend. Interestingly, that bug was not present in the late betas of the game that had been dug up from the depths of the internet, leading to questions as to why a rewrite of the frontend was necessary so late in development. Now efforts are commencing to port the game to other platforms, which brings with it fresh headaches, including rewriting for OpenGL and balancing a historically accurate game with the needs of modern development.


EMF Forming Was A Neat Aerospace Breakthrough

By: Lewin Day
24 May 2025 at 02:00

Typically, when we think about forming metal parts, we think about beating them with hammers, or squeezing them with big hydraulic presses. But what if magnets could do the squeezing? As it turns out—Grumman Aerospace discovered they can, several decades ago! Even better, they summed up this technique in a great educational video which we’ve placed below the break.

The video concerns the development of the Grumman EMF Torque Tube. The parts are essentially tubes with gear-like fittings mounted in either end, which are fixed with electromagnetic forming techniques instead of riveting or crimping. Right away, we’re told the key benefits—torque tubes built this way are “stronger, lighter, and more fatigue resistant” than those built with conventional techniques. Grumman used these torque tubes in such famous aircraft as the F-14 Tomcat, highlighting their performance and reliability.

Before…
…and after. The part is formed and the coil is destroyed.

The video goes on to explain the basics of the EMF torque tube production process. A tube is placed inside a coil, with the end fitting then installed inside the tube. A capacitor bank dumps current through the coil to generate a strong electromagnetic field, which is opposed by a secondary field generated by eddy currents in the workpiece. The interaction of the two fields produces an explosive force that drives the tube inwards, gripping it into the grooves of the end fitting and destroying the coil in the process. Grumman notes that it specifically optimized a grooving profile for bonding tubes with end fittings, which maximised the strength of these EMF-produced joints.
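To get a feel for the forces at play, the pressure a pulsed magnetic field exerts on a conductor scales as B²/2μ0, so fields of a few tens of tesla translate into hundreds of megapascals. The numbers in the quick estimate below are illustrative assumptions, not Grumman’s figures:

```python
# Back-of-the-envelope magnetic pressure estimate for electromagnetic forming.
# The 30 T peak field is an assumed, illustrative value, not Grumman's figure.
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def magnetic_pressure_pa(b_tesla: float) -> float:
    """Magnetic pressure P = B^2 / (2 * mu_0), in pascals."""
    return b_tesla ** 2 / (2 * MU_0)

b_peak = 30.0  # tesla, illustrative pulsed field strength
p = magnetic_pressure_pa(b_peak)
print(f"{b_peak:.0f} T -> {p / 1e6:.0f} MPa (about {p / 1e5:.0f} bar)")
```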

This tip was sent in by [irox]. The video itself was posted by [Greg Benoit], who notes his father Robert Benoit was intimately involved with the development of the technique. Indeed, it was useful enough that the technology was licensed to Boeing, generating many millions of dollars for Grumman.

We feature all kinds of machining and forming techniques here, but this sort of forming isn’t something we see a lot of around these parts. Still, we’re sure someone will be Kickstarting a home EMF forming machine before the end of next week.

A Brief History of Fuel Cells

22 May 2025 at 14:32

If we asked you to think of a device that converts a chemical reaction into electricity, you’d probably say we were thinking of a battery. That’s true, but there is another device that does this that is both very similar and very different from a battery: the fuel cell.

In a very simple way, you can think of a fuel cell as a battery that consumes the chemicals it uses and allows you to replace those chemicals so that, as long as you have fuel, you can have electricity. However, the truth is a little more complicated than that. Batteries are energy storage devices. They run out when the energy stored in the chemicals runs out. In fact, many batteries can take electricity and reverse the chemical reaction, in effect recharging them. Fuel cells react chemicals to produce electricity. No fuel, no electricity.

Superficially, the two devices seem very similar. Like batteries, fuel cells have an anode and a cathode. They also have an electrolyte, but its purpose isn’t the same as in a conventional battery. Typically, a catalyst causes fuel to oxidize, creating positively charged ions and electrons. These ions move from the anode to the cathode, while the electrons travel from the anode, through an external circuit, to the cathode, producing an electric current. Many fuel cells also produce potentially useful byproducts like water. NASA has the animation below that shows how one type of cell works.

History

Sir William Grove seems to have made the first fuel cell in 1838, publishing in The London and Edinburgh Philosophical Magazine and Journal of Science. His fuel cell used dilute acid, copper sulphate, along with sheet metal and porcelain. Today, the phosphoric acid fuel cell is similar to Grove’s design.

The Bacon fuel cell is due to Francis Thomas Bacon and uses alkaline fuel. Modern versions of this are in use today by NASA and others. Although Bacon’s fuel cell could produce 5 kW, it was General Electric in 1955 that started creating larger units. GE chemists developed an ion exchange membrane that included a platinum catalyst. Named after the developers, the “Grubb-Niedrach” fuel cell flew in Gemini space capsules. By 1959, a fuel cell tractor prototype was running, as well as a welding machine powered by a Bacon cell.

One of the reasons spacecraft often use fuel cells is that many cells take hydrogen and oxygen as fuel and put out electricity and water. There are already gas tanks available, and you can always use water.

Types of Fuel Cells

Not all fuel cells use the same fuel or produce the same byproducts. At the anode, a catalyst ionizes the fuel, which produces a positive ion and a free electron. The electrolyte, often a membrane, can pass ions, but not the electrons. That way, the ions move towards the cathode, but the electrons have to find another way — through the load — to get to the cathode. When they meet again, a reaction with more fuel and a catalyst produces the byproduct: hydrogen and oxygen form water.
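For the common hydrogen-oxygen cell with an acidic, proton-conducting electrolyte, the anode half-reaction is:

H_{2} \rightarrow 2H^{+} + 2e^{-}

while at the cathode the protons, electrons, and oxygen recombine:

O_{2} + 4H^{+} + 4e^{-} \rightarrow 2H_{2}O

giving the overall reaction:

2H_{2} + O_{2} \rightarrow 2H_{2}O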

Most common cells use hydrogen and oxygen with an anode catalyst of platinum and a cathode catalyst of nickel. The voltage output per cell is often less than a volt. However, some fuel cells use hydrocarbons. Diesel, methanol, and other hydrocarbons can produce electricity and carbon dioxide as a byproduct, along with water. You can even use some unusual organic inputs, although to be fair, those are microbial fuel cells.

Common types include:

  • Alkaline – The Bacon cell was a fixture in space capsules, using carbon electrodes, a catalyst, and a hydroxide electrolyte.
  • Solid acid – These use a solid acid material as electrolyte. The material is heated to increase conductivity.
  • Phosphoric acid – Another acid-based technology that operates at hotter temperatures.
  • Molten carbonate – These work at high temperatures using lithium potassium carbonate as an electrolyte.
  • Solid oxide – Another high-temperature type, which uses zirconia ceramic as the electrolyte.

In addition to technology, you can consider some fuel cells as stationary — typically producing a lot of power for consumption by some power grid — or mobile.

Using fuel cells in stationary applications is attractive partly because they have no moving parts. However, you need a way to fuel it and — if you want efficiency — you need a way to harness the waste heat produced. It is possible, for example, to use solar power to turn water into gas and then use that gas to feed a fuel cell. It is possible to use the heat directly or to convert it to electricity in a more conventional way.

Space

Fuel cells have a long history in space. You can see how alkaline Bacon cells were used in early fuel cells in the video below.

Apollo (left) and Shuttle (right) fuel cells (from a NASA briefing)

Very early fuel cells — starting with Gemini in 1962 — used a proton exchange membrane. However, in 1967, NASA started using Nafion from DuPont, which was improved over the old membranes.

However, alkaline cells had vastly improved power density, and from Apollo on, these cells, using a potassium hydroxide electrolyte, were standard issue.

Even the Shuttle had fuel cells. Russian spacecraft also had fuel cells, starting with a liquid oxygen-hydrogen cell used on the Soviet Lunar Orbital Spacecraft (LOK).

The shuttle’s power plant measured 14 x 15 x 45 inches and weighed 260 pounds. They were installed under the payload bay, just aft of the crew compartment. They drew cryogenic gases from nearby tanks and could provide 12 kW continuously, and up to 16 kW. However, they typically were taxed at about 50% capacity. Each orbiter’s power plant contained 96 individual cells connected to achieve a 28-volt output.

Going Mobile

There have been attempts to make fuel cell cars, but with the difficulty of delivering, storing, and transporting hydrogen, there has been resistance. The Toyota Mirai, for example, costs $57,000, yet owners sued because they couldn’t obtain hydrogen. Some buses use fuel cells, and a small number of trains (including the one mentioned in the video below).

Surprisingly, there is a market for forklifts using fuel cells. The clean output makes them ideal for indoor operation. Batteries? They take longer to charge and don’t work well in the cold. Fuel cells don’t mind the cold, and you can top them off in three minutes.

There have been attempts to put fuel cells into any vehicle you can imagine. Airplanes, motorcycles, and boats sporting fuel cells have all made the rounds.

Can You DIY?

We have seen a few fuel cell projects, but they all seem to vanish over time. In theory, it shouldn’t be that hard, unless you demand commercial efficiency. However, it can be done, as you can see in the video below. If you make a fuel cell, be sure to send us a tip so we can spread the word.

Featured image: “SEM micrograph of an MEA cross section” by [Xi Yin]

Fault Analysis of a 120W Anker GaNPrime Charger

By: Maya Posch
21 May 2025 at 08:00

Taking a break from his usual prodding at suspicious AliExpress USB chargers, [DiodeGoneWild] recently had a gander at what used to be a good USB charger.

The Anker 737 USB charger prior to its autopsy. (Credit: DiodeGoneWild, YouTube)

Before it went completely dead, the Anker 737 GaNPrime USB charger which a viewer sent him was capable of up to 120 Watts combined across its two USB-C and one USB-A outputs. Naturally the charger’s enclosure couldn’t be opened non-destructively, and it turned out to have (soft) potting compound filling up the voids, making it a treat to diagnose. Suffice it to say that these devices are not designed to be repaired.

Since this was an autopsy, the unit was broken down into its individual PCBs, and a short was detected that was eventually traced to an IC marked ‘SW3536’, one of the ICs that communicate with the connected USB device to negotiate the voltage. That one shorted IC appears to have rendered the entire charger an expensive paperweight.

Since the charger was already in pieces, the rest of the circuit and its ICs were also analyzed. Here the gallium nitride (GaN) part was found in the Navitas GaNFast NV6136A FET with integrated gate driver, along with an Infineon CoolGaN IGI60F1414A1L integrated power stage. Unfortunately all of the cool technology was rendered useless by one component developing a short, even if it made for a fascinating look inside one of these very chonky USB chargers.

Hackaday Links: May 18, 2025

18 May 2025 at 23:00

Say what you want about the wisdom of keeping a 50-year-old space mission going, but the dozen or so people still tasked with keeping the Voyager mission running are some major studs. That’s our conclusion anyway, after reading about the latest heroics that revived a set of thrusters on Voyager 1 that had been offline for over twenty years. The engineering aspects of this feat are interesting enough, but we’re more interested in the social engineering aspects of this exploit, which The Register goes into a bit. First of all, even though both Voyagers are long past their best-by dates, they are our only interstellar assets, and likely will be for centuries to come, or perhaps forever. Sure, the rigors of space travel and the ravages of time have slowly chipped away at what these machines can do, but while they’re still operating, they’re irreplaceable assets.

That makes the fix to the thruster problem all the more ballsy, since the Voyager team couldn’t be 100% sure about the status of the primary thrusters, which were shut down back in 2004. They suspected the fuel line heaters were still good, but if they had actually gone bad, trying to switch the primary thrusters back on with frozen fuel lines could have resulted in an explosion when Voyager tried to fire them, likely ending in the loss of the spacecraft. So the decision to try this had to be a difficult one, to say the least. Add in an impending shutdown of the only DSN antenna capable of communicating with the spacecraft and a two-day communications round trip, and the pressure must have been unbearable. But they did it, and Voyager successfully navigated yet another crisis. What we’re especially excited about is discovering a 2023 documentary about the current Voyager mission team called “It’s Quieter in the Twilight.” We know what we’ll be watching this weekend.

Speaking of space exploration, one thing you don’t want to do is send anything off into space bearing Earth microbes. That would be a Very Bad Thing™, especially for missions designed to look for life anywhere else but here. But, it turns out that just building spacecraft in cleanrooms might not be enough, with the discovery of 26 novel species of bacteria growing in the cleanroom used to assemble a Mars lander. The mission in question was Phoenix, which landed on Mars in 2008 to learn more about the planet’s water. In 2007, while the lander was in the Payload Hazardous Servicing Facility at Kennedy Space Center, biosurveillance teams collected samples from the cleanroom floor. Apparently, it wasn’t very clean, with 215 bacterial strains isolated, 26 of which were novel. What’s more, genomic analysis of the new bugs suggests they have genes that make them especially tough, both in their resistance to decontamination efforts on Earth and in their ability to survive the rigors of life in space. We’re not really sure if these results say more about NASA’s cleanliness than they do about the selective pressure that an extreme environment like a cleanroom exerts on fast-growing organisms like bacteria. Either way, it doesn’t bode well for our planetary protection measures.

Closer to home but more terrifying is video from an earthquake in Myanmar that has to be seen to be believed. And even then, what’s happening in the video is hard to wrap your head around. It’s not your typical stuff-falling-off-the-shelf video; rather, the footage is from an outdoor security camera that shows the ground outside of a gate literally ripping apart during the 7.7 magnitude quake in March. The ground just past the fence settles a bit while moving away from the camera a little, but the real action is the linear motion — easily three meters in about two seconds. The motion leaves the gate and landscaping quivering but largely intact; sadly, the same can’t be said for a power pylon in the distance, which crumples as if it were made from toothpicks.

And finally, “Can it run DOOM?” has become a bit of a meme in our community, a benchmark against which hacking chops can be measured. If it has a microprocessor in it, chances are someone has tried to make it run the classic first-person shooter video game. We’ve covered dozens of these hacks before, everything from a diagnostic ultrasound machine to a custom keyboard keycap, while recent examples tend away from hardware ports to software platforms such as a PDF file, Microsoft Word, and even SQL. Honestly, we’ve lost count of the ways to DOOM, which is where Can It Run Doom? comes in handy. It lists all the unique platforms that hackers have tortured into playing the game, as well as links to source code and any relevant video proof of the exploit. Check it out the next time you get the urge to port DOOM to something cool; you wouldn’t want to go through all the work to find out it’s already been done, would you?

LACED: Peeling Back PCB Layers With Chemical Etching and a Laser

By: Maya Posch
15 May 2025 at 20:00
Exposed inner copper on multilayer PCB. (Credit: mikeselectricstuff, YouTube)

Once a printed circuit board (PCB) has been assembled it’s rather hard to look inside of it, which can be problematic when you have e.g. a multilayer PCB of an (old) system that you really would like to dissect to take a look at the copper layers and other details that may be hidden inside, such as Easter eggs on inner layers. [Lorentio Brodeso]’s ‘LACED’ project offers one such method, using both chemical etching and a 5 Watt diode engraving laser to remove the soldermask, copper and FR4 fiberglass layers.

This project uses sodium hydroxide (NaOH) to dissolve the solder mask, followed by hydrochloric acid (HCl) and hydrogen peroxide (H2O2) to dissolve the copper in each layer. The engraving laser is used for removing the FR4 material. Despite the ‘LACED’ acronym standing for Laser-Controlled Etching and Delayering, the chemical and laser steps are performed independently of each other.

This makes it, in a way, a variation on the more traditional CNC-based method demonstrated by [mikeselectricstuff] back in 2016 (as shown in the top image), along with a detailed setup video showing how a multi-layer PCB was peeled back with enough resolution to make out each successive copper and fiberglass layer.

The term ‘laser-assisted etching’ is generally used for e.g. glass etching with HF or KOH in combination with a femtosecond laser to realize high-resolution optical features, ‘selective laser etching’ where the etchant is assisted by the laser-affected material, or the related laser-induced etching of hard & brittle materials. Beyond these there is a whole world of laser-induced or laser-activated etching or functionalized methods, all of which require that the chemical- and laser-based steps are used in unison.

Aside from this, the use of chemicals to etch away soldermask and copper does of course leave one with a similar messy clean-up as when etching new PCBs, but it can provide more control due to the selective etching, as a CNC’s carbide bit will just as happily chew through FR4 as copper. When reverse-engineering a PCB you will have to pick whatever method works best for you.

Top image: Exposed inner copper on multilayer PCB. (Credit: mikeselectricstuff, YouTube)

Inside Starlink’s User Terminal

15 May 2025 at 02:00

If you talk about Starlink, you are usually talking about the satellites that orbit the Earth carrying data to and from ground stations. Why not? Space is cool. But there’s another important part of the system: the terminals themselves. Thanks to [DarkNavy], you don’t have to tear one open yourself to see what’s inside.

The terminal consists of two parts: the router and the antenna. In this context, antenna is somewhat of a misnomer, since it is really the RF transceiver and antenna all together. The post looks only at the “antenna” part of the terminal.

The unit is 100% full of printed circuit board, with many RF chips and a custom STMicroelectronics quad-core Cortex-A53 CPU. There was a hack to gain a root shell on the device, which led to SpaceX disabling the UART via a firmware update. However, there is still a way to break in.

[DarkNavy] wanted to look at the code, too, but there was no easy way to dump the flash memory. Desoldering the eMMC chip and reading it was, however, productive. The next step was to create a virtual environment to run the software under Qemu.

There were a few security questions raised. We wouldn’t call them red flags, per se, but maybe pink flags. For example, there are 41 trusted ssh keys placed in the device’s authorized_keys file. That seems like a lot for a production device on your network, but it isn’t any smoking gun.

We’ve watched the cat-and-mouse between Starlink and people hacking the receivers with interest.

Reading the color of money

12 May 2025 at 02:00
Dollar bill validator

Ever wondered what happens when you insert a bill into a vending machine? [Janne] is back with his latest project: reverse engineering a banknote validator. Curious about how these common devices work, he searched for information but found few resources explaining their operation.

To learn more, [Janne] explored the security features that protect banknotes from counterfeiting. These can include microprinting, UV and IR inks, holograms, color-shifting coatings, watermarks, magnetic stripes, and specialty paper. These features not only deter fraud but also enable validators to quickly verify a bill’s authenticity.

[Janne] purchased several banknote validators to disassemble and compare. Despite varied exteriors, their core mechanisms were similar: systems to move the bill smoothly, a tape head to detect magnetic ink or security strips, and optical sensors to inspect visible, UV, and IR features. By reverse engineering the firmware of two devices, he uncovered their inner workings. Each device runs a calibration procedure to normalize its readings, then analyzes a bill through a sophisticated signal processing pipeline. If the data falls within a narrow acceptance range, the device authenticates the bill; otherwise, it rejects it.
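Conceptually, the accept/reject decision boils down to comparing normalized sensor readings against stored reference windows. The sketch below is only a conceptual illustration of that idea, not a reconstruction of the firmware [Janne] analyzed, and all numbers in it are invented:

```python
# Conceptual sketch of a banknote validator's accept/reject decision: normalize raw
# sensor readings against calibration baselines, then require every feature to fall
# inside a narrow acceptance window. All values are invented for illustration.

CALIBRATION = {"magnetic": 512.0, "uv": 300.0, "ir": 420.0}  # per-sensor baselines
ACCEPTANCE = {                                               # allowed ratio windows
    "magnetic": (0.95, 1.05),
    "uv": (0.90, 1.10),
    "ir": (0.93, 1.07),
}

def authenticate(raw_readings: dict) -> bool:
    """Accept the bill only if every normalized feature is inside its window."""
    for feature, (low, high) in ACCEPTANCE.items():
        ratio = raw_readings[feature] / CALIBRATION[feature]
        if not low <= ratio <= high:
            return False  # a single out-of-window feature rejects the bill
    return True

print(authenticate({"magnetic": 505.0, "uv": 310.0, "ir": 418.0}))  # True
print(authenticate({"magnetic": 505.0, "uv": 250.0, "ir": 418.0}))  # False: UV too low
```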

Head over to his site to check out all the details he discovered while exploring these devices, as well as the other cool projects he’s worked on in the past. Reverse engineering offers a unique window into technology; check out other projects we’ve featured that showcase this skill.

The Apple II MouseCard IRQ is Synced to Vertical Blanking After All

By: Maya Posch
10 May 2025 at 05:00
The Apple II MouseCard (Credit: AppleLogic.org)

Recently [Colin Leroy-Mira] found himself slipping into a bit of a rabbit hole while investigating why there was a lot of flickering when using the (emulated) Apple II MouseCard, but only under MAME emulation. The issue could not be reproduced on real (PAL or NTSC) hardware. The answer comes down to how the card synchronizes with the system’s vertical blanking (VBL) while drawing to the screen.

The Apple II MouseCard is one of the many peripheral cards produced for the system, originally bundled with a version of MacPaint for the Apple II. While not a super popular card at the time, it nevertheless got used by other software despite this Apple system still being based around a command line interface.

According to the card’s documentation the interrupt call (IRQ) can be set to 50 or 60 Hz to match the local standard. Confusingly, certain knowledgeable people told him that the card could not be synced to the VBL as it had no knowledge of this. As covered in the article and associated MAME issue ticket, it turns out that the card is very much synced with the VBL exactly as described in The Friendly Manual, with the card’s firmware being run by the system’s CPU, which informs the card of synchronization events.

Big Chemistry: Cement and Concrete

7 May 2025 at 14:00

Not too long ago, I was searching for ideas for the next installment of the “Big Chemistry” series when I found an article that discussed the world’s most-produced chemicals. It was an interesting article, right up my alley, and helpfully contained a top-ten list that I could use as a crib sheet for future articles, at least for the ones I hadn’t covered already, like the Haber-Bosch process for ammonia.

Number one on the list surprised me, though: sulfuric acid. The article stated that it was far and away the most produced chemical in the world, with 36 million tons produced every year in the United States alone, out of something like 265 million tons a year globally. It’s used in a vast number of industrial processes, and pretty much everywhere you need something cleaned or dissolved or oxidized, you’ll find sulfuric acid.

Staggering numbers, to be sure, but is it really the most produced chemical on Earth? I’d argue not by a long shot, when there’s a chemical that we make 4.4 billion tons of every year: Portland cement. It might not seem like a chemical in the traditional sense of the word, but once you get a look at what it takes to make the stuff, how finely tuned it can be for specific uses, and how when mixed with sand, gravel, and water it becomes the stuff that holds our world together, you might agree that cement and concrete fit the bill of “Big Chemistry.”

Rock Glue

To kick things off, it might be helpful to define some basic terms. Despite the tendency to use them as synonyms among laypeople, “cement” and “concrete” are entirely different things. Concrete is the finished building material of which cement is only one part, albeit a critical part. Cement is, for lack of a better term, the glue that binds gravel and sand together into a coherent mass, allowing it to be used as a building material.

What did the Romans ever do for us? The concrete dome of the Pantheon is still standing after 2,000 years. Source: Image by Sean O’Neill from Flickr via Monolithic Dome Institute (CC BY-ND 2.0)

It’s not entirely clear who first discovered that calcium oxide, or lime, mixed with certain silicate materials would form a binder strong enough to stick rocks together, but it certainly goes back into antiquity. The Romans get an outsized but well-deserved portion of the credit thanks to their use of pozzolana, a silicate-rich volcanic ash, to make the concrete that held the aqueducts together and built such amazing structures as the dome of the Pantheon. But the use of cement in one form or another can be traced back at least to ancient Egypt, and probably beyond.

Although there are many kinds of cement, we’ll limit our discussion to Portland cement, mainly because it’s what is almost exclusively manufactured today. (The “Portland” name was a bit of branding by its inventor, Joseph Aspdin, who thought the cured product resembled the famous limestone from the Isle of Portland off the coast of Dorset in the English Channel.)

Portland cement manufacturing begins with harvesting its primary raw material, limestone. Limestone is a sedimentary rock rich in carbonates, especially calcium carbonate (CaCO3), which tends to be found in areas once covered by warm, shallow inland seas. Along with the fact that limestone forms between 20% and 25% of all sedimentary rocks on Earth, that makes limestone deposits pretty easy to find and exploit.

Cement production begins with quarrying and crushing vast amounts of limestone. Cement plants are usually built alongside the quarries that produce the limestone or even right within them, to reduce transportation costs. Crushed limestone can be moved around the plant on conveyor belts or using powerful fans to blow the crushed rock through large pipes. Smaller plants might simply move raw materials around using haul trucks and front-end loaders. Along with the other primary ingredient, clay, limestone is stored in large silos located close to the star of the show: the rotary kiln.

Turning and Burning

A rotary kiln is an enormous tube, up to seven meters in diameter and perhaps 80 m long, set on a slight angle from the horizontal by a series of supports along its length. The supports have bearings built into them that allow the whole assembly to turn slowly, hence the name. The kiln is lined with refractory materials to resist the flames of a burner set in the lower end of the tube. Exhaust gases exit the kiln from the upper end through a riser pipe, which directs the hot gas through a series of preheaters that slowly raise the temperature of the entering raw materials, known as rawmix.

The rotary kiln is the centerpiece of Portland cement production. While hard to see in this photo, the body of the kiln tilts slightly down toward the structure on the left, where the burner enters and finished clinker exits. Source: by nordroden, via Adobe Stock (licensed).

Preheating the rawmix drives off any remaining water before it enters the kiln, and begins the decomposition of limestone into lime, or calcium oxide:

CaCO_{3} \rightarrow CaO + CO_{2}

The rotation of the kiln along with its slight slope results in a slow migration of rawmix down the length of the kiln and into increasingly hotter regions. Different reactions occur as the temperature increases. At the top of the kiln, the 500 °C heat decomposes the clay into silicate and aluminum oxide. Further down, as the heat reaches the 800 °C range, calcium oxide reacts with silicate to form the calcium silicate mineral known as belite:

2CaO + SiO_{2} \rightarrow 2CaO\cdot SiO_{2}

Finally, near the bottom of the kiln, belite and calcium oxide react to form another calcium silicate, alite:

2CaO\cdot SiO_{2} + CaO \rightarrow 3CaO\cdot SiO_{2}

It’s worth noting that cement chemists have a specialized nomenclature for alite, belite, and all the other intermediary phases of Portland cement production. It’s a shorthand that looks similar to standard chemical nomenclature, and while we’re sure it makes things easier for them, it’s somewhat infuriating to outsiders. We’ll stick to standard notation here to make things simpler. It’s also important to note that the aluminates that decomposed from the clay are still present in the rawmix. Even though they’re not shown in these reactions, they’re still critical to the proper curing of the cement.

Portland cement clinker. Each ball is just a couple of centimeters in diameter. Source: مرتضا, Public domain

The final section of the kiln is the hottest, at 1,500 °C. The extreme heat causes the material to sinter, a physical change that partially melts the particles and adheres them together into small, gray lumps called clinker. When the clinker pellets drop from the bottom of the kiln, they are still incandescently hot, so blasts of air rapidly bring the clinker down to around 100 °C. The exhaust from the clinker cooler joins the kiln exhaust and helps preheat the incoming rawmix charge, while the cooled clinker is mixed with a small amount of gypsum and ground in a ball mill. The fine gray powder is either bagged or piped into bulk containers for shipment by road, rail, or bulk cargo ship.

The Cure

Most cement is shipped to concrete plants, which tend to be much more widely distributed than cement plants due to the perishable nature of the product they produce. True, both plants rely on nearby deposits of easily accessible rock, but where cement requires limestone, the gravel and sand that go into concrete can come from a wide variety of rock types.

Concrete plants quarry massive amounts of rock, crush it to specifications, and stockpile the material until needed. Orders for concrete are fulfilled by mixing gravel and sand in the proper proportions in a mixer housed in a batch house, which is elevated above the ground to allow space for mixer trucks to drive underneath. The batch house operators mix aggregate, sand, and any other admixtures the customer might require, such as plasticizers, retarders, accelerants, or reinforcers like chopped fiberglass, before adding the prescribed amount of cement from storage silos. Water may or may not be added to the mix at this point. If the distance from the concrete plant to the job site is far enough, it may make sense to load the dry mix into the mixer truck and add the water later. But once the water goes into the mix, the clock starts ticking, because the cement begins to cure.

Cement curing is a complex process involving the calcium silicates (alite and belite) in the cement, as well as the aluminate phases. Overall, the calcium silicates are hydrated by the water into a gel-like substance of calcium oxide and silicate. For alite, the reaction is:

Ca_{3}SiO_{5} + H_{2}O \rightarrow CaO\cdot SiO_{2} \cdot H_{2}O + Ca(OH)_{2}

Scanning electron micrograph of cured Portland cement, showing needle-like ettringite and plate-like calcium hydroxide. Source: US Department of Transportation, Public domain

At the same time, the aluminate phases in the cement are being hydrated and interacting with the gypsum, which prevents early setting by forming a mineral known as ettringite. Without the needle-like ettringite crystals, aluminate ions would adsorb onto alite and block it from hydrating, which would quickly reduce the plasticity of the mix. Ideally, the ettringite crystals interlock with the calcium silicate gel, which binds to the surface of the sand and gravel and locks it into a solid.

Depending on which admixtures were added to the mix, most concretes begin to lose workability within a few hours of hydration. Initial curing is generally complete within about 24 hours, but the curing process continues long after the material has solidified. Concrete in this state is referred to as “green,” and it continues to gain strength over a period of weeks or even months.

The Convoluted Way Intel’s 386 Implemented its Registers

By: Maya Posch
5 May 2025 at 08:00
The 386's main register bank, at the bottom of the datapath. The numbers show how many bits of the register can be accessed. (Credit: Ken Shirriff)

The fact that modern-day x86 processors still pretty much support the same operating systems and software as their ancestors did is quite a feat. Much of this effort had already been accomplished with the release of the 80386 (later 386) CPU in 1985, which was not only the first 32-bit x86 CPU, but was also backwards compatible with 8- and 16-bit software dating back to the 1970s. Making this work transparently was anything but straightforward, as [Ken Shirriff]’s recent analysis of the 80386’s main register file shows.

Labelled Intel 80386 die shot. (Credit: Ken Shirriff)

Using die shots of the 386’s registers and surrounding silicon, it’s possible to piece together how backwards compatibility was implemented. The storage cells of the registers are implemented using static memory (SRAM) as is typical, with much of the register file triple-ported (two read, one write).

Most interesting is the presence of different circuits to support accessing the register file for 8-, 16-, or 32-bit reads and writes. The ‘shuffle’ network, as [Ken] calls it, is responsible for handling these distinct reads and writes, which also leads to the finding that the bottom 16 bits of the registers are actually interleaved to make this process work more smoothly.
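To see why such a network is needed at all, it helps to remember the programmer-visible side of the register file: AL, AH, and AX are all just windows into the low 16 bits of EAX, so an 8-bit write has to land in exactly the right byte without disturbing the rest. Here is a minimal Python sketch of that aliasing behavior; it models the architectural behavior only, not the reverse-engineered circuitry:

```python
# Model of the programmer-visible aliasing the 386's register file supports:
# AL, AH, and AX are views into the low 16 bits of EAX.
# This mimics behavior only; it says nothing about how the silicon does it.

class Reg32:
    def __init__(self, value=0):
        self.value = value & 0xFFFFFFFF

    # 8-bit views
    def write_al(self, v):
        self.value = (self.value & 0xFFFFFF00) | (v & 0xFF)

    def write_ah(self, v):
        self.value = (self.value & 0xFFFF00FF) | ((v & 0xFF) << 8)

    # 16-bit view
    def write_ax(self, v):
        self.value = (self.value & 0xFFFF0000) | (v & 0xFFFF)

eax = Reg32(0x12345678)
eax.write_al(0xAA)     # only bits 0-7 change
eax.write_ah(0xBB)     # only bits 8-15 change
print(hex(eax.value))  # 0x1234bbaa
```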

Fortunately for Intel (and AMD) engineers, this feat wouldn’t have to be repeated with the arrival of AMD64 and x86_64 many years later, by which time the 386’s mere 275,000 transistors on a 1 µm process would already be ancient history.

Want to dive even deeper into the 386? This isn’t the first time [Ken] has looked at the iconic chip.

Ratcheting Mechanism Gives Tendons a Tug

By: Ian Bos
3 May 2025 at 08:00
Full picture of tendon pulling actuator with Arduino elements in the backdrop

A common ratchet from your garage may work wonders for tightening hard-to-reach bolts on everyday projects around the house. However, those over at [Chronova Engineering] had a particularly unusual project that required a special ratchet mechanism to be developed. And developed it was: an absolutely beautiful machining job went into creating a ratcheting actuator for tendon pulling. Yes, this mechanical, steampunk-esque ratchet is meant for yanking on the fleshy strings found in all of us.

The unique mechanism is necessary because biomechanics research requires bidirectional actuation. Tendons are pulled and released so that the resulting movement of the fingers or toes can be measured and compared with the distance pulled by the actuator. Hopefully, this method of actuation and measurement may help doctors and surgeons treat people with impairments, though in this particular case the “patient” is a chicken’s foot.

Blurred for viewing ease

Manufacturing the mechanism itself consisted of a multitude of watch-lathe operations and pantographed patterns. A mixture of custom and commercial screws is used in combination with a peg gear, cams, and a high-performance servo to complete the complex ratchet. With simple control from an Arduino, the system completes its use case very effectively.

In all, the actuator is an incredible piece of machining with one of the least expected use cases. The original, publicly listed video chose not to show the chicken foot itself for fear of the YouTube overlords.

If you wish to see the actuator in proper action, check out the uncensored, unlisted video here.

Thanks to [DjBiohazard] on our Discord server tips-line!

To See Within: Detecting X-Rays

23 April 2025 at 14:00

It’s amazing how quickly medical science made radiography one of its main diagnostic tools. Medicine had barely emerged from its Dark Age of bloodletting and the four humours when X-rays were discovered, and the realization that the internal structure of our bodies could cast shadows of this mysterious “X-Light” opened up diagnostic possibilities that went far beyond the educated guesswork and exploratory surgery doctors had relied on for centuries.

The problem is, X-rays are one of those things that you can’t see, feel, or smell, at least mostly; X-rays cause visible artifacts in some people’s eyes, and the pencil-thin beam of a CT scanner can create a distinct smell of ozone when it passes through the nasal cavity — ask me how I know. But to be diagnostically useful, the varying intensities created by X-rays passing through living tissue need to be translated into an image. We’ve already looked at how X-rays are produced, so now it’s time to take a look at how X-rays are detected and turned into medical miracles.

Taking Pictures

For over a century, photographic film was the dominant way to detect medical X-rays. In fact, years before Wilhelm Conrad Röntgen’s first systematic study of X-rays in 1895, fogged photographic plates during experiments with a Crookes tube were among the first indications of their existence. But it wasn’t until Röntgen convinced his wife to hold her hand between one of his tubes and a photographic plate to create the first intentional medical X-ray that the full potential of radiography could be realized.

“Hand mit Ringen” by W. Röntgen, December 1895. Public domain.

The chemical mechanism that makes photographic film sensitive to X-rays is essentially the same as the process that makes light photography possible. X-ray film is made by depositing a thin layer of photographic emulsion on a transparent substrate, originally celluloid but later polyester. The emulsion is a mixture of high-grade gelatin, a natural polymer derived from animal connective tissue, and silver halide crystals. Incident X-ray photons knock electrons free from the halide ions, and those electrons reduce silver ions within the crystals to atomic silver. This creates a latent image on the film that is developed by chemically converting sensitized silver halide crystals to metallic silver grains and removing all the unsensitized crystals.

Other than in the earliest days of medical radiography, direct X-ray imaging onto photographic emulsions was rare. While photographic emulsions can be exposed by X-rays, it takes a lot of energy to get a good image with proper contrast, especially on soft tissues. This became a problem as more was learned about the dangers of exposure to ionizing radiation, leading to the development of screen-film radiography.

In screen-film radiography, X-rays passing through the patient’s tissues are converted to light by one or more intensifying screens. These screens are made from plastic sheets coated with a phosphorescent material that glows when exposed to X-rays. Calcium tungstate was common back in the day, but rare earth phosphors like gadolinium oxysulfide became more popular over time. Intensifying screens were attached to the front and back covers of light-proof cassettes, with double-emulsion film sandwiched between them; when exposed to X-rays, the screens would glow briefly and expose the film.

By turning one incident X-ray photon into thousands or millions of visible light photons, intensifying screens greatly reduce the dose of radiation needed to create diagnostically useful images. That’s not without its costs, though, as the phosphors tend to spread out each X-ray photon across a physically larger area. This results in a loss of resolution in the image, which in most cases is an acceptable trade-off. When more resolution is needed, single-screen cassettes can be used with one-sided emulsion films, at the cost of increasing the X-ray dose.

Wiggle Those Toes

Intensifying screens aren’t the only place where phosphors are used to detect X-rays. Early on in the history of radiography, doctors realized that while static images were useful, continuous images of body structures in action would be a fantastic diagnostic tool. Originally, fluoroscopy was performed directly, with the radiologist viewing images created by X-rays passing through the patient onto a phosphor-covered glass screen. This required an X-ray tube engineered to operate with a higher duty cycle than radiographic tubes and had the dual disadvantages of much higher doses for the patient and the need for the doctor to be directly in the line of fire of the X-rays. Cataracts were enough of an occupational hazard for radiologists that safety glasses using leaded glass lenses were a common accessory.

How not to test your portable fluoroscope. The X-ray tube is located in the upper housing, while the image intensifier and camera are below. The machine is generally referred to as a “C-arm” and is used in the surgery suite and for bedside pacemaker placements. Source: Nightryder84, CC BY-SA 3.0.

One ill-advised spin-off of medical fluoroscopy was the shoe-fitting fluoroscopes that started popping up in shoe stores in the 1920s. Customers would stick their feet inside the machine and peer at a fluorescent screen to see how well their new shoes fit. It was probably not terribly dangerous for the once-a-year shoe shopper, but pity the shoe salesman who had to peer directly into a poorly regulated X-ray beam eight hours a day to show every Little Johnny’s mother how well his new Buster Browns fit.

As technology improved, image intensifiers replaced direct screens in fluoroscopy suites. Image intensifiers were vacuum tubes with a large input window coated with a fluorescent material such as zinc-cadmium sulfide or sodium-cesium iodide. The phosphors convert X-rays passing through the patient to visible light photons, which are immediately converted to photoelectrons by a photocathode made of cesium and antimony. The electrons are focused by coils and accelerated across the image intensifier tube by a high-voltage field on a cylindrical anode. The electrons pass through the anode and strike a phosphor-covered output screen, which is much smaller in diameter than the input screen. Incident X-ray photons are greatly amplified by the image intensifier, making a brighter image with a lower dose of radiation.
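The gain of such a tube is conventionally quoted as the flux gain (extra light photons per electron at the output phosphor) multiplied by the minification gain, the ratio of input to output screen areas. A quick sketch with illustrative numbers (not the specs of any particular tube) shows why the output image ends up thousands of times brighter:

```python
# Rough illustration of why shrinking the output screen brightens the image:
# total brightness gain is usually quoted as flux gain times minification gain,
# where minification gain is the ratio of input to output screen areas.
# The numbers below are illustrative, not specs of any particular tube.

def brightness_gain(input_diameter_mm, output_diameter_mm, flux_gain):
    minification_gain = (input_diameter_mm / output_diameter_mm) ** 2
    return flux_gain * minification_gain

# A 230 mm input screen focused down to a 25 mm output screen with a
# flux gain of 50 gives a brightness gain in the thousands.
print(brightness_gain(230, 25, 50))  # ~4232
```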

Originally, the radiologist viewed the output screen using a microscope, which at least put a little more hardware between his or her eyeball and the X-ray source. Later, mirrors and lenses were added to project the image onto a screen, moving the doctor’s head out of the direct line of fire. Later still, analog TV cameras were added to the optical path so the images could be displayed on high-resolution CRT monitors in the fluoroscopy suite. Eventually, digital cameras and advanced digital signal processing were introduced, greatly streamlining the workflow for the radiologist and technologists alike.

Get To The Point

So far, all the detection methods we’ve discussed fall under the general category of planar detectors, in that they capture an entire 2D shadow of the X-ray beam after having passed through the patient. While that’s certainly useful, there are cases where the dose from a single, well-defined volume of tissue is needed. This is where point detectors come into play.

Nuclear medicine image, or scintigraph, of metastatic cancer. 99Tc accumulates in lesions in the ribs and elbows (A), which are mostly resolved after chemotherapy (B). Note the normal accumulation of isotope in the kidneys and bladder. Kazunari Mado, Yukimoto Ishii, Takero Mazaki, Masaya Ushio, Hideki Masuda and Tadatoshi Takayama, CC BY-SA 2.0.

In medical X-ray equipment, point detectors often rely on some of the same gas-discharge technology that DIYers use to build radiation detectors at home. Geiger tubes and ionization chambers measure the current created when X-rays ionize a low-pressure gas inside an electric field. Geiger tubes generally use a much higher voltage than ionization chambers, and tend to be used more for radiological safety, especially in nuclear medicine applications, where radioisotopes are used to diagnose and treat diseases. Ionization chambers, on the other hand, were often used as a sort of autoexposure control for conventional radiography. Tubes were placed behind the film cassette holders in the exam tables of X-ray suites and wired into the control panels of the X-ray generators. When enough radiation had passed through the patient, the film, and the cassette into the ion chamber to yield a correct exposure, the generator would shut off the X-ray beam.
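Conceptually, that automatic exposure control boils down to integrating the chamber current and tripping the generator once the accumulated charge crosses a calibrated threshold. A toy sketch of that logic, with invented current and threshold values, might look like this:

```python
# Toy model of ionization-chamber autoexposure: integrate the chamber signal
# over time and cut the X-ray beam once a calibrated threshold is reached.
# Signal values and the threshold are invented for illustration.

def exposure_time(chamber_current_samples, sample_period_s, charge_threshold):
    """Return the time at which the integrated charge crosses the threshold."""
    charge = 0.0
    for i, current in enumerate(chamber_current_samples):
        charge += current * sample_period_s
        if charge >= charge_threshold:
            return (i + 1) * sample_period_s  # generator would shut off here
    return None  # threshold never reached (a backup timer would intervene)

samples = [2e-9] * 1000                       # a steady 2 nA chamber current
print(exposure_time(samples, 0.001, 5e-10))   # crosses 0.5 nC after 0.25 s
```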

Another kind of point detector for X-rays and other kinds of radiation is the scintillation counter. These use a crystal, often cesium iodide or sodium iodide doped with thallium, that releases a few visible light photons when it absorbs ionizing radiation. The faint pulse of light is greatly amplified by one or more photomultiplier tubes, creating a pulse of current proportional to the amount of radiation. Nuclear medicine studies use a device called a gamma camera, which has a hexagonal array of PM tubes positioned behind a single large crystal. A patient is injected with a radioisotope such as the gamma-emitting technetium-99, which accumulates mainly in the bones. Gamma rays emitted are collected by the gamma camera, which derives positional information from the differing times of arrival and relative intensity of the light pulse at the PM tubes, slowly building a ghostly skeletal map of the patient by measuring where the 99Tc accumulated.
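The position estimate in a classic Anger-style gamma camera is, at its core, an intensity-weighted centroid of the photomultiplier signals. A minimal sketch of that idea, with made-up tube positions and signal values, looks something like this:

```python
# Minimal sketch of Anger logic: estimate where a gamma ray struck the crystal
# as the intensity-weighted centroid of the photomultiplier-tube signals.
# Tube positions and signal values are made up for illustration.

def anger_position(pmt_positions, pmt_signals):
    total = sum(pmt_signals)
    x = sum(px * s for (px, _), s in zip(pmt_positions, pmt_signals)) / total
    y = sum(py * s for (_, py), s in zip(pmt_positions, pmt_signals)) / total
    return x, y

# Three tubes in a row; the middle one sees most of the light, so the
# estimated event position lands near it.
positions = [(-50.0, 0.0), (0.0, 0.0), (50.0, 0.0)]
signals = [0.2, 1.0, 0.4]
print(anger_position(positions, signals))  # (6.25, 0.0)
```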

Going Digital

Despite dominating the industry for so long, the days of traditional film-based radiography were clearly numbered once solid-state image sensors began appearing in the 1980s. While it was reliable and gave excellent results, film development required a lot of infrastructure and expense, and resulted in bulky films that required a lot of space to store. The savings from doing away with all the trappings of film-based radiography, including the darkrooms, automatic film processors, chemicals, silver recycling, and often hundreds of expensive film cassettes, are largely what drove the move to digital radiography.

After briefly flirting with phosphor plate radiography, where a sensitized phosphor-coated plate was exposed to X-rays and then “developed” by a special scanner before being erased for the next use, radiology departments embraced solid-state sensors and fully digital image capture and storage. Solid-state sensors come in two flavors: indirect and direct. Indirect sensor systems use a large matrix of photodiodes on amorphous silicon to measure the light given off by a scintillation layer directly above it. It’s basically the same thing as a film cassette with intensifying screens, but without the film.

Direct sensors, on the other hand, don’t rely on converting the X-rays into light. Rather, a large flat selenium photoconductor is used; X-rays absorbed by the selenium cause electron-hole pairs to form, which migrate to a matrix of fine electrodes on the underside of the sensor. The charge collected at each pixel is proportional to the amount of radiation received, and can be read out pixel by pixel to build up a digital image.
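In software terms, that readout amounts to scanning the electrode matrix and filling in a 2D array one pixel at a time. A toy sketch (with a placeholder read_pixel_charge() standing in for the real readout electronics) might look like this:

```python
# Toy readout of a direct flat-panel detector: scan the electrode matrix row
# by row and build a 2D image from the charge collected at each pixel.
# read_pixel_charge() is a stand-in for the real readout electronics.
import random

ROWS, COLS = 4, 4

def read_pixel_charge(row, col):
    # Placeholder: pretend to sample the charge at one electrode.
    return random.uniform(0.0, 1.0)

image = [[read_pixel_charge(r, c) for c in range(COLS)] for r in range(ROWS)]
for row in image:
    print(" ".join(f"{px:.2f}" for px in row))
```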

Modernizing an Enigma Machine

17 April 2025 at 08:00
Enigma buttons

This project by [Miro] is awesome, not only did he build a replica Enigma machine using modern technologies, but after completing it, he went back and revised several components to make it more usable. We’ve featured Enigma machines here before; they are complex combinations of mechanical and electrical components that form one of the most recognizable encryption methods in history.

His first Enigma machine was designed closely after the original. He used custom PCBs for the plugboard and lightboard, which significantly cleaned up the internal wiring. For the lightboard, he cleverly used a laser printer on semi-transparent paper to create crisp letters, illuminated from behind. For the keyboard, he again designed a custom PCB to connect all the switches. However, he encountered an unexpected setback due to error stack-up. We love that he took the time to document this issue and explain that the project didn’t come together perfectly on the first try and how some adjustments were needed along the way.

Custom rotary wheel

The real heart of this build is the thought and effort put into the design of the encryption rotors. These are the components that rotate with each keystroke, changing the signal path as the system is used. In a clever hack, he used a combination of PCBs, pogo pins, and 3D printed parts to replicate the function of the original wheels.
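For anyone fuzzy on what those wheels actually do, a stripped-down software model of a single rotor helps: a fixed wiring permutation, offset by the rotor’s current position, that steps on every keystroke. The sketch below uses the historical rotor I wiring and leaves out ring settings, the reflector, and the return path entirely; it models the idea, not [Miro]’s hardware:

```python
# Tiny, simplified model of one Enigma rotor: a fixed wiring permutation that
# is shifted by the rotor's current position, and which steps on every
# keystroke. Ring settings, the reflector, and the return path through the
# rotor are all omitted for brevity.

import string

ALPHABET = string.ascii_uppercase
WIRING = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"  # historical rotor I forward wiring

class Rotor:
    def __init__(self, wiring, position=0):
        self.wiring = wiring
        self.position = position

    def step(self):
        self.position = (self.position + 1) % 26

    def encode(self, letter):
        offset = self.position
        index = (ALPHABET.index(letter) + offset) % 26
        return ALPHABET[(ALPHABET.index(self.wiring[index]) - offset) % 26]

rotor = Rotor(WIRING)
for _ in range(3):                     # the same plaintext letter...
    rotor.step()                       # ...after the rotor steps...
    print(rotor.encode("A"), end=" ")  # ...maps to a different letter each time
```

Because the rotor moves before each encoding, the same input letter takes a different path through the wiring every time, which is exactly why the wheel position has to be sensed so reliably in the hardware build.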

Enigma machine connoisseurs will notice that the wheels rotate differently than in the original design, which leads us to the second half of this project. After using the machine for a while, it became clear that the pogo pins were wearing down the PCB surfaces on the wheels. To solve this, he undertook an extensive redesign that resulted in a much more robust and reliable machine.

In the redesign, instead of using pogo pins to make contact with pads, he explored several alternative methods to detect the wheel position—including IR light with phototransistors, rotary encoders, magnetic encoders, Hall-effect sensors, and more. The final solution reduced the wiring and addressed long-term reliability concerns by eliminating the mechanical wear present in the original design.

Not only did he document the build on his site, he also created a video that shows what he built and gives a great explanation of the logic and function of the machine. Be sure to also check out some of the other cool Enigma machines we’ve featured over the years.
