
Radio Apocalypse: Meteor Burst Communications

12 May 2025 at 14:00

The world’s militaries have always been at the forefront of communications technology. From trumpets and drums to signal flags and semaphores, anything that allows a military commander to relay orders to troops in the field quickly or call for reinforcements was quickly seized upon and optimized. So once radio was invented, it’s little wonder how quickly military commanders capitalized on it for field communications.

Radiotelegraph systems began showing up as early as the First World War, but World War II was the first real radio war, with every belligerent taking full advantage of the latest radio technology. Chief among these developments was the ability of signals in the high-frequency (HF) bands to reflect off the ionosphere and propagate around the world, an important capability when prosecuting a global war.

But not long after, in the less kinetic but equally dangerous Cold War period, military planners began to see the need to move more information around than HF radio could support while still being able to do it over the horizon. What they needed was the higher bandwidth of the higher frequencies, but to somehow bend the signals around the curvature of the Earth. What they came up with was a fascinating application of practical physics: meteor burst communications.

Blame It on Shannon

In practical terms, a radio signal that can carry enough information to be useful for digital communications while still being able to propagate long distances is a bit of a paradox. You can thank Claude Shannon for that, after he developed the idea of channel capacity from the earlier work of Harry Nyquist and Ralph Hartley. The resulting Shannon-Hartley theorem states that the maximum bit rate of a channel in a noisy environment is directly related to the bandwidth of the channel. In other words, the more data you want to stuff down a channel, the more bandwidth you need, and generous helpings of bandwidth are much easier to come by at higher frequencies.
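To put rough numbers on that, here is a minimal sketch of the Shannon-Hartley capacity, C = B·log₂(1 + S/N), comparing a narrow HF-style voice channel to a wider VHF channel. It is written in plain Python, and the bandwidths and signal-to-noise ratio are assumed example figures, not values from the article:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (10 / 10)  # assume a 10 dB signal-to-noise ratio on both links

hf_capacity = shannon_capacity_bps(3_000, snr)    # ~3 kHz HF voice-grade channel
vhf_capacity = shannon_capacity_bps(25_000, snr)  # ~25 kHz VHF channel

print(f"HF  (~3 kHz):  {hf_capacity / 1000:.1f} kbit/s upper bound")
print(f"VHF (~25 kHz): {vhf_capacity / 1000:.1f} kbit/s upper bound")
```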

Unfortunately, that runs afoul of the physics of ionospheric propagation. Thanks to the way radio waves interact with the charged particles between about 50 km and 600 km above the ground, the maximum frequency that can be reflected back toward the ground is about 30 MHz, the upper end of the HF band. Beyond that is the very-high-frequency (VHF) band from 30 MHz to 300 MHz, which has enough bandwidth for an effective data channel but to which the ionosphere is essentially transparent.

Luckily, the ionosphere isn’t the only thing capable of redirecting radio waves. Back in the 1920s, Japanese physicist Hantaro Nagaoka observed that the ionospheric propagation of shortwave radio signals would change a bit during periods of high meteoric activity. That discovery largely remained dormant until after World War II, when researchers picked up on Nagaoka’s work and looked into the mechanism behind his observations.

Every day, the Earth sweeps up a huge number of meteoroids; estimates range from a million to ten billion. Most of those are very small, on the order of a few nanograms, with a few good-sized chunks in the tens of kilograms range mixed in. But the ones that end up being most interesting for communications purposes are the particles in the milligram range, in part because there are about 100 million such collisions on average every day, but also because they tend to vaporize in the E-layer of the ionosphere, between 80 and 120 km above the surface. The air at that altitude is dense enough to turn the incoming cosmic debris into a long, skinny trail of ions, but thin enough that the free electrons take a while to recombine into neutral atoms. It’s a short time — anywhere from 500 milliseconds to a few seconds — but it’s long enough to be useful.

A meteor trail from the annual Perseid shower, which peaks in early August. This is probably a bit larger than the optimum for MBC, but beautiful nonetheless. Source: John Flannery, CC BY-ND 2.0.

The other aspect of meteor trails formed at these altitudes that makes them useful for communications is their relative reflectivity. The E-layer of the ionosphere normally has on the order of 10⁷ electrons per cubic meter, a density that tends to refract radio waves below about 20 MHz. But meteor trails at this altitude can have densities as high as 10¹¹ to 10¹² electrons/m³. This makes the trails highly reflective to radio waves, especially at the higher frequencies of the VHF band.

In addition to the short-lived nature of meteor trails, daily and seasonal variations in the number of meteors complicate their utility for communications. The rotation of the Earth on its axis accounts for the diurnal variation, which tends to peak around dawn local time, when that side of the planet faces the direction of travel along Earth’s orbit and sweeps up more collisions. Seasonal variations occur because of the tilt of Earth’s axis relative to the plane of the ecliptic, where most meteoroids are concentrated. More collisions occur when the Earth’s axis is pointed in the direction of travel around the Sun, which is the second half of the year for the northern hemisphere.

Learning to Burst

Building a practical system that leverages these highly reflective but short-lived and variable mirrors in the sky isn’t easy, as several post-war experimental systems showed. The first attempt was made by the National Bureau of Standards in 1951, with a link between Cedar Rapids, Iowa, and Sterling, Virginia, a path length of about 1,250 km. The system was originally built to study propagation phenomena such as forward scatter and sporadic E, but the researchers noticed that meteor trails had a significant effect on their tests and shifted their focus accordingly. That work caught the attention of the US Air Force, which was in the market for a continuous four-channel teletype link to its base in Thule, Greenland. The Air Force got its link, but only just barely, thanks to the limited technology of the time. The NBS also used the Iowa-to-Virginia link to study higher data rates by pointing highly directional rhombic antennas at each end of the connection at the same small patch of sky. They managed a whopping 3,200 bits per second this way, but only for the second or so that a meteor trail happened to appear.

The successes and failures of the NBS system made it clear that a useful system based on meteor trails would need to operate in burst mode: jam data through the link for as long as it exists, then wait for the next one. The NBS tested a burst-mode system in 1958 that used the 50-MHz band and offered a full-duplex link at 2,400 bits per second. The system used magnetic tape loops to buffer data, and the transmitters at both ends of the link operated continually to probe for a path. Whenever the receiver at one end detected a sufficiently strong probe signal from the other end, the transmitter would start sending data. The Canadians got in on the MBC action with their JANET system, which had a similar dedicated probing channel and tape buffer. In 1954, they established a full-duplex teletype link between Ottawa and Nova Scotia at 1,300 bits per second with an error rate of only 1.5%.
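Stripped of the tape loops and RF hardware, the burst-mode idea is a simple loop: probe continuously, hold traffic in a buffer while there is no path, and dump the buffer the moment a trail appears. Here is a minimal sketch of that logic in Python; the threshold, timing, and the random stand-in for the probe receiver are illustrative assumptions, not details of the NBS or JANET equipment:

```python
import collections
import random
import time

PROBE_THRESHOLD = 0.9  # assumed: a usable trail is only "up" a small fraction of the time

def probe_path_strength() -> float:
    """Stand-in for the probe receiver; real systems listened for the far end's probe carrier."""
    return random.random()

def send_burst(message: str) -> None:
    """Stand-in for keying the transmitter and sending one buffered chunk."""
    print(f"burst: {message}")

def burst_mode_link(traffic) -> None:
    tx_buffer = collections.deque(traffic)  # plays the role of the magnetic tape loop
    while tx_buffer:
        # While the probe from the far end is strong enough, a trail is open:
        # jam buffered data through the link until it fades again.
        while tx_buffer and probe_path_strength() >= PROBE_THRESHOLD:
            send_burst(tx_buffer.popleft())
        time.sleep(0.01)  # no path: keep buffering and wait for the next meteor

burst_mode_link([f"teletype frame {i}" for i in range(5)])
```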

In the late 1950s, Hughes developed a single-channel air-to-ground MBC system. This was a significant development, not only because the equipment had gotten small enough to install on an airplane, but also because it really refined burst-mode technology. The ground stations in the Hughes system periodically transmitted a 100-bit interrogation signal to probe for a path to the aircraft. The receiver on the ground listened for an acknowledgement from the plane, which turned the channel around and allowed the airborne transmitter to send a 100-bit data burst. The system managed a respectable 2,400 bps data rate, but suffered greatly from ground-based interference from TV stations and automotive ignition noise.

The SHAPE of Things to Come

Supreme HQ Allied Powers Europe (SHAPE), NATO’s European headquarters in the mid-60s. The COMET meteor-bounce system kept NATO commanders in touch with member-nation HQs via teletype. Source: NATO

The first major MBC system fielded during the Cold War was the Communications by Meteor Trails system, or COMET. It was used by the North Atlantic Treaty Organization (NATO) to link its far-flung outposts in member nations with Supreme Headquarters Allied Powers Europe, or SHAPE, located in Belgium. COMET took cues from the Hughes system, especially its error detection and correction scheme. COMET was a robust and effective MBC system that provided between four and eight teletype circuits depending on daily and seasonal conditions, each handling 60 words per minute.

COMET was in continuous use from the mid-1960s until well after the official end of the Cold War. By that point, secure satellite communications were nowhere near as prohibitively expensive as they had been at the beginning of the Space Age, and MBC systems became less critical to NATO. They weren’t retired, though, and COMET actually still exists, although rebranded as “Compact Over-the-Horizon Mobile Expeditionary Terminal.” These man-portable systems don’t use MBC; rather, they use high-power UHF and microwave transmitters to scatter signals off the troposphere. A small amount of the signal is reflected back to the ground, where high-gain antennas pick up the vanishingly weak signals.

Although not directly related to Cold War communications, it’s worth noting that there was a very successful MBC system fielded in the civilian space in the United States: SNOTEL. We’ve covered this system in some depth already, but briefly, it’s a network of stations in the western part of the USA with the critical job of monitoring the snowpack. A commercial MBC system connected the solar-powered monitoring stations, often in remote and rugged locations, to two different central bases. Taking advantage of diurnal meteor variations, each morning the master station would send a polling signal out to every remote, which would then send back the previous day’s data once a return path was opened. The system could collect data from 180 remote sites in just 20 minutes. It operated successfully from the mid-1970s until just recently, when pervasive cell technology and cheap satellite modems made the system obsolete.

Hackaday Links: May 11, 2025

11 May 2025 at 23:00

Did artificial intelligence just jump the shark? Maybe so, and it came from the legal world of all places, with this report of an AI-generated victim impact statement. In an apparent first, the family of an Arizona man killed in a road rage incident in 2021 used AI to bring the victim back to life to testify during the sentencing phase of his killer’s trial. The video was created by the sister and brother-in-law of the 37-year-old victim using old photos and videos, and was quite well done, despite the normal uncanny valley stuff around lip-syncing that seems to be the fatal flaw for every deep-fake video we’ve seen so far. The victim’s beard is also strangely immobile, which we found off-putting.

In the video, the victim expresses forgiveness toward his killer and addresses his family members directly, talking about things like what he would have looked like if he’d gotten the chance to grow old. That seemed incredibly inflammatory to us, but according to Arizona law, victims and their families get to say pretty much whatever they want in their impact statements. While this appears to be legal, we wouldn’t be surprised to see it appealed, since the judge tacked an extra year onto the killer’s sentence over what the prosecution sought based on the power of the AI statement. If this tactic withstands the legal tests it’ll no doubt face, we could see an entire industry built around this concept.

Last week, we warned about the impending return of Kosmos 482, a Soviet probe that was supposed to go to Venus when it was launched in 1972. It never quite chooched, though, and ended up circling the Earth for the last 53 years. The satellite made its final orbit on Saturday morning, ending up in the drink in the Indian Ocean, far from land. Alas, the faint hope that it would have a soft landing thanks to the probe’s parachute having apparently been deployed at some point in the last five decades didn’t come to pass. That’s a bit of a disappointment to space fans, who’d love to get a peek inside this priceless bit of space memorabilia. Roscosmos says they monitored the descent, so presumably they know more or less where the debris rests. Whether it’s worth an expedition to retrieve it remains to be seen.

Are we really at the point where we have to worry about counterfeit thermal paste? Apparently, yes, judging by the effort Arctic Cooling is putting into authenticity verification of its MX brand pastes. To make sure you’re getting the real deal, boxes will come with seals that rival those found on over-the-counter medications and scratch-off QR codes that can be scanned and cross-referenced to an online authentication site. We suppose it makes sense; chip counterfeiting is a very real thing, after all, and it’s probably as easy to put a random glob of goo into a syringe as it is to laser new markings onto a chip package. And Arctic compound commands a pretty penny, so the incentive is obvious. But still, something about this just bothers us.

Another very cool astrophotography shot this week, this time a breathtaking collection of galaxies. Taken from the Near Infrared camera on the James Webb Space Telescope with help from the Hubble Space Telescope and the XMM-Newton X-ray space observatory, the image shows thousands of galaxies of all shapes and sizes, along with the background X-ray glow emitted by all the clouds of superheated dust and gas between them. The stars with the characteristic six-pointed diffraction spikes are all located within our galaxy, but everything else is a galaxy. The variety is fascinating, and the scale of the image is mind-boggling. It’s galactic eye candy!

And finally, if you’ve ever wondered about what happens when a nuclear reactor melts down, you’re in luck with this interesting animagraphic on the process. It’s not a detailed 3D render of any particular nuclear power plant and doesn’t have a specific meltdown event in mind, although it does mention both Chernobyl and Fukushima. Rather, it’s a general look at pressurized water reactors and what can go wrong when the cooling water stops flowing. It also touches on potentially safer designs with passive safety systems that rely on natural convection to keep cooling water circulating in the event of disaster, along with gravity-fed deluge systems to cool the containment vessel if things get out of hand. It’s a good overview of how reactors work and where they can go wrong. Enjoy.


Big Chemistry: Cement and Concrete

7 May 2025 at 14:00

Not too long ago, I was searching for ideas for the next installment of the “Big Chemistry” series when I found an article that discussed the world’s most-produced chemicals. It was an interesting article, right up my alley, and helpfully contained a top-ten list that I could use as a crib sheet for future articles, at least for the ones I hadn’t covered already, like the Haber-Bosch process for ammonia.

Number one on the list surprised me, though: sulfuric acid. The article stated that it was far and away the most produced chemical in the world, with 36 million tons produced every year in the United States alone, out of something like 265 million tons a year globally. It’s used in a vast number of industrial processes, and pretty much everywhere you need something cleaned or dissolved or oxidized, you’ll find sulfuric acid.

Staggering numbers, to be sure, but is it really the most produced chemical on Earth? I’d argue not by a long shot, when there’s a chemical that we make 4.4 billion tons of every year: Portland cement. It might not seem like a chemical in the traditional sense of the word, but once you get a look at what it takes to make the stuff, how finely tuned it can be for specific uses, and how when mixed with sand, gravel, and water it becomes the stuff that holds our world together, you might agree that cement and concrete fit the bill of “Big Chemistry.”

Rock Glue

To kick things off, it might be helpful to define some basic terms. Despite the tendency to use them as synonyms among laypeople, “cement” and “concrete” are entirely different things. Concrete is the finished building material of which cement is only one part, albeit a critical part. Cement is, for lack of a better term, the glue that binds gravel and sand together into a coherent mass, allowing it to be used as a building material.

What did the Romans ever do for us? The concrete dome of the Pantheon is still standing after 2,000 years. Source: Image by Sean O’Neill from Flickr via Monolithic Dome Institute (CC BY-ND 2.0)

It’s not entirely clear who first discovered that calcium oxide, or lime, mixed with certain silicate materials would form a binder strong enough to stick rocks together, but it certainly goes back into antiquity. The Romans get an outsized but well-deserved portion of the credit thanks to their use of pozzolana, a silicate-rich volcanic ash, to make the concrete that held the aqueducts together and built such amazing structures as the dome of the Pantheon. But the use of cement in one form or another can be traced back at least to ancient Egypt, and probably beyond.

Although there are many kinds of cement, we’ll limit our discussion to Portland cement, mainly because it’s what is almost exclusively manufactured today. (The “Portland” name was a bit of branding by its inventor, Joseph Aspdin, who thought the cured product resembled the famous limestone from the Isle of Portland off the coast of Dorset in the English Channel.)

Portland cement manufacturing begins with harvesting its primary raw material, limestone. Limestone is a sedimentary rock rich in carbonates, especially calcium carbonate (CaCO3), which tends to be found in areas once covered by warm, shallow inland seas. Along with the fact that limestone forms between 20% and 25% of all sedimentary rocks on Earth, that makes limestone deposits pretty easy to find and exploit.

Cement production begins with quarrying and crushing vast amounts of limestone. Cement plants are usually built alongside the quarries that produce the limestone or even right within them, to reduce transportation costs. Crushed limestone can be moved around the plant on conveyor belts or using powerful fans to blow the crushed rock through large pipes. Smaller plants might simply move raw materials around using haul trucks and front-end loaders. Along with the other primary ingredient, clay, limestone is stored in large silos located close to the star of the show: the rotary kiln.

Turning and Burning

A rotary kiln is an enormous tube, up to seven meters in diameter and perhaps 80 m long, set on a slight angle from the horizontal by a series of supports along its length. The supports have bearings built into them that allow the whole assembly to turn slowly, hence the name. The kiln is lined with refractory materials to resist the flames of a burner set in the lower end of the tube. Exhaust gases exit the kiln from the upper end through a riser pipe, which directs the hot gas through a series of preheaters that slowly raise the temperature of the entering raw materials, known as rawmix.

The rotary kiln is the centerpiece of Portland cement production. While hard to see in this photo, the body of the kiln tilts slightly down toward the structure on the left, where the burner enters and finished clinker exits. Source: by nordroden, via Adobe Stock (licensed).

Preheating the rawmix drives off any remaining water before it enters the kiln, and begins the decomposition of limestone into lime, or calcium oxide:

CaCO_{3} \rightarrow CaO + CO_{2}
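That calcination step is also a major source of CO₂ all by itself, and the stoichiometry makes the scale easy to estimate. Here is a quick back-of-the-envelope check in Python, using nothing but standard molar masses; it ignores fuel-related emissions and the non-limestone portion of the rawmix:

```python
# Standard molar masses, in g/mol
M_CACO3 = 100.09  # calcium carbonate
M_CAO = 56.08     # calcium oxide (lime)
M_CO2 = 44.01     # carbon dioxide

limestone_t = 1.0  # one tonne of pure CaCO3 entering the kiln
lime_t = limestone_t * M_CAO / M_CACO3
co2_t = limestone_t * M_CO2 / M_CACO3

print(f"1 t of CaCO3 -> about {lime_t:.2f} t of CaO "
      f"and {co2_t:.2f} t of CO2 from calcination alone")
```

In round numbers, every tonne of limestone calcined gives up a bit under half its mass as carbon dioxide before any fuel is even burned.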

The rotation of the kiln along with its slight slope results in a slow migration of rawmix down the length of the kiln and into increasingly hotter regions. Different reactions occur as the temperature increases. At the top of the kiln, the 500 °C heat decomposes the clay into silicate and aluminum oxide. Further down, as the heat reaches the 800 °C range, calcium oxide reacts with silicate to form the calcium silicate mineral known as belite:

2CaO + SiO_{2} \rightarrow 2CaO\cdot SiO_{2}

Finally, near the bottom of the kiln, belite and calcium oxide react to form another calcium silicate, alite:

2CaO\cdot SiO_{2} + CaO \rightarrow 3CaO\cdot SiO_{2}

It’s worth noting that cement chemists have a specialized nomenclature for alite, belite, and all the other intermediary phases of Portland cement production. It’s a shorthand that looks similar to standard chemical nomenclature, and while we’re sure it makes things easier for them, it’s somewhat infuriating to outsiders. We’ll stick to standard notation here to make things simpler. It’s also important to note that the aluminates that decomposed from the clay are still present in the rawmix. Even though they’re not shown in these reactions, they’re still critical to the proper curing of the cement.

Portland cement clinker. Each ball is just a couple of centimeters in diameter. Source: مرتضا, Public domain

The final section of the kiln is the hottest, at 1,500 °C. The extreme heat causes the material to sinter, a physical change that partially melts the particles and adheres them together into small, gray lumps called clinker. When the clinker pellets drop from the bottom of the kiln, they are still incandescently hot. Blasts of air rapidly bring the clinker down to around 100 °C. The exhaust from the clinker cooler joins the kiln exhaust and helps preheat the incoming rawmix charge, while the cooled clinker is mixed with a small amount of gypsum and ground in a ball mill. The fine gray powder is either bagged or piped into bulk containers for shipment by road, rail, or bulk cargo ship.

The Cure

Most cement is shipped to concrete plants, which tend to be much more widely distributed than cement plants due to the perishable nature of the product they produce. True, both plants rely on nearby deposits of easily accessible rock, but where cement requires limestone, the gravel and sand that go into concrete can come from a wide variety of rock types.

Concrete plants quarry massive amounts of rock, crush it to specifications, and stockpile the material until needed. Orders for concrete are fulfilled by mixing gravel and sand in the proper proportions in a mixer housed in a batch house, which is elevated above the ground to allow space for mixer trucks to drive underneath. The batch house operators mix aggregate, sand, and any other admixtures the customer might require, such as plasticizers, retarders, accelerants, or reinforcers like chopped fiberglass, before adding the prescribed amount of cement from storage silos. Water may or may not be added to the mix at this point. If the distance from the concrete plant to the job site is far enough, it may make sense to load the dry mix into the mixer truck and add the water later. But once the water goes into the mix, the clock starts ticking, because the cement begins to cure.

Cement curing is a complex process involving the calcium silicates (alite and belite) in the cement, as well as the aluminate phases. Overall, the calcium silicates are hydrated by the water into a gel-like substance of calcium oxide and silicate. For alite, the reaction is:

Ca_{3}SiO_{5} + H_{2}O \rightarrow CaO\cdot SiO_{2} \cdot H_{2}O + Ca(OH)_{2}

Scanning electron micrograph of cured Portland cement, showing needle-like ettringite and plate-like calcium oxide. Source: US Department of Transportation, Public domain

At the same time, the aluminate phases in the cement are being hydrated and interacting with the gypsum, which prevents early setting by forming a mineral known as ettringite. Without the needle-like ettringite crystals, aluminate ions would adsorb onto alite and block it from hydrating, which would quickly reduce the plasticity of the mix. Ideally, the ettringite crystals interlock with the calcium silicate gel, which binds to the surface of the sand and gravel and locks it into a solid.

Depending on which adjuvants were added to the mix, most concretes begin to lose workability within a few hours of hydration. Initial curing is generally complete within about 24 hours, but the curing process continues long after the material has solidified. Concrete in this state is referred to as “green,” and continues to gain strength over a period of weeks or even months.

Hackaday Links: May 4, 2025

4 May 2025 at 23:00

By now, you’ve probably heard about Kosmos 482, a Soviet probe destined for Venus in 1972 that fell a bit short of the mark and stayed in Earth orbit for the last 53 years. Soon enough, though, the lander will make its fiery return; exactly where and when remain a mystery, but it should be sometime in the coming week. We talked about the return of Kosmos briefly on this week’s podcast and even joked a bit about how cool it would be if the parachute that would have been used for the descent to Venus had somehow deployed over its half-century in space. We might have been onto something, as astrophotographer Ralf Vanderburgh has taken some pictures of the spacecraft that seem to show a structure connected to and trailing behind it. The chute is probably in pretty bad shape after 50 years of UV torture, but how cool is that?

Parachute or not, chances are good that the 495-kilogram spacecraft, built to not only land on Venus but to survive the heat, pressure, and corrosive effects of the hellish planet’s atmosphere, will at least partially survive reentry into Earth’s more welcoming environs. That’s a good news, bad news thing: good news that we might be able to recover a priceless artifact of late-Cold War space technology, bad news to anyone on the surface near where this thing lands. If Kosmos 482 does manage to do some damage, it won’t be the first time. Shortly after launch, pieces of titanium rained down on New Zealand after the probe’s booster failed to send it on its way to Venus, damaging crops and starting some fires. The Soviets, ever secretive about their space exploits until they could claim complete success, disavowed the debris and denied responsibility for it. That made the farmers whose fields they fell in the rightful owners, which is also pretty cool. We doubt that the long-lost Kosmos lander will get the same treatment, but it would be nice if it did.

Also of note in the news this week is a brief clip of a Unitree humanoid robot going absolutely ham during a demonstration — demo-hell, amiright? Potential danger to the nearby engineers notwithstanding, the footage is pretty hilarious. The demo, with a robot hanging from a hoist in a crowded lab, starts out calmly enough, but goes downhill quickly as the robot starts flailing its arms around. We’d say the movements were uncontrolled, but there are points where the robot really seems to be chasing the engineer and taking deliberate swipes at the poor guy, who was probably just trying to get to the e-stop switch. We know that’s probably just the anthropomorphization talking, but it sure looks like the bot had a beef to settle.  You be the judge.

Also from China comes a report of “reverse ATMs” that accept gold and turn it into cash on the spot (apologies for yet another social media link, but that’s where the stories are these days). The machine shown has a hopper into which customers can load their unwanted jewelry, after which it is reportedly melted down and assayed for purity. The funds are then directly credited to the customer’s account electronically. We’re not sure we fully believe this — thinking about the various failure modes of one of those fresh-brewed coffee machines, we shudder to think about the consequences of a machine with a 1,000°C furnace built into it. We also can’t help but wonder how the machine assays the scrap gold — X-ray fluorescence? Raman spectroscopy? Also, what happens to the unlucky customer who puts some jewelry in that they thought was real gold, only to be told by the machine that it wasn’t? Do they just get their stuff back as a molten blob? The mind boggles.

And finally, the European Space Agency has released a stunning new image of the Sun. Captured by their Solar Orbiter spacecraft in March from about 77 million kilometers away, the mosaic is composed of about 200 images from the Extreme Ultraviolet Imager. The Sun was looking particularly good that day, with filaments, active regions, prominences, and coronal loops in evidence, along with the ethereal beauty of the Sun’s atmosphere. The image is said to be the most detailed view of the Sun yet taken, and needs to be seen in full resolution to be appreciated. Click on the image below and zoom to your heart’s content.


Look! It’s a Knob! It’s a Jack! It’s Euroknob!

28 April 2025 at 08:00

Are your Eurorack modules too crowded? Sick of your patch cables making it hard to twiddle your knobs? Then you might be very interested in the new Euroknob, the knob that sports a hidden patch cable jack.

Honestly, when we first saw the Euroknob demo board, we thought [Mitxela] had gone a little off the rails. It looks like nothing more than a PCB-mount potentiometer or perhaps an encoder with a knob attached. Twist the knob and a row of LEDs on the board light up in sequence. Nice, but not exactly what we’re used to seeing from him. But then he popped the knob off the board, revealing that what we thought was the pot body is actually a 3.5-mm audio jack, and that the knob was attached to a mating plug that acts as an axle.

The kicker is that underneath the audio jack is an AS5600 magnetic encoder, and hidden in a slot milled in the tip of the audio jack is a tiny magnet. Pop the knob into the jack, give it a twist, and you’ve got manual control of your module. Take the knob out, plug in a patch cable, and you can let a control voltage from another module do the job. Genius!
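For a sense of how little the sensing side of this demands, here is a minimal sketch of reading the AS5600’s 12-bit raw angle over I²C. This isn’t [Mitxela]’s CH32V003 firmware; it’s an illustrative Python version for something like a Raspberry Pi, and it assumes the smbus2 library and the sensor at its standard address:

```python
from smbus2 import SMBus

AS5600_ADDR = 0x36   # fixed I2C address of the AS5600
RAW_ANGLE_H = 0x0C   # raw angle register, high byte (upper 4 of 12 bits)
RAW_ANGLE_L = 0x0D   # raw angle register, low byte

def read_angle_degrees(bus: SMBus) -> float:
    """Read the 12-bit raw angle and convert it to degrees."""
    high = bus.read_byte_data(AS5600_ADDR, RAW_ANGLE_H)
    low = bus.read_byte_data(AS5600_ADDR, RAW_ANGLE_L)
    raw = ((high & 0x0F) << 8) | low   # 0..4095 over one full turn
    return raw * 360.0 / 4096.0

with SMBus(1) as bus:  # I2C bus 1 on a Raspberry Pi
    print(f"knob position: {read_angle_degrees(bus):.1f} degrees")
```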

To make it all work mechanically, [Mitxela] had to sandwich a spacer board on top of the main PCB. The spacer has a large cutout to make room for the sensor chip so the magnet can rotate without hitting anything. He also added a CH32V003 to run the encoder and drive the LEDs to provide feedback for the knob-jack. The video below has a brief demo.

This is just a proof of concept, to be sure, but it’s still pretty slick. Almost as slick as [Mitxela]’s recent fluid-motion simulation pendant, or his dual-wielding soldering irons.

Hackaday Links: April 27, 2025

27 April 2025 at 23:00

Looks like the Simpsons had it right again, now that an Australian radio station has been caught using an AI-generated DJ for their midday slot. Station CADA, a Sydney-based broadcaster that’s part of the Australian Radio Network, revealed that “Workdays with Thy” isn’t actually hosted by a person; rather, “Thy” is a generative AI text-to-speech system that has been on the air since November. An actual employee of the ARN finance department was used for Thy’s voice model and her headshot, which adds a bit to the creepy factor.

The discovery that they’ve been listening to a bot for months apparently has Thy’s fans in an uproar, although we suspect that the media doing the reporting is probably more exercised about this than the general public. Radio stations have used robo-jocks for the midday slot for ages, albeit using actual human DJs to record patter to play between tunes and commercials. Anyone paying attention over the last few years probably shouldn’t be surprised by this development, and we suspect similar disclosures will be forthcoming across the industry now that the cat’s out of the bag.

Also from the world of robotics, albeit the hardware kind, is this excellent essay from Brian Potter over at Construction Physics about the sad state of manual dexterity in humanoid robots. The whole article is worth reading, not least for the link to a rogue’s gallery of the current crop of humanoid robots, but briefly, the essay contends that while humanoid robots do a pretty good job of navigating in the world, their ability to do even the simplest tasks is somewhat wanting.

Brian’s example of unwrapping and applying a Band-Aid, a task that any toddler can handle but that remains unimaginably difficult for any current robot, is quite apt. He attributes the gap in abilities between gross movements and fine motor control partly to hardware and partly to software. We think the blame skews more to the hardware side; while the legs and torso of the typical humanoid robot offer a lot of real estate for powerful actuators, squeezing that much equipment into a hand approximately the size of a human’s is a tall order. These problems will likely be overcome, of course, and when they are, Brian’s helpful list of “Dexterity Evals” or something similar will act as a sort of Turing test for robot dexterity. Although the day a humanoid robot can start a new roll of toilet paper without tearing the first sheet is the day we head for the woods.

We recently did a story on the use of nitrogen-vacancy diamonds as magnetic sensors, which we found really exciting because it’s about the simplest way we’ve seen to play with quantum physics at home. After that story ran, eagle-eyed reader Kealan noticed that Brian over at the “Real Engineering” channel on YouTube had recently run a video on anti-submarine warfare, which includes the use of similar quantum magnetometers to detect submarines. The magnetometers in the video are based on the Zeeman effect and use laser-pumped helium atoms to detect tiny variations in the Earth’s magnetic field due to large ferrous objects like submarines. Pretty cool video; check it out.

And finally, if you have the slightest interest in civil engineering you’ve got to check out Animagraff’s recent 3D tour of the insides of Hoover Dam. If you thought a dam was just a big, boring block of concrete dumped in the middle of a river, think again. The video is incredibly detailed and starts with accurate 3D models of Black Canyon before the dam was built. Every single detail of the dam is shown, with the “X-ray views” of the dam with the surrounding rock taken away being our favorite bit — reminds us a bit of the book Underground by David Macaulay. But at the end of the day, it’s the enormity of Hoover Dam that really comes across in this video. The way that the structure dwarfs the human-for-scale included in almost every sequence is hard to express — megalophobics, beware. We were also floored by just how much machinery is buried in all that concrete. Sure, we knew about the generators, but the gates on the intake towers and the way the spillways work were news to us. Highly recommended.

Clickspring’s Experimental Archaeology: Concentric Thin-Walled Tubing

25 April 2025 at 08:00

It’s human nature to look at the technological achievements of the ancients — you know, anything before the 1990s — and marvel at how they were able to achieve precision results in such benighted times. How could anyone create a complicated mechanism without the aid of CNC machining and computer-aided design tools? Clearly, it was aliens.

Or, as [Chris] from Click Spring demonstrates by creating precision nesting thin-wall tubing, it was human beings running the same wetware as what’s running between our ears but with a lot more patience and ingenuity. It’s part of his series of experiments into how the craftsmen of antiquity made complicated devices like the Antikythera mechanism with simple tools. He starts by cleaning up roughly wrought brass rods on his hand-powered lathe, followed by drilling and reaming to create three tubes with incremental precision bores. He then creates matching pistons for each tube, with an almost gas-tight enough fit right off the lathe.

Getting the piston fit to true gas-tight precision came next, by lapping with a jeweler’s rouge made from iron swarf recovered from the bench. Allowed to rust and ground to a paste using a mortar and pestle, the red iron oxide mixed with olive oil made a dandy fine abrasive, perfect for polishing the metal to a high gloss finish. Making the set of tubes concentric required truing up the bores on the lathe, starting with the inner-most tube and adding the next-largest tube once the outer diameter was lapped to spec.

Easy? Not by a long shot! It looks like a tedious job that we suspect was given to the apprentice while the master worked on more interesting chores. But clearly, it was possible to achieve precision challenging today’s most exacting needs with nothing but the simplest tools and plenty of skill.

To See Within: Detecting X-Rays

23 April 2025 at 14:00

It’s amazing how quickly medical science made radiography one of its main diagnostic tools. Medicine had barely emerged from its Dark Age of bloodletting and the four humours when X-rays were discovered, and the realization that the internal structure of our bodies could cast shadows of this mysterious “X-Light” opened up diagnostic possibilities that went far beyond the educated guesswork and exploratory surgery doctors had relied on for centuries.

The problem is, X-rays are one of those things that you can’t see, feel, or smell, at least mostly; X-rays cause visible artifacts in some people’s eyes, and the pencil-thin beam of a CT scanner can create a distinct smell of ozone when it passes through the nasal cavity — ask me how I know. But to be diagnostically useful, the varying intensities created by X-rays passing through living tissue need to be translated into an image. We’ve already looked at how X-rays are produced, so now it’s time to take a look at how X-rays are detected and turned into medical miracles.

Taking Pictures

For over a century, photographic film was the dominant way to detect medical X-rays. In fact, years before Wilhelm Conrad Röntgen’s first systematic study of X-rays in 1895, fogged photographic plates during experiments with a Crookes tube were among the first indications of their existence. But it wasn’t until Röntgen convinced his wife to hold her hand between one of his tubes and a photographic plate to create the first intentional medical X-ray that the full potential of radiography could be realized.

“Hand mit Ringen” by W. Röntgen, December 1895. Public domain.

The chemical mechanism that makes photographic film sensitive to X-rays is essentially the same as the process that makes light photography possible. X-ray film is made by depositing a thin layer of photographic emulsion on a transparent substrate, originally celluloid but later polyester. The emulsion is a mixture of high-grade gelatin, a natural polymer derived from animal connective tissue, and silver halide crystals. Incident X-ray photons ionize the halogens, creating an excess of electrons within the crystals to reduce the silver halide to atomic silver. This creates a latent image on the film that is developed by chemically converting sensitized silver halide crystals to metallic silver grains and removing all the unsensitized crystals.

Other than in the earliest days of medical radiography, direct X-ray imaging onto photographic emulsions was rare. While photographic emulsions can be exposed by X-rays, it takes a lot of energy to get a good image with proper contrast, especially on soft tissues. This became a problem as more was learned about the dangers of exposure to ionizing radiation, leading to the development of screen-film radiography.

In screen-film radiography, X-rays passing through the patient’s tissues are converted to light by one or more intensifying screens. These screens are made from plastic sheets coated with a phosphorescent material that glows when exposed to X-rays. Calcium tungstate was common back in the day, but rare earth phosphors like gadolinium oxysulfide became more popular over time. Intensifying screens were attached to the front and back covers of light-proof cassettes, with double-emulsion film sandwiched between them; when exposed to X-rays, the screens would glow briefly and expose the film.

By turning one incident X-ray photon into thousands or millions of visible light photons, intensifying screens greatly reduce the dose of radiation needed to create diagnostically useful images. That’s not without its costs, though, as the phosphors tend to spread out each X-ray photon across a physically larger area. This results in a loss of resolution in the image, which in most cases is an acceptable trade-off. When more resolution is needed, single-screen cassettes can be used with one-sided emulsion films, at the cost of increasing the X-ray dose.

Wiggle Those Toes

Intensifying screens aren’t the only place where phosphors are used to detect X-rays. Early on in the history of radiography, doctors realized that while static images were useful, continuous images of body structures in action would be a fantastic diagnostic tool. Originally, fluoroscopy was performed directly, with the radiologist viewing images created by X-rays passing through the patient onto a phosphor-covered glass screen. This required an X-ray tube engineered to operate with a higher duty cycle than radiographic tubes and had the dual disadvantages of much higher doses for the patient and the need for the doctor to be directly in the line of fire of the X-rays. Cataracts were enough of an occupational hazard for radiologists that safety glasses using leaded glass lenses were a common accessory.

How not to test your portable fluoroscope. The X-ray tube is located in the upper housing, while the image intensifier and camera are below. The machine is generally referred to as a “C-arm” and is used in the surgery suite and for bedside pacemaker placements. Source: Nightryder84, CC BY-SA 3.0.

One ill-advised spin-off of medical fluoroscopy was the shoe-fitting fluoroscopes that started popping up in shoe stores in the 1920s. Customers would stick their feet inside the machine and peer at a fluorescent screen to see how well their new shoes fit. It was probably not terribly dangerous for the once-a-year shoe shopper, but pity the shoe salesman who had to peer directly into a poorly regulated X-ray beam eight hours a day to show every Little Johnny’s mother how well his new Buster Browns fit.

As technology improved, image intensifiers replaced direct screens in fluoroscopy suites. Image intensifiers were vacuum tubes with a large input window coated with a fluorescent material such as zinc-cadmium sulfide or sodium-cesium iodide. The phosphors convert X-rays passing through the patient to visible light photons, which are immediately converted to photoelectrons by a photocathode made of cesium and antimony. The electrons are focused by coils and accelerated across the image intensifier tube by a high-voltage field on a cylindrical anode. The electrons pass through the anode and strike a phosphor-covered output screen, which is much smaller in diameter than the input screen. Incident X-ray photons are greatly amplified by the image intensifier, making a brighter image with a lower dose of radiation.

Originally, the radiologist viewed the output screen using a microscope, which at least put a little more hardware between his or her eyeball and the X-ray source. Later, mirrors and lenses were added to project the image onto a screen, moving the doctor’s head out of the direct line of fire. Later still, analog TV cameras were added to the optical path so the images could be displayed on high-resolution CRT monitors in the fluoroscopy suite. Eventually, digital cameras and advanced digital signal processing were introduced, greatly streamlining the workflow for the radiologist and technologists alike.

Get To The Point

So far, all the detection methods we’ve discussed fall under the general category of planar detectors, in that they capture an entire 2D shadow of the X-ray beam after having passed through the patient. While that’s certainly useful, there are cases where the dose from a single, well-defined volume of tissue is needed. This is where point detectors come into play.

Nuclear medicine image, or scintigraph, of metastatic cancer. 99Tc accumulates in lesions in the ribs and elbows (A), which are mostly resolved after chemotherapy (B). Note the normal accumulation of isotope in the kidneys and bladder. Kazunari Mado, Yukimoto Ishii, Takero Mazaki, Masaya Ushio, Hideki Masuda and Tadatoshi Takayama, CC BY-SA 2.0.

In medical X-ray equipment, point detectors often rely on some of the same gas-discharge technology that DIYers use to build radiation detectors at home. Geiger tubes and ionization chambers measure the current created when X-rays ionize a low-pressure gas inside an electric field. Geiger tubes generally use a much higher voltage than ionization chambers, and tend to be used more for radiological safety, especially in nuclear medicine applications, where radioisotopes are used to diagnose and treat diseases. Ionization chambers, on the other hand, were often used as a sort of autoexposure control for conventional radiography. Tubes were placed behind the film cassette holders in the exam tables of X-ray suites and wired into the control panels of the X-ray generators. When enough radiation had passed through the patient, the film, and the cassette into the ion chamber to yield a correct exposure, the generator would shut off the X-ray beam.

Another kind of point detector for X-rays and other kinds of radiation is the scintillation counter. These use a crystal, often cesium iodide or sodium iodide doped with thallium, that releases a few visible light photons when it absorbs ionizing radiation. The faint pulse of light is greatly amplified by one or more photomultiplier tubes, creating a pulse of current proportional to the amount of radiation. Nuclear medicine studies use a device called a gamma camera, which has a hexagonal array of PM tubes positioned behind a single large crystal. A patient is injected with a radioisotope such as the gamma-emitting technetium-99, which accumulates mainly in the bones. Gamma rays emitted are collected by the gamma camera, which derives positional information from the differing times of arrival and relative intensity of the light pulse at the PM tubes, slowly building a ghostly skeletal map of the patient by measuring where the 99Tc accumulated.
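The position-estimation part of that is easy to sketch: treat each PM tube’s pulse amplitude as a weight and take the intensity-weighted centroid of the tube positions, a simplified version of the classic Anger logic. The tube layout and signal values below are invented purely for illustration:

```python
# Each entry: (x, y) position of a PM tube in cm, and its measured pulse amplitude
pm_tubes = [
    ((0.0, 0.0), 0.12),
    ((3.0, 0.0), 0.55),   # strongest signal: the scintillation flash was nearest this tube
    ((6.0, 0.0), 0.20),
    ((3.0, 3.0), 0.13),
]

total = sum(amplitude for _, amplitude in pm_tubes)
x_est = sum(x * amplitude for (x, _), amplitude in pm_tubes) / total
y_est = sum(y * amplitude for (_, y), amplitude in pm_tubes) / total

print(f"estimated event position: ({x_est:.2f}, {y_est:.2f}) cm")
```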

Going Digital

Despite dominating the industry for so long, the days of traditional film-based radiography were clearly numbered once solid-state image sensors began appearing in the 1980s. While it was reliable and gave excellent results, film development required a lot of infrastructure and expense, and resulted in bulky films that required a lot of space to store. The savings from doing away with all the trappings of film-based radiography, including the darkrooms, automatic film processors, chemicals, silver recycling, and often hundreds of expensive film cassettes, is largely what drove the move to digital radiography.

After briefly flirting with phosphor plate radiography, where a sensitized phosphor-coated plate was exposed to X-rays and then “developed” by a special scanner before being recharged for the next use, radiology departments embraced solid-state sensors and fully digital image capture and storage. Solid-state sensors come in two flavors: indirect and direct. Indirect sensor systems use a large matrix of photodiodes on amorphous silicon to measure the light given off by a scintillation layer directly above it. It’s basically the same thing as a film cassette with intensifying screens, but without the film.

Direct sensors, on the other hand, don’t rely on converting the X-rays into light. Rather, a large flat selenium photoconductor is used; X-rays absorbed by the selenium cause electron-hole pairs to form, which migrate to a matrix of fine electrodes on the underside of the sensor. The charge collected at each pixel is proportional to the amount of radiation received, and can be read out pixel by pixel to build up a digital image.

A Scratch-Built Commodore 64, Turing Style

23 April 2025 at 08:00

Building a Commodore 64 is among the easier projects for retrocomputing fans to tackle. That’s because the C64’s core chipset does most of the heavy lifting; source those and you’re probably 80% of the way there. But what if you can’t find those chips, or if you want more of a challenge than plugging and chugging? Are you out of luck?

Hardly. The video below from [DrMattRegan] is the first in a series on his scratch-built C64 that doesn’t use the core chipset, and it looks pretty promising. This video concentrates on building a replacement for the 6502 microprocessor — actually the 6510, but close enough — using just a couple of EPROMs, some SRAM chips, and a few standard logic chips to glue everything together. He uses the EPROMs as a “rulebook” that contains the code to emulate the 6502 — derived from his earlier Turing 6502 project — and the SRAM chips as a “notebook” for scratch memory and registers to make a Turing-complete random access machine.
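The “rulebook and notebook” split is essentially a table-driven state machine: the current state plus the symbol under the head index into a fixed table that says what to write, where to move, and which state comes next. Here is a toy sketch of that idea in Python, a tiny Turing-style machine that adds one to a binary number; it has nothing to do with the actual 6502 microcode stored in [DrMatt]’s EPROMs:

```python
# "Rulebook" (the EPROM stand-in): (state, symbol) -> (symbol_to_write, head_move, next_state)
# This toy machine increments a binary number written on the tape, most significant bit first.
RULEBOOK = {
    ("seek_end", 0): (0, +1, "seek_end"),
    ("seek_end", 1): (1, +1, "seek_end"),
    ("seek_end", None): (None, -1, "carry"),  # ran off the right edge: back up and start adding
    ("carry", 1): (0, -1, "carry"),           # 1 plus carry is 0, keep carrying to the left
    ("carry", 0): (1, 0, "halt"),             # 0 plus carry is 1, done
    ("carry", None): (1, 0, "halt"),          # carried past the MSB: the number grows a digit
}

def run(tape):
    notebook = {"state": "seek_end", "head": 0}  # the SRAM stand-in: scratch state and "registers"
    while notebook["state"] != "halt":
        on_tape = 0 <= notebook["head"] < len(tape)
        symbol = tape[notebook["head"]] if on_tape else None
        write, move, next_state = RULEBOOK[(notebook["state"], symbol)]
        if on_tape:
            tape[notebook["head"]] = write
        elif write is not None:
            tape.insert(0, write)  # grow the tape at the MSB end
        notebook["head"] += move
        notebook["state"] = next_state
    return tape

print(run([1, 0, 1, 1]))  # 1011 (11) plus 1 -> [1, 1, 0, 0] (12)
```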

[DrMatt] has made good progress so far, with the core 6502 CPU built on a PCB and able to run the Apple II version of Pac-Man as a benchmark. We’re looking forward to the rest of this series, but in the meantime, a look back at his VIC-less VIC-20 project might be informative.

Thanks to [Clint] for the tip.

Hackaday Links: April 20, 2025

20 April 2025 at 23:00

We appear to be edging ever closer to a solid statement of “We are not alone” in the universe with this week’s announcement of the detection of biosignatures in the atmosphere of exoplanet K2-18b. The planet, which is 124 light-years away, has been the focus of much attention since it was discovered in 2015 using the Kepler space telescope because it lies in the habitable zone around its red-dwarf star. Initial observations with Hubble indicated the presence of water vapor, and follow-up investigations using the James Webb Space Telescope detected all sorts of goodies in the atmosphere, including carbon dioxide and methane. But more recently, JWST saw signs of dimethyl sulfide (DMS) and dimethyl disulfide (DMDS), organic molecules which, on Earth, are strongly associated with biological processes in marine bacteria and phytoplankton.

The team analyzing the JWST data says that the data is currently pretty good, with a statistical significance of 99.7%. That’s a three-sigma result, and while it’s promising, it’s not quite good enough to seal the deal that life evolved more than once in the universe. If further JWST observations manage to firm that up to five sigma, it’ll be the most important scientific result of all time. To our way of thinking, it would be much more significant than finding evidence of ancient or even current life in our solar system, since cross-contamination is so easy in the relatively cozy confines of the Sun’s gravity well. K2-18b is far enough away from our system as to make that virtually impossible, and that would say a lot about the universality of biochemical evolution. It could also provide an answer to the Fermi Paradox, since it could indicate that the galaxy is actually teeming with life but under conditions that make it difficult to evolve into species capable of making detectable techno-signatures. It’s hard to build a radio or a rocket when you live on a high-g water world, after all.

Closer to home, there’s speculation that the famous Antikythera mechanism may not have worked at all in its heyday. According to researchers from Universidad Nacional de Mar del Plata in Argentina, “the world’s first analog computer” could not have worked due to the accumulated mechanical error of its gears. They blame this on the shape of the gear teeth, which appear triangular on CT scans of the mechanism, and which they seem to attribute to manufacturing defects. Given the 20-odd centuries the brass-and-iron device spent at the bottom of the Aegean Sea and the potential for artifacts in CT scans, we’re not sure it’s safe to pin the suboptimal shape of the gear teeth on the maker of the mechanism. They also seem to call into question the ability of 1st-century BCE craftsmen to construct a mechanism with sufficient precision to serve as a useful astronomical calculator, a position that Chris from Clickspring has been putting the lie to with his ongoing effort to reproduce the Antikythera mechanism using ancient tools and materials. We’re keen to hear what he has to say about this issue.

Speaking of questionable scientific papers, have you heard about “vegetative electron microscopy”? It’s all the rage, having been mentioned in at least 22 scientific papers recently, even though no such technique exists. Or rather, it didn’t exist until around 2017, when it popped up in a couple of Iranian scientific papers. How it came into being is a bit of a mystery, but it may have started with faulty scans of a paper from the 1950s, which had the terms “vegetative” and “electron microscopy” printed in different columns but directly across from each other. That somehow led to the terms getting glued together, possibly in one of those Iranian papers because the Farsi spelling of “vegetative” is very similar to “scanning,” a much more sensible prefix to “electron microscopy.” Once the nonsense term was created, it propagated into subsequent papers of dubious scientific provenance by authors who didn’t bother to check their references, or perhaps never existed in the first place. The wonders of our AI world never cease to amaze.

And finally, from the heart of Silicon Valley comes a tale of cyber hijinks as several crosswalks were hacked to taunt everyone’s favorite billionaires. Twelve Palo Alto crosswalks were targeted by persons unknown, who somehow managed to gain access to the voice announcement system in the crosswalks and replaced the normally helpful voice messages with deep-fake audio of Elon Musk and Mark Zuckerberg saying ridiculous but plausible things. Redwood City and Menlo Park crosswalks may have also been attacked, and soulless city officials responded by disabling the voice feature. We get why they had to do it, but as cyberattacks go, this one seems pretty harmless.

Designing an FM Drum Synth from Scratch

17 April 2025 at 20:00

How it started: a simple repair job on a Roland drum machine. How it ended: a scratch-built FM drum synth module that’s completely analog, and completely cool.

[Moritz Klein]’s journey down the analog drum machine rabbit hole started with a Roland TR-909, a hybrid drum machine from the mid-80s that combined sampled sounds with analog synthesis. The unit [Moritz] picked up was having trouble with the decay on the kick drum, so he spread out the gloriously detailed schematic and got to work. He breadboarded a few sections of the kick drum circuit to aid troubleshooting, but one thing led to another and he was soon in new territory.

The video below is on the longish side, with the first third or so dedicated to recreating the circuits used to create the 909’s iconic sound, slightly modifying some of them to simplify construction. Like the schematic that started the whole thing, this section of the video is jam-packed with goodness, too much to detail here. But a few of the gems that caught our eye were the voltage-controlled amplifier (VCA) circuit that seems to make appearances in multiple places in the design, and the dead-simple wave-shaper circuit, which takes some of the harmonics out of the triangle wave oscillator’s output with just a couple of diodes and some resistors.

Once the 909’s kick and toms section had been breadboarded, [Moritz] turned his attention to adding something Roland hadn’t included: frequency modulation. He did this by adding a second, lower-frequency voltage-controlled oscillator (VCO) and using that to modulate the drum section. That resulted in a weird, metallic sound that can be tuned to imitate anything from a steel drum to a bell. He also added a hi-hat and cymbal section by mixing the square wave outputs on the VCOs through a funky XOR gate made from discrete components and a high-pass filter.
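To get a feel for why that second oscillator makes things sound metallic, here is a minimal sketch of FM percussion in Python with numpy, writing the result out as a WAV file via scipy. It is not a model of [Moritz]’s circuit; the frequencies, envelopes, and modulation index are arbitrary starting points to play with:

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
t = np.linspace(0, 0.6, int(SR * 0.6), endpoint=False)

carrier_hz = 180.0    # the "drum" oscillator
modulator_hz = 290.0  # second, non-harmonically related oscillator
mod_index = 4.0       # how hard the modulator bends the carrier

amp_env = np.exp(-6.0 * t)  # percussive amplitude decay
mod_env = np.exp(-9.0 * t)  # modulation dies away a bit faster than the amplitude

phase = (2 * np.pi * carrier_hz * t
         + mod_index * mod_env * np.sin(2 * np.pi * modulator_hz * t))
drum = amp_env * np.sin(phase)

wavfile.write("fm_drum.wav", SR, (drum * 32767).astype(np.int16))
```

Non-integer ratios between carrier and modulator are what push the timbre away from a plain drum and toward bell- or steel-drum-like sounds.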

There’s a lot of information packed into this video, and by breaking everything down into small, simple blocks, [Moritz] makes it easy to understand analog synths and the circuits behind them.

An Absolute Zero of a Project

17 April 2025 at 02:00

How would you go about determining absolute zero? Intuitively, it seems like you’d need some complicated physics setup with lasers and maybe some liquid helium. But as it turns out, all you need is some simple lab glassware and a heat gun. And a laser, of course.

To be clear, the method that [Markus Bindhammer] describes in the video below is only an estimation of absolute zero via Charles’s Law, which describes how gases expand when heated. To gather the needed data, [Marb] used a 50-ml glass syringe mounted horizontally on a stand and fitted with a thermocouple. Across from the plunger of the syringe he placed a VL6180 laser time-of-flight sensor, to measure the displacement of the plunger as the air within it expands.

Data from the TOF sensor and the thermocouple were recorded by a microcontroller as the air inside the syringe was gently heated. Plotting the volume of the gas versus temperature shows a nicely linear relationship, and a linear regression can be used to calculate the temperature at which the volume of the gas would be zero. The result: -268.82 °C, only about four degrees off from the accepted value of -273.15 °C. Not too shabby.
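The math underneath is nothing more than a linear fit extrapolated to zero volume. Here is a minimal sketch with numpy; the temperature and volume pairs are invented stand-ins, not measurements from the video:

```python
import numpy as np

# Invented example readings: temperature in deg C, syringe volume in ml
temp_c = np.array([22.0, 40.0, 60.0, 80.0, 100.0, 120.0])
vol_ml = np.array([29.5, 31.4, 33.3, 35.3, 37.3, 39.3])

# Charles's law says V is proportional to absolute temperature,
# so volume versus temperature in deg C is a straight line...
slope, intercept = np.polyfit(temp_c, vol_ml, 1)

# ...and absolute zero is where that line crosses V = 0
absolute_zero_c = -intercept / slope
print(f"estimated absolute zero: {absolute_zero_c:.1f} deg C")  # lands close to -273
```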

[Marb] has been on a tear lately with science projects like these; check out his open-source blood glucose measurement method or his all-in-one electrochemistry lab.

Homemade VNA Delivers High-Frequency Performance on a Budget

16 April 2025 at 11:00

With vector network analyzers, the commercial offerings seem to come in two flavors: relatively inexpensive but limited capabilities, and full-featured but scary expensive. There doesn’t seem to be much middle ground, especially if you want something that performs well in the microwave bands.

Unless, of course, you build your own vector network analyzer (VNA). That’s what [Henrik Forsten] did, and we’ve got to say we’re even more impressed by the results than we were with his earlier effort. That version was not without its problems, and fixing them was very much on the list of goals for this build. Keeping the build affordable was also key, which resulted in some design compromises while still meeting [Henrik]’s measurement requirements.

The Bill of Materials includes dual-channel broadband RF mixer chips, high-speed 12-bit ADCs, and a fast FPGA to handle the torrent of data and run the digital signal processing functions. The custom six-layer PCB is on the large side and includes large cutouts for the directional couplers, which use short lengths of stripped coaxial cable lined with ferrite rings. To properly isolate signals between stages, [Henrik] sandwiched the PCB between a two-piece aluminum enclosure. Wisely, he printed a prototype enclosure and lined it with aluminum foil to test for fit and function before committing to milling the final version. He did note some leakage around the SMA connectors, but a few RF gaskets made from scraps of foil and solder braid did the trick.

This is a pretty slick build, especially considering he managed to keep the price tag at a very reasonable $300. It’s more expensive than the popular NanoVNA or its clones, but it seems like quite a bargain considering its capabilities.

Shine On You Crazy Diamond Quantum Magnetic Sensor

15 April 2025 at 11:00

We’re probably all familiar with the Hall Effect, at least to the extent that it can be used to make solid-state sensors for magnetic fields. It’s a cool bit of applied physics, but there are other ways to sense magnetic fields, including leveraging the weird world of quantum physics with this diamond, laser, and microwave open-source sensor.

Having never heard of quantum sensors before, we took the plunge and read up on the topic using some of the material provided by [Mark C] and his colleagues at Quantum Village. The gist of it seems to be that certain lab-grown diamonds can be manufactured with impurities such as nitrogen, which disrupt the normally very orderly lattice of carbon atoms and create “nitrogen vacancies,” small pockets within the diamond with extra electrons. Shining a green laser on N-V diamonds can stimulate those electrons to jump up to higher energy states, releasing red light when they return to the ground state. Turning this into a sensor involves sweeping the N-V diamond with microwave energy in the presence of a magnetic field, which modifies the spin states of the electrons and hence how much red light is emitted.

Building a practical version of this quantum sensor isn’t as difficult as it sounds. The trickiest part seems to be building the diamond assembly, which has the N-V diamond — about the size of a grain of sand and actually not that expensive — potted in clear epoxy along with a loop of copper wire for the microwave antenna, a photodiode, and a small fleck of red filter material. The electronics primarily consist of an ADF4351 phase-locked loop RF signal generator and a 40-dB RF amplifier to generate the microwave signals, a green laser diode module, and an ESP32 dev board.
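Turning the measurement into a field reading is refreshingly simple: the two fluorescence dips split symmetrically around the N-V center’s 2.87 GHz zero-field resonance at roughly 28 GHz per tesla, so the field along the N-V axis falls straight out of the dip spacing. A minimal sketch in Python, using those standard N-V constants and invented dip frequencies:

```python
ZERO_FIELD_SPLITTING_HZ = 2.870e9  # N-V center resonance with no applied field
GYROMAGNETIC_HZ_PER_T = 28.0e9     # approximate slope of the Zeeman splitting

def field_from_dips(f_lower_hz: float, f_upper_hz: float) -> float:
    """Estimate the magnetic field along the N-V axis from the two ODMR dip frequencies."""
    return (f_upper_hz - f_lower_hz) / (2 * GYROMAGNETIC_HZ_PER_T)

# Invented example: dips measured at 2.842 GHz and 2.898 GHz
b_tesla = field_from_dips(2.842e9, 2.898e9)
print(f"field along the N-V axis: {b_tesla * 1e3:.2f} mT")  # -> 1.00 mT
```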

All the design files and firmware have been open-sourced, and everything about the build seems quite approachable. The write-up emphasizes Quantum Village’s desire to make this quantum technology’s “Apple II moment,” which we heartily endorse. We’ve seen N-V sensors detailed before, but this project might make it easier to play with quantum physics at home.
