Dwingeloo to Venus: Report of a Successful Bounce

28 March 2025 at 11:00
Dwingeloo telescope with sun shining through

Radio waves travel fast, and they can bounce, too. But can they go all the way to Venus and back? If you are able to operate a 25-meter dish, a transmitter, a solid software-defined radio, and an atomic clock, the answer is yes. On March 22, 2025, the Dwingeloo telescope in the Netherlands successfully pulled off an Earth-Venus-Earth (EVE) bounce, making its operators only the second group of amateurs ever to do so. The full breakdown of this feat is available in their write-up here.

Bouncing signals off planets isn’t new. NASA has been at it since the 1960s – but amateur radio astronomers have far fewer toys to play with. Before Dwingeloo’s success, AMSAT-DL achieved the only known amateur EVE bounce back in 2009. This time, the Dwingeloo team transmitted a 278-second tone at 1299.5 MHz, with the round trip to Venus taking about 280 seconds. Stockert’s radio telescope in Germany also picked up the returning echo, stronger than Dwingeloo’s own, due to its more sensitive receiving setup.
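
For a sanity check on that ~280 seconds, the round-trip time follows directly from the Earth-Venus distance, which sits around 42 million kilometres near inferior conjunction. A minimal sketch, with the exact distance on March 22 taken as an assumption:

```python
# Rough check of the ~280 second round trip quoted above.
# Assumption: an Earth-Venus distance of ~42 million km, typical near
# inferior conjunction; the actual distance on 2025-03-22 differs slightly.
C = 299_792_458          # speed of light, m/s
EARTH_VENUS_M = 42e9     # assumed one-way distance, metres

round_trip_s = 2 * EARTH_VENUS_M / C
print(f"Expected round-trip delay: {round_trip_s:.0f} s")  # ~280 s
```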

Post-processing wasn’t easy either. Doppler shift corrections had to be applied, and the received signal was split into 1 Hz frequency bins. The resulting detections clocked in at 5.4 sigma for Dwingeloo alone, 8.5 sigma for Stockert’s recording, and 9.2 sigma when combining both datasets. A clear signal, loud and proud, straight from Venus’ surface.

The experiment was cut short when Dwingeloo’s transmitter started failing after four successful bounces. More complex signal modulations will have to wait for the next Venus conjunction in October 2026. Until then, you can read our previously published article on achievements of the Dwingeloo telescope.

General Fusion Claims Success with Magnetized Target Fusion

By: Maya Posch
27 March 2025 at 14:00

It’s rarely appreciated just how much more complicated nuclear fusion is than nuclear fission. Whereas the latter involves a process that happens all around us without any human involvement, and where the main challenge is to keep the nuclear chain reaction within safe bounds, nuclear fusion means making atoms do something that goes against their very nature, outside of a star’s interior.

Fusing hydrogen isotopes can be done on Earth fairly readily these days, but doing it in a way that’s repeatable — bombs don’t count — and that makes economic sense is trickier. As covered previously, plasma stability is a problem with the popular approach of tokamak-based magnetic confinement fusion (MCF). Although this core problem has now been largely addressed, and stellarators are mostly unbothered by it, a Canadian start-up figures that they can do even better, in the form of a nuclear fusion reactor based around the principle of magnetized target fusion (MTF).

Although General Fusion’s piston-based fusion reactor leaves many people rather confused, MTF is based on real physics, and with GF’s current LM26 prototype having recently achieved first plasma, this seems like an excellent time to ask what MTF is, and whether it can truly compete with billion-dollar tokamak-based projects.

Squishing Plasma Toroids

Lawson criterion of important magnetic confinement fusion experiments (Credit: Horvath, A., 2016)

In general, to achieve nuclear fusion, the target nuclei have to be pushed past the Coulomb barrier, the electrostatic repulsion that normally prevents them from getting close enough to each other to fuse. In stars, the process of nucleosynthesis is enabled by the intense pressure due to the star’s mass, which overcomes this electrostatic force.

Replicating the nuclear fusion process requires a similar way to overcome the Coulomb barrier, but in lieu of even a small star like our Sun, we need other means: much higher temperatures, alternative ways to provide pressure, and longer confinement times. The efficiency of each approach was originally captured in the Lawson criterion, which was developed by John D. Lawson in a (then classified) 1955 paper (PDF on Archive.org).

In order to achieve a self-sustaining fusion reaction, the energy losses should be less than the energy produced by the reaction. The break-even point here is expressed as having a Q (energy gain factor) of 1, where the added energy and losses within the fusion process are in balance. For sustained fusion with excess energy generation, the Q value should be higher than 1, typically around 5 for contemporary fuels and fusion technology.
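
For reference, the definition of Q and the modern "triple product" shorthand for the Lawson criterion look roughly like this for deuterium-tritium fuel (the exact threshold depends on temperature and confinement scheme):

$$Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}}, \qquad n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21}\ \mathrm{keV \cdot s \cdot m^{-3}}$$

where $n$ is the plasma density, $T$ its temperature, and $\tau_E$ the energy confinement time.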

In the slow march towards ignition, we have seen many reports in the popular media that turn out to be rather meaningless, such as the horrendous inefficiency demonstrated by the laser-based inertial confinement fusion (ICF) at the National Ignition Facility (NIF). This makes it rather fascinating that what General Fusion is attempting is closer to ICF, just without the lasers and artisan Hohlraum-based fuel pellets.

Instead they use a plasma injector, a type of plasma railgun called a Marshall gun, which produces a hydrogen isotope plasma that is subsequently contained in a magnetic field as a self-stable compact toroid. This toroid is then squished by a mechanical system in a matter of milliseconds, with the resulting compression inducing fusion. Creating this toroid is the feat that was recently demonstrated in the current Lawson Machine 26 (LM26) prototype reactor, with its first plasma in the target chamber.

Magneto-Inertial Fusion

Whereas magnetic confinement fusion does effectively what it says on the tin, magnetized target fusion is pretty much a hybrid of magnetic confinement fusion and laser-based inertial confinement fusion. Because the magnetic containment is only there to keep the plasma in a nice stable toroid, it doesn’t have nearly the same requirements as in a tokamak or stellarator. Yet rather than using complex and power-hungry lasers, MTF applies mechanical energy using an impulse driver — the liner — that rapidly compresses the low-density plasma toroid.

Schematic of the Lawson Machine 26 MTF reactor. (Credit: General Fusion)

The juiciest parts of General Fusion’s experimental setup can be found in the Research Library on the GF website. The above graphic was copied from the LM26 poster (PDF), which provides a lot of in-depth information on the components of the device and its operation, as well as the experiments that informed its construction.

The next step will be to test the ring compressor that is designed to collapse the lithium liner around the plasma toroid, compressing it and achieving fusion.

Long Road Ahead

Interpretation of General Fusion’s commercial MTF reactor design. (Credit: Evan Mason)

As promising as this may sound, there is still a lot of work to do before MTF can be considered a viable option for commercial fusion. As summarized on the Wikipedia entry for General Fusion, the goal is to have a liquid liner rather than the solid lithium liner of LM26. This liquid lithium liner would both breed new tritium fuel from neutron exposure and provide the liner that compresses the deuterium-tritium fuel.

This liquid liner would also provide cooling, linked with a heat exchanger or steam generator to generate electricity. Because the liquid liner would be infinitely renewable, it should allow for about 1 cycle per second. To keep the liquid liner in place on the inside of the sphere, it would need to be constantly spun, further complicating the design.

Although getting plasma in the reaction chamber where it can be squished by the ring compressor’s lithium liner is a major step, the real challenge will be in moving from a one-cycle-a-day MTF prototype to something that can integrate not only the aforementioned features, but also run one cycle per second, while being more economical to run than tokamaks, stellarators, or even regular nuclear fission plants, especially Gen IV fast neutron reactors.

That said, there is a strong argument to be made that MTF is significantly more practical for commercial power generation than ICF. And regardless, it is just really cool science and engineering.

Top image: General Fusion’s Lawson Machine 26. (Credit: General Fusion)

Twisting Magnetism to Control Electron Flow

23 March 2025 at 02:00
Microscopic view of chiral magnetic material

If you ever wished electrons would just behave, this one’s for you. A team from Tohoku, Osaka, and Manchester Universities has cracked open an interesting phenomenon in the chiral helimagnet α-EuP3: they’ve induced one-way electron flow without bringing diodes into play. Their findings are published in the Proceedings of the National Academy of Sciences.

The twist in this is quite literal. By coaxing europium atoms into a chiral magnetic spiral, the researchers found they could generate rectification: current that prefers one direction over another. Think of it as adding a one-way street in your circuit, but based on magnetic chirality rather than semiconductors. When the material flips to an achiral (ferromagnetic) state, the one-way effect vanishes. No asymmetry, no preferential flow. They’ve essentially toggled the electron highway signs with an external magnetic field. This elegant control over band asymmetry might lead to low-power, high-speed data storage based on magnetic chirality.

If you are curious how all this ties back to quantum theory, you can trace the roots of chiral electron flow back to the early days of quantum electrodynamics – when physicists first started untangling how particles and fields really interact.

There’s a whole world of weird physics waiting for us. Over in chemistry, chirality has been covered by Hackaday before, including a look at its less favorable uses. Read up on the article and share with us what you think.

Biosynthesis of Polyester Amides in Engineered Escherichia Coli

By: Maya Posch
22 March 2025 at 08:00

Polymers are one of the most important elements of modern-day society, particularly in the form of plastics. Unfortunately most common polymers are derived from fossil resources, which not only makes them a finite resource, but is also problematic from a pollution perspective. A potential alternative being researched is that of biopolymers, in particular those produced by microorganisms such as everyone’s favorite bacterium Escherichia coli (E. coli).

These bacteria were the subject of a recent biopolymer study by [Tong Un Chae] et al., as published in Nature Chemical Biology (paywalled, breakdown on Ars Technica).

By genetically engineering E. coli bacteria to use one of their survival energy storage pathways instead for synthesizing long chains of polyester amides (PEAs), the researchers were able to make the bacteria create long chains of mostly pure PEA. A complication here is that this modified pathway is not exactly picky about what amino acid monomers to stick onto the chain next, including metabolism products.

Although using genetically engineered bacteria for the synthesis of products on an industrial scale isn’t uncommon (see e.g. the synthesis of insulin), it would seem that biosynthesis of plastics using our prokaryotic friends isn’t quite ready yet to graduate from laboratory experiments.

Producing Syngas From CO2 and Sunlight With Direct Air Capture

By: Maya Posch
22 March 2025 at 02:00
The prototype DACCU device for producing syngas from air. (Credit: Sayan Kar, University of Cambridge)

There is more carbon dioxide (CO2) in the atmosphere these days than ever before in human history, and while it would be marvelous to use these carbon atoms for something more useful, capturing CO2 directly from the air isn’t that easy. Once captured, it would also be great if you could do something more with it than stuffing it into a big hole. Something like producing syngas (CO + H2), for example, as demonstrated by researchers at the University of Cambridge.

Among the improvements claimed in the paper as published in Nature Energy for this direct air capture and utilization (DACCU) approach is that it does not require a pure CO2 feedstock, but instead adsorbs it directly from the air passing over a bed of solid silica-amine. After adsorption, the CO2 can be released again by exposure to concentrated light. Following this, the conversion to syngas is accomplished by passing it over a second bed consisting of silica/alumina-titania-cobalt bis(terpyridine), which acts as a photocatalyst.

The envisioned usage scenario would be CO2 adsorption during the night, with concentrated solar power releasing it during the day for subsequent production of syngas. Inlet air would be passed over the adsorption section only, with the inlet switched off during the syngas-generating phase. As a lab proof-of-concept it seems to work well, with the outlet air stripped of virtually all CO2 and a very high conversion ratio from CO2 to syngas.

Syngas has historically been used as a replacement for gasoline, but it is also used as a source of hydrogen (produced today mostly by steam methane reforming (SMR) of natural gas), for the reduction of iron ore, and for the production of methanol as a precursor to many industrial processes. Whether this DACCU approach provides a viable alternative to SMR and other existing technologies will become clear once this technology moves from the lab into the real world.

Thanks to [Dan] for the tip.

Current Mirrors Tame Common Mode Noise

18 March 2025 at 02:00
Long-tail pair waves

If you’re the sort who finds beauty in symmetry – and I’m not talking about your latest PCB layout – then you’ll appreciate this clever take on the long-tailed pair. [Kevin]’s video on this topic explores boosting common mode rejection by swapping out the old-school tail resistor for a current mirror. Yes, the humble current mirror – long underestimated in DIY analog circles – steps up here, giving his differential amplifier a much-needed backbone.

So why does this matter? Well, in Kevin’s bench tests, this hack more than doubles the common mode rejection, leaping from a decent 35 dB to a noise-crushing 93 dB. That’s not just tweaking for tweaking’s sake; that’s taking a breadboard standard and making it ready for sensitive, low-level signal work. Instead of wrestling with mismatched transistors or praying to the gods of temperature stability, he opts for a practical approach. A couple of matched NPNs, a pair of emitter resistors, and a back-of-the-envelope resistor calculation – and boom, clean differential gain without the common mode muck.
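
To see why the tail impedance matters so much, here is a minimal sketch using the textbook approximations for a long-tailed pair; the component values are illustrative assumptions, not taken from [Kevin]'s circuit:

```python
# CMRR of a long-tailed pair: A_diff ~ gm*Rc/2 (single-ended), A_cm ~ Rc/(2*R_tail).
# Raising the tail impedance (resistor -> current mirror) shrinks A_cm, boosting CMRR.
import math

gm = 0.04       # transconductance at ~1 mA collector current, A/V (assumed)
rc = 4_700      # collector load resistor, ohms (assumed)
a_diff = gm * rc / 2

for name, r_tail in [("10 kOhm tail resistor", 10e3),
                     ("current mirror, ~1 MOhm output impedance", 1e6)]:
    a_cm = rc / (2 * r_tail)
    print(f"{name}: CMRR ~ {20 * math.log10(a_diff / a_cm):.0f} dB")
```

The exact numbers depend on transistor matching and Early voltage, but the trend is the point: the higher the tail impedance, the less the pair responds to signals common to both inputs.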

If you want the nitty-gritty details, schematics of the demo circuits are on his project GitHub. Kevin’s explanation is equal parts history lesson and practical engineering, and it’s worth the watch. Keep tinkering, and do share your thoughts on this.

Transmitting Wireless Power Over Longer Distances

By: Maya Posch
16 March 2025 at 14:00
Proof-of-concept of the inductive coupling transmitter with the 12V version of the circuitry (Credit: Hyperspace Pirate, YouTube)

Everyone loves wireless power these days, almost vindicating [Tesla’s] push for it. One reason why transmitting electricity this way is a terrible idea is the massive losses involved once you increase the distance between transmitter and receiver. That said, there are ways to optimize wireless power transfer using inductive coupling, as [Hyperspace Pirate] demonstrates in a recent video.

Starting with small-scale proof of concept coils, the final version of the transmitter is powered off 120 VAC. The system has 10 kV on the coil and uses a half-bridge driver to oscillate at 145 kHz. The receiver matches this frequency precisely for optimal efficiency. The transmitting antenna is a 4.6-meter hexagon with eight turns of 14 AWG wire. During tests, a receiver of similar size could light an LED at a distance of 40 meters with an open circuit voltage of 2.6 V.
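
Matching the receiver to the transmitter comes down to LC resonance. A quick sketch of the tuning math, with the coil inductance as a made-up placeholder (the real 8-turn hexagonal loop will measure differently):

```python
# Receiver tuning: resonance at f = 1 / (2*pi*sqrt(L*C)).
# Assumption: L = 200 uH is a placeholder, not the measured coil inductance.
import math

f = 145e3     # operating frequency from the video, Hz
L = 200e-6    # assumed receiver coil inductance, H

C = 1 / ((2 * math.pi * f) ** 2 * L)
print(f"Required tuning capacitance: {C * 1e9:.1f} nF")
```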

Although it’s also an excellent example of why air core transformers like this are lousy for efficient remote power transfer, a fascinating finding is that intermediate (unpowered) coils between the transmitter and receiver can help to boost the range due to coupling effects. Even if it’s not a practical technology (sorry, [Tesla]), it’s undeniable that it makes for a great science demonstration.

Of course, people do charge phones wirelessly. It works, but it trades efficiency for convenience. Modern attempts at beaming power around seem to focus more on microwaves or lasers.

Building a Fully Automatic Birkeland-Eyde Reactor

By: Maya Posch
16 March 2025 at 02:00

Ever wanted to produce nitrogen fertilizer like they did in the 1900s? In that case, you’re probably looking at the Birkeland-Eyde process, which was the first industrial-scale atmospheric nitrogen fixation process. It was eventually replaced by the Haber-Bosch and Ostwald processes. [Markus Bindhammer] covers the construction of a hobbyist-sized, fully automated reactor in this video.

It uses tungsten electrodes to produce the requisite arc, with a copper rod brazed onto each. The frame is made of aluminium profiles mounted on a polypropylene board, supporting the reaction vessel. Powering the whole contraption is a 24 VDC, 20 A power supply, which powers the flyback transformer for the high-voltage arc, as well as an air pump and smaller electronics, including the Arduino Uno board controlling the system.

The air is dried by silica gel before entering the reactor, with the airflow measured by a mass air flow sensor and the reaction temperature by a temperature sensor. This should give the MCU a full picture of the state of the reaction, with the airflow having to be sufficiently high relative to the arc to extract the maximum yield for this already very low-yield (single-digit %) process.

Usually, we are more interested in getting our nitrogen in liquid form. We’ve also looked at the Haber-Bosch method in the past.

You Too Can Do the Franck-Hertz Experiment

15 March 2025 at 20:00

We talk about quantum states — that is, something can be at one of several discrete values but not in between. For example, a binary digit can be a 1 or a 0, but not 0.3 or 0.5. Atoms have quantum states, but how do we know that? That’s what the Franck-Hertz experiment demonstrates, and [stoppi] shows you how to replicate that famous experiment yourself.

You might need to translate the web page if your German isn’t up to speed, but there’s also a video you can watch below. The basic idea is simple. A gas-filled tube sees a large accelerating voltage between the cathode and grid, and a smaller opposing voltage between the grid and anode. If you increase the grid voltage, you might expect the anode current to increase linearly. However, that doesn’t happen. Instead, you’ll observe dips in the anode current.

When electrons reach a certain energy, they excite the gas in the tube. This robs them of the energy they need to overcome the grid/anode voltage, which explains the dips. As the voltage increases, the current will again start to rise until the electrons once more pick up enough energy to excite the gas, at which point another dip will occur.
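
As a concrete example, in the classic mercury-vapour version of the experiment the first excitation energy is about 4.9 eV, so the dips land near multiples of 4.9 V; [stoppi]'s tube and fill gas may put them elsewhere:

```python
# Expected dip positions for a Franck-Hertz tube, assuming mercury vapour
# with its ~4.9 eV first excitation energy (other gases shift the spacing).
EXCITATION_EV = 4.9

dips_v = [round(n * EXCITATION_EV, 1) for n in range(1, 6)]
print("Anode current dips expected near (V):", dips_v)  # [4.9, 9.8, 14.7, ...]
```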

Why not build a whole lab? Quantum stuff, at a certain level, is weird, but this experiment seems understandable enough.

High-Speed Reservoir Computing With Integrated Laser Graded Artificial Neurons

By: Maya Posch
13 March 2025 at 11:00

So-called neuromorphic computing involves the use of physical artificial neurons to do computing in a way that is inspired by the human brain. With photonic neuromorphic computing these artificial neurons generally use laser sources and structures such as micro-ring resonators and resonant tunneling diodes to inject photons and modulate them akin to biological neurons.

General reservoir computing with laser graded neuron. (Credit: Yikun Nie et al., 2024, Optica)

One limitation of photonic artificial neurons has been their binary response and refractory period, making them unlike the more versatile graded neurons. This has now been addressed by [Yikun Nie] et al. with their research published in Optica.

The main advantage of graded neurons is that they are capable of analog graded responses, combined with no refractory period in which the neuron is unresponsive. For the photonic version, a quantum dot (QD) based gain section was constructed, with the input pulses determining the (analog) output.

Multiple such neurons were then combined on a single die for use in a reservoir computing configuration. This was put through a range of tests, including arrhythmia detection (98% accuracy) and handwriting classification (92% accuracy). With the lasers integrated on the die and the input pulses electrical in nature, the system should be quite low-power as well as fast, featuring 100 GHz QD lasers.

Josephine Cochrane Invented the Modern Dishwasher — In 1886

11 March 2025 at 11:00

Popular Science has an excellent article on how Josephine Cochrane transformed how dishes are cleaned by inventing an automated dish washing machine and obtaining a patent in 1886. Dishwashers had been attempted before, but hers was the first with the revolutionary idea of using water pressure to clean dishes placed in wire racks, rather than relying on some sort of physical scrubber. The very first KitchenAid household dishwashers were based on her machines, making modern dishwashers direct descendants of her original design.

Josephine Cochrane (née Garis)

It wasn’t an overnight success. Josephine faced many hurdles. Saying it was difficult for a woman to start a venture or do business during this period of history doesn’t do justice to just how many barriers existed, even discounting the fact that her late husband was something we would today recognize as a violent alcoholic. One who left her little money and many debts upon his death, to boot.

She was nevertheless able to focus on developing her machine, and eventually hired mechanic George Butters to help create a prototype. The two of them worked in near secrecy, because a man being seen regularly visiting her home was simply asking for trouble. Then there were all the challenges of launching a product in a business world that had little place for a woman. One can sense the weight of it all in a quote from Josephine (shared in a write-up by the USPTO) in which she says, “If I knew all I know today when I began to put the dishwasher on the market, I never would have had the courage to start.”

But Josephine persevered and her invention made a stir at the 1893 World’s Fair in Chicago, winning an award and mesmerizing onlookers. Not only was it invented by a woman, but her dishwashers were used by restaurants on-site to clean tens of thousands of dishes, day in and day out. Her marvelous machine was not yet a household device, but restaurants, hotels, colleges, and hospitals all saw the benefits and lined up to place orders.

Early machines were highly effective, but they were not the affordable, standard household appliances they are today. There certainly existed a household demand for her machine — dishwashing was a tedious chore that no one enjoyed — but household dishwashing was a task primarily done by women. Women did not control purchasing decisions, and it was difficult for men of the time (who did not spend their days washing dishes) to appreciate the benefits. The device was expensive, but it did away with a tremendous amount of labor. Surely the price was justified? Yet women themselves — the ones who would benefit the most — were often not on board. Josephine reflected that many women did not yet seem to think of their own time and comfort as having intrinsic value.

Josephine Cochrane ran a highly successful business and continued to refine her designs. She died in 1913 and it wasn’t until the 1950s that dishwashers — direct descendants of her original design — truly started to become popular with the general public.

Nowadays, dishwashers are such a solved problem that not only are they a feature in an instructive engineering story, but we rarely see anyone building one (though it has happened.)

We have Josephine Cochrane to thank for that. Not just her intellect and ingenuity in coming up with it, but the fact that she persevered enough to bring her creation over the finish line.

You Are Already Traveling at the Speed of Light

10 March 2025 at 11:00

Science fiction authors and readers dream of travelling at the speed of light, but Einstein tells us we can’t. You might think that’s an arbitrary rule, but [FloatHeadPhysics] shows a different way to think about it. Based on a book he’s been reading, “Relativity Visualized,” he provides a graphic argument for relativity that you can see in the video below.

The argument starts off by explaining how a three-dimensional object might appear in a two-dimensional world. In this world, everything is climbing in the hidden height dimension at the exact same speed.

Our 2D friends, of course, can only see the shadow of the 3D object, so if it stays in one place on the table surface, the object never seems to move. However, just as we can measure time with a clock, the flat beings could devise a way to measure height. They would see that the object was moving “through height” at the fixed speed.

Now suppose the object turns a bit and is moving at, say, a 45 degree angle relative to the table top. Now the shadow moves, and the “clock speed” measuring the height slows down. If the object moves totally parallel to the surface, the shadow moves at the fixed speed and the height “clock” doesn’t advance at all.

This neatly explains time dilation and length contraction. It also shows that the speed of light isn’t so much an arbitrary rule as a consequence: everything in the observable universe is moving at the speed of light through spacetime, and moving faster through space simply leaves less of that motion for time.
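
In slightly more formal terms (our summary, not the video's notation), the shadow analogy corresponds to the fact that every object's four-velocity has a fixed magnitude of c:

$$\left(c\frac{dt}{d\tau}\right)^2 - \left(\frac{dx}{d\tau}\right)^2 - \left(\frac{dy}{d\tau}\right)^2 - \left(\frac{dz}{d\tau}\right)^2 = c^2$$

so the more of that fixed budget goes into motion through space, the less is left for motion through time, which is exactly the tilted-shadow picture above.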

Doesn’t make sense? Watch the video and it will. Pretty heady stuff. We love how passionate [FloatHeadPhysics] gets about the topic. If you prefer a funnier approach, turn to the BBC. Or, if you like the hands-on approach, build a cloud chamber and measure some muons.

Taming the Wobble: An Arduino Self-Balancing Bot

9 March 2025 at 14:00
self-stabilizing robot on tabletop

Getting a robot to stand on two wheels without tipping over involves a challenging dance with the laws of physics. Self-balancing robots are a great way to get into control systems, sensor fusion, and embedded programming. This build by [mircemk] shows how to make one with just a few common components, an Arduino, and a bit of patience fine-tuning the PID controller.

At the heart of the bot is the MPU6050 – a combo accelerometer/gyroscope sensor that keeps track of tilt and movement. An Arduino Uno takes this data, runs it through a PID loop, and commands an L298N motor driver to adjust the speed and direction of two DC motors. The power comes from two Li-ion batteries feeding everything with enough juice to keep it upright. The rest of the magic lies in the tuning.

PID (Proportional-Integral-Derivative) control is what makes the robot stay balanced. Kp (proportional gain) determines how aggressively the motors respond to tilting. Kd (derivative gain) dampens oscillations, and Ki (integral gain) helps correct slow drifts. Set them wrong, and your bot either wobbles like a confused penguin or falls flat on its face. A good trick is to start with only Kp, then slowly add Kd and Ki until it stabilizes. Then don’t forget to calibrate your MPU6050; each sensor has unique offsets that need to be compensated in the code.
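
The core of the loop is only a few lines. Here's a minimal sketch of the idea in Python (the real build runs C++ on an Arduino Uno, and the gains below are placeholders, not [mircemk]'s values):

```python
# One PID step: turn the filtered tilt angle into a signed motor command.
KP, KI, KD = 20.0, 0.5, 0.8   # placeholder gains; tune Kp first, then Kd, then Ki
DT = 0.01                     # loop period, seconds

integral, prev_error = 0.0, 0.0

def pid_step(tilt_deg, setpoint_deg=0.0):
    global integral, prev_error
    error = setpoint_deg - tilt_deg
    integral += error * DT                     # Ki term: corrects slow drift
    derivative = (error - prev_error) / DT     # Kd term: damps oscillation
    prev_error = error
    return KP * error + KI * integral + KD * derivative
```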

Once dialed in, the result is a robot that looks like it defies gravity. Whether you’re hacking it for fun, turning it into a segway-like ride, or using it as a learning tool, a balancing bot is a great way to sharpen your control system skills. For more inspiration, check out this earlier attempt from 2022, or these self-balancing robots (one with a little work) from a year before that. You can read up on [mircemk]’s project details here.

This Laser Knows about Gasses

8 March 2025 at 03:00

What’s that smell? If you can’t tell, maybe a new laser system from CU Boulder and NIST can help. The device is simple and sensitive enough to detect gasses at concentrations down to parts per trillion.

The laser at the system’s heart is a frequency comb laser, originally made for optical atomic clocks. The laser has multiple optical frequencies in its output. The gas molecules absorb light of different wavelengths differently, giving each type of molecule a unique fingerprint.

Unlike traditional lasers, which emit a single frequency, a frequency comb laser can emit thousands or millions of colors at once. Its inventors picked up the Nobel Prize in Physics in 2005 for that work.

The gas is placed between two highly reflective mirrors. The beam bounces back and forth in this optical cavity, although previous attempts struggled because the cavity only resonates at particular frequencies. The answer was to jiggle the mirrors, changing the size of the cavity during measurements.

This is one of those things that doesn’t seem very complicated except — whoops — you need an exotic comb laser. But if those ever become widely available, you could probably figure out how to replicate this.

This could revolutionize air quality instruments. Small quantities of hydrogen sulfide can be detected easily (although, paradoxically, too much is hard to smell).

China Claims Commercial Nuclear Fusion by 2050 as Germany Goes Stellarator

By: Maya Posch
5 March 2025 at 12:00

Things are heating up in the world of nuclear fusion research, with most fundamental issues resolved and an increasing rate of announcements being made regarding commercial fusion power. China’s CNNC is one of the most recent voices here, with their statement that they expect to have commercial nuclear fusion plants online by 2050. Although scarce on details, China is one of the leading nations when it comes to nuclear fusion research, with multiple large tokamaks, including the HL-2M and the upcoming CFETR which we covered a few years ago.

Stellaris stellarator. (Credit: Proxima Fusion)

In addition to China’s fusion-related news, a German startup called Proxima Fusion announced their Stellaris commercial fusion plant design concept, with a targeted grid connection by the 2030s. Of note is that this involves a stellarator design, which has the major advantage of inherent plasma stability, dodging the confinement mode and Greenwald density issues that plague tokamaks. The Stellaris design is an evolution of the famous Wendelstein 7-X research stellarator at the Max Planck Institute.

While Wendelstein 7-X was not designed to produce power, it features much of what a commercial reactor would need, from the complex coil design and cooled divertors to demonstrated long-term operation. This makes it quite likely that in the coming decades we’ll be seeing the final sprint toward commercial fusion power, with stellarators conceivably being the unlikely winner, crossing the finish line long before tokamaks.

Make Your Own Air Knife and Air Amplifier

By: Maya Posch
4 March 2025 at 00:00

Want to make your own air knife to cut things with? Unfortunately that’s not what these devices are intended for, but [This Old Tony] will show you how to make your own, while explaining what they are actually meant to do. His version deviates from the commercial unit he got his hands on in that it is round rather than straight, but the concept is the same.

In short, an air knife is a device that produces a very strong and narrow laminar air stream, fed by either compressed air or a blower. Generally, air knives use the Coandă effect to keep the laminar flow attached to the device for as long as possible, entraining the surrounding air and multiplying the total airflow beyond what the air knife itself supplies. These are commonly used for cleaning debris and dust off surfaces in e.g. production lines.

As [Tony] shows in the disassembly of a commercial device, they are quite basic, with just two aluminium plates and a thin shim that creates the narrow opening through which the air can escape. The keyword here is ‘thin shim’, as [Tony] discovers that even a paper shim is already too thick. Amusingly, although he makes a working round air knife this way, it turns out that round devices like this are generally called air amplifiers, such as those from Exair, and are often used for cooling and ventilation, with some having an adjustable opening to tune the resulting airflow.

Some may recognize this principle from those fancy ‘bladeless’ fans that companies like Dyson sell, as they work essentially the same way, just with a fan providing the pressure rather than a compressor.

Sensory Substitution Device Tingles Back Of Your Hand

3 March 2025 at 19:30

A team from the University of Chicago brings us a new spin on sensory substitution with the “Seeing with the Hands” project, turning input from the external environment into sensations. Here, specifically, the focus is on substituting vision with hand sensations, aimed at blind and vision-impaired users. The prototype is quite inspiration-worthy!

On the input side, we have a wrist-mounted camera, backed by a healthy amount of image processing, of course. As for the output, no vibromotors or actuators are in use – instead, small amounts of current are passed through the skin, triggering the touch receptors electrically. An 8×8 array of such “tactile” pixels is placed on the back of the hand and fingers. The examples provided show it to be a decent substitution.

This technique depends on the type of image processing being used, as well as the “resolution” of the pixels, but it’s a fun concept nevertheless, and the study preprint has some great stories to tell. This one’s far from the first sensory substitution device we’ve covered, though quite a few of the others were mechanical in nature – the fewer moving parts, the better, we reckon!

Making a PCR Machine Crypto Sign Its Results

1 March 2025 at 12:00
A PCR machine with its side cover taken off exposing its guts, and the tray extended out

Money, status, or even survival – there’s no shortage of incentives for faking results in the scientific community. What can we do to prevent it, or at least make it noticeable? One possible solution is cryptographic signing of measurement results.

Here’s a proof-of-concept from [Clement Heyd] and [Arbion Halili]. They took a ThermoFisher Scientific 7500 Fast PCR (Polymerase Chain Reaction) machine, isolated its daughter software, and confined it within a pipeline that automatically signs each result with the help of an HSM (Hardware Security Module).

As many machines do, this one has to be paired to a PC running bespoke software. This one’s running Windows XP, at least! The software got shoved into a heavily isolated virtual machine running XP, protected by a TEE (Trusted Execution Environment). The software’s output is piped out of the VM through a data-diode virtual serial port, immediately signed with the HSM, and the signed data is made accessible through a read-only interface. Want to verify the results’ authenticity? Check them against the system’s public key, and you’re golden – in theory.
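
Stripped of the HSM and the VM plumbing, the sign-and-verify idea itself is only a few lines. A minimal sketch with an in-memory Ed25519 key (the real project keeps the private key inside the HSM, and the record format here is made up):

```python
# Sign a measurement record and verify it against the public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # in the real setup this lives in the HSM
public_key = private_key.public_key()

record = b"sample-042,Ct=21.3,result=positive"   # made-up PCR result line
signature = private_key.sign(record)

# Anyone holding the public key can check the record wasn't altered;
# verify() raises InvalidSignature if it was.
public_key.verify(signature, record)
print("Signature OK")
```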

This design is just a part of the puzzle, given a typical chain of custody for samples in medical research, but it’s a solid start – and it happens to help make the Windows XP setup more resilient, too.

Wondering what PCR testing is good for? Tons of things all over the medical field; for instance, we’ve talked about PCR in a fair bit of detail in this article about COVID-19 testing. We’ve also covered a number of hacker-built PCR and PCR-enabling machines, from deceivingly simple to reasonably complex!

Phytoremediation to Clean the Environment and Mine Critical Materials

1 March 2025 at 06:00
A balding man in a blue suit and tie sits behind rows of plants on tables. A bright yellow watering can is close to the camera and out of focus.

Nickel contamination can render soils infertile at levels that are currently impractical to treat. Researchers at UMass Amherst are looking at how plants can help these soils and source nickel for the growing EV market.

Phytoremediation is the use of plants that preferentially hyperaccumulate certain contaminants to clean the soil. When those contaminants are also critical materials, you get phytomining. Starting with Camelina sativa, the researchers are looking to enhance its preference for nickel accumulation with genes from the even more adept hyperaccumulator Odontarrhena to have a quick-growing plant that can be a nickel feedstock as well as produce seeds containing oil for biofuels.

Despite accumulating up to 3% Ni by weight, Odontarrhena was ruled out as a candidate itself due to its slow-growing nature and the fact that it is invasive in the United States. The researchers are also looking into which soil amendments can best help this super Camelina sativa achieve its goals. It’s no panacea for expected nickel demand, but they do project that phytomining could provide 20-30% of our nickel needs for 50 years, at which point the land could be turned back over to other uses.

Recycling things already in technical cycles will be important to a circular economy, but being able to remove contaminants from the environment’s biological cycles and place them into a safer technical cycle instead of just burying them will be a big benefit as well. If you want to learn about a more notorious heavy metal, check out our piece on the blessings and destruction wrought by lead.
