
The SS United States: The Most Important Ocean Liner We May Soon Lose Forever

By: Maya Posch
27 June 2024 at 14:30

Although it’s often said that the era of ocean liners came to an end by the 1950s with the rise of commercial aviation, reality isn’t quite that clear-cut. Coming out of the troubled 1940s arose a new kind of ocean liner, one using cutting-edge materials and propulsion, with hybrid civil and military use as the default, leading to a range of fascinating design decisions. This was the context in which the SS United States was born: with the beating heart of the US’ fastest battleships, lightweight aluminium structures, and survivability built into every single aspect of its design.

Sharing a lot of DNA with the super-fast Iowa-class battleships, but outpacing them thanks to its lack of heavy armor and triple 16″ turrets, it easily became the fastest ocean liner, setting speed records that took decades for other ocean-going vessels to beat, though no ocean liner ever truly did beat it on speed or comfort. Tricked out in the most tasteful non-flammable 1950s art and decorations imaginable, it would still be the fastest and most comfortable way to cross the Atlantic today. Unfortunately, ocean liners are no longer considered a way to travel in this era of commercial aviation, leaving the SS United States and its kin either scrapped or stuck in limbo.

In the case of the SS United States, so far it has managed to escape the cutting torch, but while in limbo many of its fittings were sold off at auction, and the conservation group which is in possession of the ship is desperately looking for a way to fund the restoration. Most recently, the owner of the pier where the ship is moored in Camden, New Jersey got the ship’s eviction approved by a judge, leading to very tough choices to be made by September.

A Unique Design

WW II-era United States Maritime Commission (MARCOM) poster.

The designer of the SS United States was William Francis Gibbs, who despite being a self-taught engineer managed to translate his life-long passion for shipbuilding into a range of very notable ships. Many of these were designed at the behest of the United States Maritime Commission (MARCOM), which was created by the Merchant Marine Act of 1936 and abolished in 1950. MARCOM’s task was to create a merchant shipbuilding program for hundreds of modern cargo ships that would replace the World War I vintage vessels which formed the bulk of the US Merchant Marine. As a hybrid civil and federal organization, the merchant marine is intended to provide the logistical backbone for the US Navy in case of war and large-scale conflict.

The first major vessel to be commissioned for MARCOM was the SS America, which was an ocean liner commissioned in 1939 and whose career only ended in 1994 when it (then named the American Star) wrecked at the Canary Islands. This came after it had been sold in 1992 to be turned into a five-star hotel in Thailand. Drydocking in 1993 had revealed that despite the advanced age of the vessel, it was still in remarkably good condition.

Interestingly, the last merchant marine vessel to be commissioned by MARCOM was the SS United States, which would be a hybrid civilian passenger liner and military troop transport. Its sibling, the SS America, served in the Navy from 1941 to 1946 as the USS West Point (AP-23) and carried over 350,000 troops during the war, more than any other Navy troopship. Its big sister would thus be required to do all that and much more.

Need For Speed

SS United States colorized promotional B&W photograph. The ship’s name and an American flag have been painted in position here as both were missing when this photo was taken during 1952 sea trials.

William Francis Gibbs’ naval architecture firm – called Gibbs & Cox by 1950 after Daniel H. Cox joined – was tasked to design the SS United States, which was intended to be a display of the best the United States of America had to offer. It would be the largest, fastest ocean liner and thus also the largest and fastest troop and supply carrier for the US Navy.

Courtesy of the major metallurgical advances during WW II, and with the full backing of the US Navy, the design featured a military-style propulsion plant and a heavily compartmentalized layout following that of e.g. the Iowa-class battleships. This meant two separate engine rooms and similar levels of redundancy elsewhere, to isolate any flooding and other types of damage. Meanwhile the superstructure was built out of aluminium, making it both very light and heavily corrosion-resistant. The eight US Navy M-type boilers (run at only 54% of capacity) and a four-shaft propeller design took lessons learned with fast US Navy ships to reduce vibrations and cavitation to a minimum. These lessons include the five- and four-bladed propeller design also used on the Iowa-class battleships in their newer configurations.

Another lessons-learned feature was top-to-bottom fireproofing after the terrible losses of the SS Morro Castle and SS Normandie, with no wood, fabrics or other flammable materials onboard, leading to the use of glass, metal and spun-glass fiber, as well as fireproof fabrics and carpets. This extended to the art pieces that were onboard the ship, as well as the ship’s grand piano, which was made from a mahogany whose inability to ignite was demonstrated by trying to burn it with a gasoline fire.

The actual maximum speed that the SS United States can reach is still unknown, with it originally having been a military secret. Its first speed trial supposedly saw the vessel hit an astounding 43 knots (80 km/h), though after the ship was retired from the United States Lines (USL) by the 1970s and no longer seen as a naval auxiliary asset, its top speed during the June 10, 1952 trial was revealed to be 38.32 knots (70.97 km/h). In service with USL, its cruising speed was 36 knots, gaining it the Blue Riband and rightfully giving it its place as America’s Flagship.

A Fading Star

The SS United States was withdrawn from passenger service by 1969, in a very unexpected manner. Although the USL was no longer using the vessel, it remained a US Navy reserve vessel until 1978, meaning that it remained sealed off to anyone but US Navy personnel during that period. Once the US Navy no longer deemed the vessel relevant for its needs in 1978, it was sold off, leading to a period of successive owners. Notable was Richard Hadley who had planned to convert it into seagoing time-share condominiums, and auctioned off all the interior fittings in 1984 before his financing collapsed.

In 1992, Fred Mayer wanted to create a new ocean liner to compete with the Queen Elizabeth, leading him to have the ship’s asbestos and other hazardous materials removed in Ukraine, after which the vessel was towed back to Philadelphia in 1996, where it has remained ever since. Two more owners including Norwegian Cruise Line (NCL) briefly came onto the scene, but economic woes scuttled plans to revive it as an active ocean liner. Ultimately NCL sought to sell the vessel off for scrap, which led the SS United States Conservancy (SSUSC) to take over ownership in 2010 and preserve the ship while seeking ways to restore and redevelop the vessel.

Considering that the running mate of the SS United States (the SS America) was lost only a few years prior, this leaves the SS United States as the only surviving Gibbs ocean liner, and a poignant reminder of what was once a highlight of the US’s maritime prowess. Compared with the United Kingdom, where the Queen Elizabeth 2 (QE2, active since 1969) is now a floating hotel in Dubai and the Queen Mary 2 made her maiden voyage in 2004, the US record on preserving its ocean liner legacy looks rather meager.

End Of The Line?

The curator of the Iowa-class USS New Jersey (BB-62, currently fresh out of drydock), Ryan Szimanski, walked over from his museum ship last year to take a look at the SS United States, which is moored literally within viewing distance from his own pride and joy. Through the videos he made, one gains a good understanding of both how stripped the interior of the ship is and how amazingly well-conserved the ship remains today. Even after decades without drydocking or in-depth maintenance, the ship looks like it could slip into a drydock tomorrow and come out like new a year or so later.

At the end of all this, the question remains whether the SS United States deserves to be preserved. There are many arguments for why this would be the case, from its unique history as part of the US Merchant Marine, its relation to the highly successful SS America, and its status as an effective sister ship to the four Iowa-class battleships, to its role as a strong reminder of the former importance of the US Merchant Marine. The latter especially is a point which professor Sal Mercogliano (of What’s Going on With Shipping? fame) is rather passionate about.

Currently the SSUSC is in talks with a New York-based real-estate developer about a redevelopment concept, but this was thrown into peril when the owner of the pier suddenly doubled the rent, leading to the eviction by September. Unless something changes for the better soon, the SS United States stands a good chance of soon following the USS Kitty Hawk, USS John F. Kennedy (which nearly became a museum ship) and so many more into the scrapper’s oblivion.

What, one might ask, is truly in the name of the SS United States?

The Book That Could Have Killed Me

24 June 2024 at 14:00

It is funny how sometimes things you think are bad turn out to be good in retrospect. Like many of us, when I was a kid, I was fascinated by science of all kinds. As I got older, I focused a bit more, but that would come later. Living in a small town, there weren’t many recent science and technology books, so you tended to read through the same ones over and over. One day, my library got a copy of the relatively recent book “The Amateur Scientist,” which was a collection of [C. L. Stong’s] Scientific American columns of the same name. [Stong] was an electrical engineer with wide interests, and those columns were amazing. The book only had a snapshot of projects, but they were awesome. The magazine, of course, had even more projects, most of which were outside my budget and even more of them outside my skill set at the time.

If you clicked on the links, you probably went down a very deep rabbit hole, so… welcome back. The book was published in 1960, but the projects were mostly from the 1950s. The 57 projects ranged from building a telescope — the original topic of the column before [Stong] took it over — to using a bathtub to study aerodynamics of model airplanes.

X-Rays

[Harry’s] first radiograph. Not bad!
However, there were two projects that fascinated me and — lucky for me — I never got even close to completing. One was for building an X-ray machine. An amateur named [Harry Simmons] had described his setup complaining that in 23 years he’d never met anyone else who had X-rays as a hobby. Oddly, in those days, it wasn’t a problem that the magazine published his home address.

You needed a few items. An Oudin coil, sort of like a Tesla coil in an autotransformer configuration, generated the necessary high voltage. In fact, it was the Oudin coil that started the whole thing. [Harry] was using it to power a UV light to test minerals for fluorescence. Out of idle curiosity, he replaced the UV bulb with an 01 radio tube. These old tubes had a magnesium coating — a getter — that absorbs stray gas left inside the tube.

The tube glowed in [Harry’s] hand and it reminded him of how an old gas-filled X-ray tube looked. He grabbed some film and was able to image screws embedded in a block of wood.

With 01 tubes hard to find, why not blow your own X-ray tubes?

However, 01 tubes were hard to get even then. So [Harry], being what we would now call a hacker, took the obvious step of having a local glass blower create custom tubes to his specifications.

Given that I lived where the library barely had any books published after 1959, it is no surprise that I had no access to 01 tubes or glass blowers. It wasn’t clear, either, if he was evacuating the tubes himself or if the glass blower was doing it for him, but the tube was pumped down to 0.0001 millimeters of mercury.

Why did this interest me as a kid? I don’t know. For that matter, why does it interest me now? I’d build one today if I had the time. We have seen more than one homemade X-ray tube project, so it is doable. But today I am probably able to safely operate high voltages, high vacuums, and shield myself from the X-rays. Probably. Then again, maybe I still shouldn’t build this. But at age 10, I definitely would have done something bad to myself or my parents’ house, if not both.

Then It Gets Worse

The other project I just couldn’t stop reading about was a “homemade atom smasher” developed by [F. B. Lee]. I don’t know about “atom smasher,” but it was a linear particle accelerator, so I guess that’s an accurate description.

The business part of the “atom smasher” (does not show all the vacuum equipment).

I doubt I have the chops to pull this off today, much less back then. Old refrigerator compressors were run backwards to pull a rough vacuum. A homemade mercury diffusion pump got you the rest of the way there. I would work with some of this stuff later in life with scanning electron microscopes and similar instruments, but I was buying them, not cobbling them together from light bulbs, refrigerators, and home-made blown glass!

You needed a good way to measure low pressure, too, so you needed to build a McLeod gauge full of mercury. The accelerator itself is a three-foot-long borosilicate glass tube, two inches in diameter. At the top is a metal globe with a peephole in it to allow you to see a neon bulb to judge the current in the electron beam. At the bottom is a filament.

The globe at the top matches one on top of a Van de Graaff generator that creates about 500,000 volts at a relatively low current. The particle accelerator is decidedly linear but, of course, all the cool particle accelerators these days form a loop.

[Andres Seltzman] built something similar, although not quite the same, some years back and you can watch it work in the video below:

What could go wrong? High vacuum, mercury, high voltage, an electron beam and plenty of unintentional X-rays. [Lee] mentions the danger of “water hammers” in the mercury tubes. In addition, [Stong] apparently felt nervous enough to get a second opinion from [James Bly] who worked for a company called High Voltage Engineering. He said, in part:

…we are somewhat concerned over the hazards involved. We agree wholeheartedly with his comments concerning the hazards of glass breakage and the use of mercury. We feel strongly, however, that there is inadequate discussion of the potential hazards due to X-rays and electrons. Even though the experimenter restricts himself to targets of low atomic number, there will inevitably be some generation of high-energy X-rays when using electrons of 200 to 300 kilovolt energy. If currents as high as 20 microamperes are achieved, we are sure that the resultant hazard is far from negligible. In addition, there will be substantial quantities of scattered electrons, some of which will inevitably pass through the observation peephole.

I Survived

Clearly, I didn’t build either of these, because I’m still here today. I did manage to make an arc furnace from a long-forgotten book. Curtain rods held carbon rods from some D-cells. The rods were in a flower pot packed with sand. An old power cord hooked to the curtain rods, although one conductor went through a jar of salt water, making a resistor so you didn’t blow the fuses.

Somehow, I survived without dying from fumes, blinding myself, or burning myself, but my parents’ house had a burn mark on the floor for many years after that experiment.

If you want to build an arc furnace, we’d start with a more modern concept. If you want a safer old book to read, try the one by [Edmund Berkeley], the developer of the Geniac.

Scrapping the Local Loop, by the Numbers

11 June 2024 at 14:00

A few years back I wrote an “Ask Hackaday” article inviting speculation on the future of the physical plant of landline telephone companies. It started innocently enough; an open telco cabinet spotted during my morning walk gave me a glimpse into the complexity of the network buried beneath my feet and strung along poles around town. That in turn begged the question of what to do with all that wire, now that wireless communications have made landline phones so déclassé.

At the time, I had a sneaking suspicion that I knew what the answer would be, but I spent a good bit of virtual ink trying to convince myself that there was still some constructive purpose for the network. After all, hundreds of thousands of technicians and engineers spent lifetimes building, maintaining, and improving these networks; surely there must be a way to repurpose all that infrastructure in a way that pays at least a bit of homage to them. The idea of just ripping out all that wire and scrapping it seemed unpalatable.

With the decreasing need for copper voice and data networks and the increasing demand for infrastructure to power everything from AI data centers to decarbonized transportation, the economic forces arrayed against these carefully constructed networks seem irresistible. But what do the numbers actually look like? Are these artificial copper mines as rich as they appear? Or is the idea of pulling all that copper out of the ground and off the poles and retasking it just a pipe dream?

Phones To Cars

There are a lot of contenders for the title of “Largest Machine Ever Built,” but it’s a pretty safe bet that the public switched telephone network (PSTN) is in the top five. From its earliest days, the PSTN was centered around copper, with each and every subscriber getting at least one pair of copper wires connected from their home or business. These pairs, referred to collectively and somewhat loosely as the “local loop,” were gathered together into increasingly larger bundles on their way to a central office (CO) housing the switchgear needed to connect one copper pair to another. For local calls, it could all be done within the CO or by connecting to a nearby CO over copper lines dedicated to the task; long-distance calls were accomplished by multiplexing calls together, sometimes over microwave links but often over thick coaxial cables.

Fiber optic cables and wireless technologies have played a large part in making all the copper in the local loops and beyond redundant, but the fact remains that something like 800,000 metric tons of copper is currently locked up in the PSTN. And judging by the anti-theft efforts that Home Depot and other retailers are making, not to mention the increase in copper thefts from construction sites and other soft targets, that material is incredibly valuable. Current estimates are that PSTNs are sitting on something like $7 billion worth of copper.

That sure sounds like a lot, but what does it really mean? Assuming that the goal of harvesting all that largely redundant PSTN copper is to support decarbonization, $7 billion worth of copper isn’t really that much. Take EVs for example. The typical EV on the road today has about 132 pounds (60 kg) of copper, or about 2.5 times the amount in the typical ICE vehicle. Most of that copper is locked up in motor windings, but there’s a lot in the bus bars and wires needed to connect the batteries to the motors, plus all the wires needed to connect all the data systems, sensors, and accessories. If you pulled all the copper out of the PSTN and used it to do nothing but build new EVs, you’d be able to build about 13.3 million cars. That’s a lot, but considering that 80 million cars were put on the road globally in 2021, it wouldn’t have that much of an impact.
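
To make the back-of-the-envelope math explicit, here is a quick sketch in Python using the figures quoted above (roughly 800,000 metric tons of PSTN copper and 60 kg of copper per EV); the numbers are the article’s, the script just crunches them.

```python
# Rough check: how many EVs could the PSTN's copper supply?
pstn_copper_kg = 800_000 * 1_000   # ~800,000 metric tons of copper locked up in the PSTN
copper_per_ev_kg = 60              # ~132 lb (60 kg) of copper in a typical EV

evs_buildable = pstn_copper_kg / copper_per_ev_kg
print(f"EVs buildable from PSTN copper: {evs_buildable / 1e6:.1f} million")
# Prints about 13.3 million, versus the ~80 million cars built worldwide in 2021.
```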

Farming the Wind

What about on the generation side? Thirteen million new EVs are going to need a lot of extra generation and transmission capacity, and with the goal of decarbonization, that probably means a lot of wind power. Wind turbines take a lot of copper; currently, bringing a megawatt of on-shore wind capacity online takes about 3 metric tons of copper. A lot of that goes into the windings in the generator, but that also takes into account the wire needed to get the power from the nacelle down to the ground, plus the wires needed to connect the turbines together and the transformers and switchgear needed to boost the voltage for transmission. So, if all of the 800,000 metric tons of copper currently locked up in the PSTN were recycled into wind turbines, they’d bring a total of 267,000 megawatts of capacity online.

To put that into perspective, the total power capacity in the United States is about 1.6 million megawatts, so converting the PSTN to wind turbines would increase US grid capacity by about 16% — assuming no losses, of course. Not too shabby; that’s over ten times the capacity of the world’s largest wind farm, the Gansu Wind Farm in the Gobi Desert in China.

There’s one more way to look at the problem, one that I think puts a fine point on things. It’s estimated that to reach global decarbonization goals, in the next 25 years we’ll need to mine at least twice the amount of copper that has ever been mined in human history. That’s quite a lot; we’ve taken 700 million metric tons of copper out of the ground in the last 11,000 years. Doubling that means we’ve got to come up with 1.4 billion metric tons in the next quarter century. The 800,000 metric tons of obsolete PSTN copper is therefore only about 0.05% of what’s needed — not even a drop in the bucket.
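
Running the rest of the article’s numbers the same way, with about 3 metric tons of copper per megawatt of onshore wind, roughly 1.6 million megawatts of existing US generating capacity, and an estimated 1.4 billion metric tons of copper needed over the next 25 years, reproduces the percentages above:

```python
pstn_copper_t = 800_000       # metric tons of copper in the PSTN
copper_per_mw_t = 3           # ~3 t of copper per MW of onshore wind capacity
us_capacity_mw = 1_600_000    # ~1.6 million MW of total US generating capacity
future_demand_t = 1.4e9       # ~1.4 billion t of copper needed in the next 25 years

wind_mw = pstn_copper_t / copper_per_mw_t
print(f"Wind capacity from PSTN copper: {wind_mw:,.0f} MW")                      # ~267,000 MW
print(f"Share of US grid capacity: {wind_mw / us_capacity_mw:.1%}")              # ~16.7%
print(f"Share of future copper demand: {pstn_copper_t / future_demand_t:.3%}")   # ~0.057%
# Close to the article's rounded figures of about 16% and about 0.05%.
```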

Accepting the Inevitable

These are just a few examples of what could be done with the “Buried Fortune” of PSTN copper, as Bloomberg somewhat breathlessly refers to it in the article linked above. It goes without saying that this is just back-of-the-envelope math, and that a real analysis of what it would take to recycle the old PSTN copper and what the results would be would require a lot more engineering and financial chops than I have. Even if it is just a drop in the bucket, I think we’ll probably end up doing it, if for no other reason than it takes something like two decades to bring a new copper mine into production. Until those mines come online and drive the price of copper down, all that refined and (relatively) easily recycled copper just sitting there is a tempting target for investors. So it’ll probably happen, which is sad in a way, but maybe it’s a more fitting end to the PSTN than just letting it sit there and corrode.

8-Tracks Are Back? They Are In My House

10 June 2024 at 14:00

What was the worst thing about the 70s? Some might say the oil crisis, inflation, or even disco. Others might tell you it was 8-track tapes, no matter what was on them. I’ve heard that the side of the road was littered with dead 8-tracks. But for a while, they were the only practical way to have music in the car that didn’t come from the AM/FM radio.

If you know me at all, you know that I can’t live without music. I’m always trying to expand my collection by any means necessary, and that includes any format I can play at home. Until recently, that list included vinyl, cassettes, mini-discs, and CDs. I had an 8-track player about 20 years ago — a portable Toyo that stopped working or something. Since then, I’ve wanted another one so I can collect tapes again. Only this time around, I’m trying to do it right by cleaning and restoring them instead of just shoving them in the player willy-nilly.

Update: I Found a Player

A small 8-track player and equally small speakers, plus a stack of VHS tapes.
I have since cleaned it.

A couple of weeks ago, I was at an estate sale and found a little stereo component player and speakers. There was no receiver in sight. While I was still at the sale, I hooked the player up to the little speakers and made sure it played and changed programs, then bought the lot for $15 total because it was 75% off day and they were overpriced originally.

Well, I got it home and it no longer made sound or changed programs. I thought about the play head inside and how dirty it must be, based on the smoker residue on the front plate of the player. Sure enough, I blackened a few Q-tips and it started playing sweet tunes again. This is when I figured out it wouldn’t change programs anymore.

I found I couldn’t get very far into the player, but I was able to squirt some contact cleaner into the program selector switch. After many more desperate button presses, it finally started changing programs again. Hooray!

I feel I got lucky. If you want to read about an 8-track player teardown, check out Jenny List’s awesome article.

These Things Are Not Without Their Limitations

A diagram of an 8-track showing the direction of tape travel, the program-changing solenoid, the playback head, the capstan and pinch roller, and the path back to the reel.
This is what’s going on, inside and out. Image via 8-Track Heaven, a site which has itself gone to 8-Track Heaven.

So now, the problem is the tapes themselves. I think there are two main reasons why people think that 8-tracks suck. The first one is the inherent limitations of the tape. Although there were 90- and 120-minute tapes, most of them were more like 40-60 minutes, divided up into four programs. One track for the left channel, one for the right, and you have your eight tracks and stereo sound.

The tape is in a continuous loop around a single hub. Open one up and you’ll see that the tape comes off the center toward the left and loops back onto the outside from the right. 8-tracks can’t be rewound, only fast-forwarded, and it doesn’t seem like too many players even had this option. If you want to listen to the first song on program one, for instance, you’d better at least tolerate the end of program four.

The tape is divided into four programs, which are separated by a foil splice. A sensor in the machine raises or lowers the playback head depending on the program to access the appropriate tracks (1 and 5, 2 and 6, and so on).
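
If the head-switching scheme sounds abstract, here is a trivial sketch of the mapping described above, where program n plays the track pair (n, n + 4); which of the pair carries the left channel and which the right is not something I’d swear to.

```python
def tracks_for_program(program: int) -> tuple[int, int]:
    """Return the stereo track pair used by an 8-track program (1 through 4)."""
    if not 1 <= program <= 4:
        raise ValueError("8-track cartridges have four programs")
    return program, program + 4   # e.g. program 2 uses tracks 2 and 6

for p in range(1, 5):
    print(f"Program {p}: tracks {tracks_for_program(p)}")
```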

Because of the 10-12 minute limitation of each program, albums were often rearranged so that tracks fit better around the loud solenoidal ka-chunk of each program change.

For a lot of people, this was outright heresy. Then you have to consider that not every album could fit neatly within four programs, so some tracks faded out for the program change, and then faded back in, usually in the middle of the guitar solo.

Other albums fit into the scheme with some rearrangement, but they did so at the expense of silence on one or more of the programs. Check out the gallery below to see all of these conditions, plus one that divided up perfectly without any continuations or silence.

A copy of Jerry Reed's Texas Bound and Flyin' on 8-track. A copy of Yes' Fragile on 8-track. It's pink! A copy of Fleetwood Mac's Mystery To Me on 8-track. A copy of Blood, Sweat, & Tears' Greatest Hits on 8-track, man. A copy of Dolly Parton's Here You Come Again on 8-track, darlin'.

The second reason people dislike 8-tracks is that they just don’t sound that good, especially since cassette tapes were already on the market. They didn’t sound super great when they were new, and years of sitting around in cars and dusty basements and such didn’t help. In my experience, at this point, some sound better than others. I suppose after the tape dropout, it’s all subjective.

What I Look For When Buying Tapes

The three most important things to consider are the pressure pads, the foil splices, and the pinch roller. All of these can be replaced, although some jobs are easier than others.

Start by looking at the pressure pads. These are either made of foam that’s covered with a slick surface so the tape can slide along easily, or they are felt pads on a sproingy metal thing like a cassette tape. You want to see felt pads when you’re out shopping, but you’ll usually see foam. That’s okay. You can get replacement foam on eBay or directly from 8-Track Avenue, or you can do what I do.

A bad, gross, awful pinch roller, and a good one.

After removing the old foam and scraping the plastic backing with my tweezers, I cut a piece of packing tape about 3/8″ wide — just enough to cover the width of some adhesive foam window seal. The weatherstripping’s response is about the same as the original foam, and the packing tape provides a nice, slick surface. I put a tiny strip of super glue on the adhesive side and stick one end down into the tape, curling it a little to rock it into position, then I press it down and re-tension the tape. The cool part is that you can do all this without opening up the tape by just pulling some out. Even if the original foam seems good, you should go ahead and replace it. Once you’ve seen the sticky, black powder it can turn to with time, you’ll understand why.

A copy of Jimi Hendrix's Are You Experienced? on 8-track with a very gooey pinch roller that has almost enveloped the tape.
An example of what not to buy. This one is pretty much hopeless unless you’re experienced.

The foil splices that separate the programs are another thing you can address without necessarily opening up the tape. As long as the pressure pads are good, shove that thing in the player and let it go until the ka-chunk, and then pull it out quickly to catch the splice. Once you’ve got the old foil off of it, use the sticky part of a Post-It note to realign the tape ends and keep them in place while you apply new foil.

Again, you can get sensing foil on eBay, either in a roll or in pre-cut strips that have that nice 60° angle to them. Don’t try to use copper tape like I did. I’ll never know if it worked or not, because I accidentally let too much tape unspool from the hub while I was splicing it, but it seemed a little too heavy. Real-deal aluminium foil sensing tape is even lighter than copper tape.

One thing you can’t do without at least opening the tape part way is replace the pinch roller. Fortunately, these are usually in pretty good shape, and you can usually tell right away if they are gooey without having to press your fingernail into them. Even so, I have salvaged the pinch rollers out of tapes I have tried to save and couldn’t, just to have some extras around.

If you’re going to open the tape up, you might as well take some isopropyl alcohol and clean the graphite off of the pinch roller. This will take a while, but is worth it.

Other Problems That Come Up

Sometimes, you shove one of these bad boys in the player and nothing happens. This usually means that the tape is seized up and isn’t moving. Much like blowing into an N64 cartridge, I have heard that whacking the tape on your thigh a few times will fix a seized tape, but so far, that has not worked for me. I have so far been unable to fix a seized tape, but there are guides out there. Basically, you cut the tape somewhere, preferably at a foil splice, fix the tension, and splice it back together.

Another thing that can happen is called a wedding cake. Basically, you open up the cartridge and find that the inner loops of tape have raised up around the hub, creating a two-layer effect that resembles a wedding cake. I have not so far successfully fixed such a situation, but I’ve only run across one so far. Basically, you pull the loops off of the center, re-tension the tape from the other side, and spin those loops back into the center. This person makes it look insanely easy.

Preventive Maintenance On the Player

As with cassette players, the general sentiment is that one should never actually use a head-cleaning tape as they are rough. As I said earlier, I cleaned the playback head thoroughly with 91% isopropyl alcohol and Q-tips that I wished were longer.

Dionne Warwick's Golden Hits on 8-track, converted to a capstan cleaner. Basically, there's no tape, and it has a bit of scrubby pad shoved into the pinch roller area.
An early set of my homemade pressure pads. Not the greatest.

Another thing I did to jazz up my discount estate sale player was to make a capstan-cleaning tape per these instructions on 8-Track Avenue. Basically, I took my poor Dionne Warwick tape that I couldn’t fix, threw away the tape, kept the pinch roller for a rainy day, and left the pressure pads intact.

To clean the capstan, I took a strip of reusable dishrag material and stuffed it in the place where the pinch roller goes. Then I put a few drops of alcohol on the dishrag material and inserted the tape for a few seconds. I repeated this with new material until it came back clean.

In order to better grab the tape and tension it against the pinch roller, the capstan should be roughed up a bit. I ripped the scrubby side off of an old sponge and cut a strip of that, then tucked it into the pinch roller pocket and let the player run for about ten seconds. If you listen to a lot of tapes, you should do this often.

Final Thoughts

I still have a lot to learn about fixing problematic 8-tracks, but I think I have the basics of refurbishment down. There are people out there who have no qualms about ironing tapes that have gotten accordioned, or re-spooling entire tapes using a drill and a homemade hub-grabbing attachment. If this isn’t the hacker’s medium, I don’t know what is. Long live 8-tracks!

Mining and Refining: Fracking

5 June 2024 at 14:33

Normally on “Mining and Refining,” we concentrate on the actual material that’s mined and refined. We’ve covered everything from copper to tungsten, with side trips to more unusual materials like sulfur and helium. The idea is to shine a spotlight on the geology and chemistry of the material while concentrating on the different technologies needed to exploit often very rare or low-concentration deposits and bring them to market.

This time, though, we’re going to take a look at not a specific resource, but a technique: fracking. Hydraulic fracturing is very much in the news lately for its potential environmental impact, both in terms of its immediate effects on groundwater quality and for its perpetuation of our dependence on fossil fuels. Understanding what fracking is and how it works is key to being able to assess the risks and benefits of its use. There’s also the fact that like many engineering processes carried out on a massive scale, there are a lot of interesting things going on with fracking that are worth exploring in their own right.

Fossil Mud

Although hydraulic fracturing has been used since at least the 1940s to stimulate production in oil and gas wells and is used in all kinds of wells drilled into multiple rock types, fracking is most strongly associated these days with the development of oil and natural gas deposits in shale. Shale is a sedimentary rock formed from ancient muds made from fine grains of clay and silt. These are some of the finest-grained materials possible, with grains ranging from 62 microns in diameter down to less than a micron. Grains that fine only settle out of suspension very slowly, and tend to do so only where there are no currents.

Shale outcropping in a road cut in Kentucky. The well-defined layers were formed in still waters, where clay and silt particles slowly accumulated. The dark color means a lot of organic material from algae and plankton mixed in. Source: James St. John, CC BY 2.0, via Wikimedia Commons

The breakup of Pangea during the Cretaceous period provided many of the economically important shale formations in today’s eastern United States, like the Marcellus formation that stretches from New York state into Ohio and down almost to Tennessee. The warm, calm waters of the newly forming Atlantic Ocean were the perfect place for clay- and silt-laden runoff to accumulate and settle, eventually forming the shale.

Shale is often associated with oil and natural gas because the conditions that favor its formation also favor hydrocarbon creation. The warm, still Cretaceous waters were perfect for phytoplankton and algal growth, and when those organisms died they rained down along with the silt and clay grains to the low-oxygen environment at the bottom. Layer upon layer built up slowly over the millennia, but instead of decomposing as they would have in an oxygen-rich environment, the reducing conditions slowly transformed the biomass into kerogen, or solid deposits of hydrocarbons. With the addition of heat and pressure, the hydrocarbons in kerogen were cooked into oil and natural gas.

In some cases, the tight grain structure of shale acts as an impermeable barrier to keep oil and gas generated in lower layers from floating up, forming underground deposits of liquid and gas. In other cases, kerogens are transformed into oil or natural gas right within the shale, trapped within its pores. Under enough pressure, gas can even dissolve right into the shale matrix itself, to be released only when the pressure in the rock is relieved.

Horizontal Boring

While getting at these sequestered oil and gas deposits requires more than just drilling a hole in the ground, fracking starts with exactly that. Traditional well-drilling techniques, where a rotary table rig using lengths of drill pipe spins a drill bit into rock layers underground while pumping a slurry called drilling mud down the bore to cool and lubricate the bit, are used to start the well. The initial bore proceeds straight down until it passes through the lowest aquifer in the region, at which point the entire bore is lined with a steel pipe casing. The casing is filled with cementitious grout that’s forced out of the bottom of the casing by a plug inserted at the surface and pressed down by the drilling rig. This squeezes the grout between the outside of the casing and the borehole and back up to the surface, sealing it off from the water-bearing layers it passes through and serving as a foundation for equipment that will eventually be added to the wellhead, such as blow-out preventers.

Once the well is sealed off, vertical boring continues until the kickoff point, where the bore transitions from vertical to horizontal. Because the target shale seam is relatively thin — often only 50 to 300 feet (15 to 100 meters) thick — drilling a vertical bore through it would only expose a small amount of surface area. Fracking is all about increasing surface area and connecting as many pores in the shale as possible to the bore; drilling horizontally within the shale seam makes that possible. Geologists and mining engineers determine the kickoff point based on seismic surveys and drilling logs from other wells in the area and calculate the radius needed to put the bore in the middle of the seam. Given that the drill string can only turn by a few degrees at most, the radius tends to be huge — often hundreds of meters.
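
To see why the radius comes out so large, treat the kickoff-to-horizontal section as a simple arc: the radius is the drilled length divided by the angle turned, in radians. A quick sketch with assumed but typical build-up rates of a few degrees per 100 feet of hole gives radii of hundreds of meters, as described above.

```python
import math

def build_radius_m(build_up_rate_deg: float, course_length_ft: float = 100.0) -> float:
    """Radius of the curved bore section for a given build-up rate.

    build_up_rate_deg: degrees of inclination change per course_length_ft of hole.
    Arc geometry: course_length = radius * angle_in_radians.
    """
    angle_rad = math.radians(build_up_rate_deg)
    return (course_length_ft / angle_rad) * 0.3048   # feet to meters

for bur in (2, 3, 6):   # assumed build-up rates, degrees per 100 ft
    print(f"{bur} deg/100 ft -> radius of roughly {build_radius_m(bur):,.0f} m")
# Roughly 870 m, 580 m, and 290 m respectively.
```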

Directional drilling has been used since the 1920s, often to steal oil from other claims, and so many techniques have been developed for changing the direction of a drill string deep underground. One of the most common methods used in fracking wells is the mud motor. Powered by drilling mud pumped down the drill pipe and forced between a helical stator and rotor, the mud motor can spin the drill bit at 60 to 100 RPM. When boring a traditional vertical well, the mud motor can be used in addition to spinning the entire drill string, to achieve a higher rate of penetration. The mud motor can also power the bit with the drill string locked in place, and by adding angled spacers between the mud motor and the drill string, the bit can begin drilling at a shallow angle, generally just a few degrees off vertical. The drill string is flexible enough to bend and follow the mud motor on its path to intersect the shale seam. The azimuth of the bore can be changed, too, by rotating the drill string so the bit heads off in a slightly different direction. Some tools allow the bend in the motor to be changed without pulling the entire drill string up, which represents significant savings.

Determining where the drill bit is under miles of rock is the job of downhole tools like the measurement while drilling (MWD) tool. These battery-powered tools vary in what they can measure, but typically include temperature and pressure sensors and inertial measuring units (IMU) to determine the angle of the bit. Some MWD tools also include magnetometers for orientation to Earth’s magnetic field. Transmitting data back to the surface from the MWD can be a problem, and while more use is being made of electrical and fiber optic connections these days, many MWDs use the drilling mud itself as a physical transport medium. Mud telemetry uses pressure waves set up in the column of drilling mud to send data back up to pressure transducers on the surface. Data rates are low; 40 bps at best, dropping off sharply with increasing distance. Mud telemetry is also hampered by any gas dissolved in the drilling mud, which strongly attenuates the signal.

Let The Fracking Begin

Once the horizontal borehole is placed in the shale seam, a steel casing is placed in the bore and grouted with cement. At this point, the bore is completely isolated from the surrounding rock and needs to be perforated. This is accomplished with a perforating gun, a length of pipe studded with small shaped charges. The perforating gun is prepared on the surface by pyrotechnicians who place the charges into the gun and connect them together with detonating cord. The gun is lowered into the bore and placed at the very end of the horizontal section, called the toe. When the charges are detonated, they form highly energetic jets of fluidized metal that lance through the casing and grout and into the surrounding shale. Penetration depth and width depend on the specific shaped charge used but can extend up to half a meter into the surrounding rock.

Perforation can also be accomplished non-explosively, using a tool that directs jets of high-pressure abrasive-charged fluid through ports in its sides. It’s not too far removed from water jet cutting, and can cut right through the steel and cement casing and penetrate well into the surrounding shale. The advantage of this type of perforation is that it can be built into a single multipurpose tool that can handle several steps of the downhole work in one trip, without the hazards of handling explosive charges.

Once the bore has been perforated, fracturing can occur. The principle is simple: an incompressible fluid is pumped into the borehole under great pressure. The fluid leaves the borehole and enters the perforations, cracking the rock and enlarging the original perforations. The cracks can extend many meters from the original borehole into the rock, exposing vastly more surface area of the rock to the borehole.

Fracking is more than making cracks. The network of cracks produced by fracking physically connects kerogen deposits within the shale to the borehole. But getting the methane (black in inset) free from the kerogen (yellow) is a complicated balance of hydrophobic and hydrophilic interactions between the shale, the kerogen, and the fracturing fluid. Source: Thomas Lee, Lydéric Bocquet, Benoit Coasne, CC BY 4.0, via Wikimedia Commons

The pressure needed to hydraulically fracture solid rock perhaps a mile or more below the surface can be tremendous — up to 15,000 pounds per square inch (100 MPa). In addition to the high pressure, the fracking fluid must be pumped at extremely high volumes, up to 10 cu ft/s (265 lps). The overall volume of material needed is impressive, too — a 6″ borehole that’s 10,000 feet long would take almost 15,000 gallons of fluid to fill alone. Add in the volume of fluid needed to fill the fractures and that could easily exceed 5 million gallons.
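
The borehole-volume figure is easy to sanity-check with cylinder math (the multi-million-gallon fracture volume is the article’s estimate, which this sketch can’t derive):

```python
import math

bore_diameter_in = 6
bore_length_ft = 10_000
GALLONS_PER_CUBIC_FOOT = 7.48   # US gallons in one cubic foot

radius_ft = (bore_diameter_in / 12) / 2
bore_volume_cuft = math.pi * radius_ft ** 2 * bore_length_ft
print(f"Borehole volume: {bore_volume_cuft * GALLONS_PER_CUBIC_FOOT:,.0f} gallons")
# Prints about 14,700 gallons, i.e. the 'almost 15,000 gallons' quoted above.
```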

Fracking fluid is a slurry made mostly from water and sand. The sand serves as a proppant, which keeps the tiny microfractures from collapsing after fracking pressure is released. Fracking fluid also contains a fraction of a percent of various chemical additives, mostly to form a gel that effectively transfers the hydraulic force while keeping the proppant suspended. Guar gum, a water-soluble polysaccharide extracted from guar beans, is often used to create the gel. Fracking gels are sometimes broken down after a while to clear the fractures and allow freer flow; a combination of acids and enzymes is usually used for this job.

Once fracturing is complete, the fracking fluid is removed from the borehole. It’s impossible to recover all the fluid; sometimes as much as 50% is recovered, but often as little as 5% can be pumped back to the surface. Once a section of the borehole has been fractured, it’s sealed off from the rest of the well by an isolating plug placed upstream of the freshly fracked section. The entire process — perforating, fracking, recovery, isolation — is repeated up the borehole until the entire horizontal bore is fracked. The isolating plugs are then bored out, and the well can begin production.

You’ve Probably Never Considered Taking an Airship To Orbit

By: Lewin Day
13 May 2024 at 14:00

There have been all kinds of wild ideas to get spacecraft into orbit. Everything from firing huge cannons to spinning craft at rapid speed has been posited, explored, or in some cases, even tested to some degree. And yet, good ol’ flaming rockets continue to dominate all, because they actually get the job done.

Rockets, fuel, and all their supporting infrastructure remain expensive, so the search for an alternative goes on. One daring idea involves using airships to loft payloads into orbit. What if you could simply float up into space?

Lighter Than Air

NASA regularly launches lighter-than-air balloons to great altitudes, but they’re not orbital craft. Credit: NASA, public domain

The concept sounds compelling from the outset. Through the use of hydrogen or helium as a lifting gas, airships and balloons manage to reach great altitudes while burning zero propellant. What if you could just keep floating higher and higher until you reached orbital space?

This is a huge deal when it comes to reaching orbit. One of the biggest problems of our current space efforts is referred to as the tyranny of the rocket equation. The more cargo you want to launch into space, the more fuel you need. But then that fuel adds more weight, which needs yet more fuel to carry its weight into orbit. To say nothing of the greater structure and supporting material to contain it all.

Carrying even a few extra kilograms of weight to space can require huge amounts of additional fuel. This is why we use staged rockets to reach orbit at present. By shedding large amounts of structural weight at the end of each rocket stage, it’s possible to move the remaining rocket farther with less fuel.
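
The tyranny has a precise form: the Tsiolkovsky rocket equation, delta-v = Isp * g0 * ln(m0/mf). A quick sketch with assumed but representative numbers (about 9.4 km/s of delta-v to reach low Earth orbit once gravity and drag losses are counted, and a 350 s specific impulse) shows why the overwhelming majority of a launcher’s liftoff mass has to be propellant.

```python
import math

delta_v = 9_400   # m/s, rough delta-v to low Earth orbit including losses (assumed)
isp = 350         # s, an assumed specific impulse for a chemical rocket engine
g0 = 9.81         # m/s^2, standard gravity

mass_ratio = math.exp(delta_v / (isp * g0))   # m0 / mf from the rocket equation
propellant_fraction = 1 - 1 / mass_ratio

print(f"Required mass ratio m0/mf: {mass_ratio:.1f}")                # ~15
print(f"Propellant fraction at liftoff: {propellant_fraction:.0%}")  # ~94%
```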

If you could get to orbit while using zero fuel, it would be a total gamechanger. It wouldn’t just be cheaper to launch satellites or other cargoes. It would also make missions to the Moon or Mars far easier. Those rockets would no longer have to carry the huge amount of fuel required to escape Earth’s surface and get to orbit. Instead, they could just carry the lower amount of fuel required to go from Earth orbit to their final destination.

The rumored “Chinese spy balloon” incident of 2023 saw a balloon carrying a payload that looked very much like a satellite. It was even solar powered. However, such a craft would never reach orbit, as it had no viable propulsion system to generate the huge delta-V required. Credit: USAF, public domain

Of course, it’s not that simple. Reaching orbit isn’t just about going high above the Earth. If you just go straight up above the Earth’s surface, and then stop, you’ll just fall back down. If you want to orbit, you have to go sideways really, really fast.

Thus, an airship-to-orbit launch system would have to do two things. It would have to haul a payload up high, and then get it up to the speed required for its desired orbit. That’s where it gets hard. The minimum speed to reach a stable orbit around Earth is 7.8 kilometers per second (28,000 km/h or 17,500 mph). Thus, even if you’ve floated up very, very high, you still need a huge rocket or some kind of very efficient ion thruster to push your payload up to that speed. And you still need fuel to generate that massive delta-V (change in velocity).
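
That 7.8 km/s figure falls straight out of the circular-orbit condition v = sqrt(GM/r). A two-line check for a low orbit a couple of hundred kilometers up (altitude assumed for illustration):

```python
import math

GM_EARTH = 3.986e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371e3     # m, mean Earth radius
altitude = 200e3      # m, an assumed low-Earth-orbit altitude

v = math.sqrt(GM_EARTH / (R_EARTH + altitude))
print(f"Circular orbital speed: {v / 1000:.1f} km/s")   # ~7.8 km/s
```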

For this reason, airships aren’t the perfect hack to reaching orbit that you might think. They’re good for floating about, and you can even go very, very high. But if you want to circle the Earth again and again and again, you better bring a bucketload of fuel with you.

Someone’s Working On It

JP Aerospace founder John Powell regularly posts updates to YouTube regarding the airship-to-orbit concept. Credit: John Powell, YouTube

Nevertheless, this concept is being actively worked on, but not by the usual suspects. Don’t look at NASA, JAXA, SpaceX, ESA, or even Roscosmos. Instead, it’s the work of the DIY volunteer space program known as JP Aerospace.

The organization has grand dreams of launching airships into space. Its concept isn’t as simple as just getting into a big balloon and floating up into orbit, though. Instead, it envisions a three-stage system.

The first stage would involve an airship designed to travel from ground level up to 140,000 feet. The company proposes a V-shaped design with an airfoil profile to generate additional lift as it moves through the atmosphere. Propulsion would be via propellers that are specifically designed to operate in the near-vacuum at those altitudes.

Once at that height, the first stage craft would dock with a permanently floating structure called Dark Sky Station. It would serve as a docking station where cargo could be transferred from the first stage craft to the Orbital Ascender, which is the craft designed to carry the payload into orbit.

The Ascender H1 Variant is the company’s latest concept for an airship to carry payloads from an altitude of 140,000ft and into orbit. Credit: John Powell, YouTube screenshot

The Orbital Ascender itself sounds like a fantastical thing on paper. The team’s current concept is for a V-shaped craft with a fabric outer shell which contains many individual plastic cells full of lifting gas. That in itself isn’t so wild, but the proposed size is. It’s slated to measure 1,828 meters on each side of the V — well over a mile long — with an internal volume of over 11 million cubic meters. Thin film solar panels on the craft’s surface are intended to generate 90 MW of power, while a plasma generator on the leading edge is intended to help cut drag. The latter is critical, as the craft will need to reach hypersonic speeds in the ultra-thin atmosphere to get its payload up to orbital speeds. To propel the craft up to orbital velocity, the team has been running test firings on its own designs for plasma thrusters.

Payload would be carried in two cargo bays, each measuring 30 meters square, and 20 meters deep. Credit: John Powell, YouTube Screenshot

The team at JP Aerospace is passionate, but currently lacks the means to execute their plans at full scale. Right now, the team has some experimental low-altitude research craft that are a few hundred feet long. Presently, Dark Sky Station and the Orbital Ascender remain far off dreams.

Realistically, the team hasn’t found a shortcut to orbit just yet. Building a working version of the Orbital Ascender would require lofting huge amounts of material to high altitude where it would have to be constructed. Such a craft would be torn to shreds by a simple breeze in the lower atmosphere. A lighter-than-air craft that could operate at such high altitudes and speeds might not even be practical with modern materials, even if the atmosphere is vanishingly thin above 140,000 feet.  There are huge questions around what materials the team would use, and whether the theoretical concepts for plasma drag reduction could be made to work on the monumentally huge craft.

The team has built a number of test craft for lower-altitude operation. Credit: John Powell, Youtube Screenshot

Even if the craft’s basic design could work, there are questions around the practicalities of crewing and maintaining a permanent floating airship station at high altitude. Let alone how payloads would be transferred from one giant balloon craft to another. These issues might be solvable with billions of dollars. Maybe. JP Aerospace is having a go on a budget several orders of magnitude more shoestring than that.

One might imagine a simpler idea could be worth trying first. Lofting conventional rockets to 100,000 feet with balloons would be easier and still cut fuel requirements to some degree. But ultimately, the key challenge of orbit remains. You still need to find a way to get your payload up to a speed of at least 8 kilometers per second, regardless of how high you can get it in the air. That would still require a huge rocket, and a suitably huge balloon to lift it!

For now, orbit remains devastatingly hard to reach, whether you want to go by rocket, airship, or nuclear-powered paddle steamer. Don’t expect to float to the Moon by airship anytime soon, even if it sounds like a good idea.

The Great Green Wall: Africa’s Ambitious Attempt To Fight Desertification

By: Lewin Day
9 May 2024 at 14:00

As our climate changes, we fear that warmer temperatures and drier conditions could make life hard for us. In most locations, it’s a future concern that feels uncomfortably near, but for some locations, it’s already very real. Take the Sahara desert, for example, and the degraded landscapes to the south in the Sahel. These arid regions are so dry that they struggle to support life at all, and temperatures there are rising faster than almost anywhere else on the planet.

In the face of this escalating threat, one of the most visionary initiatives underway is the Great Green Wall of Africa. It’s a mega-sized project that aims to restore life to barren terrain.

A Living Wall

Concentrated efforts have helped bring dry lands back to life. Credit: WFP

Launched in 2007 by the African Union, the Great Green Wall was originally an attempt to halt the desert in its tracks. The Sahara Desert has long been expanding, and the Sahel region has been losing the battle against desertification. The Green Wall hopes to put a stop to this, while also improving food security in the area.

The concept of the wall is simple. The idea is to take degraded land and restore it to life, creating a green band across the breadth of Africa which would resist the spread of desertification to the south. Intended to span the continent from Senegal in the west to Djibouti in the east, it was originally intended to be 15 kilometers wide and a full 7,775 kilometers long. The hope was to complete the wall by 2030.

The Great Green Wall concept moved past initial ideas around simply planting a literal wall of trees. It eventually morphed into a broader project to create a “mosaic” of green and productive landscapes that can support local communities in the region.

Reforestation is at the heart of the Great Green Wall. Millions of trees have been planted, with species chosen carefully to maximise success. Trees like Acacia, Baobab, and Moringa are commonly planted not only for their resilience in arid environments but also for their economic benefits. Acacia trees, for instance, produce gum arabic—a valuable ingredient in the food and pharmaceutical industries—while Moringa trees are celebrated for their nutritious leaves.

 

Choosing plants with economic value has a very important side effect that sustains the project. If random trees of little value were planted solely as an environmental measure, they probably wouldn’t last long. They could be harvested by the local community for firewood in short order, completely negating all the hard work done to plant them. Instead, by choosing species that have ongoing productive value, it gives the local community a reason to maintain and support the plants.

Special earthworks are also aiding in the fight to repair barren lands. In places like Mauritania, communities have been digging  half-moon divots into the ground. Water can easily run off or flow away on hard, compacted dirt. However, the half-moon structures trap water in the divots, and the raised border forms a protective barrier. These divots can then be used to plant various species where they will be sustained by the captured water. Do this enough times over a barren landscape, and with a little rain, formerly dead land can be brought back to life. It’s a traditional technique that is both cheap and effective at turning brown lands green again.

Progress

The project has been an opportunity to plant economically valuable plants which have proven useful to local communities. Credit: WFP

The initiative plans to restore 100 million hectares of currently degraded land, while also sequestering 250 million tons of carbon to help fight against climate change. Progress has been sizable, but at the same time, limited. As of mid-2023, the project had restored approximately 18 million hectares of formerly degraded land. That’s a lot of land by any measure. And yet, it’s less than a fifth of the total that the project hoped to achieve. The project has been frustrated by funding issues, delays, and the degraded security situation in some of the areas involved. Put together, this all bodes poorly for the project’s chances of reaching its goal, with 17 years already gone and 2030 drawing ever closer.

While the project may not have met its loftiest goals, that’s not to say it has all been in vain. The Great Green Wall need not be seen as an all or nothing proposition. Those 18 million hectares that have been reclaimed are not nothing, and one imagines the communities in these areas are enjoying the boons of their newly improved land.

In the driest parts of the world, good land can be hard to come by. While the Great Green Wall may not span the African continent yet, it’s still having an effect. It’s showing communities that with the right techniques, it’s possible to bring barren zones back from the brink, turning them into useful, productive land. That, at least, is a good legacy, and if the project’s full goals can be realized? All the better.

Your Open-Source Client Options In the non-Mastodon Fediverse

Por: Lewin Day
8 Mayo 2024 at 14:00

When things started getting iffy over at Twitter, Mastodon rose as a popular alternative to the traditional microblogging platform. In contrast to the walled gardens of other social media channels, it uses an open protocol that runs on distributed servers that loosely join together, forming the “Fediverse”.

The beauty of the Fediverse isn’t just in its server structure, though. It’s also in the variety of clients available for accessing the network. Where Twitter is now super-strict about which apps can hook into the network, the Fediverse welcomes all comers to the platform! And although Mastodon is certainly the largest player, it’s absolutely not the only elephant in the room.
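Under the hood, most of these clients are speaking to the same well-documented REST endpoints. As a minimal sketch, assuming an instance that permits unauthenticated reads (mastodon.social is only a stand-in here), pulling a public timeline in Python looks something like this:

import requests

def fetch_public_timeline(instance="mastodon.social", limit=5):
    # GET /api/v1/timelines/public is part of the standard Mastodon REST API;
    # no access token is needed if the instance allows anonymous reads.
    url = f"https://{instance}/api/v1/timelines/public"
    resp = requests.get(url, params={"limit": limit}, timeout=10)
    resp.raise_for_status()
    for status in resp.json():
        # 'content' arrives as HTML; a real client renders or strips the markup.
        print(f"@{status['account']['acct']}: {status['content'][:80]}")

fetch_public_timeline()

The clients below all build on calls like this one, each layering its own interface and conveniences on top.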

Today, we’ll look at a bunch of alternative clients for the platform, ranging from mobile apps to web clients. They offer unique features and interfaces that cater to different user preferences and needs. We’ll look at the most notable examples—each of which brings a different flavor to your Fediverse experience.

Phanpy

Phanpy is relatively new on the scene when it comes to Mastodon alternatives, but it has a fun name and a clean, user-friendly interface. Designed as a web client, Phanpy stands out in the way it hides status actions—like reply, boost, and favorite buttons. It’s an intentional design choice to reduce clutter, with the developer noting they are happy with this tradeoff even if it reduces engagement on the platform. It’s for the chillers, not the attention-starved.

Phanpy also supports multiple accounts, making it a handy tool for those who manage different personas or profiles across the Fediverse. Other power-user features include a multi-column interface if you want to really chug down the posts, and a recovery system for unsent drafts.

Rodent

Rodent, on the other hand, is tailored for users on Android smartphones and tablets. The developers have a bold vision, noting that “Rodent is disruptive, unapologetical, and has a user-first approach.” Despite this, it’s not forbidding to new users—the interface will be instantly familiar to a Mastodon or Twitter user.

Rodent brings you access to Mastodon with a unique set of features. It will let you access instances without having to log in to them (assuming the instance allows it), and has a multi-instance view that lets you flip between them easily. The interface also has neatly nested replies which can make following a conversation far easier. The latest update also set it up to give you meaningful notifications rather than just vague pings from the app. That’s kind of a baseline feature for most social media apps, but this is an app with a small but dedicated developer base.

Tusky

Tusky is perhaps one of the most popular Mastodon clients for Android users. Known for its sleek and minimalist design, Tusky provides a smooth and efficient way to navigate Mastodon. It’s clean, uncluttered, and unfussy.

Tusky handles all the basics—the essential features like notifications, direct messaging, and timeline filters. It’s a lightweight app that doesn’t hog a lot of space or system resources. However, it’s still nicely customizable to ensure it’s showing you what you want, when you want.

If you’ve tried the official Mastodon app and found it’s not for you, Tusky might be more your speed. Where some apps bombard you with buttons and features, Tusky gets out of the way of you and the feed you’re trying to scroll.

Fedilab

The thing about the Fediverse is that it’s all about putting power back in individual hands. Diversity is its strength, and that’s where apps like Fedilab come in. Fedilab isn’t just about accessing social media content either. It wants to let you access other sites in the Fediverse too. A notable example is Peertube—an open-source alternative to YouTube. It’ll handle a bunch of others, too.

You might think this makes Fedilab more complicated, but it’s not really the case. If you just want to use it to access Mastodon, it does that just fine. But if you want to pull in other content to the app, from places like Misskey, Lemmy, or even Twitter, it’ll gladly show you what you’re looking for.

Trunks.social

Trunks.social is a newer entrant designed to enhance the Mastodon experience for everybody. Unlike some other options, it’s truly multi-platform—available as a web client, or as an app for both Android and iOS. If you want to use Mastodon across a bunch of devices and with a consistent experience across all of them, Trunks.social could be a good option for you.

On Apple devices, it integrates tightly with iOS features, such as the system-wide dark mode, to deliver a coherent and aesthetically pleasing experience. Trunks.social also places a strong emphasis on privacy and data protection, offering advanced settings that let users control how their data is handled and interacted with on the platform.

Conclusion

Choosing the right Fediverse client can significantly enhance your experience of the platform. Whether you’re a casual user looking for a simple interface on your smartphone or a power user needing to work across multiple accounts or instances, there’s a client out there for you.

The diversity of clients shows the vibrant ecosystem surrounding the Fediverse. It’s not just Mastodon! It’s all driven by the community’s commitment to open-source development and user-centric design. Twitter once had something similar before it shunned flexibility to rule its community with an iron fist. In the open-source world, though, you don’t need to worry about being treated like that.

The Computers of Voyager

6 Mayo 2024 at 14:00

After more than four decades in space and having traveled a combined 44 billion kilometers, it’s no secret that the Voyager spacecraft are closing in on the end of their extended interstellar mission. Battered and worn, the twin spacecraft are speeding along through the void, far outside the Sun’s influence now, their radioactive fuel decaying, their signals becoming ever fainter as the time needed to cross the chasm of space gets longer by the day.

But still, they soldier on, humanity’s furthest-flung outposts and testaments to the power of good engineering. And no small measure of good luck, too, given the number of nearly mission-ending events which have accumulated in almost half a century of travel. The number of “glitches” and “anomalies” suffered by both Voyagers seems to be on the uptick, too, contributing to the sense that someday, soon perhaps, we’ll hear no more from them.

That day has thankfully not come yet, in no small part due to the computers that the Voyager spacecraft were, in a way, designed around. Voyager was to be a mission unlike any ever undertaken, a Grand Tour of the outer planets that offered a once-in-a-lifetime chance to push science far out into the solar system. Getting the computers right was absolutely essential to delivering on that promise, a task made all the more challenging by the conditions under which they’d be required to operate, the complexity of the spacecraft they’d be running, and the torrent of data streaming through them. Forty-six years later, it’s safe to say that the designers nailed it, and it’s worth taking a look at how they pulled it off.

Volatile (Institutional) Memory

It turns out that getting to the heart of the Voyager computers, in terms of schematics and other technical documentation, wasn’t that easy. For a project with such an incredible scope and which had an outsized impact on our understanding of the outer planets and our place in the galaxy, the dearth of technical information about Voyager is hard to get your head around. Most of the easily accessible information is pretty high-level stuff; the juicy technical details are much harder to come by. This is doubly so for the computers running Voyager, many of the details of which seem to be getting lost in the sands of time.

As a case in point, I’ll offer an anecdote. As I was doing research for this story, I was looking for anything that would describe the architecture of the Flight Data System, one of the three computers aboard each spacecraft and the machine that has been the focus of the recent glitch and recovery effort aboard Voyager 1. I kept coming across a reference to a paper with a most promising title: “Design of a CMOS Processor for use in the Flight Data Subsystem of a Deep Space Probe.” I searched high and low for this paper online, but it appears not to be available anywhere but in a special collection in the library of Wichita State University, where it’s in the personal papers of a former professor who did some work for NASA.

Unfortunately, thanks to ongoing construction, the library has no access to the document right now. The difficulty I had in rounding up this potentially critical document seems to indicate a loss of institutional knowledge of the Voyager program’s history and its technical origins. That became apparent when I reached out to public affairs at Jet Propulsion Lab, where the Voyagers were built, in the hope that they might have a copy of that paper in their archives. Sadly, they don’t, and engineers on the Voyager team haven’t even heard of the paper. In fact, they’re very keen to see a copy if I ever get a hold of it, presumably to aid their job of keeping the spacecraft going.

In the absence of detailed technical documents, the original question remains: How do the computers of Voyager work? I’ll do the best I can to answer that from the existing documentation, and hopefully fill in the blanks later with any other documents I can scrape up.

Good Old TTL

As mentioned above, each Voyager contains three different computers, each of which is assigned different functions. Voyager was the first unmanned mission to include distributed computing, partly because the sheer number of tasks to be executed with precision during the high-stakes planetary fly-bys would exceed the capabilities of any single computer that could be made flyable. There was a social engineering angle to this as well, in that it kept the various engineering teams from competing for resources from a single computer.

Redundancy galore: block diagram for the Command Computer Subsystem (CCS) used on the Viking orbiters. The Voyager CCS is almost identical. Source: NASA/JPL.

To the extent that any one computer in a tightly integrated distributed system such as the one on Voyager can be considered the “main computer,” the Computer Command Subsystem (CCS) would be it. The Voyager CCS was almost identical to another JPL-built machine, the Viking orbiter CCS. The Viking mission, which put two landers on Mars in the summer of 1976, was vastly more complicated than any previous unmanned mission that JPL had built spacecraft for, most of which used simple sequencers rather than programmable computers.

On Voyager, the CCS is responsible for receiving commands from the ground and passing them on to the other computers that run the spacecraft itself and the scientific instruments. The CCS was built with autonomy and reliability in mind, since after just a few days in space, the communication delay would make direct ground control impossible. This led JPL to make everything about the CCS dual-redundant — two separate power supplies, two processors, two output units, and two complete sets of command buffers. Additionally, each processor could be cross-connected to each output unit, and interrupts were distributed to both processors.

There are no microprocessors in the CCS. Rather, the processors are built from discrete 7400-series TTL chips. The machine does not have an operating system but rather runs bare-metal instructions. Both data and instruction words are 18 bits wide, with the instruction words having a 6-bit opcode and a 12-bit address. The 64 instructions contain the usual tools for moving data in and out of registers and doing basic arithmetic, although there are only commands for adding and subtracting, not for multiplication or division. The processors access 4 kilowords of redundant plated-wire memory, which is similar to magnetic core memory in that it records bits as magnetic domains, but with an iron-nickel alloy plated onto the surface of wires rather than ferrite beads.
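To make that word layout concrete, here is a small illustrative sketch that splits an 18-bit instruction into its 6-bit opcode and 12-bit address. The 6/12 split comes from the description above; the assumption that the opcode sits in the high bits, along with the sample values, is mine, since the actual CCS opcode map isn’t documented here.

OPCODE_BITS = 6
ADDRESS_BITS = 12
WORD_MASK = (1 << (OPCODE_BITS + ADDRESS_BITS)) - 1  # 18-bit machine word

def decode(word):
    # Split a CCS-style instruction word into opcode and operand address.
    word &= WORD_MASK
    opcode = word >> ADDRESS_BITS               # assumed: high 6 bits
    address = word & ((1 << ADDRESS_BITS) - 1)  # assumed: low 12 bits
    return opcode, address

# Hypothetical instruction: opcode 0o12 with operand address 0o4071
print(decode((0o12 << ADDRESS_BITS) | 0o4071))  # -> (10, 2105)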

The Three-Axis Problem

On Voyager, the CCS does almost nothing in terms of flying the spacecraft. The tasks involved in keeping Voyager pointed in the right direction are farmed out to the Attitude and Articulation Control Subsystem, or AACS. Earlier interplanetary probes such as Pioneer were spin-stabilized, meaning they maintained their orientation gyroscopically by rotating the craft around the longitudinal axis. Spin stabilization wouldn’t work for Voyager, since a lot of the science planned for the mission, especially the photographic studies, required a stable platform. This meant that three-axis stabilization was required, and the AACS was designed to accommodate that need.

Voyager’s many long booms complicate attitude control by adding a lot of “wobble”.

The physical design of Voyager injected some extra complexity into attitude control. While previous deep-space vehicles had been fairly compact, Voyager bristles with long booms. Sprouting from the compact bus located behind its huge high-gain antenna are booms for the three radioisotope thermoelectric generators that power the spacecraft, a very long boom for the magnetometers, a shorter boom carrying the heavy imaging instruments, and a pair of very long antennae for the Plasma Wave Subsystem experiment. All these booms tend to wobble a bit when the thrusters fire or actuators move, complicating the calculations needed to stay on course.

The AACS is responsible for running the gyros, thrusters, attitude sensors, and actuators needed to keep Voyager oriented in space. Like the CCS, the AACS has a redundant design using TTL-based processors and 18-bit words. The same 4k of redundant plated-wire memory was used, and many instructions were shared between the two computers. To handle three-axis attitude control in a more memory-efficient manner, the AACS uses index registers to point to the same block of code multiple times.
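The article doesn’t spell out the AACS instruction set, but the payoff of indexed addressing is easy to mimic in a high-level sketch: one copy of the control code serves all three axes, with only an index changing between passes. The control law and the numbers below are invented purely for illustration.

AXES = ("roll", "pitch", "yaw")
gains  = [0.8, 0.8, 0.5]         # per-axis gain table (made-up values)
errors = [0.02, -0.01, 0.005]    # attitude errors in radians (made-up values)

def thruster_command(i):
    # The same instructions run for every axis; only the index i changes,
    # much as the AACS pointed one block of code at each axis in turn.
    return -gains[i] * errors[i]

for i, name in enumerate(AXES):
    print(f"{name}: command {thruster_command(i):+.4f}")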

Years of Boredom, Minutes of Terror

Rounding out the computers of Voyager is the Flight Data Subsystem or FDS, the culprit in the latest “glitch” on Voyager 1, which was traced to a corrupted memory location and nearly ended the extended interstellar mission. Compared with the Viking-descended CCS and AACS, the FDS was to be a completely new kind of computer, custom-made for the demands of a torrent of data from eleven scientific experiments and hundreds of engineering sensors during the high-intensity periods of planetary flybys, while not being overbuilt for the long, boring cruises between the planets.

The FDS was designed strictly to handle the data to and from the eleven separate scientific instruments on Voyager, as well as the engineering data from dozens of sensors installed around the spacecraft. The need for a dedicated data computer was apparent early on in the Voyager design process, when it became clear that the torrent of data streaming from the scientific platforms during flybys would outstrip the capabilities of any of the hard-wired data management systems used in previous deep space probes.

One of the eight cards comprising the Voyager FDS. Covered with discrete CMOS chips, this card bears the “MJS77” designation; “Mariner Jupiter Saturn 1977” was the original name of the Voyager mission. Note the D-sub connectors for inter-card connections. Source: NASA/JPL.

This led to an initial FDS design using the same general architecture as the CCS and AACS — dual TTL processors, 18-bit word width, and the same redundant 4k of plated-wire memory. But when the instruction time of a breadboard version of this machine was measured, it turned out to be about half the speed necessary to support peak flyby data throughput.

Voyager FDS. Source: National Air and Space Museum.

To double the speed, direct memory access circuits were added. This allowed data to move in and out of memory without having to go through the processor first. Further performance gains were made by switching the processor design to CMOS chips, a risky move in the early 1970s. Upping the stakes was the decision to move away from the reliable plated-wire memory to CMOS memory, which could be accessed much faster.

The speed gains came at a price, though: volatility. Unlike plated-wire memory, CMOS memory chips lose their data if the power is lost, meaning a simple power blip could potentially erase the FDS memory at the worst possible time. JPL engineers worked around this with brutal simplicity — rather than power the FDS memories from the main spacecraft power systems, they ran dedicated power lines directly back to the radioisotope thermoelectric generators (RTG) powering the craft. This means the only way to disrupt power to the CMOS memories would be a catastrophic loss of all three RTGs, in which case the mission would be over anyway.

Physically, the FDS was quite compact, especially for a computer built of discrete chips in the early 1970s. Unfortunately, it’s hard to find many high-resolution photos of the flight hardware, but the machine appears to be built from eight separate cards that are attached to a card cage. Each card has a row of D-sub connectors along the top edge, which appear to be used for card-to-card connections in lieu of a backplane. A series of circular MIL-STD connectors provide connection to the spacecraft’s scientific instruments, power bus, communications, and the Data Storage Subsystem (DSS), the digital 8-track tape recorder used to buffer data during flybys.

Next Time?

Even with the relative lack of information on Voyager’s computers, there’s still a lot of territory to cover, including some of the interesting software architecture techniques used, and the details of how new software is uploaded to spacecraft that are currently almost a full light-day distant. And that’s not to mention the juicy technical details likely to be contained in a paper hidden away in some dusty box in a Kansas library. Here’s hoping that I can get my hands on that document and follow up with more details of the Voyager computers.

NASA Is Now Tasked With Developing A Lunar Time Standard, Relativity Or Not

Por: Lewin Day
2 Mayo 2024 at 14:00

A little while ago, we talked about the concept of timezones and the Moon. It’s a complicated issue, because on Earth, time is all about the Sun and our local relationship with it. The Moon and the Sun have their own weird thing going on, so time there doesn’t really line up well with our terrestrial conception of it.

Nevertheless, as humanity gets serious about doing Moon things again, the issue needs to be solved. To that end, NASA has now officially been tasked with setting up Moon time – just a few short weeks after we last talked about it! (Does the President read Hackaday?) Only problem is, physics is going to make it a damn sight more complicated!

Relatively Speaking

You know it’s serious when the White House sends you a memo. “Tell NASA to invent lunar time, and get off their fannies!”

The problem is all down to general and special relativity. The Moon is in motion relative to Earth, and it also has a lower gravitational pull. We won’t get into the physics here, but it basically means that time literally moves at a different pace up there. Time on the Moon passes on average 58.7 microseconds faster over a 24-hour Earth day. It’s not constant, either—there is a certain degree of periodic variation involved.
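Using that average figure and ignoring the periodic variation, a quick back-of-the-envelope shows how the offset piles up; the rate comes from above, the rest is simple arithmetic.

RATE_US_PER_DAY = 58.7  # average microseconds gained per 24-hour Earth day

def drift_ms(days):
    # Cumulative offset in milliseconds, ignoring the periodic variation.
    return RATE_US_PER_DAY * days / 1000.0

print(f"after one year:  {drift_ms(365.25):.1f} ms")   # ~21.4 ms
print(f"after ten years: {drift_ms(3652.5):.1f} ms")   # ~214 ms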

It’s a tiny difference, but it’s cumulative over time. Plus, as it is, many space and navigational applications need the utmost in precise timing to function, so it’s not something NASA can ignore. Even if the agency just wanted to use UTC and call it good, the relativity problem would prevent that from being a workable solution.

Without a reliable and stable timebase, space agencies like NASA would struggle to establish useful infrastructure on the Moon. Things like lunar satellite navigation wouldn’t work accurately without taking into account the time slip, for example. GPS is highly sensitive to relativistic time effects, and indeed relies upon them to function. Replicating it on the Moon is only possible if these factors are accounted for. Looking even further ahead, things like lunar commerce or secure communication would be difficult to manage reliably without stable timebases for equipment involved.

Banks of atomic clocks—like these at the US Naval Observatory—are used to establish high-quality time standards. Similar equipment may need to be placed on the Moon to establish Coordinated Lunar Time (LTC). Credit: public domain

Still, the order to find a solution has come down from the top. A memo from the Executive Office of the President charged NASA with its task to deliver a standard solution for lunar timing by December 31, 2026. Coordinated Lunar Time (LTC) must be established in a way that is traceable to Coordinated Universal Time (UTC). That will enable operators on Earth to synchronize operations with crews or unmanned systems on the Moon itself. LTC is required to be accurate enough for scientific and navigational purposes, and it must be resilient to any loss of contact with systems back on Earth.

It’s also desired that the future LTC standard will be extensible and scalable to space environments we may explore in future beyond the Earth-Moon system itself. In time, NASA may find it necessary to establish time standards for other celestial bodies, due to their own unique differences in relative velocity and gravitational field.

The deadline means there’s time for NASA to come up with a plan to tackle the problem. However, for a federal agency, less than three years is not exactly a lengthy time frame. It’s likely that whatever NASA comes up with will involve some kind of timekeeping equipment deployed on the Moon itself. This equipment would thus be subject to the time shift relative to Earth, making it easier to track differences in time between the lunar and terrestrial time-realities.

The US Naval Observatory doesn’t just keep careful track of time, it displays it on a big LED display for people in the area. NASA probably doesn’t need to establish a big time billboard on the Moon, but it’d be cool if they did. Credit: Votpuske, CC BY 4.0

Great minds are already working on the problem, like Kevin Coggins, NASA’s space communications and navigation chief. “Think of the atomic clocks at the U.S. Naval Observatory—they’re the heartbeat of the nation, synchronizing everything,” he said in an interview. “You’re going to want a heartbeat on the moon.”

For now, establishing LTC remains a project for the American space agency. It will work on the project in partnership with the Departments of Commerce, Defense, State and Transportation. One fears for the public servants required to coordinate meetings amongst all those departments.

Establishing new time standards isn’t cheap. It requires smart minds, plenty of research and development, and some serious equipment. Space-rated atomic clocks don’t come cheap, either. Regardless, the U.S. government hopes that NASA will lead the way for all spacefaring nations in this regard, setting a lunar time standard that can serve future operations well.


VAR Is Ruining Football, and Tech Is Ruining Sport

Por: Lewin Day
29 Abril 2024 at 14:00
The symbol of all that is wrong with football.

Another week in football, another VAR controversy to fill the column inches and rile up the fans. If you missed it, Coventry scored a last-minute winner in extra time in a crucial match—an FA Cup semi-final. Only, oh wait—computer says no. VAR ruled Haji Wright was offside, and the goal was disallowed. Coventry fans screamed that the system got it wrong, but no matter. Man United went on to win and dreams were forever dashed.

Systems like the Video Assistant Referee were brought in to make sport fairer, with the aim that they would improve the product and leave fans and competitors better off. And yet, years later, with all this technology, we find ourselves up in arms more than ever.

It’s my sincere belief that technology is killing sport, and the old ways were better. Here’s why.

The Old Days

Moments like these came down to the people on the pitch. Credit: Sdo216, CC BY-SA 3.0

For hundreds of years, we adjudicated sports the same way. The relevant authority nominated some number of umpires or referees to control the game. The head referee was the judge, jury, and executioner as far as rules were concerned. Players played to the whistle, and a referee’s decision was final. Whatever happened, happened, and the game went on.

It was not a perfect system. Humans make mistakes. Referees would make bad calls. But at the end of the day, when the whistle blew, the referee’s decision carried the day. There was no protesting it—you had to suck it up and move on.

This worked fine until the advent of a modern evil—the instant replay. Suddenly, stadiums were full of TV cameras that captured the play from all angles. Now and then, it would become obvious that a referee had made a mistake, with television stations broadcasting incontrovertible evidence to thousands of viewers across the land. A ball at Wimbledon was in, not out. A striker was on side prior to scoring. Fans started to groan and grumble. This wasn’t good enough!

And yet, the system held strong. As much as it pained the fans to see a referee screw over their favored team, there was nothing to be done. The referee’s call was still final. Nobody could protest or overrule the call. The decision was made, the whistle was blown. The game rolled on.

Then somebody had a bright idea. Why don’t we use these cameras and all this video footage, and use it to double check the referee’s work? Then, there’ll never be a problem—any questionable decision can be reviewed outside of the heat of the moment. There’ll never be a bad call again!

Oh, what a beautiful solution it seemed. And it ruined everything.

The Villain, VAR

The assistant video assistant referees are charged with monitoring various aspects of the game and reporting to the Video Assistant Referee (VAR). The VAR then reports to the referee on the ground, who may overturn a decision, hold firm, or look at the footage themself on a pitchside display. Credit: Niko4it, CC BY-SA 4.0

Enter the Video Assistant Referee (VAR). The system was supposed to bring fairness and accuracy to a game fraught with human error. The Video Assistant Referee was an official that would help guide the primary referee’s judgement based on available video evidence. They would be fed information from a cadre of Assistant Video Assistant Referees (AVARs) who sat in the stadium behind screens, reviewing the game from all angles. No, I didn’t make that second acronym up.

It was considered a technological marvel. So many cameras, so many views, so much slow-mo to pore over. The assembled VAR team would look into everything from fouls to offside calls. The information would be fed to the main referee on the pitch, and they could refer to a pitchside video replay screen if they needed to see things with their own eyes.

A VAR screen mounted on the pitch for the main referee to review as needed. Credit: Carlos Figueroa, CC BY-SA 4.0

The key was that VAR was to be an assistive tool. It was to guide the primary referee, who still had the final call at the end of the day.

You’d be forgiven for thinking that giving a referee more information to do their job would be a good thing.  Instead, the system has become a curse word in the mouths of fans, and a scourge on football’s good name.

From its introduction, VAR began to pervert the game of football. Fans were soon decrying the system’s failures, as entire championships fell the wrong way due to unreliability in VAR systems. Assistant referees were told to hold their offside calls to let the video regime take over. Players were quickly chided for demanding video reviews time and again. New rules would see yellow cards issued for players desperately making “TV screen” gestures in an attempt to see a rival’s goal overturned. Their focus wasn’t on the game, but on gaming the system in charge of it.

Fans and players are so often stuck waiting for the penny to drop that celebrations lose any momentum they might have had. Credit: Rlwjones, CC BY-SA 4.0

VAR achieves one thing with brutal technological efficiency: it sucks the life out of the game. The spontaneity of celebrating a goal is gone. Forget running to the stands, embracing teammates, and punching the air in sweet elation. Instead, so many goals now lead to minute-long reviews while the referee consults with those behind the video screens and reviews the footage. Fans sit in stunted silence, stuck in the dreaded drawn-out suspense of “goal” or “no goal.”

The immediacy and raw emotion of the game has been shredded to pieces. Instead of jumping in joy, fans and players sit waiting for a verdict from an unseen, remote official. The communal experience of instant joy or despair is muted by the system’s mere presence. What was once a straightforward game now feels like a courtroom drama where every play can be contested and overanalyzed.

It’s not just football where this is a problem, either. Professional cricket is now weighed down with microphone systems to listen out for the slightest snick of bat on ball. Tennis, weighed down by radar reviews of line calls. The interruptions never cease—because it’s in every player’s interest to whip out the measuring tape whenever it would screw over their rival. The more technology, the more reviews are made, and the further we get from playing out the game we all came to see.

Making Things Right

Enough of this nonsense! Blow the whistle and move on. Credit: SounderBruce, CC BY-SA 4.0

With so much footage to review, and so many layers of referees involved, VAR can only slow football down. There’s no point trying to make it faster or trying to make it better. The correct call is to scrap it entirely.

As it stands, good games of football are being regularly interrupted by frustrating video checks. Even better games are being ruined when the VAR system fails or a bad call still slips through. Moments of jubilant celebration are all too often brought to naught when someone’s shoelace was thought to be a whisker ahead of someone’s pinky toe in a crucial moment of the game.

Yes, bad calls will happen. Yes, these will frustrate the fans. But they will frustrate them far less than the current way of doing things. It’s my experience that fans get over a bad call far faster when it’s one ref and a whistle. When it’s four referees, sixteen camera angles, and a bunch of lines on the video screen? They’ll rage for days that this mountain of evidence suggests their team was ripped off. They won’t get over it. They’ll moan about it for years.

Let the referees make the calls. Refereeing is an art form. A good referee understands the flow of the game, and knows when to let the game breathe versus when to assert control. This subtle art is being lost to the halting interruptions of the video inspection brigade.

Football was better before. They were fools to think they could improve it by measuring it to the nth degree. Scrap VAR, scrap the interruptions. Put it back on the referees on the pitch, and let the game flow.

Mining and Refining: Uranium and Plutonium

24 Abril 2024 at 14:00

When I was a kid we used to go to a place we just called “The Book Barn.” It was pretty descriptive, as it was just a barn filled with old books. It smelled pretty much like you’d expect a barn filled with old books to smell, and it was a fantastic place to browse — all of the charm of an old library with none of the organization. On one visit I found a stack of old magazines, including a couple of Popular Mechanics from the late 1940s. The cover art always looked like pulp science fiction, with a pipe-smoking father coming home from work to his suburban home in a flying car.

But the issue that caught my eye had a cover showing a couple of rugged men in a Jeep, bouncing around the desert with a Geiger counter. “Build your own uranium detector,” the caption implored, suggesting that the next gold rush was underway and that anyone could get in on the action. The world was a much more optimistic place back then, looking forward as it was to a nuclear-powered future with electricity “too cheap to meter.” The fact that sudden death in an expanding ball of radioactive plasma was potentially the other side of that coin never seemed to matter that much; one tends to abstract away realities that are too big to comprehend.

Things are more complicated now, but uranium remains important. Not only is it needed to build new nuclear weapons and maintain the existing stockpile, it’s also an important part of the mix of non-fossil-fuel electricity options we’re going to need going forward. And getting it out of the ground and turned into useful materials, including its radioactive offspring plutonium, is anything but easy.

Lixiviants and Leachates

Despite its rarity in everyday life, uranium is surprisingly abundant. It’s literally as common as dirt; stick a shovel into the ground almost anywhere on Earth and you’ll probably come up with a detectable amount of uranium. The same goes for seawater, which has about 3.3 micrograms of uranium dissolved in every liter, on average. But as with most elements, uranium isn’t evenly distributed, resulting in deposits that are far easier to exploit commercially than others. Australia is the winner of this atomic lottery, with over 2 million tonnes of proven reserves, followed by Kazakhstan with almost a million tonnes, and Canada with 873,000 tonnes.

While most of the attention uranium garners has to do with the properties of its large, barely stable nucleus, the element also participates in a lot of chemical reactions, thanks to its 92 electrons. The most common uranium compounds are oxides like uranium (IV) oxide, or uranium dioxide (UO2), the main mineral in the ore uraninite, also known as pitchblende. Uraninite also contains some triuranium octoxide (U3O8), which forms when UO2 reacts with atmospheric oxygen. The oxides make up the bulk of commercially significant ores, with at least a dozen other minerals including uranium silicates, titanates, phosphates, and vanadates being mined somewhere in the world.

Getting uranium out of the ground used to be accomplished through traditional hard-rock mining techniques, where ore is harvested from open-pit mines or via shafts and tunnels running into concentrated seams. The ore is then put through the usual methods of extraction that we’ve seen before in this series, such as crushing and grinding followed by physical separation steps like centrifugation, froth flotation, and filtration. However, the unique chemical properties of uranium, especially its ready solubility, make in situ leaching (ISL) an attractive alternative to traditional extraction.

ISL is a hydrometallurgical process that has become the predominant extraction method for uranium. ISL begins by drilling boreholes into an ore-bearing seam, either from drill rigs on the surface or via tunnels and shafts dug by traditional mining methods. The boreholes are then connected to injection wells that pump a chemical leaching agent or lixiviant into the holes. For uranium, the lixiviant is based on the minerals in the ore and the surrounding rock, and is generally something like a dilute sulfuric acid or an aqueous solution of sodium bicarbonate. Oxygen is often added to the solution, either via the addition of hydrogen peroxide or by bubbling air through the lixiviant. The solution reacts with and solubilizes the uranium minerals in the ore seam.

ISL offers huge advantages compared to conventional mining. Although uranium is abundant, it’s still only a small percentage of the volume of the rock bearing it, and conventional mining requires massive amounts of material to be drilled and blasted out of the ground and transported to the surface for processing. ISL, on the other hand, gets the uranium into aqueous solution while it’s still in the ground, meaning it can be pumped to the processing plant. This makes ISL a more continuous flow process, as opposed to the more batch-wise processing methods of conventional mining. Plus, the lixiviant can be tailored to the minerals in the ore so that only the uranium is dissolved, leaving the rock matrix and unwanted minerals underground.

Reacting With Hex

Yellow cake is a mixture of various oxides of uranium. Source: Nuclear Regulatory Commission, public domain.

Uranium dioxide (UO2) is the primary endpoint of uranium refinement. It’s a dark gray powder; the so-called “yellow cake” powder, which is also produced by chemical leaching, is an intermediate form in uranium processing and contains a mix of oxides, particularly U3O8. Natural uranium oxide, however, is not especially useful as a nuclear fuel; only a few reactors in the world, such as the Canada Deuterium Uranium (CANDU) reactor can use natural uranium directly. Every other application requires the uranium dioxide to be enriched to some degree.

Enrichment is the process of increasing the concentration of the rare fissile isotope 235U in the raw uranium dioxide relative to the more abundant, non-fissile isotope 238U. Natural uranium is about 99.3% 238U, which can’t sustain a chain reaction under normal conditions, but with three fewer neutrons in its nucleus, 235U is just unstable enough to be fissionable under the right conditions.

Unlike refining, which takes advantage of the chemical properties of uranium, enrichment is based on its nuclear properties. Separating one isotope from another, especially when they differ by only three neutrons, isn’t a simple process. The vast majority of the effort that went into the Manhattan Project during World War II was directed at finding ways to sort uranium atoms, and many of those methods are still in use to this day.

For most of the Cold War period, the principal method for enriching uranium was the gaseous diffusion method. Uranium oxide is first converted to uranium tetrafluoride by reaction with hydrofluoric acid; the tetrafluoride is then treated with fluorine to yield uranium pentafluoride and finally the volatile uranium hexafluoride:

UO_2 + 4HF \rightarrow UF_4 + 2H_2O

2UF_4 + F_2 \rightarrow 2UF_5

2UF_5 + F_2 \rightarrow 2UF_6

Cascade of gas centrifuges used to enrich uranium, circa 1984. Source: Nuclear Regulatory Commission, public domain.

The highly volatile, incredibly corrosive uranium hexafluoride gas, or hex, is pumped at high pressure into a pressure vessel that contains a semi-permeable separator made from sintered nickel or aluminum. The pore size is tiny, only about 20 nanometers. Since the rate at which a gas molecule passes through a pore depends on its mass, the slightly lighter 235UF6 tends to get through the barrier faster, leaving the high-pressure side of the chamber slightly depleted of the desirable 235UF6. Multiple stages are cascaded together, with the slightly enriched output of each stage acting as the input for the next stage, eventually resulting in the desired enrichment — either low-enriched uranium (LEU), which is in the 2-3% 235U range needed for civilian nuclear reactor fuel, or high-enriched uranium (HEU), which is anything greater than 20% enriched, including the 85-90% required for nuclear weapons.
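The reason so many stages are needed falls out of Graham’s law: the ideal per-stage separation factor is only the square root of the ratio of the two molecular masses. Here is a rough sketch of the arithmetic, with the feed and product assays assumed purely for the sake of the estimate.

from math import sqrt, log

M_LIGHT = 235.04 + 6 * 19.00   # g/mol, 235UF6
M_HEAVY = 238.05 + 6 * 19.00   # g/mol, 238UF6
alpha = sqrt(M_HEAVY / M_LIGHT)            # ideal factor per stage, ~1.0043

feed    = 0.0072 / (1 - 0.0072)            # abundance ratio of natural uranium
product = 0.03 / (1 - 0.03)                # assumed ~3% reactor-grade target
stages = log(product / feed) / log(alpha)
print(f"alpha = {alpha:.4f}, ideal stages ~ {stages:.0f}")   # roughly 340

Real cascades needed even more stages than this ideal figure, since actual stage performance falls short of the theoretical maximum.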

These days, gaseous diffusion is considered largely obsolete and has given way to gas centrifugation enrichment. In this method, gaseous hex is pumped into a tall, narrow cylinder spinning in a vacuum at very high speed, often greater than 50,000 revolutions per minute. The heavier 238UF6 is flung against the outer wall of the centrifuge while the lighter 235UF6 migrates toward the center. The slightly enriched hex is pumped from the center of the centrifuge and fed into the next stage in a cascade, resulting in the desired enrichment. The enriched hex can then be chemically converted back into uranium dioxide for processing into fuel.
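The centrifuge’s advantage shows up in the same kind of estimate: at radial equilibrium, the separation factor grows with the difference between the two molecular masses rather than their ratio. The peripheral speed and temperature below are assumed round numbers, and real machines multiply the effect further with countercurrent flow along the rotor, so treat this strictly as an order-of-magnitude sketch.

from math import exp

R  = 8.314    # J/(mol*K), gas constant
dM = 0.003    # kg/mol, mass difference between 238UF6 and 235UF6
v  = 500.0    # m/s, peripheral speed (assumed)
T  = 320.0    # K, gas temperature (assumed)

alpha_radial = exp(dM * v**2 / (2 * R * T))
print(f"ideal radial separation factor ~ {alpha_radial:.2f}")  # ~1.15

Even under these conservative assumptions, that is roughly 1.15 per machine versus roughly 1.004 per diffusion stage, which is a big part of why centrifuge plants can be so much smaller and far less power-hungry.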

Made, Not Found

Unlike any of the other elements we’ve covered in the “Mining and Refining” series so far, plutonium is neither mined nor refined, at least not in the traditional sense. Trace amounts of plutonium do exist in nature, but at the parts per trillion level. So to get anything approaching usable quantities, plutonium, the primary fuel for nuclear weapons, needs to be synthesized in a nuclear reactor.

The main fissile isotope of plutonium, 239Pu, is made by bombarding 238U with neutrons. Each atom of 238U that absorbs a neutron becomes 239U, a radioactive isotope with a half-life of only 23.5 minutes. That decays via beta radiation to neptunium-239 (239Np), another short half-life (about 2.4 days) isotope that decays to 239Pu:

Uranium decay series. Adding a neutron to uranium-238 in a reactor “breeds” plutonium-239.
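Written out in the same style as the fluorination equations above, and using the half-lives just quoted, the chain in the figure is:

{}^{238}U + n \rightarrow {}^{239}U \xrightarrow{\beta^-,\ 23.5\ \text{min}} {}^{239}Np \xrightarrow{\beta^-,\ \sim 2.4\ \text{d}} {}^{239}Pu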

The process of creating 239Pu from uranium is called “breeding.” From the look of the reaction above, it seems like a civilian nuclear reactor, with its high neutron flux and fuel rods composed of about 96% unenriched uranium, would be the perfect place to make plutonium. There are practical reasons why that won’t work, though, and it has to do with one little neutron.

Elemental plutonium “buttons” are recovered from the bottoms of crucibles after reduction. Buttons are the raw material that then goes to forging and machining to form the pits of nuclear weapons. Source: Los Alamos National Lab, public domain.

Plutonium isn’t really enriched the way that uranium is. Rather, plutonium is graded by the amount of 240Pu it contains; the lower the concentration relative to 239Pu, the higher the grade. That’s because 240Pu tends to undergo spontaneous fission, releasing neutrons that could pre-detonate the plutonium core of the bomb before it’s completely imploded. Weapons-grade plutonium has to have less than 7% 240Pu, and the longer the reaction is allowed to continue, the more it accumulates. Weapons-grade plutonium can only cook for a couple of weeks, which means a civilian reactor would need to be shut down far too often for it to both generate power and synthesize plutonium. So, special production reactors are used to create fissile plutonium.

Once the fuel rods in a production reactor are finished, the plutonium is chemically separated from any remaining 238U and other contaminating fission byproducts using a long, complicated process of extraction. One such process, PUREX (plutonium uranium reduction extraction), dissolves the spent fuel in nitric acid, pulls the uranium and plutonium into an organic solvent such as tributyl phosphate diluted in kerosene, and then uses reducing agents to bring the plutonium back into an aqueous phase on its own. Plutonium dioxide can then be reduced to metallic plutonium, for example by heating it with powdered aluminum. The resulting metal is notoriously difficult to machine, and so is often alloyed with gallium to stabilize its crystal structure and make it easier to handle.
