
The SS United States: The Most Important Ocean Liner We May Soon Lose Forever

By: Maya Posch
27 June 2024 at 14:30

Although it's often said that the era of ocean liners came to an end by the 1950s with the rise of commercial aviation, reality isn't quite that clear-cut. Out of the troubled 1940s arose a new kind of ocean liner, one using cutting-edge materials and propulsion, with hybrid civil and military use as the default, leading to a range of fascinating design decisions. This was the context in which the SS United States was born, with the beating heart of the US's fastest battleships, lightweight aluminium structures, and survivability built into every single aspect of its design.

Outpacing the super-fast Iowa-class battleships with which it shares a lot of DNA (thanks to its lack of heavy armor and triple 16″ turrets), it easily became the fastest ocean liner, setting speed records that took decades to be beaten by other ocean-going vessels, though no ocean liner ever truly beat it on speed or comfort. Tricked out in the most tasteful non-flammable 1950s art and decoration imaginable, it would still be the fastest and most comfortable way to cross the Atlantic today. Unfortunately, ocean liners are no longer considered a way to travel in this era of commercial aviation, leaving the SS United States and its kin either scrapped or stuck in limbo.

In the case of the SS United States, so far it has managed to escape the cutting torch, but while in limbo many of its fittings were sold off at auction, and the conservation group which now owns the ship is desperately looking for a way to fund its restoration. Most recently, the owner of the pier in Philadelphia where the ship is moored got the ship's eviction approved by a judge, leaving very tough choices to be made by September.

A Unique Design

WW II-era United States Maritime Commission (MARCOM) poster.

The designer of the SS United States was William Francis Gibbs, who despite being a self-taught engineer managed to translate his life-long passion for shipbuilding into a range of very notable ships. Many of these were designed at the behest of the United States Maritime Commission (MARCOM), which was created by the Merchant Marine Act of 1936 and abolished in 1950. MARCOM's task was to create a merchant shipbuilding program for hundreds of modern cargo ships that would replace the World War I vintage vessels which formed the bulk of the US Merchant Marine. As a hybrid civil and federal organization, the merchant marine is intended to provide the logistical backbone for the US Navy in case of war or other large-scale conflict.

The first major vessel to be commissioned for MARCOM was the SS America, which was an ocean liner commissioned in 1939 and whose career only ended in 1994 when it (then named the American Star) wrecked at the Canary Islands. This came after it had been sold in 1992 to be turned into a five-star hotel in Thailand. Drydocking in 1993 had revealed that despite the advanced age of the vessel, it was still in remarkably good condition.

Interestingly, the last merchant marine vessel to be commissioned by MARCOM was the SS United States, which would be a hybrid civilian passenger liner and military troop transport. Its sibling, the SS America, was in Navy service from 1941 to 1946 as the USS West Point (AP-23), carrying over 350,000 troops during the war period, more than any other Navy troopship. Its big sister would thus be required to do all that and much more.

Need For Speed

SS United States colorized promotional B&W photograph. The ship's name and an American flag have been painted in position here as both were missing when this photo was taken during 1952 sea trials.

William Francis Gibbs’ naval architecture firm – called Gibbs & Cox by 1950 after Daniel H. Cox joined – was tasked to design the SS United States, which was intended to be a display of the best the United States of America had to offer. It would be the largest, fastest ocean liner and thus also the largest and fastest troop and supply carrier for the US Navy.

Courtesy of the major metallurgical advances made during WW II, and with the full backing of the US Navy, the design featured a military-style propulsion plant and a heavily compartmentalized layout following that of e.g. the Iowa-class battleships. This meant two separate engine rooms and similar levels of redundancy elsewhere, to isolate any flooding or other damage. Meanwhile the superstructure was built out of aluminium, making it both very light and highly corrosion-resistant. The eight US Navy M-type boilers (run at only 54% of capacity) and a four-shaft propeller arrangement took lessons learned with fast US Navy ships to reduce vibrations and cavitation to a minimum. These lessons include the five- and four-bladed propeller design also seen on the Iowa-class battleships in their later configurations.

Another lessons-learned feature was top-to-bottom fireproofing after the terrible losses of the SS Morro Castle and SS Normandie: no wood, flammable fabrics, or other combustible materials onboard, leading to the use of glass, metal, and spun-glass fiber, along with fireproof fabrics and carpets. This extended to the art pieces onboard the ship, as well as its grand piano, made from a mahogany whose reluctance to ignite was demonstrated by trying to set it alight with gasoline.

The actual maximum speed that the SS United States can reach is still unknown, as it was originally a military secret. Its first speed trial supposedly saw the vessel hit an astounding 43 knots (80 km/h), though after the ship was retired from the United States Lines (USL) by the 1970s and no longer seen as a naval auxiliary asset, its top speed during the June 10, 1952 trial was revealed to be 38.32 knots (70.97 km/h). In service with USL, its cruising speed was 36 knots, earning it the Blue Riband and its rightful place as America's Flagship.

A Fading Star

The SS United States was withdrawn from passenger service in 1969, quite unexpectedly. Although the USL was no longer using the vessel, it remained a US Navy reserve vessel until 1978, meaning that it stayed sealed off to anyone but US Navy personnel during that period. Once the US Navy no longer deemed the vessel relevant to its needs in 1978, it was sold off, leading to a succession of owners. Notable among them was Richard Hadley, who planned to convert it into seagoing time-share condominiums and auctioned off all the interior fittings in 1984 before his financing collapsed.

In 1992, Fred Mayer wanted to create a new ocean liner to compete with the Queen Elizabeth, leading him to have the ship's asbestos and other hazardous materials removed in Ukraine, after which the vessel was towed back to Philadelphia in 1996, where it has remained ever since. Two more owners, including Norwegian Cruise Line (NCL), briefly came onto the scene, but economic woes scuttled plans to revive it as an active ocean liner. Ultimately NCL sought to sell the vessel off for scrap, which led the SS United States Conservancy (SSUSC) to take over ownership in 2010 and preserve the ship while seeking ways to restore and redevelop the vessel.

Considering that the running mate of the SS United States (the SS America) was lost only a few years prior, this leaves the SS United States as the only surviving Gibbs ocean liner, and a poignant reminder of what was once a highlight of the US's maritime prowess. Compared to the United Kingdom's record, with the Queen Elizabeth 2 (QE2, in service from 1969) now a floating hotel in Dubai and the Queen Mary 2 making her maiden voyage in 2004, the US record looks rather meager when it comes to preserving its ocean liner legacy.

End Of The Line?

The curator of the Iowa-class USS New Jersey (BB-62, currently fresh out of drydock), Ryan Szimanski, walked over from his museum ship last year to take a look at the SS United States, which is moored literally within viewing distance of his own pride and joy. Through the videos he made, one gains a good understanding of both how stripped the interior of the ship is and how amazingly well-conserved the vessel remains today. Even after decades without drydocking or in-depth maintenance, the ship looks like it could slip into a drydock tomorrow and come out like new a year or so later.

At the end of all this, the question remains whether the SS United States deserves to be preserved. There are many arguments for why this would be the case: its unique history as part of the US Merchant Marine, its relation to the highly successful SS America, its status as effectively a sister ship to the four Iowa-class battleships, and its role as a reminder of how important the US Merchant Marine once was. The latter especially is a point that professor Sal Mercogliano (of What's Going on With Shipping? fame) is rather passionate about.

Currently the SSUSC is in talks with a New York-based real-estate developer about a redevelopment concept, but this was thrown into peril when the owner of the pier suddenly doubled the rent, leading to the eviction by September. Unless something changes for the better soon, the SS United States stands a good chance of soon following the USS Kitty Hawk, USS John F. Kennedy (which nearly became a museum ship) and so many more into the scrapper’s oblivion.

What, one might ask, is truly in the name of the SS United States?

8-Tracks Are Back? They Are In My House

10 June 2024 at 14:00

What was the worst thing about the 70s? Some might say the oil crisis, inflation, or even disco. Others might tell you it was 8-track tapes, no matter what was on them. I’ve heard that the side of the road was littered with dead 8-tracks. But for a while, they were the only practical way to have music in the car that didn’t come from the AM/FM radio.

If you know me at all, you know that I can’t live without music. I’m always trying to expand my collection by any means necessary, and that includes any format I can play at home. Until recently, that list included vinyl, cassettes, mini-discs, and CDs. I had an 8-track player about 20 years ago — a portable Toyo that stopped working or something. Since then, I’ve wanted another one so I can collect tapes again. Only this time around, I’m trying to do it right by cleaning and restoring them instead of just shoving them in the player willy-nilly.

Update: I Found a Player

A small 8-track player and equally small speakers, plus a stack of VHS tapes.
I have since cleaned it.

A couple of weeks ago, I was at an estate sale and found a little stereo component 8-track player and some small speakers; there was no receiver in sight. While I was still at the sale, I hooked the player up to the little speakers and made sure it played and changed programs, then bought the lot for $15 total because it was 75%-off day and they were overpriced originally.

Well, I got it home and it no longer made sound or changed programs. I thought about the play head inside and how dirty it must be, based on the smoker residue on the front plate of the player. Sure enough, I blackened a few Q-tips and it started playing sweet tunes again. This is when I figured out it wouldn’t change programs anymore.

I found I couldn’t get very far into the player, but I was able to squirt some contact cleaner into the program selector switch. After many more desperate button presses, it finally started changing programs again. Hooray!

I feel I got lucky. If you want to read about an 8-track player teardown, check out Jenny List’s awesome article.

These Things Are Not Without Their Limitations

A diagram of an 8-track showing the direction of tape travel, the program-changing solenoid, the playback head, the capstan and pinch roller, and the path back to the reel.
This is what’s going on, inside and out. Image via 8-Track Heaven, a site which has itself gone to 8-Track Heaven.

So now, the problem is the tapes themselves. I think there are two main reasons why people think that 8-tracks suck. The first one is the inherent limitations of the tape. Although there were 90- and 120-minute tapes, most of them were more like 40-60 minutes, divided up into four programs. Each program uses one track for the left channel and one for the right; four programs times two tracks gives you your eight tracks and stereo sound.

The tape is in a continuous loop around a single hub. Open one up and you’ll see that the tape comes off the center toward the left and loops back onto the outside from the right. 8-tracks can’t be rewound, only fast-forwarded, and it doesn’t seem like too many players even had this option. If you want to listen to the first song on program one, for instance, you’d better at least tolerate the end of program four.

The tape is divided into four programs, which are separated by a foil splice. A sensor in the machine raises or lowers the playback head depending on the program to access the appropriate tracks (1 and 5, 2 and 6, and so on), as sketched below.
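
To make the track layout concrete, here's a tiny Python sketch (mine, purely illustrative) of how the four programs map onto track pairs, following the 1-and-5, 2-and-6 pairing described above:

```python
# Illustrative sketch: an 8-track cartridge has four programs, each of which
# uses a pair of tracks (left and right channel).
def tracks_for_program(program: int) -> tuple[int, int]:
    """Return the (left, right) track pair for program 1 through 4."""
    if not 1 <= program <= 4:
        raise ValueError("8-track cartridges have programs 1 through 4")
    return (program, program + 4)

for p in range(1, 5):
    left, right = tracks_for_program(p)
    print(f"Program {p}: left channel on track {left}, right channel on track {right}")
```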

Because of the 10-12 minute limit of each program, albums were often rearranged so that songs fit better around the loud solenoidal ka-chunk of each program change.

For a lot of people, this was outright heresy. Then you have to consider that not every album could fit neatly within four programs, so some tracks faded out for the program change, and then faded back in, usually in the middle of the guitar solo.

Other albums fit into the scheme with some rearrangement, but they did so at the expense of silence on one or more of the programs. Check out the gallery below to see all of these conditions, plus one that divided up perfectly without any continuations or silence.

In the gallery: Jerry Reed's Texas Bound and Flyin', Yes' Fragile (it's pink!), Fleetwood Mac's Mystery To Me, Blood, Sweat & Tears' Greatest Hits, and Dolly Parton's Here You Come Again, all on 8-track.

The second reason people dislike 8-tracks is that they just don't sound that good, especially since cassette tapes were already on the market. They didn't sound super great when they were new, and years of sitting around in cars and dusty basements and such didn't help. In my experience, at this point, some sound better than others. I suppose that once you get past the tape dropouts, it's all subjective.

What I Look For When Buying Tapes

The three most important things to consider are the pressure pads, the foil splices, and the pinch roller. All of these can be replaced, although some jobs are easier than others.

Start by looking at the pressure pads. These are either made of foam that's covered with a slick surface so the tape can slide along easily, or they are felt pads on a sproingy metal spring like in a cassette tape. You want to see felt pads when you're out shopping, but you'll usually see foam. That's okay. You can get replacement foam on eBay or directly from 8-Track Avenue, or you can do what I do.

A bad, gross, awful pinch roller, and a good one.

After removing the old foam and scraping the plastic backing with my tweezers, I cut a piece of packing tape about 3/8″ wide — just enough to cover the width of some adhesive foam window seal. The weatherstripping’s response is about the same as the original foam, and the packing tape provides a nice, slick surface. I put a tiny strip of super glue on the adhesive side and stick one end down into the tape, curling it a little to rock it into position, then I press it down and re-tension the tape. The cool part is that you can do all this without opening up the tape by just pulling some out. Even if the original foam seems good, you should go ahead and replace it. Once you’ve seen the sticky, black powder it can turn to with time, you’ll understand why.

A copy of Jimi Hendrix's Are You Experienced? on 8-track with a very gooey pinch roller that has almost enveloped the tape.
An example of what not to buy. This one is pretty much hopeless unless you’re experienced.

The foil splices that separate the programs are another thing you can address without necessarily opening up the tape. As long as the pressure pads are good, shove that thing in the player and let it go until the ka-chunk, and then pull it out quickly to catch the splice. Once you've got the old foil off of it, use the sticky part of a Post-It note to realign the tape ends and keep them in place while you apply new foil.

Again, you can get sensing foil on eBay, either in a roll or in pre-cut strips that have that nice 60° angle to them. Don't try to use copper tape like I did. I'll never know whether it worked or not, because I accidentally let too much tape unspool from the hub while I was splicing it, but it seemed a little too heavy. Real-deal aluminium foil sensing tape is even lighter than copper tape.

One thing you can't do without at least opening the tape part way is replace the pinch roller. Fortunately, these are usually in pretty good shape, and you can generally tell right away if they are gooey without having to press a fingernail into them. Even so, I have salvaged the pinch rollers out of tapes I tried to save and couldn't, just to have some extras around.

If you’re going to open the tape up, you might as well take some isopropyl alcohol and clean the graphite off of the pinch roller. This will take a while, but is worth it.

Other Problems That Come Up

Sometimes, you shove one of these bad boys in the player and nothing happens. This usually means that the tape is seized up and isn't moving. Much like blowing into an N64 cartridge, whacking the tape against your thigh a few times is said to fix a seized tape, but that has not worked for me; so far I have been unable to fix a seized tape, though there are guides out there. Basically, you cut the tape somewhere, preferably at a foil splice, fix the tension, and splice it back together.

Another thing that can happen is called a wedding cake. Basically, you open up the cartridge and find that the inner loops of tape have raised up around the hub, creating a two-layer effect that resembles a wedding cake. I haven't successfully fixed one of these yet, but then I've only run across one so far. Basically, you pull the loops off of the center, re-tension the tape from the other side, and spin those loops back into the center. This person makes it look insanely easy.

Preventive Maintenance On the Player

As with cassette players, the general sentiment is that one should never actually use a head-cleaning tape as they are rough. As I said earlier, I cleaned the playback head thoroughly with 91% isopropyl alcohol and Q-tips that I wished were longer.

Dionne Warwick's Golden Hits on 8-track, converted to a capstan cleaner. Basically, there's no tape, and it has a bit of scrubby pad shoved into the pinch roller area.
An early set of my homemade pressure pads. Not the greatest.

Another thing I did to jazz up my discount estate sale player was to make a capstan-cleaning tape per these instructions on 8-Track Avenue. Basically, I took my poor Dionne Warwick tape that I couldn’t fix, threw away the tape, kept the pinch roller for a rainy day, and left the pressure pads intact.

To clean the capstan, I took a strip of reusable dishrag material and stuffed it in the place where the pinch roller goes. Then I put a few drops of alcohol on the dishrag material and inserted the tape for a few seconds. I repeated this with new material until it came back clean.

In order to better grab the tape and tension it against the pinch roller, the capstan should be roughed up a bit. I ripped the scrubby side off of an old sponge and cut a strip of that, then tucked it into the pinch roller pocket and let the player run for about ten seconds. If you listen to a lot of tapes, you should do this often.

Final Thoughts

I still have a lot to learn about fixing problematic 8-tracks, but I think I have the basics of refurbishment down. There are people out there who have no qualms about ironing tapes that have gotten accordioned, or re-spooling entire tapes using a drill and a homemade hub-grabbing attachment. If this isn’t the hacker’s medium, I don’t know what is. Long live 8-tracks!

Hands On: Inkplate 6 MOTION

By: Tom Nardi
6 June 2024 at 14:00

Over the last several years, DIY projects utilizing e-paper displays have become more common. While saying the technology is now cheap might be overstating the situation a bit, the prices on at least small e-paper panels have certainly become far more reasonable for the hobbyist. Pair one of them with a modern microcontroller such as the RP2040 or ESP32, sprinkle in a few open source libraries, and you’re well on the way to creating an energy-efficient smart display for your home or office.

But therein lies the problem. There’s still a decent amount of leg work involved in getting the hardware wired up and talking to each other. Putting the e-paper display and MCU together is often only half the battle — depending on your plans, you’ll probably want to add a few sensors to the mix, or perhaps some RGB status LEDs. An onboard battery charger and real-time clock would be nice as well. Pretty soon, your homebrew e-paper gadget is starting to look remarkably like the bottom of your junk bin.

For those after a more integrated solution, the folks at Soldered Electronics have offered up a line of premium open source hardware development boards that combine various styles of e-paper panels (touch, color, lighted, etc) with a microcontroller, an array of sensors, and pretty much every other feature they could think of. To top it off, they put in the effort to produce fantastic documentation, easy to use libraries, and free support software such as an online GUI builder and image converter.

We’ve reviewed a number of previous Inkplate boards, and always came away very impressed by the attention to detail from Soldered Electronics. When they asked if we’d be interested in taking a look at a prototype for their new MOTION 6 board, we were eager to see what this new variant brings to the table. Since both the software and hardware are still pre-production, we won’t call this a review, but it should give you a good idea of what to expect when the final units start shipping out in October.

Faster and Stronger

As mentioned previously, the Inkplate boards have generally been differentiated by the type of e-paper display they’ve featured. In the case of the new MOTION, the theme this time around is speed — Soldered says this new display is capable of showing 11 frames per second, no small feat for a technology that’s notoriously slow to refresh. You still won’t be watching movies at 11 FPS of course, but it’s more than enough to display animations and dynamic information thanks to its partial refresh capability that only updates the areas of the display where the image has actually changed.

But it’s not just the e-paper display that’s been swapped out for a faster model. For the MOTION 6, Soldered traded in the ESP32 used on all previous Inkplates for the STM32H743, an ARM Cortex-M7 chip capable of running at 480 MHz. Well, at least partially. You’ll still find an ESP32 hanging out on the back of the MOTION 6, but it’s there as a co-processor to handle WiFi and Bluetooth communications. The STM32 chip features 1 MB of internal SRAM and has been outfitted with a whopping 32 MB of external DRAM, which should come in handy when you’re throwing 4-bit grayscale images at the 1024 x 758 display.
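
For a sense of why all that memory is welcome, here's a rough back-of-the-envelope calculation (my numbers, not Soldered's) of what a single 4-bit grayscale framebuffer for the 1024 x 758 panel costs:

```python
# Rough framebuffer math for a 1024 x 758 panel at 4 bits per pixel.
width, height = 1024, 758
bits_per_pixel = 4

frame_bytes = width * height * bits_per_pixel // 8
print(f"One 4-bit grayscale frame: {frame_bytes} bytes (~{frame_bytes / 1024:.0f} KiB)")

# E-paper drivers commonly keep more than one buffer around (the current frame
# plus a previous frame for partial refresh, for example), which is where DRAM helps.
for buffers in (1, 2, 4):
    print(f"{buffers} buffer(s): ~{buffers * frame_bytes / 1024:.0f} KiB")
```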

The Inkplate MOTION 6 also features an impressive suite of sensors, including a front-mounted APDS-9960 which can detect motion, proximity, and color. On the backside you’ll find the SHTC3 for detecting temperature and humidity, as well as a LSM6DSO32 accelerometer and gyroscope. One of the most impressive demos included in the MOTION 6’s Arduino library pulls data from the gyro and uses it to rotate a wireframe 3D cube as you move the device around. Should you wish to connect other sensors or devices to the board, you’ve got breakouts for the standard expansion options such as I²C and SPI, as well as Ethernet, USB OTG, I²S, SDMMC, and UART.

Although no battery is included with the MOTION 6, there's a connector for one on the back of the board, and the device includes an MCP73831 charge controller and the appropriate status LEDs. Primary power is supplied through the board's USB-C connector, and there's also a set of beefy solder pads along the bottom edge where you could wire up an external power source.

For user input you have three physical buttons along the side, and a rather ingenious rotary encoder — but to explain how that works we need to switch gears and look at the 3D printed enclosure Soldered has created for the Inkplate MOTION 6.

Wrapped Up Tight

Under normal circumstances I wouldn’t go into so much detail about a 3D printed case, but I’ve got to give Soldered credit for the little touches they put into this design. Living hinges are used for both the power button and the three user buttons on the side, there’s a holder built into the back for a pouch battery, and there’s even a little purple “programming tool” that tucks into a dedicated pocket — you’ll use that to poke the programming button when the Inkplate is inside the enclosure.

But the real star is the transparent wheel on the right hand side. The embedded magnet in the center lines up perfectly with an AS5600 magnetic angle encoder on the Inkplate, with an RGB LED just off to the side. Reading the value from the AS5600 as the wheel rotates gives you a 12-bit value between 0 and 4095, and the library offers macros to convert that to radians and degrees. Combined with the RGB LED, this arrangement provides an input device with visual feedback at very little cost.
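
As a purely hypothetical illustration of that scaling (the real Inkplate library ships its own macros), converting the raw 12-bit reading into degrees or radians is just a proportion over the full-scale count:

```python
import math

RAW_FULL_SCALE = 4096  # the AS5600 is a 12-bit encoder, so raw readings span 0..4095

def raw_to_degrees(raw: int) -> float:
    return (raw % RAW_FULL_SCALE) * 360.0 / RAW_FULL_SCALE

def raw_to_radians(raw: int) -> float:
    return (raw % RAW_FULL_SCALE) * 2.0 * math.pi / RAW_FULL_SCALE

print(raw_to_degrees(1024))   # a quarter turn of the wheel -> 90.0 degrees
print(raw_to_radians(2048))   # half a turn -> ~3.14159 radians
```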

It’s an awesome idea, and now I’m looking for an excuse to include it in my own hardware designs.

The 3D printed case is being offered as an add-on for the Inkplate MOTION 6 at purchase time, but both the STLs and Fusion 360 files for it will be made available with the rest of the hardware design files for those who would rather print it themselves.

An Exciting Start

As I said in the beginning of this article, the unit I have here is the prototype — while the hardware seems pretty close to final, the software side of things is obviously still in the early stages. Some of the libraries simply weren’t ready in time, so I wasn’t able to test things like WiFi or Bluetooth. Similarly, I wasn’t able to try out the MicroPython build for the MOTION 6. That said, I have absolutely no doubt that the team at Soldered Electronics will have everything where it needs to be by the time customers get their hands on the final product.

There’s no denying that the $169 USD price tag of the Inkplate MOTION 6 will give some users pause. If you’re looking for a budget option, this absolutely isn’t it. But what you get for the price is considerable. You’re not just paying for the hardware, you’re also getting the software, documentation, schematics, and PCB design files. If those things are important to you, I’d say it’s more than worth the premium price.

So far, it looks like plenty of people feel the same way. As of this writing, the Inkplate MOTION 6 is about to hit 250% of its funding goal on Crowd Supply, with more than 30 days left in the campaign.

Mining and Refining: Fracking

5 June 2024 at 14:33

Normally on “Mining and Refining,” we concentrate on the actual material that’s mined and refined. We’ve covered everything from copper to tungsten, with side trips to more unusual materials like sulfur and helium. The idea is to shine a spotlight on the geology and chemistry of the material while concentrating on the different technologies needed to exploit often very rare or low-concentration deposits and bring them to market.

This time, though, we’re going to take a look at not a specific resource, but a technique: fracking. Hydraulic fracturing is very much in the news lately for its potential environmental impact, both in terms of its immediate effects on groundwater quality and for its perpetuation of our dependence on fossil fuels. Understanding what fracking is and how it works is key to being able to assess the risks and benefits of its use. There’s also the fact that like many engineering processes carried out on a massive scale, there are a lot of interesting things going on with fracking that are worth exploring in their own right.

Fossil Mud

Although hydraulic fracturing has been used since at least the 1940s to stimulate production in oil and gas wells, and is used in all kinds of wells drilled into many different rock types, fracking is most strongly associated these days with the development of oil and natural gas deposits in shale. Shale is a sedimentary rock formed from ancient muds made from fine grains of clay and silt. These are some of the finest-grained materials possible, with grains ranging from 62 microns in diameter down to less than a micron. Grains that fine settle out of suspension only very slowly, and tend to do so only where there are no currents.

Shale outcropping in a road cut in Kentucky. The well-defined layers were formed in still waters, where clay and silt particles slowly accumulated. The dark color means a lot of organic material from algae and plankton mixed in. Source: James St. John, CC BY 2.0, via Wikimedia Commons

The breakup of Pangea during the Cretaceous period produced many of the economically important shale formations in today's eastern United States, like the Marcellus formation that stretches from New York state into Ohio and down almost to Tennessee. The warm, calm waters of the newly forming Atlantic Ocean were the perfect place for clay- and silt-laden runoff to accumulate and settle, eventually forming the shale.

Shale is often associated with oil and natural gas because the conditions that favor its formation also favor hydrocarbon creation. The warm, still Cretaceous waters were perfect for phytoplankton and algal growth, and when those organisms died they rained down along with the silt and clay grains to the low-oxygen environment at the bottom. Layer upon layer built up slowly over the millennia, but instead of decomposing as they would have in an oxygen-rich environment, the reducing conditions slowly transformed the biomass into kerogen, or solid deposits of hydrocarbons. With the addition of heat and pressure, the hydrocarbons in kerogen were cooked into oil and natural gas.

In some cases, the tight grain structure of shale acts as an impermeable barrier to keep oil and gas generated in lower layers from floating up, forming underground deposits of liquid and gas. In other cases, kerogens are transformed into oil or natural gas right within the shale, trapped within its pores. Under enough pressure, gas can even dissolve right into the shale matrix itself, to be released only when the pressure in the rock is relieved.

Horizontal Boring

While getting at these sequestered oil and gas deposits requires more than just drilling a hole in the ground, fracking starts with exactly that. Traditional well-drilling techniques, where a rotary table rig using lengths of drill pipe spins a drill bit into rock layers underground while pumping a slurry called drilling mud down the bore to cool and lubricate the bit, are used to start the well. The initial bore proceeds straight down until it passes through the lowest aquifer in the region, at which point the entire bore is lined with a steel pipe casing. The casing is filled with cementitious grout that’s forced out of the bottom of the casing by a plug inserted at the surface and pressed down by the drilling rig. This squeezes the grout between the outside of the casing and the borehole and back up to the surface, sealing it off from the water-bearing layers it passes through and serving as a foundation for equipment that will eventually be added to the wellhead, such as blow-out preventers.

Once the well is sealed off, vertical boring continues until the kickoff point, where the bore transitions from vertical to horizontal. Because the target shale seam is relatively thin — often only 50 to 300 feet (15 to 100 meters) thick — drilling a vertical bore through it would only expose a small amount of surface area. Fracking is all about increasing surface area and connecting as many pores in the shale to the bore; drilling horizontally within the shale seam makes that possible. Geologists and mining engineers determine the kickoff point based on seismic surveys and drilling logs from other wells in the area and calculate the radius needed to put the bore in the middle of the seam. Given that the drill string can only turn by a few degrees at most, the radius tends to be huge — often hundreds of meters.
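
To get a feel for why the radius works out to hundreds of meters, here's an illustrative calculation (the build rates are my assumptions, not values from any particular well plan) relating how quickly the bore can turn to the radius of the resulting arc:

```python
import math

def turn_radius_m(build_rate_deg: float, course_length_m: float = 30.0) -> float:
    """Radius of the arc drilled if the bore direction changes by
    build_rate_deg degrees over every course_length_m of hole."""
    return course_length_m * 180.0 / (math.pi * build_rate_deg)

# Assumed build rates, in degrees of direction change per 30 m drilled:
for build_rate in (2.0, 3.0, 6.0):
    print(f"{build_rate:.0f} deg per 30 m -> turn radius of roughly {turn_radius_m(build_rate):.0f} m")
```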

Directional drilling has been used since the 1920s, often to steal oil from other claims, and so many techniques have been developed for changing the direction of a drill string deep underground. One of the most common methods used in fracking wells is the mud motor. Powered by drilling mud pumped down the drill pipe and forced between a helical stator and rotor, the mud motor can spin the drill bit at 60 to 100 RPM. When boring a traditional vertical well, the mud motor can be used in addition to spinning the entire drill string, to achieve a higher rate of penetration. The mud motor can also power the bit with the drill string locked in place, and by adding angled spacers between the mud motor and the drill string, the bit can begin drilling at a shallow angle, generally just a few degrees off vertical. The drill string is flexible enough to bend and follow the mud motor on its path to intersect the shale seam. The azimuth of the bore can be changed, too, by rotating the drill string so the bit heads off in a slightly different direction. Some tools allow the bend in the motor to be changed without pulling the entire drill string up, which represents significant savings.

Determining where the drill bit is under miles of rock is the job of downhole tools like the measurement while drilling (MWD) tool. These battery-powered tools vary in what they can measure, but typically include temperature and pressure sensors and inertial measuring units (IMU) to determine the angle of the bit. Some MWD tools also include magnetometers for orientation to Earth’s magnetic field. Transmitting data back to the surface from the MWD can be a problem, and while more use is being made of electrical and fiber optic connections these days, many MWDs use the drilling mud itself as a physical transport medium. Mud telemetry uses pressure waves set up in the column of drilling mud to send data back up to pressure transducers on the surface. Data rates are low; 40 bps at best, dropping off sharply with increasing distance. Mud telemetry is also hampered by any gas dissolved in the drilling mud, which strongly attenuates the signal.
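
Those data rates put a hard limit on how chatty a downhole tool can be. As a purely illustrative example (the survey frame size below is an assumption, not a real MWD specification), even a modest packet takes seconds to arrive over mud pulse telemetry:

```python
# Hypothetical MWD survey frame: say five 16-bit values (inclination, azimuth,
# toolface, temperature, pressure) plus 20 bits of framing overhead.
frame_bits = 5 * 16 + 20

for rate_bps in (40, 10, 3):   # best case down to a badly attenuated channel
    print(f"{rate_bps:2d} bps: {frame_bits / rate_bps:.1f} seconds per survey frame")
```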

Let The Fracking Begin

Once the horizontal borehole is placed in the shale seam, a steel casing is placed in the bore and grouted with cement. At this point, the bore is completely isolated from the surrounding rock and needs to be perforated. This is accomplished with a perforating gun, a length of pipe studded with small shaped charges. The perforating gun is prepared on the surface by pyrotechnicians who place the charges into the gun and connect them together with detonating cord. The gun is lowered into the bore and placed at the very end of the horizontal section, called the toe. When the charges are detonated, they form highly energetic jets of fluidized metal that lance through the casing and grout and into the surrounding shale. Penetration depth and width depend on the specific shaped charge used but can extend up to half a meter into the surrounding rock.

Perforation can also be accomplished non-explosively, using a tool that directs jets of high-pressure abrasive-charged fluid through ports in its sides. It's not too far removed from water jet cutting, and can cut right through the steel and cement casing and penetrate well into the surrounding shale. The advantage of this type of perforation is that it can be built into a single multipurpose tool which can handle several steps of the downhole work without having to be pulled from the bore.

Once the bore has been perforated, fracturing can occur. The principle is simple: an incompressible fluid is pumped into the borehole under great pressure. The fluid leaves the borehole and enters the perforations, cracking the rock and enlarging the original perforations. The cracks can extend many meters from the original borehole into the rock, exposing vastly more surface area of the rock to the borehole.

Fracking is more than making cracks. The network of cracks produced by fracking physically connects kerogen deposits within the shale to the borehole. But getting the methane (black in inset) free from the kerogen (yellow) is a complicated balance of hydrophobic and hydrophilic interactions between the shale, the kerogen, and the fracturing fluid. Source: Thomas Lee, Lydéric Bocquet, Benoit Coasne, CC BY 4.0, via Wikimedia Commons

The pressure needed to hydraulically fracture solid rock perhaps a mile or more below the surface can be tremendous — up to 15,000 pounds per square inch (100 MPa). In addition to the high pressure, the fracking fluid must be pumped at extremely high volumes, up to 10 cu ft/s (265 lps). The overall volume of material needed is impressive, too — a 6″ borehole that’s 10,000 feet long would take almost 15,000 gallons of fluid to fill alone. Add in the volume of fluid needed to fill the fractures and that could easily exceed 5 million gallons.
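
Those volume figures are easy to sanity-check. A quick calculation using the article's 6-inch, 10,000-foot example confirms the roughly 15,000 gallons needed just to fill the bore:

```python
import math

diameter_in = 6.0      # borehole diameter
length_ft = 10_000.0   # total bore length

radius_ft = (diameter_in / 12.0) / 2.0
volume_ft3 = math.pi * radius_ft**2 * length_ft
volume_gal = volume_ft3 * 7.48052   # US gallons per cubic foot

print(f"Bore volume: {volume_ft3:,.0f} cubic feet, about {volume_gal:,.0f} gallons")
# Roughly 1,963 cubic feet, or about 14,700 gallons -- "almost 15,000".
```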

Fracking fluid is a slurry made mostly from water and sand. The sand serves as a proppant, which keeps the tiny microfractures from collapsing after fracking pressure is released. Fracking fluid also contains a fraction of a percent of various chemical additives, mostly to form a gel that effectively transfers the hydraulic force while keeping the proppant suspended. Guar gum, a water-soluble polysaccharide extracted from guar beans, is often used to create the gel. Fracking gels are sometimes broken down after a while to clear the fractures and allow freer flow; a combination of acids and enzymes is usually used for this job.

Once fracturing is complete, the fracking fluid is removed from the borehole. It’s impossible to recover all the fluid; sometimes as much as 50% is recovered, but often as little as 5% can be pumped back to the surface. Once a section of the borehole has been fractured, it’s sealed off from the rest of the well by an isolating plug placed upstream of the freshly fracked section. The entire process — perforating, fracking, recovery, isolation — is repeated up the borehole until the entire horizontal bore is fracked. The isolating plugs are then bored out, and the well can begin production.

A Treasure Trove In An English Field

By: Jenny List
3 June 2024 at 14:00

This is being written in a tent in a field in Herefordshire, one of the English counties that borders Wales. It’s the site of Electromagnetic Field, this year’s large European hacker camp, and outside my tent the sky is lit by a laser light show to the sound of electronic music. I’m home.

One of the many fun parts of EMF is its swap table. A gazebo to which you can bring your junk, and from which you can take away other people’s junk. It’s an irresistible destination which turns a casual walk into half an hour pawing through the mess in search of treasure, and along the way it provides an interesting insight into technological progress. What is considered junk in 2024?

Something for everyone

As always, the items on offer range from universal treasures of the I-can't-believe-they-put-that-there variety, through this-is-treasure-to-someone-I'm-sure items, to absolute junk. Some things pass around the camp like legends; I wasn't there when someone dropped off a box of LED panels for example, but I've heard the story relayed in hushed tones several times since, and even seen some of the precious haul. A friend snagged a still-current AMD processor and some Noctua server fans as another example, and I'm told that, amazingly, someone deposited a PlayStation 5. But these are the exceptions; in most cases the junk is either very specific to something, or much more mundane. I saw someone snag an audio effects unit that may or may not work, and there are PC expansion cards and outdated memory modules aplenty.

Finally, there is the absolute junk, which some might even call e-waste but I’ll be a little more charitable about. Mains cables, VGA cables, and outdated computer books. Need to learn about some 1990s web technology? We’ve got you covered.

Perhaps most fascinating is what the junk tells us about the march of technology. There are bins full of VoIP telephones, symptomatic of the move to mobile devices even in the office. As an aside I saw a hackerspace member in his twenties using a phone hooked up to the camp’s copper phone network walk away with the handset clamped to his ear and yank the device off the table; it’s obvious that wired handsets are a thing of the past when adults no longer know how to use them. And someone dropped off an entire digital video distribution system probably from a hotel or similar, a huge box of satellite TV receivers and some very specialised rack modules with 2008 date codes on the chips. We don’t watch linear TV any more, hotel customers want streaming.

Amid all this treasure, what did I walk away with? As I have grown older I have restricted my urge to acquire, so I’m very wary at these places. Even so, there were a few things that caught my eye, a pair of Sennheiser headphones with a damaged cord, a small set of computer speakers — mainly because we don’t have anything in our village on which to play music — and because I couldn’t quite resist it, a microcassette recorder. As each new box arrives the hardware hackers swarm over it like flies though, so who knows what treasures I’ll be tempted by over the rest of the camp.

You’ve Probably Never Considered Taking an Airship To Orbit

By: Lewin Day
13 May 2024 at 14:00

There have been all kinds of wild ideas to get spacecraft into orbit. Everything from firing huge cannons to spinning craft at rapid speed has been posited, explored, or in some cases, even tested to some degree. And yet, good ol’ flaming rockets continue to dominate all, because they actually get the job done.

Rockets, fuel, and all their supporting infrastructure remain expensive, so the search for an alternative goes on. One daring idea involves using airships to loft payloads into orbit. What if you could simply float up into space?

Lighter Than Air

NASA regularly launches lighter-than-air balloons to great altitudes, but they’re not orbital craft. Credit: NASA, public domain

The concept sounds compelling from the outset. Through the use of hydrogen or helium as a lifting gas, airships and balloons manage to reach great altitudes while burning zero propellant. What if you could just keep floating higher and higher until you reached orbital space?

This is a huge deal when it comes to reaching orbit. One of the biggest problems of our current space efforts is referred to as the tyranny of the rocket equation. The more cargo you want to launch into space, the more fuel you need. But then that fuel adds more weight, which needs yet more fuel to carry its weight into orbit. To say nothing of the greater structure and supporting material to contain it all.
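
That tyranny falls straight out of the Tsiolkovsky rocket equation, delta-v = Isp * g0 * ln(m0/mf). A short sketch (the specific impulse and delta-v figures are generic values chosen for illustration) shows how quickly the required propellant fraction climbs:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_ms: float, isp_s: float) -> float:
    """Fraction of liftoff mass that must be propellant, from
    delta_v = isp * g0 * ln(m0 / mf)."""
    mass_ratio = math.exp(delta_v_ms / (isp_s * G0))
    return 1.0 - 1.0 / mass_ratio

isp = 350.0  # seconds; a generic kerosene/LOX-class engine, chosen for illustration
for delta_v in (2_000.0, 5_000.0, 9_400.0):  # ~9,400 m/s is a commonly quoted figure for reaching LEO with losses
    frac = propellant_fraction(delta_v, isp)
    print(f"delta-v {delta_v:>7,.0f} m/s -> {frac:.0%} of liftoff mass is propellant")
```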

Carrying even a few extra kilograms of weight to space can require huge amounts of additional fuel. This is why we use staged rockets to reach orbit at present. By shedding large amounts of structural weight at the end of each rocket stage, it’s possible to move the remaining rocket farther with less fuel.

If you could get to orbit while using zero fuel, it would be a total gamechanger. It wouldn’t just be cheaper to launch satellites or other cargoes. It would also make missions to the Moon or Mars far easier. Those rockets would no longer have to carry the huge amount of fuel required to escape Earth’s surface and get to orbit. Instead, they could just carry the lower amount of fuel required to go from Earth orbit to their final destination.

The rumored “Chinese spy balloon” incident of 2023 saw a balloon carrying a payload that looked very much like a satellite. It was even solar powered. However, such a craft would never reach orbit, as it had no viable propulsion system to generate the huge delta-V required. Credit: USAF, public domain

Of course, it’s not that simple. Reaching orbit isn’t just about going high above the Earth. If you just go straight up above the Earth’s surface, and then stop, you’ll just fall back down. If you want to orbit, you have to go sideways really, really fast.

Thus, an airship-to-orbit launch system would have to do two things. It would have to haul a payload up high, and then get it up to the speed required for its desired orbit. That’s where it gets hard. The minimum speed to reach a stable orbit around Earth is 7.8 kilometers per second (28,000 km/h or 17,500 mph). Thus, even if you’ve floated up very, very high, you still need a huge rocket or some kind of very efficient ion thruster to push your payload up to that speed. And you still need fuel to generate that massive delta-V (change in velocity).
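
That 7.8 kilometers per second is just the circular orbital velocity, v = sqrt(GM/r), evaluated near the Earth's surface. A quick calculation (altitudes picked for illustration) shows how little altitude alone buys you:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def circular_orbital_velocity(altitude_m: float) -> float:
    """Speed needed for a circular orbit at a given altitude: v = sqrt(GM / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

for altitude_km in (0, 43, 400):   # sea level, roughly 140,000 ft, an ISS-like orbit
    v_kms = circular_orbital_velocity(altitude_km * 1000.0) / 1000.0
    print(f"{altitude_km:>4} km altitude: {v_kms:.2f} km/s sideways, no matter how you got up there")
```

The takeaway is that floating higher barely changes the number; the sideways speed is the whole problem.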

For this reason, airships aren’t the perfect hack to reaching orbit that you might think. They’re good for floating about, and you can even go very, very high. But if you want to circle the Earth again and again and again, you better bring a bucketload of fuel with you.

Someone’s Working On It

JP Aerospace founder John Powell regularly posts updates to YouTube regarding the airship-to-orbit concept. Credit: John Powell, YouTube

Nevertheless, this concept is being actively worked on, but not by the usual suspects. Don’t look at NASA, JAXA, SpaceX, ESA, or even Roscosmos. Instead, it’s the work of the DIY volunteer space program known as JP Aerospace.

The organization has grand dreams of launching airships into space. Its concept isn’t as simple as just getting into a big balloon and floating up into orbit, though. Instead, it envisions a three-stage system.

The first stage would involve an airship designed to travel from ground level up to 140,000 feet. The company proposes a V-shaped design with an airfoil profile to generate additional lift as it moves through the atmosphere. Propulsion would be via propellers that are specifically designed to operate in the near-vacuum at those altitudes.

Once at that height, the first stage craft would dock with a permanently floating structure called Dark Sky Station. It would serve as a docking station where cargo could be transferred from the first stage craft to the Orbital Ascender, which is the craft designed to carry the payload into orbit.

The Ascender H1 Variant is the company’s latest concept for an airship to carry payloads from an altitude of 140,000ft and into orbit. Credit: John Powell, YouTube screenshot

The Orbital Ascender itself sounds like a fantastical thing on paper. The team’s current concept is for a V-shaped craft with a fabric outer shell which contains many individual plastic cells full of lifting gas. That in itself isn’t so wild, but the proposed size is. It’s slated to measure 1,828 meters on each side of the V — well over a mile long — with an internal volume of over 11 million cubic meters. Thin film solar panels on the craft’s surface are intended to generate 90 MW of power, while a plasma generator on the leading edge is intended to help cut drag. The latter is critical, as the craft will need to reach hypersonic speeds in the ultra-thin atmosphere to get its payload up to orbital speeds. To propel the craft up to orbital velocity, the team has been running test firings on its own designs for plasma thrusters.
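
As a rough plausibility check on that 90 MW figure (the watts-per-square-meter number below is my own assumption about thin-film panels, not a JP Aerospace specification), the collecting area implied is enormous, though not obviously beyond a craft nearly two kilometers on a side:

```python
# Back-of-the-envelope: how much panel area does 90 MW of thin-film solar imply?
target_power_w = 90e6
assumed_w_per_m2 = 150.0   # assumed thin-film output in strong, high-altitude sunlight

area_m2 = target_power_w / assumed_w_per_m2
side_m = area_m2 ** 0.5
print(f"~{area_m2:,.0f} square meters of panel, or a square roughly {side_m:,.0f} m on a side")
```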

Payload would be carried in two cargo bays, each measuring 30 meters square, and 20 meters deep. Credit: John Powell, YouTube Screenshot

The team at JP Aerospace is passionate, but currently lacks the means to execute their plans at full scale. Right now, the team has some experimental low-altitude research craft that are a few hundred feet long. Presently, Dark Sky Station and the Orbital Ascender remain far off dreams.

Realistically, the team hasn’t found a shortcut to orbit just yet. Building a working version of the Orbital Ascender would require lofting huge amounts of material to high altitude where it would have to be constructed. Such a craft would be torn to shreds by a simple breeze in the lower atmosphere. A lighter-than-air craft that could operate at such high altitudes and speeds might not even be practical with modern materials, even if the atmosphere is vanishingly thin above 140,000 feet.  There are huge questions around what materials the team would use, and whether the theoretical concepts for plasma drag reduction could be made to work on the monumentally huge craft.

The team has built a number of test craft for lower-altitude operation. Credit: John Powell, Youtube Screenshot

Even if the craft’s basic design could work, there are questions around the practicalities of crewing and maintaining a permanent floating airship station at high altitude. Let alone how payloads would be transferred from one giant balloon craft to another. These issues might be solvable with billions of dollars. Maybe. JP Aerospace is having a go on a budget several orders of magnitude more shoestring than that.

One might imagine a simpler idea could be worth trying first. Lofting conventional rockets to 100,000 feet with balloons would be easier and still cut fuel requirements to some degree. But ultimately, the key challenge of orbit remains. You still need to find a way to get your payload up to a speed of at least 8 kilometers per second, regardless of how high you can get it in the air. That would still require a huge rocket, and a suitably huge balloon to lift it!

For now, orbit remains devastatingly hard to reach, whether you want to go by rocket, airship, or nuclear-powered paddle steamer. Don’t expect to float to the Moon by airship anytime soon, even if it sounds like a good idea.

The Great Green Wall: Africa’s Ambitious Attempt To Fight Desertification

By: Lewin Day
9 May 2024 at 14:00

As our climate changes, we fear that warmer temperatures and drier conditions could make life hard for us. In most locations, it’s a future concern that feels uncomfortably near, but for some locations, it’s already very real. Take the Sahara desert, for example, and the degraded landscapes to the south in the Sahel. These arid regions are so dry that they struggle to support life at all, and temperatures there are rising faster than almost anywhere else on the planet.

In the face of this escalating threat, one of the most visionary initiatives underway is the Great Green Wall of Africa. It’s a mega-sized project that aims to restore life to barren terrain.

A Living Wall

Concentrated efforts have helped bring dry lands back to life. Credit: WFP

Launched in 2007 by the African Union, the Great Green Wall was originally an attempt to halt the desert in its tracks. The Sahara Desert has long been expanding, and the Sahel region has been losing the battle against desertification. The Green Wall hopes to put a stop to this, while also improving food security in the area.

The concept of the wall is simple. The idea is to take degraded land and restore it to life, creating a green band across the breadth of Africa which would resist the spread of desertification to the south. Intended to span the continent from Senegal in the west to Djibouti in the east, it was originally intended to be 15 kilometers wide and a full 7,775 kilometers long. The hope was to complete the wall by 2030.

The Great Green Wall concept moved past initial ideas around simply planting a literal wall of trees. It eventually morphed into a broader project to create a “mosaic” of green and productive landscapes that can support local communities in the region.

Reforestation is at the heart of the Great Green Wall. Millions of trees have been planted, with species chosen carefully to maximise success. Trees like Acacia, Baobab, and Moringa are commonly planted not only for their resilience in arid environments but also for their economic benefits. Acacia trees, for instance, produce gum arabic—a valuable ingredient in the food and pharmaceutical industries—while Moringa trees are celebrated for their nutritious leaves.

 

Choosing plants with economic value has a very important side effect that sustains the project. If random trees of little value were planted solely as an environmental measure, they probably wouldn’t last long. They could be harvested by the local community for firewood in short order, completely negating all the hard work done to plant them. Instead, by choosing species that have ongoing productive value, it gives the local community a reason to maintain and support the plants.

Special earthworks are also aiding in the fight to repair barren lands. In places like Mauritania, communities have been digging  half-moon divots into the ground. Water can easily run off or flow away on hard, compacted dirt. However, the half-moon structures trap water in the divots, and the raised border forms a protective barrier. These divots can then be used to plant various species where they will be sustained by the captured water. Do this enough times over a barren landscape, and with a little rain, formerly dead land can be brought back to life. It’s a traditional technique that is both cheap and effective at turning brown lands green again.

Progress

The project has been an opportunity to plant economically valuable plants which have proven useful to local communities. Credit: WFP

The initiative plans to restore 100 million hectares of currently degraded land, while also sequestering 250 million tons of carbon to help fight climate change. Progress has been sizable, but at the same time, limited. As of mid-2023, the project had restored approximately 18 million hectares of formerly degraded land. That's a lot of land by any measure, and yet it's less than a fifth of the total that the project hoped to achieve. The project has been frustrated by funding issues, delays, and the degraded security situation in some of the areas involved. Put together, this all bodes poorly for the project's chances of reaching its goal, given that 17 years have already passed and 2030 draws ever closer.

While the project may not have met its loftiest goals, that's not to say it has all been in vain. The Great Green Wall need not be seen as an all-or-nothing proposition. Those 18 million hectares that have been reclaimed are not nothing, and one imagines the communities in these areas are enjoying the boons of their newly improved land.

In the driest parts of the world, good land can be hard to come by. While the Great Green Wall may not span the African continent yet, it's still having an effect. It's showing communities that with the right techniques, it's possible to bring some barren zones back from the brink, turning them back into useful, productive land. That, at least, is a good legacy, and if the project's full goals can be realized? All the better.

Your Open-Source Client Options In the non-Mastodon Fediverse

By: Lewin Day
8 May 2024 at 14:00

When things started getting iffy over at Twitter, Mastodon rose as a popular alternative to the traditional microblogging platform. In contrast to the walled gardens of other social media channels, it uses an open protocol that runs on distributed servers which loosely join together, forming the "Fediverse".

The beauty of the Fediverse isn’t just in its server structure, though. It’s also in the variety of clients available for accessing the network. Where Twitter is now super-strict about which apps can hook into the network, the Fediverse welcomes all comers to the platform! And although Mastodon is certainly the largest player, it’s absolutely not the only elephant in the room.
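
Part of what makes that client diversity possible is that the Mastodon server API is open, and public data is readable without any API keys. As a minimal sketch (the instance URL is just an example, and some servers do restrict unauthenticated access), pulling a public timeline is a single HTTP request:

```python
import json
import urllib.request

# Example instance; any Mastodon server that allows unauthenticated access to
# its public timeline behaves the same way.
INSTANCE = "https://mastodon.social"

with urllib.request.urlopen(f"{INSTANCE}/api/v1/timelines/public?limit=5") as response:
    statuses = json.load(response)

for status in statuses:
    author = status["account"]["acct"]
    print(f"@{author}: {status['url']}")
```

Every client covered below ultimately builds on endpoints like this one, plus the authenticated ones for posting, notifications, and the rest.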

Today, we'll look at a bunch of alternative clients for the platform, ranging from mobile apps to web clients. They offer unique features and interfaces that cater to different user preferences and needs. Each of the notable examples below brings a different flavor to your Fediverse experience.

Phanpy

Phanpy is relatively new on the scene when it comes to Mastodon alternatives, but it has a fun name and a clean, user-friendly interface. Designed as a web client, Phanpy stands out in the way it hides status actions—like reply, boost, and favorite buttons. It’s an intentional design choice to reduce clutter, with the developer noting they are happy with this tradeoff even if it reduces engagement on the platform. It’s for the chillers, not the attention-starved.

Phanpy also supports multiple accounts, making it a handy tool for those who manage different personas or profiles across the Fediverse. Other power-user features include a multi-column interface if you want to really chug down the posts, and a recovery system for unsent drafts.

Rodent

Rodent, on the other hand, is tailored for users on Android smartphones and tablets. The developers have a bold vision, noting that “Rodent is disruptive, unapologetical, and has a user-first approach.” Despite this, it’s not forbidding to new users—the interface will be instantly familiar to a Mastodon or Twitter user.

Rodent brings you access to Mastodon with a unique set of features. It will let you access instances without having to log in to them (assuming the instance allows it), and has a multi-instance view that lets you flip between them easily. The interface also has neatly nested replies which can make following a conversation far easier. The latest update also set it up to give you meaningful notifications rather than just vague pings from the app. That’s kind of a baseline feature for most social media apps, but this is an app with a small but dedicated developer base.

Tusky

Tusky is perhaps one of the most popular Mastodon clients for Android users. Known for its sleek and minimalist design, Tusky provides a smooth and efficient way to navigate Mastodon. It’s clean, uncluttered, and unfussy.

Tusky handles all the basics—the essential features like notifications, direct messaging, and timeline filters. It’s a lightweight app that doesn’t hog a lot of space or system resources. However, it’s still nicely customizable to ensure it’s showing you what you want, when you want.

If you’ve tried the official Mastodon app and found it’s not for you, Tusky might be more your speed. Where some apps bombard you with buttons and features, Tusky gets out of the way of you and the feed you’re trying to scroll.

Fedilab

The thing about the Fediverse is that it’s all about putting power back in individual hands. Diversity is its strength, and that’s where apps like Fedilab come in. Fedilab isn’t just about accessing social media content either. It wants to let you access other sites in the Fediverse too. A notable example is Peertube—an open-source alternative to YouTube. It’ll handle a bunch of others, too.

You might think this makes Fedilab more complicated, but it’s not really the case. If you just want to use it to access Mastodon, it does that just fine. But if you want to pull in other content to the app, from places like Misskey, Lemmy, or even Twitter, it’ll gladly show you what you’re looking for.

Trunks.social

Trunks.social is a newer entrant designed to enhance the Mastodon experience for everybody. Unlike some other options, it’s truly multi-platform—available as a web client, or as an app for both Android and iOS. If you want to use Mastodon across a bunch of devices and with a consistent experience across all of them, Trunks.social could be a good option for you.

It focuses on integrating tightly with iOS features, such as the system-wide dark mode, to deliver a coherent and aesthetically pleasing experience across all Apple devices. Trunks.social also places a strong emphasis on privacy and data protection, offering advanced settings that let users control how their data is handled and interacted with on the platform.

Conclusion

Choosing the right Fediverse client can significantly enhance your experience of the platform. Whether you’re a casual user looking for a simple interface on your smartphone or a power user needing to work across multiple accounts or instances, there’s a client out there for you.

The diversity of clients shows the vibrant ecosystem surrounding the Fediverse. It’s not just Mastodon! It’s all driven by the community’s commitment to open-source development and user-centric design. Twitter once had something similar before it shunned flexibility to rule its community with an iron fist. In the open-source world, though, you don’t need to worry about being treated like that.

The Computers of Voyager

6 May 2024 at 14:00

After more than four decades in space and having traveled a combined 44 billion kilometers, it’s no secret that the Voyager spacecraft are closing in on the end of their extended interstellar mission. Battered and worn, the twin spacecraft are speeding along through the void, far outside the Sun’s influence now, their radioactive fuel decaying, their signals becoming ever fainter as the time needed to cross the chasm of space gets longer by the day.

But still, they soldier on, humanity’s furthest-flung outposts and testaments to the power of good engineering. And no small measure of good luck, too, given the number of nearly mission-ending events which have accumulated in almost half a century of travel. The number of “glitches” and “anomalies” suffered by both Voyagers seems to be on the uptick, too, contributing to the sense that someday, soon perhaps, we’ll hear no more from them.

That day has thankfully not come yet, in no small part due to the computers that the Voyager spacecraft were, in a way, designed around. Voyager was to be a mission unlike any ever undertaken, a Grand Tour of the outer planets that offered a once-in-a-lifetime chance to push science far out into the solar system. Getting the computers right was absolutely essential to delivering on that promise, a task made all the more challenging by the conditions under which they’d be required to operate, the complexity of the spacecraft they’d be running, and the torrent of data streaming through them. Forty-six years later, it’s safe to say that the designers nailed it, and it’s worth taking a look at how they pulled it off.

Volatile (Institutional) Memory

It turns out that getting to the heart of the Voyager computers, in terms of schematics and other technical documentation, wasn’t that easy. For a project with such an incredible scope and which had an outsized impact on our understanding of the outer planets and our place in the galaxy, the dearth of technical information about Voyager is hard to get your head around. Most of the easily accessible information is pretty high-level stuff; the juicy technical details are much harder to come by. This is doubly so for the computers running Voyager, many of the details of which seem to be getting lost in the sands of time.

As a case in point, I’ll offer an anecdote. As I was doing research for this story, I was looking for anything that would describe the architecture of the Flight Data System, one of the three computers aboard each spacecraft and the machine that has been the focus of the recent glitch and recovery effort aboard Voyager 1. I kept coming across a reference to a paper with a most promising title: “Design of a CMOS Processor for use in the Flight Data Subsystem of a Deep Space Probe.” I searched high and low for this paper online, but it appears not to be available anywhere but in a special collection in the library of Wichita State University, where it’s in the personal papers of a former professor who did some work for NASA.

Unfortunately, thanks to ongoing construction, the library has no access to the document right now. The difficulty I had in rounding up this potentially critical document seems to indicate a loss of institutional knowledge of the Voyager program’s history and its technical origins. That became apparent when I reached out to public affairs at Jet Propulsion Lab, where the Voyagers were built, in the hope that they might have a copy of that paper in their archives. Sadly, they don’t, and engineers on the Voyager team haven’t even heard of the paper. In fact, they’re very keen to see a copy if I ever get a hold of it, presumably to aid their job of keeping the spacecraft going.

In the absence of detailed technical documents, the original question remains: How do the computers of Voyager work? I’ll do the best I can to answer that from the existing documentation, and hopefully fill in the blanks later with any other documents I can scrape up.

Good Old TTL

As mentioned above, each Voyager contains three different computers, each of which is assigned different functions. Voyager was the first unmanned mission to include distributed computing, partly because the sheer number of tasks to be executed with precision during the high-stakes planetary fly-bys would exceed the capabilities of any single computer that could be made flyable. There was a social engineering angle to this as well, in that it kept the various engineering teams from competing for resources from a single computer.

Redundancy galore: block diagram for the Command Computer Subsystem (CCS) used on the Viking orbiters. The Voyager CCS is almost identical. Source: NASA/JPL.

To the extent that any one computer in a tightly integrated distributed system such as the one on Voyager can be considered the “main computer,” the Computer Command Subsystem (CCS) would be it. The Voyager CCS was almost identical to another JPL-built machine, the Viking orbiter CCS. The Viking mission, which put two landers on Mars in the summer of 1976, was vastly more complicated than any previous unmanned mission that JPL had built spacecraft for, most of which used simple sequencers rather than programmable computers.

On Voyager, the CCS is responsible for receiving commands from the ground and passing them on to the other computers that run the spacecraft itself and the scientific instruments. The CCS was built with autonomy and reliability in mind, since after just a few days in space, the communication delay would make direct ground control impossible. This led JPL to make everything about the CCS dual-redundant — two separate power supplies, two processors, two output units, and two complete sets of command buffers. Additionally, each processor could be cross-connected to each output unit, and interrupts were distributed to both processors.

There are no microprocessors in the CCS. Rather, the processors are built from discrete 7400-series TTL chips. The machine does not have an operating system but rather runs bare-metal instructions. Both data and instruction words are 18 bits wide, with the instruction words having a 6-bit opcode and a 12-bit address. The 64 instructions contain the usual tools for moving data in and out of registers and doing basic arithmetic, although there are only commands for adding and subtracting, not for multiplication or division. The processors access 4 kilowords of redundant plated-wire memory, which is similar to magnetic core memory in that it records bits as magnetic domains, but with an iron-nickel alloy plated onto the surface of wires rather than ferrite beads.
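
To make that word layout concrete, here is a minimal sketch in Ada (the document’s other articles use Ada, so it is used for all examples here) that splits a hypothetical 18-bit instruction word into the 6-bit opcode and 12-bit address fields described above. The value and the names are invented for illustration; this is not flight code, and the bit pattern does not correspond to any real CCS instruction.

with Ada.Text_IO;

--  Illustrative sketch only: split a made-up 18-bit instruction word into
--  its 6-bit opcode (top bits) and 12-bit address (low bits).
procedure Decode_Word is
   type Word_18 is mod 2 ** 18;

   Instruction : constant Word_18 := 2#000011_000000001010#;  -- invented value
   Opcode      : constant Word_18 := Instruction / 2 ** 12;   -- top 6 bits
   Address     : constant Word_18 := Instruction mod 2 ** 12; -- low 12 bits
begin
   Ada.Text_IO.Put_Line ("Opcode :" & Word_18'Image (Opcode));
   Ada.Text_IO.Put_Line ("Address:" & Word_18'Image (Address));
end Decode_Word;

For this made-up word, the sketch prints opcode 3 and address 10.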

The Three-Axis Problem

On Voyager, the CCS does almost nothing in terms of flying the spacecraft. The tasks involved in keeping Voyager pointed in the right direction are farmed out to the Attitude and Articulation Control Subsystem, or AACS. Earlier interplanetary probes such as Pioneer were spin-stabilized, meaning they maintained their orientation gyroscopically by rotating the craft around the longitudinal axis. Spin stabilization wouldn’t work for Voyager, since a lot of the science planned for the mission, especially the photographic studies, required a stable platform. This meant that three-axis stabilization was required, and the AACS was designed to accommodate that need.

Voyager’s many long booms complicate attitude control by adding a lot of “wobble”.

The physical design of Voyager injected some extra complexity into attitude control. While previous deep-space vehicles had been fairly compact, Voyager bristles with long booms. Sprouting from the compact bus located behind its huge high-gain antenna are booms for the three radioisotope thermoelectric generators that power the spacecraft, a very long boom for the magnetometers, a shorter boom carrying the heavy imaging instruments, and a pair of very long antennae for the Plasma Wave Subsystem experiment. All these booms tend to wobble a bit when the thrusters fire or actuators move, complicating the calculations needed to stay on course.

The AACS is responsible for running the gyros, thrusters, attitude sensors, and actuators needed to keep Voyager oriented in space. Like the CCS, the AACS has a redundant design using TTL-based processors and 18-bit words. The same 4k of redundant plated-wire memory was used, and many instructions were shared between the two computers. To handle three-axis attitude control in a more memory-efficient manner, the AACS uses index registers to point to the same block of code multiple times.
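
The general idea behind that index-register trick, recast as a conceptual sketch in modern Ada rather than the AACS’s actual instruction set, is that a single block of control code is indexed per axis instead of being duplicated three times. The axis names, gain, and error values below are invented for illustration; this is not the AACS algorithm.

with Ada.Text_IO;

--  Conceptual sketch only: one code path reused for all three axes via an
--  index, rather than three copies of the same block of code.
procedure Three_Axis_Sketch is
   type Axis is (Roll, Pitch, Yaw);
   Attitude_Error : constant array (Axis) of Float := (0.010, -0.020, 0.005);
   Correction     : array (Axis) of Float := (others => 0.0);
   Gain           : constant Float := 0.5;
begin
   for A in Axis loop
      Correction (A) := -Gain * Attitude_Error (A);
      Ada.Text_IO.Put_Line (Axis'Image (A) & ":" & Float'Image (Correction (A)));
   end loop;
end Three_Axis_Sketch;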

Years of Boredom, Minutes of Terror

Rounding out the computers of Voyager is the Flight Data Subsystem or FDS, the culprit in the latest “glitch” on Voyager 1, which was traced to a corrupted memory location and nearly ended the extended interstellar mission. Compared with the Viking-descended CCS and AACS, the FDS was to be a completely new kind of computer, custom-made for the demands of a torrent of data from eleven scientific experiments and hundreds of engineering sensors during the high-intensity periods of planetary flybys, while not being overbuilt for the long, boring cruises between the planets.

The FDS was designed strictly to handle the data to and from the eleven separate scientific instruments on Voyager, as well as the engineering data from dozens of sensors installed around the spacecraft. The need for a dedicated data computer was apparent early on in the Voyager design process, when it became clear that the torrent of data streaming from the scientific platforms during flybys would outstrip the capabilities of any of the hard-wired data management systems used in previous deep space probes.

One of the eight cards comprising the Voyager FDS. Covered with discrete CMOS chips, this card bears the “MJS77” designation; “Mariner Jupiter Saturn 1977” was the original name of the Voyager mission. Note the D-sub connectors for inter-card connections. Source: NASA/JPL.

Those data-handling requirements led to an initial FDS design using the same general architecture as the CCS and AACS: dual TTL processors, 18-bit word width, and the same redundant 4k of plated-wire memory. But when the instruction time of a breadboard version of this machine was measured, it turned out to be about half the speed necessary to support peak flyby data throughput.

Voyager FDS. Source: National Air and Space Museum.

To double the speed, direct memory access circuits were added. This allowed data to move in and out of memory without having to go through the processor first. Further performance gains were made by switching the processor design to CMOS chips, a risky move in the early 1970s. Upping the stakes was the decision to move away from the reliable plated-wire memory to CMOS memory, which could be accessed much faster.

The speed gains came at a price, though: volatility. Unlike plated-wire memory, CMOS memory chips lose their data if the power is lost, meaning a simple power blip could potentially erase the FDS memory at the worst possible time. JPL engineers worked around this with brutal simplicity — rather than power the FDS memories from the main spacecraft power systems, they ran dedicated power lines directly back to the radioisotope thermoelectric generators (RTG) powering the craft. This means the only way to disrupt power to the CMOS memories would be a catastrophic loss of all three RTGs, in which case the mission would be over anyway.

Physically, the FDS was quite compact, especially for a computer built of discrete chips in the early 1970s. Unfortunately, it’s hard to find many high-resolution photos of the flight hardware, but the machine appears to be built from eight separate cards that are attached to a card cage. Each card has a row of D-sub connectors along the top edge, which appear to be used for card-to-card connections in lieu of a backplane. A series of circular MIL-STD connectors provide connection to the spacecraft’s scientific instruments, power bus, communications, and the Data Storage Subsystem (DSS), the digital 8-track tape recorder used to buffer data during flybys.

Next Time?

Even with the relative lack of information on Voyager’s computers, there’s still a lot of territory to cover, including some of the interesting software architecture techniques used, and the details of how new software is uploaded to spacecraft that are currently almost a full light-day distant. And that’s not to mention the juicy technical details likely to be contained in a paper hidden away in some dusty box in a Kansas library. Here’s hoping that I can get my hands on that document and follow up with more details of the Voyager computers.

NASA Is Now Tasked With Developing A Lunar Time Standard, Relativity Or Not

By: Lewin Day
2 May 2024 at 14:00

A little while ago, we talked about the concept of timezones and the Moon. It’s a complicated issue, because on Earth, time is all about the Sun and our local relationship with it. The Moon and the Sun have their own weird thing going on, so time there doesn’t really line up well with our terrestrial conception of it.

Nevertheless, as humanity gets serious about doing Moon things again, the issue needs to be solved. To that end, NASA has now officially been tasked with setting up Moon time – just a few short weeks after we last talked about it! (Does the President read Hackaday?) Only problem is, physics is going to make it a damn sight more complicated!

Relatively Speaking

You know it’s serious when the White House sends you a memo. “Tell NASA to invent lunar time, and get off their fannies!”

The problem is all down to general and special relativity. The Moon is in motion relative to Earth, and it also has a lower gravitational pull. We won’t get into the physics here, but it basically means that time literally moves at a different pace up there. Time on the Moon gains an average of 58.7 microseconds over a 24-hour Earth day. The offset isn’t constant, either; there is a certain degree of periodic variation involved.

It’s a tiny difference, but it’s cumulative over time. Plus, many space and navigation applications need extremely precise timing to function, so it’s not something NASA can ignore. Even if the agency wanted to just use UTC and call it good, the relativity problem would prevent that from being a workable solution.
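
A quick back-of-the-envelope sketch shows how the tiny figure adds up. Treating the 58.7 microseconds per Earth day as a constant average (which, as noted above, it isn’t), the offset grows to roughly 21 milliseconds per year, or about one second every 47 years or so:

with Ada.Text_IO;

--  Rough illustration only: accumulate the average 58.7 microsecond per
--  Earth-day offset. Simple arithmetic, not a relativistic model.
procedure Lunar_Drift is
   Drift_Per_Day : constant Float := 58.7E-6;   -- seconds per Earth day
   Days_Per_Year : constant Float := 365.25;
   Per_Year      : constant Float := Drift_Per_Day * Days_Per_Year;
begin
   Ada.Text_IO.Put_Line ("Offset per year (s): " & Float'Image (Per_Year));
   Ada.Text_IO.Put_Line ("Years until 1 s off: " & Float'Image (1.0 / Per_Year));
end Lunar_Drift;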

Without a reliable and stable timebase, space agencies like NASA would struggle to establish useful infrastructure on the Moon. Things like lunar satellite navigation wouldn’t work accurately without taking into account the time slip, for example. GPS is highly sensitive to relativistic time effects, and indeed relies upon them to function. Replicating it on the Moon is only possible if these factors are accounted for. Looking even further ahead, things like lunar commerce or secure communication would be difficult to manage reliably without stable timebases for equipment involved.

Banks of atomic clocks—like these at the US Naval Observatory—are used to establish high-quality time standards. Similar equipment may need to be placed on the Moon to establish Coordinated Lunar Time (LTC). Credit: public domain

Still, the order to find a solution has come down from the top. A memo from the Executive Office of the President charged NASA with delivering a standard for lunar timing by December 31, 2026. Coordinated Lunar Time (LTC) must be established in a way that is traceable to Coordinated Universal Time (UTC). That will enable operators on Earth to synchronize operations with crews or unmanned systems on the Moon itself. LTC is required to be accurate enough for scientific and navigational purposes, and it must be resilient to any loss of contact with systems back on Earth.

It’s also desired that the future LTC standard will be extensible and scalable to space environments we may explore in future beyond the Earth-Moon system itself. In time, NASA may find it necessary to establish time standards for other celestial bodies, due to their own unique differences in relative velocity and gravitational field.

The deadline means there’s time for NASA to come up with a plan to tackle the problem. However, for a federal agency, less than two years is not exactly a lengthy time frame. It’s likely that whatever NASA comes up with will involve some kind of timekeeping equipment deployed on the Moon itself. This equipment would thus be subject to the time shift relative to Earth, making it easier to track differences in time between the lunar and terrestrial time-realities.

The US Naval Observatory doesn’t just keep careful track of time, it displays it on a big LED display for people in the area. NASA probably doesn’t need to establish a big time billboard on the Moon, but it’d be cool if they did. Credit: Votpuske, CC BY 4.0

Great minds are already working on the problem, like Kevin Coggins, NASA’s space communications and navigation chief. “Think of the atomic clocks at the U.S. Naval Observatory—they’re the heartbeat of the nation, synchronizing everything,” he said in an interview. “You’re going to want a heartbeat on the moon.”

For now, establishing LTC remains a project for the American space agency. It will work on the project in partnership with the Departments of Commerce, Defense, State and Transportation. One fears for the public servants required to coordinate meetings amongst all those departments.

Establishing new time standards isn’t cheap. It requires smart minds, plenty of research and development, and some serious equipment. Space-rated atomic clocks don’t come cheap, either. Regardless, the U.S. government hopes that NASA will lead the way for all spacefaring nations in this regard, setting a lunar time standard that can serve future operations well.

 

Programming Ada: First Steps on the Desktop

By: Maya Posch
23 April 2024 at 14:00

Who doesn’t want to use a programming language that is designed to be reliable, straightforward to learn and also happens to be certified for everything from avionics to rockets and ICBMs? Despite Ada’s strong roots and impressive legacy, it has the reputation among the average hobbyist of being ‘complicated’ and ‘obscure’, yet this couldn’t be further from the truth, as previously explained. In fact, anyone who has some or even no programming experience can learn Ada, as the very premise of Ada is that it removes complexity and ambiguity from programming.

In this first part of a series, we will be looking at getting up and running with a basic desktop development environment on Windows and Linux, and we’ll run through some Ada code to get familiar with the basic principles of Ada’s syntax. As for the Ada version, we will be targeting Ada 2012, as the newer Ada 2022 standard was only just approved in 2023 and doesn’t change anything significant for our purposes.

Toolchain Things

The go-to Ada toolchain for those who aren’t into shelling out big amounts of money for proprietary, certified and very expensive Ada toolchains is GNAT, which at one point in time stood for the GNU NYU Ada Translator. This came about when the United States Air Force awarded New York University (NYU) a contract in 1992 for a free Ada compiler. The result was the GNAT toolchain, which per the stipulations in the contract would be licensed under the GNU GPL and its copyright assigned to the Free Software Foundation. The commercially supported (by AdaCore) version of GNAT is called GNAT Pro.

Obtaining a copy of GNAT is very easy if you’re on a common Linux distro, with the package gnat for Debian-based distros and gcc-ada if you’re Arch-based. For Windows you can either download the AdaCore GNAT Community Edition, or if you use MSYS2, you can use its package manager to install the mingw-w64-ucrt-x86_64-gcc-ada package for e.g. the new ucrt64 environment. My personal preference on Windows is the MSYS2 method, as this also provides a Unix-style shell and tools, making cross-platform development that much easier. This is also the environment that will be assumed throughout the article.

Hello Ada

The most important part of any application is its entry point, as this determines where the execution starts. Most languages have some kind of fixed name for this, such as main, but in Ada you are free to name the entry point whatever you want, e.g.:

with Ada.Text_IO;
procedure Greet is
begin
    -- Print "Hello, World!" to the screen
    Ada.Text_IO.Put_Line ("Hello, World!");
end Greet;

Here the entry point is the Greet procedure, because it’s the only procedure or function in the code. The difference between a procedure and a function is that only the latter returns a value, while the former returns nothing (similar to void in C and C++). Comments start with two dashes, and packages are imported using the with statement. In this case we want the Ada.Text_IO package, as it contains the standard output routines like Put_Line. Note that since Ada is case-insensitive, we can type all of those names in lower-case as well.

Also noticeable might be the avoidance of symbols where an English word can be used, such as the use of is, begin and end rather than curly brackets. When closing a block with end, it is suffixed with the name of the function or procedure, or the control structure that’s being closed (e.g. an if/else block or loop). This will be expanded upon later in the series. Finally, much like in C and C++, lines end with a semicolon.
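
As a quick illustration of the procedure-versus-function distinction, here is a small program (the names are invented for this example) in which a nested function returns a value that the enclosing procedure then prints:

with Ada.Text_IO;

procedure Greet_Sum is
   --  A function returns a value...
   function Sum (A, B : Integer) return Integer is
   begin
      return A + B;
   end Sum;
begin
   --  ...while a procedure, like Greet_Sum itself, does not.
   Ada.Text_IO.Put_Line ("2 + 3 =" & Integer'Image (Sum (2, 3)));
end Greet_Sum;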

For a reference of the syntax and much more, AdaCore has an online reference as well as a number of freely downloadable books, which include a comparison with Java and C++. The Ada Language Reference Manual (LRM) is also freely available.

Compile And Run

To compile the simple sample code above, we need to get it into a source file, which we’ll call greet.adb. The standard extensions with the GNAT toolchain are .adb for the implementation (body) and .ads for the specification (somewhat like a C++ header file). It’s good practice to match the file name to the name of the main package or entry point (the unit name). A mismatch will still work, but you may get a warning depending on the toolchain configuration.
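
As a sketch of how that split looks in practice, a hypothetical Greetings package would live in greetings.ads and greetings.adb along these lines (the package, file, and procedure names are invented for the example):

--  greetings.ads: the specification, listing what the package exposes
package Greetings is
   procedure Say_Hello (Name : String);
end Greetings;

--  greetings.adb: the body, containing the implementation
with Ada.Text_IO;

package body Greetings is
   procedure Say_Hello (Name : String) is
   begin
      Ada.Text_IO.Put_Line ("Hello, " & Name & "!");
   end Say_Hello;
end Greetings;

Any unit that declares with Greetings; can then call Greetings.Say_Hello ("Ada");, and gnatmake will track and compile the body for you when it resolves the dependency.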

Unlike in C and C++, Ada code isn’t just compiled and linked, but also has an intermediate binding step, because the toolchain fully determines the packages, dependencies, and other elements within the project before assembling the compiled code into a binary.

An important factor here is that Ada does not use a preprocessor, and specification files aren’t copied into the file which references them with a with statement; the compiler merely takes note of the dependency during compilation. A nice benefit of this is that include guards are not necessary, and headaches with linking, such as the link order of objects and libraries, are virtually eliminated. This does however come at the cost of dealing with the binder.

Although GNAT comes with individual tools for each of these steps, the gnatmake tool allows the developer to handle all of these steps in one go. While some prefer to use the AdaCore-developed gprbuild, we will not be using it here as it adds complexity that is rarely helpful. To use gnatmake to compile the example code, we use a Makefile which produces the following output:

mkdir -p bin
mkdir -p obj
gnatmake -o bin/hello_world greet.adb -D obj/
gcc -c -o obj\greet.o greet.adb
gnatbind -aOobj -x obj\greet.ali
gnatlink obj\greet.ali -o bin/hello_world.exe

Although we just called gnatmake, the compilation, binding and linking steps were all executed in sequence, resulting in our extremely sophisticated Hello World application.

For reference, the Makefile used with the example is the following:

GNATMAKE = gnatmake
MAKEDIR = mkdir -p
RM = rm -f

BIN_OUTPUT := hello_world
ADAFLAGS := -D obj/

SOURCES := greet.adb

all: makedir build

build:
	$(GNATMAKE) -o bin/$(BIN_OUTPUT) $(SOURCES) $(ADAFLAGS)
	
makedir:
	$(MAKEDIR) bin
	$(MAKEDIR) obj

clean:
	rm -rf obj/
	rm -rf bin/
	
.PHONY: all build makedir clean

Next Steps

Great, so now you have a working development environment for Ada with which you can build and run any code that you write. Naturally, the topic of code editors and IDEs is a can of flamewars that I won’t be cracking open here. As mentioned in my 2019 article, you can use AdaCore’s GNAT Programming Studio (GPS) for an integrated development environment experience, if that is your jam.

My own development environment is a loose constellation of Notepad++ on Windows, and Vim on Windows and elsewhere, with Bash and similar shells the environment for running the Ada toolchain in. If there is enough interest I’d be more than happy to take a look at other development environments as well in upcoming articles, so feel free to sound off in the comments.

For the next article I’ll be taking a more in-depth look at what it takes to write an Ada application that actually does something useful, using the preparatory steps of this article.
