
The SS United States: The Most Important Ocean Liner We May Soon Lose Forever

By: Maya Posch
27 June 2024 at 14:30

Although it’s often said that the era of ocean liners came to an end by the 1950s with the rise of commercial aviation, reality isn’t quite that clear-cut. Out of the troubled 1940s arose a new kind of ocean liner, one using cutting-edge materials and propulsion, with hybrid civil and military use as the default, leading to a range of fascinating design decisions. This was the context in which the SS United States was born, with the beating heart of the US’ fastest battleships, lightweight aluminium structures, and survivability built into every single aspect of its design.

Thanks to its lack of heavy armor and triple 16″ turrets, it outpaced even the super-fast Iowa-class battleships with which it shares a lot of DNA, easily becoming the fastest ocean liner and setting speed records that took decades for other ocean-going vessels to beat, though no ocean liner ever truly did beat it on speed or comfort. Tricked out in the most tasteful non-flammable 1950s art and decorations imaginable, it would still be the fastest and most comfortable way to cross the Atlantic today. Unfortunately, ocean liners are no longer considered a way to travel in this era of commercial aviation, leaving the SS United States and its kin either scrapped or stuck in limbo.

In the case of the SS United States, so far it has managed to escape the cutting torch, but while in limbo many of its fittings were sold off at auction, and the conservation group which is in possession of the ship is desperately looking for a way to fund its restoration. Most recently, the owner of the pier where the ship is moored in Camden, New Jersey got the ship’s eviction approved by a judge, forcing some very tough choices to be made by September.

A Unique Design

WW II-era United States Maritime Commission (MARCOM) poster.

The designer of the SS United States is William Francis Gibbs, who despite being a self-taught engineer managed to translate his life-long passion for shipbuilding into a range of very notable ships. Many of these were designed at the behest of the United States Maritime Commission (MARCOM), which was created by the Merchant Marine Act of 1936, until it was abolished in 1950. MARCOM’s task was to create a merchant shipbuilding program for hundreds of modern cargo ships that would replace the World War I vintage vessels which formed the bulk of the US Merchant Marine. As a hybrid civil and federal organization, the merchant marine is intended to provide the logistical backbone for the US Navy in case of war and large-scale conflict.

The first major vessel to be commissioned for MARCOM was the SS America, an ocean liner commissioned in 1939 whose career only ended in 1994 when it (by then named the American Star) wrecked in the Canary Islands. This came after it had been sold in 1992 to be turned into a five-star hotel in Thailand. Drydocking in 1993 had revealed that despite the advanced age of the vessel, it was still in remarkably good condition.

Interestingly, the last merchant marine vessel to be commissioned by MARCOM was the SS United States, which would be a hybrid civilian passenger liner and military troop transport. Its sibling, the SS America, served in the Navy from 1941 to 1946 as the USS West Point (AP-23) and carried over 350,000 troops during the war period, more than any other Navy troopship. Its big sister would thus be required to do all that and much more.

Need For Speed

SS United States colorized promotional B&W photograph. The ship’s name and an American flag have been painted in position here as both were missing when this photo was taken during 1952 sea trials.

William Francis Gibbs’ naval architecture firm – called Gibbs & Cox by 1950 after Daniel H. Cox joined – was tasked to design the SS United States, which was intended to be a display of the best the United States of America had to offer. It would be the largest, fastest ocean liner and thus also the largest and fastest troop and supply carrier for the US Navy.

Courtesy of the major metallurgical advances during WW II, and with the full backing of the US Navy, the design featured a military-style propulsion plant and a heavily compartmentalized design following that of e.g. the Iowa-class battleships. This meant two separate engine rooms and similar levels of redundancy elsewhere, to isolate any flooding and other types of damage. Meanwhile the superstructure was built out of aluminium, making it both very light and heavily corrosion-resistant. The eight US Navy M-type boilers (run at only 54% of capacity) and a four-shaft propeller design took lessons learned with fast US Navy ships to reduce vibrations and cavitation to a minimum. These lessons include e.g. the five- and four-bladed propeller designs also seen on the Iowa-class battleships in their newer configurations.

Another lessons-learned feature was top-to-bottom fireproofing after the terrible losses of the SS Morro Castle and SS Normandie, with no wood, fabrics or other flammable materials onboard, leading to the use of glass, metal and spun-glass fiber, as well as fireproof fabrics and carpets. This extended to the art pieces onboard, as well as the ship’s grand piano, made from a species of mahogany whose inability to ignite was demonstrated by trying to light it with a gasoline fire.

The actual maximum speed that the SS United States can reach is still unknown, with it originally having been a military secret. Its first speed trial supposedly saw the vessel hit an astounding 43 knots (80 km/h), though after the ship was retired from the United States Lines (USL) by the 1970s and no longer seen as a naval auxiliary asset, its top speed during the June 10, 1952 trial was revealed to be 38.32 knots (70.97 km/h). In service with USL, its cruising speed was 36 knots, gaining it the Blue Riband and rightfully giving it its place as America’s Flagship.
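The knots-to-km/h figures above are a straight unit conversion, since a knot is one nautical mile (exactly 1.852 km) per hour; a quick sketch to check the article’s numbers:

```python
# A knot is one nautical mile per hour; a nautical mile is exactly 1.852 km.
KMH_PER_KNOT = 1.852

def knots_to_kmh(knots: float) -> float:
    return knots * KMH_PER_KNOT

print(round(knots_to_kmh(38.32), 2))  # 1952 trial top speed: 70.97 km/h
print(round(knots_to_kmh(36), 1))     # USL cruising speed: 66.7 km/h
```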

A Fading Star

The SS United States was withdrawn from passenger service by 1969, in a very unexpected manner. Although the USL was no longer using the vessel, it remained a US Navy reserve vessel until 1978, meaning that it remained sealed off to anyone but US Navy personnel during that period. Once the US Navy no longer deemed the vessel relevant to its needs in 1978, it was sold off, leading to a period of successive owners. Notable among them was Richard Hadley, who planned to convert it into seagoing time-share condominiums and auctioned off all the interior fittings in 1984 before his financing collapsed.

In 1992, Fred Mayer wanted to create a new ocean liner to compete with the Queen Elizabeth, leading him to have the ship’s asbestos and other hazardous materials removed in Ukraine, after which the vessel was towed back to Philadelphia in 1996, where it has remained ever since. Two more owners, including Norwegian Cruise Line (NCL), briefly came onto the scene, but economic woes scuttled plans to revive it as an active ocean liner. Ultimately NCL sought to sell the vessel off for scrap, which led the SS United States Conservancy (SSUSC) to take over ownership in 2010 and preserve the ship while seeking ways to restore and redevelop the vessel.

Considering that the running mate of the SS United States (the SS America) was lost only a few years prior, this leaves the SS United States as the only surviving example of a Gibbs ocean liner, and a poignant reminder of what was once a highlight of the US’s maritime prowess. Compared to the United Kingdom’s record here, with the Queen Elizabeth 2 (QE2, active since 1969) now a floating hotel in Dubai and the Queen Mary 2 in service since its 2004 maiden voyage, the US looks rather meager when it comes to preserving its ocean liner legacy.

End Of The Line?

The curator of the Iowa-class USS New Jersey (BB-62, currently fresh out of drydock), Ryan Szimanski, walked over from his museum ship last year to take a look at the SS United States, which is moored literally within viewing distance of his own pride and joy. Through the videos he made, one gains a good understanding of both how stripped the interior of the ship is and how amazingly well-conserved the ship remains today. Even after decades without drydocking or in-depth maintenance, the ship looks like it could slip into a drydock tomorrow and come out like new a year or so later.

At the end of all this, the question remains whether the SS United States deserves to be preserved. There are many arguments for why this would be the case: its unique history as part of the US Merchant Marine, its relation to the highly successful SS America, its status as effectively a sister ship to the four Iowa-class battleships, and its role as a strong reminder of the importance of the US Merchant Marine. The latter especially is a point which professor Sal Mercogliano (of What’s Going on With Shipping? fame) is rather passionate about.

Currently the SSUSC is in talks with a New York-based real-estate developer about a redevelopment concept, but this was thrown into peril when the owner of the pier suddenly doubled the rent, leading to the eviction by September. Unless something changes for the better soon, the SS United States stands a good chance of soon following the USS Kitty Hawk, USS John F. Kennedy (which nearly became a museum ship) and so many more into the scrapper’s oblivion.

What, one might ask, is truly in the name of the SS United States?

The Book That Could Have Killed Me

24 June 2024 at 14:00

It is funny how sometimes things you think are bad turn out to be good in retrospect. Like many of us, when I was a kid, I was fascinated by science of all kinds. As I got older, I focused a bit more, but that would come later. Living in a small town, there weren’t many recent science and technology books, so you tended to read through the same ones over and over. One day, my library got a copy of the relatively recent book “The Amateur Scientist,” which was a collection of [C. L. Stong’s] Scientific American columns of the same name. [Stong] was an electrical engineer with wide interests, and those columns were amazing. The book only had a snapshot of projects, but they were awesome. The magazine, of course, had even more projects, most of which were outside my budget and even more of them outside my skill set at the time.

If you clicked on the links, you probably went down a very deep rabbit hole, so… welcome back. The book was published in 1960, but the projects were mostly from the 1950s. The 57 projects ranged from building a telescope — the original topic of the column before [Stong] took it over — to using a bathtub to study aerodynamics of model airplanes.

X-Rays

[Harry’s] first radiograph. Not bad!
However, there were two projects that fascinated me and — lucky for me — I never got even close to completing. One was for building an X-ray machine. An amateur named [Harry Simmons] had described his setup, complaining that in 23 years he’d never met anyone else who had X-rays as a hobby. Oddly, in those days, it wasn’t a problem that the magazine published his home address.

You needed a few items. An Oudin coil, sort of like a Tesla coil in an autotransformer configuration, generated the necessary high voltage. In fact, it was the Oudin coil that started the whole thing. [Harry] was using it to power a UV light to test minerals for fluorescence. Out of idle curiosity, he replaced the UV bulb with an 01 radio tube. These old tubes had a magnesium coating — a getter — that absorbs stray gas left inside the tube.

The tube glowed in [Harry’s] hand and it reminded him of how an old gas-filled X-ray tube looked. He grabbed some film and was able to image screws embedded in a block of wood.

With 01 tubes hard to find, why not blow your own X-ray tubes?

However, 01 tubes were hard to get even then. So [Harry], being what we would now call a hacker, took the obvious step of having a local glass blower create custom tubes to his specifications.

Given that I lived where the library barely had any books published after 1959, it is no surprise that I had no access to 01 tubes or glass blowers. It wasn’t clear, either, if he was evacuating the tubes himself or if the glass blower was doing it for him, but the tube was pumped down to 0.0001 millimeters of mercury.

Why did this interest me as a kid? I don’t know. For that matter, why does it interest me now? I’d build one today if I had the time. We have seen more than one homemade X-ray tube project, so it is doable. But today I am probably able to safely handle high voltages and high vacuums, and to shield myself from the X-rays. Probably. Then again, maybe I still shouldn’t build this. But at age 10, I definitely would have done something bad to myself or my parents’ house, if not both.

Then It Gets Worse

The other project I just couldn’t stop reading about was a “homemade atom smasher” developed by [F. B. Lee]. I don’t know about “atom smasher,” but it was a linear particle accelerator, so I guess that’s an accurate description.

The business part of the “atom smasher” (does not show all the vacuum equipment).

I doubt I have the chops to pull this off today, much less back then. Old refrigerator compressors were run backwards to pull a rough vacuum. A homemade mercury diffusion pump got you the rest of the way there. I would work with some of this stuff later in life with scanning electron microscopes and similar instruments, but I was buying them, not cobbling them together from light bulbs, refrigerators, and home-blown glass!

You needed a good way to measure low pressure, too, so you needed to build a McLeod gauge full of mercury. The accelerator itself is a three-foot-long borosilicate glass tube, two inches in diameter. At the top is a metal globe with a peephole in it to allow you to see a neon bulb to judge the current in the electron beam. At the bottom is a filament.

The globe at the top matches one on top of a Van de Graaff generator that creates about 500,000 volts at a relatively low current. The particle accelerator is decidedly linear but, of course, all the cool particle accelerators these days form a loop.

[Andres Seltzman] built something similar, although not quite the same, some years back and you can watch it work in the video below:

What could go wrong? High vacuum, mercury, high voltage, an electron beam and plenty of unintentional X-rays. [Lee] mentions the danger of “water hammers” in the mercury tubes. In addition, [Stong] apparently felt nervous enough to get a second opinion from [James Bly] who worked for a company called High Voltage Engineering. He said, in part:

…we are somewhat concerned over the hazards involved. We agree wholeheartedly with his comments concerning the hazards of glass breakage and the use of mercury. We feel strongly, however, that there is inadequate discussion of the potential hazards due to X-rays and electrons. Even though the experimenter restricts himself to targets of low atomic number, there will inevitably be some generation of high-energy X-rays when using electrons of 200 to 300 kilovolt energy. If currents as high as 20 microamperes are achieved, we are sure that the resultant hazard is far from negligible. In addition, there will be substantial quantities of scattered electrons, some of which will inevitably pass through the observation peephole.
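To put those kilovolt figures in perspective, the Duane–Hunt law gives the shortest (hardest) X-ray wavelength a tube can emit at a given accelerating voltage. A minimal sketch with rounded physical constants, purely for illustration:

```python
# Duane-Hunt limit: minimum X-ray wavelength from electrons accelerated
# through a potential V is lambda_min = h*c / (e*V).
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
E = 1.602e-19   # elementary charge, C

def lambda_min_picometers(kilovolts: float) -> float:
    return H * C / (E * kilovolts * 1e3) * 1e12  # convert meters to pm

print(round(lambda_min_picometers(200), 1))  # ~6.2 pm: very hard X-rays
```

Sub-10-picometer photons sail through light shielding, which is exactly the hazard [Bly] was warning about.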

I Survived

Clearly, I didn’t build either of these, because I’m still here today. I did manage to make an arc furnace from a long-forgotten book. Curtain rods held carbon rods from some D-cells. The rods were in a flower pot packed with sand. An old power cord hooked to the curtain rods, although one conductor went through a jar of salt water, making a resistor so you didn’t blow the fuses.

Somehow, I survived without dying from fumes, blinding myself, or burning myself, but my parents’ house had a burn mark on the floor for many years after that experiment.

If you want to build an arc furnace, we’d start with a more modern concept. If you want a safer old book to read, try the one by [Edmund Berkeley], the developer of the Geniac.

Can You Freeze-Dry Strawberries Without a Machine?

20 June 2024 at 14:00
Just a pile of strawberries.

Summer has settled upon the northern hemisphere, which means that it’s time for sweet, sweet strawberries to be cheap and plentiful. But would you believe they taste even better in freeze-dried format? I wouldn’t have ever known until I happened to get on a health kick and was looking for new things to eat. I’m not sure I could have picked a more expensive snack, but that’s why we’re here — I wanted to start freeze-drying my own strawberries.

While I could have just dropped a couple grand and bought some kind of freeze-drying contraption, I just don’t have that kind of money. And besides, no good Hackaday article would have come out of that. So I started looking for alternative ways of getting the job done.

Dry Ice Is Nice

Dry ice, sublimating away in a metal measuring cup.
Image via Air Products

Early on in my web crawling on the topic, I came across this Valley Food Storage blog entry that seems to have just about all the information I could possibly want about the various methods of freeze-drying food. The one that caught my eye was the dry ice method, mostly because it’s only supposed to take 24 hours.

Here’s what you do, in a nutshell: wash, hull, and slice the strawberries, then put them in a resealable bag. Leave the bag open so the moisture can evaporate. Put these bags in the bottom of a large Styrofoam cooler, and lay the dry ice on top. Loosely affix the lid and wait 24 hours for the magic to happen.

I still had some questions. Does all the moisture simply evaporate? Or will there be a puddle at the bottom of the cooler that could threaten my tangy, crispy strawberries? One important question: should I break up the dry ice? My local grocer sells it in five-pound blocks, according to their site. The freeze-drying blog suggests doing a pound-for-pound match-up of fruit and dry ice, so I guess I’m freeze-drying five entire pounds of strawberries. Hopefully, this works out and I have tasty treats for a couple of weeks or months.

Preparation

In order to make this go as smoothly as possible, I bought both a strawberry huller and a combination fruit and egg slicer. Five pounds of strawberries is kind of a lot, eh? I’m thinking maybe I will break up the ice and try doing fewer strawberries in case it’s a complete failure.

I must have gotten rid of all our Styrofoam coolers, so I called the grocery store to make sure they have them. Unfortunately, my regular store doesn’t also have dry ice, but that’s okay — I kind of want to be ready with my cooler when I get the dry ice and not have to negotiate buying both while also handling the ice.

So my plan is to go out and get the cooler and the strawberries, then come back and wash the berries. Then I’ll go back out and get the dry ice and then hull and slice all the berries. In the meantime, I bought some food-safe desiccant packets that absorb moisture and change color. If this experiment works, I don’t want my crispy strawberries ruined by Midwestern humidity.

Actually Doing the Thing

So I went and bought the cooler and the strawberries. They were $2.99 for a 2 lb. box, so I bought two boxes, thinking that a little more poundage in dry ice than berries would be a good thing. I went back out to the other grocery store for the dry ice, and the person in the meat department told me they sell it in pellets now, in 3- and 6-lb. bags. So I asked for the latter. All that worrying about breaking it up for nothing!

Then it was go time. I got out my cutting board and resigned myself to hulling and slicing around 75 strawberries. But you know, it really didn’t take that long, especially once I got a rhythm going. I had no idea what the volume would be like, so I started throwing the slices into a gallon-sized bag. But then it seemed like too much mass, so I ended up with them spread across five quart-sized bags. I laid them in the bottom of the cooler in layers, and poured the dry ice pellets on top. Then I took the cooler down to the basement and made note of the time.

Since I ended up with six pounds of dry ice and only four pounds of strawberries, my intent is to check on things after 18 hours, even though it’s supposed to take 24. My concern is that the strawberries will get done drying out earlier than the 24-hour mark, and then start absorbing moisture from the air.

Fruits of Labor

I decided to check the strawberries a little early. There was no way the ice was going to last 24 hours, and I think it’s because I purposely put the lid on upside down to make it extra loose. The strawberries are almost frozen and are quite tasty, but they are nowhere near depleted of moisture. So I decided to get more ice and keep going with the experiment.

I went out and got another 6 lb. of pellets. This time, I layered everything, starting with ice in the bottom and ending with ice on top. This time, I put the lid on the right way, just loosely.

Totally Not Dry, But Tasty

Well, I checked them a few hours before the 24-hour mark, and the result looks much the same as the previous morning. Very cold berries that appear to have lost no moisture at all. They taste great, though, so I put them in the freezer to use in smoothies.

All in all, I would say that this was a good experiment. Considering I didn’t have anything I needed when I started out, I would say it was fairly cost-effective as well. Here’s how the pricing breaks down:

  • 28-quart Styrofoam cooler: $4.99
  • 4 lbs. of strawberries: $5.99
  • 12 lbs. of dry ice at $1.99/lb.: $24
  • a couple of resealable bags: $1

Total: $36, which is a little more than I paid for a big canister of freeze-dried strawberries on Amazon that lasted maybe a week. If this had worked, it would have been pretty cost-effective compared with buying them.
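For what it’s worth, the tally above checks out; a quick sketch of the arithmetic, with prices as quoted (the dry ice line being 12 lb at $1.99/lb):

```python
# Tallying the shopping list above; prices as quoted in the article.
items = {
    "28-quart Styrofoam cooler": 4.99,
    "4 lbs. of strawberries": 5.99,
    "12 lbs. of dry ice": 12 * 1.99,  # $23.88, listed as roughly $24
    "resealable bags": 1.00,
}
total = sum(items.values())
print(f"${total:.2f}")  # $35.86, i.e. the roughly-$36 figure
```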

So, can you freeze-dry strawberries without a machine? Signs still point to yes, but I’m going to go ahead and blame this one on the Midwestern humidity. You can bet I’ll be trying this again in the winter, probably with fewer berries and a smaller cooler. By the way, there was a small puddle underneath the cooler when it was all said and done.

Have you ever tried freeze-drying anything with dry ice? If so, how did it go? Do you have any tips? Let us know in the comments.


Main and thumbnail images via Unsplash

PCB Design Review: A 5V UPS With LTC4040

13 June 2024 at 14:00

Do you have a 5 V device you want to run 24/7, no matter whether you have mains power? Not to worry – Linear Technology has made the perfect IC for you, the LTC4040, with just the right assortment of features – except perhaps for the hefty price tag.

[Lukilukeskywalker] has shared a PCB for us to review – an LTC4040-based stamp you can drop onto your PCB whenever you want an LTC4040 design. It’s a really nice module to see designed – things like LiFePO4 support make this IC a perfect solution for many hacker use cases. For instance, are you designing a custom Pi HAT? Drop this module in to give your HAT UPS capability for barely any PCB effort. If your Pi or any other single-board computer needs just a little bit of custom sauce, this module spices it up alright!

This one is a well-designed module! I almost feel like producing a couple of these, just to make sure I have them handy. If you like the LTC4040, given its numerous features and all, this is also not the only board in town – here’s yet another LTC4040 board, one with two 18650 holders; referencing its PCB design will help me in this review, so you can take a look at it too!

Now, having looked at this PCB for a fair bit, it has a few things that we really do want to take care of. Part of today’s review will be connector selection, another would be the module form-factor, some layout, and some suggestions on sizing the passives – the rows of 1206 components are pretty, but they’re also potentially a problem. Let’s waste no time and delve in.

Battery Wireup And Formfactor

The battery connector uses JST-SH, one pin for VBAT and one for GND. The problem with this is that the module is capable of 2.5 A at 5 V = 12.5 W. At 3.6 V on the battery side, that’s 4 A if not more, and JST-SH is only rated for 1 A per pin. Using this module with a battery as intended will melt things. You could add a bigger connector like the standard JST-PH, but that’d increase the module size, and my assessment is that this board doesn’t have to be larger than it already is.
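The battery-side current follows from conservation of power through the boost converter. A minimal sketch, assuming a 90% converter efficiency (my guess for illustration; the LTC4040 datasheet efficiency curves give the real figures):

```python
# Battery-side current for the 2.5 A / 5 V backup case discussed above.
V_OUT, I_OUT = 5.0, 2.5   # backup output: 2.5 A at 5 V
V_BAT = 3.6               # nominal Li-ion cell voltage
EFF = 0.90                # assumed boost efficiency, not a datasheet value

p_out = V_OUT * I_OUT            # 12.5 W delivered to the load
i_bat = p_out / (EFF * V_BAT)    # current drawn from the battery

print(f"{p_out} W out -> {i_bat:.1f} A from the battery")
```

That works out to roughly 3.9 A, nearly four times the 1 A per-pin rating of JST-SH, which is exactly the melting hazard described above.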

Thankfully, this is an open-source module, so we can change its pinout easily enough, adding pins for the battery into the mix. Currently, this board feels breadboardable, but it isn’t quite – it’s pretty wide, so it will take two breadboards to handle, and a breadboard would also probably be disappointed with the pin amount required. With that in mind, adding pins at the top looks convenient enough.

In general, shuffling the pins around will help a fair bit. My hunch is to make the module’s castellations asymmetric, say, do 7-5-5-5 – one side with seven pins, three sides with five pins. It might not look as perfect, but what’s important is that it will be way way harder to mount incorrectly, something I’ve done with a module of my own design – that was not fun to fix. If you are worried about having enough pins to fill the resulting 22-pin combination, it’s always great to just add GND, doubly so for a power-related module!

Adding more castellations also helps us shuffle the pinout around, freeing up the routing – let’s go through the pins and see what that could help with.

Pinout Changes

The schematic is seriously nice looking – every single block is nicely framed and has its description listed neatly. Comparing it with the reference schematic, it looks pretty good!

There are a few nits to pick. For instance, BST_OFF and CHG_OFF need to be grounded for the IC to work – datasheet page 10. You could ground them through a resistor and pull them onto a castellation, but you can’t leave them floating. This is not easy to notice, however, unless you go through the pins one by one and recheck their wiring; I noticed it because I was looking at the board, saw two unconnected pins, and decided to check.

My hunch is that, first, all the pins were given power names, and then two of them were missed as not connected anywhere, which is an understandable mistake to make.

Let’s keep with the schematic style – add two more connectors, one 5-pin and one 7-pin, rearrange the pinout, and keep them in their own nicely delineated area. The 7-pin connector gets the battery pins and a healthy dose of ground, and as for the 5 extra pins at the bottom, they’ll serve as extra ground pins, and give us shuffling slots for pins that are best routed southward.

Components And Placement

Having 1206 resistors on such a module is a double-edged sword. On one hand, given the adjustability, you definitely want resistors that you’d be able to replace easily, so 0402 is not a great option. However, 1206 can actually be harder to replace with a soldering iron, since you need to heat up both sides. The writing is more readable on 1206, no doubt, and it’s also nice that this module is optimized for size. Still, for the sake of routability, I will start by replacing the LEDs and LED resistors with 0603 components – those are resistors you will not be expected to replace, anyway.

Also, I have a hunch that a few components need to be moved around. First one is the RProg, no doubt – it’s in the way of the switching path, going right under the SW polygon. Then, I will rotate the Rsense resistor so that it’s oriented vertically – it feels like that should make the VIN track less awkward, and show whether there’s any space to be freed on the left.

Resistors replaced, a few components moved, and here’s where the fun begins. The IGATE track is specifically designated in the datasheet as pretty sensitive, to the point the PDF talks about leakage from this track to the other tracks – it is a FET gate driver output, after all. Having it snake all around power tracks feels uncomfortable. I’d like to refactor these FETs a bit, and see if I can make the IGATE track a bit more straightforward, perhaps also making the space usage on the left more optimized. While doing that, I will be shuffling pins between the castellated edges every now and then.

After a bit of shuffling and component rerouting, it felt like I wasn’t getting anywhere. It was time to try and reconstruct the circuit in the way it could make sense, prioritizing the power path and starting with it. For that, I pulled out both FETs, current sense resistor and the feedback divider out of the circuit, and tried rearranging them until it looked like they would work.

Following quite a few attempts at placing the components, I had to settle on the last one. I_GATE took quite a detour, though I did route it via-less in the end; VIN and CLN went on the bottom layer to give room to I_GATE (and be able to cross each other), and all the non-sensitive signals went into vias so that they could be routed outside of the switching area. It turned out the pinout is seriously not conducive to a neat layout; I suppose some chips are just like that. Perhaps the gate driver could only have been located on this particular part of the die, and that’s why the IGATE pin is on the opposite side of where the FET could be, instead of, say, being next to the V_SYS outputs.

Post-Redesign Clarity

Is the board better now? In many ways, yes; in some ways, no. I don’t know that it’s necessarily prettier, if that makes sense; there were certainly things about the board’s original state that were seriously nice. The package chosen for the FETs definitely didn’t help routing with my I_GATE target in mind, giving no leeway to route things between pins; if I were to change them to DFN8, I could more easily provide the VSYS guard track that the datasheet suggests you use for I_GATE.

I’ve also rearranged the pinout quite a bit. That does mean the STATUS/POWER side distinction of the original board no longer works, but now pins don’t have to go across the board, cutting GND in half. After looking into the datasheet, I didn’t find any use for the CSN pin being broken out, since it’s just a sense resistor net; that space is now occupied by a GND pin, and there’s one less track to route out.

There’s now a good few GND pins on the board – way more than you might feel like you need; the right header feels particularly empty. If you wanted, you could add a Maxim I2C LiIon fuel gauge onto the board, since there’s now enough space in the top right, and quite a few free pins on the right. This would let your UPS-powered device also query the UPS’s status, for one. Of course, such things can always be added onto the actual board that the module would mount onto.

I also removed designators about things that felt too generic – specifically, resistors that only have one possible value and won’t need to be replaced, like LED resistors and pullups for mode selection jumpers. All in all, this board is now a little easier to work with, and perhaps, its ground distribution is a little better.

This module’s idea is seriously cool, and so are both its author’s implementation and mine! I hope I’ve helped make it cooler, if at least in the battery connector department. Both the pre-review and post-review versions are open-source, so you can also base your own castellated module off this board if you desire – it’s a good reference design for both the LTC4040 and self-made castellated modules. It’s only 30 mm x 30 mm, too, so it will be very cheap to get made. At this point, I want to make a board around this module – stay tuned!

As usual, if you would like a design review for your board, submit a tip to us with [design review] in the title, linking to your board files. KiCad design files strongly preferred, both repository-stored files (GitHub/GitLab/etc) and shady Google Drive/Dropbox/etc .zip links are accepted.

Displays We Love Hacking: DSI

12 June 2024 at 14:00

We would not be surprised if DSI screens made up the majority of screens on our planet at this moment in time. If you own a smartphone, there’s a 99.9% chance its screen is DSI. Tablets are likely to use DSI too, unless it’s eDP instead, and a smartwatch of yours definitely will. In a way, DSI displays are inescapable.

This is for a good reason. The DSI interface is a mainstay in any SoC or mobile CPU worth its salt: it allows for higher speeds and thus higher resolutions than SPI could ever achieve, uses comparably few pins, can send commands to the display’s controller (unlike LVDS or eDP), and stays low-power while doing all of it.

There’s money and power in hacking on DSI – an ability to equip your devices with screens that can’t be reused otherwise, building cooler and cooler stuff, tapping into sources of cheap phone displays. What’s more, it’s a comparably underexplored field, too. Let’s waste no time, then!

Decently Similar Internals

DSI is an interface defined by the MIPI Alliance, a group whose standards are not entirely open. Still, nothing is truly new under the sun, and DSI shares a lot of concepts with interfaces we’re used to. For a start, if you remember DisplayPort internals, there are similarities. When it comes to data lanes, DSI can have one, two, or four lanes of a high-speed data stream; smaller displays can subsist on a single lane, while very high resolution displays will want all four. This is where the similarities end, though: there’s no AUX channel to talk to the display controller – instead, the data lanes themselves switch between two modes.
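To get a feel for why lane count matters, here’s some quick back-of-the-envelope math. The panel figures and the 20% protocol/blanking overhead are illustrative assumptions, not from any particular datasheet:

```python
# Rough pixel-bandwidth estimate for a DSI panel. All figures below are
# illustrative assumptions, not taken from a specific panel datasheet.
def per_lane_rate_mbps(width, height, bpp, fps, lanes, overhead=1.2):
    """Raw video payload split across lanes, with ~20% assumed
    protocol and blanking overhead folded in."""
    payload_bps = width * height * bpp * fps
    return payload_bps * overhead / lanes / 1e6

# A 1080p smartphone-class panel at 60 Hz, 24 bits per pixel:
print(round(per_lane_rate_mbps(1920, 1080, 24, 60, lanes=4)))  # 896 Mbps/lane
# The same panel on a single lane would need ~3.6 Gbps -- hence four lanes:
print(round(per_lane_rate_mbps(1920, 1080, 24, 60, lanes=1)))  # 3583 Mbps
```

Numbers in that range are comfortably beyond what an SPI display interface can move, which is the whole point of the high-speed lanes.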

The first mode is low-speed, used for sending commands to the display, like initialization sequences, tweaking the controller parameters, or entering sleep mode. You can capture this with a logic analyzer. If you’ve ever sniffed communications of an SPI display, you will find that there are many similarities with how DSI commands are sent – in fact, many SPI displays use a command set defined by the MIPI Alliance that DSI displays also use. (If your Sigrok install lists a DSI decoder, don’t celebrate too soon – it’s an entirely different kind of DSI.)

The second mode is high-speed, and it’s the one typically used for pixel transfer. A logic analyzer won’t do here, at least not unless it’s seriously powerful when it comes to capture rate. You will want to use a decent scope for working with high-speed DSI signals, know your way around triggers, and perhaps make a custom PCB tap with a buffer for the DSI signal so that your probe doesn’t become a giant stub, and figure out a way to work with the impedance discontinuities. Still, it is very much possible to tap into high-speed DSI, as [Wenting Zhang] has recently demonstrated; sometimes an approximation of the high-speed signal is more than enough for reverse-engineering.

Got a datasheet for your panel? Be careful – the initialization sequence in it might be wrong; if your bringup is not successful or your resulting image is weird, this just might be the culprit, so even if you have procured the correct PDF, you might still end up having to capture the init sequence with a logic analyzer. Whether your display’s initialization sequences are well-known, or you end up capturing them from a known-working device, you will need something to drive your display with – a typical Arduino board will no longer do; though, who knows, an RP2040 just might, having seen what you all are capable of.
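To make the captured-init-sequence idea concrete, here’s a sketch of how such a sequence typically looks once recovered: a list of DCS opcodes, parameter bytes, and delays. The two opcodes shown (0x11 “exit sleep” and 0x29 “display on”) are standard MIPI DCS commands, but everything else – the function names, the transport – is a stand-in for whatever your DSI peripheral’s API actually provides:

```python
import time

# Standard MIPI DCS opcodes; most display controllers honor at least these.
DCS_EXIT_SLEEP_MODE = 0x11
DCS_SET_DISPLAY_ON  = 0x29

# A recovered init sequence, as (opcode, parameter bytes, post-delay in ms).
# Real panels usually also need a pile of vendor-specific commands here --
# that's the part you'd capture with a logic analyzer.
INIT_SEQUENCE = [
    (DCS_EXIT_SLEEP_MODE, b"", 120),  # controller needs time to wake up
    (DCS_SET_DISPLAY_ON,  b"", 20),
]

def run_init(dcs_write, sequence=INIT_SEQUENCE, sleep=time.sleep):
    """Replay an init sequence through a caller-supplied DCS write function."""
    for opcode, params, delay_ms in sequence:
        dcs_write(opcode, params)
        sleep(delay_ms / 1000)

# Dry run against a logging stub instead of real hardware:
log = []
run_init(lambda op, p: log.append(op), sleep=lambda s: None)
print([hex(op) for op in log])  # ['0x11', '0x29']
```

Representing the sequence as data like this makes it trivial to swap in whatever you sniffed from a known-working device.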

Ideally, you will want a microcontroller or a CPU that has a DSI peripheral, with decent documentation and/or examples on how to use it – that part is important. Linux-capable Raspberry Pi boards can help you here a surprising amount – you may remember the Raspberry Pi DSI header as being proprietary, but that was only true initially. With developments like the official Raspberry Pi screen and open-source graphics drivers aided by that $10k driver bounty they put out, it became viable to connect custom screens. WaveShare DSI screens are a known alternative if you want to get a DSI display for your Pi. On the regular Pi, you only get two lanes of DSI, but that is good enough for many a display. Funnily enough, you can get a third-party display for your Pi that uses the same panel, with two extra chips that seem to run the display without a driver like the official Pi display (this thread on these displays is fascinating!); the display is still limited to the same resolution, the only advantage is a slightly lower price, and the ability to overload your 3.3V rail is a questionable benefit. It’s not quite clear why this display exists, but you might want to resist the temptation.

If you’re using a Pi Compute Module, you get two entire DSI peripherals to play with, one four-lane and one two-lane, and it doesn’t take long to find a good few examples of Raspberry Pi Compute Module boards with DSI screens. If you have a Compute Module and its devboard somewhere on a shelf, you can do four-lane DSI, with a Linux-exposed interface that works in the same way alternative OSes do on your phone. Given that CMs are typically used for custom stuff and a hacker using one is more likely to have patience for figuring out DSI panel parameters, a Compute Module baseboard is a pretty popular option to hack on that one cheap DSI display from a tablet that caught your eye! Don’t have a baseboard? You can even etch one, here’s a single-layer breakout with a DSI socket. Note that you don’t need a Compute Module if you’re doing two-lane DSI: a regular Pi will do.

So, get out there and hack – there is a ton of unexplored potential in the never-ending supply of aftermarket screens for older iPhone and Samsung models! Speaking of phones, they are at the forefront of DSI hacking, as you might suspect, thanks to all the alternative OS projects and Linux kernel mainlining efforts. You can enjoy the fruits of their labour fairly easily, sparing you a logic analyzer foray – reusing a seriously nice DSI display might be as easy as loading a kernel module.

Want A Panel? Linux Is Here To Help

There’s a fun hacker tactic – if you’re looking for an I2C GPIO expander chip, you can scroll through the Linux kernel config file that lists supported GPIO expanders, and find a good few ICs you’ve never known about! What’s great is, you know you’re getting a driver, too.

The same goes for DSI screens, except the payoff is way higher. If you’re on the market for a DSI screen, you can open the list of Linux kernel drivers for various DSI panels. Chances are, all you need is just the physical wireup part, maybe some backlight driving, and a Device Tree snippet.

Want a $20 1920 x 1200 IPS display for your Compute Module? Who doesn’t! Well, wouldn’t you know, the Google Nexus 7 tablet uses one, and the driver for it is in mainline Linux! Just solder together a small FPC-to-bespoke-connector adapter board (or order PCBA), add a Device Tree snippet into your configuration, and off you go; there are even custom boards for using this display with a CM4, it’s that nice.

New displays get added into the kernel all the time; all it takes is someone willing to poke at the original firmware, perhaps load a proprietary kernel module into Ghidra and pull out the initialization sequence, or simply enable the right kind of debug logging in the stock firmware. All of this is thanks to tireless efforts of people trying to make their phones work beyond the bloatware-ridden shackles of the stock Android OS; sometimes, it’s some company doing the right thing and upstreaming a driver for a panel used by hundreds of thousands of devices in the wild.

There are some fun nuances in the display scene, as much of a “scene” as it is – people are just trying to make their devices work for them, then share that work with other people in the same situation, and figuring out a display is part of the process. It’s not uncommon for a smartphone to ship with slightly different screens within the same batch – a rare but real issue with alternative OSes like LineageOS, where, say, 10% of your firmware’s users might have their panel malfunction because, despite the phone listing the same model on the lid, their specific phones use a display with a different controller that only the manufacturer’s firmware properly accounts for.

Our DSI Role Models

These are the basics of what you need to reuse DSI displays almost effortlessly. Now, I’d like to highlight a good few examples of people hacking on DSI, from our coverage and otherwise.

Without a doubt, the first one that springs to mind is [Mike Harrison] aka [mikeselectricstuff], from way back in 2013. I’ve spent a lot of time with the exact iPod Nano being reverse-engineered, and [Mike]’s videos gave me insight into a piece of tech I relied on for a fair bit. For instance, in this video, [Mike] masterfully builds a scoping jig, solders microscopic wires to the tiny PCB, walks us through the entire reverse-engineering process, and successfully reuses the LCD for a project.

Following in [Mike]’s footsteps, we’ve even seen this display reused in an ESP32 project, thanks to a parallel RGB to DSI converter chip!

[Wenting Zhang] reverse-engineering a MacBook Touch Bar display is definitely on my favourites list. In this short video, he teaches us DSI fundamentals, and manages to show the entire reverse-engineering process from start to end, no detail spared. Having just checked the video description, the code is open-source, and it’s indeed an RP2040 project – just like I forecasted a good few paragraphs above.

Are mysterious ASICs your vibe, and would you like to poke at some firmware? You should see this HDMI-to-DSI adapter project, then. The creator even turns it into a powerbank with a built-in screen as a demo – that’s a hacker accessory if I’ve ever seen one. More of a gateware fan? Here’s an FPGA board doing the same, and another one, that you can see here driving a Galaxy S4 screen effortlessly. Oh, and if you are friends with a Xilinx+Vivado combination, there are DSI IP cores for you to use with barely any restrictions.

The Year Of DSI Hacking

DSI is an interface that is becoming increasingly hacker-friendly – the economies of scale are simply forcing our hand, and even the microcontroller makers are following suit. The official devboard for Espressif’s ESP32-P4, a pretty beefy RISC-V chip, sports a DSI interface alongside the now-usual CSI for cameras. We will see DSI more and more, and I raise a glass of water for numerous hackers soon to reap the fields of DSI. May your harvest be plentiful.

I thank [timonsku] for help with this article!

Scrapping the Local Loop, by the Numbers

11 June 2024 at 14:00

A few years back I wrote an “Ask Hackaday” article inviting speculation on the future of the physical plant of landline telephone companies. It started innocently enough; an open telco cabinet spotted during my morning walk gave me a glimpse into the complexity of the network buried beneath my feet and strung along poles around town. That in turn begged the question of what to do with all that wire, now that wireless communications have made landline phones so déclassé.

At the time, I had a sneaking suspicion that I knew what the answer would be, but I spent a good bit of virtual ink trying to convince myself that there was still some constructive purpose for the network. After all, hundreds of thousands of technicians and engineers spent lifetimes building, maintaining, and improving these networks; surely there must be a way to repurpose all that infrastructure in a way that pays at least a bit of homage to them. The idea of just ripping out all that wire and scrapping it seemed unpalatable.

With the decreasing need for copper voice and data networks and the increasing demand for infrastructure to power everything from AI data centers to decarbonized transportation, the economic forces arrayed against these carefully constructed networks seem irresistible. But what do the numbers actually look like? Are these artificial copper mines as rich as they appear? Or is the idea of pulling all that copper out of the ground and off the poles and retasking it just a pipe dream?

Phones To Cars

There are a lot of contenders for the title of “Largest Machine Ever Built,” but it’s a pretty safe bet that the public switched telephone network (PSTN) is in the top five. From its earliest days, the PSTN was centered around copper, with each and every subscriber getting at least one pair of copper wires running from their home or business. These pairs, referred to collectively and somewhat loosely as the “local loop,” were gathered together into increasingly larger bundles on their way to a central office (CO) housing the switchgear needed to connect one copper pair to another. For local calls, it could all be done within the CO or by connecting to a nearby CO over copper lines dedicated to the task; long-distance calls were accomplished by multiplexing calls together, sometimes over microwave links but often over thick coaxial cables.

Fiber optic cables and wireless technologies have played a large part in making all the copper in the local loops and beyond redundant, but the fact remains that something like 800,000 metric tons of copper is currently locked up in the PSTN. And judging by the anti-theft efforts that Home Depot and other retailers are making, not to mention the increase in copper thefts from construction sites and other soft targets, that material is incredibly valuable. Current estimates are that PSTNs are sitting on something like $7 billion worth of copper.

That sure sounds like a lot, but what does it really mean? Assuming that the goal of harvesting all that largely redundant PSTN copper is to support decarbonization, $7 billion worth of copper isn’t really that much. Take EVs for example. The typical EV on the road today has about 132 pounds (60 kg) of copper, or about 2.5 times the amount in the typical ICE vehicle. Most of that copper is locked up in motor windings, but there’s a lot in the bus bars and wires needed to connect the batteries to the motors, plus all the wires needed to connect all the data systems, sensors, and accessories. If you pulled all the copper out of the PSTN and used it to do nothing but build new EVs, you’d be able to build about 13.3 million cars. That’s a lot, but considering that 80 million cars were put on the road globally in 2021, it wouldn’t have that much of an impact.
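If you want to check that math yourself, it’s only a couple of lines – the figures are the same estimates quoted above:

```python
# Back-of-the-envelope check of the EV numbers, using the estimates above.
PSTN_COPPER_KG = 800_000 * 1000  # ~800,000 metric tons locked up in the PSTN
COPPER_PER_EV_KG = 60            # ~132 lb (60 kg) per typical EV

evs = PSTN_COPPER_KG / COPPER_PER_EV_KG
print(f"{evs / 1e6:.1f} million EVs")          # 13.3 million EVs
print(f"{evs / 80e6:.0%} of 2021 production")  # 17% of 2021 production
```

So the entire PSTN buys you roughly two months of global car production – copper-wise, anyway.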

Farming the Wind

What about on the generation side? Thirteen million new EVs are going to need a lot of extra generation and transmission capacity, and with the goal of decarbonization, that probably means a lot of wind power. Wind turbines take a lot of copper; currently, bringing a megawatt of on-shore wind capacity online takes about 3 metric tons of copper. A lot of that goes into the windings in the generator, but that also takes into account the wire needed to get the power from the nacelle down to the ground, plus the wires needed to connect the turbines together and the transformers and switchgear needed to boost the voltage for transmission. So, if all of the 800,000 metric tons of copper currently locked up in the PSTN were recycled into wind turbines, they’d bring a total of 267,000 megawatts of capacity online.

To put that into perspective, the total power capacity in the United States is about 1.6 million megawatts, so converting the PSTN to wind turbines would increase US grid capacity by about 16% — assuming no losses, of course. Not too shabby; that’s over ten times the capacity of the world’s largest wind farm, the Gansu Wind Farm in the Gobi Desert in China.

There’s one more way to look at the problem, one that I think puts a fine point on things. It’s estimated that to reach global decarbonization goals, in the next 25 years we’ll need to mine at least twice the amount of copper that has ever been mined in human history. That’s quite a lot; we’ve taken 700 million metric tons of copper in the last 11,000 years. Doubling that means we’ve got to come up with 1.4 billion metric tons in the next quarter century. The 800,000 metric tons of obsolete PSTN copper is therefore only about 0.05% of what’s needed — not even a drop in the bucket.
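The wind and mining ratios are just as easy to sanity-check, again using the estimates quoted in the text:

```python
# Wind-turbine and mining ratios, from the figures quoted in the text.
PSTN_COPPER_T = 800_000   # metric tons of copper in the PSTN
COPPER_PER_MW_T = 3       # metric tons of copper per MW of onshore wind
US_CAPACITY_MW = 1_600_000

wind_mw = PSTN_COPPER_T / COPPER_PER_MW_T
print(f"{wind_mw:,.0f} MW of wind")                    # 266,667 MW of wind
print(f"{wind_mw / US_CAPACITY_MW:.1%} of US grid")    # 16.7% of US grid

# Decarbonization demand: double everything ever mined, in 25 years.
needed_t = 2 * 700_000_000
print(f"{PSTN_COPPER_T / needed_t:.3%} of what's needed")  # 0.057% of what's needed
```

However you slice it, the PSTN’s copper is a rounding error against what decarbonization actually demands.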

Accepting the Inevitable

These are just a few examples of what could be done with the “Buried Fortune” of PSTN copper, as Bloomberg somewhat breathlessly refers to it in the article linked above. It goes without saying that this is just back-of-the-envelope math, and that a real analysis of what it would take to recycle the old PSTN copper and what the results would be would require a lot more engineering and financial chops than I have. Even if it is just a drop in the bucket, I think we’ll probably end up doing it, if for no other reason than it takes something like two decades to bring a new copper mine into production. Until those mines come online and drive the price of copper down, all that refined and (relatively) easily recycled copper just sitting there is a tempting target for investors. So it’ll probably happen, which is sad in a way, but maybe it’s a more fitting end to the PSTN than just letting it sit there and corrode.

Hands On: Inkplate 6 MOTION

By: Tom Nardi
6 June 2024 at 14:00

Over the last several years, DIY projects utilizing e-paper displays have become more common. While saying the technology is now cheap might be overstating the situation a bit, the prices on at least small e-paper panels have certainly become far more reasonable for the hobbyist. Pair one of them with a modern microcontroller such as the RP2040 or ESP32, sprinkle in a few open source libraries, and you’re well on the way to creating an energy-efficient smart display for your home or office.

But therein lies the problem. There’s still a decent amount of leg work involved in getting the hardware wired up and talking to each other. Putting the e-paper display and MCU together is often only half the battle — depending on your plans, you’ll probably want to add a few sensors to the mix, or perhaps some RGB status LEDs. An onboard battery charger and real-time clock would be nice as well. Pretty soon, your homebrew e-paper gadget is starting to look remarkably like the bottom of your junk bin.

For those after a more integrated solution, the folks at Soldered Electronics have offered up a line of premium open source hardware development boards that combine various styles of e-paper panels (touch, color, lighted, etc) with a microcontroller, an array of sensors, and pretty much every other feature they could think of. To top it off, they put in the effort to produce fantastic documentation, easy to use libraries, and free support software such as an online GUI builder and image converter.

We’ve reviewed a number of previous Inkplate boards, and always came away very impressed by the attention to detail from Soldered Electronics. When they asked if we’d be interested in taking a look at a prototype for their new MOTION 6 board, we were eager to see what this new variant brings to the table. Since both the software and hardware are still pre-production, we won’t call this a review, but it should give you a good idea of what to expect when the final units start shipping out in October.

Faster and Stronger

As mentioned previously, the Inkplate boards have generally been differentiated by the type of e-paper display they’ve featured. In the case of the new MOTION, the theme this time around is speed — Soldered says this new display is capable of showing 11 frames per second, no small feat for a technology that’s notoriously slow to refresh. You still won’t be watching movies at 11 FPS of course, but it’s more than enough to display animations and dynamic information thanks to its partial refresh capability that only updates the areas of the display where the image has actually changed.

But it’s not just the e-paper display that’s been swapped out for a faster model. For the MOTION 6, Soldered traded in the ESP32 used on all previous Inkplates for the STM32H743, an ARM Cortex-M7 chip capable of running at 480 MHz. Well, at least partially. You’ll still find an ESP32 hanging out on the back of the MOTION 6, but it’s there as a co-processor to handle WiFi and Bluetooth communications. The STM32 chip features 1 MB of internal SRAM and has been outfitted with a whopping 32 MB of external DRAM, which should come in handy when you’re throwing 4-bit grayscale images at the 1024 x 758 display.

The Inkplate MOTION 6 also features an impressive suite of sensors, including a front-mounted APDS-9960 which can detect motion, proximity, and color. On the backside you’ll find the SHTC3 for detecting temperature and humidity, as well as a LSM6DSO32 accelerometer and gyroscope. One of the most impressive demos included in the MOTION 6’s Arduino library pulls data from the gyro and uses it to rotate a wireframe 3D cube as you move the device around. Should you wish to connect other sensors or devices to the board, you’ve got breakouts for the standard expansion options such as I²C and SPI, as well as Ethernet, USB OTG, I²S, SDMMC, and UART.

Although no battery is included with the MOTION 6, there’s a connector for one on the back of the board, and the device includes a MCP73831 charge controller and the appropriate status LEDs. Primary power is supplied through the board’s USB-C connector, and there’s also a set of beefy solder pads along the bottom edge where you could wire up an external power source.

For user input you have three physical buttons along the side, and a rather ingenious rotary encoder — but to explain how that works we need to switch gears and look at the 3D printed enclosure Soldered has created for the Inkplate MOTION 6.

Wrapped Up Tight

Under normal circumstances I wouldn’t go into so much detail about a 3D printed case, but I’ve got to give Soldered credit for the little touches they put into this design. Living hinges are used for both the power button and the three user buttons on the side, there’s a holder built into the back for a pouch battery, and there’s even a little purple “programming tool” that tucks into a dedicated pocket — you’ll use that to poke the programming button when the Inkplate is inside the enclosure.

But the real star is the transparent wheel on the right hand side. The embedded magnet in its center lines up perfectly with an AS5600 magnetic angle encoder on the Inkplate, with an RGB LED just off to the side. Reading the value from the AS5600 as the wheel rotates gives you a value between 0 and 4095, and the library offers macros to convert that to radians and degrees. Combined with the RGB LED, this arrangement provides an input device with visual feedback at very little cost.

It’s an awesome idea, and now I’m looking for an excuse to include it in my own hardware designs.
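Since the AS5600 is a 12-bit encoder, the conversion the library’s macros perform boils down to simple scaling. Here’s a Python sketch of the math – the function names are mine, not the library’s:

```python
import math

AS5600_COUNTS = 4096  # the AS5600 reports a 12-bit absolute angle

def raw_to_degrees(raw):
    """Convert a raw AS5600 reading (0..4095) to degrees."""
    return (raw % AS5600_COUNTS) * 360.0 / AS5600_COUNTS

def raw_to_radians(raw):
    return math.radians(raw_to_degrees(raw))

print(raw_to_degrees(1024))           # 90.0
print(round(raw_to_degrees(4095), 1)) # 359.9
```

The same scaling works for mapping the wheel onto menu positions or a volume range, too.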

The 3D printed case is being offered as an add-on for the Inkplate MOTION 6 at purchase time, but both the STLs and Fusion 360 files for it will be made available with the rest of the hardware design files for those who would rather print it themselves.

An Exciting Start

As I said in the beginning of this article, the unit I have here is the prototype — while the hardware seems pretty close to final, the software side of things is obviously still in the early stages. Some of the libraries simply weren’t ready in time, so I wasn’t able to test things like WiFi or Bluetooth. Similarly, I wasn’t able to try out the MicroPython build for the MOTION 6. That said, I have absolutely no doubt that the team at Soldered Electronics will have everything where it needs to be by the time customers get their hands on the final product.

There’s no denying that the $169 USD price tag of the Inkplate MOTION 6 will give some users pause. If you’re looking for a budget option, this absolutely isn’t it. But what you get for the price is considerable. You’re not just paying for the hardware, you’re also getting the software, documentation, schematics, and PCB design files. If those things are important to you, I’d say it’s more than worth the premium price.

So far, it looks like plenty of people feel the same way. As of this writing, the Inkplate MOTION 6 is about to hit 250% of its funding goal on Crowd Supply, with more than 30 days left in the campaign.

Mining and Refining: Fracking

5 June 2024 at 14:33

Normally on “Mining and Refining,” we concentrate on the actual material that’s mined and refined. We’ve covered everything from copper to tungsten, with side trips to more unusual materials like sulfur and helium. The idea is to shine a spotlight on the geology and chemistry of the material while concentrating on the different technologies needed to exploit often very rare or low-concentration deposits and bring them to market.

This time, though, we’re going to take a look at not a specific resource, but a technique: fracking. Hydraulic fracturing is very much in the news lately for its potential environmental impact, both in terms of its immediate effects on groundwater quality and for its perpetuation of our dependence on fossil fuels. Understanding what fracking is and how it works is key to being able to assess the risks and benefits of its use. There’s also the fact that like many engineering processes carried out on a massive scale, there are a lot of interesting things going on with fracking that are worth exploring in their own right.

Fossil Mud

Although hydraulic fracturing has been used since at least the 1940s to stimulate production in oil and gas wells and is used in all kinds of wells drilled into multiple rock types, fracking is most strongly associated these days with the development of oil and natural gas deposits in shale. Shale is a sedimentary rock formed from ancient muds made from fine grains of clay and silt. These are some of the finest-grained materials possible, with grains ranging from 62 microns in diameter down to less than a micron. Grains that fine only settle out of suspension very slowly, and tend to do so only where there are no currents.

Shale outcropping in a road cut in Kentucky. The well-defined layers were formed in still waters, where clay and silt particles slowly accumulated. The dark color means a lot of organic material from algae and plankton mixed in. Source: James St. John, CC BY 2.0, via Wikimedia Commons

The breakup of Pangea during the Cretaceous period produced many of the economically important shale formations in today’s eastern United States, like the Marcellus formation that stretches from New York state into Ohio and down almost to Tennessee. The warm, calm waters of the newly forming Atlantic Ocean formed the perfect place for clay- and silt-laden runoff to accumulate and settle, eventually forming the shale.

Shale is often associated with oil and natural gas because the conditions that favor its formation also favor hydrocarbon creation. The warm, still Cretaceous waters were perfect for phytoplankton and algal growth, and when those organisms died they rained down along with the silt and clay grains to the low-oxygen environment at the bottom. Layer upon layer built up slowly over the millennia, but instead of decomposing as they would have in an oxygen-rich environment, the reducing conditions slowly transformed the biomass into kerogen, or solid deposits of hydrocarbons. With the addition of heat and pressure, the hydrocarbons in kerogen were cooked into oil and natural gas.

In some cases, the tight grain structure of shale acts as an impermeable barrier to keep oil and gas generated in lower layers from floating up, forming underground deposits of liquid and gas. In other cases, kerogens are transformed into oil or natural gas right within the shale, trapped within its pores. Under enough pressure, gas can even dissolve right into the shale matrix itself, to be released only when the pressure in the rock is relieved.

Horizontal Boring

While getting at these sequestered oil and gas deposits requires more than just drilling a hole in the ground, fracking starts with exactly that. Traditional well-drilling techniques, where a rotary table rig using lengths of drill pipe spins a drill bit into rock layers underground while pumping a slurry called drilling mud down the bore to cool and lubricate the bit, are used to start the well. The initial bore proceeds straight down until it passes through the lowest aquifer in the region, at which point the entire bore is lined with a steel pipe casing. The casing is filled with cementitious grout that’s forced out of the bottom of the casing by a plug inserted at the surface and pressed down by the drilling rig. This squeezes the grout between the outside of the casing and the borehole and back up to the surface, sealing it off from the water-bearing layers it passes through and serving as a foundation for equipment that will eventually be added to the wellhead, such as blow-out preventers.

Once the well is sealed off, vertical boring continues until the kickoff point, where the bore transitions from vertical to horizontal. Because the target shale seam is relatively thin — often only 50 to 300 feet (15 to 100 meters) thick — drilling a vertical bore through it would only expose a small amount of surface area. Fracking is all about increasing surface area and connecting as many pores in the shale to the bore; drilling horizontally within the shale seam makes that possible. Geologists and mining engineers determine the kickoff point based on seismic surveys and drilling logs from other wells in the area and calculate the radius needed to put the bore in the middle of the seam. Given that the drill string can only turn by a few degrees at most, the radius tends to be huge — often hundreds of meters.
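That huge radius falls straight out of the geometry: a build rate of b degrees per 100 feet closes a full 360° circle in 36,000/b feet of bore, giving a turn radius of 36,000/(2πb) ≈ 5,730/b feet. A quick sketch, with the 3°/100 ft build rate as an illustrative value rather than a figure from any specific well plan:

```python
import math

def turn_radius_m(build_rate_deg_per_100ft):
    """Turn radius implied by a given build rate.
    A full circle (360 deg) takes (360 / rate) * 100 ft of bore."""
    circumference_ft = 360.0 / build_rate_deg_per_100ft * 100.0
    radius_ft = circumference_ft / (2 * math.pi)
    return radius_ft * 0.3048  # feet to meters

print(round(turn_radius_m(3)))  # 582 m -- "hundreds of meters" indeed
```

Doubling the build rate halves the radius, which is why the gentle bends a drill string can tolerate translate into such long sweeping curves underground.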

Directional drilling has been used since the 1920s, often to steal oil from other claims, and so many techniques have been developed for changing the direction of a drill string deep underground. One of the most common methods used in fracking wells is the mud motor. Powered by drilling mud pumped down the drill pipe and forced between a helical stator and rotor, the mud motor can spin the drill bit at 60 to 100 RPM. When boring a traditional vertical well, the mud motor can be used in addition to spinning the entire drill string, to achieve a higher rate of penetration. The mud motor can also power the bit with the drill string locked in place, and by adding angled spacers between the mud motor and the drill string, the bit can begin drilling at a shallow angle, generally just a few degrees off vertical. The drill string is flexible enough to bend and follow the mud motor on its path to intersect the shale seam. The azimuth of the bore can be changed, too, by rotating the drill string so the bit heads off in a slightly different direction. Some tools allow the bend in the motor to be changed without pulling the entire drill string up, which represents significant savings.

Determining where the drill bit is under miles of rock is the job of downhole tools like the measurement while drilling (MWD) tool. These battery-powered tools vary in what they can measure, but typically include temperature and pressure sensors and inertial measurement units (IMUs) to determine the angle of the bit. Some MWD tools also include magnetometers for orientation to Earth’s magnetic field. Transmitting data back to the surface from the MWD can be a problem, and while more use is being made of electrical and fiber optic connections these days, many MWDs use the drilling mud itself as a physical transport medium. Mud telemetry uses pressure waves set up in the column of drilling mud to send data back up to pressure transducers on the surface. Data rates are low – 40 bps at best, dropping off sharply with increasing distance. Mud telemetry is also hampered by any gas dissolved in the drilling mud, which strongly attenuates the signal.
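To put 40 bps in perspective, here’s what pushing even a small survey frame up the mud column costs in time – the 20-byte frame size is an assumed, illustrative figure, not from any particular MWD tool:

```python
def transfer_time_s(payload_bytes, bps=40):
    """Seconds to push a payload up the mud column at a given bit rate."""
    return payload_bytes * 8 / bps

# An assumed ~20-byte survey frame (inclination, azimuth, temperature...):
print(transfer_time_s(20))  # 4.0 seconds
```

And that’s the best case; halve the bit rate for a deep well and a single survey point starts taking the better part of ten seconds.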

Let The Fracking Begin

Once the horizontal borehole is placed in the shale seam, a steel casing is placed in the bore and grouted with cement. At this point, the bore is completely isolated from the surrounding rock and needs to be perforated. This is accomplished with a perforating gun, a length of pipe studded with small shaped charges. The perforating gun is prepared on the surface by pyrotechnicians who place the charges into the gun and connect them together with detonating cord. The gun is lowered into the bore and placed at the very end of the horizontal section, called the toe. When the charges are detonated, they form highly energetic jets of fluidized metal that lance through the casing and grout and into the surrounding shale. Penetration depth and width depend on the specific shaped charge used but can extend up to half a meter into the surrounding rock.

Perforation can also be accomplished non-explosively, using a tool that directs jets of high-pressure abrasive-charged fluid through ports in its sides. It’s not too far removed from water jet cutting, and can cut right through the steel and cement casing and penetrate well into the surrounding shale. The advantage to this type of perforation is that it can be built into a single multipurpose tool, saving separate trips downhole for each operation.

Once the bore has been perforated, fracturing can occur. The principle is simple: an incompressible fluid is pumped into the borehole under great pressure. The fluid leaves the borehole and enters the perforations, cracking the rock and enlarging the original perforations. The cracks can extend many meters from the original borehole into the rock, exposing vastly more surface area of the rock to the borehole.

Fracking is more than making cracks. The network of cracks produced by fracking physically connects kerogen deposits within the shale to the borehole. But getting the methane (black in inset) free from the kerogen (yellow) is a complicated balance of hydrophobic and hydrophilic interactions between the shale, the kerogen, and the fracturing fluid. Source: Thomas Lee, Lydéric Bocquet, Benoit Coasne, CC BY 4.0, via Wikimedia Commons

The pressure needed to hydraulically fracture solid rock perhaps a mile or more below the surface can be tremendous — up to 15,000 pounds per square inch (100 MPa). In addition to the high pressure, the fracking fluid must be pumped at extremely high volumes, up to 10 cu ft/s (about 283 L/s). The overall volume of material needed is impressive, too — a 6″ borehole that’s 10,000 feet long would take almost 15,000 gallons of fluid to fill alone. Add in the volume of fluid needed to fill the fractures and that could easily exceed 5 million gallons.
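
That fill-volume figure is easy to check with only the numbers quoted above:

```python
import math

# Fill volume of the bore using the figures from the text:
# a 6-inch diameter bore, 10,000 feet long.
GALLONS_PER_CUBIC_FOOT = 7.48052

diameter_ft = 6 / 12
length_ft = 10_000
volume_ft3 = math.pi * (diameter_ft / 2) ** 2 * length_ft
volume_gal = volume_ft3 * GALLONS_PER_CUBIC_FOOT

# Comes out just under 15,000 gallons, before a drop enters the fractures.
print(f"{volume_gal:,.0f} gallons to fill the bore")
```

Everything beyond that — millions of gallons — goes into propping open the fracture network itself.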

Fracking fluid is a slurry made mostly from water and sand. The sand serves as a proppant, which keeps the tiny microfractures from collapsing after fracking pressure is released. Fracking fluid also contains a fraction of a percent of various chemical additives, mostly to form a gel that effectively transfers the hydraulic force while keeping the proppant suspended. Guar gum, a water-soluble polysaccharide extracted from guar beans, is often used to create the gel. Fracking gels are sometimes broken down after a while to clear the fractures and allow freer flow; a combination of acids and enzymes is usually used for this job.

Once fracturing is complete, the fracking fluid is removed from the borehole. It’s impossible to recover all the fluid; sometimes as much as 50% is recovered, but often as little as 5% can be pumped back to the surface. Once a section of the borehole has been fractured, it’s sealed off from the rest of the well by an isolating plug placed upstream of the freshly fracked section. The entire process — perforating, fracking, recovery, isolation — is repeated up the borehole until the entire horizontal bore is fracked. The isolating plugs are then bored out, and the well can begin production.

Programming Ada: Records and Containers for Organized Code

By: Maya Posch
4 June 2024 at 14:00

Writing code without having some way to easily organize sets of variables or data would be a real bother. Even if in the end you could totally do all of the shuffling of bits and allocating in memory by yourself, it’s much easier when the programming language abstracts all of that housekeeping away. In Ada you generally use a few standard types, ranging from records (equivalent to structs in C) to a series of containers like vectors and maps. As with any language, there are some subtle details about how all of these work, which is where the usage of these types in the Sarge project will act as an illustrative example.

In this project’s Ada code, a record is used for information about command line arguments (flag names, values, etc.) with these argument records stored in a vector. In addition, a map is created that links the names of these arguments, using a string as the key, to the index of the corresponding record in the vector. Finally, a second vector is used to store any text fragments that follow the list of arguments provided on the command line. This then provides a number of ways to access the record information, either sequentially in the arguments vector, or by argument (flag) name via the map.

Introducing Generics

Not unlike the containers in C++’s Standard Template Library (STL), Ada’s standard containers are generics, meaning that they cannot be used directly. Instead we have to instantiate a new package from the container generic, limited to the types which we intend to use with it. For a start, let’s take a look at how to create a vector:

with Ada.Containers.Vectors;
use Ada.Containers;
package arg_vector is new Vectors(Natural, Argument);

The standard containers are part of the Ada.Containers package, which we include here before instantiating the desired arguments vector, which is indexed using natural numbers (all non-negative integers, i.e. including zero) and uses the Argument type as its value. This latter type is the custom record, which is defined as follows:

type Argument is record
    arg_short: aliased Unbounded_String;
    arg_long: aliased Unbounded_String;
    description: aliased Unbounded_String;
    hasValue: aliased boolean := False;
    value: aliased Unbounded_String;
    parsed: aliased boolean := False;
end record;

Here the aliased keyword means that the variable will have a memory address, rather than potentially living only in a register, so that its access (‘pointer’, in C parlance) can be taken. It is roughly the inverse of the old C and C++ register keyword, which hinted that a variable did not need an addressable memory location.

Moving on, we can now create the two vectors and the one map, starting with the arguments vector using the earlier defined arg_vector package:

args : arg_vector.vector;

The text arguments vector is created effectively the same way, just with an unbounded string as its value:

package tArgVector is new Vectors(Natural, Unbounded_String);
textArguments: tArgVector.vector;

Finally, the map container is created in a similar fashion. Note that for this we are using the Ada.Containers.Indefinite_Ordered_Maps package. Ordered maps contrast with hashed maps in that they do not require a hash function, instead using the < operator (either the type’s own or a custom one). These maps provide O(log N) look-up, faster than the O(N) linear search of a vector, which is why the map is used as an index into the vector here.

package argNames_map is new Indefinite_Ordered_Maps(Unbounded_String, Natural);
argNames: argNames_map.map;

With these packages and instances defined and instantiated, we are now ready to fill them with data.

Cross Mapping

When we define a new argument to look for when parsing command line arguments, we have to perform three operations: create a new Argument record instance and assign the relevant information to its members, append this record to the args vector, and finally link the flag names to the record’s index in the map. The record is provided with data via the setArgument procedure:

procedure setArgument(arg_short: in Unbounded_String; arg_long: in Unbounded_String; 
                            desc: in Unbounded_String; hasVal: in boolean);

This allows us to create the Argument instance in the declarative part (before begin in the procedure block) as follows:

arg: aliased Argument := (arg_short => arg_short, arg_long => arg_long, 
                          description => desc, hasValue => hasVal, 
                          value => +"", parsed => False);

This Argument record can then be added to the args vector:

args.append(arg);

Next we have to set up links between the flag names (short and long version) in the map to the relevant index in the argument vector:

argNames.include(arg_short, args.Last_Index);
argNames.include(arg_long, args.Last_Index);

This sets the key for the map entry to the short or long version of the flag, and takes the last added (highest) index of the arguments vector for the value. We’re now ready to find and update records.

Search And Insert

Using the contraption which we just set up is fairly straightforward. If we want to check, for example, whether an argument flag has been defined, we can use the arguments vector and the map as follows:

flag_it: argNames_map.Cursor;
flag_it := argNames.find(arg_flag);
if flag_it = argNames_map.No_Element then
    return False;
elsif args(argNames_map.Element(flag_it)).parsed /= True then
    return False;
end if;

This same method can be used to find a specific record to update the freshly parsed value that we expect to trail certain flags:

flag_it: argNames_map.Cursor;
flag_it := argNames.find(arg_flag);
args.Reference(argNames_map.Element(flag_it)).value := arg;

Using the Reference function on the args vector gets us a reference to the element which we can then update, unlike the package’s Element function, which returns a copy. The requisite index into the arguments vector is obtained by looking up the flag name in the map, just as before.

We can now easily check that a particular flag has been found by looking up its record in the vector and return the found value, as defined in the getFlag function in the sarge.adb file of Sarge:

function getFlag(arg_flag: in Unbounded_String; arg_value: out Unbounded_String) return boolean is
flag_it: argNames_map.Cursor;
use argNames_map;
begin
    if parsed /= True then
        return False;
    end if;

    flag_it := argNames.find(arg_flag);
    if flag_it = argNames_map.No_Element then
         return False;
    elsif args(argNames_map.Element(flag_it)).parsed /= True then
        return False;
    end if;

    if args(argNames_map.Element(flag_it)).hasValue = True then
        arg_value := args(argNames_map.Element(flag_it)).value;
    end if;

    return True;
end getFlag;

Other Containers

There are of course many more container types defined in Ada’s predefined language environment than the two covered here. For instance, sets are effectively like vectors, except that each element must be unique within the container. The Ada 2005 standard defined only the first set of containers, which was massively extended in the Ada 2012 standard (which we focus on here) with trees, queues, linked lists and so on. We’ll cover some of these in more detail in upcoming articles.

Together with the packages, functions and procedures covered earlier in this series, records and containers form the basics of organizing code in Ada. Naturally, Ada also supports more advanced types of modularization and reusability, such as object-oriented programming, which will also be covered in upcoming articles.

A Treasure Trove In An English Field

By: Jenny List
3 June 2024 at 14:00

This is being written in a tent in a field in Herefordshire, one of the English counties that borders Wales. It’s the site of Electromagnetic Field, this year’s large European hacker camp, and outside my tent the sky is lit by a laser light show to the sound of electronic music. I’m home.

One of the many fun parts of EMF is its swap table. A gazebo to which you can bring your junk, and from which you can take away other people’s junk. It’s an irresistible destination which turns a casual walk into half an hour pawing through the mess in search of treasure, and along the way it provides an interesting insight into technological progress. What is considered junk in 2024?

Something for everyone

As always, the items on offer range from universal treasures of the I-can’t-believe-they-put-that-there variety, through this-is-treasure-to-someone-I’m-sure items, to absolute junk. Some things pass around the camp like legends; I wasn’t there when someone dropped off a box of LED panels, for example, but I’ve heard the story relayed in hushed tones several times since, and have even seen some of the precious haul. A friend snagged a still-current AMD processor and some Noctua server fans, as another example, and I’m told that, amazingly, someone deposited a PlayStation 5. But these are the exceptions; in most cases the junk is either very specific to something, or much more mundane. I saw someone snag an audio effects unit that may or may not work, and there are PC expansion cards and outdated memory modules aplenty.

Finally, there is the absolute junk, which some might even call e-waste but I’ll be a little more charitable about. Mains cables, VGA cables, and outdated computer books. Need to learn about some 1990s web technology? We’ve got you covered.

Perhaps most fascinating is what the junk tells us about the march of technology. There are bins full of VoIP telephones, symptomatic of the move to mobile devices even in the office. As an aside I saw a hackerspace member in his twenties using a phone hooked up to the camp’s copper phone network walk away with the handset clamped to his ear and yank the device off the table; it’s obvious that wired handsets are a thing of the past when adults no longer know how to use them. And someone dropped off an entire digital video distribution system probably from a hotel or similar, a huge box of satellite TV receivers and some very specialised rack modules with 2008 date codes on the chips. We don’t watch linear TV any more, hotel customers want streaming.

Amid all this treasure, what did I walk away with? As I have grown older I have restricted my urge to acquire, so I’m very wary at these places. Even so, there were a few things that caught my eye, a pair of Sennheiser headphones with a damaged cord, a small set of computer speakers — mainly because we don’t have anything in our village on which to play music — and because I couldn’t quite resist it, a microcassette recorder. As each new box arrives the hardware hackers swarm over it like flies though, so who knows what treasures I’ll be tempted by over the rest of the camp.

How Facebook Killed Online Chat

By: Lewin Day
29 May 2024 at 14:00

In the early days of the internet, online conversations were an event. The technology was novel, and it was suddenly possible to socialize with a whole bunch of friends at a distance, all at once. No more calling your friends one by one, you could talk to them all at the same time!

Many of us would spend hours on IRC, or pull all-nighters bantering on MSN Messenger or AIM. But then, something happened, and many of us found ourselves having shorter conversations online, if we were having any at all. Thinking back to my younger days, and comparing them with today, I think I’ve figured out what it is that’s changed.

Deliberate Choices

Having the right nick, profile image, and personal message was a big part of looking cool on MSN Messenger. You needed something that would make you seem interesting, hip, and worth talking to. Song lyrics were common. Credit: Screenshot, MSN Messenger history

Twenty five years ago, a lot more of us were stuck getting by with dialup. The Internet wasn’t always on back then. You had to make the decision to connect to it, and sit at your computer to use it.

Similarly, logging into an IRC room was a deliberate action. It was a sign that you were setting aside time to communicate. If you were in a chat room, you were by and large there to talk. On AIM or MSN Messenger, it was much the same deal. If you wanted to have a chat, you’d leave your status on available. If you didn’t wanna talk, you’d set yourself to Busy or Away, or log off entirely.

This intentionality fostered meaningful interactions online. Back then, you’d sign in and you’d flick through your list of friends. If someone’s icon was glowing green, you knew they were probably up to talk. You might have a quick chat, or you could talk for hours. Indeed, logging on to a chatroom for an extended session was a pastime enjoyed by many.

If you were on Linux, or used multiple chat services, you might have experimented with multi-chat clients like Pidgin back in the day. Credit: Uberushaximus, GPL

Back then, people were making the conscious decision to set aside time to talk. Conversations were more focused and meaningful because both parties had set aside time to engage. This intentionality led to richer, more engaging discussions because participants were fully present.

Furthermore, the need to log in and out helped create a healthy boundary between life online and off. Users balanced their online interactions with other responsibilities and activities. There was a clear distinction between online and offline life, allowing for more complete engagement in both. When you logged off, that was it. There was no way for your online friends to get a message to you in real time, so your focus was fully on what was going on in front of you.

Critical Shift

T’was the endless march of technology that changed the meta. Broadband internet would keep our computers online round the clock. You could still log in and out of your chat apps, of course, and when you walked away from your computer, you were offline.

But technology didn’t stop there. Facebook came along, and tacked on Messenger in turn. The app would live on the smartphones in our pockets, while mobile data connections meant a message from the Internet could come through at any time.

If your buddies were green, you could hit ’em up for a chat! Facebook kind of has us all defaulting to available at all times, though, and it throws everything off. Credit: Pidgin.IM

Facebook’s always-on messaging was right there, tied to a website many of us were already using on the regular. Suddenly, booting up another app like AIM or MSN seemed archaic when we could just chat in the browser. The addition of the app to smartphones put Messenger everywhere we went. For many, it even started to supplant SMS, in addition to making other online chat platforms obsolete.

Always-on messaging seemed convenient, but it came with a curse. It’s fundamentally changed the dynamics of our online interactions, and not always for the better.

Perpetual availability means that there is a constant pressure to respond. In the beginning, Facebook implemented “busy” and “available” status messages, but they’re not really a thing anymore. Now, when you go to message a friend, you’re kind of left in the dark as to what they’re doing and how they’re feeling. Maybe they’re chilling at home, and they’re down for a deep-and-meaningful conversation. Or maybe they’re working late at work, and they don’t really want to be bothered right now. Back in the day, you could seamlessly infer their willingness to chat simply by noting whether they were logged in or not. Today, you can’t really know without asking.

That has created a kind of silent pressure against having longer conversations on Facebook Messenger. I’m often reluctant to start a big conversation with someone on the platform, because I don’t know if they’re ready for it right now. Even when someone contacts me, I find myself trying to close out conversations quickly, even positive ones. I’m inherently assuming that they probably just intended to send me a quick message, and that they’ve got other things to do. The platform provides no explicit social signal that they’re happy to have a proper conversation. Instead, it’s almost implied that they might be messaging me while doing something else more important, because hey, Messenger’s on all the time. Nobody sits down to chat on Facebook Messenger these days.

Do any of these people want to chat? I can’t tell, because they’re always online!

It’s also ruining the peace. If you’ve got Messenger installed, notifications pop up incessantly, disrupting focus and productivity. Conversations that might have once been deep and meaningful are now often fragmented and shallow because half the time, someone’s starting them when you’re in the middle of something else. If you weren’t “logged on” or “available”, they’d wait until you were ready for a proper chat. But they can’t know that on Facebook Messenger, so they just have to send a message and hope.

In a more romantic sense, Facebook Messenger has also killed some of the magic. The ease of starting a conversation at any moment diminishes the anticipation that once accompanied online interactions. Plenty of older Internet users (myself included) will remember the excitement when a new friend or crush popped up online. You could freely leap into a conversation because just by logging on, they were saying “hey, wanna talk?” It was the equivalent social signal of seeing them walk into your local pub and waving hello. They’re here, and they want to socialize!

It’s true that we effectively had always-on messaging before Facebook brought it to a wider audience. You could text message your friends, and they’d get it right away. But this was fine, and in fact, it acted as a complement to online messaging. SMSs used to at least cost a little money, and it was generally time consuming to type them out on a limited phone keypad. They were fine if you needed to send a short message, and that was about it. Meanwhile, online messaging was better for longer, intentional conversations. You could still buzz people at an instant when you needed to, but SMS didn’t get in the way of proper online chats like Facebook Messenger would.

The problem is, it seems like we can’t really go back. As with so many technologies, we can try and blame the creators, but it’s not entirely fair. Messenger changed how we used online chat, but Facebook didn’t force us to do anything. Many of us naturally flocked to the platform, abandoning others like AIM and MSN in short order. We found it more convenient in the short term, even if some of us have found it less satisfying in the long term.

Online platforms tend to figure out what we respond to on a base psychological level, and game that for every last drop of interaction and attention they can. They do this to sell ads and make money, and that’s all that really matters at the end of the day. Facebook’s one of the best at it. It’s not just online chat, either. Forums went the same way, and it won’t end there.

Ultimately, for a lot of us, our days of spending hours having great conversations online are behind us. It’s hard to see what could ever get the broader population to engage again in that way. Instead, it seems that our society has moved on, for the worse or for the better. For me, that’s a shame!

Measure Three Times, Design Once

16 May 2024 at 14:00
A thickness gauge, letter scale, push stick, and dial caliper

Most of the Hackaday community would never wire a power supply to a circuit without knowing the expected voltage and the required current. But our mechanical design is often more bodged. We meet folks who carefully budget power to their microcontroller, sensors, and so on, but never measure the forces involved in their mechanical designs. Then they’re surprised when the motor they chose isn’t big enough for the weight of their robot.

An obstacle to being more numbers oriented is lack of basic data about the system. So, here are some simple tools for measuring dynamic properties of small mechanisms; distances, forces, velocities, accelerations, torques, and other things you haven’t thought about since college physics. If you don’t have these in your toolkit, how do you measure?

Distance

For longer distances the usual homeowner’s tools work fine. The mechatronics tinkerer benefits from two tools on the small end. A dial or electronic caliper for measuring small things, and a thickness gauge (or leaf gauge) for measuring small slots.

Head of a dial caliper: a steel clamp-like measuring tool with a watch dial. Read millimeters off the stem and hundredths off the dial. Thickness gauge: finger-sized metal leaves.

A thickness gauge is just metal leaves in different thicknesses, bolted together at one end. Find a combination of leaves that just fits in the space.

Force

Here are four force-measuring tools we use to cover different magnitudes of force: a postage scale, a push stick, a spring scale, and a letter scale. The postage scale is best purchased. For big things, the bathroom scale works.

A push stick is a force measurement device that you can make yourself. We first saw one of these used to tune slot cars, but they’re universally useful. It’s a simple pen shaped device made with a barrel from any small transparent parts tube, a spring, and a plunger with a protruding pin. Grasp the barrel and push the gizmo with the pin, and you can read the force off the tube.

If you need it to be calibrated, remember that you just bought a postage scale. Push it into the scale and mark off reasonable increments. Make several, in different sizes. A Z or L shaped plunger is useful for hard to reach places.

Square of MDF with two button-head cap screws holding a thin steel wire. Hand-drawn scale on the MDF. The wire has a hook to hang items on, and deflects.

The conventional spring tension scale is useful, but most commercial ones are terribly made and inaccurate. You can make yourself a better one. They are useful for measuring the spring constant of springs, for learning the tractive effort needed to move a robot, finding the center of gravity of a robot arm, and a hundred other ‘how much oomph’ things. Again, it’s just a matter of connecting a hook to a spring, and measuring its deflection.

For yet lighter weights, you could buy a letter scale, at least in the old days. Today you might have to make your own.  It can be as simple as a piece of spring steel fixed to a sheet of calibrated cardboard.

Torque

Torque measurements are good not only for sizing actuators, but for measuring efficiency.

How you do torque measurements depends on the speed at which you want to make them. For static loads, just put a lever of known length on the shaft and measure the force: torque = distance * force. For fast-rotating systems, you can run the system at a known speed and measure the electrical power used.

Schematic of a Prony brake.
Schematic of a Prony brake by [MatthiasDD]
If you just want to apply a varying known torque to measure efficiency, your life is much easier. Mount a broad wheel of some sort on the shaft — RC airplane tires work well. Drape a piece of ribbon over the tire. Anchor it at the “out” end and hang a small weight at the “in” end. This is a Prony Brake, and it’s a useful device to know about. The force on the outside of the wheel is just enough to lift the weight – after that the ribbon slips. The measured torque is then the weight times the wheel radius.
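
The Prony brake arithmetic is a one-liner. In this sketch the 50 g weight and 30 mm tire radius are made-up example values:

```python
# Prony brake torque: the ribbon slips once the friction at the wheel rim
# just lifts the hanging weight, so torque = weight force * wheel radius.
G = 9.81                 # m/s^2

mass_kg = 0.050          # hanging weight (invented example)
wheel_radius_m = 0.030   # RC airplane tire (invented example)

torque_nm = mass_kg * G * wheel_radius_m
print(f"braking torque = {torque_nm * 1000:.1f} mN·m")
# → braking torque = 14.7 mN·m
```

Swap in different weights to sweep the load and plot efficiency against torque.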

You may also want to measure speeds and accelerations. Here, the ubiquity of cell phone cameras is your friend. Suppose you’re animating a crane on your model railroad. Record yourself on video moving the crane with your hands against a protractor to get a feel for speed and acceleration. In video editing software check the positions for various frames, and you now have position changes. The number of frames and distance can help you calculate the speed, and the change in speed vs time is acceleration.
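
The frame-counting arithmetic is simple finite differences. This sketch assumes a 30 fps clip; the protractor readings are invented for illustration:

```python
# Speed and acceleration from video frames by finite differences.
FPS = 30

frames = [0, 15, 30, 45]               # frame numbers where readings were taken
angles_deg = [0.0, 10.0, 25.0, 45.0]   # protractor readings (invented data)

times = [f / FPS for f in frames]
# velocity between consecutive samples (deg/s)
velocities = [(a1 - a0) / (t1 - t0)
              for a0, a1, t0, t1 in zip(angles_deg, angles_deg[1:], times, times[1:])]
# acceleration between consecutive velocities (deg/s^2)
accels = [(v1 - v0) / (t1 - t0)
          for v0, v1, t0, t1 in zip(velocities, velocities[1:], times, times[1:])]

print(velocities)   # [20.0, 30.0, 40.0] deg/s
print(accels)       # [20.0, 20.0] deg/s^2
```

A spreadsheet works just as well; the point is to turn frame numbers and positions into numbers you can size an actuator with.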

If your mechanism is moving too fast for video, use a fast phototransistor or hall effect device and an oscilloscope, or gear down by holding a toy wheel against the shaft and measure the more slowly rotating wheel.

In the crane example, the torque you need to supply is the frictional torque plus the acceleration torque, and to calculate the acceleration torque you need the moment of inertia. As a refresher: angular acceleration = torque / moment of inertia (α = τ / I), and moment of inertia = mass * radius² (I = m * r²) for a point mass.

You can drive the crane with a repeatable torque, say using a pulley and weight or a motor, and get the acceleration α1 from the still frames of your video. If you repeat this with a known mass m at a known distance r from the shaft axis, like a lump of putty on the end of the crane arm, you get a second value: α2.

Write out the α = τ / I equations: α1 = τ / Icrane and α2 = τ / (Icrane + m * r²). Combining them and isolating Icrane while holding our tongues just right, Icrane = m * r² / (α1 / α2 − 1).
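
Plugging hypothetical numbers into that two-measurement trick shows how little data it takes. All values below are invented example measurements, not real data:

```python
# Solve I_crane = m * r^2 / (a1/a2 - 1) from two measured angular accelerations.
m = 0.010    # kg: lump of putty added to the end of the arm (invented)
r = 0.20     # m: distance of the putty from the shaft axis (invented)
a1 = 2.0     # rad/s^2: acceleration of the bare crane (invented)
a2 = 1.6     # rad/s^2: acceleration with the putty added (invented)

I_crane = m * r**2 / (a1 / a2 - 1)
print(f"I_crane = {I_crane:.4f} kg*m^2")   # → I_crane = 0.0016 kg*m^2
```

Sanity check: the same torque divided by either inertia reproduces the two measured accelerations, so the numbers hang together.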

Be careful to subtract the moment of inertia of your measuring apparatus, and add in the moment of inertia of the final drive if needed. Now you can size your servo with some confidence. Believe me, once you’ve done this a couple times, you’ll never go back to winging it.

Power

The easiest way to get a ballpark feel for power is to simply measure the system’s consumed power by measuring the electrical power at the motor, but this ignores losses in the drive train. And losses are one of the really interesting things to measure. Bad performance is usually friction, and efficiency is a goal for other reasons than just motor sizing or battery life. It’s a measure of how janky your setup is.

Does your model train or robot run poorly? Set it to climb a steep grade on a test track. Calculate the work it does: mass * g * height change. Measure the input electrical power and the time: energy = V * I * t. You now have an idea of how much the actual power consumption differs from the maximally efficient system. Any power that went in but didn’t appear as potential energy in the choo-choo’s new position is frictional loss. Now you can experiment with loosening and tightening screws, changing gear mesh, and such, and have some idea if you’re making things better or worse.
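
As a sketch of that bookkeeping, with made-up numbers for the vehicle, the grade, and the meter readings:

```python
# Drive-train efficiency from a hill-climb test: useful work out is the
# potential energy gained; energy in is V * I * t. All values are hypothetical.
G = 9.81           # m/s^2

mass_kg = 1.2      # vehicle mass (invented)
height_m = 0.5     # vertical rise of the test grade (invented)
volts = 7.4        # supply voltage during the climb (invented)
amps = 0.9         # average current draw (invented)
seconds = 5.0      # time to complete the climb (invented)

work_out_j = mass_kg * G * height_m
energy_in_j = volts * amps * seconds
efficiency = work_out_j / energy_in_j

print(f"{work_out_j:.2f} J out of {energy_in_j:.1f} J in, "
      f"{efficiency:.1%} efficient")
```

Everything short of 100% is friction and electrical loss, which is exactly the number you want to watch while tweaking gear mesh and screws.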

Conclusion

None of the above was rocket science, and you don’t need to do some complex FEM analysis to make the average hacker project. But a bit of real engineering can go a long way towards more reliable mechanisms, and that starts with knowing the numbers you’re dealing with. Taking the required measurements can be simple if you know how to build the tools you need,  and your life will be easier with some numbers to guide you.

A Slice of Simulation, Google Sheets Style

15 May 2024 at 14:00

Have you ever tried to eat one jelly bean or one potato chip? It is nearly impossible. Some of us have the same problem with hardware projects. It all started when I wrote about the old bitslice chips people used to build computers before you could easily get a whole CPU on a chip. Bitslice is basically Lego blocks that build CPUs. I have always wanted to play with this technology, so when I wrote that piece, I looked on eBay to see if I could find any leftovers from this 1970s-era tech. It turns out that the chips are easy to find, but I found something even better: a mint condition AM2900 evaluation board. These aren’t easy to find, so the chances that you can try one out yourself are pretty low. But I’m going to fix that, virtually speaking.

This was just the second potato chip. Programming the board, as you can see in the video below, is tedious, with lots of binary switch-flipping. To simplify things, I took another potato chip — a Google Sheet that generates the binary from a quasi-assembly language. That should have been enough, but I had to take another chip from the bag. I extended the spreadsheet to actually emulate the system. It is a terrible hack, and Google Sheets’ performance for this sort of thing could be better. But it works.

If you missed it, I left many notes on Hackaday.io about the project. In particular, I created a microcode program that takes two four-bit binary-coded decimal digits and computes the proper 8-bit number. It isn’t much, but the board only has 16 microcode instructions, so you must temper your expectations. If you want an overview of the entire technology, we’ve done that, too.

Starting Point

Block diagram of the board being simulated

The idea for the simulator struck me when I was building the assembler. I considered writing a stand-alone program, but I wanted to limit my potato chip consumption, so keeping it in the spreadsheet seemed like a good idea.

Was it? Hard to say. Google Sheets has macros that are just JavaScript. However, the macros are somewhat slow, and attaching them to user interface elements is difficult. There were plenty of ways to do it, but I went for the path of least resistance.

Strategy

For better or worse, I tried to minimize the amount of scripting. All of the real work occurs on the Sim tab of the spreadsheet, and only a few key parts are included in the attached macros. Instead, the cells take advantage of the way the AM2900 works. For example, some bits in the microcode instructions represent how to find the next address. Instead of calculating this with code, there is a table that computes the address for each possible branch.

For example, branch type zero goes to the next address when the current result is zero or the address coded in the instruction if the result is not zero. If the branch type is one, there is always a jump to the hardcoded address, while a branch type of two always takes the next instruction. So, the associated table always computes all the possible results (see cells O1 through P18). Then, cell P18 uses VLOOKUP to pick the right row from the table (which has 16 rows).

Every part of the instruction decode works this way. The only complication is that the instructions operate on the current result, something mentioned in the last post. In other words, consider an instruction that says (in English): If the result is zero, go to location 9; add 1 to the B register. You might assume the jump will occur if B+1 results in zero. But that’s not how it works. Instead, the processor adds B and 1. Then, it jumps to location 9 if the state was zero before the addition operation.

What this means is that the spreadsheet computes all things at all times. The macros are almost like clock pulses. Instead of gating flip flops, they copy things from where they are calculated to where they are supposed to go.
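The latch-then-copy behavior described above can be sketched in a few lines. This is a toy model with invented field and register names, not the spreadsheet’s actual formulas:

```python
# Simplified AM2900-style step, illustrating that the branch decision
# tests the status latched BEFORE the current operation, not the
# result being computed right now.

def step(state, op):
    """op = (branch_type, target, delta).
    branch_type 0: jump to target if the latched result was NOT zero,
                   else fall through to the next address;
    branch_type 1: always jump to target;
    branch_type 2: always take the next instruction."""
    btype, target, delta = op
    prev_zero = state["zero"]           # status from the PREVIOUS op
    state["B"] = (state["B"] + delta) & 0xFF
    state["zero"] = (state["B"] == 0)   # latch status for the NEXT op
    if btype == 1 or (btype == 0 and not prev_zero):
        state["pc"] = target
    else:
        state["pc"] += 1
    return state

s = {"B": 0xFF, "zero": False, "pc": 3}
step(s, (0, 9, 1))  # B wraps to 0, but the OLD flag decides: jump taken
print(s)
```

Running one more step after this would fall through, because only now is the zero flag latched: exactly the counter-intuitive ordering described above.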

Macros

The main simulation logic is in the stepengine macro. It computes the next address and sets the status latch, if necessary. Then it grabs the result of the current operation and places it in the correct destination. The final part of the macro updates the next location, which may require manipulating the processor stack. All of those things would have been difficult to do in spreadsheet logic.

The other macros are essentially wrappers around stepengine. The Exec macro executes an instruction without advancing (like stepping the real board in load mode). The Step macro can optionally single step, or it can execute in place like Exec. The Run macro does the real execution and also checks for breakpoints. There’s also a Reset macro to put everything in a known state.

Usage

The user interface for the simulator.

You can call the macros directly, but that’s not very user-friendly. Instead, the Sim tab has three graphical buttons for run, step, and reset. Each command has options. For example, under Run, you can set a hex address to break execution. Under Step, you can decide if the step should advance the program counter or not. The reset button allows you to clear registers.

Don’t enter your program on the Sim tab. Use the Main tab as before. You can also go to the Extensions | Macros menu to load one of the canned demos. Demo 1 is the BCD program from the last post. The other examples are the ones that shipped with the real board’s manual. If you really want to learn how the thing works, you could do worse than walk through the manual and try each example. Just don’t forget that the scanned manual has at least two typos: Figure 7 is wrong, and example 7 has an error (that error was fixed in later manuals and in the simulator’s copy, too). Instead of figure 7, use figure 3 or pick up a corrected figure on Hackaday.io.

Why learn how to operate the AM2900 evaluation board? Beats me, but I did it anyway, and now you can do it without spending the money I did on a piece of exotic hardware. I’d like to say that this might help you understand modern CPU design, but that wouldn’t be very fair. I suppose it does help a little, but modern CPUs have as much in common with this design as a steam locomotive has in common with a jet airplane.

If the idea of building a CPU from modules appeals to you, check out DDL-4. If that’s too easy for you, grab your bag of 2,000 transistors and check out this build. I’m sealing up my bag of potato chips now. Really.

You’ve Probably Never Considered Taking an Airship To Orbit

Por: Lewin Day
13 Mayo 2024 at 14:00

There have been all kinds of wild ideas to get spacecraft into orbit. Everything from firing huge cannons to spinning craft at rapid speed has been posited, explored, or in some cases, even tested to some degree. And yet, good ol’ flaming rockets continue to dominate all, because they actually get the job done.

Rockets, fuel, and all their supporting infrastructure remain expensive, so the search for an alternative goes on. One daring idea involves using airships to loft payloads into orbit. What if you could simply float up into space?

Lighter Than Air

NASA regularly launches lighter-than-air balloons to great altitudes, but they’re not orbital craft. Credit: NASA, public domain

The concept sounds compelling from the outset. Through the use of hydrogen or helium as a lifting gas, airships and balloons manage to reach great altitudes while burning zero propellant. What if you could just keep floating higher and higher until you reached orbital space?

This is a huge deal when it comes to reaching orbit. One of the biggest problems of our current space efforts is referred to as the tyranny of the rocket equation. The more cargo you want to launch into space, the more fuel you need. But then that fuel adds more weight, which needs yet more fuel to carry its weight into orbit. To say nothing of the greater structure and supporting material to contain it all.

Carrying even a few extra kilograms of weight to space can require huge amounts of additional fuel. This is why we use staged rockets to reach orbit at present. By shedding large amounts of structural weight at the end of each rocket stage, it’s possible to move the remaining rocket farther with less fuel.
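The arithmetic behind that tyranny is the Tsiolkovsky rocket equation, Δv = Isp · g0 · ln(m0/mf). A quick sketch with illustrative numbers (a generic 300 s engine, not any particular rocket) shows how brutal the mass ratio gets:

```python
import math

G0 = 9.81  # m/s^2, standard gravity

def mass_ratio(delta_v_ms, isp_s):
    """Tsiolkovsky rocket equation, rearranged: the wet/dry mass
    ratio required to achieve a given delta-v."""
    return math.exp(delta_v_ms / (isp_s * G0))

# Reaching low Earth orbit takes roughly 9,400 m/s of delta-v
# (orbital speed plus gravity and drag losses). With a 300 s engine:
r = mass_ratio(9400, 300)
print(f"required wet/dry mass ratio: {r:.1f}")
```

A ratio above 20 means every kilogram delivered to orbit drags more than twenty kilograms of propellant and structure off the pad with it, which is why shedding stages pays off so handsomely.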

If you could get to orbit while using zero fuel, it would be a total gamechanger. It wouldn’t just be cheaper to launch satellites or other cargoes. It would also make missions to the Moon or Mars far easier. Those rockets would no longer have to carry the huge amount of fuel required to escape Earth’s surface and get to orbit. Instead, they could just carry the lower amount of fuel required to go from Earth orbit to their final destination.

The rumored “Chinese spy balloon” incident of 2023 saw a balloon carrying a payload that looked very much like a satellite. It was even solar powered. However, such a craft would never reach orbit, as it had no viable propulsion system to generate the huge delta-V required. Credit: USAF, public domain

Of course, it’s not that simple. Reaching orbit isn’t just about going high above the Earth. If you just go straight up above the Earth’s surface, and then stop, you’ll just fall back down. If you want to orbit, you have to go sideways really, really fast.

Thus, an airship-to-orbit launch system would have to do two things. It would have to haul a payload up high, and then get it up to the speed required for its desired orbit. That’s where it gets hard. The minimum speed to reach a stable orbit around Earth is 7.8 kilometers per second (28,000 km/h or 17,500 mph). Thus, even if you’ve floated up very, very high, you still need a huge rocket or some kind of very efficient ion thruster to push your payload up to that speed. And you still need fuel to generate that massive delta-V (change in velocity).
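The 7.8 km/s figure falls straight out of the circular-orbit condition, v = √(GM/r), which anyone can check:

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, Earth's gravitational parameter GM
R_EARTH = 6.371e6          # m, Earth's mean radius

def circular_orbit_speed(altitude_m):
    """Speed required to maintain a circular orbit at a given altitude."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

# Low Earth orbit at 200 km altitude:
v = circular_orbit_speed(200e3)
print(f"{v / 1000:.2f} km/s")  # about 7.8 km/s
```

Note that altitude barely helps: the radius in the denominator grows so slowly that even floating up 40 km in a balloon leaves essentially all of that sideways speed still to be bought with propellant.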

For this reason, airships aren’t the perfect hack to reaching orbit that you might think. They’re good for floating about, and you can even go very, very high. But if you want to circle the Earth again and again and again, you better bring a bucketload of fuel with you.

Someone’s Working On It

JP Aerospace founder John Powell regularly posts updates to YouTube regarding the airship-to-orbit concept. Credit: John Powell, YouTube

Nevertheless, this concept is being actively worked on, but not by the usual suspects. Don’t look at NASA, JAXA, SpaceX, ESA, or even Roscosmos. Instead, it’s the work of the DIY volunteer space program known as JP Aerospace.

The organization has grand dreams of launching airships into space. Its concept isn’t as simple as just getting into a big balloon and floating up into orbit, though. Instead, it envisions a three-stage system.

The first stage would involve an airship designed to travel from ground level up to 140,000 feet. The company proposes a V-shaped design with an airfoil profile to generate additional lift as it moves through the atmosphere. Propulsion would be via propellers that are specifically designed to operate in the near-vacuum at those altitudes.

Once at that height, the first stage craft would dock with a permanently floating structure called Dark Sky Station. It would serve as a docking station where cargo could be transferred from the first stage craft to the Orbital Ascender, which is the craft designed to carry the payload into orbit.

The Ascender H1 Variant is the company’s latest concept for an airship to carry payloads from an altitude of 140,000ft and into orbit. Credit: John Powell, YouTube screenshot

The Orbital Ascender itself sounds like a fantastical thing on paper. The team’s current concept is for a V-shaped craft with a fabric outer shell which contains many individual plastic cells full of lifting gas. That in itself isn’t so wild, but the proposed size is. It’s slated to measure 1,828 meters on each side of the V — well over a mile long — with an internal volume of over 11 million cubic meters. Thin film solar panels on the craft’s surface are intended to generate 90 MW of power, while a plasma generator on the leading edge is intended to help cut drag. The latter is critical, as the craft will need to reach hypersonic speeds in the ultra-thin atmosphere to get its payload up to orbital speeds. To propel the craft up to orbital velocity, the team has been running test firings on its own designs for plasma thrusters.

Payload would be carried in two cargo bays, each measuring 30 meters square, and 20 meters deep. Credit: John Powell, YouTube Screenshot

The team at JP Aerospace is passionate, but currently lacks the means to execute their plans at full scale. Right now, the team has some experimental low-altitude research craft that are a few hundred feet long. Presently, Dark Sky Station and the Orbital Ascender remain far off dreams.

Realistically, the team hasn’t found a shortcut to orbit just yet. Building a working version of the Orbital Ascender would require lofting huge amounts of material to high altitude where it would have to be constructed. Such a craft would be torn to shreds by a simple breeze in the lower atmosphere. A lighter-than-air craft that could operate at such high altitudes and speeds might not even be practical with modern materials, even if the atmosphere is vanishingly thin above 140,000 feet. There are huge questions around what materials the team would use, and whether the theoretical concepts for plasma drag reduction could be made to work on the monumentally huge craft.

The team has built a number of test craft for lower-altitude operation. Credit: John Powell, Youtube Screenshot

Even if the craft’s basic design could work, there are questions around the practicalities of crewing and maintaining a permanent floating airship station at high altitude. Let alone how payloads would be transferred from one giant balloon craft to another. These issues might be solvable with billions of dollars. Maybe. JP Aerospace is having a go on a budget several orders of magnitude more shoestring than that.

One might imagine a simpler idea could be worth trying first. Lofting conventional rockets to 100,000 feet with balloons would be easier and still cut fuel requirements to some degree. But ultimately, the key challenge of orbit remains. You still need to find a way to get your payload up to a speed of at least 8 kilometers per second, regardless of how high you can get it in the air. That would still require a huge rocket, and a suitably huge balloon to lift it!

For now, orbit remains devastatingly hard to reach, whether you want to go by rocket, airship, or nuclear-powered paddle steamer. Don’t expect to float to the Moon by airship anytime soon, even if it sounds like a good idea.

The Great Green Wall: Africa’s Ambitious Attempt To Fight Desertification

Por: Lewin Day
9 Mayo 2024 at 14:00

As our climate changes, we fear that warmer temperatures and drier conditions could make life hard for us. In most locations, it’s a future concern that feels uncomfortably near, but for some locations, it’s already very real. Take the Sahara desert, for example, and the degraded landscapes to the south in the Sahel. These arid regions are so dry that they struggle to support life at all, and temperatures there are rising faster than almost anywhere else on the planet.

In the face of this escalating threat, one of the most visionary initiatives underway is the Great Green Wall of Africa. It’s a mega-sized project that aims to restore life to barren terrain.

A Living Wall

Concentrated efforts have helped bring dry lands back to life. Credit: WFP

Launched in 2007 by the African Union, the Great Green Wall was originally an attempt to halt the desert in its tracks. The Sahara Desert has long been expanding, and the Sahel region has been losing the battle against desertification. The Green Wall hopes to put a stop to this, while also improving food security in the area.

The concept of the wall is simple. The idea is to take degraded land and restore it to life, creating a green band across the breadth of Africa which would resist the spread of desertification to the south. Intended to span the continent from Senegal in the west to Djibouti in the east, it was originally intended to be 15 kilometers wide and a full 7,775 kilometers long. The hope was to complete the wall by 2030.

The Great Green Wall concept moved past initial ideas around simply planting a literal wall of trees. It eventually morphed into a broader project to create a “mosaic” of green and productive landscapes that can support local communities in the region.

Reforestation is at the heart of the Great Green Wall. Millions of trees have been planted, with species chosen carefully to maximise success. Trees like Acacia, Baobab, and Moringa are commonly planted not only for their resilience in arid environments but also for their economic benefits. Acacia trees, for instance, produce gum arabic—a valuable ingredient in the food and pharmaceutical industries—while Moringa trees are celebrated for their nutritious leaves.

 

Choosing plants with economic value has a very important side effect that sustains the project. If random trees of little value were planted solely as an environmental measure, they probably wouldn’t last long. They could be harvested by the local community for firewood in short order, completely negating all the hard work done to plant them. Instead, by choosing species that have ongoing productive value, it gives the local community a reason to maintain and support the plants.

Special earthworks are also aiding in the fight to repair barren lands. In places like Mauritania, communities have been digging half-moon divots into the ground. Water can easily run off or flow away on hard, compacted dirt. However, the half-moon structures trap water in the divots, and the raised border forms a protective barrier. These divots can then be used to plant various species where they will be sustained by the captured water. Do this enough times over a barren landscape, and with a little rain, formerly dead land can be brought back to life. It’s a traditional technique that is both cheap and effective at turning brown lands green again.

Progress

The project has been an opportunity to plant economically valuable plants which have proven useful to local communities. Credit: WFP

The initiative plans to restore 100 million hectares of currently degraded land, while also sequestering 250 million tons of carbon to help fight against climate change. Progress has been sizable, but at the same time, limited. As of mid-2023, the project had restored approximately 18 million hectares of formerly degraded land. That’s a lot of land by any measure. And yet, it’s less than a fifth of the total that the project hoped to achieve. The project has been frustrated by funding issues, delays, and the degraded security situation in some of the areas involved. Put together, this all bodes poorly for the project’s chances of reaching its goal by 2030, given 17 years have passed and we draw ever closer to 2030.

While the project may not have met its loftiest goals, that’s not to say it has all been in vain. The Great Green Wall need not be seen as an all or nothing proposition. Those 18 million hectares that have been reclaimed are not nothing, and one imagines the communities in these areas are enjoying the boons of their newly improved land.

In the driest parts of the world, good land can be hard to come by. While the Great Green Wall may not span the African continent yet, it’s still having an effect. It’s showing communities that with the right techniques, it’s possible to bring barren zones back from the brink, turning them back into useful, productive land. That, at least, is a good legacy, and if the project’s full goals can be realized? All the better.

Your Open-Source Client Options In the non-Mastodon Fediverse

Por: Lewin Day
8 Mayo 2024 at 14:00

When things started getting iffy over at Twitter, Mastodon rose as a popular alternative to the traditional microblogging platform. In contrast to the walled gardens of other social media channels, it uses an open protocol that runs on distributed servers that loosely join together, forming the “Fediverse”.

The beauty of the Fediverse isn’t just in its server structure, though. It’s also in the variety of clients available for accessing the network. Where Twitter is now super-strict about which apps can hook into the network, the Fediverse welcomes all comers to the platform! And although Mastodon is certainly the largest player, it’s absolutely not the only elephant in the room.

Today, we’ll look at a bunch of alternative clients for the platform, ranging from mobile apps to web clients. They offer unique features and interfaces that cater to different user preferences and needs. We’ll look at the most notable examples—each of which brings a different flavor to your Fediverse experience.

Phanpy

Phanpy is relatively new on the scene when it comes to Mastodon alternatives, but it has a fun name and a clean, user-friendly interface. Designed as a web client, Phanpy stands out in the way it hides status actions—like reply, boost, and favorite buttons. It’s an intentional design choice to reduce clutter, with the developer noting they are happy with this tradeoff even if it reduces engagement on the platform. It’s for the chillers, not the attention-starved.

Phanpy also supports multiple accounts, making it a handy tool for those who manage different personas or profiles across the Fediverse. Other power-user features include a multi-column interface if you want to really chug down the posts, and a recovery system for unsent drafts.

Rodent

Rodent, on the other hand, is tailored for users on Android smartphones and tablets. The developers have a bold vision, noting that “Rodent is disruptive, unapologetical, and has a user-first approach.” Despite this, it’s not forbidding to new users—the interface will be instantly familiar to a Mastodon or Twitter user.

Rodent brings you access to Mastodon with a unique set of features. It will let you access instances without having to log in to them (assuming the instance allows it), and has a multi-instance view that lets you flip between them easily. The interface also has neatly nested replies which can make following a conversation far easier. The latest update also set it up to give you meaningful notifications rather than just vague pings from the app. That’s kind of a baseline feature for most social media apps, but this is an app with a small but dedicated developer base.

Tusky

Tusky is perhaps one of the most popular Mastodon clients for Android users. Known for its sleek and minimalist design, Tusky provides a smooth and efficient way to navigate Mastodon. It’s clean, uncluttered, and unfussy.

Tusky handles all the basics—the essential features like notifications, direct messaging, and timeline filters. It’s a lightweight app that doesn’t hog a lot of space or system resources. However, it’s still nicely customizable to ensure it’s showing you what you want, when you want.

If you’ve tried the official Mastodon app and found it’s not for you, Tusky might be more your speed. Where some apps bombard you with buttons and features, Tusky gets out of the way of you and the feed you’re trying to scroll.

Fedilab

The thing about the Fediverse is that it’s all about putting power back in individual hands. Diversity is its strength, and that’s where apps like Fedilab come in. Fedilab isn’t just about accessing social media content either. It wants to let you access other sites in the Fediverse too. A notable example is Peertube—an open-source alternative to YouTube. It’ll handle a bunch of others, too.

You might think this makes Fedilab more complicated, but it’s not really the case. If you just want to use it to access Mastodon, it does that just fine. But if you want to pull in other content to the app, from places like Misskey, Lemmy, or even Twitter, it’ll gladly show you what you’re looking for.

Trunks.social

Trunks.social is a newer entrant designed to enhance the Mastodon experience for everybody. Unlike some other options, it’s truly multi-platform—available as a web client, or as an app for both Android and iOS. If you want to use Mastodon across a bunch of devices and with a consistent experience across all of them, Trunks.social could be a good option for you.

It focuses on integrating tightly with iOS features, such as the system-wide dark mode, to deliver a coherent and aesthetically pleasing experience across all Apple devices. Trunks.social also places a strong emphasis on privacy and data protection, offering advanced settings that let users control how their data is handled and interacted with on the platform.

Conclusion

Choosing the right Fediverse client can significantly enhance your experience of the platform. Whether you’re a casual user looking for a simple interface on your smartphone or a power user needing to work across multiple accounts or instances, there’s a client out there for you.

The diversity of clients shows the vibrant ecosystem surrounding the Fediverse. It’s not just Mastodon! It’s all driven by the community’s commitment to open-source development and user-centric design. Twitter once had something similar before it shunned flexibility to rule its community with an iron fist. In the open-source world, though, you don’t need to worry about being treated like that.

Supercon 2023: MakeItHackin Automates the Tindie Workflow

7 Mayo 2024 at 14:00

Selling your hardware hacks is a great way to multiply your project’s impact, get your creations into others’ hands, and contribute to your hacking-related budget while you’re at it. If you’re good at it, your store begins to grow: from a couple of orders a year to one almost every day. And if you don’t optimize the process of mailing orders out, it might just start taking a toll on you.

That is not to say that you should worry – it’s merely a matter of optimization, and now you have a veritable resource to refer to. At Supercon 2023, [MakeItHackin]/[Andrew] graced us with his extensive experience in scaling up sales and making the shipping process as seamless as it can be. His experience is multifaceted – he works across no fewer than four platforms, Tindie, Lektronz, Etsy and Shopify – which makes his talk all the more valuable.

[MakeItHackin] tells us how he started out selling hardware, how his stores grew, and what pushed him to automate the shipping process to a formidable extent. Not just that – he’s developed a codebase for making the shipping experience as smooth as possible, and he’s sharing it all with us.

His research was initially prompted by Tindie specifically, striving to make the shipping process seamless. If you go the straightforward way and use the web UI to copy-paste the shipping data into your postal system, it will take you a good few minutes, and it’s an error-prone process. That is fine for a couple of orders a year, but when you’re processing dozens of orders at a time, it starts to add up. Plus, there are a few other issues – for instance, the invoices Tindie prints out are not customizable. As for Etsy, it is barely equipped to handle shipping at all, and you are expected to have your own system.

There are APIs, however – which is where automation can begin. The goal is simple – spending as little time as possible on shipping, and as much time as possible on designing hardware. He shows us a video with a simple demo, cutting the shipping label creation time down from a couple of minutes to fourteen seconds. That alone is a worthwhile result, and there’s more.
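The core of such an automation is unglamorous data plumbing: mapping an order record onto a label request. The sketch below is purely hypothetical – the order fields and label schema are invented stand-ins, not Tindie’s or any carrier’s real API:

```python
# Hypothetical sketch: turn a marketplace order into a carrier label
# request -- the step that otherwise means minutes of copy-pasting.
# All field names here are invented for illustration.

def order_to_label_request(order):
    """Map an order dict (hypothetical schema) to a label payload."""
    addr = order["shipping_address"]
    return {
        "to": {
            "name": addr["name"],
            "street": addr["street"],
            "city": addr["city"],
            "postcode": addr["postcode"],
            "country": addr["country"],
        },
        # Total parcel weight drives the postage rate.
        "weight_g": sum(item["weight_g"] * item["qty"]
                        for item in order["items"]),
        "reference": f"order-{order['id']}",
    }

order = {
    "id": 1042,
    "shipping_address": {"name": "A. Hacker", "street": "1 Solder Way",
                         "city": "Pasadena", "postcode": "91101",
                         "country": "US"},
    "items": [{"weight_g": 25, "qty": 2}, {"weight_g": 60, "qty": 1}],
}
print(order_to_label_request(order)["weight_g"])  # 110
```

In a real pipeline this payload would be POSTed to a carrier’s label endpoint; the win is that the transcription step becomes instant and can’t fat-finger an address.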

On the way there, he’s had to reverse-engineer a couple APIs. In the talk, you get a primer about APIs – how they work, differences between external and internal APIs, ways to tap into internal APIs and make them work your magic. APIs are one of the keys to having the shipping process run smoothly and quickly, and [MakeItHackin] teaches you everything, from managing cookies to using browser inspect element tools and Selenium.

Another key is having fun. [MakeItHackin] gives us another demo – an automated system that stays in your workshop, powered by a Raspberry Pi and assisted by an Arduino, which does the entire process from start to finish without human input, save for actually putting things into envelopes and taking them to the post office. Of course, the system is also equipped with flashing lights and sirens – there’s no chance you will miss an order arriving.

Then, he goes into customs and inventory management. Customs forms might require special information added to the label, which is all that much easier to do in an automated process completely under your control. As for inventory management, the API situation is a bit dire, but he’s looking into a centralized inventory synchronization system for all four platforms too.

The last part is about working with your customers as people. Prompt and personalized communication helps – some might be tempted to use “AI” chatbots, and [MakeItHackin] has tried, showing you that there are specific limitations. Also, careful with the temptation to have part of your shipping process be cloud-managed – that also means you’re susceptible to personal data storage-related risks, so it might be best to stay away from it.

In the end, we get a list of things to watch out for. For instance, don’t use your personal details on the envelope, whether it’s the “From” address or the phone number – getting substitute ones is well worth it to protect your privacy. On the practical side, using a label printer might turn out to be significantly cheaper than using an inkjet printer – remember, ink costs money – and there are a dozen more pieces of advice that any up-and-coming seller ought to know.

Of course, all this is but a sliver of the wealth of information that [MakeItHackin] shares in his talk, and we are overjoyed to have hosted it. If you’re looking to start selling your hardware, or perhaps you’re well on your way, find 45 minutes for this talk – it’s worth its metaphorical weight in gold.

NASA Is Now Tasked With Developing A Lunar Time Standard, Relativity Or Not

Por: Lewin Day
2 Mayo 2024 at 14:00

A little while ago, we talked about the concept of timezones and the Moon. It’s a complicated issue, because on Earth, time is all about the Sun and our local relationship with it. The Moon and the Sun have their own weird thing going on, so time there doesn’t really line up well with our terrestrial conception of it.

Nevertheless, as humanity gets serious about doing Moon things again, the issue needs to be solved. To that end, NASA has now officially been tasked with setting up Moon time – just a few short weeks after we last talked about it! (Does the President read Hackaday?) Only problem is, physics is going to make it a damn sight more complicated!

Relatively Speaking

You know it’s serious when the White House sends you a memo. “Tell NASA to invent lunar time, and get off their fannies!”

The problem is all down to general and special relativity. The Moon is in motion relative to Earth, and it also has a lower gravitational pull. We won’t get into the physics here, but it basically means that time literally moves at a different pace up there. Time on the Moon passes on average 58.7 microseconds faster over a 24-hour Earth day. It’s not constant, either—there is a certain degree of periodic variation involved.

It’s a tiny difference, but it’s cumulative over time. Plus, as it is, many space and navigational applications need the utmost in precise timing to function, so it’s not something NASA can ignore. Even if the agency wanted to simply use UTC and call it good, the relativity problem would prevent that from being a workable solution.
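Using the average figure quoted above, the accumulated offset is easy to tabulate, and it shows why “tiny” doesn’t mean “ignorable”:

```python
# Cumulative drift of a lunar clock versus an Earth clock, using the
# article's average figure of 58.7 microseconds gained per Earth day.
DRIFT_US_PER_DAY = 58.7

def accumulated_drift_ms(days):
    """Cumulative lunar-vs-Earth clock offset, in milliseconds."""
    return days * DRIFT_US_PER_DAY / 1000.0

print(f"1 year:   {accumulated_drift_ms(365):.1f} ms")
print(f"10 years: {accumulated_drift_ms(3650):.0f} ms")
```

Satellite navigation needs timing good to nanoseconds, so an uncorrected offset of tens of milliseconds per year is enormous by the standards of a GPS-like system.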

Without a reliable and stable timebase, space agencies like NASA would struggle to establish useful infrastructure on the Moon. Things like lunar satellite navigation wouldn’t work accurately without taking into account the time slip, for example. GPS is highly sensitive to relativistic time effects, and indeed relies upon them to function. Replicating it on the Moon is only possible if these factors are accounted for. Looking even further ahead, things like lunar commerce or secure communication would be difficult to manage reliably without stable timebases for equipment involved.

Banks of atomic clocks—like these at the US Naval Observatory—are used to establish high-quality time standards. Similar equipment may need to be placed on the Moon to establish Coordinated Lunar Time (LTC). Credit: public domain

Still, the order to find a solution has come down from the top. A memo from the Executive Office of the President charged NASA with delivering a standard solution for lunar timing by December 31, 2026. Coordinated Lunar Time (LTC) must be established in a way that is traceable to Coordinated Universal Time (UTC). That will enable operators on Earth to synchronize operations with crews or unmanned systems on the Moon itself. LTC is required to be accurate enough for scientific and navigational purposes, and it must be resilient to any loss of contact with systems back on Earth.

It’s also desired that the future LTC standard will be extensible and scalable to space environments we may explore in future beyond the Earth-Moon system itself. In time, NASA may find it necessary to establish time standards for other celestial bodies, due to their own unique differences in relative velocity and gravitational field.

The deadline means there’s time for NASA to come up with a plan to tackle the problem. However, for a federal agency, less than two years is not exactly a lengthy time frame. It’s likely that whatever NASA comes up with will involve some kind of timekeeping equipment deployed on the Moon itself. This equipment would thus be subject to the time shift relative to Earth, making it easier to track differences in time between the lunar and terrestrial time-realities.

The US Naval Observatory doesn’t just keep careful track of time, it displays it on a big LED display for people in the area. NASA probably doesn’t need to establish a big time billboard on the Moon, but it’d be cool if they did. Credit: Votpuske, CC BY 4.0

Great minds are already working on the problem, like Kevin Coggins, NASA’s space communications and navigation chief. “Think of the atomic clocks at the U.S. Naval Observatory—they’re the heartbeat of the nation, synchronizing everything,” he said in an interview. “You’re going to want a heartbeat on the moon.”

For now, establishing LTC remains a project for the American space agency. It will work on the project in partnership with the Departments of Commerce, Defense, State and Transportation. One fears for the public servants required to coordinate meetings amongst all those departments.

Establishing new time standards isn’t cheap. It requires smart minds, plenty of research and development, and some serious equipment. Space-rated atomic clocks don’t come cheap, either. Regardless, the U.S. government hopes that NASA will lead the way for all spacefaring nations in this regard, setting a lunar time standard that can serve future operations well.


VAR Is Ruining Football, and Tech Is Ruining Sport

Por: Lewin Day
29 Abril 2024 at 14:00
The symbol of all that is wrong with football.

Another week in football, another VAR controversy to fill the column inches and rile up the fans. If you missed it, Coventry scored a last-minute winner in extra time in a crucial match—an FA Cup semi-final. Only, oh wait—computer says no. VAR ruled Haji Wright was offside, and the goal was disallowed. Coventry fans screamed that the system got it wrong, but no matter. Man United went on to win and dreams were forever dashed.

Systems like the Video Assistant Referee were brought in to make sport fairer, with the aim that they would improve the product and leave fans and competitors better off. And yet, years later, with all this technology, we find ourselves up in arms more than ever.

It’s my sincere belief that technology is killing sport, and the old ways were better. Here’s why.

The Old Days

Moments like these came down to the people on the pitch. Credit: Sdo216, CC BY-SA 3.0

For hundreds of years, we adjudicated sports the same way. The relevant authority nominated some number of umpires or referees to control the game. The head referee was the judge, jury, and executioner as far as rules were concerned. Players played to the whistle, and a referee’s decision was final. Whatever happened, happened, and the game went on.

It was not a perfect system. Humans make mistakes. Referees would make bad calls. But at the end of the day, when the whistle blew, the referee’s decision carried the day. There was no protesting it—you had to suck it up and move on.

This worked fine until the advent of a modern evil—the instant replay. Suddenly, stadiums were full of TV cameras that captured the play from all angles. Now and then, it would become obvious that a referee had made a mistake, with television stations broadcasting incontrovertible evidence to thousands of viewers across the land. A ball at Wimbledon was in, not out. A striker was on side prior to scoring. Fans started to groan and grumble. This wasn’t good enough!

And yet, the system hung strong. As much as it pained the fans to see a referee screw over their favored team, there was nothing to be done. The referee’s call was still final. Nobody could protest or overrule the call. The decision was made, the whistle was blown. The game rolled on.

Then somebody had a bright idea. Why don’t we use these cameras and all this video footage, and use it to double check the referee’s work? Then, there’ll never be a problem—any questionable decision can be reviewed outside of the heat of the moment. There’ll never be a bad call again!

Oh, what a beautiful solution it seemed. And it ruined everything.

The Villain, VAR

The assistant video assistant referees are charged with monitoring various aspects of the game and reporting to the Video Assistant Referee (VAR). The VAR then reports to the referee on the ground, who may overturn a decision, hold firm, or look at the footage themself on a pitchside display. Credit: Niko4it, CC BY-SA 4.0

Enter the Video Assistant Referee (VAR). The system was supposed to bring fairness and accuracy to a game fraught with human error. The Video Assistant Referee was an official that would help guide the primary referee’s judgement based on available video evidence. They would be fed information from a cadre of Assistant Video Assistant Referees (AVARs) who sat in the stadium behind screens, reviewing the game from all angles. No, I didn’t make that second acronym up.

It was considered a technological marvel. So many cameras, so many views, so much slow-mo to pore over. The assembled VAR team would look into everything from fouls to offside calls. The information would be fed to the main referee on the pitch, and they could refer to a pitchside video replay screen if they needed to see things with their own eyes.

A VAR screen mounted on the pitch for the main referee to review as needed. Credit: Carlos Figueroa, CC BY-SA 4.0

The key was that VAR was to be an assistive tool. It was to guide the primary referee, who still had the final call at the end of the day.

You’d be forgiven for thinking that giving a referee more information to do their job would be a good thing.  Instead, the system has become a curse word in the mouths of fans, and a scourge on football’s good name.

From its introduction, VAR began to pervert the game of football. Fans were soon decrying the system’s failures, as entire championships fell the wrong way due to unreliability in VAR systems. Assistant referees were told to hold their offside calls to let the video regime take over. Players were quickly chided for demanding video reviews time and again. New rules would see yellow cards issued for players desperately making “TV screen” gestures in an attempt to see a rival’s goal overturned. Their focus wasn’t on the game, but on gaming the system in charge of it.

Fans and players are so often stuck waiting for the penny to drop that celebrations lose any momentum they might have had. Credit: Rlwjones, CC BY-SA 4.0

VAR achieves one thing with brutal technological efficiency: it sucks the life out of the game. The spontaneity of celebrating a goal is gone. Forget running to the stands, embracing team mates, and punching the air in sweet elation. Instead, so many goals now lead to minutes-long reviews while the referee consults with those behind the video screens and reviews the footage. Fans sit in stunned silence, enduring the dreaded drawn-out suspense of “goal” or “no goal.”

The immediacy and raw emotion of the game has been shredded to pieces. Instead of jumping in joy, fans and players sit waiting for a verdict from an unseen, remote official. The communal experience of instant joy or despair is muted by the system’s mere presence. What was once a straightforward game now feels like a courtroom drama where every play can be contested and overanalyzed.

It’s not just football where this is a problem, either. Professional cricket is now weighed down with microphone systems to listen out for the slightest snick of bat on ball. Tennis is weighed down by electronic reviews of line calls. The interruptions never cease, because it’s in every player’s interest to whip out the measuring tape whenever it would screw over their rival. The more technology, the more reviews are made, and the further we get from playing out the game we all came to see.

Making Things Right

Enough of this nonsense! Blow the whistle and move on. Credit: SounderBruce, CC BY-SA 4.0

With so much footage to review, and so many layers of referees involved, VAR can only slow football down. There’s no point trying to make it faster or better. The correct call is to scrap it entirely.

As it stands, good games of football are being regularly interrupted by frustrating video checks. Even better games are being ruined when the VAR system fails or a bad call still slips through. Moments of jubilant celebration are all too often brought to naught when someone’s shoelace was thought to be a hair’s breadth ahead of someone’s pinky toe in a crucial moment of the game.

Yes, bad calls will happen. Yes, these will frustrate the fans. But they will frustrate them far less than the current way of doing things. It’s my experience that fans get over a bad call far faster when it’s one ref and a whistle. When it’s four referees, sixteen camera angles, and a bunch of lines on the video screen? They’ll rage for days that this mountain of evidence suggests their team was ripped off. They won’t get over it. They’ll moan about it for years.

Let the referees make the calls. Refereeing is an art form. A good referee understands the flow of the game, and knows when to let the game breathe versus when to assert control. This subtle art is being lost to the halting interruptions of the video inspection brigade.

Football was better before. They were fools to think they could improve it by measuring it to the nth degree. Scrap VAR, scrap the interruptions. Put it back on the referees on the pitch, and let the game flow.
