
The SS United States: The Most Important Ocean Liner We May Soon Lose Forever

By: Maya Posch
27 June 2024 at 14:30

Although it’s often said that the era of ocean liners came to an end by the 1950s with the rise of commercial aviation, reality isn’t quite that clear-cut. Coming out of the troubled 1940s arose a new kind of ocean liner, one using cutting-edge materials and propulsion, with hybrid civil and military use as the default, leading to a range of fascinating design decisions. This was the context in which the SS United States was born, with the beating heart of the US’ fastest battleships, with lightweight aluminium structures and survivability built into every single aspect of its design.

Outpacing the super-fast Iowa-class battleships with which it shares a lot of DNA, thanks to its lack of heavy armor and triple 16″ turrets, it easily became the fastest ocean liner, setting speed records that took decades to be beaten by other ocean-going vessels, though no ocean liner has ever truly beaten it on speed or comfort. Tricked out in the most tasteful non-flammable 1950s art and decorations imaginable, it would still be the fastest and most comfortable way to cross the Atlantic today. Unfortunately, ocean liners are no longer considered a way to travel in this era of commercial aviation, leading to the SS United States and its kin finding themselves either scrapped or stuck in limbo.

In the case of the SS United States, it has so far managed to escape the cutting torch, but while in limbo many of its fittings were sold off at auction, and the conservancy group that owns the ship is desperately looking for a way to fund its restoration. Most recently, the owner of the pier where the ship is moored in Philadelphia had the ship’s eviction approved by a judge, forcing some very tough choices to be made by September.

A Unique Design

WW II-era United States Maritime Commission (MARCOM) poster.

The designer of the SS United States was William Francis Gibbs, who despite being a self-taught engineer managed to translate his life-long passion for shipbuilding into a range of very notable ships. Many of these were designed at the behest of the United States Maritime Commission (MARCOM), which was created by the Merchant Marine Act of 1936 and abolished in 1950. MARCOM’s task was to create a merchant shipbuilding program for hundreds of modern cargo ships that would replace the World War I vintage vessels which formed the bulk of the US Merchant Marine. As a hybrid civil and federal organization, the merchant marine is intended to provide the logistical backbone for the US Navy in case of war and large-scale conflict.

The first major vessel to be commissioned for MARCOM was the SS America, which was an ocean liner commissioned in 1939 and whose career only ended in 1994 when it (then named the American Star) wrecked at the Canary Islands. This came after it had been sold in 1992 to be turned into a five-star hotel in Thailand. Drydocking in 1993 had revealed that despite the advanced age of the vessel, it was still in remarkably good condition.

Interestingly, the last merchant marine vessel to be commissioned by MARCOM was the SS United States, which would be a hybrid civilian passenger liner and military troop transport. Its sibling, the SS America, served in the Navy from 1941 to 1946 as the USS West Point (AP-23), carrying over 350,000 troops during the war, more than any other Navy troopship. Its big sister would thus be required to do all that and much more.

Need For Speed

SS United States colorized promotional B&W photograph. The ship’s name and an American flag have been painted in position here as both were missing when this photo was taken during 1952 sea trials.

William Francis Gibbs’ naval architecture firm – called Gibbs & Cox by 1950 after Daniel H. Cox joined – was tasked to design the SS United States, which was intended to be a display of the best the United States of America had to offer. It would be the largest, fastest ocean liner and thus also the largest and fastest troop and supply carrier for the US Navy.

Courtesy of the major metallurgical advances made during WW II, and with the full backing of the US Navy, the design featured a military-style propulsion plant and a heavily compartmentalized layout following that of e.g. the Iowa-class battleships. This meant two separate engine rooms and similar levels of redundancy elsewhere, to isolate any flooding and other types of damage. Meanwhile the superstructure was built out of aluminium, making it both very light and heavily corrosion-resistant. The eight US Navy M-type boilers (run at only 54% of capacity) and a four-shaft propeller arrangement took lessons learned with fast US Navy ships to reduce vibrations and cavitation to a minimum. These lessons included e.g. the five- and four-bladed propeller design also used on the Iowa-class battleships in their newer configurations.

Another lessons-learned feature was top-to-bottom fireproofing after the terrible losses of the SS Morro Castle and SS Normandie, with no wood, fabrics or other flammable materials onboard, leading to the use of glass, metal and spun-glass fiber, as well as fireproof fabrics and carpets. This extended to the art pieces that were onboard the ship, as well as the ship’s grand piano, which was made from a mahogany whose inability to ignite was demonstrated by trying to burn it with a gasoline fire.

The actual maximum speed that the SS United States can reach is still unknown, with it originally having been a military secret. Its first speed trial supposedly saw the vessel hit an astounding 43 knots (80 km/h), though after the ship was retired from the United States Lines (USL) by the 1970s and no longer seen as a naval auxiliary asset, its top speed during the June 10, 1952 trial was revealed to be 38.32 knots (70.97 km/h). In service with USL, its cruising speed was 36 knots, gaining it the Blue Riband and rightfully giving it its place as America’s Flagship.

A Fading Star

The SS United States was withdrawn from passenger service in 1969, in a very unexpected manner. Although the USL was no longer using the vessel, it remained a US Navy reserve vessel until 1978, meaning that it stayed sealed off to anyone but US Navy personnel during that period. Once the US Navy no longer deemed the vessel relevant for its needs in 1978, it was sold off, leading to a period of successive owners. Notable among them was Richard Hadley, who planned to convert it into seagoing time-share condominiums and auctioned off all the interior fittings in 1984 before his financing collapsed.

In 1992, Fred Mayer wanted to create a new ocean liner to compete with the Queen Elizabeth, leading him to have the ship’s asbestos and other hazardous materials removed in Ukraine, after which the vessel was towed back to Philadelphia in 1996, where it has remained ever since. Two more owners, including Norwegian Cruise Line (NCL), briefly came onto the scene, but economic woes scuttled plans to revive it as an active ocean liner. Ultimately NCL sought to sell the vessel off for scrap, which led the SS United States Conservancy (SSUSC) to take over ownership in 2010 and preserve the ship while seeking ways to restore and redevelop the vessel.

Considering that the running mate of the SS United States (the SS America) was lost only a few years prior, this leaves the SS United States as the only surviving example of a Gibbs ocean liner, and a poignant reminder of what was once a highlight of the US’s maritime prowess. Compared to the United Kingdom’s record, with the Queen Elizabeth 2 (QE2, active since 1969) now a floating hotel in Dubai and the Queen Mary 2 in service since its 2004 maiden voyage, the US record on preserving its ocean liner legacy looks rather meager.

End Of The Line?

The curator of the Iowa-class USS New Jersey (BB-62, currently fresh out of drydock), Ryan Szimanski, walked over from his museum ship last year to take a look at the SS United States, which is moored literally within viewing distance of his own pride and joy. Through the videos he made, one gains a good understanding of both how stripped the interior of the ship is and how amazingly well-conserved the ship remains today. Even after decades without drydocking or in-depth maintenance, the ship looks like it could slip into a drydock tomorrow and come out like new a year or so later.

At the end of all this, the question remains whether the SS United States deserves to be preserved. There are many arguments for why this would be the case, from its unique history as part of the US Merchant Marine, its relation to the highly successful SS America, and its status as effectively a sister ship to the four Iowa-class battleships, to its role as a strong reminder of the one-time importance of the US Merchant Marine. The latter especially is a point which professor Sal Mercogliano (of What’s Going on With Shipping? fame) is rather passionate about.

Currently the SSUSC is in talks with a New York-based real-estate developer about a redevelopment concept, but this was thrown into peril when the owner of the pier suddenly doubled the rent, leading to the eviction by September. Unless something changes for the better soon, the SS United States stands a good chance of soon following the USS Kitty Hawk, USS John F. Kennedy (which nearly became a museum ship) and so many more into the scrapper’s oblivion.

What, one might ask, is truly in the name of the SS United States?

The Book That Could Have Killed Me

24 June 2024 at 14:00

It is funny how sometimes things you think are bad turn out to be good in retrospect. Like many of us, when I was a kid, I was fascinated by science of all kinds. As I got older, I focused a bit more, but that would come later. Since I lived in a small town, there weren’t many recent science and technology books around, so you tended to read through the same ones over and over. One day, my library got a copy of the relatively recent book “The Amateur Scientist,” which was a collection of [C. L. Stong’s] Scientific American columns of the same name. [Stong] was an electrical engineer with wide interests, and those columns were amazing. The book only had a snapshot of projects, but they were awesome. The magazine, of course, had even more projects, most of which were outside my budget and even more of them outside my skill set at the time.

If you clicked on the links, you probably went down a very deep rabbit hole, so… welcome back. The book was published in 1960, but the projects were mostly from the 1950s. The 57 projects ranged from building a telescope — the original topic of the column before [Stong] took it over — to using a bathtub to study aerodynamics of model airplanes.

X-Rays

[Harry’s] first radiograph. Not bad!

However, there were two projects that fascinated me and that — lucky for me — I never got even close to completing. One was for building an X-ray machine. An amateur named [Harry Simmons] had described his setup, complaining that in 23 years he’d never met anyone else who had X-rays as a hobby. Oddly, in those days, it wasn’t a problem that the magazine published his home address.

You needed a few items. An Oudin coil, sort of like a Tesla coil in an autotransformer configuration, generated the necessary high voltage. In fact, it was the Oudin coil that started the whole thing. [Harry] was using it to power a UV light to test minerals for fluorescence. Out of idle curiosity, he replaced the UV bulb with an 01 radio tube. These old tubes had a magnesium coating — a getter — that absorbs stray gas left inside the tube.

The tube glowed in [Harry’s] hand and it reminded him of how an old gas-filled X-ray tube looked. He grabbed some film and was able to image screws embedded in a block of wood.

With 01 tubes hard to find, why not blow your own X-ray tubes?

However, 01 tubes were hard to get even then. So [Harry], being what we would now call a hacker, took the obvious step of having a local glass blower create custom tubes to his specifications.

Given that I lived where the library barely had any books published after 1959, it is no surprise that I had no access to 01 tubes or glass blowers. It wasn’t clear, either, if he was evacuating the tubes himself or if the glass blower was doing it for him, but the tube was pumped down to 0.0001 millimeters of mercury.

Why did this interest me as a kid? I don’t know. For that matter, why does it interest me now? I’d build one today if I had the time. We have seen more than one homemade X-ray tube project, so it is doable. But today I am probably able to safely operate high voltages and high vacuums, and to shield myself from the X-rays. Probably. Then again, maybe I still shouldn’t build this. But at age 10, I definitely would have done something bad to myself or my parents’ house, if not both.

Then It Gets Worse

The other project I just couldn’t stop reading about was a “homemade atom smasher” developed by [F. B. Lee]. I don’t know about “atom smasher,” but it was a linear particle accelerator, so I guess that’s an accurate description.

The business part of the “atom smasher” (does not show all the vacuum equipment).

I doubt I have the chops to pull this off today, much less back then. Old refrigerator compressors were run backwards to pull a rough vacuum. A homemade mercury diffusion pump got you the rest of the way there. I would work with some of this stuff later in life with scanning electron microscopes and similar instruments, but I was buying them, not cobbling them together from light bulbs, refrigerators, and home-made blown glass!

You needed a good way to measure low pressure, too, so you needed to build a McLeod gauge full of mercury. The accelerator itself is a three-foot-long borosilicate glass tube, two inches in diameter. At the top is a metal globe with a peephole in it to allow you to see a neon bulb to judge the current in the electron beam. At the bottom is a filament.

The globe at the top matches one on top of a Van de Graaff generator that creates about 500,000 volts at a relatively low current. The particle accelerator is decidedly linear but, of course, all the cool particle accelerators these days form a loop.

[Andres Seltzman] built something similar, although not quite the same, some years back and you can watch it work in the video below:

What could go wrong? High vacuum, mercury, high voltage, an electron beam and plenty of unintentional X-rays. [Lee] mentions the danger of “water hammers” in the mercury tubes. In addition, [Stong] apparently felt nervous enough to get a second opinion from [James Bly] who worked for a company called High Voltage Engineering. He said, in part:

…we are somewhat concerned over the hazards involved. We agree wholeheartedly with his comments concerning the hazards of glass breakage and the use of mercury. We feel strongly, however, that there is inadequate discussion of the potential hazards due to X-rays and electrons. Even though the experimenter restricts himself to targets of low atomic number, there will inevitably be some generation of high-energy X-rays when using electrons of 200 to 300 kilovolt energy. If currents as high as 20 microamperes are achieved, we are sure that the resultant hazard is far from negligible. In addition, there will be substantial quantities of scattered electrons, some of which will inevitably pass through the observation peephole.

I Survived

Clearly, I didn’t build either of these, because I’m still here today. I did manage to make an arc furnace from a long-forgotten book. Curtain rods held carbon rods from some D-cells. The rods were in a flower pot packed with sand. An old power cord hooked to the curtain rods, although one conductor went through a jar of salt water, making a resistor so you didn’t blow the fuses.

Somehow, I survived without dying from fumes, blinding myself, or burning myself, but my parents’ house had a burn mark on the floor for many years after that experiment.

If you want to build an arc furnace, we’d start with a more modern concept. If you want a safer old book to read, try the one by [Edmund Berkeley], the developer of the Geniac.

An Enigma Machine Built in Meccano

15 June 2024 at 20:00

As far as model construction sets go, LEGO is by far the most popular brand for building not only pre-planned models but whatever the builder can imagine. There are a few others out there though, some with interesting features. Meccano (or Erector in North America) is a construction set based around parts that are largely metal, including its fasteners, which allows for a different approach to building models than other systems, including the easy addition of electricity. [Craig], a member of the London Meccano Club, is demonstrating his model Enigma machine, which uses this system for all of its parts and adds some electricity to make the circuitry work as well.

The original Enigma machine was an electromechanical cipher machine used by the German military in World War II to send coded messages. For the time, its code was extremely hard to break, and it led to the British development of the first programmable electronic digital computer to help decipher its coded messages. This model uses Meccano parts instead to recreate the function of the original machine, with a set of keys similar to a typewriter which, when pressed, advance a set of three wheels. The wheels all have wiring in them, and depending on their initial settings a different character lights up on a display.
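To make the principle concrete, here is a minimal software sketch of a three-rotor machine of this kind. The rotor and reflector tables are the widely published Enigma I wirings, but the stepping is deliberately simplified (plain odometer stepping, no turnover notches, ring settings, or plugboard), so treat it as an illustration of the idea rather than a faithful simulator of either the original machine or [Craig]’s Meccano model.

```python
import string

ALPHA = string.ascii_uppercase

# Widely published Enigma I rotor wirings (I, II, III) and reflector B.
ROTORS = [
    "EKMFLGDQVZNTOWYHXUSPAIBRCJ",
    "AJDKSIRUXBLHWTMCQGZNPYFVOE",
    "BDFHJLCPRTXVZNYEIWGAKMUSQO",
]
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

def encode(text, start_positions):
    """Encode text with three stepping rotors and a reflector.
    Simplified: odometer-style stepping, no ring settings, no plugboard."""
    offsets = list(start_positions)
    out = []
    for ch in text.upper():
        if ch not in ALPHA:
            continue
        # Pressing a key steps the wheels before the signal passes through.
        offsets[0] = (offsets[0] + 1) % 26
        if offsets[0] == 0:
            offsets[1] = (offsets[1] + 1) % 26
            if offsets[1] == 0:
                offsets[2] = (offsets[2] + 1) % 26
        i = ALPHA.index(ch)
        # Forward through the three wheels...
        for wiring, off in zip(ROTORS, offsets):
            i = (ALPHA.index(wiring[(i + off) % 26]) - off) % 26
        i = ALPHA.index(REFLECTOR[i])  # ...bounce off the reflector...
        # ...and back through the wheels in reverse order to light a lamp.
        for wiring, off in zip(reversed(ROTORS), reversed(offsets)):
            i = (wiring.index(ALPHA[(i + off) % 26]) - off) % 26
        out.append(ALPHA[i])
    return "".join(out)

ciphertext = encode("MECCANO", (0, 0, 0))
assert encode(ciphertext, (0, 0, 0)) == "MECCANO"  # same settings decrypt
```

Because the reflector makes the whole path self-inverse, running the ciphertext back through a machine set to the same starting positions recovers the plaintext, which is why agreeing on the wheel settings mattered so much to sender and receiver.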

There are a few modifications made to the design (besides the use of a completely different set of materials), but one of the main ones was replacing the heavy leaf springs of the original with smaller and easier-to-manage coil springs, which are also part of the electrical system that creates the code. The final product recreates the original exceptionally faithfully, with plans to create a plugboard up next, and you can take a look at the inner workings of a complete original here.

Thanks to [Tim] for the tip!

Mechanic Prince of Tides

5 June 2024 at 08:00

Lord Kelvin’s name comes up anytime you start looking at the history of science and technology. In addition to working on transatlantic cables and thermodynamics, he also built an early computing device to predict tides. Kelvin, whose real name was William Thomson, became interested in tides in a roundabout way, as explained in a recent IEEE Spectrum article.

He’d made plenty of money on his patents related to the telegraph cable, but his wife died, so he decided to buy a yacht, the Lalla Rookh. He used it as a summer home. If you live on a boat, the tides are an important part of your day.

Today, you could just ask your favorite search engine or AI about the tides, but in 1870, that wasn’t possible. Also, in a day when sea power made or broke empires, tide charts were often top secret. Not that the tides were a total mystery. Newton explained what was happening back in 1687. Laplace realized they were tied to oscillations almost a century later. Thomson made a machine that could do the math Laplace envisioned.

We know today that the tides depend on hundreds of different motions, but many of them have relatively insignificant contributions, and we only track 37 of them, according to the post. Kelvin’s machine — an intricate mesh of gears and cranks — tracked only 10 components.
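The arithmetic that Kelvin mechanized is just a sum of cosines, one term per tidal constituent, each with its own speed, amplitude, and phase. The sketch below shows that sum in Python; the constituent speeds are the standard published values, but the amplitudes and phases are made-up placeholders (real ones come from harmonic analysis of a local tide-gauge record), so this illustrates the method rather than predicting any actual harbor.

```python
import math

# (name, speed in degrees per hour, amplitude in meters, phase in degrees)
# Speeds are the standard constituent speeds; amplitudes and phases here are
# invented for illustration only.
CONSTITUENTS = [
    ("M2", 28.984104, 1.20,  45.0),   # principal lunar semidiurnal
    ("S2", 30.000000, 0.40,  80.0),   # principal solar semidiurnal
    ("N2", 28.439730, 0.25,  20.0),   # larger lunar elliptic semidiurnal
    ("K1", 15.041069, 0.15, 130.0),   # lunisolar diurnal
    ("O1", 13.943036, 0.10, 200.0),   # lunar diurnal
]

def tide_height(t_hours, mean_level=0.0):
    """Predicted tide height: a mean level plus one cosine per constituent,
    the same sum Kelvin's machine added up mechanically."""
    h = mean_level
    for _name, speed, amplitude, phase in CONSTITUENTS:
        h += amplitude * math.cos(math.radians(speed * t_hours - phase))
    return h

# A day's worth of hourly predictions, like the pen tracing the paper roll.
for hour in range(25):
    print(f"{hour:02d}:00  {tide_height(hour):+.2f} m")
```

In Kelvin’s machine each cosine term was generated by its own crank-driven pulley, and a single wire running over all the pulleys did the summation for the pen.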

In operation, the user turned a crank, and a pen traced a curve on a roll of paper. A small mark showed the hour with a special mark for noon. You could process a year’s worth of tides in about 4 hours. While Kelvin received credit for the machine’s creation, he acknowledged the help of many others in his paper, from craftsmen to his brother.

We actually did a deep dive into tides, including Kelvin’s machine, a few years ago. He shows up a number of times in our posts.

Aiken’s Secret Computing Machines

3 June 2024 at 20:00

This neat video from the [Computer History Archives Project] documents the development of the Aiken Mark I through Mark IV computers. Partly shrouded in the secrecy of World War II and the Manhattan Project effort, the Mark I, “Harvard’s Robot Super Brain”, was built and donated by IBM, and marked their entry into what we would now call the computer industry.

Numerous computing luminaries used the Mark I, aside from its designer Howard Aiken. Grace Hopper, Richard Bloch, and even John von Neumann all used the machine. It was an electromechanical computer, using gears, punch tape, relays, and a five horsepower motor to keep it all running in sync. If you want to dig into how it actually worked, the deliciously named patent “Calculator” goes into some detail.

The video goes on to tell the story of Aiken’s various computers, the rift between Harvard and IBM, and the transition of computation from mechanical to electronic. If this is computer history that you don’t know, it’s well worth a watch. (And let us know if you also think that they’re using computer-generated speech to narrate it.)

If “modern” computer history is more your speed, check out this documentary about ENIAC.

Thanks [Stephen Walters] for the tip!

The Tragic Story Of The Ill-Fated Supergun

By: Lewin Day
28 May 2024 at 14:00

In the annals of ambitious engineering projects, few have captured the imagination and courted controversy quite like Gerald Bull’s Supergun. Bull, a Canadian artillery expert, envisioned a gun that could shoot payloads directly into orbit. In time, his ambition led him down a path that ended in both tragedy and unfinished business.

Depending on who you talk to, the Supergun was either a new and innovative space technology, or a weapon of war so dangerous, it couldn’t be allowed to exist. Ultimately, the powers that be intervened to ensure we would never find out either way.

First Shots Fired

Gerald Bull, pictured at the Space Research Institute at McGill University in 1964. Credit: CC BY-SA 3.0

Gerald Bull was born in 1928 in Ontario, Canada. After a tumultuous youth, his uncle was able to find him a place at the University of Toronto. Although his uncle suggested the medical school, Bull requested a position in the newly established aeronautical engineering course. After passing an interview, he was able to begin his tertiary studies in the field at the age of sixteen.

He would go on to graduate in 1948, a strictly average student who had done little to distinguish himself during his time at the university. However, his energy and passion would eventually see him admitted to further study at the Institute of Aerodynamics, where he studied the design of advanced wind tunnels.

This academic pursuit laid the groundwork for his future endeavors. While finishing his PhD in 1950, Bull would eventually be nominated for military work with the Defence Research Board. That led to his position with the Canadian Armament Research and Development Establishment, where he dived into the world of advanced artillery technology.

The Project HARP gun, abandoned in Barbados. Credit: Brohav, Public Domain

He began exploring the use of artillery guns for supersonic aerodynamic research, as a cheaper alternative to building high-speed wind tunnels. Later on, he would go on to develop the High Altitude Research Project (HARP), a joint Canadian-American initiative aimed at exploring ballistics at extremely high altitudes.

Kicking off in the 1960s, HARP’s most notable achievement was the creation of a massive gun capable of firing projectiles into the stratosphere, setting the stage for Bull’s lifelong obsession with superguns.

His early experiments with HARP demonstrated the potential of using artillery to reach the upper atmosphere, though the project was eventually shuttered due to financial and political pressures. The project developed a 16.4 inch (41.6 cm) smooth-bore gun which was installed for testing in Barbados.

By 1962, HARP was firing 330 pound (150 kilogram) finned projectiles at over 10,000 feet per second (3000 m/s), reaching altitudes of 215,000 feet (65 kilometers). The project was funded by using the projectiles to capture meteorological data in the upper atmosphere.

Aiming Higher

The seeds for Bull’s later work on the infamous Supergun were sown during these formative years. His desire was not just to shoot projectiles into the upper atmosphere, but to fire them so fast that they could actually reach orbit. His idea to achieve this was simple — he’d use a large gun to fire a projectile high into the atmosphere, where it would then ignite a rocket to boost its velocity further.

Bull’s SRC was in the arms trade, with the company designing and manufacturing the GC-45 howitzer for multiple customers. Credit: Sturmvogel 66, CC BY-SA 3.0

Well, simple enough on paper, anyway. But achieving this feat was altogether more complex in reality. Bull began investigating the concept during his time at the HARP project. There, he developed rocket-assisted projectiles that could be fired from an artillery gun without damage to the solid fuel propellant.

Plans centered around a small multi-stage rocket called the Martlet. It was to be fired from a 16.4 inch (41.6 cm) gun that was assembled by joining two existing naval cannons together into one massive barrel a full 110 feet (33.5 meters) long. Sadly, HARP’s funding began to dry up towards the end of the 1960s, and a change of government sealed the project’s fate.

Bull ended up going out on his own, establishing the Space Research Corporation (SRC) to pursue his goals. The company operated as an artillery consultancy for international clients, including the Canadian and US military. He developed improved rifling techniques which helped give military artillery longer range and better accuracy. SRC and Bull would go on to sell shells and guns to states all around the world. On the side, he continued to develop his orbital gun technology.

A small barrel section from Project Babylon exists in the collection of the Imperial War Museum, Duxford. Credit: CC BY-SA 3.0

The culmination of Bull’s work came in the late 1980s with the Supergun project. After serving jail time in the US for dealing arms to South Africa, Bull had moved away from clients in the West, and had taken up work with China and Iraq. Ultimately, though, this gave him the opportunity to pursue his dream of an orbital launch gun once more.

Officially known as Project Babylon, it was commissioned by Saddam Hussein’s government in 1988. The project’s goal was ostensibly to develop a supergun capable of launching satellites into orbit, potentially reducing the cost and complexity of space launches. The guns were intended to fire multi-stage rocket-propelled shells that would be capable of reaching orbit.

Bull agreed to continue work on conventional military artillery pieces for the Iraqi government, in exchange for a $25 million payment towards Project Babylon. The project would see the construction of multiple “Baby Babylon” guns, each measuring 147 feet (44.8 meters) long with a caliber of 13.8 inches (35 cm).

Big Babylon

The ultimate goal, however, was the production of two mighty PC-2 Big Babylon guns. They would measure 512 feet (156 meters) long with a massive 39 inch (99 cm) bore. The PC-2 was intended to be capable of launching a 440 lb (200 kg) satellite into an orbital trajectory, carried by a 4,400 lb (2,000 kg) rocket-assisted projectile. Alternatively, it could have launched a 1,300 lb (600 kg) projectile over 620 miles (1,000 km). The final gun would have sat almost 328 feet (100 m) high at the tip, with the barrel suspended by cables from a large supporting frame. The barrel itself was to weigh 1,510 tons,  with the whole structure coming in at a hefty 2,100 tons in total.

Two segments of the Iraqi supergun, Big Babylon, are displayed at the Royal Armouries in Fort Nelson, Portsmouth. Credit: Geni, GFDL CC-BY-SA

The technical challenges were immense. Achieving the necessary muzzle velocity to reach orbit required unprecedented gun lengths and extremely durable materials to withstand the immense pressures involved. The initial construction of the Baby Babylon revealed problems with the seals between the multiple barrel segments. This was a complication from a necessary engineering decision, as producing a single barrel at such large sizes was impractical.

Meanwhile, the political implications of the project drew international concern. Given the fraught political situation at the time, a large Iraqi gun project was not popular on the international stage. On paper, the gun’s applications for military use were limited. It was not possible to readily aim the gun, nor could it fire rapid salvos on a given target. It was impossible to move or hide, and it was extremely vulnerable to air attack.

Regardless of these practical limitations, few countries wanted Iraq to have such a potent gun in any way, shape or form. Furthermore, Bull was continuing to work on other Iraqi artillery projects, including Scud missile development. This only made him more unpopular with Iraq’s enemies.

The project’s demise was as dramatic as its ambition. In 1990, Bull was assassinated in Brussels as he approached his apartment’s front door. The killing followed a series of break-ins at his home, which have been suggested to have been warnings to the engineer to cease his work on the project. His death effectively ended Project Babylon. Supergun components, which had been in production across Europe, were seized by customs officers, and Bull’s staff in turn abandoned the project. Parts of the gun still exist today, after being donated to museums in the UK.

In the aftermath, the Supergun project remains a fascinating study of the interactions between ambition, technology and politics. Gerald Bull’s legacy is a testament to the limits of engineering, and the limits of our own ruling structures. While technically feasible, the Supergun could not be born, given the perceived geopolitical ramifications of such a weapon.

Gerald Bull’s story is a poignant chapter in the history of space exploration technology, marked by brilliant engineering marred by political intrigue and a tragic end. It serves as a reminder of the complexities involved when mixed-use technologies clash with political interests and national security concerns.

Rediscovering The Nile: The Ancient River That Was Once Overlooked By The Egyptian Pyramids

By: Maya Posch
19 May 2024 at 11:00

Although we usually imagine the conditions in Ancient Egypt to be much like the Egypt of today, earlier in the Holocene there was significantly more rain as a result of the African Humid Period (AHP). This translated into the river Nile stretching far beyond its current range, with many more branches. This knowledge led a team of researchers to test the hypothesis that the largest cluster of pyramids in the Nile Valley was sited along one of these now long since vanished branches. Their findings are described in an article published in Communications Earth & Environment, by [Eman Ghoneim] and colleagues.

The Ahramat Branch and pyramids along its trajectory. (Credit: Eman Ghoneim et al., 2024)

The CliffsNotes version can be found in the accompanying press release from the University of North Carolina Wilmington. Effectively, the researchers postulated that a branch of the Nile once ran along this grouping of pyramids, with their accompanying temples originally positioned alongside it. The trick was to prove that a river branch existed in that area many thousands of years ago.

What complicates this is that the main course of the Nile has shifted over the centuries, and anthropogenic activity has obscured much of what remained, making life for researchers exceedingly difficult. Ultimately a combination of soil core samples, geophysical evidence, and remote sensing (e.g. satellite imagery) helped to cement the evidence for the existence of what they termed the Ahramat Nile Branch, with ‘ahramat’ meaning ‘pyramids’ in Arabic.

Synthetic Aperture Radar (SAR) and high-resolution radar elevation data provided evidence for the Nile once having traveled right past this string of pyramids, also identifying the modern Bahr el-Libeini canal as one of the last remnants of the Ahramat Branch before the river’s course across the floodplain shifted towards the East, probably due to tectonic activity. Further research using Ground Penetrating Radar (GPR) and Electromagnetic Tomography (EMT) along a 1.2 km section of the suspected former riverbed gave clear indications of a well-preserved river channel, with the expected silt and sediments.

Soil cores to depths of 20 and 13 meters further confirmed this, showing not only the expected sediment, but also freshwater mussel shells at a depth of 6 meters. Shallow groundwater was indicated at these core sites, meaning that even today subsurface water still flows through this part of the floodplain.

These findings not only align with the string of pyramids and their causeways that would have provided direct access to the water’s edge, but also provided hints for a further discovery regarding the Bent Pyramid — as it’s commonly known — which is located deep inside the desert today. Although located about a kilometer from the floodplain, its approximately 700 meter long causeway terminates at what would have been a now extinct channel: the Dahshur Inlet, which might also have served the Red Pyramid and others, although evidence for this is shakier.

Altogether, these findings further illustrate an Ancient Egypt where the Old Kingdom was followed by a period of severe changes, with increasing drought caused by the end of the AHP, an eastwardly migrating floodplain and decreased flow in the Nile from its tributaries. By the time that European explorers laid eyes on the ancient wonders of the Ancient Egyptian pyramids, the civilization that had birthed them was no more, nor was the green and relatively lush environment that had once surrounded it.

How Italians Got Their Power

By: Jenny List
18 May 2024 at 20:00

We take for granted that electrical power standards are generally unified across countries and territories. Europe for instance has a standard at 230 volts AC, with a wide enough voltage acceptance band to accommodate places still running at 220 or 240 volts. Even the sockets maintain a level of compatibility across territories, with a few notable exceptions.

It was not always this way though, and to illustrate this we have [Sam], who’s provided us with a potted history of mains power in Italy. The complex twists and turns of power delivery in that country reflect the diversity of the power industry in the late 19th and early 20th century as the technology spread across the continent.

Starting with a table showing the impressive range of voltages found across the country from differing power companies, it delves into the taxation of power in Italy, which led to two entirely different plug standards, and into the country’s 110/220 volt system. Nationalization may have ironed out some of the kinks and unified 220 volts across the country, but the two plugs remain.

Altogether it’s a fascinating read, and one which brings to mind that, where this is being written, you could until a few years ago still find houses with three sizes of the archaic British round-pin socket. Interested in the diversity of plugs? We have a link for that.

Institutional Memory, On Paper

11 May 2024 at 14:00

Our own Dan Maloney has been on a Voyager kick for the past couple of years. Voyager, the space probe. As a long-term project, he has been trying to figure out the computer systems on board. He got far enough to write up a great overview piece, and it’s a pretty good summary of what we know these days. But along the way, he stumbled on a couple of old documents that would answer a lot of questions.

Dan asked JPL if they had them, and the answer was “no”. Oddly enough, the very people who were involved in the epic save a couple of weeks ago would also like a copy. So when Dan tracked the document down to a paper-only collection at Wichita State University, he thought he had won, but the whole box is stashed away while the library undergoes construction.

That box, and a couple of its neighbors, appear to have a treasure trove of documentation about the Voyagers, and it may even be one-of-a-kind. So in the comments, a number of people have volunteered to help the effort, but I think we’re all just going to have to wait until the library is open for business again. In this age of everything-online, everything-scanned-in, it’s amazing to believe that documents about the world’s furthest-flown space probe wouldn’t be available, but so it is!

It makes you wonder how many other similar documents – products of serious work by the people responsible for designing the systems and machines that shaped our world – are out there in the dark somewhere. History can’t capture everything, and it’s down to our collective good judgement in the end. So if you find yourself in a position to shed light on, or scan, such old papers, please do! And then contact some nerd institution like the Internet Archive or the Computer History Museum.

This article is part of the Hackaday.com newsletter, delivered every seven days for each of the last 200+ weeks. It also includes our favorite articles from the last seven days that you can see on the web version of the newsletter. Want this type of article to hit your inbox every Friday morning? You should sign up!

The Computers of Voyager

6 May 2024 at 14:00

After more than four decades in space and having traveled a combined 44 billion kilometers, it’s no secret that the Voyager spacecraft are closing in on the end of their extended interstellar mission. Battered and worn, the twin spacecraft are speeding along through the void, far outside the Sun’s influence now, their radioactive fuel decaying, their signals becoming ever fainter as the time needed to cross the chasm of space gets longer by the day.

But still, they soldier on, humanity’s furthest-flung outposts and testaments to the power of good engineering. And no small measure of good luck, too, given the number of nearly mission-ending events which have accumulated in almost half a century of travel. The number of “glitches” and “anomalies” suffered by both Voyagers seems to be on the uptick, too, contributing to the sense that someday, soon perhaps, we’ll hear no more from them.

That day has thankfully not come yet, in no small part due to the computers that the Voyager spacecraft were, in a way, designed around. Voyager was to be a mission unlike any ever undertaken, a Grand Tour of the outer planets that offered a once-in-a-lifetime chance to push science far out into the solar system. Getting the computers right was absolutely essential to delivering on that promise, a task made all the more challenging by the conditions under which they’d be required to operate, the complexity of the spacecraft they’d be running, and the torrent of data streaming through them. Forty-six years later, it’s safe to say that the designers nailed it, and it’s worth taking a look at how they pulled it off.

Volatile (Institutional) Memory

It turns out that getting to the heart of the Voyager computers, in terms of schematics and other technical documentation, wasn’t that easy. For a project with such an incredible scope and which had an outsized impact on our understanding of the outer planets and our place in the galaxy, the dearth of technical information about Voyager is hard to get your head around. Most of the easily accessible information is pretty high-level stuff; the juicy technical details are much harder to come by. This is doubly so for the computers running Voyager, many of the details of which seem to be getting lost in the sands of time.

As a case in point, I’ll offer an anecdote. As I was doing research for this story, I was looking for anything that would describe the architecture of the Flight Data System, one of the three computers aboard each spacecraft and the machine that has been the focus of the recent glitch and recovery effort aboard Voyager 1. I kept coming across a reference to a paper with a most promising title: “Design of a CMOS Processor for use in the Flight Data Subsystem of a Deep Space Probe.” I searched high and low for this paper online, but it appears not to be available anywhere but in a special collection in the library of Wichita State University, where it’s in the personal papers of a former professor who did some work for NASA.

Unfortunately, thanks to ongoing construction, the library has no access to the document right now. The difficulty I had in rounding up this potentially critical document seems to indicate a loss of institutional knowledge of the Voyager program’s history and its technical origins. That became apparent when I reached out to public affairs at Jet Propulsion Lab, where the Voyagers were built, in the hope that they might have a copy of that paper in their archives. Sadly, they don’t, and engineers on the Voyager team haven’t even heard of the paper. In fact, they’re very keen to see a copy if I ever get a hold of it, presumably to aid their job of keeping the spacecraft going.

In the absence of detailed technical documents, the original question remains: How do the computers of Voyager work? I’ll do the best I can to answer that from the existing documentation, and hopefully fill in the blanks later with any other documents I can scrape up.

Good Old TTL

As mentioned above, each Voyager contains three different computers, each of which is assigned different functions. Voyager was the first unmanned mission to include distributed computing, partly because the sheer number of tasks to be executed with precision during the high-stakes planetary fly-bys would exceed the capabilities of any single computer that could be made flyable. There was a social engineering angle to this as well, in that it kept the various engineering teams from competing for resources from a single computer.

Redundancy galore: block diagram for the Command Computer Subsystem (CCS) used on the Viking orbiters. The Voyager CCS is almost identical. Source: NASA/JPL.

To the extent that any one computer in a tightly integrated distributed system such as the one on Voyager can be considered the “main computer,” the Computer and Command Subsystem (CCS) would be it. The Voyager CCS was almost identical to another JPL-built machine, the Viking orbiter CCS. The Viking mission, which put two landers on Mars in the summer of 1976, was vastly more complicated than any previous unmanned mission that JPL had built spacecraft for, most of which used simple sequencers rather than programmable computers.

On Voyager, the CCS is responsible for receiving commands from the ground and passing them on to the other computers that run the spacecraft itself and the scientific instruments. The CCS was built with autonomy and reliability in mind, since after just a few days in space, the communication delay would make direct ground control impossible. This led JPL to make everything about the CCS dual-redundant — two separate power supplies, two processors, two output units, and two complete sets of command buffers. Additionally, each processor could be cross-connected to each output unit, and interrupts were distributed to both processors.

There are no microprocessors in the CCS. Rather, the processors are built from discrete 7400-series TTL chips. The machine does not have an operating system but rather runs bare-metal instructions. Both data and instruction words are 18 bits wide, with the instruction words having a 6-bit opcode and a 12-bit address. The 64 instructions contain the usual tools for moving data in and out of registers and doing basic arithmetic, although there are only commands for adding and subtracting, not for multiplication or division. The processors access 4 kilowords of redundant plated-wire memory, which is similar to magnetic core memory in that it records bits as magnetic domains, but with an iron-nickel alloy plated onto the surface of wires rather than ferrite beads.
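To make that word layout concrete, here is a hypothetical sketch of packing and unpacking an 18-bit CCS-style instruction word into its 6-bit opcode and 12-bit address fields. The mnemonics and opcode numbers are invented for illustration, since the real CCS instruction set assignments aren’t in the public sources discussed here; note that the 12-bit address field is exactly enough to reach all 4 kilowords of memory.

```python
# Hypothetical encoder/decoder for an 18-bit word split into a 6-bit opcode
# and a 12-bit address, as described for the Voyager CCS. Mnemonics and
# opcode values are invented; the real assignments are not public here.
OPCODE_BITS, ADDR_BITS = 6, 12
WORD_MASK = (1 << (OPCODE_BITS + ADDR_BITS)) - 1   # 18 bits -> 0o777777

MNEMONICS = {0o01: "LDA", 0o02: "STA", 0o03: "ADD", 0o04: "SUB"}  # invented

def assemble(opcode, address):
    """Pack a 6-bit opcode and a 12-bit address into one 18-bit word."""
    assert 0 <= opcode < (1 << OPCODE_BITS)    # at most 64 distinct instructions
    assert 0 <= address < (1 << ADDR_BITS)     # addresses 0..4095 (4K words)
    return ((opcode << ADDR_BITS) | address) & WORD_MASK

def disassemble(word):
    """Split an 18-bit word back into a (mnemonic, address) pair."""
    opcode = (word >> ADDR_BITS) & ((1 << OPCODE_BITS) - 1)
    address = word & ((1 << ADDR_BITS) - 1)
    return MNEMONICS.get(opcode, f"OP{opcode:02o}"), address

word = assemble(0o03, 0o1750)               # "ADD the word at address 0o1750"
print(f"{word:06o} ->", disassemble(word))  # 031750 -> ('ADD', 1000)
```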

The Three-Axis Problem

On Voyager, the CCS does almost nothing in terms of flying the spacecraft. The tasks involved in keeping Voyager pointed in the right direction are farmed out to the Attitude and Articulation Control Subsystem, or AACS. Earlier interplanetary probes such as Pioneer were spin-stabilized, meaning they maintained their orientation gyroscopically by rotating the craft around the longitudinal axis. Spin stabilization wouldn’t work for Voyager, since a lot of the science planned for the mission, especially the photographic studies, required a stable platform. This meant that three-axis stabilization was required, and the AACS was designed to accommodate that need.

Voyager’s many long booms complicate attitude control by adding a lot of “wobble”.

The physical design of Voyager injected some extra complexity into attitude control. While previous deep-space vehicles had been fairly compact, Voyager bristles with long booms. Sprouting from the compact bus located behind its huge high-gain antenna are booms for the three radioisotope thermoelectric generators that power the spacecraft, a very long boom for the magnetometers, a shorter boom carrying the heavy imaging instruments, and a pair of very long antennae for the Plasma Wave Subsystem experiment. All these booms tend to wobble a bit when the thrusters fire or actuators move, complicating the calculations needed to stay on course.

The AACS is responsible for running the gyros, thrusters, attitude sensors, and actuators needed to keep Voyager oriented in space. Like the CCS, the AACS has a redundant design using TTL-based processors and 18-bit words. The same 4k of redundant plated-wire memory was used, and many instructions were shared between the two computers. To handle three-axis attitude control in a more memory-efficient manner, the AACS uses index registers to point to the same block of code multiple times.
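The index-register trick is the same one assembly programmers still use: keep one copy of the control routine in memory and re-enter it with an index that selects the per-axis data. Here is a rough sketch of the idea in Python; the gains, errors, and dead-band are placeholder numbers, and the simple bang-bang thruster logic is only a stand-in for whatever control law the real AACS implements.

```python
# Illustration of the AACS memory-saving idea: one shared block of control
# code, entered three times with an index selecting the per-axis data.
# All numbers below are placeholders, not flight parameters.

ERROR    = [0.012, -0.004, 0.020]   # attitude error per axis, radians (example)
RATE     = [0.001,  0.000, -0.002]  # measured body rate per axis, rad/s (example)
GAIN_P   = [4.0, 4.0, 2.5]          # position gain per axis (placeholder)
GAIN_D   = [9.0, 9.0, 6.0]          # rate-damping gain per axis (placeholder)
DEADBAND = 0.05                     # no thruster pulse below this command level

def axis_control(ix):
    """One shared routine; the 'index register' ix picks which axis's data to use."""
    command = GAIN_P[ix] * ERROR[ix] + GAIN_D[ix] * RATE[ix]
    if abs(command) < DEADBAND:
        return 0                        # inside the dead band: coast
    return 1 if command > 0 else -1     # fire the + or - thruster for this axis

for ix, name in enumerate(("roll", "pitch", "yaw")):
    print(f"{name}: thruster {axis_control(ix):+d}")
```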

Years of Boredom, Minutes of Terror

Rounding out the computers of Voyager is the Flight Data Subsystem or FDS, the culprit in the latest “glitch” on Voyager 1, which was traced to a corrupted memory location and nearly ended the extended interstellar mission. Compared with the Viking-descended CCS and AACS, the FDS was to be a completely new kind of computer, custom-made for the demands of a torrent of data from eleven scientific experiments and hundreds of engineering sensors during the high-intensity periods of planetary flybys, while not being overbuilt for the long, boring cruises between the planets.

The FDS was designed strictly to handle the data to and from the eleven separate scientific instruments on Voyager, as well as the engineering data from dozens of sensors installed around the spacecraft. The need for a dedicated data computer was apparent early on in the Voyager design process, when it became clear that the torrent of data streaming from the scientific platforms during flybys would outstrip the capabilities of any of the hard-wired data management systems used in previous deep space probes.

One of the eight cards comprising the Voyager FDS. Covered with discrete CMOS chips, this card bears the “MJS77” designation; “Mariner Jupiter Saturn 1977” was the original name of the Voyager mission. Note the D-sub connectors for inter-card connections. Source: NASA/JPL.

This realization led to an initial FDS design using the same general architecture as the CCS and AACS — dual TTL processors, 18-bit word width, and the same redundant 4k of plated-wire memory. But when the instruction time of a breadboard version of this machine was measured, it turned out to be only about half the speed necessary to support peak flyby data throughput.

Voyager FDS. Source: National Air and Space Museum.

To double the speed, direct memory access circuits were added. This allowed data to move in and out of memory without having to go through the processor first. Further performance gains were made by switching the processor design to CMOS chips, a risky move in the early 1970s. Upping the stakes was the decision to move away from the reliable plated-wire memory to CMOS memory, which could be accessed much faster.

The speed gains came at a price, though: volatility. Unlike plated-wire memory, CMOS memory chips lose their data if the power is lost, meaning a simple power blip could potentially erase the FDS memory at the worst possible time. JPL engineers worked around this with brutal simplicity — rather than power the FDS memories from the main spacecraft power systems, they ran dedicated power lines directly back to the radioisotope thermoelectric generators (RTG) powering the craft. This means the only way to disrupt power to the CMOS memories would be a catastrophic loss of all three RTGs, in which case the mission would be over anyway.

Physically, the FDS was quite compact, especially for a computer built of discrete chips in the early 1970s. Unfortunately, it’s hard to find many high-resolution photos of the flight hardware, but the machine appears to be built from eight separate cards that are attached to a card cage. Each card has a row of D-sub connectors along the top edge, which appear to be used for card-to-card connections in lieu of a backplane. A series of circular MIL-STD connectors provide connection to the spacecraft’s scientific instruments, power bus, communications, and the Data Storage Subsystem (DSS), the digital 8-track tape recorder used to buffer data during flybys.

Next Time?

Even with the relative lack of information on Voyager’s computers, there’s still a lot of territory to cover, including some of the interesting software architecture techniques used, and the details of how new software is uploaded to spacecraft that are currently almost a full light-day distant. And that’s not to mention the juicy technical details likely to be contained in a paper hidden away in some dusty box in a Kansas library. Here’s hoping that I can get my hands on that document and follow up with more details of the Voyager computers.

This Is How a Pen Changed the World

29 April 2024 at 23:00
A render of a BiC Cristal ballpoint pen showing the innards.

Look around you. Chances are, there’s a BiC Cristal ballpoint pen among your odds and ends. Since 1950, it has far outsold the Rubik’s Cube and even the iPhone, and yet, it’s one of the most unsung and overlooked pieces of technology ever invented. And weirdly, it hasn’t had the honor of trademark erosion like Xerox or Kleenex. When you ‘flick a Bic’, you’re using a lighter.

It’s probably hard to imagine writing with a feather and a bottle of ink, but that’s what writing was limited to for hundreds of years. When fountain pens first came along, they were revolutionary, albeit expensive and leaky. In 1900, the world literacy rate stood around 20%, and exorbitantly-priced, unreliable utensils weren’t helping.

Close-up, cutaway render of a leaking ballpoint pen.

In 1888, American inventor John Loud created the first ballpoint pen. It worked well on leather and wood and the like, but absolutely shredded paper, making it almost useless.

One problem was that while the ball worked better than a nib, it had to be an absolutely perfect fit, or ink would either get stuck or leak out everywhere. Then along came László Bíró, who turned instead to the ink to solve the problems of the ballpoint.

Bíró’s ink was oil-based, and sat on top of the paper rather than seeping through the fibers. While gravity and pen angle had been a problem in previous designs, his ink induced capillary action in the pen, allowing it to write reliably from most angles. You’d think this is where the story ends, but no. Bíró charged quite a bit for his pens, which didn’t help the whole world literacy thing.

French businessman Marcel Bich became interested in Bíró’s creation and bought the patent rights for $2 million ($26M in 2024). This is where things get interesting, and when the ballpoint pen becomes incredibly cheap and ubiquitous. In addition to thicker ink, the secret is in precision-machined steel balls, which Marcel Bich was able to manufacture using Swiss watchmaking machinery. When released in 1950, the Bic Cristal cost just $2. Since this vital instrument has continued to be so affordable, world literacy is at 90% today.

When we wrote about the Cristal, we did our best to capture the essence of what about the pen makes continuous, dependable ink transmission possible, but the video below goes much further, with extremely detailed 3D models.

Thanks to both [George Graves] and [Stephen Walters] for the tip!
