
On Egyptian Pyramids and Why It’s Definitely Aliens

By: Maya Posch
1 April 2025 at 14:00

History is rather dull and unexciting to most people, which naturally invites exciting flights of fancy that can range from the innocent to outright conspiracies. Nobody truly believes that the astounding finds and (fully functioning) ancient mechanisms in the Indiana Jones & Uncharted franchises are real, with mostly intact ancient cities waiting for intrepid explorers along with whatever mystical sources of power, wealth or influence formed the civilization’s foundations before its tragic demise. Yet somehow Plato’s fictive Atlantis has taken on a life of its own, along with many other ‘lost’ civilizations, whether real or imagined.

Of course, if these aforementioned movies and video games were realistic, they would center around a big archaeological dig and thrilling finds like pot shards and cuneiform clay tablets, not ways to smite enemies and gain immortality. Nor would it involve solving complex mechanical puzzles to gain access to the big secret chamber, prior to walking out of the readily accessible backdoor. Reality is boring like that, which is why there’s a major temptation to spruce things up. With the Egyptian pyramids as well as similar structures around the world speaking to the human imagination, this has led to centuries of half-baked ideas and outright conspiracies.

Most recently, a questionable 2022 paper hinting at structures underneath the Pyramid of Khafre in Egypt was used for a fresh boost to old ideas involving pyramid power stations, underground cities and other fanciful conspiracies. Although we can all agree that the ancient pyramids in Egypt are true marvels of engineering, are we really on the cusp of discovering that the ancient Egyptians were actually provided with Forerunner technology by extraterrestrials?

The Science of Being Tragically Wrong

A section of the ‘runes’ at Runamo. (Credit: Entheta, Wikimedia)

In defense of fanciful theories regarding the Actual Truth™ about Ancient Egypt and kin, archaeology as we know it today didn’t really develop until the latter half of the 20th century, with the field being mostly a hobbyist thing that people did out of curiosity as well as a desire for riches. Along the way many comical blunders were made, such as the Runamo runes in Sweden that turned out to be just random cracks in dolerite.

Less funny were attempts by colonists to erase Great Zimbabwe (11th – ~17th century CE) and the Kingdom of Zimbabwe after the ruins of the abandoned capital were discovered by European colonists and explored in earnest by the 19th century. Much like the wanton destruction of local cultures in the Americas by European colonists and explorers who considered their own culture, religion and technology to be clearly superior, the history of Great Zimbabwe was initially rewritten so that no thriving African society ever formed on its own, but was the result of outside influences.

In this regard it’s interesting how many harebrained ideas about archaeological sites have now effectively flipped, with mystical and mythical properties being assigned and these ‘Ancients’ being almost worshipped. Clearly, aliens visited Earth and that led to pyramids being constructed all around the globe. These would also have been the same aliens or lost civilizations that had technology far beyond today’s cutting edge, putting Europe’s fledgling civilization to shame.

Hence people keep dogpiling on especially the pyramids of Giza and its surrounding complex, assigning mystical properties to their ventilation shafts and expecting hidden chambers with technology and treasures interspersed throughout and below the structures.

Lost Technology

The Giant’s Causeway in Northern Ireland. (Credit: code poet, Wikimedia)

The idea of ‘lost technology’ is a pervasive one, mostly buoyed by the fact that you cannot prove a negative, only note the absence of evidence. Much like the possibility of a teapot orbiting the Sun right now, you cannot disprove that the Ancient Egyptians had hyper-advanced power plants using zero point energy back around 3,600 BCE. This ties in with the idea of ‘lost civilizations’, which really caught on around the Victorian era.

Such romanticism for a non-existent past led to the idea of Atlantis as a real, lost civilization becoming pervasive, with the 1960s seeing significant hype around the Bimini Road. This undersea rock formation in the Bahamas was said to have been part of Atlantis, but is actually a perfectly cromulent geological formation. More recently a couple of German tourists got into legal trouble while trying to prove a connection between Egypt’s pyramids and Atlantis, a theory that refuses to die along with the notion that Atlantis was some kind of hyper-advanced civilization and not just a fictional society that Plato concocted to illustrate the folly of man.

Admittedly there is a lot of poetry in all of this when you consider it from that angle.

Welcome to Shangri-La… or rather Shambhala as portrayed in Uncharted 3.

People have spent decades of their lives and countless sums of money trying to find Atlantis, Shangri-La (possibly inspired by Shambhala), El Dorado and similar fictional locations. Iram of the Pillars, which featured in Uncharted 3: Drake’s Deception, is one of the lost cities mentioned in the Qur’an, and is incidentally another great civilization said to have met a grim end through divine punishment. Iram is often identified with Ubar, commonly known as the Atlantis of the Sands.


All of this is reminiscent of the Giant’s Causeway in Northern Ireland, and the corresponding formation at Fingal’s Cave on the Scottish isle of Staffa, where eons ago molten basalt cooled and contracted into basalt columns, much in the way that drying mud will crack in semi-regular patterns. This particular natural formation did lead to many local myths, including one about a giant building a causeway across the North Channel, hence the name.

Fortunately for this location, no ‘lost civilization’ tag became attached, and thus it remains a curious demonstration of how purely natural formations can create structures that one might assume to have required intelligence, thus providing fuel for conspiracies. So far only ‘Young Earth’ conspiracy folk have put a claim on this particular site.

What we can conclude is that, much like the Victorian age that spawned countless works of fiction on the topic, many of these modern-day stories appear to be rooted in a kind of romanticism for a past that never existed, with those affected interpreting natural patterns as something more, in a sure sign of confirmation bias.

Tourist Traps

Tomb of the First Emperor Qin Shi Huang Di, Xi’an, China (Credit: Aaron Zhu)

One can roughly correlate the number of tourist visits to a site with the likelihood of wild theories being dreamed up about it. These include the Egyptian pyramids, but also similar structures at what used to be the sites of the Aztec and Maya civilizations. Similarly, the absolutely massive mausoleum of Qin Shi Huang in China with its world-famous Terracotta Army has led to incredible speculation about what might still be hidden inside the unexcavated tomb mound, such as entire seas and rivers of mercury that moved mechanically to simulate real bodies of water, a simulated starry sky, crossbows set to take out trespassers, and incredible riches.

Many of these features were described by Sima Qian in the first century BCE, who may or may not have been truthful in his biography of Qin Shi Huang. Meanwhile, China’s authorities have wisely put further excavations on hold, as they have found that many of the recovered artefacts degrade very quickly once exposed to air. The paint on the terracotta figures began to flake off rapidly after excavation, for example, reducing them to the plain figures which we are familiar with.

Tourism can be as damaging as careless excavation. As popular as the pyramids at Giza are, centuries of tourism have taken their toll, with vandalism, graffiti and theft increasing rapidly since the 20th century. The Great Pyramid of Khufu had already been pilfered for building materials over the course of millennia by the local population, but due to tourism some of its remaining top stones were unceremoniously tipped over the side to make a larger platform where tourists could have some tea while gazing out over the Giza Plateau, as detailed in a recent video on the History for Granite channel:

The recycling of building materials from antique structures was also the cause of the demise of the Labyrinth at the foot of the pyramid of Amenemhat III at Hawara. Once an architectural marvel, with reportedly twelve roofed courts spanning a total of 28,000 m2, today only fragments remain of its existence. This sadly is how most marvels of the Ancient World end up: looted ruins, ashes and shards, left in the sand and mud or reclaimed by nature, from which, with a lot of patience and the occasional stroke of fortune, we can piece together a picture of what they once may have looked like.

Pyramid Power

Cover of The Giza Power Plant book. (Credit: Christopher Dunn)

When in light of all this we look at the claims made about the Pyramid of Khafre and the persistent conspiracies regarding this and other pyramids hiding great secrets, we can begin to see something of a pattern. Some people have really bought into these fantasies, while for others it’s just another way to embellish a location, to attract more tourists and sell more copies of their latest book on the extraterrestrial nature of pyramids and how they are actually amazing lost technologies. This latter category is called pseudoarcheology.

Pyramids, of course, have always held magical powers, but the idea that they are literal power plants seems to have been coined by one Christopher Dunn, with the publication of his pseudo-archeological book The Giza Power Plant in 1998. That there would be more structures underneath the Pyramid of Khafre is a more recent invention, however. Feeding this particular flight of fancy appears to be a 2022 paper by Filippo Biondi and Corrado Malanga, in which synthetic aperture radar (SAR) was used to examine said pyramid’s interior and subsurface features.

Somehow this got turned into claims about multiple deep vertical wells descending 648 meters, along with other structures. Shared mostly via conspiracy channels, this wildly extrapolates from claims made in the paper by Biondi et al., with said SAR-based claims never having been peer-reviewed or independently corroborated. The Rational Wiki entry savagely tosses these and other claims related to the Giza pyramids under the category of ‘pyramidiots’.

The art that conspiracy nuts produce when provided with generative AI tools. (Source: Twitter)

Back in the real world, archaeologists have found a curious L-shaped area underneath a royal graveyard near Khufu’s pyramid that was apparently later filled in, but which seems to lead to a deeper structure. This is likely to be part of the graveyard, but may also have been a feature that was abandoned during construction. Currently this area is being excavated, so we’re likely to figure out more details after archaeologists have finished gently sifting through tons of sand and gravel.

There is also the ScanPyramids project, which uses non-destructive and non-invasive techniques to scan Old Kingdom-era pyramids, such as muon tomography and infrared thermography. This way the internal structure of these pyramids can be examined in-depth. One finding was that of a number of ‘voids’, which could mean any of a number of things, but most likely do not contain world-changing secrets.

To this day the most credible view is still that the pyramids of the Old Kingdom were used as tombs, though unlike the mastabas and similar tombs, there is a credible argument to be made that rather than being hidden away, these pyramids were designed as eternal monuments to the pharaoh. They would be open for worship of the pharaoh, hence the ease of getting inside them. Ironically this would have made them more secure against graverobbers, which was a great idea until the demise of the Ancient Egyptian civilization.

This is a point that’s made succinctly on the History for Granite channel, with the conclusion being that this goal of ‘inspiring awe’ to worshippers is still effective today, simply judging by the millions of tourists each year to these monuments, and the tall tales that they’ve inspired.

AMSAT-OSCAR 7: the Ham Satellite That Refused to Die

By: Maya Posch
29 March 2025 at 20:00

When the AMSAT-OSCAR 7 (AO-7) amateur radio satellite was launched in 1974, its expected lifespan was about five years. The plucky little satellite made it to 1981, when a battery failure caused it to be written off as dead. Then, in 2002, it came back to life. The prevailing theory is that one of the cells in the satellite’s NiCd battery pack, in an extremely rare event, failed open — thus allowing the satellite to run (intermittently) off its solar panels.

A recent video by [Ben] on the AE4JC Amateur Radio YouTube channel goes over the construction of AO-7; its operation, death and subsequent revival are covered, as well as a recent QSO (direct contact).

The battery is made up of multiple individual cells.

The solar panels covering this satellite provided a grand total of 14 watts at maximum illumination, which later dropped to 10 watts, making for a pretty small power budget. The entire satellite was assembled in a ‘clean room’ consisting of a sectioned off part of a basement, with components produced by enthusiasts associated with AMSAT around the world. Onboard are two radio transponders: Mode A at 2 meters and Mode B at 10 meters, as well as four beacons, three of which are active due to an international treaty affecting the 13 cm beacon.

Positioned in a geocentric LEO orbit (1,447 – 1,465 km), the satellite is, quite amazingly, still mostly operational after 50 years. Most of this is due to how it smartly uses the Earth’s magnetic field for alignment with magnets, as well as the impact of photons to maintain its spin. This passive control, combined with the relatively high altitude, should allow AO-7 to function pretty much indefinitely while the PV panels keep producing enough power. All because a NiCd battery failed in a very unusual way.
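Out of curiosity, the quoted altitude band is enough to sanity-check the orbit. A quick sketch using Kepler’s third law, assuming standard values for Earth’s mean radius and gravitational parameter and the mid-range altitude, gives AO-7’s orbital period:

```python
import math

# Estimate the orbital period from the altitude quoted above, using
# Kepler's third law: T = 2*pi*sqrt(a^3 / mu). Standard values for
# Earth's gravitational parameter and mean radius are assumed.
MU_EARTH = 398_600.4418  # km^3/s^2
EARTH_RADIUS = 6_371.0   # km, mean radius

def orbital_period_minutes(altitude_km: float) -> float:
    """Period of a circular orbit at the given altitude, in minutes."""
    a = EARTH_RADIUS + altitude_km  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# Mid-range of AO-7's quoted 1,447 - 1,465 km altitude band.
print(f"{orbital_period_minutes(1456):.1f} minutes")  # roughly 115 minutes
```

That works out to roughly 115 minutes per orbit, or about a dozen passes a day for operators to work with.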

Modern Computing’s Roots or The Manchester Baby

19 March 2025 at 23:00
Closeup of the original Manchester Baby CRT screen

In the heart of Manchester, UK, a groundbreaking event took place in 1948: the first modern computer, known as the Manchester Baby, ran its very first program. The Baby’s ability to execute stored programs, developed with guidance from John von Neumann’s theory, marks it as a pioneer in the digital age. This fascinating chapter in computing history not only reshapes our understanding of technology’s roots but also highlights the incredible minds behind it. The original article, including a video transcript, sits here at [TheChipletter]’s.

So, what made this hack so special? The Manchester Baby, though a relatively simple prototype, was the first fully electronic computer to successfully run a program from memory. Built by a team with little formal experience in computing, the Baby featured a unique cathode-ray tube (CRT) as its memory store – a bold step towards modern computing. It didn’t just crunch numbers; it laid the foundation for all future machines that would use memory to store both data and instructions. Running a test to find the highest factor of a number, the Baby performed 3.5 million operations over 52 minutes. Impressive for its time.
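For context, the task of that first program is simple to state in modern form. Note that the sketch below uses trial division purely for clarity; the real Baby had no divide instruction and found factors by repeated subtraction, which is why millions of operations were needed:

```python
# The task of the Baby's first program, stated in modern form: find
# the highest proper factor of an integer. The actual machine had no
# divide instruction and worked by repeated subtraction instead.
def highest_proper_factor(n: int) -> int:
    for candidate in range(n - 1, 0, -1):
        if n % candidate == 0:
            return candidate
    return 1  # fall-through for n == 1, which has no proper factor

# The famous first run searched for the highest factor of 2**18.
print(highest_proper_factor(2**18))  # 131072
```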

Despite criticisms that it was just a toy computer, the Baby’s significance shines through. It was more than just a prototype; it was proof of concept for the von Neumann architecture, showing us that computers could be more than complex calculators. While debates continue about whether it or the ENIAC should be considered the first true stored-program computer, the Baby’s role in the evolution of computing can’t be overlooked.

So What is a Supercomputer Anyway?

By: Maya Posch
19 March 2025 at 14:00

Over the decades many designations have been coined to classify computer systems, usually when they got used in different fields or when technological improvements caused significant shifts. While the very first electronic computers were very limited and often not programmable, they would soon morph into something that we’d recognize today as a computer, starting with World War 2’s Colossus and ENIAC, which saw use in cryptanalysis and military weapons programs, respectively.

The first commercial digital electronic computer wouldn’t appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5 ton systems mostly found their way to universities and kin, where they’d find welcome use in engineering, architecture and scientific calculations. This became the focus of new computer systems, effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.

A few decades later, more computing power could be crammed into less space than ever before, including ever-higher-density storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper- and mainframe computer systems, one could be excused for asking the question: what even is a supercomputer?

Today’s Supercomputers

ORNL’s Summit supercomputer, fastest until 2020 (Credit: ORNL)

Perhaps a fair way to classify supercomputers is that the ‘supercomputer’ aspect is a highly time-limited property. During the 1940s, Colossus and ENIAC were without question the supercomputers of their era, while 1976’s Cray-1 wiped the floor with everything that came before, yet all of these are archaic curiosities next to today’s top two supercomputers. Both the El Capitan and Frontier supercomputers are exascale (1+ exaFLOPS in double-precision IEEE 754 calculations) machines, based around commodity x86_64 CPUs in a massively parallel configuration.

Taking up 700 m2 of floor space at the Lawrence Livermore National Laboratory (LLNL) and drawing 30 MW of power, El Capitan’s 43,808 AMD EPYC CPUs are paired with the same number of AMD Instinct MI300A accelerators, each containing 24 Zen 4 cores plus CDNA3 GPU and 128 GB of HBM3 RAM. Unlike the monolithic ENIAC, El Capitan’s 11,136 nodes, containing four MI300As each, rely on a number of high-speed interconnects to distribute computing work across all cores.

At LLNL, El Capitan is used for effectively the same top secret government things as ENIAC was, while Frontier at Oak Ridge National Laboratory (ORNL) was the fastest supercomputer before El Capitan came online about three years later. Although currently LLNL and ORNL have the fastest supercomputers, there are many more of these systems in use around the world, even for innocent scientific research.

Looking at the current list of supercomputers, such as today’s Top 9, it’s clear that not only can supercomputers perform many more operations per second, they are also invariably massively parallel computing clusters. This wasn’t an easy change to make, as parallel computing comes with a whole stack of complications and problems.

The Parallel Computing Shift

ILLIAC IV massively parallel computer’s Control Unit (CU). (Credit: Steve Jurvetson, Wikimedia)

The first massively parallel computer was the ILLIAC IV, conceptualized by Daniel Slotnick in 1952 and first successfully put into operation in 1975 when it was connected to ARPANET. Although only one quadrant was fully constructed, it produced 50 MFLOPS compared to the Cray-1’s 160 MFLOPS a year later. Despite the immense construction costs and spotty operational history, it provided a most useful testbed for developing parallel computation methods and algorithms until the system was decommissioned in 1981.

There was a lot of pushback against the idea of massively parallel computation, however, with Seymour Cray famously likening the use of many parallel vector processors instead of a single large one to ‘plowing a field with 1024 chickens instead of two oxen’.

Ultimately there is only so far you can scale a singular vector processor, of course, while parallel computing promised much better scaling, as well as the use of commodity hardware. A good example of this is the so-called Beowulf cluster, named after the original 1994 parallel computer built by Thomas Sterling and Donald Becker at NASA. Such a cluster can use plain desktop computers, wired together using, for example, Ethernet, with open-source libraries like Open MPI enabling massively parallel computing without a lot of effort.

Not only does this approach enable the assembly of a ‘supercomputer’ from cheap-ish, off-the-shelf components, it’s also effectively the approach used for LLNL’s El Capitan, just with not-so-cheap hardware and interconnects, yet still cheaper than trying to build a monolithic vector processor with the same raw processing power once the messaging overhead of a cluster is taken into account.

Mini And Maxi

David Lovett of Usagi Electric fame sitting among his FPS minisupercomputer hardware. (Credit: David Lovett, YouTube)

One way to look at supercomputers is that it’s not about the scale, but what you do with it. Much like how government, large businesses and universities would end up with ‘Big Iron’ in the form of mainframes and supercomputers, there was a big market for minicomputers too. Here ‘mini’ meant something like a PDP-11 that’d comfortably fit in the corner of an average room at an office or university.

The high-end versions of minicomputers were called ‘superminicomputers’, not to be confused with minisupercomputers, which are another class entirely. During the 1980s there was a brief surge in this latter class of supercomputers, designed to bring solid vector computing and similar supercomputer feats down to a size and price tag that might entice departments and other customers who would otherwise not even begin to consider such an investment.

The manufacturers of these ‘budget-sized supercomputers’ were generally not the typical big computer manufacturers, but instead smaller companies and start-ups like Floating Point Systems (later acquired by Cray) who sold array processors and similar parallel, vector computing hardware.

Recently David Lovett (AKA Mr. Usagi Electric) embarked on a quest to recover and reverse-engineer as much FPS hardware as possible, with one of the goals being to build a full minisupercomputer system as companies and universities might have used them in the 1980s. This would involve attaching such an array processor to a PDP-11/44 system.

Speed Versus Reliability

Amidst all of these definitions, the distinction between a mainframe and a supercomputer is at least much more straightforward. A mainframe is a computer system that’s designed for bulk data processing with as much built-in reliability and redundancy as the price tag allows for. A modern example is IBM’s Z series of mainframes, with the ‘Z’ standing for ‘zero downtime’. These kinds of systems are used by financial institutions and anywhere else where downtime is counted in millions of dollars going up in (literal) flames every second.

This means hot-swappable processor modules, hot-swappable and redundant power supplies, not to mention hot spares and a strong focus on fault tolerant computing. All of these features are less relevant for a supercomputer, where raw performance is the defining factor when running days-long simulations and when other ways to detect flaws exist without requiring hardware-level redundancy.

Considering the brief lifespan of supercomputers (currently on the order of a few years) compared to mainframes (decades), and the many years that the microcomputers on our desks can last, the life of a supercomputer seems like that of a bright and very brief flame, indeed.

Top image: Marlyn Wescoff and Betty Jean Jennings configuring plugboards on the ENIAC computer (Source: US National Archives)

Combined Crypto, Anglo-American Style

7 March 2025 at 03:00

If you think about military crypto machines, you probably think about the infamous Enigma machine. However, as [Christos T.] reminds us, there were many others and, in particular, the production of a “combined cipher” machine for the US and the UK to use for a variety of purposes.

The story opens in 1941 when ships from the United States and the United Kingdom were crossing the Atlantic together in convoys. The US wanted to use the M-138A and M-209 machines, but the British were unimpressed. They were interested in the M-134C, but it was too secret to share, so they reached a compromise.

Starting with a British Typex, a US Navy officer developed an attachment with additional rotors, converting the Typex into a CCM, or Combined Cipher Machine. Two earlier versions of the attachment worked with the M-134C. However, the CSP 1800 (or CCM Mark III) was essentially the same unit made to attach to the Typex. Development cost about $6 million — a huge sum for the middle of the last century.

By the end of 1943, there were enough machines to work with the North Atlantic convoys. [Christos] says at least 8,631 machines left the factory line. While the machine was a marvel, it did have a problem. With certain settings, the machine had a very low cipher period (338 compared to 16,900 for Enigma). This wasn’t just theoretical, either. A study showed that bad settings showed up seven times in about two months on just one secure circuit.

This led to operational changes to forbid certain settings and restrict the maximum message length. The machine saw service at the Department of State until 1959. There were several variations in use within NATO as late as 1962. It appears the Germans didn’t break CCM during the war, but the Soviets may have been able to decode traffic from it in the post-war period.
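Those period figures come from how far the rotors travel before the machine’s state repeats. As a toy illustration only (an odometer-style stepping model, not the CCM’s or Enigma’s actual stepping mechanics), counting steps until the rotors return to their starting positions reproduces the 16,900 figure quoted above:

```python
# Toy odometer model of rotor stepping: each step advances the fast
# rotor, and a rotor completing a full turn carries into its neighbour.
# The cipher period is the number of steps before the machine returns
# to its starting state. Illustration only; real rotor machines used
# more involved stepping, which is why bad settings could collapse the
# period so dramatically.
def cipher_period(rotor_sizes):
    positions = [0] * len(rotor_sizes)
    steps = 0
    while True:
        for i, size in enumerate(rotor_sizes):
            positions[i] = (positions[i] + 1) % size
            if positions[i] != 0:
                break  # no carry into the next rotor
        steps += 1
        if all(p == 0 for p in positions):
            return steps

# Rotor travels of 26, 25 and 26 reproduce the 16,900-step figure.
print(cipher_period([26, 25, 26]))  # 16900
```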

You can see a CCM/Typex combo in the video below from the Cryptomuseum. Of course, the Enigma is perhaps the most famous of these machines. These days, you can reproduce one easily.

DataSaab: Sweden’s Lesser-Known History in Computing

23 February 2025 at 00:00
DataSaab mainframe

Did you know that the land of flat-pack furniture and Saab automobiles played a serious role in the development of minicomputers, the forerunners of our home computers? If not, read on for a bit of history. You can also go ahead and watch the video below, which tells it all with a ton of dug-up visuals.

Sweden’s early computer development was marked by significant milestones, beginning with the relay-based Binär Aritmetisk Relä-Kalkylator (BARK) in 1950, followed by the vacuum tube-based Binär Elektronisk SekvensKalkylator (BESK) in 1953. These projects were spearheaded by the Swedish Board for Computing Machinery (Matematikmaskinnämnden), established in 1948 to advance the nation’s computing capabilities.

In 1954, Saab ventured into computing by obtaining a license to replicate BESK, resulting in the creation of Saab’s räkneautomat (SARA). This initiative aimed to support complex calculations for the Saab 37 Viggen jet fighter. Building on this foundation, Saab’s computer division, later known as Datasaab, developed the D2 in 1960 – a transistorized prototype intended for aircraft navigation. The D2’s success led to the CK37 navigational computer, which was integrated into the Viggen aircraft in 1971.

Datasaab also expanded into the commercial sector with the D21 in 1962, producing approximately 30 units for various international clients. Subsequent models, including the D22, D220, D23, D5, D15, and D16, were developed to meet diverse computing needs. In 1971, Datasaab’s technologies merged with Standard Radio & Telefon AB (SRT) to form Stansaab AS, focusing on real-time data systems for commercial and aviation applications. This entity eventually evolved into Datasaab AB in 1978, which was later acquired by Ericsson in 1981, becoming part of Ericsson Information Systems.

Parallel to these developments, Åtvidabergs Industrier AB (later Facit) produced the FACIT EDB in 1957, based on BESK’s design. This marked Sweden’s first fully domestically produced computer, with improvements such as expanded magnetic-core memory and advanced magnetic tape storage. The FACIT EDB was utilized for various applications, including meteorological calculations and other scientific computations. For a short time, Saab even partnered with the American company Univac in a joint venture called Saab-Univac – a well-known name in computer history.

These pioneering efforts by Swedish organizations laid the groundwork for the country’s advancements in computing technology, influencing both military and commercial sectors. The video below has lots and lots more to unpack and goes into greater detail on collaborations and (missed) deals with great names in history.

You Know This Font, But You Don’t Really Know It

By: Jenny List
16 February 2025 at 00:00

Typography enthusiasts reach a point at which they can recognise a font after seeing only a few letters in the wild, and usually identify its close family if not the font itself. It’s unusual then for a font to leave them completely stumped, but that’s where [Marcin Wichary] found himself. He noticed a font which many of you will also have seen, on typewriter and older terminal keys. It has a few unusual features that run contrary to normal font design such as slightly odd-shaped letters and a constant width line, and once he started looking, it appeared everywhere. Finding its origin led back well over a century, and led him to places as diverse as New York street furniture and NASA elevators.

The font in question is called Gorton, and it came from the Gorton Machine Co., a Wisconsin manufacturer. It’s a font designed for a mechanical router, which is why it appears on so much custom signage and on utilitarian components such as keyboard keys. Surprisingly, its history leads back into the 19th century, predating many of the much more well-known sans serif fonts. So keep an eye out for it on your retro tech, and you’ll find that you’ve seen a lot more of it than you ever knew. If you are a fellow font-head, you might also know the Hershey Font, and we just ran a piece on magnetic check fonts last week.

Thanks [Martina] for the tip!

Akadimia Ai

By: EasyWithAI
20 September 2023 at 19:38
Experience a transformative educational journey with Akadimia that opens new dimensions of learning. This tool lets you engage in insightful conversations with historical figures like Nikola Tesla through cutting-edge augmented reality (AR) technology, making the past come alive. Akadimia lets you unleash your curiosity and immerse yourself in a unique learning experience. You can start […]
