We don’t think of computers as something you’d find in the 17th century. But [Levi McClain] found plans for one in a book — books, actually — by [Athanasius Kircher] about music. The arca musarithmica, a machine to allow people with no experience to compose church music, might not fit our usual definition of a computer, but as [Levi] points out in the video below, it shares a number of similarities with mechanical computers like slide rules.
Apparently, there are a few of these left in the world, but as you’d expect, they are quite rare. So [Levi] decided to take the plans from the book along with some information available publicly and build his own.
The computer is a box of wooden cards — tablets — with instructions written on them. Honestly, we don’t know enough about music theory to quite get the algorithm. [Kircher] himself had this to say in his book about the device:
Mechanical music-making is nothing more than a particular system invented by us whereby anyone, even the ἀμουσος [unmusical] may, through various applications of compositional instruments compose melodies according to a desired style. We shall briefly relate how this mechanical music-making is done and, lest we waste time with prefatory remarks, we shall begin with the construction of the Musarithmic Ark.
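As best we can tell from descriptions of surviving examples, the tablets hold pre-computed rows of numbers (the ‘musarithms’), grouped by the syllable count of the text to be set, and each number selects a scale degree in whichever mode the composer has chosen. Here’s a toy C sketch of that lookup-and-map idea — the tables and mode below are invented for illustration and are not Kircher’s actual data:

```c
/* A toy sketch of the arca's core idea (invented data, not Kircher's tables):
 * pick a pre-computed row of scale degrees ("musarithm") matching the text's
 * syllable count, then map each degree onto a chosen mode. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    /* Hypothetical musarithms for a four-syllable phrase. */
    const int tablets[3][4] = {
        {1, 3, 2, 1},
        {5, 4, 3, 1},
        {1, 2, 7, 1},
    };
    /* Scale degrees 1..7 realized in one mode (D dorian, our assumption). */
    const char *mode[7] = {"D", "E", "F", "G", "A", "B", "C"};

    srand((unsigned)time(NULL));
    const int *row = tablets[rand() % 3];   /* "draw" one tablet row */
    for (int i = 0; i < 4; i++)
        printf("%s ", mode[row[i] - 1]);    /* degree -> pitch in the mode */
    printf("\n");
    return 0;
}
```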
If you want to try it yourself, you won’t need to break out the woodworking tools. You can find a replica on the web, of course. Let us know if you set any Hackaday posts to music.
We know not everyone thinks something mechanical can be a computer, but we disagree. True, some are more obvious than others.
One of the things that’s stopping us from re-using old phones, of course, is the lack of easy access to the peripherals. On the average phone, you’ve got one USB port and that’s it. The Citronics dev kit provides all sorts of connectivity: 4x USB 2.0, 1x Ethernet 10/100M, and a Raspberry Pi header (UART, SPI, I2C, GPIO). At the same time, for better or worse, they’ve done away with the screen and its touch interface, and the camera too, but they seem to be keeping all of the RF capabilities.
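Assuming the header ends up exposing GPIO the way a Raspberry Pi does (Citronics hasn’t published the details yet), driving a pin from Linux userspace might look something like this libgpiod (v1 API) sketch; the chip name and line offset here are placeholders, not anything from the actual board:

```c
/* Minimal libgpiod (v1 API) sketch: toggle one GPIO line from Linux.
 * "gpiochip0" and line 17 are placeholders -- the real names depend on
 * the phone's SoC and how the header is wired. */
#include <gpiod.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    struct gpiod_chip *chip = gpiod_chip_open_by_name("gpiochip0");
    if (!chip) { perror("gpiod_chip_open_by_name"); return 1; }

    struct gpiod_line *line = gpiod_chip_get_line(chip, 17);
    if (!line || gpiod_line_request_output(line, "blink", 0) < 0) {
        perror("gpio line setup");
        gpiod_chip_close(chip);
        return 1;
    }

    for (int i = 0; i < 10; i++) {      /* blink an LED ten times */
        gpiod_line_set_value(line, i % 2);
        sleep(1);
    }

    gpiod_chip_close(chip);             /* also releases the line */
    return 0;
}
```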
The whole thing runs Linux, which means that this won’t work with every phone out there, but projects like PostmarketOS and others will certainly broaden the range of usable devices. And stripping off the camera and screen has the secondary advantages of removing the parts that get most easily broken and have the least support from custom Linux distros.
We wish we had more details about the specifics of the break-out boards, but we like the idea. How long before we see an open-source implementation of something similar? There are so many cheap used and broken cellphones out there that it’s certainly a worthwhile project!
The Vectrex console from the early 1980s holds a special place in retrocomputing lore thanks to its vector display — uniquely for a home system, it painted its graphics to the screen by drawing them with an electron beam, instead of scanning across a raster as a TV screen would. It thus came with its own CRT, and a distinctive vertical screen form factor.
For all that though, it was just a games console, but there were rumors that it might have become more. [Intric8] embarked on a quest to find some evidence, and eventually turned up what little remains in a copy of Electronic Games magazine. A keyboard, RAM and ROM expansion, and a wafer drive were in the works, which would have made the Vectrex a quirky equal of most of what the likes of Commodore and Sinclair had to offer.
It’s annoying that the post doesn’t specify which issue of the magazine has the piece, and after a bit of browsing archive.org we’re sorry to say we can’t find it ourselves. But the piece itself bears a second look, for what it tells us about the febrile world of the 8-bit games industry. This was a time of intense competition in the period around the great console crash, and developers would claim anything to secure a few column inches in a magazine. That’s not to say the people behind the Vectrex wouldn’t have produced a home computer add-on for it if they could have, but we remember as teenagers being suckered in by too many of these stories. We still kinda want one, but we’d be surprised if any ever existed.
Over the decades, many designations have been coined to classify computer systems, usually when they got used in new fields or when technological improvements caused significant shifts. While the very first electronic computers were limited and often not programmable, they would soon morph into something that we’d recognize today as a computer, starting with World War 2’s Colossus and ENIAC, which saw use in cryptanalysis and military weapons programs, respectively.
The first commercial digital electronic computer wouldn’t appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5-ton systems mostly found their way to universities and kin, where they’d find welcome use in engineering, architecture, and scientific calculations. This became the focus of new computer systems, effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.
A few decades later, more computer power could be crammed into less space than ever before, along with ever higher-density storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper- and mainframe computer systems, one could be excused for asking the question: what even is a supercomputer?
Today’s Supercomputers
ORNL’s Summit supercomputer, fastest until 2020 (Credit: ORNL)
Perhaps a fair way to classify supercomputers is that the ‘supercomputer’ aspect is a highly time-limited property. During the 1940s, Colossus and ENIAC were without question the supercomputers of their era, while 1976’s Cray-1 wiped the floor with everything that came before, yet all of these are archaic curiosities next to today’s top two supercomputers. Both the El Capitan and Frontier supercomputers are exascale (1+ exaFLOPS in double precision IEEE 754 calculations) level machines, based around commodity x86_64 CPUs in a massively parallel configuration.
Taking up 700 m2 of floor space at the Lawrence Livermore National Laboratory (LLNL) and drawing 30 MW of power, El Capitan’s 43,808 AMD EPYC CPUs are paired with the same number of AMD Instinct MI300A accelerators, each containing 24 Zen 4 cores plus CDNA3 GPU and 128 GB of HBM3 RAM. Unlike the monolithic ENIAC, El Capitan’s 11,136 nodes, containing four MI300As each, rely on a number of high-speed interconnects to distribute computing work across all cores.
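To put those figures together with some back-of-the-envelope arithmetic: one exaFLOPS is 10^18 floating-point operations per second, so at the quoted 30 MW the machine delivers at least 10^18 / (3 × 10^7) ≈ 3.3 × 10^10 FLOPS per watt, or roughly 33 GFLOPS/W. That’s a floor based on the 1+ exaFLOPS figure above, not a measured benchmark number.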
At LLNL, El Capitan is used for effectively the same top secret government things as ENIAC was, while Frontier at Oak Ridge National Laboratory (ORNL) was the fastest supercomputer before El Capitan came online about three years later. Although currently LLNL and ORNL have the fastest supercomputers, there are many more of these systems in use around the world, even for innocent scientific research.
Looking at the current list of supercomputers, such as today’s Top 9, it’s clear that not only can supercomputers perform a lot more operations per second, they are also invariably massively parallel computing clusters. This wasn’t a change that came easily, as parallel computing brings a whole stack of complications and problems.
The Parallel Computing Shift
ILLIAC IV massively parallel computer’s Control Unit (CU). (Credit: Steve Jurvetson, Wikimedia)
The first massively parallel computer was the ILLIAC IV, conceptualized by Daniel Slotnick in 1952 and first successfully put into operation in 1975 when it was connected to ARPANET. Although only one quadrant was fully constructed, it produced 50 MFLOPS compared to the Cray-1’s 160 MFLOPS a year later. Despite the immense construction costs and spotty operational history, it provided a most useful testbed for developing parallel computation methods and algorithms until the system was decommissioned in 1981.
There was a lot of pushback against the idea of massively parallel computation, however, with Seymour Cray famously likening the use of many parallel vector processors instead of a single large one to ‘plowing a field with 1024 chickens instead of two oxen’.
Ultimately there is only so far you can scale a single vector processor, of course, while parallel computing promised much better scaling, as well as the use of commodity hardware. A good example of this is a so-called Beowulf cluster, named after the original 1994 parallel computer built by Thomas Sterling and Donald Becker at NASA. This can use plain desktop computers, wired together with, say, Ethernet, using open-source libraries like Open MPI to enable massively parallel computing without a lot of effort, as the minimal sketch below shows.
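As a flavor of how little code it takes, here’s a minimal MPI program in C of the sort you’d run across a Beowulf cluster; it’s a sketch, not anyone’s production code:

```c
/* Beowulf-style "hello" with MPI: each process reports in, and rank 0
 * sums a value across the cluster. Build with mpicc, launch with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many of us? */

    int local = rank + 1, total = 0;
    /* Combine every process's value on rank 0 -- the message passing
     * that lets a pile of cheap boxes act as one machine. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    printf("rank %d of %d checking in\n", rank, size);
    if (rank == 0)
        printf("sum across cluster: %d\n", total);

    MPI_Finalize();
    return 0;
}
```

Run it with something like `mpirun -np 4 --hostfile hosts ./cluster_sum`, where the hostfile lists the machines in your cluster.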
Not only does this approach enable the assembly of a ‘supercomputer’ from cheap-ish, off-the-shelf components, it’s also effectively the approach used for LLNL’s El Capitan, just with far-from-cheap hardware and interconnects. Even so, it’s cheaper than trying to build a monolithic vector processor with the same raw processing power, once the messaging overhead of a cluster is taken into account.
Mini And Maxi
David Lovett of Usagi Electric fame sitting among his FPS minisupercomputer hardware. (Credit: David Lovett, YouTube)
One way to look at supercomputers is that it’s not about the scale, but what you do with it. Much like how government, large businesses and universities would end up with ‘Big Iron’ in the form of mainframes and supercomputers, there was a big market for minicomputers too. Here ‘mini’ meant something like a PDP-11 that’d comfortably fit in the corner of an average room at an office or university.
The high-end versions of minicomputers were called ‘superminicomputers’, not to be confused with minisupercomputers, which are another class entirely. During the 1980s there was a brief surge in this latter class of supercomputers, designed to bring solid vector computing and similar supercomputer feats down to a size and price tag that might entice departments and other customers who’d otherwise not even begin to consider such an investment.
The manufacturers of these ‘budget-sized supercomputers’ were generally not the typical big computer manufacturers, but instead smaller companies and start-ups like Floating Point Systems (later acquired by Cray) who sold array processors and similar parallel, vector computing hardware.
Recently, David Lovett (AKA Mr. Usagi Electric) embarked on a quest to recover and reverse-engineer as much FPS hardware as possible, with one of the goals being to build a full minisupercomputer system like the ones companies and universities might have used in the 1980s. This would involve attaching such an array processor to a PDP-11/44 system.
Speed Versus Reliability
Amidst all of these definitions, the distinction between a mainframe and a supercomputer is at least much more straightforward. A mainframe is a computer system designed for bulk data processing with as much built-in reliability and redundancy as the price tag allows. A modern example is IBM’s Z-series of mainframes, with the ‘Z’ standing for ‘zero downtime’. These kinds of systems are used by financial institutions and anywhere else where downtime is counted in millions of dollars going up in (literal) flames every second.
This means hot-swappable processor modules, hot-swappable and redundant power supplies, not to mention hot spares and a strong focus on fault-tolerant computing. All of these features are less relevant for a supercomputer, where raw performance is the defining factor when running days-long simulations, and where other ways to detect flaws exist without requiring hardware-level redundancy.
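For those days-long runs, the usual software-side substitute for hardware redundancy is application-level checkpointing: periodically dump the simulation state to disk so a crashed job can restart from the last snapshot rather than from scratch. A bare-bones C sketch of the idea, our illustration rather than any particular lab’s code:

```c
/* Bare-bones application-level checkpointing: every N steps, write the
 * simulation state and step counter to disk so a crashed job can resume. */
#include <stdio.h>
#include <stdlib.h>

#define STATE_SIZE 1024
#define CHECKPOINT_EVERY 100

static void checkpoint(const double *state, long step) {
    FILE *f = fopen("checkpoint.bin", "wb");
    if (!f) { perror("checkpoint"); exit(1); }
    fwrite(&step, sizeof step, 1, f);            /* where we were */
    fwrite(state, sizeof *state, STATE_SIZE, f); /* what we had */
    fclose(f);
}

int main(void) {
    double state[STATE_SIZE] = {0};
    for (long step = 1; step <= 1000; step++) {
        for (int i = 0; i < STATE_SIZE; i++)
            state[i] += 0.001 * i;               /* stand-in for real physics */
        if (step % CHECKPOINT_EVERY == 0)
            checkpoint(state, step);             /* restartable snapshot */
    }
    return 0;
}
```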
Considering the brief lifespan of supercomputers (currently on the order of a few years) compared to mainframes (decades) and the many years that the microcomputers on our desks can last, the life of a supercomputer seems like that of a bright and very brief flame, indeed.
Top image: Marlyn Wescoff and Betty Jean Jennings configuring plugboards on the ENIAC computer (Source: US National Archives)
Coming straight to the point: [Ron Hinton] is significantly braver than we are. Or maybe he was just in a worse situation. His historic Acer K385s laptop suffered what we learned is called vinegar syndrome, which is a breakdown in the polarizers that make the LCD work. So he bit the bullet and decided to open up the LCD stack and replace what he could.
Nothing says “no user serviceable parts inside” quite like those foil-and-glue sealed packages, but that didn’t stop [Ron]. Razor blades, patience, and an eye ever watchful for the connectors that are seemingly everywhere, and absolutely critical, got the screen disassembled. Installation of the new polarizers was similarly fiddly.
In the end, it looks like the showstopper to getting a perfect result is that technology has moved on, and these older screens apparently used a phase correction layer between the polarizers, which might be difficult to source these days. (Anyone have more detail on that? We looked around and came up empty.)
This laptop may not be in the pantheon of holy-grail retrocomputers, but that’s exactly what makes it a good candidate for practicing such tricky repair work, and the result is a readable LCD screen on an otherwise broken old laptop, so that counts as a win in our book.
Did you know that the land of flat-pack furniture and Saab automobiles played a serious role in the development of minicomputers, the forerunners of our home computers? If not, read on for a bit of history. You can also go ahead and watch the video below, which tells it all with a ton of dug-up visuals.
Sweden’s early computer development was marked by significant milestones, beginning with the relay-based Binär Aritmetisk Relä-Kalkylator (BARK) in 1950, followed by the vacuum tube-based Binär Elektronisk SekvensKalkylator (BESK) in 1953. These projects were spearheaded by the Swedish Board for Computing Machinery (Matematikmaskinnämnden), established in 1948 to advance the nation’s computing capabilities.
In 1954, Saab ventured into computing by obtaining a license to replicate BESK, resulting in the creation of Saab’s räkneautomat (SARA). This initiative aimed to support complex calculations for the Saab 37 Viggen jet fighter. Building on this foundation, Saab’s computer division, later known as Datasaab, developed the D2 in 1960 – a transistorized prototype intended for aircraft navigation. The D2’s success led to the CK37 navigational computer, which was integrated into the Viggen aircraft in 1971.
Datasaab also expanded into the commercial sector with the D21 in 1962, producing approximately 30 units for various international clients. Subsequent models, including the D22, D220, D23, D5, D15, and D16, were developed to meet diverse computing needs. In 1971, Datasaab’s technologies merged with Standard Radio & Telefon AB (SRT) to form Stansaab AS, focusing on real-time data systems for commercial and aviation applications. This entity eventually evolved into Datasaab AB in 1978, which was later acquired by Ericsson in 1981, becoming part of Ericsson Information Systems.
Parallel to these developments, Åtvidabergs Industrier AB (later Facit) produced the FACIT EDB in 1957, based on BESK’s design. This marked Sweden’s first fully domestically produced computer, with improvements such as expanded magnetic-core memory and advanced magnetic tape storage. The FACIT EDB was utilized for various applications, including meteorological calculations and other scientific computations. For a short time, Saab even partnered with American computer maker Univac in a joint venture called Saab-Univac – a well-known name in computer history.
These pioneering efforts by Swedish organizations laid the groundwork for the country’s advancements in computing technology, influencing both military and commercial sectors. The video below has lots and lots more to unpack and goes into greater detail on collaborations and (missed) deals with great names in history.