There are all manner of musical myths, covering tones and melodies that have effects ranging from the profound to the supernatural. The Pied Piper, for example, or the infamous “brown note.”
But what about a song that could crash your laptop just by playing it? Even better, a song that could crash nearby laptops, too? It’s not magic, and it’s not a trick—it was just a punchy pop song that Janet Jackson wrote back in 1989.
Rhythm Nation
As told by Microsoft’s Raymond Chen, the story begins in the early 2000s during the Windows XP era. Engineers at a certain OEM laptop manufacturer noticed something peculiar. Playing Janet Jackson’s song Rhythm Nation through laptop speakers would cause the machines to crash. Even more bizarrely, the song could crash nearby laptops that weren’t even playing the track themselves, and the effect was noted across laptops of multiple manufacturers.
Rhythm Nation was a popular song from Jackson’s catalog, but nothing about it immediately stands out as a laptop killer.
After extensive testing and process of elimination, the culprit was identified as the audio frequencies within the song itself. It came down to the hardware of the early 2000s laptops in question. These machines relied on good old mechanical hard drives. Specifically, they used 2.5-inch 5,400 RPM drives with spinning platters, magnetic heads, and actuator arms.
The story revolves around 5,400 RPM laptop hard drives, but the manufacturer and model are not public knowledge. No reports have been made of desktop PCs or their hard disks suffering the same issue. Credit: Raimond Spekking, CC BY-SA 4.0
Unlike today’s solid-state drives, these components were particularly susceptible to physical vibration. Investigation determined that something in Rhythm Nation was hitting a resonant frequency of some component of the drive. The vibration was never severe enough to crash the heads into the platters themselves, which would have caused major data loss, but it was enough to disrupt the drive’s ability to read properly, with errors stacking up to the point where they triggered a crash in the operating system.
A research paper published in 2018 investigated the vibrational characteristics of a certain model of 2.5-inch laptop hard drive. It’s not conclusive evidence, and has nothing to do with the Janet Jackson case, but it provides some potentially interesting insights as to why similar hard drives failed to read when the song was played. Credit: Research paper
There was a simple workaround for this problem, one that was either ingenious or egregious depending on your point of view. Allegedly, the OEM simply whipped up a notch filter for the audio subsystem to remove the offending frequencies. The filter apparently remained in place from the then-contemporary Windows XP up until at least Windows 7. At that point, Microsoft created a new rule for “Audio Processing Objects” (APOs), which included things like the special notch filter: all such filters had to be capable of being switched off if the user so desired. However, the story goes that the manufacturer gained a special exception for some time, allowing its filter APO to stay on at all times so that users couldn’t disable it and then despair when their laptops started crashing unexpectedly during Janet Jackson playlists.
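We don’t know exactly what the OEM’s filter looked like; its centre frequency and bandwidth were never published. Still, the general idea, a narrow band-stop filter sitting in the audio path, is easy to sketch. Everything below, from the 84.2 Hz target (taken from the analysis discussed next) to the sample rate and Q, is an assumption for illustration only:

```python
# Hypothetical sketch of a notch ("band-stop") filter of the sort the OEM
# reportedly shipped. All parameters are assumptions, not the real APO's values.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 48_000   # sample rate of the audio pipeline, Hz (assumed)
f0 = 84.2     # frequency to suppress, Hz (assumed, from Adam Neely's analysis)
Q = 30        # quality factor: higher Q means a narrower notch (assumed)

b, a = iirnotch(f0, Q, fs)   # second-order IIR notch filter coefficients

def remove_offending_tone(samples: np.ndarray) -> np.ndarray:
    """Run a block of mono audio through the notch filter."""
    return filtfilt(b, a, samples)

# Quick check: an 84.2 Hz sine buried in noise comes out heavily attenuated.
t = np.arange(fs) / fs
test = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.randn(fs)
filtered = remove_offending_tone(test)
```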
As for what made Rhythm Nation special? YouTuber Adam Neely investigated, and came up with a compelling theory. Having read a research paper on the vibrational behavior of a 2.5-inch 5,400 RPM laptop hard disk, he found that it reported the drive’s largest vibrational peak at approximately 87.5 Hz. Meanwhile, he also found that Rhythm Nation had a great deal of energy at 84.2 Hz. Apparently, the recording had been sped up a touch after it was tracked, pushing the usual low E at 82 Hz slightly higher. The theory is that this mild uptuning pushed parts of the song close enough to the resonant frequency of some of the hard drive’s components to give them a good old shaking, causing the read errors and eventual crashes.
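Neely’s figures are easy to sanity-check with a little arithmetic: a standard-tuning low E sits at about 82.4 Hz, and speeding a recording up shifts every frequency by the same ratio. A quick back-of-the-envelope sketch, using only the numbers quoted above (the speed-up ratio is inferred from them, not from any released session data):

```python
# Rough check of the pitch-shift theory using the figures quoted in the article.
import math

E2 = 82.41         # low E in standard tuning, Hz
observed = 84.2    # energy peak reportedly measured in Rhythm Nation, Hz
resonance = 87.5   # largest vibrational peak reported for the drive, Hz

speedup = observed / E2                               # ~1.02, about a 2% speed-up
cents_sharp = 1200 * math.log2(observed / E2)         # ~37 cents above a true low E
cents_shy = 1200 * math.log2(resonance / observed)    # ~66 cents below the resonance

print(f"speed-up ~{speedup:.3f}x, {cents_sharp:.0f} cents sharp, "
      f"{cents_shy:.0f} cents shy of the reported resonance peak")
```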
It’s an interesting confluence of unintended consequences. A singular pop song from 1989 ended up crashing laptops over a decade later, leading to the implementation of an obscure and little-known audio filter. The story still has holes—nobody has ever come forward to state officially which OEM was involved, and which precise laptops and hard drives suffered this problem. That stymies hopes for further research and recreation of this peculiarity. Nevertheless, it’s a fun tech tale from the days when computers were ever so slightly more mechanical than they are today.
Procedural generation is a big part of game design these days. Usually you generate your map, and [Fractal Philosophy] has decided to go one step further: using a procedurally-generated world from an older video, he is procedurally generating history by simulating the rise and fall of empires on that map in a video embedded below.
Now, lacking a proper theory of Psychohistory, [Fractal Philosophy] has chosen to go with what he admits is the simplest model he could find, one centered on the concept of “solidarity” and based on the work of [Peter Turchin], a Russian-American thinker. “Solidarity” in the population holds the Empire together; external pressures increase it, and internal pressures decrease it. This leads to an obvious cellular automaton type system (like Conway’s Game of Life), where cells are evaluated based on their nearest neighbors: the number of neighbors belonging to the same empire goes into a function that gives the probability of the cell’s solidarity score increasing or decreasing each “turn” (a probability, rather than a fixed rule, to preserve some randomness). The “strength” of the Empire is given by the sum of the solidarity scores in every cell.
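The model itself isn’t public, so the following is only a guess at the flavour of that update rule, with an invented probability function: each cell’s solidarity drifts up or down depending on how many of its neighbours belong to the same empire, and an empire’s strength is the sum of solidarity over the cells it holds.

```python
# Toy sketch of the "solidarity" cellular-automaton idea described above.
# The real model is not open-source; the grid, probability function, and step
# size here are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 64
empire = rng.integers(0, 3, size=(SIZE, SIZE))   # which empire owns each cell
solidarity = rng.random((SIZE, SIZE))            # per-cell solidarity score, 0..1

def same_empire_neighbours(e, y, x):
    """Count the 8 neighbours (with wraparound) that belong to the same empire."""
    total = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                total += e[(y + dy) % SIZE, (x + dx) % SIZE] == e[y, x]
    return total

def step(e, s):
    """One turn: nudge each cell's solidarity up or down probabilistically."""
    new_s = s.copy()
    for y in range(SIZE):
        for x in range(SIZE):
            p_up = same_empire_neighbours(e, y, x) / 8      # friendlier neighbourhood,
            delta = 0.05 if rng.random() < p_up else -0.05  # likelier to rise
            new_s[y, x] = np.clip(s[y, x] + delta, 0.0, 1.0)
    return new_s

solidarity = step(empire, solidarity)
# An empire's "strength" is the sum of solidarity over the cells it holds.
strength = {int(k): float(solidarity[empire == k].sum()) for k in np.unique(empire)}
print(strength)
```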
Each turn, Empires clash, with the local solidarity, sum strength, and distance from the Imperial center going into determining who gains or loses territory. It is a simple model; you can judge from the video how well it captures the ebb and flow of history, but we think it did surprisingly well, all things considered. The extra 40-minute video of the model running is oddly hypnotic, too.
In v2 of the model, one of these fluffy creatures will betray you.
After a dive into more academic support for the main idea, and a segue into game theory and economics, a slight complication is introduced later in the video, dividing each cell’s population into two groups: “cooperators” and “selfish” individuals.
This allows for modeling of internal conflicts between the two groups. This version gives a very similar-looking map at the end of its run, although it has an odd quirk: it automatically starts with a space-filling empire across the whole map that quickly disintegrates.
Unfortunately, the model is not open-source, but the ideas are discussed in enough detail that one could probably produce a very similar algorithm in an afternoon. For those really interested, [Fractal Philosophy] does offer a one-time purchase through his Patreon. It also includes the map-generating model from his last video.
As the Industrial Age took the world by storm, city centers became burgeoning hubs of commerce and activity. New offices and apartments were built higher and higher as density increased and skylines grew ever upwards. One could live and work at height, but this created a simple inconvenience—if you wanted to send any mail, you had to go all the way down to ground level.
In true American fashion, this minor inconvenience would not be allowed to stand. A simple invention would solve the problem, only to later fall out of vogue as technology and safety standards moved on. Today, we explore the rise and fall of the humble mail chute.
Going Down
Born in 1848 in Albany, New York, James Goold Cutler would build his life and career in the growing state, and as an architect, he soon came to identify an obvious problem. For those occupying higher floors in taller buildings, the simple act of sending a piece of mail could quickly become a tedious exercise. One would have to make their way all the way down to a street level post box, which grew increasingly tiresome as buildings grew ever taller.
Cutler’s original patent for the mail chute. Note element G – a hand guard that prevented people from reaching into the chute to grab mail falling from above. Security of the mail was a key part of the design. Credit: US Patent, public domain
Cutler saw that there was an obvious solution—install a vertical chute running through the building’s core, add mail slots on each floor, and let gravity do the work. It then became as simple as dropping a letter in, and down it would go to a collection box at the bottom, where postal workers could retrieve it during their regular rounds. Cutler filed a patent for this simple design in 1883. He was sure to include a critical security feature—a hand guard behind each floor’s mail chute. This was intended to stop those on lower levels reaching into the chute to steal the mail passing by from above. Installations in taller buildings were also to be fitted with an “elastic cushion” in the bottom to “prevent injury to the mail” from higher drop heights.
A Cutler Receiving Box that was built in 1920. This box would have lived at the bottom of a long mail chute, with the large door for access by postal workers. The brass design is typical of the era. Credit: National Postal Museum, CC0
One year later, the first installation went live in the Elwood Building, built in Rochester, New York to Cutler’s own design. The chute proved fit for purpose in the seven-story building, but there was a problem. The collection box at the bottom of Cutler’s chute was seen by the postal authorities as a mailbox. Federal mail laws were taken quite seriously, then as now, and they stated that mailboxes could only be installed in public buildings such as hotels, railway stations, or government facilities. The Elwood was a private building, and thus postal carriers refused to service the collection box.
It consists of a chute running down through each story to a mail box on the ground floor, where the postman can come and take up the entire mail of the tenants of the building. A patent was easily secured, for nobody else had before thought of nailing four boards together and calling it a great thing.
Letters could be dropped in the apertures on the fourth and fifth floors and they always fell down to the ground floor all right, but there they stayed. The postman would not touch them. The trouble with the mail chute was the law which says that mail boxes shall be put only in Government and public buildings.
Cutler’s brilliantly simple invention seemed dashed at the first hurdle. However, rationality soon prevailed. Postal laws were revised in 1893, and mail chutes were placed under the authority of the US Post Office Department. This had important security implications. Only post-office approved technicians would be allowed to clear mail clogs and repair and maintain the chutes, to ensure the safety and integrity of the mail.
The Cutler Mail chutes are easy to spot at the Empire State Building. Credit: Teknorat, CC BY-SA 2.0
With the legal issues solved, the mail chute soared in popularity. As skyscrapers became ever more popular at the dawn of the 20th century, so did the mail chute, with over 1,600 installed by 1905. The Cutler Manufacturing Company had been the sole manufacturer reaping the benefits of this boom up until 1904, when the US Post Office looked to permit competition in the market. However, Cutler’s patent held fast, with his company merging with some rivals and suing others to dominate the market. The company also began selling around the world, with London’s famous Savoy Hotel installing a Cutler chute in 1904. By 1961, the company held 70 percent of the mail chute market, despite Cutler’s passing and the expiry of the patent many years prior.
The value of the mail chute was obvious, but its success was not to last. Many companies began implementing dedicated mail rooms, which provided both delivery and pickup services across the floors of larger buildings. This required more manual handling, but avoided issues with clogs and lost mail and better suited bigger operations. As postal volumes increased, the chutes became seen as a liability more than a convenience when it came to important correspondence. Larger oversized envelopes proved a particular problem, with most chutes only designed to handle smaller envelopes. A particularly famous event in 1986 saw 40,000 pieces of mail stuck in a monster jam at the McGraw-Hill building, which took 23 mailbags to clear. It wasn’t unusual for a piece of mail to get lost in a chute, only to turn up many decades later, undelivered.
An active mail chute in the Law Building in Akron, Ohio. The chute is still regularly visited by postal workers for pickup. Credit: Cards84664, CC BY SA 4.0
Mail chutes were often given fine, detailed designs befitting the building they were installed in. This example is from the Fitzsimons Army Medical Center in Colorado. Credit: Mikepascoe, CC BY SA 4.0
The final death knell for the mail chute, though, was a safety matter. Come 1997, the National Fire Protection Association outright banned the installation of new mail chutes in new and existing buildings. The reasoning was simple. A mail chute was a single continuous cavity between many floors of a building, which could easily spread smoke and even flames, just like a chimney.
Despite falling out of favor, however, some functional mail chutes do persist to this day. Real examples can still be spotted in places like the Empire State Building and New York’s Grand Central station. Whether in use or deactivated, many still remain in older buildings as a visible piece of mail history.
Better building design standards and the unstoppable rise of email mean that the mail chute is ultimately a piece of history rather than a convenience of our modern age. Still, it’s neat to think that once upon a time, you could climb to the very highest floors of an office building and drop your important letters all the way to the bottom without having to use the elevator or stairs.
I ran into an old episode of Hogan’s Heroes the other day that struck me as odd. It didn’t have a laugh track. Ironically, the show was one where two pilots were shown, one with and one without a laugh track. The resulting data ensured future shows would have fake laughter. This wasn’t the pilot, though, so I think it was just an error on the part of the streaming service.
However, it was very odd. Many of the jokes didn’t come off as funny without the laugh track. Many of them came off as cruel. That got me to thinking about how they had to put laughter into these shows in the first place. I had my suspicions, but was I way off?
Well, to be honest, my suspicions were well-founded if you go back far enough. Bing Crosby was tired of running two live broadcasts, one for each coast, so he invested in tape recording, using German recorders Jack Mullin had brought back after World War II. Apparently, one week, Crosby’s guest was a comic named Bob Burns. He told some off-color stories, and the audience was howling. Of course, none of that would make it on the air in those days. But they saved the recording.
A few weeks later, either a bit of the show wasn’t as funny or the audience was in a bad mood. So they spliced in some of the laughs from the Burns performance. You could guess that would happen, and that’s the apparent birth of the laugh track. But that method didn’t last long before someone — Charley Douglass — came up with something better.
Sweetening
The problem with a studio audience is that they might not laugh at the right times. Or at all. Or they might laugh too much, too loudly, or too long. Charley Douglass developed techniques for sweetening an audio track — adding laughter, or desweetening by muting or cutting live laughter. At first, this was laborious, but Douglass had a plan.
He built a prototype machine around a 28-inch wooden wheel with tape glued to its perimeter. The tape carried laughter recordings, and a mechanical detent system controlled how much of it played back.
Douglass decided to leave CBS, but the prototype belonged to them. However, the machine didn’t last very long without his attention. In 1953, he built his own derivative version and populated it with laughter from the Red Skelton Show, where Red did pantomime, and, thus, there was no audio but the laughter and applause.
Do You Really Need It?
There is a lot of debate regarding fake laughter. On the one hand, it does seem to help. On the other hand, shouldn’t people just — you know — laugh when something’s funny?
There was concern, for example, that the Munsters would be scary without a laugh track. Like I mentioned earlier, some of the gags on Hogan’s Heroes are fine with laughter, but seem mean-spirited without.
Consider The Big Bang Theory. If you watch a clip (below) with no laugh track, you’ll notice two things. First, it does seem a bit mean (as a commenter said: “…like a bunch of people who really hate each other…”). The other thing you’ll notice is that the actors pause for the laugh track insertion, which, when there is no laughter, comes off as really weird.
Laugh Monopoly
Laugh tracks became very common with most single-camera shows. These were hard to do in front of an audience because they weren’t filmed in sequence. Even so, some directors didn’t approve of “mechanical tricks” and refused to use fake laughter.
Even multiple-camera shows would sometimes want to augment a weak audience reaction or even just replace laughter to make editing less noticeable. Soon, producers realized that they could do away with the audience and just use canned laughter. Douglass was essentially the only game in town, at least in the United States.
The Douglass device was used on all the shows from the 1950s through the 1970s. Andy Griffith? Yep. Bewitched? Sure. The Brady Bunch? Of course. Even The Munsters had Douglass or one of his family members creating their laugh tracks.
One reason he stayed a monopoly is that he was extremely secretive about how he did his work. In 1960, he formed Northridge Electronics out of a garage. When called upon, he’d wheel his invention into a studio’s editing room and add laughs for them. No one was allowed to watch.
You can see the original “laff box” in the videos below.
The device was securely locked, but we now know that inside, the machine had 32 tape loops, each with ten laugh tracks. Typewriter-like keys allowed you to select various laughs and control their duration and intensity.
In the background, there was always a titter track of people mildly laughing that could be made more or less prominent. There were also some other sound effects like clapping or people moving in seats.
Building a laugh track involved mixing samples from different tracks and modulating their amplitude. You can imagine it was like playing a musical instrument that emits laughter.
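Nobody outside the operation knew exactly how Douglass’s keys mapped onto his loops, but the basic idea described above, layering laugh recordings over a quiet titter bed and shaping their loudness over time, is easy to sketch. The file names and library below are purely illustrative, and the clips are assumed to be mono:

```python
# Rough sketch of "playing" a laugh track: mix a constant titter bed with a
# bigger laugh shaped by an amplitude envelope. File names are made up.
import numpy as np
import soundfile as sf   # any WAV read/write library would do

fs = 44_100

def envelope(n, attack=0.1, release=0.5):
    """Simple attack/release envelope so a laugh swells and dies away naturally."""
    a, r = int(attack * fs), int(release * fs)
    env = np.ones(n)
    env[:a] = np.linspace(0.0, 1.0, a)
    env[-r:] = np.linspace(1.0, 0.0, r)
    return env

titter, _ = sf.read("titter_bed.wav")        # hypothetical quiet background laughter
big_laugh, _ = sf.read("laugh_long_03.wav")  # hypothetical hearty laugh loop

mix = 0.2 * titter[: 5 * fs]                 # the titter bed runs under everything
segment = big_laugh[: 2 * fs] * envelope(2 * fs)
start = 2 * fs                               # drop the big laugh in right at the gag
mix[start : start + len(segment)] += 0.8 * segment

sf.write("sweetened.wav", mix, fs)
```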
Before you tell us, yes, there seems to be some kind of modern interface board on the top in the second video. No, we don’t know what it is for, but we’re sure it isn’t part of the original machine.
Of course, all things end. As technology got better and tastes changed, some companies — notably animation companies — made their own laugh tracks. One of Douglass’ protégés started a company, Sound One, that used better technology to create laughter, including stereo recordings and cassette tapes.
Today, laugh tracks are not everywhere, but you can still find them and, of course, they are prevalent in reruns. The next time you hear one, you’ll know the history behind that giggle.
If you were alive when 2001: A Space Odyssey was in theaters, you might have thought it didn’t really go far enough. After all, in 1958, the US launched its first satellite. The first US astronaut went up in 1961. Eight years later, Armstrong put a boot on the moon’s surface. That was a lot of progress for 11 years. The movie came out in 1968, so what would happen in 33 years? Turns out, not as much as you would have guessed back then. [The History Guy] takes us through a trip of what could have been if progress had marched on after those first few moon landings. You can watch the video below.
The story picks up way before NASA. Each of the US military branches felt like it should take the lead on space technology. Sputnik changed everything and spawned both ARPA and NASA. The Air Force, though, had an entire space program in development, and many of the astronauts for that program became NASA astronauts.
The Army also had its own stymied space program. They eventually decided it would be strategic to develop an Army base on the moon for about $6 billion. The base would be a large titanium cylinder buried on the moon that would house 12 people.
The base called for forty launches in a single year before sending astronauts, and then a stunning 150 Saturn V launches to supply building materials. Certainly ambitious, and in retrospect probably overly so.
There were other moon base plans. Most languished with little support or interest. The death knell, though, was the 1967 Outer Space Treaty, which forbids military bases on the moon.
While we’d love to visit a moon base, we are fine with it not being militarized. We also want our jet packs.
The city of London is no stranger to tall constructions today, but long before the first skyscrapers would loom above its streets, Watkin’s Tower was supposed to be the tallest structure in not only London but also the entirety of the UK. Inspired by France’s recently opened Eiffel tower, railway entrepreneur and Member of Parliament [Sir Edward Watkin] wanted to erect a structure that would rival the Eiffel tower, as part of a new attraction park to be constructed near the Middlesex hamlet of Wembley. In a retrospective, [Rob’s London] channel takes a look at what came to be known as Watkin’s Folly among other flattering names.
The first stage of Watkin’s Tower at Wembley Park, the only one ever to be completed. (Source: Wikimedia)
After [Gustave Eiffel], the architect of the Eiffel tower, recused himself, a design competition was held, with the Illustrated Catalogue of the 68 submitted designs available for our perusal. The winner turned out to be #37, an eight-legged, 366 meter tall tower, much taller than the 312.2 meter tall Eiffel tower, with multiple observation decks and various luxuries to be enjoyed by visitors to Wembley Park.
Naturally, [Watkin] commissioned a redesign to make it cheaper, which halved the number of legs, causing subsidence of the soil and other grievances later on. Before construction could finish, the responsible company went bankrupt and the one constructed section was demolished by 1907. Despite this, Wembley Park was a success and remains so to this day with Wembley Stadium built where Watkin’s Folly once stood.
I often ask people: What’s the most important thing you need to have a successful fishing trip? I get a lot of different answers about bait, equipment, and boats. Some people tell me beer. But the best answer, in my opinion, is fish. Without fish, you are sure to come home empty-handed.
On a recent visit to Bletchley Park, I thought about this and how it relates to World War II codebreaking. All the computers and smart people in the world won’t help you decode messages if you don’t already have the messages. So while Alan Turing and the codebreakers at Bletchley are well-known, at least in our circles, fewer people know about Arkley View.
The problem was apparent to the British. The Axis powers were sending lots of radio traffic. It would take a literal army of radio operators to record it all. Colonel Adrian Simpson sent a report to the director of MI5 in 1938 explaining that the three listening stations were not enough. The proposal was to build a network of volunteers to handle radio traffic interception.
That was the start of the Radio Security Service (RSS), which started operating out of some unused cells at a prison in London. The volunteers? Experienced ham radio operators who used their own equipment, at first, with the particular goal of intercepting transmissions from enemy agents on home soil.
At the start of the war, ham operators had their transmitters impounded. However, they still had their receivers and, of course, could all read Morse code. Further, they were probably accustomed to pulling out Morse code messages under challenging radio conditions.
Over time, this volunteer army of hams would swell to about 1,500 members. The RSS also supplied some radio gear to help in the task. MI5 checked each potential member, and the local police would visit to ensure the applicant was trustworthy. Keep in mind that radio intercepts were also done by servicemen and women (especially women) although many of them were engaged in reporting on voice communication or military communications.
Early Days
The VIs (voluntary interceptors) were asked to record any station they couldn’t identify and submit a log that included the messages to the RSS.
Arkley View ([Aka2112] CC-BY-SA-3.0)
The hams of the RSS noticed that there were German signals that used standard ham radio codes (like Q signals and the prosign 73). However, these transmissions also used five-letter code groups, a practice forbidden to hams.
Thanks to a double agent, the RSS was able to decode the messages that were between agents in Europe and their Abwehr handlers back in Germany (the Abwehr was the German Secret Service) as well as Abwehr offices in foreign cities. Later messages contained Enigma-coded groups, as well.
Between the RSS team’s growth and the fear of bombing, the prison was traded for Arkley View, a large house near Barnet, north of London. Encoded messages went to Bletchley and, from there, to others up to Churchill. Soon, the RSS had orders to concentrate on the Abwehr and their SS rivals, the Sicherheitsdienst.
Change in Management
In 1941, MI6 decided that since the RSS was dealing with foreign radio traffic, they should be in charge, and thus RSS became SCU3 (Special Communications Unit 3).
There was fear that some operators might be taken away for normal military service, so some operators were inducted into the Army — sort of. They were put in uniform as part of the Royal Corps of Signals, but not required to do much of what you’d expect of an Army recruit.
Those who worked at Arkley View would process logs from VIs and other radio operators to classify them and correlate them in cases where there were multiple logs. One operator might miss a few characters that could be found in a different log, for example.
Going 24/7
National HRO Receiver ([LuckyLouie] CC-BY-SA-3.0)
It soon became clear that the RSS needed full-time monitoring, so they built a number of Y stations with two National HRO receivers from America at each listening position. There were also direction-finding stations built in various locations to attempt to identify where a remote transmitter was.
Many of the direction finding operators came from VIs. The stations typically had four antennas in a directional array. When one of the central stations (the Y stations) picked up a signal, they would call direction finding stations using dedicated phone lines and send them the signal.
The DF operator would hear the signal sent down the phone line in one earpiece, then tune their own receiver to the same frequency and match what they heard from the main station in one ear against the signal from their receiver in the other. This made sure they were measuring the correct signal among the various other noise and interference. The operator would then take a bearing by rotating the dial on their radiogoniometer until the signal faded into a null. Since the null appears when the antenna is electrically pointing away from the transmitter, the true bearing could be deduced from the dial reading.
The central station could plot lines from three direction finding stations and tell the source of a transmission. Sort of. It wasn’t incredibly accurate, but it did help differentiate signals from different transmitters. Later, other types of direction-finding gear saw service, but the idea was still the same.
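Plotting that fix was simple geometry: each station contributes a line running from its own position along the reported bearing, and the transmitter sits wherever those lines roughly cross. The RSS did this with pencil and string on a map, but the same plotting-board exercise can be written as a small least-squares problem. The station positions and bearings below are invented for illustration:

```python
# Least-squares triangulation from several direction-finding bearings.
import numpy as np

def fix_from_bearings(stations, bearings_deg):
    """Estimate a transmitter position from station (x, y) pairs and compass bearings.

    Bearings are measured clockwise from north. Each one defines a line through
    its station; we solve for the point minimising the squared perpendicular
    distance to all of those lines.
    """
    A, b = [], []
    for (sx, sy), brg in zip(stations, np.radians(bearings_deg)):
        dx, dy = np.sin(brg), np.cos(brg)   # unit vector along the bearing (E, N)
        nx, ny = -dy, dx                    # normal to that line
        A.append([nx, ny])
        b.append(nx * sx + ny * sy)
    (x, y), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x, y

stations = [(0, 0), (100, 0), (40, 120)]   # station positions in km (invented)
bearings = [50.0, 321.0, 164.0]            # reported bearings in degrees (invented)
print(fix_from_bearings(stations, bearings))   # roughly (60, 50)
```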
Interesting VIs
Most of the VIs, like most hams at the time, were men. But there were a few women, including Helena Crawley. She was encouraged to marry her husband Leslie, another VI, so they could be relocated to Orkney to copy radio traffic from Norway.
In 1941, a single VI was able to record an important message of 4,429 characters. He was bedridden from a landmine injury sustained during the Great War, and operated from bed using mirrors and special control extensions. For his work, he received the British Empire Medal and a personal letter of gratitude from Churchill.
Results
Because of the intercepts of the German spy agency’s communications, many potential German agents were known before they arrived in the UK. Of about 120 agents arriving, almost 30 were turned into double agents. Others were arrested and, possibly, executed.
By the end of the war, the RSS had decoded around a quarter of a million intercepts. It was very smart of MI5 to realize that it could leverage a large number of trained radio operators both to cover the country with receivers and to free up military stations for other uses.
There comes a moment in the life of any operating system when an unforeseen event will tragically cut its uptime short. Whether it’s a sloppily written driver, a bug in the handling of an edge case or just dumb luck, suddenly there is nothing more that the OS’ kernel can do to salvage the situation. With its last few cycles it can still gather some diagnostic information, attempt to write this to a log or memory dump and then output a supportive message to the screen to let the user know that the kernel really did try its best.
This on-screen message is called many things, from a kernel panic message on Linux to a Blue Screen of Death (BSOD) on Windows since Windows 95, to a more contemplative message on AmigaOS and BeOS/Haiku. Over the decades these Screens of Death (SoD) have changed considerably, from the highly informative screens of Windows NT to the simplified BSOD of Windows 8 onwards with its prominent sad emoji that has drawn a modicum of ridicule.
Now it seems that the Windows BSOD is about to change again, and may not even be blue any more. So what is a user to make of these changes? And what were we ever supposed to get out of these special screens?
Meditating On A Fatal Error
AmigaOS fatal Guru Meditation error screen.
More important than the color of a fatal system error screen is what information it displays. After all, this is the sole direct clue the dismayed user gets when things go south, before sighing and hitting the reset button, followed by staring forlorn at the boot screen. After making it back into the OS, one can dig through the system logs for hints, but some information will only end up on the screen, such as when there is a storage drive issue.
The exact format of the information on these SoDs changes per OS and over time, with AmigaOS’ Guru Meditation screen being rather well-known. Although the naming was the result of an inside joke related to how the developers dealt with frequent system crashes, it stuck around in the production releases.
Interestingly, both Windows 9x and ME as well as AmigaOS have fatal and non-fatal special screens. In the case of AmigaOS you got a similar screen to the Guru Meditation screen with its error code, except in green and with the optimistic notion that it might be possible to continue running after confirming the message. For Windows 9x/ME users this might be a familiar notion as well:
BSOD in Windows 95 after typing “C:\con\con” in the Run dialog.
In this series of OSes you’d get these screens, with mashing a key usually returning you to a slightly miffed but generally still running OS minus the misbehaving application or driver. It could of course happen that you’d get stuck in an endless loop of these screens until you gave up and gave the three-finger salute to put Windows out of its misery. This was an interesting design choice, which Microsoft’s Raymond Chen readily admits to being somewhat quaint. What it did do was abandon the current event and return to the event dispatcher to give things another shot.
Mac OS X 10.2 through 10.2.8 kernel panic message.
A characteristic of these BSODs in Windows 9x/ME was also that they didn’t give you a massive amount of information to work with regarding the reason for the rude interruption. Incidentally, over on the Apple side of the fence things were not much more elaborate in this regard, with OS X’s kernel panic message getting plastered over with a ‘Nothing to see here, please restart’ message. This has been quite a constant ever since the ‘Sad Mac’ days of Apple, with friendly messages rather than any ‘technobabble’.
This quite contrasts with the world of Windows NT, where even the already trimmed BSOD of Windows XP is roughly on the level of the business-focused Windows 2000 in terms of information. Of note is also that a BSOD on Windows NT-based OSes is a true ‘Screen of Death’, from which you absolutely are not returning.
A BSOD in Windows XP. A true game over, with no continues.
These BSODs provide a significant amount of information, including the faulting module, the fault type and some hexadecimal values that can conceivably help with narrowing down the fault. Compared to the absolute information overload in Windows NT 3.1 with a partial on-screen memory dump, the level of detail provided by Windows 2000 through Windows 7 is probably just enough for the average user to get started with.
Interestingly, more recent versions of Windows default to restarting automatically when a BSOD occurs, which renders whatever is displayed on them rather irrelevant. Maybe that’s why Windows 8 began to just omit that information and opted to instead show a generic ‘collecting information’ progress counter before restarting.
Times Are Changing
People took the new BSOD screen in Windows 8 well.
Although nobody was complaining about the style of BSODs in Windows 7, somehow Windows 8 ended up with the massive sad emoji plastered on the top half of the screen and no hexadecimal values, which would now hopefully be found in the system log. Windows 10 also added a big QR code that leads to some troubleshooting instructions. This overly friendly and non-technical BSOD mostly bemused and annoyed the tech community, which proceeded to brutally make fun of it.
In this context it’s interesting to see these latest BSOD screen mockups from Microsoft that will purportedly make their way to Windows 11 soon.
These new BSOD screens seem to have a black background (perhaps a ‘Black Screen of Death’?), omit the sad emoji and reduce the text to an absolute minimum:
The new Windows 11 BSOD, as it’ll likely appear in upcoming releases.
What’s noticeable here is how it makes the stop code very small on the bottom of the screen, with the faulting module below it in an even smaller font. This remains a big departure from the BSOD formats up till Windows 7 where such information was clearly printed on the screen, along with additional information that anyone could copy over to paper or snap a picture of for a quick diagnosis.
But Why
The BSODs in ReactOS keep the Windows 2000-style format.
The crux here is whether Microsoft expects their users to use these SoDs for informative purposes, or whether they would rather that they get quickly forgotten about, as something shameful that users shouldn’t concern themselves with. It’s possible that they expect that the diagnostics get left to paid professionals, who would have to dig into the memory dumps, the system logs, and further information.
Whatever the case may be, it seems that the era of blue SoDs is well and truly over now in Windows. Gone too are any embellishments, general advice, and more in-depth debug information. This means that distinguishing the different causes behind a specific stop code, contained in the hexadecimal numbers, can only be teased out of the system log entry in Event Viewer, assuming it got in fact recorded and you’re not dealing with a boot partition or similar fundamental issue.
Although I’ll readily admit to not having seen many BSODs since probably Windows 2000 or XP — and those were on questionable hardware — the rarity of these events makes it in my view even more pertinent that these screens are as descriptive as possible, which is sadly not a feature that seems to be a priority for mainstream desktop OSes. Nor for niche OSes like Linux and BSD, tragically, where you have to know your way around the Systemd journalctl tool or equivalent to figure out where that kernel panic came from.
This is definitely a point where the SoD generated upon a fiery kernel explosion sets the tone for the user’s response.
With few exceptions, amateur radio is a notably sedentary pursuit. Yes, some hams will set up in a national or state park for a “Parks on the Air” activation, and particularly energetic operators may climb a mountain for “Summits on the Air,” but most hams spend a lot of time firmly planted in a comfortable chair, spinning the dials in search of distant signals or familiar callsigns to add to their logbook.
There’s another exception to the band-surfing tendencies of hams: fox hunting. Generally undertaken at a field day event, fox hunts pit hams against each other in a search for a small hidden transmitter, using directional antennas and portable receivers to zero in on often faint signals. It’s all in good fun, but fox hunts serve a more serious purpose: they train hams in the finer points of radio direction finding, a skill that can be used to track down everything from manmade noise sources to unlicensed operators. Or, as was done in the 1940s, to ferret out foreign agents using shortwave radio to transmit intelligence overseas.
That was the primary mission of the Radio Intelligence Division, a rapidly assembled organization tasked with protecting the United States by monitoring the airwaves and searching for spies. The RID proved to be remarkably effective during the war years, in part because it drew heavily from the amateur radio community to populate its many field stations, but also because it brought an engineering mindset to the problem of finding needles in a radio haystack.
Winds of War
America’s involvement in World War II was similar to Hemingway’s description of the process of going bankrupt: Gradually, then suddenly. Reeling from the effects of the Great Depression, the United States had little interest in European affairs and no appetite for intervention in what increasingly appeared to be a brewing military conflict. This isolationist attitude persisted through the 1930s, surviving even the recognized start of hostilities with Hitler’s sweep into Poland in 1939, at least for the general public.
But behind the scenes, long before the Japanese attack on Pearl Harbor, precipitous changes were afoot. War in Europe was clearly destined from the outset to engulf the world, and in the 1940s there was only one technology with a truly global reach: radio. The ether would soon be abuzz with signals directing troop movements, coordinating maritime activities, or, most concerningly, agents using spy radios to transmit vital intelligence to foreign governments. To be deaf to such signals would be an unacceptable risk to any nation that fancied itself a world power, even if it hadn’t yet taken a side in the conflict.
It was in that context that US President Franklin Roosevelt approved an emergency request from the Federal Communications Commission in 1940 for $1.6 million to fund a National Defense Operations section. The group would be part of the engineering department within the FCC and was tasked with detecting and eliminating any illegal transmissions originating from within the country. This was aided by an order in June of that year which prohibited the 51,000 US amateur radio operators from making any international contacts, and an order four months later for hams to submit to fingerprinting and proof of citizenship.
A Ham’s Ham
George Sterling (W1AE/W3DF). FCC commissioner in 1940, he organized and guided RID during the war. Source: National Assoc. of Broadcasters, 1948
The man behind the formation of the NDO was George Sterling. To call Sterling an early adopter of amateur radio would be an understatement. He plunged into radio as a hobby in 1908 at the tender age of 14, just a few years after Marconi and others demonstrated the potential of radio. He was licensed soon after the passage of the Radio Act of 1912, callsign 1AE (later W1AE), and continued to experiment with spark gap stations. When the United States entered World War I, Sterling served for 19 months in France as an instructor in the Signal Corps, later organizing and operating the Corps’ first radio intelligence unit to locate enemy positions based on their radio transmissions.
After a brief post-war stint as a wireless operator in the Merchant Marine, Sterling returned to the US to begin a career in the federal government with a series of radio engineering and regulatory jobs. He rose through the ranks over the 1920s and 1930s, eventually becoming Assistant Chief of the FCC Field Division in 1937, in charge of radio engineering for the entire nation. It was on the strength of his performance in that role that he was tapped to be the first — and as it would turn out, only — chief of the NDO, which was quickly raised to the level of a new division within the FCC and renamed the Radio Intelligence Division.
To adequately protect the homeland, the RID needed a truly national footprint. Detecting shortwave transmissions is simple enough; any single location with enough radio equipment and a suitable antenna could catch most transmissions originating from within the US or its territories. But Sterling’s experience in France taught him that a network of listening stations would be needed to accurately triangulate on a source and provide a physical location for follow-up investigation.
The network that Sterling built would eventually comprise twelve primary stations scattered around the US and its territories, including Alaska, Hawaii, and Puerto Rico. Each primary station reported directly to RID headquarters in Washington, DC, by telephone, telegraph, or teletype. Each primary station supported up to a few dozen secondary stations, with further coastal monitoring stations set up as the war ground on and German U-boats became an increasingly common threat. The network would eventually comprise over 100 stations stretched from coast to coast and beyond, staffed by almost 900 agents.
Searching the Ether
The job of staffing these stations with skilled radio operators wasn’t easy, but Sterling knew he had a ready and willing pool to pull from: his fellow hams. Recently silenced and eager to put their skills to the test, hams signed up in droves for the RID. About 80% of the RID staff were current or former amateur radio operators, including the enforcement branch of sworn officers who carried badges and guns. They were the sharp end of the spear, tasked with the “last mile” search for illicit transmitters and possible confrontation with foreign agents.
But before the fedora-sporting, Tommy-gun toting G-men could swoop in to make their arrest came the tedious process of detecting and classifying potentially illicit signals. This task was made easier by an emergency order issued on December 8, 1941, the day after the Pearl Harbor attack, forbidding all amateur radio transmissions below 56 MHz. This reduced the number of targets the RID listening stations had to sort through, but the high-frequency bands cover a lot of turf, and listening to all that spectrum at the same time required a little in-house innovation.
Today, monitoring wide swaths of the spectrum is relatively easy, but in the 1940s, it was another story. Providing this capability fell to RID engineers James Veatch and William Hoffert, who invented an aperiodic receiver that covered everything from 50 kHz to 60 MHz. Called the SSR-201, this radio used a grid-leak detector to rectify and amplify all signals picked up by the antenna. A bridge circuit connected the output of the detector to an audio amplifier, with the option to switch an audio oscillator into the circuit so that continuous wave transmissions — the spy’s operating mode of choice — could be monitored. There was also an audio-triggered relay that could start and stop an external recorder, allowing for unattended operation.
SSR-201 aperiodic receiver, used by the RID to track down clandestine transmitters. Note the “Magic Eye” indicator. Source: Steve Ellington (N4LQ)
The SSR-201 and a later variant, the K-series, were built by Kann Manufacturing, a somewhat grand name for a modest enterprise operating out of the Baltimore, Maryland, basement of Manuel Kann (W3ZK), a ham enlisted by the RID to mass produce the receiver. Working with a small team of radio hobbyists and broadcast engineers mainly working after hours, Kann Manufacturing managed to make about 200 of the all-band receivers by the end of the war, mainly for the RID but also for the Office of Strategic Services (OSS), the forerunner of the CIA, as well as the intelligence services of other allied nations.
These aperiodic receivers were fairly limited in terms of sensitivity and lacked directional capability, and so were good only for a first pass scan of a specific area for the presence of a signal. Consequently, they were often used in places where enemy transmitters were likely to operate, such as major cities near foreign embassies. This application relied on the built-in relay in the receiver to trigger a remote alarm or turn on a recorder, giving the radio its nickname: “The Watchdog.” The receivers were also often mounted in mobile patrol vehicles that would prowl likely locations for espionage, such as Army bases and seaports. Much later in the war, RID mobile units would drive through remote locations such as the woods around Oak Ridge, Tennessee, and an arid plateau in the high desert near Los Alamos, New Mexico, for reasons that would soon become all too obvious.
Radio G-Men
Adcock-type goniometer radio direction finder. The dipole array could be rotated 360 degrees from inside the shack to pinpoint a bearing to the transmitter. Source: Radio Boulevard
Once a candidate signal was detected and headquarters alerted to its frequency, characteristics, and perhaps even its contents, orders went out to the primary stations to begin triangulation. Primary stations were equipped with radio direction finding (RDF) equipment, including the Adcock-type goniometer. These were generally wooden structures elevated above the ground with a distinctive Adcock antenna on the roof of the shack. The antenna was a variation on the Adcock array using two vertical dipoles on a steerable mount. The dipoles were connected to the receiving gear in the shack 180 degrees out of phase. This produced a radiation pattern with very strong nulls broadside to the antenna, making it possible for operators to determine the precise angle to the source by rotating the antenna array until the signal was minimized. Multiple stations would report the angle to the target to headquarters, where it would be mapped out and a rough location determined by where the lines intersected.
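Those deep nulls fall straight out of the geometry: a signal arriving broadside reaches both dipoles at the same instant, so with the elements wired in antiphase the two contributions cancel completely, while a signal arriving along the line of the array does not. A short sketch of that array factor, with an element spacing chosen arbitrarily:

```python
# Array factor of two vertical elements fed 180 degrees out of phase, the heart
# of the Adcock direction finder. The quarter-wave spacing is an assumption.
import numpy as np

d_over_lambda = 0.25                  # element spacing in wavelengths (assumed)
phi = np.radians(np.arange(0, 360))   # azimuth, 0 degrees = along the array axis

# Elements at +/- d/2 on the array axis, fed in antiphase:
#   AF(phi) = |exp(+j*pi*(d/lambda)*cos(phi)) - exp(-j*pi*(d/lambda)*cos(phi))|
#           = 2 * |sin(pi * (d/lambda) * cos(phi))|
af = 2 * np.abs(np.sin(np.pi * d_over_lambda * np.cos(phi)))

nulls = np.degrees(phi[np.isclose(af, 0.0, atol=1e-3)])
print(nulls)   # deep nulls at 90 and 270 degrees: broadside to the array
```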
With a rough location determined, RID mobile teams would hit the streets. RID had a fleet of mobile units based on commercial Ford and Hudson models, custom-built for undercover work. Radio gear partially filled the back seat area, power supplies filled the trunk, and a small steerable loop antenna could be deployed through the roof for radio direction finding on the go. Mobile units were also equipped with special radio sets for communicating back to their primary station, using the VHF band to avoid creating unwanted targets for the other stations to monitor.
Mobile units were generally capable of narrowing the source of a transmission down to a city block or so, but locating the people behind the transmission required legwork. Armed RID enforcement agents would set out in search of the transmitter, often aided by a device dubbed “The Snifter.” This was a field-strength meter specially built for covert operations; small enough to be pocketed and monitored through headphones styled to look like a hearing aid, the agents could use the Snifter to ferret out the spy, hopefully catching them in the act and sealing their fate.
A Job (Too) Well Done
For a hastily assembled organization, the RID was remarkably effective. Originally, it was tasked with monitoring the entire United States and its territories, but that scope very quickly expanded to include almost every country in South America, where the Nazi regime found support and encouragement. Between 1940 and 1944, the RID investigated tens of thousands of suspect signals, resulting in 400 unlicensed stations being silenced. Not all of these were nefarious; one unlucky teenager in Portland, Oregon, ran afoul of the RID by hooking an antenna up to a record player so he could play DJ to his girlfriend down the street. But other operations led to the capture of 200 spies, including a shipping executive who used his ships to refuel Nazi U-boats operating in the Gulf of Mexico, and the famous Duquesne Spy Ring operating on Long Island.
Ironically, the RID’s success, owed in large part to the technical prowess of the hams populating its ranks, contained the seeds of its downfall. Normally, such an important self-defense task as preventing radio espionage would fall to the Army or Navy, but neither organization had the technical expertise in 1940, nor did they have the time to learn given how woefully unprepared they were for the coming war. Both branches eventually caught up, though, and neither appreciated a bunch of civilians mucking around on their turf. Turf battles ensued, politics came into it, and by 1944, budget cuts effectively ended the RID as a standalone agency.
The seeds of the Internet were first sown in the late 1960s, with computers laced together in continent-spanning networks to aid in national defence. However, it was in the late 1990s that the end-user explosion took place, as everyday people flocked online in droves.
Many astute individuals saw the potential at the time, and rushed to establish their own ISPs to capitalize on the burgeoning market. Amongst them was a famous figure of some repute. David Bowie might have been best known for his cast of rock-and-roll characters and number one singles, but he was also an internet entrepreneur who got in on the ground floor—with BowieNet.
Is There Dialup On Mars?
The BowieNet website was very much of its era. Credit: Bowienet, screenshot
Bowie’s obsession with the Internet started early. He was well ahead of the curve of many of his contemporaries, becoming the first major artist to release a song online. Telling Lies was released as a downloadable track all the way back in 1996, racking up over 300,000 downloads. A year later, the Earthling concert would be “cybercast” online, in an era when most home internet connections could barely handle streaming audio.
These moves were groundbreaking, at the time, but also exactly what you might expect of a major artist trying to reach fans with their music. However, Bowie’s interests in the Internet lay deeper than mere music distribution. He wanted a richer piece of the action, and his own ISP—BowieNet— was the answer.
The site was regularly updated with new styling and fresh content from Bowie’s musical output. Eventually, it became more website than ISP. Credit: BowieNet, screenshot
Bowie tapped some experts for help, enlisting Robert Goodale and Ron Roy in his nascent effort. The service first launched in the US on September 1st, 1998, starting at a price of $19.95 a month. The UK soon followed at a price of £10.00. Users were granted a somewhat awkward email address of username@davidbowie.com, along with 5MB of personal web hosting. Connectivity was provided in partnership with established network companies, with Concentric Network Corp effectively offering a turnkey ISP service, and UltraStar handling the business and marketing side of things. It was, for a time, also possible to gain a free subscription by signing up for a BowieBanc credit card, a branded front end for banking services run by USABancShares.com. At its peak, the service reached a total of 100,000 subscribers.
Bonuses included access to a network of chatrooms. The man himself was a user of the service, regularly popping into live chats, both scheduled and casual. He’d often wind up answering a deluge of fan questions on topics like upcoming albums and whether or not he drank tea. The operation was part ISP, part Bowie content farm, with users also able to access audio and video clips from Bowie himself. BowieNet subscribers got exclusive tracks from the Earthling tour live album, LiveAndWell.com, early access to tickets, and the chance to explore BowieWorld, a 3D interactive city environment. Users of other ISPs, meanwhile, had to stump up a $5.95 fee to access content on davidbowie.com, which drew some criticism at the time.
BowieNet relied heavily on the leading Internet technologies of the time. Audio and graphics were provided via RealAudio and Flash, standards that are unbelievably janky compared to those in common use today. A 56K modem was recommended for users wishing to make the most of the content on offer. New features were continually added to the service; Christmas 2004 saw users invited to send “BowieNet E-Cards,” and the same month saw the launch of BowieNet blogs for subscribers, too.
Bowie spoke to the BBC in 1999 about his belief in the power of the Internet.
BowieNet didn’t last forever. The full-package experience was, realistically, more than people expected even from one of the world’s biggest musicians. In May 2006, the ISP was quietly shut down, with the BowieNet web presence slimmed down to a website and fanclub-style experience. In 2012, this too came to an end, and DavidBowie.com was retooled into a more typical artist website of the modern era.
Ultimately, BowieNet was an interesting experiment in the burgeoning days of the consumer-focused Internet. The most appealing features of the service were really more about delivering exclusive content and providing a connection between fans and the artist himself. It eventually became clear that Bowie didn’t need to be branding the internet connection itself to provide that.
Still, we can dream of other artists getting involved in the utilities game, just for fun. Gagaphone would have been a slam dunk back in 2009. One suspects DojaGas perhaps wouldn’t have the same instant market penetration without some kind of hit single about clean burning fuels. Speculate freely in the comments.
It was such an innocent purchase, a slightly grubby and scuffed grey plastic box with the word “P O L A R O I D” intriguingly printed along its top edge. For a little more than a tenner it was mine, and I’d just bought one of Edwin Land’s instant cameras. The film packs it takes are now a decade out of production, but my Polaroid 104 with its angular 1960s styling and vintage bellows mechanism has all the retro-camera-hacking appeal I need. Straight away I 3D printed an adapter and new back allowing me to use 120 roll film in it, convinced I’d discover in myself a medium format photographic genius.
But who wouldn’t become fascinated with the film it should have had when faced with such a camera? I have form on this front after all, because a similar chance purchase of a defunct-format movie camera a few years ago led me into re-creating its no-longer-manufactured cartridges. I had to know more, both about the instant photos it would have taken, and those film packs. How did they work?
A Print, Straight From The Camera
An instant photograph reveals itself. Akos Burg, courtesy of One Instant.
In conventional black-and-white photography the film is exposed to the image, and its chemistry is changed by the light where it hits the emulsion. This latent image is rolled up with all the others in the film, and later revealed in the developing process. The chemicals cause silver particles to precipitate, and the resulting image is called a negative because the silver particles make it darkest where the most light hit it. Positive prints are made by exposing a fresh piece of film or photo paper through this negative, and in turn developing it. My Polaroid camera performed this process all-in-one, and I was surprised to find, behind what must have been an immense R&D effort to perfect the recipe, just how simple the underlying process was.
My dad had a Polaroid pack film camera back in the 1970s, a big plastic affair that he used to take pictures of the things he was working on. Pack film cameras weren’t like the motorised Polaroid cameras of today with their all-in-one prints; instead they had a paper tab that you pulled to release the print, and a peel-apart system where, after a time to develop, you separated the negative from the print. I remember as a youngster watching this process with fascination as the image slowly appeared on the paper, and being warned not to touch the still-wet print or negative when it was revealed. What I was looking at wasn’t the negative printing process described in the previous paragraph but something else: a process in which the unexposed silver halide compounds which make the final image diffuse onto the paper from the less-exposed areas of the negative, forming a positive image of their own when a reducing agent precipitates out their silver crystals. Understanding the subtleties of this process required a journey back to the US Patent Office in the middle of the 20th century.
It’s All In The Diffusion
The illustration from Edwin Land’s patent US2647056.
It’s in US2647056 that we find a comprehensive description of the process, and the first surprise is that the emulsion on the negative is the same as on a contemporary panchromatic black-and-white film. The developer and fixer for this emulsion are also conventional, and are contained in a gel placed in a pouch at the head of the photograph. When the exposed film is pulled out of the camera it passes through a set of rollers that rupture this pouch, and then spread the gel in a thin layer between the negative and the coated paper. This gel has two functions: it develops the negative, but over a longer period it provides a wet medium for those unexposed silver halides to diffuse through into the now-also-wet coating of the paper which will become the print. This coating contains a reducing agent, in this case a metallic sulphide, which over a further period precipitates out the silver that forms the final visible image. This is what gives Polaroid photographs their trademark slow reveal as the chemistry does its job.
I’ve just described the black and white process; the colour version uses the same diffusion mechanism but with colour emulsions and dye couplers in place of the black-and-white chemistry. Meanwhile modern one-piece instant processes from Polaroid and Fuji have addressed the problem of making the image visible from the other side of the paper, removing the need for a peel-apart negative step.
Given that the mechanism and chemistry are seemingly so simple, one might ask why we can no longer buy two-piece Polaroid pack or roll film except for limited quantities of hand-made packs from One Instant. The answer lies in the complexity of the composition, for while it’s easy to understand how it works, it remains difficult to replicate the results Polaroid achieved through a huge amount of research and development over many decades. Even the Impossible Project, current holders of the Polaroid brand, faced a significant effort in replicating the original Polaroid versions of their products when they brought the last remaining Polaroid factory back into production in 2010, despite using the original Polaroid machinery. So despite it retaining a fascination among photographers, it’s unlikely that we’ll see peel-apart film for Polaroid cameras return to volume production given the small size of the potential market.
Hacking A Sixty Year Old Camera
Five minutes with a vernier caliper and OpenSCAD, and this is probably the closest I’ll get to a pack film of my own.
So having understood how peel-apart pack film works and discovered what is available here in 2025, what remains for the camera hacker with a Land camera? Perhaps the simplest idea would be to buy one of those One Instant packs, and use it as intended. But we’re hackers, so of course you will want to print that 120 conversion kit I mentioned, or find an old pack film cartridge and stick a sheet of photographic paper or even a Fuji Instax sheet in it. You’ll have to retreat to the darkroom and develop the film or run the Instax sheet through an Instax camera to see your images, but it’s a way to enjoy some retro photographic fun.
Further than that, would it be possible to load Polaroid 600 or i-Type sheets into a pack film cartridge and somehow give them paper tabs to pull through those rollers and develop them? Possibly, but all your images would be back to front. Sadly, rear-exposing Instax Wide sheets wouldn’t work either because their developer pod lies along their long side. If you were to manage loading a modern instant film sheet into a cartridge, you’d then have to master the intricate paper folding arrangement required to ensure the paper tabs for each photograph followed each other in turn. I have to admit that I’ve become fascinated by this in considering my Polaroid camera. Finally, could you make your own film? I would of course say no, but incredibly there are people who have achieved results doing just that.
My Polaroid 104 remains an interesting photographic toy, one I’ll probably try a One Instant pack in, and otherwise continue with the 3D printed back and shoot the occasional 120 roll film. If you have one too, you might find my 3D printed AAA battery adapter useful. Meanwhile, it’s the cheap model without the nice rangefinder, so it’ll never be worth much; I might as well just enjoy it for what it is. And now that I know a little bit more about his invention, I admire Edwin Land all the more for making it happen.
The world’s militaries have always been at the forefront of communications technology. From trumpets and drums to signal flags and semaphores, anything that allows a military commander to relay orders to troops in the field quickly or call for reinforcements was quickly seized upon and optimized. So once radio was invented, it’s little wonder how quickly military commanders capitalized on it for field communications.
Radiotelegraph systems began showing up as early as the First World War, but World War II was the first real radio war, with every belligerent taking full advantage of the latest radio technology. Chief among these developments was the ability of signals in the high-frequency (HF) bands to reflect off the ionosphere and propagate around the world, an important capability when prosecuting a global war.
But not long after, in the less kinetic but equally dangerous Cold War period, military planners began to see the need to move more information around than HF radio could support while still being able to do it over the horizon. What they needed was the higher bandwidth of the higher frequencies, but to somehow bend the signals around the curvature of the Earth. What they came up with was a fascinating application of practical physics: meteor burst communications.
Blame It on Shannon
In practical terms, a radio signal that can carry enough information to be useful for digital communications while still being able to propagate long distances is a bit of a paradox. You can thank Claude Shannon for that, after he developed the idea of channel capacity from the earlier work of Harry Nyquist and Ralph Hartley. The resulting Hartley-Shannon Theorem states that the bit rate of a channel in a noisy environment is directly related to the bandwidth of the channel. In other words, the more data you want to stuff down a channel, the higher the frequency needs to be.
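Put in formula terms, the channel capacity C works out to C = B·log₂(1 + S/N), where B is the bandwidth in hertz and S/N is the signal-to-noise ratio. The link to frequency comes from the practical fact that wide swaths of spectrum are only really available higher up the dial, so more bits per second effectively means a higher carrier frequency.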
Unfortunately, that runs afoul of the physics of ionospheric propagation. Thanks to the physics of the interaction between radio waves and the charged particles between about 50 km and 600 km above the ground, the maximum frequency that can be reflected back toward the ground is about 30 MHz, which is the upper end of the HF band. Beyond that is the very-high frequency (VHF) band from 30 MHz to 300 MHz, which has enough bandwidth for an effective data channel but to which the ionosphere is essentially transparent.
Luckily, the ionosphere isn’t the only thing capable of redirecting radio waves. Back in the 1920s, Japanese physicist Hantaro Nagaoka observed that the ionospheric propagation of shortwave radio signals would change a bit during periods of high meteoric activity. That discovery largely remained dormant until after World War II, when researchers picked up on Nagaoka’s work and looked into the mechanism behind his observations.
Every day, the Earth sweeps up a huge number of meteoroids; estimates range from a million to ten billion. Most of those are very small, on the order of a few nanograms, with a few good-sized chunks in the tens of kilograms range mixed in. But the ones that end up being most interesting for communications purposes are the particles in the milligram range, in part because there are about 100 million such collisions on average every day, but also because they tend to vaporize in the E-layer of the ionosphere, between 80 and 120 km above the surface. The air at that altitude is dense enough to turn the incoming cosmic debris into a long, skinny trail of ions, but thin enough that the free electrons take a while to recombine into neutral atoms. It’s a short time — anywhere from 500 milliseconds to a few seconds — but it’s long enough to be useful.
A meteor trail from the annual Perseid shower, which peaks in early August. This is probably a bit larger than the optimum for MBC, but beautiful nonetheless. Source: John Flannery, CC BY-ND 2.0.
The other aspect of meteor trails formed at these altitudes that makes them useful for communications is their relative reflectivity. The E-layer of the ionosphere normally has on the order of 10⁷ electrons per cubic meter, a density that tends to refract radio waves below about 20 MHz. But meteor trails at this altitude can have densities as high as 10¹¹ to 10¹² electrons/m³. This makes the trails highly reflective to radio waves, especially at the higher frequencies of the VHF band.
In addition to the short-lived nature of meteor trails, daily and seasonal variations in the number of meteors complicate their utility for communications. The rotation of the Earth on its axis accounts for the diurnal variation, which tends to peak around dawn local time every day as the planet’s rotation and orbit are going in the same direction and the number of collisions increases. Seasonal variations occur because of the tilt of Earth’s axis relative to the plane of the ecliptic, where most meteoroids are concentrated. More collisions occur when the Earth’s axis is pointed in the direction of travel around the Sun, which is the second half of the year for the northern hemisphere.
Learning to Burst
Building a practical system that leverages these highly reflective but short-lived and variable mirrors in the sky isn’t easy, as shown by several post-war experimental systems. The first of these was attempted by the National Bureau of Standards in 1951, with a link between Cedar Rapids, Iowa, and Sterling, Virginia, a path length of about 1,250 km. The link was originally built to study propagation phenomena such as forward scatter and sporadic E, but the researchers noticed that meteor trails had a significant effect on their tests and switched their focus accordingly. That caught the attention of the US Air Force, which was in the market for a four-channel continuous teletype link to its base in Thule, Greenland. They got it, but only just barely, thanks to the limited technology of the time. The NBS also used the Iowa-to-Virginia link to study higher data rates by pointing highly directional rhombic antennas at each end of the connection at the same small patch of sky. They managed a whopping 3,200 bits per second this way, but only for the second or so that a meteor trail happened to appear.
The successes and failures of the NBS system made it clear that a useful system based on meteor trails would need to operate in burst mode, jamming data through the link for as long as it existed and then waiting for the next one. The NBS tested a burst-mode system in 1958 that used the 50-MHz band and offered a full-duplex link at 2,400 bits per second. The system used magnetic tape loops to buffer data and transmitters at both ends of the link that operated continually to probe for a path. Whenever the receiver at one end detected a sufficiently strong probe signal from the other end, the transmitter would start sending data. The Canadians got in on the MBC action too, with their JANET system, which used a similar dedicated probing channel and tape buffer; in 1954 they established a full-duplex teletype link between Ottawa and Nova Scotia at 1,300 bits per second with an error rate of only 1.5%.
In the late 1950s, Hughes developed a single-channel air-to-ground MBC system. This was significant not only because the equipment had gotten small enough to install on an airplane, but also because it really refined the burst-mode technology. The ground stations in the Hughes system periodically transmitted a 100-bit interrogation signal to probe for a path to the aircraft. The receiver on the ground listened for an acknowledgement from the plane, which turned the channel around and allowed the airborne transmitter to send a 100-bit data burst. The system managed a respectable 2,400 bps data rate, but suffered greatly from ground-based interference from TV stations and automotive ignition noise.
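To get a feel for the burst-mode idea, here’s a rough sketch in Python; it isn’t any real system’s protocol, just the basic loop of probing constantly and only emptying the buffer while a trail happens to be open:

import random

def trail_open():
    # Stand-in for "did our probe get acknowledged?" -- usable trails are rare
    return random.random() < 0.02

def burst_link(queue, probes=1000, burst_size=3):
    for t in range(probes):
        if not queue:
            break
        if trail_open():                # probe answered: a path exists right now
            sent = queue[:burst_size]   # jam through what we can
            del queue[:burst_size]
            print(f"probe {t}: trail! sent {len(sent)}, {len(queue)} still buffered")
        # otherwise keep probing; the data stays buffered (on tape loops, originally)

burst_link([f"msg-{i}" for i in range(12)])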
The SHAPE of Things to Come
Supreme HQ Allied Powers Europe (SHAPE), NATO’s European headquarters in the mid-60s. The COMET meteor-bounce system kept NATO commanders in touch with member-nation HQs via teletype. Source: NATO
The first major MBC system fielded during the Cold War was the Communications by Meteor Trails system, or COMET. It was used by the North Atlantic Treaty Organization (NATO) to link its far-flung outposts in member nations with Supreme Headquarters Allied Powers Europe, or SHAPE, located in Belgium. COMET took cues from the Hughes system, especially its error detection and correction scheme. COMET was a robust and effective MBC system that provided between four and eight teletype circuits depending on daily and seasonal conditions, each handling 60 words per minute.
COMET was in continuous use from the mid-1960s until well after the official end of the Cold War. By that point, secure satellite communications were nowhere near as prohibitively expensive as they had been at the beginning of the Space Age, and MBC systems became less critical to NATO. They weren’t retired, though, and COMET actually still exists, although rebranded as “Compact Over-the-Horizon Mobile Expeditionary Terminal.” These man-portable systems don’t use MBC; rather, they use high-power UHF and microwave transmitters to scatter signals off the troposphere. A small amount of the signal is reflected back to the ground, where high-gain antennas pick up the vanishingly weak signals.
Although not directly related to Cold War communications, it’s worth noting that there was a very successful MBC system fielded in the civilian space in the United States: SNOTEL. We’ve covered this system in some depth already, but briefly, it’s a network of stations in the western part of the USA with the critical job of monitoring the snowpack. A commercial MBC system connected the solar-powered monitoring stations, often in remote and rugged locations, to two different central bases. Taking advantage of diurnal meteor variations, each morning the master station would send a polling signal out to every remote, which would then send back the previous day’s data once a return path was opened. The system could collect data from 180 remote sites in just 20 minutes. It operated successfully from the mid-1970s until just recently, when pervasive cell technology and cheap satellite modems made the system obsolete.
Once upon a time, typing “www” at the start of a URL was as automatic as breathing. And yet, these days, most of us go straight to “hackaday.com” without bothering with those three letters that once defined the internet.
Have you ever wondered why those letters were there in the first place, and when exactly they became optional? Let’s dig into the archaeology of the early web and trace how this ubiquitous prefix went from essential to obsolete.
Where Did You Go?
The first website didn’t bother with any of that www. nonsense! Credit: author screenshot
It may shock you to find out that the “www.” prefix was actually never really a key feature or necessity at all. To understand why, we need only contemplate the very first website, created by Tim Berners-Lee at CERN in 1990. Running on a NeXT workstation employed as a server, the site could be accessed at a simple URL: “http://info.cern.ch/”—no WWW needed. Berners-Lee had invented the World Wide Web and named it as such, but he hadn’t included the prefix in his URL at all. So where did it come from?
McDonald’s were ahead of the times – in 1999, their website featured the “mcdonalds.com” domain, no prefix, though you did need it to actually get to the site. Credit: screenshot via Web Archive
As it turns out, the www prefix largely came about due to prevailing trends on the early Internet. It had become typical to separate out different services on a domain by using subdomains. For example, a company might offer FTP access at ftp.company.com, while mail would pass through the smtp.company.com subdomain. In turn, when it came time to establish a server to run a World Wide Web page, network administrators followed existing convention. Thus, they would put the WWW server on the www. subdomain, creating http://www.company.com.
This soon became standard practice, and in short order, it was expected by members of the broader public as they joined the Internet in the late 1990s. It wasn’t long before end users were ignoring the http:// prefix at the start of domains, as web browsers didn’t really need you to type that in. However, www. had more of a foothold in the public consciousness. Along with “.com”, it became an obvious way for companies to highlight their fancy new website in their public-facing marketing materials. For many years, this was simply how things were done. Users expected to type “www” before a domain name, and thus it became an ingrained part of the culture.
Eventually, though, trends shifted. For many domains, web traffic was the dominant use, so it became somewhat unnecessary to fold web traffic under its own subdomain. There was also a technological shift when the HTTP/1.1 protocol was introduced in 1999, with the “Host” header enabling multiple domains to be hosted on a single server. This, along with tweaks to DNS, also made it trivial to ensure “www.yoursite.com” and “yoursite.com” went to the same place. Beyond that, fashion-forward companies started dropping the leading www. for a cleaner look in marketing. In time, this became the norm, with “www.” soon looking old hat.
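If you want to see the Host header at work, a quick sketch with Python’s standard library will do. The hostnames below are placeholders; in practice you’d point this at a server you control that answers for both names:

import http.client

SERVER = "example.com"   # one physical server, one IP address

for host in ("example.com", "www.example.com"):
    conn = http.client.HTTPConnection(SERVER, 80, timeout=10)
    conn.request("GET", "/", headers={"Host": host})   # same server, different Host
    resp = conn.getresponse()
    print(f"{host}: HTTP {resp.status} {resp.reason}")
    conn.close()

The server reads the Host header and decides which site to serve, which is what made hanging the bare domain and the www. subdomain off the same machine a non-event.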
Visit microsoft.com in Chrome, and you might think that’s where you really are… Credit: author screenshot
Of course, today, “www” is mostly dying out, at least as far as the industry and most end users are concerned. Few of us spend much time typing in URLs by hand these days, and fewer still could remember the last time we felt the need to include “www.” at the beginning. Mind you, if you want to make your business look out of touch, you could still include www. on your marketing materials, but people might think you’re an old fuddy duddy.
…but you’re not! Click in the address bar, and Chrome will show you the real URL, www. and all. Embarrassing! Credit: author screenshot
Hackaday, though? We rock without the prefix. Cutting-edge out here, folks. Credit: author screenshot
Using the www. prefix can still have some value when it comes to cookies, however. If you run your site on the bare yoursite.com and scope a cookie to that domain, it gets sent to all of your subdomains, too. However, if your main page is set up at http://www.yoursite.com, it’s effectively on its own subdomain, along with any others you might have… like store.yoursite.com, blog.yoursite.com, and so on. This allows cookies to be more effectively managed across a site spanning multiple subdomains.
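A quick sketch with Python’s standard library shows the difference; the cookie name, value, and domain here are just placeholders:

from http.cookies import SimpleCookie

# Host-only cookie, as set from www.yoursite.com with no Domain attribute:
# browsers send it back only to www.yoursite.com.
host_only = SimpleCookie()
host_only["session"] = "abc123"

# Domain cookie, explicitly scoped to yoursite.com: it rides along to
# store.yoursite.com, blog.yoursite.com, and every other subdomain too.
site_wide = SimpleCookie()
site_wide["session"] = "abc123"
site_wide["session"]["domain"] = "yoursite.com"

print(host_only.output())   # Set-Cookie: session=abc123
print(site_wide.output())   # Set-Cookie: session=abc123; Domain=yoursite.com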
In any case, most browsers have taken a stance against the significance of “www”. Chrome, Safari, Firefox, and Edge all hide the prefix even when you are technically visiting a website that does still use the www. subdomain (like http://www.microsoft.com). You can try it yourself in Chrome—head over to a www. site and watch as the prefix disappears from the address bar. If you really want to know whether you’re on a www subdomain or not, though, you can click in the address bar and it will give you the full URL, http:// or https:// and all.
The “www” prefix stands as a reminder that the internet is a living, evolving thing. Over time, technical necessities become conventions, conventions become habits, and habits eventually fade away when they no longer serve a purpose. Yet we still see those three letters pop up on the Web now and then, a digital vestigial organ from the early days of the web. The next time you mindlessly type a URL without those three Ws, spare a thought for this small piece of internet history that shaped how we access information for decades. Largely gone, but not yet quite forgotten.
There was a time when each and every printer and typesetter had its own quirky language. If you had a word processor from a particular company, it worked with the printers from that company, and that was it. That was the situation in the 1970s when some engineers at Xerox PARC — a great place for innovation, but one with a spotty track record for commercialization — realized there should be a better answer.
That answer would be Interpress, a language for controlling Xerox laser printers. Keep in mind that in 1980, a laser printer could run anywhere from $10,000 to $100,000 and was a serious investment. John Warnock and his boss, Chuck Geschke, tried for two years to commercialize Interpress. They failed.
So the two formed a company: Adobe. You’ve heard of them? They started out with the idea of making laser printers, but eventually realized it would be a better idea to sell technology into other people’s laser printers and that’s where we get PostScript.
Early PostScript and the Birth of Desktop Publishing
PostScript is very much like Forth, with words made specifically for page layout and laser printing. There were several key selling points that made the system successful.
First, you could easily obtain the specifications if you wanted to write a printer driver. Apple decided to use it on their LaserWriter. Of course, that meant the printer had a more powerful computer in it than most of the Macs it connected to, but for $7,000 maybe that’s expected.
Second, any printer maker could license PostScript for use in their device. Why spend a lot of money making your own when you could just buy PostScript off the shelf?
Finally, PostScript allowed device independence. If you took a PostScript file and sent it to a 300 DPI laser printer, you got nice output. If you sent it to a 2400 DPI typesetter, you got even nicer output. This was a big draw, since a rasterized image was either going to look bad on high-resolution devices or have a huge file size in an era where huge files were painful to deal with. Even a page at 300 DPI is fairly large: an 8.5×11-inch page works out to roughly 2,550×3,300 pixels, or about a megabyte even at one bit per pixel.
If you bought a Mac and a LaserWriter you only needed one other thing: software. But since the PostScript spec was freely available, software was possible. A company named Aldus came out with PageMaker and invented the category of desktop publishing. Adding fuel to the fire, typesetting giant Linotype came out with a machine that accepted PostScript, so you could go from a computer screen to proofs to a finished print job with one file.
If you weren’t alive — or too young to pay attention — during this time, you may not realize what a big deal this was. Prior to the desktop publishing revolution, computer output was terrible. You might mock something up in a text file and print it on a daisy wheel printer, but eventually, someone had to make something that was “camera-ready” to make real printing plates. The kind of things you can do in a minute in any word processor today took a ton of skilled labor back in those days.
Take Two
Of course, you have to innovate. Adobe did try to promote Display PostScript in the late 1980s as a way to drive screens. The NeXT used this system. It was smart, but a bit slow for the hardware of the day. Also, Adobe wanted licensing fees, which had worked well for printers, but there were cheaper alternatives available for displays by the time Display PostScript arrived.
In 1991, Adobe released PostScript Level 2 — making the old PostScript into “Level 1” retroactively. It had all the improvements you would expect in a second version. It was faster and crashed less. It had better support for things like color separation and handling compressed images. It also worked better with oddball and custom fonts, and the printer could cache fonts and graphics.
Remember how releasing the spec helped the original PostScript? For Level 2, releasing it early caused a problem. Competitors started releasing features for Level 2 before Adobe. Oops.
They finally released PostScript 3. (And dropped the “Level”.) This allowed for 12-bit colors instead of 8-bit. It also supported PDF files.
PDF?
While PostScript is a language for controlling a printer, PDF is set up as a page description language. It focuses on what the page looks like and not how to create the page. Of course, this is somewhat semantics. You can think of a PostScript file as a program that drives a Raster Image Processor (RIP) to draw a page. You can think of a PDF as somewhat akin to a compiled version of that program that describes what the program would do.
Up to PDF 1.4, released in 2001, everything you could do in a PDF file could be done in PostScript. But with PDF 1.4 there were some new things that PostScript didn’t have. In particular, PDFs support layers and transparency. Today, PDF rules the roost and PostScript is largely static and fading.
What’s Inside?
Like we said, a PostScript file is a lot like a Forth program. There’s a comment at the front (%!PS-Adobe-3.0) that tells you it is a PostScript file and the level. Then there’s a prolog that defines functions and fonts. The body section uses words like moveto, lineto, and so on to build up a path that can be stroked, filled, or clipped. You can also do loops and conditionals — PostScript is Turing-complete. A trailer appears at the end of each page and usually has a command to render the page (showpage), which may start a new page.
A simple PostScript file running in GhostScript
A PDF file has a similar structure with a %PDF-1.7 comment. The body contains objects that can refer to pages, dictionaries, references, and image or font streams. There is also a cross-reference table to help find the objects and a trailer that points to the root object. That object brings in other objects to form the entire document. There’s no real code execution in a basic PDF file.
If you want to play with PostScript, there’s a good chance your printer might support it. If not, your printer drivers might. However, you can also grab a copy of GhostScript and write PostScript programs all day. Use GSView to render them on the screen or print them to any printer you can connect to. You can even create PDF files using the tools.
For example, try this:
%!PS
% Draw square
100 100 moveto
100 0 rlineto
0 100 rlineto
-100 0 rlineto
closepath
stroke
% Draw circle
150 150 50 0 360 arc
stroke
% Draw text "Hackaday" centered in the circle
/Times-Roman findfont 12 scalefont setfont % Choose font and size
(Hackaday) stringwidth pop 2 div % Calculate half text width
150 exch sub % X = center - half width
150 % Y = vertical center
moveto
(Hackaday) show
showpage
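And if you’d rather end up with a PDF than marks on paper, Ghostscript’s pdfwrite device will happily consume the same program. Here’s a minimal sketch, assuming the listing above is saved as hackaday.ps and the gs binary is on your path:

import subprocess

subprocess.run([
    "gs",
    "-dBATCH", "-dNOPAUSE",        # run non-interactively and exit when finished
    "-sDEVICE=pdfwrite",           # render to the PDF writer instead of a printer
    "-sOutputFile=hackaday.pdf",   # the PDF to produce
    "hackaday.ps",                 # the PostScript program above
], check=True)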
If you want to hack on the code or write your own, here’s the documentation. Think it isn’t really a programming language? [Nicolas] would disagree.
It’s amazing how quickly medical science made radiography one of its main diagnostic tools. Medicine had barely emerged from its Dark Age of bloodletting and the four humours when X-rays were discovered, and the realization that the internal structure of our bodies could cast shadows of this mysterious “X-Light” opened up diagnostic possibilities that went far beyond the educated guesswork and exploratory surgery doctors had relied on for centuries.
The problem is, X-rays are one of those things that you can’t see, feel, or smell, at least mostly; X-rays cause visible artifacts in some people’s eyes, and the pencil-thin beam of a CT scanner can create a distinct smell of ozone when it passes through the nasal cavity — ask me how I know. But to be diagnostically useful, the varying intensities created by X-rays passing through living tissue need to be translated into an image. We’ve already looked at how X-rays are produced, so now it’s time to take a look at how X-rays are detected and turned into medical miracles.
Taking Pictures
For over a century, photographic film was the dominant way to detect medical X-rays. In fact, years before Wilhelm Conrad Röntgen’s first systematic study of X-rays in 1895, fogged photographic plates during experiments with a Crookes tube were among the first indications of their existence. But it wasn’t until Röntgen convinced his wife to hold her hand between one of his tubes and a photographic plate to create the first intentional medical X-ray that the full potential of radiography could be realized.
“Hand mit Ringen” by W. Röntgen, December 1895. Public domain.
The chemical mechanism that makes photographic film sensitive to X-rays is essentially the same as the process that makes light photography possible. X-ray film is made by depositing a thin layer of photographic emulsion on a transparent substrate, originally celluloid but later polyester. The emulsion is a mixture of high-grade gelatin, a natural polymer derived from animal connective tissue, and silver halide crystals. Incident X-ray photons ionize the halide ions, liberating electrons within the crystals that reduce silver ions to atomic silver. This creates a latent image on the film that is developed by chemically converting sensitized silver halide crystals to metallic silver grains and removing all the unsensitized crystals.
Other than in the earliest days of medical radiography, direct X-ray imaging onto photographic emulsions was rare. While photographic emulsions can be exposed by X-rays, it takes a lot of energy to get a good image with proper contrast, especially on soft tissues. This became a problem as more was learned about the dangers of exposure to ionizing radiation, leading to the development of screen-film radiography.
In screen-film radiography, X-rays passing through the patient’s tissues are converted to light by one or more intensifying screens. These screens are made from plastic sheets coated with a phosphorescent material that glows when exposed to X-rays. Calcium tungstate was common back in the day, but rare earth phosphors like gadolinium oxysulfide became more popular over time. Intensifying screens were attached to the front and back covers of light-proof cassettes, with double-emulsion film sandwiched between them; when exposed to X-rays, the screens would glow briefly and expose the film.
By turning one incident X-ray photon into thousands or millions of visible light photons, intensifying screens greatly reduce the dose of radiation needed to create diagnostically useful images. That’s not without its costs, though, as the phosphors tend to spread out each X-ray photon across a physically larger area. This results in a loss of resolution in the image, which in most cases is an acceptable trade-off. When more resolution is needed, single-screen cassettes can be used with one-sided emulsion films, at the cost of increasing the X-ray dose.
Wiggle Those Toes
Intensifying screens aren’t the only place where phosphors are used to detect X-rays. Early on in the history of radiography, doctors realized that while static images were useful, continuous images of body structures in action would be a fantastic diagnostic tool. Originally, fluoroscopy was performed directly, with the radiologist viewing images created by X-rays passing through the patient onto a phosphor-covered glass screen. This required an X-ray tube engineered to operate with a higher duty cycle than radiographic tubes and had the dual disadvantages of much higher doses for the patient and the need for the doctor to be directly in the line of fire of the X-rays. Cataracts were enough of an occupational hazard for radiologists that safety glasses using leaded glass lenses were a common accessory.
How not to test your portable fluoroscope. The X-ray tube is located in the upper housing, while the image intensifier and camera are below. The machine is generally referred to as a “C-arm” and is used in the surgery suite and for bedside pacemaker placements. Source: Nightryder84, CC BY-SA 3.0.
One ill-advised spin-off of medical fluoroscopy was the shoe-fitting fluoroscopes that started popping up in shoe stores in the 1920s. Customers would stick their feet inside the machine and peer at a fluorescent screen to see how well their new shoes fit. It was probably not terribly dangerous for the once-a-year shoe shopper, but pity the shoe salesman who had to peer directly into a poorly regulated X-ray beam eight hours a day to show every Little Johnny’s mother how well his new Buster Browns fit.
As technology improved, image intensifiers replaced direct screens in fluoroscopy suites. Image intensifiers were vacuum tubes with a large input window coated with a fluorescent material such as zinc-cadmium sulfide or sodium-cesium iodide. The phosphors convert X-rays passing through the patient to visible light photons, which are immediately converted to photoelectrons by a photocathode made of cesium and antimony. The electrons are focused by coils and accelerated across the image intensifier tube by a high-voltage field on a cylindrical anode. The electrons pass through the anode and strike a phosphor-covered output screen, which is much smaller in diameter than the input screen. Incident X-ray photons are greatly amplified by the image intensifier, making a brighter image with a lower dose of radiation.
Originally, the radiologist viewed the output screen using a microscope, which at least put a little more hardware between his or her eyeball and the X-ray source. Later, mirrors and lenses were added to project the image onto a screen, moving the doctor’s head out of the direct line of fire. Later still, analog TV cameras were added to the optical path so the images could be displayed on high-resolution CRT monitors in the fluoroscopy suite. Eventually, digital cameras and advanced digital signal processing were introduced, greatly streamlining the workflow for the radiologist and technologists alike.
Get To The Point
So far, all the detection methods we’ve discussed fall under the general category of planar detectors, in that they capture an entire 2D shadow of the X-ray beam after having passed through the patient. While that’s certainly useful, there are cases where the dose from a single, well-defined volume of tissue is needed. This is where point detectors come into play.
In medical X-ray equipment, point detectors often rely on some of the same gas-discharge technology that DIYers use to build radiation detectors at home. Geiger tubes and ionization chambers measure the current created when X-rays ionize a low-pressure gas inside an electric field. Geiger tubes generally use a much higher voltage than ionization chambers, and tend to be used more for radiological safety, especially in nuclear medicine applications, where radioisotopes are used to diagnose and treat diseases. Ionization chambers, on the other hand, were often used as a sort of autoexposure control for conventional radiography. Tubes were placed behind the film cassette holders in the exam tables of X-ray suites and wired into the control panels of the X-ray generators. When enough radiation had passed through the patient, the film, and the cassette into the ion chamber to yield a correct exposure, the generator would shut off the X-ray beam.
Another kind of point detector for X-rays and other kinds of radiation is the scintillation counter. These use a crystal, often cesium iodide or sodium iodide doped with thallium, that releases a few visible light photons when it absorbs ionizing radiation. The faint pulse of light is greatly amplified by one or more photomultiplier tubes, creating a pulse of current proportional to the amount of radiation. Nuclear medicine studies use a device called a gamma camera, which has a hexagonal array of PM tubes positioned behind a single large crystal. A patient is injected with a radioisotope such as the gamma-emitting technetium-99m, which accumulates mainly in the bones. The emitted gamma rays are collected by the gamma camera, which derives positional information from the relative intensity of the light pulse seen by each PM tube, slowly building a ghostly skeletal map of the patient by measuring where the ⁹⁹ᵐTc accumulated.
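Roughly speaking, the position arithmetic is just a weighted average: scale each tube’s location by the signal it saw and take the centroid. The tube coordinates and signals below are made-up numbers, purely to show the idea:

# Each entry: (x position in cm, y position in cm, relative signal from that PM tube)
pm_tubes = [
    (0.0, 0.0, 0.9),
    (5.0, 0.0, 2.1),
    (2.5, 4.3, 1.2),
]

total = sum(s for _, _, s in pm_tubes)
x = sum(px * s for px, _, s in pm_tubes) / total
y = sum(py * s for _, py, s in pm_tubes) / total
print(f"estimated flash position: ({x:.2f} cm, {y:.2f} cm), summed signal {total:.2f}")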
Going Digital
Despite dominating the industry for so long, the days of traditional film-based radiography were clearly numbered once solid-state image sensors began appearing in the 1980s. While it was reliable and gave excellent results, film development required a lot of infrastructure and expense, and resulted in bulky films that required a lot of space to store. The savings from doing away with all the trappings of film-based radiography, including the darkrooms, automatic film processors, chemicals, silver recycling, and often hundreds of expensive film cassettes, is largely what drove the move to digital radiography.
After briefly flirting with phosphor plate radiography, where a sensitized phosphor-coated plate was exposed to X-rays and then “developed” by a special scanner before being recharged for the next use, radiology departments embraced solid-state sensors and fully digital image capture and storage. Solid-state sensors come in two flavors: indirect and direct. Indirect sensor systems use a large matrix of photodiodes on amorphous silicon to measure the light given off by a scintillation layer directly above it. It’s basically the same thing as a film cassette with intensifying screens, but without the film.
Direct sensors, on the other hand, don’t rely on converting the X-ray into light. Rather, a large flat selenium photoconductor is used; X-rays absorbed by the selenium cause electron-hole pairs to form, which migrate to a matrix of fine electrodes on the underside of the sensor. The current across each pixel is proportional to the amount of radiation received, and can be read pixel-by-pixel to build up a digital image.
Publisher Neowiz and developer Round8 Studio have released the official story trailer for Lies of P: Overture, a DLC that will serve as a prequel to Lies of P and will launch in the third quarter of 2025 for PC via Steam, macOS, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X|S.
Lies of P: Overture is a dramatic prelude to the acclaimed soulslike action RPG Lies of P. The story transports you to the city of Krat in the final days of its hauntingly beautiful late-19th-century Belle Époque. You’ll follow a Legendary Stalker (a mysterious guide of sorts) on the brink of the Puppet Frenzy massacre, through untold stories and chilling secrets.
As Geppetto’s deadly puppet, you’ll travel through Krat and its surroundings, uncover hidden stories, and face epic battles that will shape the past and future of Lies of P.
As Geppetto’s puppet, you come across a mysterious artifact that takes you back to Krat in the final days of its grandeur. In the shadow of an impending tragedy, your mission is to explore the past and uncover its dark secrets, wrapped in surprises, loss, and revenge.
The choices you make will ripple through the past and present of the world of Lies of P, revealing hidden truths and leaving lasting consequences.
Embark on an unforgettable adventure where the symphony of steel clashes with the haunting melody of the unknown. Dare to unravel the mysteries of the past, for at the heart of the darkness lies the key to unlocking the secrets of a timeless story reborn.
You can read our review of Lies of P at this link.
“Cross the boundaries of time and share our excitement as we take you on a trip back to the beginnings of the Lies of P story. I hope you’ll agree that the long road to bringing you this news was worth it! The first trailer for Lies of P: Overture has just been unveiled at State of Play, and it’s packed with incredible revelations.
This was a historic debut for us, and it’s all the more gratifying to be able to share it with our fans, with veterans of soulslike games, and with those joining us for the first time.
As you can guess from the title, Lies of P: Overture will take you back in time to discover the hidden stories of Krat. Remember the moment when you, controlling Geppetto’s most ambitious masterpiece, awoke in the midst of the Puppet Frenzy, the massacre that devastated the city entirely?
Now you’ll be able to venture back in time and live through the heartbreaking journey that gave rise to that catastrophic moment. With Lies of P: Overture, we set out to refine, rework, and complete the story as we originally conceived it.
In film, this might be called the director’s cut. That version is usually different from the theatrical release, and it typically includes additional scenes the filmmaker feels complete the work. With Lies of P: Overture, I wonder whether we were subconsciously trying to create a developers’ cut of Lies of P.
Our production team had many stories and features they wanted to explore, so we kept pursuing those ideas right to the end. Figuring out how to weave those elements into Lies of P: Overture was a big challenge for me as director. With this new look back, we focused on honoring the love and support we received for Lies of P, while also telling the story we had always wanted to share. But rest assured; we’ll keep giving it our all until the very end.
I also want to take this opportunity to express our gratitude for all the feedback and advice we’ve received from the citizens of Krat since the launch of Lies of P. Your invaluable comments have helped our developers polish Lies of P and its stunning prequel, coming this year to PlayStation 4, PlayStation 5, and PlayStation 5 Pro.
To our fans and to the wonderful community of players: you complete us. Join our team in completing the incredible story of Lies of P.
And, as always, thank you very much.”
About Lies of P
Inspired by the familiar story of Pinocchio, Lies of P is a souls-style action game set in the dark, Belle Époque-inspired city of Krat. Once a beautiful city, Krat has become a living nightmare, with dangerous puppets running amok and a plague sweeping the land.
Play as P, a puppet who must fight his way across the city on his tireless quest to find Geppetto and finally become human. Lies of P features an elegant world full of tension, deep combat and character customization systems, and a captivating story with intriguing narrative choices, where the more lies are told, the more human P becomes. Just remember: in a world full of lies, you can trust no one.
“Play as Pinocchio, a puppet mechanoid, and fight through everything in your path to find this mysterious person. But don’t expect any help along the way, and don’t make the mistake of trusting anyone. You must always lie to others if you wish to become human.
You wake up in an abandoned train station in Krat, a city overwhelmed by madness and bloodlust. In front of you lies a single note that reads: “Find Mr. Geppetto. He’s here in the city.”
Inspired by the familiar story of Pinocchio, Lies of P is a soulslike action game set in a cruel, dark Belle Époque world. All of humanity is lost in a once-beautiful city that has become a living hell filled with unspeakable horrors.
Lies of P offers an elegant world full of tension, a deep combat system, and a gripping story. Guide Pinocchio and experience his relentless journey to become human.”
Features:
A dark fairy tale retold – The timeless story of Pinocchio has been reimagined with dark, striking visuals. Set in the fallen city of Krat, Pinocchio fights desperately to become human against all odds.
Visual concept – The city of Krat was inspired by Europe’s Belle Époque era (the late 19th and early 20th centuries) and is the epitome of a collapsed city stripped of its prosperity.
‘Lying’ quests and multiple endings – Experience interconnected procedural quests that unfold depending on how you lie. These choices will affect how the story ends.
Weapon crafting system – You can combine weapons in many ways to create something entirely new. Experiment to find the best combinations and make something truly special.
Special ability system – Since Pinocchio is a puppet, you can swap out parts of his body to gain new abilities and, with luck, an edge in battle. But not all upgrades are for fighting; they can also provide a variety of other unique and useful features.
NEOWIZ launched Lies of P worldwide on September 19, 2023. The standard and deluxe editions are available for US$59.99 and US$69.99 respectively, digitally or physically, on PlayStation 4, PlayStation 5, Xbox One, Xbox Series X|S, and PC via Steam and participating retailers.
The standard edition of Lies of P is also available through the Mac App Store for Apple silicon models for US$59.99.
The Epic Games Store announced today that from February 20 to 27 it is giving away two titles on its store: World War Z: Aftermath and Garden Story for PC.
In addition, STAR WARS Knights of the Old Republic I and STAR WARS Knights of the Old Republic II can be claimed through the Epic Games Store mobile app, but below you’ll find the direct links to claim them from a PC.
STAR WARS Knights of the Old Republic I (Android)|Base Game
STAR WARS Knights of the Old Republic I (iOS)|Base Game
STAR WARS Knights of the Old Republic II – The Sith Lords (Android)|Base Game
STAR WARS Knights of the Old Republic II – The Sith Lords (iOS)|Base Game
You can check a list of all the titles given away by the Epic Games Store at this link.
About World War Z
World War Z: Aftermath is the ultimate co-op zombie shooter based on the Paramount Pictures blockbuster, and the next step for World War Z, the hit that has captivated more than 20 million players. Turn the tide of the zombie apocalypse on consoles and PC with full cross-play.
Join up to three friends, or play solo with AI companions, against hordes of insatiable zombies in an intense episodic story that unfolds across newly ravaged locations around the world: Rome, Vatican City, and Russia’s Kamchatka Peninsula.
New stories from a world at war
All-new story episodes in Rome, Vatican City, and Russia’s easternmost region, Kamchatka. Play as both new and familiar characters to take on the undead with a brutal new melee combat system, decimating zekes with unique moves, perks, and dual-wielded weapons like the sickle and knife. Fend off new undead monstrosities, including swarms of ravenous rats that will unleash chaos on your team.
Detailed progression and a new perspective
Experience a new perspective with Aftermath’s immersive first-person mode. Level up eight unique classes: Gunslinger, Hellraiser, Slasher, Medic, Fixer, Exterminator, Dronemaster, and a new one, the Vanguard, each with its own perks and playstyles.
Customize your weapons to survive any challenge, and conquer new daily missions with special modifiers for extra rewards.
About the Epic Games Store
Epic Games is an American company founded in 1991 by its CEO, Tim Sweeney. It is headquartered in Cary, North Carolina, and has dozens of offices around the world. Epic is a leading company in interactive entertainment and a provider of 3D engine technology.
Epic Games operates one of the world’s largest games, Fortnite, a vibrant ecosystem of social entertainment experiences that includes first-party games such as Fortnite Battle Royale, LEGO Fortnite, Rocket Racing, and Fortnite Festival, as well as creator-made experiences.
Epic Games has more than 800 million accounts and over 6 billion friend connections across Fortnite, Fall Guys, Rocket League, and the Epic Games Store. The company also develops Unreal Engine, which powers many of the world’s leading games and has been adopted by other industries as well, including film and television, broadcasting and live events, architecture, automotive, and simulation.
Through Fortnite, Unreal Engine, the Epic Games Store, and Epic Games Online Services, Epic provides a complete digital ecosystem for creators and developers to build, distribute, and operate games and other content.