Once upon a time, typing “www” at the start of a URL was as automatic as breathing. And yet, these days, most of us go straight to “hackaday.com” without bothering with those three letters that once defined the internet.
Have you ever wondered why those letters were there in the first place, and when exactly they became optional? Let’s dig into the archaeology of the early web and trace how this ubiquitous prefix went from essential to obsolete.
Where Did You Go?
The first website didn’t bother with any of that www. nonsense! Credit: author screenshot
It may shock you to find out that the “www.” prefix was never really a key feature or necessity at all. To understand why, we need only contemplate the very first website, created by Tim Berners-Lee at CERN in 1990. Running on a NeXT workstation employed as a server, the site could be accessed at a simple URL: “http://info.cern.ch/”—no WWW needed. Berners-Lee had invented the World Wide Web and named it as such, but he hadn’t included the prefix in his URL at all. So where did it come from?
McDonald’s were ahead of the times – in 1999, their website featured the “mcdonalds.com” domain, no prefix, though you did need it to actually get to the site. Credit: screenshot via Web Archive
As it turns out, the www prefix largely came about due to prevailing trends on the early Internet. It had become typical to separate out different services on a domain by using subdomains. For example, a company might offer FTP access at http://ftp.company.com, while the SMTP server would be reached via the smtp.company.com subdomain. In turn, when it came time to set up a server for a World Wide Web page, network administrators followed the existing convention: they put the WWW server on the www. subdomain, creating http://www.company.com.
This soon became standard practice, and in short order it was expected by members of the broader public as they joined the Internet in the late 1990s. It wasn’t long before end users were ignoring the http:// prefix at the start of domains, as web browsers didn’t really need you to type that in. However, www. had more of a foothold in the public consciousness. Along with “.com”, it became an obvious way for companies to highlight their fancy new website in their public-facing marketing materials. For many years, this was simply how things were done. Users expected to type “www” before a domain name, and thus it became an ingrained part of the culture.
Eventually, though, trends shifted. For many domains, web traffic was the sole, or at least dominant, use, so it became somewhat unnecessary to fold web traffic under its own subdomain. There was also a technological shift when the HTTP/1.1 protocol was introduced in 1999, with the “Host” header enabling multiple domains to be hosted on a single server. This, along with tweaks to DNS, also made it trivial to ensure “www.yoursite.com” and “yoursite.com” went to the same place. Beyond that, fashion-forward companies started dropping the leading www. for a cleaner look in marketing. Eventually, this would become the norm, with “www.” soon looking old hat.
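To get a feel for what that Host header change made possible, here’s a minimal sketch (Python standard library only, hypothetical domain names) of one listener answering for both the bare domain and its www. twin:
# One server, many hostnames: HTTP/1.1 clients send a Host header,
# so a single listener can tell yoursite.com and www.yoursite.com apart.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HostAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]  # strip any :port
        body = f"You asked for {host}{self.path}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # With DNS pointing both names at this machine, both hostnames land here.
    HTTPServer(("", 8080), HostAwareHandler).serve_forever()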
Visit microsoft.com in Chrome, and you might think that’s where you really are… Credit: author screenshot
Of course, today, “www” is mostly dying out, at least as far as the industry and most end users are concerned. Few of us spend much time typing in URLs by hand these days, and fewer still could remember the last time we felt the need to include “www.” at the beginning. Naturally, if you want to make your business look out of touch, you could still include www. on your marketing materials, but people might think you’re an old fuddy-duddy.
…but you’re not! Click in the address bar, and Chrome will show you the real URL, www. and all. Embarrassing! Credit: author screenshot
Hackaday, though? We rock without the prefix. Cutting-edge out here, folks. Credit: author screenshot
Using the www. prefix can still have some value when it comes to cookies, however. If you serve your site from the bare yoursite.com and set a cookie for that domain, the cookie can be sent to all of its subdomains. However, if your main page is set up at http://www.yoursite.com, it’s effectively on its own subdomain, alongside any others you might have… like store.yoursite.com, blog.yoursite.com, and so on. This allows cookies to be more effectively managed across a site spanning multiple subdomains.
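A rough sketch of the difference, using Python’s standard library to build the headers (domain names hypothetical):
# A cookie explicitly scoped to the registrable domain reaches every subdomain.
from http.cookies import SimpleCookie

wide = SimpleCookie()
wide["session"] = "abc123"
wide["session"]["domain"] = ".yoursite.com"
print(wide.output())    # Set-Cookie: session=abc123; Domain=.yoursite.com

# A cookie set by www.yoursite.com without a Domain attribute is host-only:
# it stays on www.yoursite.com and is never sent to store. or blog.
narrow = SimpleCookie()
narrow["session"] = "abc123"
print(narrow.output())  # Set-Cookie: session=abc123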
In any case, most browsers have taken a stance against the significance of “www”. Chrome, Safari, Firefox, and Edge all hide the prefix even when you are technically visiting a website that does still use the www. subdomain (like http://www.microsoft.com). You can try it yourself in Chrome—head over to a www. site and watch as the prefix disappears from the address bar. If you really want to know whether you’re on a www subdomain or not, though, you can click into the address bar and it will give you the full URL, http:// or https:// and all.
The “www” prefix stands as a reminder that the internet is a living, evolving thing. Over time, technical necessities become conventions, conventions become habits, and habits eventually fade away when they no longer serve a purpose. Yet we still see those three letters pop up on the Web now and then, a digital vestigial organ from the early days of the web. The next time you mindlessly type a URL without those three Ws, spare a thought for this small piece of internet history that shaped how we access information for decades. Largely gone, but not yet quite forgotten.
There was a time when each and every printer and typesetter had its own quirky language. If you had a word processor from a particular company, it worked with the printers from that company, and that was it. That was the situation in the 1970s when some engineers at Xerox PARC — a great place for innovation but a spotty track record for commercialization — realized there should be a better answer.
That answer would be Interpress, a language for controlling Xerox laser printers. Keep in mind that in 1980, a laser printer could run anywhere from $10,000 to $100,000 and was a serious investment. John Warnock and his boss, Chuck Geschke, tried for two years to commercialize Interpress. They failed.
So the two formed a company: Adobe. You’ve heard of them? They started out with the idea of making laser printers, but eventually realized it would be a better idea to sell technology into other people’s laser printers and that’s where we get PostScript.
Early PostScript and the Birth of Desktop Publishing
PostScript is very much like Forth, with words made specifically for page layout and laser printing. There were several key selling points that made the system successful.
First, you could easily obtain the specifications if you wanted to write a printer driver. Apple decided to use it on their LaserWriter. Of course, that meant the printer had a more powerful computer in it than most of the Macs it connected to, but for $7,000 maybe that’s expected.
Second, any printer maker could license PostScript for use in their device. Why spend a lot of money making your own when you could just buy PostScript off the shelf?
Finally, PostScript allowed device independence. If you took a PostScript file and sent it to a 300 DPI laser printer, you got nice output. If you sent it to a 2400 DPI typesetter, you got even nicer output. This was a big draw since a rasterized image was either going to look bad on high-resolution devices or have a huge file size in an era when huge files were painful to deal with. Even a page at 300 DPI is fairly large.
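Some quick back-of-the-envelope numbers make the point, assuming a plain one-bit-per-pixel US Letter page:
# Rough size of a rasterized US Letter page at one bit per pixel.
def page_megabytes(dpi, width_in=8.5, height_in=11, bits_per_pixel=1):
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * bits_per_pixel / 8 / 1e6

print(f"300 DPI:  {page_megabytes(300):.1f} MB")    # about 1 MB
print(f"2400 DPI: {page_megabytes(2400):.1f} MB")   # about 67 MB
A few short PostScript operators describing the same page, by contrast, weigh almost nothing and render at whatever resolution the output device can manage.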
If you bought a Mac and a LaserWriter you only needed one other thing: software. But since the PostScript spec was freely available, software was possible. A company named Aldus came out with PageMaker and invented the category of desktop publishing. Adding fuel to the fire, typesetting giant Linotype came out with a machine that accepted PostScript, so you could go from a computer screen to proofs to a finished print job with one file.
If you weren’t alive — or too young to pay attention — during this time, you may not realize what a big deal this was. Prior to the desktop publishing revolution, computer output was terrible. You might mock something up in a text file and print it on a daisy wheel printer, but eventually, someone had to make something that was “camera-ready” to make real printing plates. The kind of things you can do in a minute in any word processor today took a ton of skilled labor back in those days.
Take Two
Of course, you have to innovate. Adobe did try to promote Display PostScript in the late 1980s as a way to drive screens. The NeXT used this system. It was smart, but a bit slow for the hardware of the day. Also, Adobe wanted licensing fees, which had worked well for printers, but there were cheaper alternatives available for displays by the time Display PostScript arrived.
In 1991, Adobe released PostScript Level 2 — making the old PostScript into “Level 1” retroactively. It had all the improvements you would expect in a second version. It was faster and crashed less. It had better support for things like color separation and handling compressed images. It also worked better with oddball and custom fonts, and the printer could cache fonts and graphics.
Remember how releasing the spec helped the original PostScript? For Level 2, releasing it early caused a problem. Competitors started releasing features for Level 2 before Adobe. Oops.
They finally released PostScript 3. (And dropped the “Level”.) This allowed for 12-bit colors instead of 8-bit. It also supported PDF files.
PDF?
While PostScript is a language for controlling a printer, PDF is set up as a page description language. It focuses on what the page looks like and not how to create the page. Of course, this is somewhat a matter of semantics. You can think of a PostScript file as a program that drives a Raster Image Processor (RIP) to draw a page. You can think of a PDF as somewhat akin to a compiled version of that program that describes what the program would do.
Up to PDF 1.4, released in 2001, everything you could do in a PDF file could be done in PostScript. But with PDF 1.4 there were some new things that PostScript didn’t have. In particular, PDFs support layers and transparency. Today, PDF rules the roost and PostScript is largely static and fading.
What’s Inside?
Like we said, a PostScript file is a lot like a Forth program. There’s a comment at the front (%!PS-Adobe-3.0) that tells you it is a PostScript file and the level. Then there’s a prolog that defines functions and fonts. The body section uses words like moveto, lineto, and so on to build up a path that can be stroked, filled, or clipped. You can also do loops and conditionals — PostScript is Turing-complete. A trailer appears at the end of each page and usually has a command to render the page (showpage), which may start a new page.
A simple PostScript file running in GhostScript
A PDF file has a similar structure with a %PDF-1.7 comment. The body contains objects that can refer to pages, dictionaries, references, and image or font streams. There is also a cross-reference table to help find the objects and a trailer that points to the root object. That object brings in other objects to form the entire document. There’s no real code execution in a basic PDF file.
If you want to play with PostScript, there’s a good chance your printer might support it. If not, your printer drivers might. However, you can also grab a copy of GhostScript and write PostScript programs all day. Use GSView to render them on the screen or print them to any printer you can connect to. You can even create PDF files using the tools.
For example, try this:
%!PS
% Draw square
100 100 moveto
100 0 rlineto
0 100 rlineto
-100 0 rlineto
closepath
stroke
% Draw circle
150 150 50 0 360 arc
stroke
% Draw text "Hackaday" centered in the circle
/Times-Roman findfont 12 scalefont setfont % Choose font and size
(Hackaday) dup stringwidth pop 2 div % Calculate half text width
150 exch sub % X = center - half width
150 % Y = vertical center
moveto
(Hackaday) show
showpage
If you want to hack on the code or write your own, here’s the documentation. Think it isn’t really a programming language? [Nicolas] would disagree.
It’s amazing how quickly medical science made radiography one of its main diagnostic tools. Medicine had barely emerged from its Dark Age of bloodletting and the four humours when X-rays were discovered, and the realization that the internal structure of our bodies could cast shadows of this mysterious “X-Light” opened up diagnostic possibilities that went far beyond the educated guesswork and exploratory surgery doctors had relied on for centuries.
The problem is, X-rays are one of those things that you can’t see, feel, or smell, at least mostly; X-rays cause visible artifacts in some people’s eyes, and the pencil-thin beam of a CT scanner can create a distinct smell of ozone when it passes through the nasal cavity — ask me how I know. But to be diagnostically useful, the varying intensities created by X-rays passing through living tissue need to be translated into an image. We’ve already looked at how X-rays are produced, so now it’s time to take a look at how X-rays are detected and turned into medical miracles.
Taking Pictures
For over a century, photographic film was the dominant way to detect medical X-rays. In fact, years before Wilhelm Conrad Röntgen’s first systematic study of X-rays in 1895, fogged photographic plates during experiments with a Crookes tube were among the first indications of their existence. But it wasn’t until Röntgen convinced his wife to hold her hand between one of his tubes and a photographic plate to create the first intentional medical X-ray that the full potential of radiography could be realized.
“Hand mit Ringen” by W. Röntgen, December 1895. Public domain.
The chemical mechanism that makes photographic film sensitive to X-rays is essentially the same as the process that makes light photography possible. X-ray film is made by depositing a thin layer of photographic emulsion on a transparent substrate, originally celluloid but later polyester. The emulsion is a mixture of high-grade gelatin, a natural polymer derived from animal connective tissue, and silver halide crystals. Incident X-ray photons ionize the halogens, creating an excess of electrons within the crystals to reduce the silver halide to atomic silver. This creates a latent image on the film that is developed by chemically converting sensitized silver halide crystals to metallic silver grains and removing all the unsensitized crystals.
Other than in the earliest days of medical radiography, direct X-ray imaging onto photographic emulsions was rare. While photographic emulsions can be exposed by X-rays, it takes a lot of energy to get a good image with proper contrast, especially on soft tissues. This became a problem as more was learned about the dangers of exposure to ionizing radiation, leading to the development of screen-film radiography.
In screen-film radiography, X-rays passing through the patient’s tissues are converted to light by one or more intensifying screens. These screens are made from plastic sheets coated with a phosphorescent material that glows when exposed to X-rays. Calcium tungstate was common back in the day, but rare earth phosphors like gadolinium oxysulfide became more popular over time. Intensifying screens were attached to the front and back covers of light-proof cassettes, with double-emulsion film sandwiched between them; when exposed to X-rays, the screens would glow briefly and expose the film.
By turning one incident X-ray photon into thousands or millions of visible light photons, intensifying screens greatly reduce the dose of radiation needed to create diagnostically useful images. That’s not without its costs, though, as the phosphors tend to spread out each X-ray photon across a physically larger area. This results in a loss of resolution in the image, which in most cases is an acceptable trade-off. When more resolution is needed, single-screen cassettes can be used with one-sided emulsion films, at the cost of increasing the X-ray dose.
Wiggle Those Toes
Intensifying screens aren’t the only place where phosphors are used to detect X-rays. Early on in the history of radiography, doctors realized that while static images were useful, continuous images of body structures in action would be a fantastic diagnostic tool. Originally, fluoroscopy was performed directly, with the radiologist viewing images created by X-rays passing through the patient onto a phosphor-covered glass screen. This required an X-ray tube engineered to operate with a higher duty cycle than radiographic tubes and had the dual disadvantages of much higher doses for the patient and the need for the doctor to be directly in the line of fire of the X-rays. Cataracts were enough of an occupational hazard for radiologists that safety glasses using leaded glass lenses were a common accessory.
How not to test your portable fluoroscope. The X-ray tube is located in the upper housing, while the image intensifier and camera are below. The machine is generally referred to as a “C-arm” and is used in the surgery suite and for bedside pacemaker placements. Source: Nightryder84, CC BY-SA 3.0.
One ill-advised spin-off of medical fluoroscopy was the shoe-fitting fluoroscopes that started popping up in shoe stores in the 1920s. Customers would stick their feet inside the machine and peer at a fluorescent screen to see how well their new shoes fit. It was probably not terribly dangerous for the once-a-year shoe shopper, but pity the shoe salesman who had to peer directly into a poorly regulated X-ray beam eight hours a day to show every Little Johnny’s mother how well his new Buster Browns fit.
As technology improved, image intensifiers replaced direct screens in fluoroscopy suites. Image intensifiers were vacuum tubes with a large input window coated with a fluorescent material such as zinc-cadmium sulfide or sodium-cesium iodide. The phosphors convert X-rays passing through the patient to visible light photons, which are immediately converted to photoelectrons by a photocathode made of cesium and antimony. The electrons are focused by coils and accelerated across the image intensifier tube by a high-voltage field on a cylindrical anode. The electrons pass through the anode and strike a phosphor-covered output screen, which is much smaller in diameter than the input screen. Incident X-ray photons are greatly amplified by the image intensifier, making a brighter image with a lower dose of radiation.
Originally, the radiologist viewed the output screen using a microscope, which at least put a little more hardware between his or her eyeball and the X-ray source. Later, mirrors and lenses were added to project the image onto a screen, moving the doctor’s head out of the direct line of fire. Later still, analog TV cameras were added to the optical path so the images could be displayed on high-resolution CRT monitors in the fluoroscopy suite. Eventually, digital cameras and advanced digital signal processing were introduced, greatly streamlining the workflow for the radiologist and technologists alike.
Get To The Point
So far, all the detection methods we’ve discussed fall under the general category of planar detectors, in that they capture an entire 2D shadow of the X-ray beam after having passed through the patient. While that’s certainly useful, there are cases where the dose from a single, well-defined volume of tissue is needed. This is where point detectors come into play.
In medical X-ray equipment, point detectors often rely on some of the same gas-discharge technology that DIYers use to build radiation detectors at home. Geiger tubes and ionization chambers measure the current created when X-rays ionize a low-pressure gas inside an electric field. Geiger tubes generally use a much higher voltage than ionization chambers, and tend to be used more for radiological safety, especially in nuclear medicine applications, where radioisotopes are used to diagnose and treat diseases. Ionization chambers, on the other hand, were often used as a sort of autoexposure control for conventional radiography. Tubes were placed behind the film cassette holders in the exam tables of X-ray suites and wired into the control panels of the X-ray generators. When enough radiation had passed through the patient, the film, and the cassette into the ion chamber to yield a correct exposure, the generator would shut off the X-ray beam.
Another kind of point detector for X-rays and other kinds of radiation is the scintillation counter. These use a crystal, often cesium iodide or sodium iodide doped with thallium, that releases a few visible light photons when it absorbs ionizing radiation. The faint pulse of light is greatly amplified by one or more photomultiplier tubes, creating a pulse of current proportional to the amount of radiation. Nuclear medicine studies use a device called a gamma camera, which has a hexagonal array of PM tubes positioned behind a single large crystal. A patient is injected with a radioisotope such as the gamma-emitting technetium-99m, which accumulates mainly in the bones. The emitted gamma rays are collected by the gamma camera, which derives positional information from the relative intensity of the light pulse seen by each PM tube, slowly building a ghostly skeletal map of the patient by measuring where the 99mTc accumulated.
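The position reconstruction, known as Anger logic, amounts to an intensity-weighted average of the PM tube signals. A toy one-dimensional sketch with made-up numbers:
# Toy 1-D Anger logic: localize a scintillation flash from the relative
# pulse heights seen by a row of photomultiplier tubes.
tube_positions = [0.0, 5.0, 10.0, 15.0, 20.0]   # tube centers in cm
pulse_heights  = [0.1, 0.7, 1.8, 0.6, 0.1]      # made-up signals for one event

total = sum(pulse_heights)
x_event = sum(x * h for x, h in zip(tube_positions, pulse_heights)) / total
print(f"Estimated event position: {x_event:.1f} cm")  # lands near the 10 cm tube

# Accumulating many such events into a 2D histogram builds the final image.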
Going Digital
Despite its long dominance, traditional film-based radiography’s days were clearly numbered once solid-state image sensors began appearing in the 1980s. While it was reliable and gave excellent results, film development required a lot of infrastructure and expense, and resulted in bulky films that required a lot of space to store. The savings from doing away with all the trappings of film-based radiography, including the darkrooms, automatic film processors, chemicals, silver recycling, and often hundreds of expensive film cassettes, are largely what drove the move to digital radiography.
After briefly flirting with phosphor plate radiography, where a sensitized phosphor-coated plate was exposed to X-rays and then “developed” by a special scanner before being recharged for the next use, radiology departments embraced solid-state sensors and fully digital image capture and storage. Solid-state sensors come in two flavors: indirect and direct. Indirect sensor systems use a large matrix of photodiodes on amorphous silicon to measure the light given off by a scintillation layer directly above it. It’s basically the same thing as a film cassette with intensifying screens, but without the film.
Direct sensors, on the other hand, don’t rely on converting the X-ray into light. Rather, a large flat selenium photoconductor is used; X-rays absorbed by the selenium cause electron-hole pairs to form, which migrate to a matrix of fine electrodes on the underside of the sensor. The charge collected at each pixel is proportional to the amount of radiation received, and can be read pixel-by-pixel to build up a digital image.
An occasional series of mine on these pages has been Daily Drivers, in which I try out operating systems from the point of view of using them for my everyday Hackaday work. It has mostly featured esoteric or lesser-used systems, some of which have been unexpected gems and others have been not quite ready for the big time.
Today I’m testing another system, but it’s not quite the same as the previous ones. Instead I’m looking at a piece of hardware, and I’m looking at it for use in my computing projects rather than as my desktop OS. You’ll all be familiar with it: the original Raspberry Pi appeared at the end of February 2012, though it would be May of that year before all but a lucky few received one. Since then it has become a global phenomenon and spawned a host of ever-faster successors, but what of that original board from 2012 here in 2025? If you have a working piece of hardware it makes sense to use it, so how does the original stack up? I have a project that needs a Linux machine, so I’m dusting off a Model B and going down memory lane.
Rediscovering An Old Flame
My first Pi from 2012. The heatsinks are my addition.
It’s fair to say that Raspberry Pi have never had the fastest board on the block, or the highest specification. At any point there’s always some board or other touted as a Pi-killer because it claims to do more, but somehow they never make much impact. The reason for this is simple; alongside your Pi you are also buying the ability to run Raspberry Pi OS, and their achievement in creating a solid and well-supported operating system that still runs on their earliest boards is something their competitors can’t touch. So when I pulled out my Model B I was able to go to the Raspberry Pi downloads page and snag a Debian Bookworm image for its 32-bit processor. I went for the “lite” version; while an early Pi will run a desktop and could even be my desktop daily driver, it would be so painfully slow as to be frustrating.
This is what my word trend analysis tool can do. Everyone was talking about Brexit in the UK in 2016.
My purpose for using the Pi is to run a language analysis package. Aside from fiddling with old cameras and writing about tech, I have a long history in computational language processing, and I have recently returned to my news trend analysis code and made it open-source. It’s a project whose roots go back nearly two decades, so there’s been an element of working out what my younger self was thinking. It builds and processes a corpus of news data over time from RSS feeds, and presents a web-based analysis client. 2000s-era me wrote it in PHP (don’t judge!) and I evolved a corpus structure using a huge tree of small JSON files for fast access. An earlier version of this package ran on my first Pi for many years, sitting next to my router with a USB hard disk.
Firing up an original Pi in 2025 is easy enough, as with any Pi it’s simply a case of writing the image to an SD card, hooking up the Pi to screen and peripherals, and booting it. Raspberry Pi OS is as straightforward to set up as always, and after rebooting and logging in, there I was with a shell.
Remembering, Computers Weren’t Always This Quick
Yes, it’s slow. But it’s got a shell. macrophile, CC BY 2.0.
My main machine is a fairly recent high-end Thinkpad laptop with an Intel Core i7, 32 GB of memory, and the fastest SSD I could afford, equipped with a hefty cache. It’s a supercomputer by any measure from the past, so I have become used to things I do in the shell being blisteringly quick. Sitting at the Pi, it’s evident that I’ll need to recalibrate my expectations, as there’s no way it can match the Thinkpad. As I waited – rather a long time – for apt to upgrade the packages, I had time to reflect. Back in the day when I set up Linux on my 486 or my Pentium machine, I was used to waiting like this. I remember apt upgrade being a go-away-and-have-a-coffee affair, and I also remember thinking that Pentium was pretty quick, which it was for its day. But stripped of unnecessary services and GUI cruft, I was still getting all the power of the Pi in my terminal. It wasn’t bad, simply visibly slower than the Thinkpad, which, to be fair, also applies to all the other computers I own.
So my little Pi 1 model B now sits again hooked up to my router and with a hefty USB drive, again waking up every couple of hours and number-crunching the world’s news. I’ve got used to its relative sloth, and to working again with nano and screen to get things done on it. It’s a useful little computer for the task I have for it, and it can run all day consuming only a couple of watts. As long as the Raspberry Pi people still make the Pi Zero, and I hope for a few years after they stop, it will continue to have OS support, and thus its future as my language processing machine looks assured.
The point of this piece has been to reflect on why we shouldn’t let our older hardware collect dust if it’s still useful. Of course Raspberry Pi want to sell us a new Pi 5, and that board is an amazing machine. But if your task doesn’t need all that power and you still have the earlier model lying around, don’t forget that it’s still a capable little Linux board that you probably paid quite a lot less for. You can’t argue with that.
If you’re living your life right, you probably know what a MOSFET is. But do you know the MESFET? They are like the faster, uninsulated, Schottky version of a MOSFET, and they used to rule the roost in radio-frequency (RF) silicon. But if you’re like us, and you have never heard of a MESFET, then give this phenomenal video by [Asianometry] a watch. In it, among other things, he explains how the shrinking feature size in CMOS made RF chips cheap, which brought you the modern cellphone as we know it.
The basic overview is that in the 1960s, most high-frequency stuff had to be done with discrete parts because the bipolar-junction semiconductors of the time were just too slow. At this time, MOSFETs were just becoming manufacturable, but were even slower still. The MESFET, without its insulating oxide layer between the metal and the silicon, had less capacitance, and switched faster. When silicon feature sizes got small enough that you could do gigahertz work with them, the MESFET was the tech of choice.
As late as the 1980s, you’d find MESFETs in radio devices. At this time, the feature size of the gates and the thickness of the oxide layer in MOSFETs kept them out of the game. But as CPU manufacturers pushed these CMOS features smaller, not only did we get chips like the 8086 and 80386, two of Intel’s earliest CMOS designs, but the tech started getting fast enough for RF. And the world never looked back.
If you’re interested in the history of the modern monolithic RF ICs, definitely give the 18-minute video a watch. (You can skip the first three minutes or so if you’re already a radio head.) If you just want to build some radio circuits, this fantastic talk from [Michael Ossmann] at the first-ever Supercon will make you an RF design hero. His secrets? Among them, making the most of exactly these modern everything-in-one-chip RF ICs so that you don’t have to think about that side of things too hard.
History is rather dull and unexciting to most people, which naturally invites exciting flights of fancy that can range from the innocent to outright conspiracies. Nobody truly believes that the astounding finds and (fully functioning) ancient mechanisms in the Indiana Jones & Uncharted franchises are real, with mostly intact ancient cities waiting for intrepid explorers along with whatever mystical sources of power, wealth or influence formed the civilization’s foundations before its tragic demise. Yet somehow Plato’s fictive Atlantis has taken on a life of its own, along with many other ‘lost’ civilizations, whether real or imagined.
Of course, if these aforementioned movies and video games were realistic, they would center around a big archaeological dig and thrilling finds like pot shards and cuneiform clay tablets, not ways to smite enemies and gain immortality. Nor would it involve solving complex mechanical puzzles to gain access to the big secret chamber, prior to walking out of the readily accessible backdoor. Reality is boring like that, which is why there’s a major temptation to spruce things up. With the Egyptian pyramids as well as similar structures around the world speaking to the human imagination, this has led to centuries of half-baked ideas and outright conspiracies.
Most recently, a questionable 2022 paper hinting at structures underneath the Pyramid of Khafre in Egypt was used for a fresh boost to old ideas involving pyramid power stations, underground cities and other fanciful conspiracies. Although we can all agree that the ancient pyramids in Egypt are true marvels of engineering, are we really on the cusp of discovering that the ancient Egyptians were actually provided with Forerunner technology by extraterrestrials?
The Science of Being Tragically Wrong
A section of the ‘runes’ at Runamo. (Credit: Entheta, Wikimedia)
In defense of fanciful theories regarding the Actual Truth about Ancient Egypt and kin, archaeology as we know it today didn’t really develop until the latter half of the 20th century, with the field being mostly a hobbyist thing that people did out of curiosity as well as a desire for riches. Along the way many comical blunders were made, such as the Runamo runes in Sweden that turned out to be just random cracks in dolerite.
Less funny were attempts by colonists to erase Great Zimbabwe (11th – ~17th century CE) and the Kingdom of Zimbabwe after the ruins of the abandoned capital were discovered by European colonists and explored in earnest by the 19th century. Much like the wanton destruction of local cultures in the Americas by European colonists and explorers who considered their own culture, religion and technology to be clearly superior, the history of Great Zimbabwe was initially rewritten so that no thriving African society ever formed on its own, but was the result of outside influences.
In this regard it’s interesting how many harebrained ideas about archaeological sites have now effectively flipped, with mystical and mythical properties being assigned and these ‘Ancients’ being almost worshipped. Clearly, aliens visited Earth and that led to pyramids being constructed all around the globe. These would also have been the same aliens or lost civilizations that had technology far beyond today’s cutting edge, putting Europe’s fledgling civilization to shame.
Hence people keep dogpiling on especially the pyramids of Giza and its surrounding complex, assigning mystical properties to their ventilation shafts and expecting hidden chambers with technology and treasures interspersed throughout and below the structures.
Lost Technology
The Giant’s Causeway in Northern Ireland. (Credit: code poet, Wikimedia)
The idea of ‘lost technology’ is a pervasive one, mostly buoyed by the fact that you cannot disprove such a claim, only point to the absence of evidence for it. Much like the possibility of a teapot being in orbit around the Sun right now, you cannot disprove that the Ancient Egyptians had hyper-advanced power plants using zero point energy back around 3,600 BCE. This ties in with the idea of ‘lost civilizations’, which really caught on around the Victorian era.
Such romanticism for a non-existent past led to the idea of Atlantis being a real, lost civilization becoming pervasive, with the 1960s seeing significant hype around the Bimini Road. This undersea rock formation in the Bahamas was said to have been part of Atlantis, but is actually a perfectly cromulent geological formation. More recently a couple of German tourists got into legal trouble while trying to prove a connection between Egypt’s pyramids to Atlantis, which is a theory that refuses to die along with the notion that Atlantis was some kind of hyper-advanced civilization and not just a fictional society that Plato concocted to illustrate the folly of man.
Admittedly there is a lot of poetry in all of this when you consider it from that angle.
Welcome to Shangri-La… or rather Shambhala as portrayed in Uncharted 3.
People have spent decades of their life and countless sums of money on trying to find Atlantis, Shangri-La (possibly inspired by Shambhala), El Dorado and similar fictional locations. The Iram of the Pillars which featured in Uncharted 3: Drake’s Deception is one of the lost cities mentioned in the Qur’an, and is incidentally another great civilization that saw itself meet a grim end through divine punishment. Iram is often said to be Ubar, which is commonly known as Atlantis of the Sands.
All of this is reminiscent of the Giant’s Causeway in Northern Ireland, and the corresponding formation at Fingal’s Cave on the Scottish isle of Staffa, where eons ago molten basalt cooled and contracted into basalt columns in a way that is similar to how drying mud will crack in semi-regular patterns. This particular natural formation did lead to many local myths, including how a giant built a causeway across the North Channel, hence the name.
Fortunately for this location, no ‘lost civilization’ tag became attached, and thus it remains a curious demonstration of how purely natural formations can create structures that one might assume to have required intelligence, thus providing fuel for conspiracies. So far only ‘Young Earth’ conspiracy folk have put a claim on this particular site.
What we can conclude is that much like the Victorian age that spawned countless works of fiction on the topic, many of these modern-day stories appear to be rooted in a kind of romanticism for a past that never existed, with those affected interpreting natural patterns as something more in a sure sign of confirmation bias.
Tourist Traps
Tomb of the First Emperor Qin Shi Huang Di, Xi’an, China (Credit: Aaron Zhu)
One can roughly map the number of tourist visits with the likelihood of wild theories being dreamed up. These include the Egyptian pyramids, but also similar structures in what used to be the sites of the Aztec and Maya civilizations. Similarly the absolutely massive mausoleum of Qin Shi Huang in China with its world-famous Terracotta Army has led to incredible speculation on what might still be hidden inside the unexcavated tomb mound, such as entire seas and rivers of mercury that moved mechanically to simulate real bodies of water, a simulated starry sky, crossbows set to take out trespassers and incredible riches.
Many of these features were described by Sima Qian in the first century BCE, who may or may not have been truthful in his biography of Qin Shi Huang. Meanwhile, China’s authorities have wisely put further excavations on hold, as they have found that many of the recovered artefacts degrade very quickly once exposed to air. The paint on the terracotta figures began to flake off rapidly after excavation, for example, reducing them to the plain figures which we are familiar with.
Tourism can be as damaging as careless excavation. As popular as the pyramids at Giza are, centuries of tourism have taken their toll, with vandalism, graffiti and theft increasing rapidly since the 20th century. The Great Pyramid of Khufu had already been pilfered for building materials over the course of millennia by the local population, but due to tourism part of its remaining top stones were unceremoniously tipped over the side to make a larger platform where tourists could have some tea while gazing out over the Giza Plateau, as detailed in a recent video on the History for Granite channel:
The recycling of building materials from antique structures was also the cause of the demise of the Labyrinth at the foot of the pyramid of Amenemhat III at Hawara. Once an architectural marvel, with reportedly twelve roofed courts and spanning a total of 28,000 m2, today only fragments remain of its existence. This sadly is how most marvels of the Ancient World end up: looted ruins, ashes and shards, left in the sand, mud, or reclaimed by nature, from which we can piece together, with a lot of patience and the occasional stroke of fortune, a picture of what it once may have looked like.
Pyramid Power
Cover of The Giza Power Plant book. (Credit: Christopher Dunn)
When in light of all this we look at the claims made about the Pyramid of Khafre and the persistent conspiracies regarding this and other pyramids hiding great secrets, we can begin to see something of a pattern. Some people have really bought into these fantasies, while for others it’s just another way to embellish a location, to attract more rubes (sorry, tourists) and sell more copies of their latest book on the extraterrestrial nature of pyramids and how they are actually amazing lost technologies. This latter category is called pseudoarcheology.
Pyramids, of course, have always held magical powers, but the idea that they are literal power plants seems to have been coined by one Christopher Dunn, with the publication of his pseudo-archeological book The Giza Power Plant in 1998. That there would be more structures underneath the Pyramid of Khafre is a more recent invention, however. Feeding this particular flight of fancy appears to be a 2022 paper by Filippo Biondi and Corrado Malanga, in which synthetic aperture radar (SAR) was used to examine said pyramid interior and subsurface features.
Somehow this got turned into claims about multiple deep vertical wells descending 648 meters along with other structures. Shared mostly via conspiracy channels, it wildly extrapolates from claims made in the paper by Biondi et al., with said SAR-based claims never having been peer-reviewed or independently corroborated. In the Rational Wiki entry covering the Giza pyramids, these and other claims are savagely tossed under the category of ‘pyramidiots’.
The art that conspiracy nuts produce when provided with generative AI tools. (Source: Twitter)
Back in the real world, archaeologists have found a curious L-shaped area underneath a royal graveyard near Khufu’s pyramid that was apparently later filled in, but which seems to lead to a deeper structure. This is likely to be part of the graveyard, but may also have been a feature that was abandoned during construction. Currently this area is being excavated, so we’re likely to figure out more details after archaeologists have finished gently sifting through tons of sand and gravel.
There is also the ScanPyramids project, which uses non-destructive and non-invasive techniques to scan Old Kingdom-era pyramids, such as muon tomography and infrared thermography. This way the internal structure of these pyramids can be examined in-depth. One finding was that of a number of ‘voids’, which could mean any of a number of things, but most likely do not contain world-changing secrets.
To this day the most credible view is still that the pyramids of the Old Kingdom were used as tombs, though unlike the mastabas and similar tombs, there is a credible argument to be made that rather than being designed to be hidden away, these pyramids would be eternal monuments to the pharaoh. They would be open for worship of the pharaoh, hence the ease of getting inside them. Ironically this would make them more secure from graverobbers, which was a great idea until the demise of the Ancient Egyptian civilization.
This is a point that’s made succinctly on the History for Granite channel, with the conclusion being that this goal of ‘inspiring awe’ to worshippers is still effective today, simply judging by the millions of tourists each year to these monuments, and the tall tales that they’ve inspired.
When the AMSAT-OSCAR 7 (AO-7) amateur radio satellite was launched in 1974, its expected lifespan was about five years. The plucky little satellite made it to 1981, when a battery failure caused it to be written off as dead. Then, in 2002 it came back to life. The prevailing theory is that one of the cells in the satellite’s NiCd battery pack, in an extremely rare event, failed open — thus allowing the satellite to run (intermittently) off its solar panels.
A recent video by [Ben] on the AE4JC Amateur Radio YouTube channel goes over the construction of AO-7; its operation, death, and subsequent revival are covered, as well as a recent QSO (direct contact).
The battery is made up of multiple individual cells.
The solar panels covering this satellite provided a grand total of 14 watts at maximum illumination, which later dropped to 10 watts, making for a pretty small power budget. The entire satellite was assembled in a ‘clean room’ consisting of a sectioned-off part of a basement, with components produced by enthusiasts associated with AMSAT around the world. Onboard are two radio transponders: Mode A at 2 meters and Mode B at 10 meters, as well as four beacons, three of which are active; the 13 cm beacon is disabled due to an international treaty.
Positioned in a geocentric LEO (1,447 – 1,465 km) orbit, it’s quite amazing that after 50 years it’s still mostly operational. Most of this is due to how the satellite smartly uses the Earth’s magnetic field for alignment with magnets as well as the impact of photons to maintain its spin. This passive control combined with the relatively high altitude should allow AO-7 to function pretty much indefinitely while the PV panels keep producing enough power. All because a NiCd battery failed in a very unusual way.
In the heart of Manchester, UK, a groundbreaking event took place in 1948: the first modern computer, known as the Manchester Baby, ran its very first program. The Baby’s ability to execute stored programs, developed with guidance from John von Neumann’s theory, marks it as a pioneer in the digital age. This fascinating chapter in computing history not only reshapes our understanding of technology’s roots but also highlights the incredible minds behind it. The original article, including a video transcript, sits here at [TheChipletter]’s.
So, what made this hack so special? The Manchester Baby, though a relatively simple prototype, was the first fully electronic computer to successfully run a program from memory. Built by a team with little formal experience in computing, the Baby featured a unique cathode-ray tube (CRT) as its memory store – a bold step towards modern computing. It didn’t just run numbers; it laid the foundation for all future machines that would use memory to store both data and instructions. Running a test to find the highest factor of a number, the Baby performed 3.5 million operations over 52 minutes. Impressive for its time.
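That first program is usually described as finding the highest proper factor of 2^18 by brute force, with divisibility tested by repeated subtraction since the Baby had no divide instruction. A rough modern sketch of the same idea:
# Sketch of the Baby's first program: highest proper factor of 2**18,
# trying candidate divisors downward and "dividing" by repeated subtraction.
N = 2 ** 18   # 262,144, the value commonly cited for the 1948 run

def divides_by_subtraction(n, d):
    # Keep subtracting d; a remainder of zero means d divides n exactly.
    while n >= d:
        n -= d
    return n == 0

candidate = N - 1
while not divides_by_subtraction(N, candidate):
    candidate -= 1
print(candidate)   # 131072, i.e. 2**17
Grinding through every candidate this way is exactly the sort of mindless repetition that racked up millions of operations on the original hardware.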
Despite criticisms that it was just a toy computer, the Baby’s significance shines through. It was more than just a prototype; it was proof of concept for the von Neumann architecture, showing us that computers could be more than complex calculators. While debates continue about whether it or the ENIAC should be considered the first true stored-program computer, the Baby’s role in the evolution of computing can’t be overlooked.
Over the decades there have been many denominations coined to classify computer systems, usually when they got used in different fields or technological improvements caused significant shifts. While the very first electronic computers were very limited and often not programmable, they would soon morph into something that we’d recognize today as a computer, starting with World War 2’s Colossus and ENIAC, which saw use with cryptanalysis and military weapons programs, respectively.
The first commercial digital electronic computer wouldn’t appear until 1951, however, in the form of the Ferranti Mark 1. These 4.5 ton systems mostly found their way to universities and kin, where they’d find welcome use in engineering, architecture and scientific calculations. This became the focus of new computer systems, effectively the equivalent of a scientific calculator. Until the invention of the transistor, the idea of a computer being anything but a hulking, room-sized monstrosity was preposterous.
A few decades later, more computer power could be crammed into less space than ever before including ever higher density storage. Computers were even found in toys, and amidst a whirlwind of mini-, micro-, super-, home-, minisuper- and mainframe computer systems, one could be excused for asking the question: what even is a supercomputer?
Today’s Supercomputers
ORNL’s Summit supercomputer, fastest until 2020 (Credit: ORNL)
Perhaps a fair way to classify supercomputers is that the ‘supercomputer’ aspect is a highly time-limited property. During the 1940s, Colossus and ENIAC were without question the supercomputers of their era, while 1976’s Cray-1 wiped the floor with everything that came before, yet all of these are archaic curiosities next to today’s top two supercomputers. Both the El Capitan and Frontier supercomputers are exascale (1+ exaFLOPS in double precision IEEE 754 calculations) level machines, based around commodity x86_64 CPUs in a massively parallel configuration.
Taking up 700 m2 of floor space at the Lawrence Livermore National Laboratory (LLNL) and drawing 30 MW of power, El Capitan’s 43,808 AMD EPYC CPUs are paired with the same number of AMD Instinct MI300A accelerators, each containing 24 Zen 4 cores plus CDNA3 GPU and 128 GB of HBM3 RAM. Unlike the monolithic ENIAC, El Capitan’s 11,136 nodes, containing four MI300As each, rely on a number of high-speed interconnects to distribute computing work across all cores.
At LLNL, El Capitan is used for effectively the same top secret government things as ENIAC was, while Frontier at Oak Ridge National Laboratory (ORNL) was the fastest supercomputer before El Capitan came online about three years later. Although currently LLNL and ORNL have the fastest supercomputers, there are many more of these systems in use around the world, even for innocent scientific research.
Looking at the current list of supercomputers, such as today’s Top 9, it’s clear that not only can supercomputers perform a lot more operations per second, they also are invariably massively parallel computing clusters. This wasn’t a change that was made easily, as parallel computing comes with a whole stack of complications and problems.
The Parallel Computing Shift
ILLIAC IV massively parallel computer’s Control Unit (CU). (Credit: Steve Jurvetson, Wikimedia)
The first massively parallel computer was the ILLIAC IV, conceptualized by Daniel Slotnick in 1952 and first successfully put into operation in 1975 when it was connected to ARPANET. Although only one quadrant was fully constructed, it produced 50 MFLOPS compared to the Cray-1’s 160 MFLOPS a year later. Despite the immense construction costs and spotty operational history, it provided a most useful testbed for developing parallel computation methods and algorithms until the system was decommissioned in 1981.
There was a lot of pushback against the idea of massively parallel computation, however, with Seymour Cray famously comparing the idea of using many parallel vector processors instead of a single large one akin to ‘plowing a field with 1024 chickens instead of two oxen’.
Ultimately there is only so far you can scale a singular vector processor, of course, while parallel computing promised much better scaling, as well as the use of commodity hardware. A good example of this is a so-called Beowulf cluster, named after the original 1994 parallel computer built by Thomas Sterling and Donald Becker at NASA. This can use plain desktop computers, wired together using, for example, Ethernet, with open source libraries like Open MPI enabling massively parallel computing without a lot of effort.
Not only does this approach enable the assembly of a ‘supercomputer’ using cheap-ish, off-the-shelf components, it’s also effectively the approach used for LLNL’s El Capitan, albeit with decidedly non-cheap compute and interconnect hardware. Even so, it remains cheaper than trying to build a monolithic vector processor with the same raw processing power once the messaging overhead of a cluster is taken into account.
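To give a sense of how little code the message-passing approach demands, here’s a minimal sketch using the mpi4py bindings on top of an MPI implementation such as Open MPI (assuming both are installed), splitting a trivial sum across however many processes are launched:
# Beowulf-style example: each rank sums its own slice of 0..N-1,
# then rank 0 combines the partial results.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes

N = 100_000_000
local = sum(range(rank, N, size))          # this rank's share of the work
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum computed across {size} processes: {total}")
Launched with something like mpirun -np 8 across a few networked machines, the same script scales without modification, which is the whole appeal.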
Mini And Maxi
David Lovett of Usagi Electric fame sitting among his FPS minisupercomputer hardware. (Credit: David Lovett, YouTube)
One way to look at supercomputers is that it’s not about the scale, but what you do with it. Much like how government, large businesses and universities would end up with ‘Big Iron’ in the form of mainframes and supercomputers, there was a big market for minicomputers too. Here ‘mini’ meant something like a PDP-11 that’d comfortably fit in the corner of an average room at an office or university.
The high-end versions of minicomputers were called ‘superminicomputers’, which are not to be confused with minisupercomputers, another class entirely. During the 1980s there was a brief surge in this latter class of supercomputers that were designed to bring solid vector computing and similar supercomputer feats down to a size and price tag that might entice departments and other customers who’d otherwise not even begin to consider such an investment.
The manufacturers of these ‘budget-sized supercomputers’ were generally not the typical big computer manufacturers, but instead smaller companies and start-ups like Floating Point Systems (later acquired by Cray) who sold array processors and similar parallel, vector computing hardware.
Recently David Lovett (AKA Mr. Usagi Electric) embarked on a quest to recover and reverse-engineer as much FPS hardware as possible, with one of the goals being to build a full minisupercomputer system as companies and universities might have used them in the 1980s. This would involve attaching such an array processor to a PDP-11/44 system.
Speed Versus Reliability
Amidst all of these definitions, the distinction between a mainframe and a supercomputer is much easier and more straightforward at least. A mainframe is a computer system that’s designed for bulk data processing with as much built-in reliability and redundancy as the price tag allows for. A modern example is IBM’s Z-series of mainframes, with the ‘Z’ standing for ‘zero downtime’. These kind of systems are used by financial institutions and anywhere else where downtime is counted in millions of dollars going up in (literal) flames every second.
This means hot-swappable processor modules, hot-swappable and redundant power supplies, not to mention hot spares and a strong focus on fault tolerant computing. All of these features are less relevant for a supercomputer, where raw performance is the defining factor when running days-long simulations and when other ways to detect flaws exist without requiring hardware-level redundancy.
Considering the brief lifespan of supercomputers (currently in the order of a few years) compared to mainframes (decades) and the many years that the microcomputers which we have on our desks can last, the life of a supercomputer seems like that of a bright and very brief flame, indeed.
Top image: Marlyn Wescoff and Betty Jean Jennings configuring plugboards on the ENIAC computer (Source: US National Archives)
If you think about military crypto machines, you probably think about the infamous Enigma machine. However, as [Christos T.] reminds us, there were many others and, in particular, the production of a “combined cipher” machine for the US and the UK to use for a variety of purposes.
The story opens in 1941 when ships from the United States and the United Kingdom were crossing the Atlantic together in convoys. The US wanted to use the M-138A and M-209 machines, but the British were unimpressed. They were interested in the M-134C, but it was too secret to share, so they reached a compromise.
Starting with a British Typex, a US Navy officer developed an attachment with additional rotors and converted the Typex into a CCM or Combined Cipher Machine. Two earlier versions of the attachment worked with the M-134C. However, the CSP 1800 (or CCM Mark III) was essentially the same unit made to attach to the Typex. Development cost about $6 million — a huge sum for the middle of the last century.
By the end of 1943, there were enough machines to work with the North Atlantic convoys. [Christos] says at least 8,631 machines left the factory line. While the machine was a marvel, it did have a problem. With certain settings, the machine had a very low cipher period (338 compared to 16,900 for Enigma). This wasn’t just theoretical, either. A study showed that bad settings showed up seven times in about two months on just one secure circuit.
This led to operational changes to forbid certain settings and restrict the maximum message length. The machine saw service at the Department of State until 1959. There were several variations in use within NATO as late as 1962. It appears the Germans didn’t break CCM during the war, but the Soviets may have been able to decode traffic from it in the post-war period.
You can see a CCM/Typex combo in the video below from the Cryptomuseum. Of course, the Enigma is perhaps the most famous of these machines. These days, you can reproduce one easily.
Publisher Neowiz and developer Round8 Studio have released the official story trailer for Lies of P: Overture, a DLC that will serve as a prequel to Lies of P and will launch in the third quarter of 2025 for PC via Steam, macOS, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X|S.
Lies of P: Overture is a dramatic prelude to the acclaimed soulslike action RPG Lies of P. The story transports you to the city of Krat during its final days of haunting, late-nineteenth-century belle époque beauty. You will follow a Legendary Stalker (a mysterious guide of sorts) on the brink of the Puppet Frenzy massacre, through untold stories and chilling secrets.
As Geppetto’s deadly puppet, you will roam Krat and its surroundings, uncover hidden stories, and face epic battles that will shape the past and the future of Lies of P.
As Geppetto’s puppet, you come across a mysterious artifact that takes you back to Krat in its final days of grandeur. In the shadow of an impending tragedy, your mission is to explore the past and uncover its dark secrets, wrapped in surprises, loss, and revenge.
The choices you make will echo through the past and present of the world of Lies of P, revealing hidden truths and leaving lasting consequences.
Embark on an unforgettable adventure where the symphony of steel clashes with the haunting melody of the unknown. Dare to unravel the mysteries of the past, for at the heart of the darkness lies the key to unlocking the secrets of a timeless story reborn.
You can read our review of Lies of P at this link.
“Cross the boundaries of time and share our excitement as we take you on a trip back to the beginnings of the Lies of P story. I hope you will agree that the long road to bringing you this news was worth it! The first trailer for Lies of P: Overture has just been revealed at State of Play, and it is packed with incredible reveals.
This was a historic debut for us, and it is all the more gratifying to be able to share it with our fans, with veterans of souls-style games, and with those joining us for the first time.
As you can imagine from the title, Lies of P: Overture will take you back in time so you can discover Krat’s hidden stories. Do you remember the moment when, controlling Geppetto’s most ambitious masterpiece, you woke up in the middle of the Puppet Frenzy, the massacre that devastated the entire city?
Now you will be able to venture back in time and live through the heartbreaking journey that led to that catastrophic moment. With Lies of P: Overture, we set out to refine, rework, and complete the story as we originally conceived it.
In film, this might be called a director’s cut. That version usually differs from the theatrical release and typically includes additional scenes the filmmaker feels complete the work. With Lies of P: Overture, I wonder whether we were subconsciously trying to create a developers’ cut of Lies of P.
Our production team had many stories and features they wanted to explore, so we kept pursuing those ideas right to the end. Figuring out how to integrate those elements into Lies of P: Overture was a great challenge for me as director. With this new look back, we focused on making sure the love and support we received for Lies of P was honored, while also telling the story we had always wanted to share. Rest assured, we will keep giving it our all until the very end.
I also want to take this opportunity to express our gratitude for all the feedback and advice we have received from the citizens of Krat since the launch of Lies of P. Your invaluable comments have helped our developers polish Lies of P and its stunning prequel, coming this year to PlayStation 4, PlayStation 5, and PlayStation 5 Pro.
To our fans and to our wonderful community of players: you complete us. Join our team in completing the incredible story of Lies of P.
And, as always, thank you very much.”
About Lies of P: Overture
Lies of P: Overture is a dramatic prelude to the acclaimed soulslike action RPG Lies of P. The story transports you to the city of Krat during its final days of haunting, late-nineteenth-century belle époque beauty.
You will follow a Legendary Stalker (a mysterious guide of sorts) on the brink of the Puppet Frenzy massacre, through untold stories and chilling secrets.
As Geppetto’s deadly puppet, you will roam Krat and its surroundings, uncover hidden stories, and face epic battles that will shape the past and the future of Lies of P.
As Geppetto’s puppet, you come across a mysterious artifact that takes you back to Krat in its final days of grandeur. In the shadow of an impending tragedy, your mission is to explore the past and uncover its dark secrets, wrapped in surprises, loss, and revenge.
The choices you make will echo through the past and present of the world of Lies of P, revealing hidden truths and leaving lasting consequences.
Embark on an unforgettable adventure where the symphony of steel clashes with the haunting melody of the unknown. Dare to unravel the mysteries of the past, for at the heart of the darkness lies the key to unlocking the secrets of a timeless story reborn.
About Lies of P
Inspired by the well-known story of Pinocchio, Lies of P is a souls-style action game set in the dark, Belle Époque-inspired city of Krat. Once a beautiful city, Krat has become a living nightmare, with dangerous puppets wreaking havoc and a plague sweeping the land.
Play as P, a puppet who must fight his way across the city on his relentless quest to find Geppetto and finally become human. Lies of P presents an elegant world full of tension, a deep combat and character-customization system, and a captivating story with compelling narrative choices, where the more you lie, the more human P becomes. Just remember: in a world full of lies, you can trust no one.
“Play as Pinocchio, a puppet mechanoid, and fight through everything in your path to find this mysterious person. But don’t expect any help along the way, and don’t make the mistake of trusting anyone. You must always lie to others if you wish to become human.
You wake up in an abandoned train station in Krat, a city overwhelmed by madness and bloodlust. In front of you lies a single note that reads: ‘Find Mr. Geppetto. He is here in the city.’
Inspired by the well-known story of Pinocchio, Lies of P is a souls-like action game set in a cruel, dark Belle Époque world. All of humanity is lost in a once-beautiful city that has become a living hell full of unspeakable horrors.
Lies of P offers an elegant world full of tension, a deep combat system, and a gripping story. Guide Pinocchio and experience his relentless journey to become human.”
Features:
A dark fairy tale retold – The timeless story of Pinocchio has been reimagined with dark, striking visuals. Set in the fallen city of Krat, Pinocchio fights desperately to become human against all odds.
Visual concept – The city of Krat was inspired by Europe’s Belle Époque era (late nineteenth to early twentieth century) and is the epitome of a collapsed city whose prosperity is gone.
‘Lying’ quests and multiple endings – Experience interconnected, procedurally driven quests that unfold depending on how you lie. These choices affect how the story ends.
Weapon-crafting system – You can combine weapons in many ways to create something entirely new. Experiment to find the best combinations and make something truly special.
Special abilities system – Since Pinocchio is a puppet, you can swap out parts of his body to gain new abilities and, hopefully, an edge in battle. Not every upgrade is for fighting, though; some provide other unique and useful features.
NEOWIZ launched Lies of P worldwide on September 19, 2023. The standard and deluxe editions are available for US$59.99 and US$69.99, respectively, digitally or physically on PlayStation 4, PlayStation 5, Xbox One, Xbox Series X|S, and on PC via Steam and participating retailers.
The standard edition of Lies of P is also available through the Mac App Store for Apple silicon models at US$59.99.
Did you know that the land of flat-pack furniture and Saab automobiles played a serious role in the development of minicomputers, the forerunners of our home computers? If not, read on for a bit of history. You can also go ahead and watch the video below, which tells it all with a ton of dug up visuals.
Sweden’s early computer development was marked by significant milestones, beginning with the relay-based Binär Aritmetisk Relä-Kalkylator (BARK) in 1950, followed by the vacuum tube-based Binär Elektronisk SekvensKalkylator (BESK) in 1953. These projects were spearheaded by the Swedish Board for Computing Machinery (Matematikmaskinnämnden), established in 1948 to advance the nation’s computing capabilities.
In 1954, Saab ventured into computing by obtaining a license to replicate BESK, resulting in the creation of Saab’s räkneautomat (SARA). This initiative aimed to support complex calculations for the Saab 37 Viggen jet fighter. Building on this foundation, Saab’s computer division, later known as Datasaab, developed the D2 in 1960 – a transistorized prototype intended for aircraft navigation. The D2’s success led to the CK37 navigational computer, which was integrated into the Viggen aircraft in 1971.
Datasaab also expanded into the commercial sector with the D21 in 1962, producing approximately 30 units for various international clients. Subsequent models, including the D22, D220, D23, D5, D15, and D16, were developed to meet diverse computing needs. In 1971, Datasaab’s technologies merged with Standard Radio & Telefon AB (SRT) to form Stansaab AS, focusing on real-time data systems for commercial and aviation applications. This entity eventually evolved into Datasaab AB in 1978, which was later acquired by Ericsson in 1981, becoming part of Ericsson Information Systems.
Parallel to these developments, Åtvidabergs Industrier AB (later Facit) produced the FACIT EDB in 1957, based on BESK’s design. This marked Sweden’s first fully domestically produced computer, with improvements such as expanded magnetic-core memory and advanced magnetic tape storage. The FACIT EDB was used for various applications, including meteorological calculations and other scientific computations. For a short time, Saab even partnered with the American company Univac, a well-known name in computer history, in a joint venture called Saab-Univac.
These pioneering efforts by Swedish organizations laid the groundwork for the country’s advancements in computing technology, influencing both military and commercial sectors. The video below has lots and lots more to unpack and goes into greater detail on collaborations and (missed) deals with great names in history.
The Epic Games Store announced today that from February 20 to 27 it is giving away two titles in its store: World War Z: Aftermath and Garden Story for PC.
In addition, STAR WARS Knights of the Old Republic I and STAR WARS Knights of the Old Republic II can be claimed through the Epic Games Store mobile app, but you will find the direct links below to claim them from a PC.
STAR WARS Knights of the Old Republic I (Android)|Base Game
STAR WARS Knights of the Old Republic I (iOS)|Base Game
STAR WARS Knights of the Old Republic II – The Sith Lords (Android)|Base Game
STAR WARS Knights of the Old Republic II – The Sith Lords (iOS)|Base Game
You can check the list of all titles given away by the Epic Games Store at this link.
About World War Z
World War Z: Aftermath is the ultimate co-op zombie shooter based on the Paramount Pictures blockbuster, and the next step for World War Z, the hit that has captivated more than 20 million players. Turn the tide of the zombie apocalypse on consoles and PC with full cross-play.
Join up to three friends, or play solo with AI teammates, against hordes of ravenous zombies in an intense episodic story that unfolds in newly ravaged locations around the world: Rome, Vatican City, and Russia’s Kamchatka Peninsula.
New stories from a world at war
All-new story episodes in Rome, Vatican City, and Russia’s easternmost region, Kamchatka. Play as both new and familiar characters as you take on the undead with a brutal new melee combat system, decimating zekes with unique moves, perks, and dual weapons like the sickle and knife. Fend off new undead monstrosities, including swarms of ravenous rats that will throw your team into chaos.
In-depth progression and a new perspective
Experience a new perspective with Aftermath’s immersive first-person mode. Level up eight unique classes – Gunslinger, Hellraiser, Slasher, Medic, Fixer, Exterminator, Dronemaster, and the new Vanguard – each with its own perks and playstyles.
Customize your weapons to survive any challenge, and conquer new daily missions with special modifiers for extra rewards.
About the Epic Games Store
Epic Games is an American company founded in 1991 by its CEO, Tim Sweeney. Headquartered in Cary, North Carolina, it has dozens of offices around the world. Epic is a leading interactive entertainment company and a provider of 3D engine technology.
Epic Games operates one of the world’s largest games, Fortnite, a vibrant ecosystem of social entertainment experiences that includes first-party games such as Fortnite Battle Royale, LEGO Fortnite, Rocket Racing, and Fortnite Festival, as well as experiences made by creators.
Epic Games has more than 800 million accounts and over 6 billion friend connections across Fortnite, Fall Guys, Rocket League, and the Epic Games Store. The company also develops Unreal Engine, which powers many of the world’s leading games and has been adopted by other industries, including film and television, broadcasting and live events, architecture, automotive, and simulation.
Through Fortnite, Unreal Engine, the Epic Games Store, and Epic Online Services, Epic provides a complete digital ecosystem for creators and developers to build, distribute, and operate games and other content.
Typography enthusiasts reach a point at which they can recognise a font after seeing only a few letters in the wild, and can usually identify its close family if not the font itself. It’s unusual, then, for a font to leave them completely stumped, but that’s where [Marcin Wichary] found himself. He noticed a font which many of you will also have seen, on typewriter and older terminal keys. It has a few unusual features that run contrary to normal font design, such as slightly odd-shaped letters and a constant stroke width, and once he started looking, it appeared everywhere. Finding its origin led back well over a century, and took him to places as diverse as New York street furniture and NASA elevators.
The font in question is called Gorton, and it came from the Gorton Machine Co., a Wisconsin manufacturer. It’s a font designed for a mechanical router, which is why it appears on so much custom signage and on utilitarian components such as keyboard keys. Surprisingly, its history leads back into the 19th century, predating many of the much better-known sans serif fonts. So keep an eye out for it on your retro tech, and you’ll find that you’ve seen a lot more of it than you ever knew. If you are a fellow font-head, you might also know the Hershey Font, and we just ran a piece on the magnetic check fonts last week.
Experience a transformative educational journey with Akadimia, a tool that opens new dimensions of learning. It lets you engage in insightful conversations with historical figures like Nikola Tesla through cutting-edge augmented reality (AR) technology, bringing the past to life. Unleash your curiosity and immerse yourself in a unique learning experience. You can start […]
Journey lets you tell captivating stories and presentations using videos, slides, and interactive elements like calendars. The tool offers features such as automagical creation, personalized content at scale, automatic branding, and a trove of customizable blocks. It even generates a first draft using AI, ensuring you never have to start from scratch. To get started […]