The ’80s Multi-Processor System That Never Was

21 June 2024 at 05:00

Until the early 2000s, the computer processors available on the market were essentially all single-core chips. There were some niche layouts that used multiple processors on the same board for improved parallel operation, and it wasn’t until the POWER4 processor from IBM in 2001 and later things like the AMD Opteron and Intel Pentium D that we got multi-core processors. If things had gone just slightly differently with this experimental platform, though, we might have had multi-processor systems available for general use as early as the 80s instead of two decades later.

The team behind this chip was from the University of California, Berkeley, a place known for such other innovations as RAID, BSD, SPICE, and some of the first RISC processors. This processor architecture would be based on RISC as well, and would be known as Symbolic Processing Using RISC. It was specially designed to integrate with the Lisp programming language, but its major feature was a set of parallel processors on a common bus that allowed parallel operations to be computed at much greater speed than comparable systems of the time. The use of RISC also allowed a smaller group to develop something like this, and although more instructions need to be executed, they can often be executed faster than on other architectures.

The linked article from [Babbage] goes into much more detail about the architecture of the system as well as some of the things about UC Berkeley that made projects like this possible in the first place. It’s a fantastic deep-dive into a piece of somewhat obscure computing history that, had it been more commercially viable, could have changed the course of computing. Berkeley RISC did go on to have major impacts in other areas of computing and was a significant influence on the SPARC system as well.

Early “Computer Kit” Really Just a Fancy Calculator

16 June 2024 at 08:00

We’re big fans of calculators, computers and vintage magazines, so when we see something at the intersection of all three we always take a look. Back in 1966, Electronics Illustrated included instructions in their November issue on building, in their words, a “Space-Age Decimal Computer!” using neon lamps, a couple of tubes, and lots of soldering. The article starts on page 39 and it’s made fairly clear that it will be an expensive and complicated project, but you will be paid back many times over by the use and experience you will get!

Our modern idea of a computer differs greatly from the definitions used in the past. As many readers likely know, "computer" was actually a job title for a long time. The job of a computer was to sit with pen, paper, and later on electromechanical devices, and compute and tabulate long lists of numbers. Imagine doing payroll for large companies completely by hand, every month. The opportunity for errors was large and was just part of doing business. As analog and later transistor-based computers started to be developed, they replaced the jobs of human computers in calculating and tabulating numbers. This is why IBM was originally called the Computing-Tabulating-Recording Company!

So at the time this article was written, the idea of a computer as just a number-cruncher meant that for the magazine's readers, a machine that could add, subtract, multiply and divide was for all intents and purposes a computer. The kit is a fairly clever but simple machine. A rotary telephone dial is used to enter numbers from 1 to 10 (with the 0 acting as 10). This sends pulses into a series of boards that represent decimal decades from the 1s all the way up to the 100,000s. You use a rotary switch to select which decade to enter a number into. And then, just like manual addition, you dial in the second number, working from the units upwards. All carries are done automatically, and you have your result after entering each addend.
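As a rough illustration of what those decade boards are doing, here is a minimal Python sketch (ours, not the magazine's) of a chain of base-10 counters with automatic carry, fed one dial pulse at a time:

```python
# Toy model of the kit's decade boards: a chain of base-10 counters with
# automatic carry, fed one dial pulse at a time. Six decades, 1s to 100,000s.

class DecadeAdder:
    def __init__(self, decades=6):
        self.decades = [0] * decades          # index 0 = units, 1 = tens, ...

    def pulse(self, decade, count=1):
        """Feed 'count' dial pulses into one decade, carrying automatically."""
        for _ in range(count):
            self.decades[decade] += 1
            i = decade
            while i < len(self.decades) and self.decades[i] == 10:
                self.decades[i] = 0                # this decade rolls over...
                if i + 1 < len(self.decades):
                    self.decades[i + 1] += 1       # ...and nudges the next one
                i += 1

    def value(self):
        return sum(d * 10 ** i for i, d in enumerate(self.decades))

adder = DecadeAdder()
# Dial in 347, then add 295, working from the units upwards as described above.
# (A zero digit simply sends no pulses in this toy version.)
for number in (347, 295):
    for place, digit in enumerate(reversed(str(number))):
        adder.pulse(place, int(digit))
print(adder.value())   # 642
```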

As the machine can only count upwards, subtraction is done by adding complements. This is all based on taking the 9s complement of the number to be subtracted, and the article goes into a lot of detail on the operation of the machine. Tricks like these were common on electromechanical machines and would have been familiar to many readers at the time. Of course, multiplication and division are repeated additions or subtractions, and with long inputs the process could become very tedious. However, as long as the machine was carefully constructed and each number carefully noted down, it could be a very useful tool that would eliminate errors!
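If the complement trick is unfamiliar, here is a quick sketch of how subtraction falls out of addition on a fixed-width decimal counter (our example, not the magazine's exact procedure):

```python
# 9s-complement subtraction on a six-digit decimal machine: a - b becomes
# a + (999999 - b) + 1, with the carry out of the top decade thrown away.

WIDTH = 6
MOD = 10 ** WIDTH

def nines_complement(n):
    return (MOD - 1) - n                  # e.g. 000295 -> 999704

def subtract(a, b):
    total = a + nines_complement(b) + 1   # add the complement, plus one
    return total % MOD                    # dropping the overflow carry

print(subtract(642, 295))   # 347
```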

Thanks to [Stephen] for the tip!

Startup Claims it Can Boost CPU Performance by 2-100X

13 June 2024 at 02:00

As Moore’s Law slows and chip makers reach the physical limits of transistor size, researchers are having to look at things other than cramming more transistors onto a chip to increase CPU performance. ARM is having a bit of a moment by improving the performance-per-watt of many computing platforms, but other ideas need to come to the forefront to make any big pushes in this area. A startup called Flow Computing claims it can improve modern CPUs by a significant amount with a slight change to their standard architecture.

The company hopes to make these improvements by adding a parallel processing unit, which it calls the “back end”, to a more-or-less standard CPU, the “front end”. The two computing units would be on the same chip, with a shared bus allowing them to communicate extremely quickly, so the front end can rapidly offload tasks better suited to parallel processing onto the back end. Since the front end maintains essentially the same components as a modern CPU, the startup hopes to maintain backwards compatibility with existing software while allowing developers to optimize for the new parallel computing unit when needed.

While we’ll take a step back and refrain from claiming this is the future of computing until we see some results and maybe a prototype or two, the idea does show some promise. It’s similar to ARM systems that have multiple cores optimized for different tasks, or to machines that offload parallelizable non-graphics work to a GPU. Even the Raspberry Pi is starting to take advantage of external GPUs for tasks like these.

The Amiga We All Wanted In 1993

By: Jenny List
5 June 2024 at 05:00

To be an Amiga fan during the dying days of the hardware platform back in the mid 1990s was to have a bleak existence indeed. Commodore had squandered what was to us the best computer ever with dismal marketing and a series of machines that were essentially just repackaged versions of the original. Where was a PCI Amiga with fast processors, we cried!

Now, thirty years too late, here’s [Jason Neus] with just the machine we wanted, in the shape of an ATX form factor Amiga motherboard with those all-important PCI slots and USB for keyboard and mouse.

What would have been unthinkable in the ’90s comes courtesy of an original or ECS Amiga chipset for the Amiga functions, plus an FPGA and a microcontroller for PCI and USB respectively. Meanwhile there’s also a PC floppy drive controller, based on work from [Ian Steadman]. The processor and RAM live on a daughter card, and both 68040 and 68060 processors are supported.

Here in 2024 of course this is still a 1990s spec board, and misty-eyed speculation about what might have happened aside, it’s unlikely to become your daily driver. But that may not be the point, instead we should evaluate it for what it is. Implementing a PCI bus, even a 1990s one, is not without its challenges, and we’re impressed with the achievement.

If you’re interested in Amiga post-mortems, here’s a slightly different take.

Aiken’s Secret Computing Machines

3 June 2024 at 20:00

This neat video from the [Computer History Archives Project] documents the development of the Aiken Mark I through Mark IV computers. Partly shrouded in the secrecy of World War II and the Manhattan Project effort, the Mark I, “Harvard’s Robot Super Brain”, was built and donated by IBM, and marked their entry into what we would now call the computer industry.

Numerous computing luminaries used the Mark I besides its designer Howard Aiken. Grace Hopper, Richard Bloch, and even John von Neumann all used the machine. It was an electromechanical computer, using gears, punch tape, relays, and a five-horsepower motor to keep it all running in sync. If you want to dig into how it actually worked, the deliciously named patent “Calculator” goes into some detail.

The video goes on to tell the story of Aiken’s various computers, the rift between Harvard and IBM, and the transition of computation from mechanical to electronic. If this is computer history that you don’t know, it’s well worth a watch. (And let us know if you also think that they’re using computer-generated speech to narrate it.)

If “modern” computer history is more your speed, check out this documentary about ENIAC.

Thanks [Stephen Walters] for the tip!

Intel’s Anti-Upgrade Tricks Defeated With Kapton Tape

31 May 2024 at 08:00
Screenshot of the Kaby Lake CPU pinout next to the Coffee Lake CPU pinout, showing just how few differences there are

If you own an Intel motherboard with a Z170 or Z270 chipset, you might believe that it only supports CPUs up to Intel’s 7th generation, known as Kaby Lake. The next generation’s CPUs even use a different socket pinout: we are told they will fit the same socket, but they won’t boot. So if you want a newer CPU, you’ll have to buy a new motherboard while you’re at it. Or do you?

Turns out, the difference in the socket is just a few pins here and there, and you can make an 8th or 9th generation Coffee Lake CPU work on your Z170/Z270 board if you apply a few Kapton tape fixes and mod your BIOS, in a process known as “Coffee Mod”. You can even preserve compatibility with the 6th/7th generation CPUs after doing this mod, should you ever need to go back to an older chip. Contrast this with AMD’s high degree of CPU support on even old Ryzen motherboards, and it’s as if Intel introduced this incompatibility intentionally.

There have been a number of posts on various PC forums and YouTube videos going through the process and showing off the tools used to modify the BIOS. Some mods are exceptionally easy to apply. For example, if you have the Asus Maximus VIII Ranger motherboard, a single jumper wire between two pads next to the EC will enable support without any Kapton tape, a mod that could likely be figured out for other similar motherboards as well. There are a few aspects to keep in mind, like making sure your board’s VRMs are good enough for the new chip, and a little more patching might be needed for hyper-threading, but nothing too involved.

Between money-grab features like this that hamper even the simplest of upgrades and increase e-waste, fun vulnerabilities, and an inability to sort out problems like stability and power consumption issues, it’s reassuring to see users take back control over their platforms wherever possible. It brings us back to the days of modding Xeon CPUs to fit into Socket 775 boards.

Don’t get too excited though, as projects like Intel BootGuard are bound to hamper mods like this on newer generations by introducing digital signing for BIOS images, flying under the banner of user security yet again. Alas, it appears way more likely that Intel’s financial security is the culprit.

We thank [Lexi] for sharing this with us!

The Emperor’s New Computer

27 May 2024 at 20:00

You walk into a home office and see an attractive standing desk that appears bare. Where’s the computer? Well, if it is [DIY Perks]’ office, the desk is the computer. Like a transformer robot, the desk transforms into a good-looking PC.

He starts with a commercial desk and creates a replacement desktop out of some aluminum sheets and extrusions. The motion uses some V-slot profiles and linear rails. The monitor and keyboard shelf pop up on invisible hinges. When closed, there’s no trace of a computer.

The mechanics of the pop-out hatch are complex, but they worked the first time. At least, we think it was the first time. Video editing is a possibility! He did have to add some springs and pneumatics to keep it from slamming down. A magnet gives a positive lock feeling when you open the hatch.

The monitor is an ultra-wide OLED that can be curved or flat. He removed the electronics from the panel and mounted the screen on the inner part of the hatch. Half of the electronics went back into the desk. A small but powerful PC with an Intel i9 and a graphics card fits in the desk. A conventional power supply would be too large, but a pair of very thin GaN power supplies come to the rescue.

Surplus server heatsinks keep the system cool without breaking the bank.

Thermal management is another thing that could easily have made the desk too thick. The solution was a custom brass heat spreader that runs the length of the desk, onto which he mounted 40 surplus server heatsinks paired with laptop fans. When they failed to get the job done, larger heatsinks and fans were brought in. These stick out below the bottom of the desk, but you wouldn’t notice unless you were lying on the floor.

Honestly, the build is amazing. If you are on the fence, watch the first few seconds of the video where the desk transforms, and you’ll be hooked. The final step was to make the aluminum desktop look like wood with oak planks and some optical illusions.

We doubt our woodworking and machining skills are up to duplicating this, but we wish he’d take our money. Desk computers aren’t really a new idea, of course. Be glad you don’t have to build a 1965 “desktop” computer into a desk.

Homebrew Computer from the Ground Up

26 May 2024 at 08:00

Building a retro computer of some sort is a rite of passage for many of us, with some building replicas or restorations of old Commodores, Ataris, and other machines from decades past. Others go even further back, to the time of the Intel 8008 or earlier, and a dedicated few will build something completely novel. This project from [3DSage] falls squarely in the latter category, with his completely DIY computer built component by component from scratch, including the machine code needed to run it.

[3DSage] starts with the backbone of every computer: the clock. He first demonstrates how a pair of NOT gates with a set of capacitors can be used as a rudimentary clock, then builds a more refined version with a 555 timer and a potentiometer for adjustable rates. Then it’s on to creating a binary counter, which forms a fundamental part of the memory system for this small computer and finally lets the circuitry start behaving like one. Using a set of switches to store values in memory and stepping through them with the clock, the computer can be programmed to do plenty of tasks, just like a modern microcontroller.
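To see why a clock plus a binary counter already gets you surprisingly far, here is a toy Python model (not [3DSage]’s design) of a 4-bit counter stepping through a 16-word switch-set memory on each clock tick:

```python
# Toy model of the clock-plus-counter core: each tick advances a 4-bit counter,
# and the counter's value addresses a 16-word memory set on front-panel switches.

BITS = 4
memory = [0] * (1 << BITS)
memory[:4] = [0b1010, 0b0001, 0b1111, 0b0110]   # values "entered" on the switches

counter = 0

def clock_tick():
    """One pulse from the 555-style clock: advance the counter, fetch a word."""
    global counter
    counter = (counter + 1) % (1 << BITS)       # the 4-bit counter wraps at 16
    return memory[counter]

for _ in range(5):
    word = clock_tick()
    print(f"address {counter:04b} -> {word:04b}")
```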

[3DSage] built this project a few years ago and has used it for real-world applications such as controlling servos, LED arrays, playing music, and other tasks. Although he has to program it using his own machine code by hand, it’s a usable computer in many ways. If you want to eschew modernity and build a retro computer in the style of the 1960s, though, this piece goes through what it would have been like to build a similar system in the era when these computers were more common. If you have a switch fetish, you might like to see how real computers worked back then, too.

A Slice of Simulation, Google Sheets Style

15 May 2024 at 14:00

Have you ever tried to eat one jelly bean or one potato chip? It is nearly impossible. Some of us have the same problem with hardware projects. It all started when I wrote about the old bitslice chips people used to build computers before you could easily get a whole CPU on a chip. Bitslice is basically Lego blocks that build CPUs. I have always wanted to play with the technology, so when I wrote that piece, I looked on eBay to see if I could find any leftovers from this 1970s-era tech. It turns out that the chips are easy to find, but I found something even better: a mint-condition AM2900 evaluation board. These aren’t easy to find, so the chances that you can try one out yourself are pretty low. But I’m going to fix that, virtually speaking.

This was just the second potato chip. Programming the board, as you can see in the video below, is tedious, with lots of binary switch-flipping. To simplify things, I took another potato chip — a Google Sheet that generates the binary from a quasi-assembly language. That should have been enough, but I had to take another chip from the bag. I extended the spreadsheet to actually emulate the system. It is a terrible hack, and Google Sheets’ performance for this sort of thing could be better. But it works.

If you missed it, I left many notes on Hackaday.io about the project. In particular, I created a microcode program that takes two four-bit binary-coded decimal digits and computes the proper 8-bit number. It isn’t much, but the board only has 16 microcode instructions, so you must temper your expectations. If you want an overview of the entire technology, we’ve done that, too.
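For reference, the whole job of that 16-step microcode routine fits in a couple of lines of Python. The names here are made up, and whether the board’s microcode really does its multiply-by-ten as shifts and adds is our guess, but the arithmetic is the same:

```python
# Two 4-bit BCD digits in, one 8-bit binary value out: result = tens * 10 + units.
# On shift-and-add hardware, the *10 is typically built as (d << 3) + (d << 1).

def bcd_pair_to_binary(tens, units):
    assert 0 <= tens <= 9 and 0 <= units <= 9
    return ((tens << 3) + (tens << 1)) + units   # tens*8 + tens*2 + units

print(bcd_pair_to_binary(4, 2))   # 42, i.e. 0b00101010
```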

Starting Point

Block diagram of the board being simulated

The idea for the simulator struck me when I was building the assembler. I considered writing a stand-alone program, but I wanted to limit my potato chip consumption, so keeping it in the spreadsheet seemed like a good idea.

Was it? Hard to say. Google Sheets has macros that are just JavaScript. However, the macros are somewhat slow, and attaching them to user interface elements is difficult. There were plenty of ways to do it, but I went for the path of least resistance.

Strategy

For better or worse, I tried to minimize the amount of scripting. All of the real work occurs on the Sim tab of the spreadsheet, and only a few key parts are included in the attached macros. Instead, the cells take advantage of the way the AM2900 works. For example, some bits in the microcode instructions represent how to find the next address. Instead of calculating this with code, there is a table that computes the address for each possible branch.

For example, branch type zero goes to the next address when the current result is zero or the address coded in the instruction if the result is not zero. If the branch type is one, there is always a jump to the hardcoded address, while a branch type of two always takes the next instruction. So, the associated table always computes all the possible results (see cells O1 through P18). Then, cell P18 uses VLOOKUP to pick the right row from the table (which has 16 rows).
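In spreadsheet-free form, the same “compute every candidate, then look one up” idea looks roughly like this (a sketch of the approach, not the actual cell formulas):

```python
# Sketch of the next-address logic: compute every candidate address up front,
# then select one by branch type, just as the sheet's VLOOKUP picks a row.
# Branch types follow the description above; the real board has more of them.

def next_address(branch_type, pc, target, result_was_zero):
    candidates = {
        0: pc + 1 if result_was_zero else target,  # conditional: fall through on zero
        1: target,                                 # unconditional jump
        2: pc + 1,                                 # always take the next instruction
    }
    return candidates[branch_type]

print(next_address(0, pc=4, target=9, result_was_zero=True))   # 5
print(next_address(0, pc=4, target=9, result_was_zero=False))  # 9
```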

Every part of the instruction decode works this way. The only complication is that the instructions operate on the current result, something mentioned in the last post. In other words, consider an instruction that says (in English): If the result is zero, go to location 9; add 1 to the B register. You might assume the jump will occur if B+1 results in zero. But that’s not how it works. Instead, the processor adds B and 1. Then, it jumps to location 9 if the state was zero before the addition operation.

What this means is that the spreadsheet computes all things at all times. The macros are almost like clock pulses. Instead of gating flip flops, they copy things from where they are calculated to where they are supposed to go.
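Here is a tiny sketch of that ordering, with hypothetical names: the jump condition reads the status as it stood before this instruction’s result is latched, and everything is committed at once, like a clock edge.

```python
# "If the result is zero, go to location 9; add 1 to the B register."
# The branch tests the zero flag left over from the PREVIOUS operation.

state = {"B": 5, "zero_flag": True, "pc": 4}     # flag set by some earlier instruction

def clock_pulse(state):
    new_b = (state["B"] + 1) & 0xFF                            # compute B + 1 ...
    next_pc = 9 if state["zero_flag"] else state["pc"] + 1     # ... but branch on the old flag
    state.update(B=new_b, zero_flag=(new_b == 0), pc=next_pc)  # then commit everything at once

clock_pulse(state)
print(state)   # B is 6 (non-zero), yet pc is 9: the flag was set before the add
```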

Macros

The main simulation logic is in the stepengine macro. It computes the next address and sets the status latch, if necessary. Then it grabs the result of the current operation and places it in the correct destination. The final part of the macro updates the next location, which may require manipulating the processor stack. All of those things would have been difficult to do in spreadsheet logic.
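In outline (Python rather than Apps Script, and with invented names and a deliberately tiny microword format), that per-pulse work looks something like this:

```python
# One "clock pulse" of a step_engine-style macro: pick the next address, latch
# status, route the result to its destination, then advance. The real macro
# also manages the processor stack, which is omitted here.

def step_engine(cpu, microcode):
    op, target, latch_status, dest = microcode[cpu["pc"]]

    result = (cpu["regs"][dest] + 1) & 0xF if op == "inc" else cpu["regs"][dest]
    # The branch looks at the status as it stood before this microword commits.
    next_pc = target if (op == "jz" and cpu["zero"]) else cpu["pc"] + 1

    if latch_status:                    # 1. update the status latch
        cpu["zero"] = (result == 0)
    cpu["regs"][dest] = result          # 2. put the result where it belongs
    cpu["pc"] = next_pc                 # 3. move on to the next microword

cpu = {"regs": {"B": 0xE}, "zero": False, "pc": 0}
program = [("inc", None, True, "B"),    # B = B + 1, latch the zero flag
           ("inc", None, True, "B"),
           ("jz", 0, False, "B")]       # jump back to 0 if the last result was zero
for _ in range(4):
    step_engine(cpu, program)
print(cpu)
```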

The other macros are essentially wrappers around stepengine. The Exec macro executes an instruction without advancing (like stepping the real board in load mode). The Step macro can optionally single step, or it can execute in place like Exec. The Run macro does the real execution and also checks for breakpoints. There’s also a Reset macro to put everything in a known state.

Usage

The user interface for the simulator.

You can call the macros directly, but that’s not very user-friendly. Instead, the Sim tab has three graphical buttons for run, step, and reset. Each command has options. For example, under Run, you can set a hex address to break execution. Under Step, you can decide if the step should advance the program counter or not. The reset button allows you to clear registers.

Don’t enter your program on the Sim tab. Use the Main tab as before. You can also go to the Extensions | Macros menu to load one of the canned demos. Demo 1 is the BCD program from the last post. The other examples are the ones that shipped with the real board’s manual. If you really want to learn how the thing works, you could do worse than walk through the manual and try each example. Just don’t forget that the scanned manual has at least two typos: Figure 7 is wrong, and example 7 has an error (that error was fixed in later manuals and in the simulator’s copy, too). Instead of figure 7, use figure 3 or pick up a corrected figure on Hackaday.io.

Why learn how to operate the AM2900 evaluation board? Beats me, but I did it anyway, and now you can do it without spending the money I did on a piece of exotic hardware. I’d like to say that this might help you understand modern CPU design, but that wouldn’t be very fair. I suppose it does help a little, but modern CPUs have as much in common with this design as a steam locomotive has in common with a jet airplane.

If the idea of building a CPU from modules appeals to you, check out DDL-4. If that’s too easy for you, grab your bag of 2,000 transistors and check out this build. I’m sealing up my bag of potato chips now. Really.

Answering All Your iSCSI Scanner Questions

13 May 2024 at 02:00
The film scanner [xssfox] found, in the center of a table, with other stuff strewn across the table

iSCSI is a widely used protocol for exposing SCSI devices over a network connection, and some scanners have in the past been equipped with SCSI ports. So, could you have an iSCSI network scanner? [xssfox] details her journey making a Canoscan FS4000US film scanner work over iSCSI, sparked by someone’s overly-confident StackOverflow comment that it couldn’t be done. Nothing in the spec said it couldn’t actually work, however, and after figuring out a tentative architecture, a hardware setup was put together.

No flatbed scanners with SCSI ports could be found on the cheap, so a film scanner had to be procured. After figuring out a few hitches with the loading mechanism and getting a test image locally, it was time to try and build up the software setup, tearing through SCSI compatibility and cabling, driver and PCI pass-through woes, bluescreens, and intermediate software having dropped some of the necessary features by now. Still, [xssfox] eventually exported the scanner as an iSCSI target – and, on the other end of the network, successfully connected to it and completed a scan. The StackOverflow answer was wrong, after all.

It’s fun to see how far old technology can go, and to get answers to questions you never knew you had. Whether you’re reminiscing about SCSI days or wondering what the technology is about, we’ve talked about it aplenty, from a retrospective to modern-day experiments, repurposing old SCSI hardware for modern SATA ports, a Raspberry Pi implementation, an emulator, and a fair bit more.

We thank [Valentijn Sessink] and [adistuder] for sharing this with us!

Pssst… Wanna Buy An Old Supercomputer?

By: Jenny List
1 May 2024 at 11:00

If you spend your time plotting evil world domination while stroking your fluffy white cat in your super-villain lair, it’s clear that only the very best in high-performance computing is going to help you achieve your dastardly aims. But computers of that scale are expensive, and not even your tame mad scientist can whistle one out of thin air. Never mind though, because if your life lacks a supercomputer, there’s one for sale right now in Wyoming.

The Cheyenne Supercomputer was ranked in the top 20 of global computing power back in 2016, when it was installed to work on atmospheric simulation and earth sciences. There’s a page containing exhaustive specs, but overall we’re talking about a Silicon Graphics ICE XA system with 8,064 processors at 18 cores each for a total of 145,152 cores, and a not inconsequential 313,344 GB of memory. In terms of software it ran the SuSE Linux Enterprise Server OS, but don’t let that stop you from installing your distro of choice.

It’s now being sold on a government auction site in a decommissioned but able to be reactivated state, and given that it takes up a LOT of space we’re guessing that arranging the trucks to move it will cost more than the computer itself. If you’re interested it’s standing at a shade over $40,000 at the time of writing with its reserve not met, and you have until the 3rd of May to snag it.

It’s clear that the world of supercomputing is a fast-moving one and this computer has been superseded. So whoever buys it won’t be joining the big boys any time soon — even though it remains one heck of a machine by mere mortal standards. We’re curious then who would buy an old supercomputer, if anyone. Would its power consumption for that much computing make it better off as scrap metal, or is there still a place for it somewhere? Ideas? Air them in the comments.

Keep Tabs on PC Use with Custom Analog Voltmeter

26 April 2024 at 23:00

With the demands of modern computing, from video editing and streaming to gaming, many of us will turn to a monitoring system at some point to keep tabs on CPU usage, temperatures, memory, and other physical states of our machines. Most simply display this information on the screen, but the data can also be sent to external CPU monitors. This retro-styled monitor built on analog voltmeters does a great job of it and adds some flair to a modern workstation as well.

The build, known as bbMonitor, is based on the ESP32 platform, which controls an array of voltmeters via PWM. The voltmeters have been given percentage scales to show things like CPU utilization. Software running on the computers sends this data in real time to the ESP32 so the computer’s behavior can be viewed at a glance. Each voltmeter is also augmented with RGB LEDs that change color from green to red as use increases. The project’s creator, [Corebb], also notes that the gauges will bounce around if the computer is under heavy load but act more linearly when under constant load, which also helps to keep an eye on computer status.
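The gauge-driving side is simple in principle. Here is a minimal MicroPython-flavoured sketch of the idea, our guess at the approach rather than [Corebb]’s actual firmware, with made-up pin numbers and scaling:

```python
# MicroPython-style sketch: map a 0-100% CPU figure onto a PWM duty cycle that
# drives one voltmeter. GPIO number, PWM frequency, and scaling are made up.

from machine import Pin, PWM

meter = PWM(Pin(25), freq=1000)       # voltmeter coil behind an RC filter

def show_cpu(percent):
    percent = max(0, min(100, percent))
    # duty_u16 spans 0..65535; full-scale deflection corresponds to 100%.
    meter.duty_u16(int(percent / 100 * 65535))

show_cpu(42)   # needle settles a little under half scale
```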

While the build does seem to rely on a Windows machine to run the software for export to the monitor, all of the code is open-sourced and available on the project’s GitHub page and could potentially be adapted for other operating systems. And, as far as the voltmeters themselves go, there have been similar projects in the past that use stepper motors as a CPU usage monitor instead.
