
The ’80s Multi-Processor System That Never Was

21 June 2024 at 05:00

Until the early 2000s, the computer processors available on the market were essentially all single-core chips. There were some niche designs that used multiple processors on the same board for improved parallel operation, but it wasn’t until IBM’s POWER4 in 2001, and later chips like the AMD Opteron and Intel Pentium D, that multi-core processors became widely available. If things had gone just slightly differently with this experimental platform, though, we might have had multi-processor systems available for general use as early as the ’80s instead of two decades later.

The team behind this chip was from the University of California, Berkeley, a place known for such other innovations as RAID, BSD, SPICE, and some of the first RISC processors. This processor architecture was based on RISC as well, and was known as Symbolic Processing Using RISC (SPUR). It was specially designed to integrate with the Lisp programming language, but its major feature was a set of parallel processors on a common bus, which allowed parallel operations to be computed at much greater speed than comparable systems of the time. The use of RISC also allowed a smaller group to develop something like this: although a RISC design has to execute more instructions, each instruction can often complete faster than on other architectures.
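To make the appeal of that layout concrete, here is a minimal sketch of the general idea, several processors sharing one memory over a common bus, each chewing on an independent slice of a problem, with the results combined at the end. This is not SPUR code and assumes nothing about its instruction set; POSIX threads simply stand in for the hardware processors.

/*
 * Toy illustration of shared-memory multiprocessing (not SPUR code):
 * NPROC "processors" each sum an independent slice of the data, then
 * the results are combined. Build with: cc -pthread example.c
 */
#include <pthread.h>
#include <stdio.h>

#define NPROC 4          /* hypothetical number of processors */
#define N     1000000    /* total amount of work              */

static long partial[NPROC];   /* one slot per processor, so no locking needed */

static void *worker(void *arg)
{
    long id = (long)arg;
    long sum = 0;
    /* each processor handles a strided, independent slice of the data */
    for (long i = id; i < N; i += NPROC)
        sum += i;
    partial[id] = sum;
    return NULL;
}

int main(void)
{
    pthread_t tid[NPROC];
    for (long p = 0; p < NPROC; p++)
        pthread_create(&tid[p], NULL, worker, (void *)p);

    long total = 0;
    for (long p = 0; p < NPROC; p++) {
        pthread_join(tid[p], NULL);
        total += partial[p];   /* combine results through shared memory */
    }
    printf("sum = %ld\n", total);
    return 0;
}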

The linked article from [Babbage] goes into much more detail about the architecture of the system as well as some of the things about UC Berkeley that made projects like this possible in the first place. It’s a fantastic deep-dive into a piece of somewhat obscure computing history that, had it been more commercially viable, could have changed the course of computing. Berkeley RISC did go on to have major impacts in other areas of computing and was a significant influence on the SPARC system as well.

Startup Claims it Can Boost CPU Performance by 2-100X

13 June 2024 at 02:00

Although Moore’s Law has slowed a bit as chip makers reach the physical limits of transistor size, researchers are having to look to avenues other than cramming more transistors onto a chip to increase CPU performance. ARM is having a bit of a moment by improving the performance-per-watt of many computing platforms, but other ideas will need to come to the forefront to make any big pushes in this area. A startup called Flow Computing claims it can improve modern CPUs by a significant amount with a slight change to their standard architecture.

It hopes to make these improvements by adding a parallel processing unit, which it calls the “back end”, to a more-or-less standard CPU, the “front end”. The two computing units would sit on the same chip, with a shared bus letting them communicate extremely quickly, so the front end can rapidly offload tasks that are better suited to parallel processing. Since the front end keeps essentially the same components as a modern CPU, the startup hopes to maintain backwards compatibility with existing software while letting developers optimize for the new parallel computing unit when needed.
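Flow hasn’t published a programming model, so take the following only as a rough mental model of the claimed split: serial code runs on the “front end” as usual, and a hypothetical ppu_offload() call hands a data-parallel kernel to the “back end”, which is faked here with ordinary threads. The function name and the eight-lane width are inventions for illustration, not Flow’s API.

/*
 * Sketch of a front-end/back-end split, under the assumptions above.
 * Build with: cc -pthread flow_sketch.c
 */
#include <pthread.h>
#include <stdio.h>

#define LANES 8   /* hypothetical width of the parallel back end */

typedef void (*kernel_fn)(float *data, long start, long end);

struct lane_args { kernel_fn fn; float *data; long start, end; };

static void *lane(void *p)
{
    struct lane_args *a = p;
    a->fn(a->data, a->start, a->end);
    return NULL;
}

/* The "front end" hands a data-parallel kernel to the "back end",
 * which splits it across its lanes and waits for completion. */
static void ppu_offload(kernel_fn fn, float *data, long n)
{
    pthread_t tid[LANES];
    struct lane_args args[LANES];
    long chunk = (n + LANES - 1) / LANES;
    for (int i = 0; i < LANES; i++) {
        long start = i * chunk;
        long end   = start + chunk > n ? n : start + chunk;
        args[i] = (struct lane_args){ fn, data, start, end };
        pthread_create(&tid[i], NULL, lane, &args[i]);
    }
    for (int i = 0; i < LANES; i++)
        pthread_join(tid[i], NULL);
}

static void scale_kernel(float *data, long start, long end)
{
    for (long i = start; i < end; i++)
        data[i] *= 2.0f;   /* embarrassingly parallel work */
}

int main(void)
{
    static float data[1000];
    for (long i = 0; i < 1000; i++)
        data[i] = (float)i;

    /* Serial "front end" code runs as usual... */
    printf("before: data[10] = %.1f\n", data[10]);

    /* ...and offloads the parallel part to the "back end". */
    ppu_offload(scale_kernel, data, 1000);

    printf("after:  data[10] = %.1f\n", data[10]);
    return 0;
}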

While we’ll take a step back and refrain from calling this the future of computing until we see some results, and maybe a prototype or two, the idea does show some promise. It’s similar to ARM machines that pair cores optimized for different workloads, and to systems that offload non-graphics work to a GPU better suited to parallel processing. Even the Raspberry Pi is starting to take advantage of external GPUs for tasks like these.
