
BASIC Co-Inventor Thomas Kurtz Has Passed Away

It’s with sadness that we note the passing of Thomas E. Kurtz, on November 12th. He was co-inventor of the BASIC programming language back in the 1960s, and though his creation may not receive the attention in 2024 that it would have done in 1984, the legacy of his work lives on in the generation of technologists who gained their first taste of computer programming through it.

A BBC Micro BASIC program that writes "HELLO HACKADAY!" to the screen multiple times.
For the 1980s kids who got beyond this coding masterpiece, BASIC launched many a technology career.
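
The exact listing in the screenshot isn't reproduced here, but the classic form of that program is only a couple of lines; here's a sketch of the kind of thing it would have been, in BBC Micro BASIC:

10 REM PRINT THE MESSAGE FOREVER
20 PRINT "HELLO HACKADAY!"
30 GOTO 20

Type RUN, watch the screen fill up, and hit ESCAPE when you've had enough; many of us can trace a career back to exactly that moment.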

The origins of BASIC lie in the Dartmouth Time Sharing System, which, like similar timesharing operating systems of the day, was designed to allow the resources of a single computer to be shared across many terminals. In this case the computer was at Dartmouth College, and BASIC was designed to be a language in which software could be written by average students who perhaps didn't have a computing background. In the decade that followed it proved ideal for the new microcomputers, and few were the home computers of the era which didn't boot into some form of BASIC interpreter. Kurtz continued his work as a distinguished academic and educator until his retirement in 1993, but throughout it all he remained the guiding hand of the language.

Should you ask a computer scientist their views on BASIC, you'll undoubtedly hear about its shortcomings, and no doubt mention will be made of the GOTO statement and how it makes larger projects very difficult to write. This is all true, but at the same time it misses the point of it being a readily understandable language for first-time users of machines with very little in the way of resources. It was the perfect programming start for a 1970s or 1980s beginner, and once its limitations had been reached it provided the impetus for a move to higher things. We've not written a serious BASIC program in over three decades, but we're indebted to Thomas Kurtz and his collaborator John Kemeny for what they gave us.

Thanks [Stephen Walters] for the tip.

All You Need for Artificial Intelligence is a Commodore 64

Artificial intelligence has been around far longer than the current hype would suggest, with [Timothy J. O'Malley]'s 1985 book on AI projects for the Commodore 64 being one example. With AI defined as the theory and development of systems that can perform tasks normally requiring human intelligence (e.g. visual perception, speech recognition, decision-making), the book is a good introduction to the many ways in which computer systems have, for decades now, been able to learn, make decisions and in general become more human-like, even if there's no electronic personality behind the actions.

In the book's first chapter, [Timothy] isn't afraid to toss in some opinions about the true nature of intelligence and thinking. Starting from the concept that intelligence is based on storing information and deriving meaning from the connections between stored pieces of information, he arrives at the idea of a basic AI such as one would use for the computer opponent in a game. A number of ways of implementing such an AI are explored in the first and subsequent chapters, using Towers of Hanoi, chess, Nim and other games.
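
We won't reproduce the book's listings here, but the flavour of such a game opponent is easy to show. Below is our own minimal sketch (not taken from the book) of a single-pile Nim opponent in Commodore 64 BASIC, whose 'intelligence' amounts to always leaving its human opponent a multiple of four objects:

10 REM SINGLE-PILE NIM: WHOEVER TAKES THE LAST OBJECT WINS
20 N=20: PRINT "20 OBJECTS. TAKE 1 TO 3 EACH TURN."
30 PRINT N;"OBJECTS LEFT."
40 INPUT "HOW MANY DO YOU TAKE (1-3)";P
50 IF P<1 OR P>3 OR P>N THEN 40
60 N=N-P: IF N=0 THEN PRINT "YOU WIN!": END
70 REM COMPUTER MOVE: LEAVE A MULTIPLE OF FOUR BEHIND
80 M=N-4*INT(N/4): IF M=0 THEN M=1
90 PRINT "I TAKE";M: N=N-M
100 IF N=0 THEN PRINT "I WIN!": END
110 GOTO 30

With 20 objects and the human going first, the computer can always win: a handful of IF statements and a little arithmetic are enough to look intelligent at a well-chosen game.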

After this we look at natural language processing – referencing ELIZA as an example – followed by heuristics, pattern recognition and AI for robotics. Although much of this may seem outdated in this modern age of LLMs and neural networks, it's important to realize that much of what we consider 'bleeding edge' today has its roots in AI research performed in the 1950s and 1960s. As [Timothy] rightly states in the final chapter, there is no real limit to how far you can push this type of AI as long as you have more hardware and storage to throw at the problem. This is how we ended up with datacenters full of GPU-equipped systems churning through vector-space calculations for the sake of today's LLM and diffusion model take on 'AI'.
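
ELIZA's trick was little more than keyword spotting plus canned responses, which is comfortably within reach of an 8-bit micro. As a rough illustration – our own sketch, not a listing from the book, written for Commodore BASIC V2, which has no INSTR function so the substring search is done by hand – something like this captures the idea:

10 REM TOY ELIZA-STYLE KEYWORD MATCHER
20 INPUT "HOW ARE YOU FEELING TODAY";I$
30 K$="SAD": R$="WHY DO YOU FEEL SAD?": GOSUB 100: IF F=1 THEN 20
40 K$="MOTHER": R$="TELL ME ABOUT YOUR FAMILY.": GOSUB 100: IF F=1 THEN 20
50 PRINT "PLEASE GO ON.": GOTO 20
100 REM SCAN I$ FOR KEYWORD K$; ON A MATCH PRINT R$ AND SET F=1
110 F=0: L=LEN(K$)
120 IF LEN(I$)<L THEN RETURN
130 FOR J=1 TO LEN(I$)-L+1
140 IF MID$(I$,J,L)=K$ THEN F=1
150 NEXT J
160 IF F=1 THEN PRINT R$
170 RETURN

Everything beyond this – synonym tables, pronoun swapping, ranked keywords – is just more of the same string handling.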

Using a Commodore 64 to demonstrate the (lack of) validity of grand claims is nothing new either, with a group of researchers recently using one of these breadbin marvels to run an Ising model with a tensor network and outperform IBM's quantum processor. As they say, just because it's new and shiny doesn't necessarily mean that it is actually better.
