
Fixed Point Math Exposed

23 June 2024 at 11:00

If you are used to writing software for modern machines, you probably don’t think much about computing something like one divided by three. Modern computers handle floating point quite well. However, in constrained systems, there is a trap you should be aware of. While modern compilers are happy to let you use and abuse floating point numbers, the hardware is often woefully slow. It also tends to eat up lots of resources. So what do you do? Well, as [Low Byte Productions] explains, you can opt for fixed-point math.

In theory, the idea is simple. Just put an arbitrary decimal point in your integers. So, for example, if we have two numbers, say 123 and 456, we could remember that we really mean 1.23 and 4.56. Adding, then, becomes trivial since 123+456=579, which is, of course, 5.79.

But, of course, nothing is simple. Multiplying those two numbers gives you 56088, but that represents 5.6088, not 560.88. So keeping track of the decimal point is a little more complicated than the addition case would make you think.
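
To make the bookkeeping concrete, here is a minimal sketch of the idea, written in Python purely for illustration (on the constrained targets the video has in mind this would be plain integer C), using a scale factor of 100, i.e. two implied decimal places:

SCALE = 100  # two implied decimal places

a = 123      # represents 1.23
b = 456      # represents 4.56

# Addition: the implied decimal points already line up, so no adjustment.
total = a + b                # 579 -> 5.79

# Multiplication: the raw product carries SCALE * SCALE, so divide once
# by SCALE to get back to a single implied factor of 100. The integer
# division also truncates: the exact 5.6088 becomes 5.60.
product = (a * b) // SCALE   # 56088 // 100 = 560 -> 5.60

print(total / SCALE)         # 5.79
print(product / SCALE)       # 5.6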

How much more complicated is it? Well, the video covers a lot, but it takes an hour and a half to do it. There’s plenty of code and explanations, so if you haven’t dealt with fixed-point math or you want a refresher, this video is worth the time to watch.

Want to do 3D rendering on an ATMega? Fixed point is your friend. We’ve done our own deep dive on the topic way back in 2016.

Forsp: A Forth & Lisp Hybrid Lambda Calculus Language

By: Maya Posch
14 June 2024 at 02:00

In the world of lambda calculus programming languages there are many ways to express the terms, which is why we ended up with such an amazing range of programming languages, even if most trace their roots back to ALGOL. Of the more unique (and practical) languages, Lisp and Forth probably rank near the top, but what if you were to smudge both together? That’s what [xorvoid] did, and it resulted in the gracefully titled Forsp programming language. Unsurprisingly, it got a very warm and enthusiastic reception over at Hacker News.

While keeping much of the Lisp-isms, the Forth influence shows primarily in the language being very small and easy to implement, as demonstrated by the C-based reference implementation. It also features a Forth-like value/operand stack and function application. Also interesting is that Forsp uses call-by-push-value (CBPV), which is quite different from call-by-value (CBV) and call-by-name (CBN), and which may offer some advantages if you can wrap your mind around the concept.

Even if practicality is debatable, Forsp is another delightful addition to the list of interesting lambda calculus demonstrations which show that the field is anything but static or boring.

Shipping Your Illicit Software on Launch Hardware

13 June 2024 at 23:00

In the course of a career, you may run up against projects that get cancelled, especially those that are interesting, but deemed unprofitable in the eyes of the corporate overlords. Most people would move on, but [Ron Avitzur] just couldn’t let it go.

In 1993, in the midst of the transition to PowerPC, [Avitzur]’s employer let him go as the project they were contracted to perform for Apple was canceled. He had been working on a graphing calculator to show off the capabilities of the new system. Finding his badge still allowed him access to the building, he “just kept showing up.”

[Avitzur] continued working until Apple Facilities caught on to his use of an abandoned office with another former contractor, [Greg Robbins], and their badges were removed from the system. Not the type to give up, they tailgated other engineers into the building to a different empty office to continue their work. (If you’ve read Kevin Mitnick’s Ghost in the Wires, you’ll remember this is one of the most effective ways to gain unauthorized access to a building.)

We’ll let [Avitzur] tell you the rest, but suffice it to say, this story has a number of twists and turns to it. We suspect it certainly isn’t the typical way a piece of software gets included on the device from the factory.

Looking for more computing history? How about a short documentary on the Aiken computers, or a Hack Chat on how to preserve that history?

[Thanks to Stephen for the tip via the Retrocomputing Forum!]

Eraser AI

By: EasyWithAI
6 May 2024 at 12:25
Eraser AI is a technical design copilot that’s able to streamline technical design workflows for developers and engineering teams. It can serve as a copilot for creating, editing, and documenting diagrams, architectures, and design documents using natural language prompts. Some more detailed use cases for this particular tool can be found on the main website, […]

Source

Programming Ada: Records and Containers for Organized Code

By: Maya Posch
4 June 2024 at 14:00

Writing code without having some way to easily organize sets of variables or data would be a real bother. Even if in the end you could totally do all of the shuffling of bits and allocating in memory by yourself, it’s much easier when the programming language abstracts all of that housekeeping away. In Ada you generally use a few standard types, ranging from records (equivalent to structs in C) to a series of containers like vectors and maps. As with any language, there are some subtle details about how all of these work, which is where the usage of these types in the Sarge project will act as an illustrative example.

In this project’s Ada code, a record is used for information about command line arguments (flag names, values, etc.) with these argument records stored in a vector. In addition, a map is created that links the names of these arguments, using a string as the key, to the index of the corresponding record in the vector. Finally, a second vector is used to store any text fragments that follow the list of arguments provided on the command line. This then provides a number of ways to access the record information, either sequentially in the arguments vector, or by argument (flag) name via the map.

Introducing Generics

Not unlike the containers provided by the Standard Template Library (STL) of C++, the containers provided by Ada are provided as generics, meaning that they cannot be used directly. Instead we have to create a new package that uses the container generic to formulate a container implementation limited to the types which we intend to use with it. For a start let’s take a look at how to create a vector:

with Ada.Containers.Vectors;
use Ada.Containers;
package arg_vector is new Vectors(Natural, Argument);

The standard containers are part of the Ada.Containers package, which we include here before instantiating the desired arguments vector, which is indexed using natural numbers (integers from zero upwards, no negative numbers) and has the Argument type as value. This latter type is the custom record, which is defined as follows:

type Argument is record
    arg_short: aliased Unbounded_String;
    arg_long: aliased Unbounded_String;
    description: aliased Unbounded_String;
    hasValue: aliased boolean := False;
    value: aliased Unbounded_String;
    parsed: aliased boolean := False;
end record;

Here the aliased keyword means that the variable will have a memory address rather than, for instance, only ever existing in a register. It is roughly the inverse of the old C and C++ register keyword: instead of hinting that a variable may live in a register, in Ada the programmer marks the variables that must have an address. For a variable marked aliased, this means that its access (‘pointer’, in C parlance) can be taken.

Moving on, we can now create the two vectors and the one map, starting with the arguments vector using the earlier defined arg_vector package:

args : arg_vector.vector;

The text arguments vector is created effectively the same way, just with an unbounded string as its value:

package tArgVector is new Vectors(Natural, Unbounded_String);
textArguments: tArgVector.vector;

Finally, the map container is created in a similar fashion. Note that for this we are using the Ada.Containers.Indefinite_Ordered_Maps package. Ordered maps contrast with hashed maps in that they do not require a hash function, but instead use the < operator (either the one existing for the type, or a custom one). These maps provide O(log N) look-up time, which is faster than the O(N) linear search through a vector and the reason why the map is used as an index for the vector here.

package argNames_map is new Indefinite_Ordered_Maps(Unbounded_String, Natural);
argNames: argNames_map.map;

With these packages and instances defined and instantiated, we are now ready to fill them with data.

Cross Mapping

When we define a new argument to look for when parsing command line arguments, we have to perform three operations: first, create a new Argument record instance and assign its members the relevant information; second, append this record to the args vector; and third, link the argument’s flag names to the record’s index in the map. The record is provided with data via the setArgument procedure:

procedure setArgument(arg_short: in Unbounded_String; arg_long: in Unbounded_String; 
                            desc: in Unbounded_String; hasVal: in boolean);

This allows us to create the Argument instance in the declarative part (before begin in the procedure block) as follows:

arg: aliased Argument := (arg_short => arg_short, arg_long => arg_long, 
                          description => desc, hasValue => hasVal, 
                          value => +"", parsed => False);

This Argument record can then be added to the args vector:

args.append(arg);

Next we have to set up links in the map between the flag names (short and long versions) and the relevant index in the arguments vector:

argNames.include(arg_short, args.Last_Index);
argNames.include(arg_long, args.Last_Index);

This sets the key for the map entry to the short or long version of the flag, and takes the last added (highest) index of the arguments vector for the value. We’re now ready to find and update records.

Search And Insert

Using the contraption which we just set up is fairly straightforward. If we want to check, for example, whether an argument flag has been defined or not, we can use the arguments vector and the map as follows:

flag_it: argNames_map.Cursor;
flag_it := argNames.find(arg_flag);
if flag_it = argNames_map.No_Element then
    return False;
elsif args(argNames_map.Element(flag_it)).parsed /= True then
    return False;
end if;

This same method can be used to find a specific record to update the freshly parsed value that we expect to trail certain flags:

flag_it: argNames_map.Cursor;
flag_it := argNames.find(arg_flag);
args.Reference(argNames_map.Element(flag_it)).value := arg;

Using the Reference function on the args vector gets us a reference to the element which we can then update, unlike the Element function of the package. The requisite index into the arguments vector is obtained by looking up the flag name in the argNames map and passing the resulting cursor to argNames_map.Element, as in the snippet above.

We can now easily check whether a particular flag has been found by looking up its record in the vector and returning the associated value, as defined in the getFlag function in the sarge.adb file of Sarge:

function getFlag(arg_flag: in Unbounded_String; arg_value: out Unbounded_String) return boolean is
flag_it: argNames_map.Cursor;
use argNames_map;
begin
    if parsed /= True then
        return False;
    end if;

    flag_it := argNames.find(arg_flag);
    if flag_it = argNames_map.No_Element then
         return False;
    elsif args(argNames_map.Element(flag_it)).parsed /= True then
        return False;
    end if;

    if args(argNames_map.Element(flag_it)).hasValue = True then
        arg_value := args(argNames_map.Element(flag_it)).value;
    end if;

    return True;
end getFlag;

Other Containers

There are of course many more containers defined in Ada’s Predefined Language Library (PLL) than just the two types covered here. For instance, sets are effectively like vectors, except that they only allow unique elements to exist within the container. And that is only the beginning of the available containers: the Ada 2005 standard defined only the first collection, which was massively extended in the Ada 2012 standard (which we focus on here). These include trees, queues, linked lists and so on. We’ll cover some of these in more detail in upcoming articles.

Together with the packages, functions and procedures covered earlier in this series, records and containers form the basics of organizing code in Ada. Naturally, Ada also supports more advanced types of modularization and reusability, such as object-oriented programming, which will also be covered in upcoming articles.

Static Recompilation Brings New Life to N64 Games

By: Maya Posch
21 May 2024 at 11:00

Over the past few years a number of teams have been putting a lot of effort into taking beloved Nintendo 64 games, decompiling them, and lovingly crafting them into highly portable C code. This allows for these games to not only run natively on PCs, but also for improvements to be made to the rendering engine and other components.

Yet this artisan approach to porting these games means a massive time investment, something which static binary translation (static recompilation) may conceivably speed up. Enter the N64: Recompiled project, which provides a binary translation tool to ease the translation of the N64’s binaries into C code.

This is effectively quite similar to what an emulator does in real-time, just with the goal of creating a permanent copy of the translated instructions. After this static binary translation, the C code can be compiled again, but as noted by the project’s documentation, a suitable runtime is needed to get a functional game. An example of this is the Zelda 64: Recompiled project, which uses the N64: Recompiled project at its core, while providing the necessary scaffolding and wrappers to create a working copy of The Legend of Zelda: Majora’s Mask as output.

In the video below, [Modern Vintage Gamer] takes the software for a test drive and comes away very excited about the potential it has to completely change the state of N64 emulation. To be clear, this isn’t a one-button-press solution — it still requires capable developers to roll up their sleeves and get the plumbing in. It’s going to take some time before your favorite game is supported, but the idea of breathing new life into some of the best games from the 1990s and early 2000s certainly has us eager to see where this technology goes.

Thanks to [Keith Olson] for the tip.

Try Image Classification Running In Your Browser, Thanks to WebGPU

20 May 2024 at 11:00

When something does zero-shot image classification, that means it’s able to make judgments about the contents of an image without the user needing to train the system beforehand on what to look for. Watch it in action with this online demo, which uses WebGPU to implement CLIP (Contrastive Language–Image Pre-training) running in one’s browser, using the input from an attached camera.

By giving the program some natural language visual concept labels (such as ‘person’ or ‘cat’) that fit a hypothetical template for the image content, the system will output — in real-time — its judgement on the appropriateness of such labels to what the camera sees. Again, all of this runs locally.

It’s maybe a little bit unintuitive, but what’s happening in the demo is that the system is deciding which of the user-provided labels (“a photo of a cat” vs “a photo of a bald man”, for example) is most appropriate to what the camera sees. The more a particular label is judged a good fit for the image, the higher the number beside it.
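
The demo itself runs CLIP in the browser via WebGPU, but the same zero-shot scoring can be sketched on the desktop in Python with the Hugging Face transformers implementation of CLIP; treat this as a rough, hypothetical equivalent for experimentation, with the model name and image path below only placeholders:

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a cat", "a photo of a bald man"]
image = Image.open("camera_frame.jpg")  # placeholder for a captured camera frame

# Score every label against the image and normalize the scores into probabilities.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")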

This kind of process benefits greatly from shoveling the hard parts of the computation onto compatible graphics cards, which is exactly what WebGPU provides by allowing the browser access to a local GPU. WebGPU is relatively recent, but we’ve already seen it used to run LLMs (Large Language Models) directly in the browser.

Wondering what makes GPUs so very useful for AI-type applications? It’s all about their ability to work with enormous amounts of data very quickly.

NetBSD Bans AI-Generated Code From Commits

By: Maya Posch
18 May 2024 at 08:00

A recent change was announced to the NetBSD commit guidelines which amends them to state that code generated by Large Language Models (LLMs) or similar technologies, such as ChatGPT, Microsoft’s Copilot, or Meta’s Code Llama, is presumed to be tainted code. This amendment extends the existing section about tainted code, which originally referred to any code that was not written directly by the person committing it, and which exists due to licensing concerns. The obvious reason behind this is that otherwise code may be copied into the NetBSD codebase which is licensed under an incompatible (or proprietary) license.

In the case of LLM-based code generators like the above-mentioned, the problem stems from the fact that they are trained on millions of lines of code from all over the internet, which are naturally released under a wide variety of licenses. Invariably, some of that code will be covered by a license that’s not acceptable for the NetBSD codebase. Although the guideline mentions that these auto-generated code commits may still be admissible, they require written permission from core developers, and presumably an in-depth audit of the code’s heritage. This should leave non-trivial commits that got churned out by ChatGPT and kin out in the cold.

The debate about the validity of works produced by current-gen “artificial intelligence” software is only just beginning, but there’s little question that NetBSD has made the right call here. From a legal and software engineering perspective this policy makes perfect sense, as LLM-generated code simply doesn’t meet the project’s standards. That said, code produced by humans brings with it a whole different set of potential problems.

Winamp Source Code Will be Opened Up, Company Says

By: Maya Posch
18 May 2024 at 02:00

Recently the company currently in charge of the Winamp media player – formerly Radionomy, now Llama Group – announced that it will be making the source code of the player ‘available to developers’. Although the peanut gallery immediately seems to have jumped to the conclusion that the source will be made available to all on the announced 24 September 2024 date, reading between the lines of the press release gives a different impression.

First there is the sign-up form for ‘FreeLlama’ where interested developers can sign up, with a strong suggestion that only vetted developers will be able to look at the code, possibly under non-disclosure agreements. It seems appropriate to be skeptical, considering Winamp’s rocky history since AOL divested itself of it in 2013 with version 5.666, and its new owner Radionomy not doing much development on the software except for adding NFT and crypto/blockchain features in 2022. The subsequent Winamp online service doubled down on this.

Naturally it would be great to see Winamp become a flourishing OSS project for the two dozen of us who still use Winamp on a daily basis, but the proof will be in the non-NFT pudding, as the saying goes.

The Minimalistic Dillo Web Browser Is Back

By: Maya Posch
12 May 2024 at 02:00

Over the decades web browsers have changed from the fairly lightweight and nimble HTML document viewers of the 1990s to today’s top-heavy browsers that struggle to run on a system with less than a quad-core, multi-GHz CPU and gigabytes of RAM. All but a few, that is.

Dillo is one of a small number of browsers that requires only a minimum of system resources and will happily run on an Intel 486 or thereabouts. Sadly, the project more or less ended back in 2016 when the rendering engine’s developer passed away, but with the recent 3.10 release the project seems to be back on track, courtesy of efforts by [Rodrigo Arias Mallo].

Although a number of forks were started after the Dillo project ground to a halt, of these only Dillo+ appears to be active at this point in time, making this project revival a welcome boost, as is its porting to Atari systems. As for Dillo’s feature set, it boasts support for a range of protocols, including Gopher, HTTP(S), Gemini, and FTP via extensions. It supports HTML 4.01 and some HTML 5, along with CSS 2.1 and some CSS 3 features, and of course no JavaScript.

On today’s JS-crazed web this means access can be somewhat limited, but maybe it will prompt websites to offer a no-JS fallback for the Dillo users. The source code and releases can be obtained from the GitHub project page, with contributions to the project gratefully accepted.

Thanks to [Prof. Dr. Feinfinger] for the tip.

Giving Your KiCad PCB Repository Pretty Pictures

5 May 2024 at 02:00
[Image: the GitHub Marketplace listing for the action, describing the extension]

Publishing your boards on GitHub or GitLab is a must, and leads to wonderful outcomes in the hacker world. On their own, however, your board files might make the repo look a bit barren; having a picture or two in the README is best. Making them yourself takes time – what if they could be generated automatically? Enter [kicad-render], a GitHub & GitLab integration for rendering your KiCad projects by [linalinn].

This integration generates pictures of your board, top and bottom views, on every push to the repo – just embed two image links in your README.md. It is made possible by a new option in KiCad 8’s kicad-cli – board image generation – and [linalinn]’s code, which makes KiCad run on GitHub/GitLab servers.

For even more bling, you can enable an option to generate a GIF that rotates your board, in the style of that one [arturo182] demo – in fact, this integration’s GIF code was borrowed from that script! Got a repository with many boards in one? There’s an option you could make work for yourself, too.

All you need to do is to follow a couple of simple steps; [linalinn] has documented both the GitHub and GitLab integration. We’ve recently talked about KiCad integrations in more detail, if you’re wondering what else your repository could be doing!

Don’t Object to Python Objects

3 May 2024 at 02:00

There’s the old joke about 10 kinds of programmers, but the truth is when it comes to programming, there are often people who make tools and people who use tools. The Arduino system is a good example of this. Most people use it like a C compiler. However, it really uses C++, and if you want to provide “things” to the tool users, you need to create objects. For example, when you put Serial in a program, you use an object someone else wrote. Python — and things like Micropython — have the same kind of division. Python started as a scripting language, but it has added object features, allowing a rich set of tools for scripters to use. [Damilola Oladele] shows the ins and outs of object-oriented Python in a recent post.

Like other languages, Python allows you to organize functions and data into classes and then create instances that belong to that class. Class hierarchies are handy for reusing code, customizing behavior, and — through polymorphism — building device driver-like architectures.

For example, you might build a class for temperature sensors and then create specialized subclasses for different specific sensors. The code to convert the sensor reading to degrees would live in each subclass. However, common code, such as getting an average of several samples, could be used in the main class. Even more importantly, any part of your code that needs a temperature sensor will just deal with the main class and won’t care what kind of sensor is actually in use except, of course, when you instantiate the sensor.
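
As a rough sketch of that pattern (the class and method names here are invented for illustration, not taken from the linked article), the shared averaging logic lives in the base class while each subclass supplies its own conversion:

class TemperatureSensor:
    """Base class: anything that can report a temperature in degrees Celsius."""

    def read_raw(self):
        raise NotImplementedError  # each subclass reads its own hardware

    def to_celsius(self, raw):
        raise NotImplementedError  # conversion is sensor-specific

    def average_celsius(self, samples=8):
        # Common code shared by every sensor: average several readings.
        readings = [self.to_celsius(self.read_raw()) for _ in range(samples)]
        return sum(readings) / len(readings)


class LM35Sensor(TemperatureSensor):
    """Example subclass for an analog LM35 read through an ADC (10 mV per degree C)."""

    def __init__(self, adc_read, vref=5.0, resolution=1024):
        self._adc_read = adc_read  # callable returning a raw ADC count
        self._vref = vref
        self._resolution = resolution

    def read_raw(self):
        return self._adc_read()

    def to_celsius(self, raw):
        volts = raw * self._vref / self._resolution
        return volts * 100.0

Code elsewhere only ever talks to TemperatureSensor, so swapping in a different subclass is a one-line change where the sensor is instantiated.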

Python’s implementation of object orientation does have a few quirks. For example, if you create a class variable, it can be read from a subclass without specifying scope like you’d expect. But if you try to write to it from a subclass, you create a new variable for that particular subclass, which then hides the parent class version.
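
A tiny example of that quirk, with names invented for illustration:

class Parent:
    count = 0            # class variable defined on the parent

class Child(Parent):
    pass

print(Child.count)       # 0 -- the read falls through to Parent
Child.count = 5          # assignment creates a new variable on Child ...
print(Parent.count)      # ... so Parent.count is still 0
print(Child.count)       # 5 -- the new variable hides the parent's version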

Still, objects can make your tools and libraries much more reusable, and Python makes it relatively easy compared to some other languages. If you want to see how objects can improve common constructs like state tables, you’ll have to read about a different language. If you want to see an admittedly hairy Python example, check out VectorOS, the operating system for the 2023 Hackaday Supercon badge.

Train a GPT-2 LLM, Using Only Pure C Code

28 April 2024 at 08:00

[Andrej Karpathy] recently released llm.c, a project that focuses on LLM training in pure C, once again showing that working with these tools isn’t necessarily reliant on sprawling development environments. GPT-2 may be older but is perfectly relevant, being the granddaddy of modern LLMs (large language models) with a clear lineage leading to more modern offerings.

LLMs are fantastically good at communicating despite not actually knowing what they are saying, and training them usually relies on the PyTorch deep learning library, itself written in Python. llm.c takes a simpler approach by implementing the neural network training algorithm for GPT-2 directly. The result is highly focused and surprisingly short: about a thousand lines of C in a single file. It is a highly elegant process that does the same thing the bigger, clunkier methods accomplish. It can run entirely on a CPU, or it can take advantage of GPU acceleration, where available.

This isn’t the first time [Andrej Karpathy] has bent his considerable skills and understanding towards boiling down these sorts of concepts into bare-bones implementations. We previously covered a project of his that is the “hello world” of GPT, a tiny model that predicts the next bit in a given sequence and offers low-level insight into just how GPT (generative pre-trained transformer) models work.

The Performance Impact of C++’s `final` Keyword for Optimization

By: Maya Posch
25 April 2024 at 02:00

In the world of software development the term ‘optimization’ is generally a reason for experienced developers to start feeling decidedly nervous, especially when a feature is presented as an ‘easy and free optimization’. The final keyword introduced in C++11 is one such feature. It promises a way to speed up object-oriented code by omitting the vtable call indirection when a class or member function is marked as – unsurprisingly – final, meaning that it cannot be inherited from or overridden. Inspired by this promise, [Benjamin Summerton] figured that he’d run a range of benchmarks to see what performance uplift he’d get on his ray tracing project.

To be as thorough as possible, the tests were run on three different systems, including 64-bit Intel and AMD systems, as well as on Apple Silicon (M1). For the compilers, various versions of GCC (12.x, 13.x) as well as Clang (15, 17) and MSVC (17) were employed, with rather interesting results for the final versus no-final tests. Clang was probably the biggest surprise: with the keyword added, the performance of Clang-generated code absolutely tanked. MSVC was a mixed bag, as were the GCC versions other than GCC 13.2 on AMD Ryzen, which saw a bump of a few percent.

Ultimately, it seems that there’s no free lunch as usual, and adding final to your code falls distinctly under ‘only use it if you know what you’re doing’. As things stand, the resulting behavior seems wildly inconsistent.

FLOSS Weekly Episode 780: Zoneminder — Better Call Randal

23 April 2024 at 23:00

This week Jonathan Bennett and Aaron Newcomb chat with Isaac Connor about Zoneminder! That’s the project that’s working to store and deliver all the bits from security cameras — but the CCTV world has changed a lot since Zoneminder first started, over 20 years ago. The project is working hard to keep up, with machine learning object detection, WebRTC, and more. Isaac talks a bit about developer burnout, and a case or two over the years where an aggressive contributor seems suspicious in retrospect. And when is the next stable version of Zoneminder coming out, anyway?

Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Next week we’re taping the show on Tuesday, and looking for a guest!

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Places to follow the FLOSS Weekly Podcast:

Programming Ada: First Steps on the Desktop

By: Maya Posch
23 April 2024 at 14:00

Who doesn’t want to use a programming language that is designed to be reliable, straightforward to learn and also happens to be certified for everything from avionics to rockets and ICBMs? Despite Ada’s strong roots and impressive legacy, it has the reputation among the average hobbyist of being ‘complicated’ and ‘obscure’, yet this couldn’t be further from the truth, as previously explained. In fact, anyone who has some or even no programming experience can learn Ada, as the very premise of Ada is that it removes complexity and ambiguity from programming.

In this first part of a series, we will be looking at getting up and running with a basic desktop development environment on Windows and Linux, and we will run through some Ada code to get familiar with the basic principles of Ada syntax. As for the Ada version used, we will be targeting Ada 2012, as the newer Ada 2022 standard was only just approved in 2023 and doesn’t change anything significant for our purposes.

Toolchain Things

The go-to Ada toolchain for those who aren’t into shelling out big amounts of money for proprietary, certified and very expensive Ada toolchains is GNAT, which at one point in time stood for the GNU NYU Ada Translator. This was the result of the United States Air Force awarding the New York University (NYU) a contract in 1992 for a free Ada compiler. The result of this was the GNAT toolchain, which per the stipulations in the contract would be licensed under the GNU GPL and its copyright assigned to the Free Software Foundation. The commercially supported (by AdaCore) version of GNAT is called GNAT Pro.

Obtaining a copy of GNAT is very easy if you’re on a common Linux distro, with the package gnat for Debian-based distros and gcc-ada if you’re Arch-based. For Windows you can either download the AdaCore GNAT Community Edition, or if you use MSYS2, you can use its package manager to install the mingw-w64-ucrt-x86_64-gcc-ada package for e.g. the new ucrt64 environment. My personal preference on Windows is the MSYS2 method, as this also provides a Unix-style shell and tools, making cross-platform development that much easier. This is also the environment that will be assumed throughout the article.

Hello Ada

The most important part of any application is its entry point, as this determines where the execution starts. Most languages have some kind of fixed name for this, such as main, but in Ada you are free to name the entry point whatever you want, e.g.:

with Ada.Text_IO;
procedure Greet is
begin
    -- Print "Hello, World!" to the screen
    Ada.Text_IO.Put_Line ("Hello, World!");
end Greet;

Here the entry point is the Greet procedure, because it’s the only procedure or function in the code. The difference between a procedure and a function is that only the latter returns a value, while the former returns nothing (similar to void in C and C++). Comments start with two dashes, and packages are imported using the with statement. In this case we want the Ada.Text_IO package, as it contains the standard output routines like Put_Line. Note that since Ada is case-insensitive, we can type all of those names in lower-case as well.

Also noticeable might be the avoidance of any symbols where an English word can be used, such as the use of is, begin and end rather than curly brackets. When closing a block with end, this is post-fixed with the name of the function or procedure, or the control structure that’s being closed (e.g. an if/else block or loop). This will be expanded upon later in the series. Finally, much like in C and C++ lines end with a semicolon.

For a reference of the syntax and much more, AdaCore has an online reference as well as a number of freely downloadable books, which include a comparison with Java and C++. The Ada Language Reference Manual (LRM) is also freely available.

Compile And Run

To compile the simple sample code above, we need to get it into a source file, which we’ll call greet.adb. The standard extensions with the GNAT toolchain are .adb for the implementation (body) and .ads for the specification (somewhat like a C++ header file). It’s good practice to use the unit name (the main package or entry point name) as the file name. Things will still work if they don’t match, but you may get a warning depending on the toolchain configuration.

Unlike in C and C++, Ada code isn’t just compiled and linked, but also has an intermediate binding step, because the toolchain fully determines the packages, dependencies, and other elements within the project before assembling the compiled code into a binary.

An important factor here is also that Ada does not use a preprocessor, and specification files aren’t copied into the file which references them with a with statement; the compiler merely takes note of the dependency during compilation. A nice benefit of this is that include guards are not necessary, and linking headaches such as the link order of objects and libraries are virtually eliminated. This does, however, come at the cost of dealing with the binder.

Although GNAT comes with individual tools for each of these steps, the gnatmake tool allows the developer to handle all of them in one go. Although some prefer to use the AdaCore-developed gprbuild, we will not be using it here as it adds complexity that is rarely helpful. To use gnatmake to compile the example code, we use a Makefile which produces the following output:

mkdir -p bin
mkdir -p obj
gnatmake -o bin/hello_world greet.adb -D obj/
gcc -c -o obj\greet.o greet.adb
gnatbind -aOobj -x obj\greet.ali
gnatlink obj\greet.ali -o bin/hello_world.exe

Although we just called gnatmake, the compilation, binding and linking steps were all executed subsequently, resulting in our extremely sophisticated Hello World application.

For reference, the Makefile used with the example is the following:

GNATMAKE = gnatmake
MAKEDIR = mkdir -p
RM = rm -f

BIN_OUTPUT := hello_world
ADAFLAGS := -D obj/

SOURCES := greet.adb

all: makedir build

build:
	$(GNATMAKE) -o bin/$(BIN_OUTPUT) $(SOURCES) $(ADAFLAGS)
	
makedir:
	$(MAKEDIR) bin
	$(MAKEDIR) obj

clean:
	rm -rf obj/
	rm -rf bin/
	
.PHONY: all build makedir clean

Next Steps

Great, so now you have a working development environment for Ada with which you can build and run any code that you write. Naturally, the topic of code editors and IDEs is one can of flamewar that I won’t be cracking open here. As mentioned in my 2019 article, you can use AdaCore’s GNAT Programming Studio (GPS) for an integrated development environment experience, if that is your jam.

My own development environment is a loose constellation of Notepad++ on Windows, and Vim on Windows and elsewhere, with Bash and similar shells the environment for running the Ada toolchain in. If there is enough interest I’d be more than happy to take a look at other development environments as well in upcoming articles, so feel free to sound off in the comments.

For the next article I’ll be taking a more in-depth look at what it takes to write an Ada application that actually does something useful, using the preparatory steps of this article.

Refraction

By: EasyWithAI
10 October 2023 at 15:39
Refraction is an AI-powered code generation tool for developers that refactors code, generates documentation, creates unit tests, and more. It uses AI to automatically generate code in over 50 programming languages including Python, HTML, JavaScript, C++, and more. Developers simply paste their code into Refraction, select the task they want to automate, and hit “Generate” […]

Source
