
Using Integer Addition to Approximate Float Multiplication

By: Maya Posch
11 April 2025 at 02:00

Once the domain of esoteric scientific and business computing, floating point calculations are now practically everywhere. From video games to large language models and kin, it would seem that a processor without floating point capabilities is pretty much a brick at this point. Yet the truth is that integer-based approximations can be good enough to hit the required accuracy. Take, for example, approximating floating point multiplication with integer addition, which [Malte Skarupke] recently had a poke at, based on the integer-addition-only LLM approach suggested by [Hongyin Luo] and [Wei Sun].

As for the way this works, it does pretty much what it says on the tin: add the two floating point inputs as integer values, then adjust the exponent. This adjustment factor is what gets you close to the answer, but as the article and its comments illustrate, there are plenty of issues and edge cases to concern yourself with. These include under- and overflow, but also specific floating point inputs.
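
To make the trick concrete, here is a minimal sketch in C (ours, not [Skarupke]'s exact code, and with no handling of signs, zero, infinities, or NaN). For positive normal IEEE-754 floats the bit pattern approximates a scaled-and-offset log2, so adding bit patterns adds logarithms, which multiplies the values; subtracting 0x3F800000 removes the doubled exponent bias:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Approximate a * b for positive, normal floats by integer addition. */
static float approx_mul(float a, float b)
{
    uint32_t ia, ib, ic;
    memcpy(&ia, &a, sizeof ia);   /* bit-cast without undefined behavior */
    memcpy(&ib, &b, sizeof ib);
    ic = ia + ib - 0x3F800000u;   /* add the "logs", drop one exponent bias */
    float c;
    memcpy(&c, &ic, sizeof c);
    return c;
}

int main(void)
{
    /* Prints 14.000000 against an exact 15.000000: within ~7.5%. */
    printf("%f (exact %f)\n", approx_mul(3.0f, 5.0f), 3.0f * 5.0f);
    return 0;
}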

Unlike in scientific calculations, where even minor inaccuracies tend to propagate and cause much larger errors down the line, graphics and LLMs do not care that much about floating point precision, so the ~7.5% accuracy of the integer approach is good enough. The question is whether it's truly more efficient, as the paper suggests, rather than a fallback as seen with e.g. integer-only audio decoders for platforms without an FPU.

Since one of the nice things about FP-focused vector processors like GPUs and derivatives (tensor, 'neural', etc.) is that they can churn through a lot of data quite efficiently, expecting (energy) improvements from shifting this work onto the ALU of a CPU seems quite optimistic.

Ask Hackaday: Vibe Coding

By: Jenny List
10 April 2025 at 02:00

Vibe coding is the buzzword of the moment. What is it? The practice of writing software by describing the problem to an AI large language model and using the code it generates. It’s not quite as simple as just letting the AI do your work for you because the developer is supposed to spend time honing and testing the result, and its proponents claim it gives a much more interactive and less tedious coding experience. Here at Hackaday, we are pleased to see the rest of the world catch up, because back in 2023, we were the first mainstream hardware hacking news website to embrace it, to deal with a breakfast-related emergency.

Jokes aside, though, the fad for vibe coding is something which should be taken seriously, because it's seemingly being used in enough places that vibe-coded software will inevitably affect our lives. So here's the Ask Hackaday: is this a clever and useful tool for making better software more quickly, or a dangerous tool for creating software nobody quite understands, containing bugs which could cause a disaster?

Our approach to writing software has always been one of incrementally building something from the ground up that satisfies the need. Readers will know that feeling of being in touch with how a project works at all levels, with a nose for immediately diagnosing any problems that might occur. If an AI writes the code for us, the feeling is that we might lose that connection, and inevitably this will lead to less experienced coders quickly getting out of their depth. Is this pessimism, or the grizzled voice of experience? We'd love to know your views in the comments. Are our new AI overlords the new senior developers? Or are they the worst summer interns ever?

One Book to Boot Them All

2 April 2025 at 23:00
Mockup of a printed copy of the Little OS Book

Somewhere in the universe, there's a book that documents building an x86 operating system from scratch. Not just a bootloader, or a kernel stub, but a fully functional, interrupt-handling, multitasking-capable OS. [Erik Helin and Adam Renberg] did just that by documenting every step in The Little Book About OS Development.

This is not your typical dry academic textbook. It’s a hands-on, step-by-step guide aimed at hackers, tinkerers, and developers who want to demystify kernel programming. The book walks you through setting up your environment, bootstrapping your OS, handling interrupts, implementing virtual memory, and even tackling system calls and multitasking. It provides just enough detail to get you started but leaves room for exploration – because, let’s be honest, half the fun is in figuring things out yourself.
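As a taste of that hands-on style, here is a minimal sketch (ours, not a listing from the book) of the kind of C kernel entry point such a guide builds toward, assuming a bootloader and a small assembly stub have already put the CPU in 32-bit protected mode and set up a stack:

/* kmain.c - hypothetical minimal kernel entry point. */
void kmain(void)
{
    /* The VGA text framebuffer lives at physical address 0xB8000:
       pairs of (character, attribute) bytes in an 80x25 grid. */
    volatile char *fb = (volatile char *)0xB8000;
    fb[0] = 'K';    /* the character to display */
    fb[1] = 0x07;   /* attribute: light grey on black */
    for (;;) { }    /* nothing to return to, so hang */
}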

Completeness and structure are two things that make this book stand out. Other OS dev guides may give you snippets and leave you to assemble the puzzle yourself; this book documents the entire process, including common pitfalls. If you've ever been lost in the weeds of segmentation, paging, or serial I/O, this is the map you need. You can read it online or fetch it as a single 75-page PDF.

Mockup photo source: Matthieu Dixte

Golang On The PS2

By: Lewin Day
1 April 2025 at 05:00

A great many PlayStation 2 games were coded in C++, and there are homebrew SDKs that let you work in C. However, precious little software for the platform was ever created in Golang. [Ricardo] decided this wouldn’t do, and set about making the language work with Sony’s best-selling console of all time. 

Why program a PS2 in Go? Well, it can be easier to work with than some other languages, but also, there’s just value in experimenting in this regard. These days, Go is mostly just used on traditional computery platforms, but [Ricardo] is taking it into new lands with this project.

One of the challenges in getting Go to run on the PS2 is that the language was really built to live under a full operating system, which the PS2 doesn't really have. [Ricardo] got around this by using TinyGo, which is designed for compiling Go on simpler embedded platforms. It basically takes Go code, turns it into an intermediate representation, then compiles a binary suitable for the PS2's Emotion Engine (a MIPS-based CPU).

The specifics of getting it all to work are quite interesting if you fancy challenges like these. [Ricardo] was even able to get to an effective Hello World point and beyond. There's still lots to do, and no real graphical fun yet, but the project has already passed several key milestones. It reminds us of when we saw Java running on the N64. Meanwhile, if you're working to get LOLCODE running on the 3DO, don't hesitate to let us know!

Help Propel The Original ARM OS Into The Future

By: Jenny List
30 March 2025 at 20:00

We use ARM devices in everything from our microcontroller projects to our laptops, and many of us are aware of the architecture's humble beginnings in a 1980s Acorn Archimedes computer. ARM processors are not the only survivor from the Archimedes, though; its operating system has made it through the decades as well.

RISC OS is a general purpose desktop operating system for ARM platforms that remains useful in 2025, as well as extremely accessible due to a Raspberry Pi port. No software can stand still though, and if RISC OS is to remain relevant it must move with the times. Thus RISC OS Open, the company behind its development, have launched what they call a Moonshots Initiative, moving the OS away from incremental development towards much bolder steps. This is necessary in order for it to support the next generation of ARM architectures.

We like RISC OS here at Hackaday and have kept up to date with its recent developments, but even we as fans can see that it is in part a little dated. From the point of view of RISC OS Open though, they identify support for 64-bit platforms as their highest priority, and to that end they’re looking for developers, funding partners, and community advocates. If that’s you, get in touch with them!

ReactOS 0.4.15 Released With Major Improvements

By: Maya Posch
25 March 2025 at 11:00

Recently the ReactOS project released the much anticipated 0.4.15 update, making it the first major release since 2020. Despite what might seem like a minor version bump from the previous 0.4.14 release, the update introduces sweeping changes to everything from the kernel to the user interface and aspects like the audio system and driver support. Those who have used the nightly builds over the past years will likely have noticed a lot of these changes already.

Japanese input with MZ-IME and CJK font (Credit: ReactOS project)

A notable change is to plug-and-play support, which now enables more third-party drivers and booting from USB storage devices. The Microsoft FAT filesystem driver from the Windows Driver Kit can now be used courtesy of better compatibility, there is now registry healing, and caching and kernel access checks have been implemented. The latter compatibility improvements mean that many ReactOS modules can now work in Windows too.

On the UI side there is a much improved IME (input method editor) feature, along with native ZIP archive support and various graphical tweaks.

Meanwhile, since 0.4.15 branched off the master branch six months ago, the latter has seen even more features added, including SMP improvements, UEFI support, a new NTFS driver, and improvements to power management and application support. All of this is accompanied by many bug fixes, which makes it totally worth it to regularly check out the nightly builds.

C+P: Combining the Usefulness of C with the Excellence of Prolog

By: Maya Posch
14 March 2025 at 23:00

In a move that will absolutely not over-excite anyone, nor lead to any heated arguments, [needleful] posits that their C Plus Prolog (C+P for short) programming language is the best possible language ever. This is due to it combining the best of the only good programming language (Prolog) with the best of the only useful programming language (C). Although the resulting mash-up syntax may trigger Objective-C flashbacks, it's actually valid SWI-Prolog that is subsequently converted to C for compilation.

Language flamewars aside, the motivation for C+P as explained in the project's README was mostly to explore macros in a systems programming language. More specifically, by implementing a language-within-a-language you can add just about any compile-time feature you want, including – as demonstrated in C+P – a form of generics. Even as a way to have a bit of fun, C+P comes dangerously close to being a functional prototype. Its main flaw is probably the lack of validation and error messages, which likely leads to broken C being generated.

Also mentioned are the Nim and Haxe languages, which can be compiled (transpiled) to C or C++ – somewhat similar in idea to C+P – as well as cmacro (based on Common Lisp) and the D language.

How To Use LLMs For Programming Tasks

11 March 2025 at 23:00

[Simon Willison] has put together a list of how, exactly, one goes about using large language models (LLMs) to help write code. If you have wondered just what the workflow and techniques look like, give it a read. It's full of examples, strategies, and useful tips for effectively using AI assistants like ChatGPT, Claude, and others to do useful programming work.

It's a very practical document, with [Simon] emphasizing realistic expectations and the importance of managing context – both in the sense of giving the LLM direction and in the sense of being mindful of how much it can fit in its 'head' at once. It is useful to picture an LLM as a capable and obedient but over-confident programming intern or assistant, albeit one that never gets bored or annoyed. Useful work can be done, but testing is crucial and human oversight simply cannot be automated away.

Even if one has no interest in using LLMs to help in writing production code, there’s still a lot of useful work they can do to speed up the process of software development in general, especially when learning. They can help research options, interactively explore unfamiliar codebases, or prototype ideas quickly. [Simon] provides useful strategies for all these, and more.

If you have wondered how exactly glorified chatbots can meaningfully help with software development, [Simon]'s writeup hopefully gives you some new ideas. And if this is all leaving you curious about how exactly LLMs work, in the time it takes to enjoy a warm coffee you can learn how they do what they do, no math required.

TrapC: A C Extension For the Memory Safety Boogeyman

By: Maya Posch
11 March 2025 at 14:00

In the world of programming languages it often feels like being stuck in a Groundhog Day-esque loop through purgatory, as effectively the same problems are being solved over and over, with previous solutions forgotten, and always that one jubilant inventor stumbling out of a darkened basement with the One True Solution™ to everything that plagues this world beset by the Unspeakable Horror that is the C programming language.

As the latest entry to pledge its fealty at the altar of the Church of the Holy Memory Safety, TrapC promises to fix C, while also lambasting Rust for allowing that terrible unsafe keyword. Of course, since this is yet another loop through purgatory, the entire idea that the problem is C and some perceived issue with this nebulous ‘memory safety’ is still a red herring, as pointed out previously.

In other words, it's time for a fun trip back to the 1970s, when many of the same arguments were already being rehashed, before the early 1980s saw the Steelman language requirements condensed by renowned experts into the Ada programming language. As it turns out, memory safety is a minuscule part of a well-written program.

It’s A Trap

Pretty much the entire raison d'être for new programming languages like TrapC, Rust, Zig, and kin is this fixation on 'memory safety', with the idea being that the problem with C is that it doesn't check memory boundaries and allows usage of memory addresses in ways that can lead to Bad Things. Which is not to say that such events aren't bad, but because they are so obvious, they are also very easy to detect using both static and dynamic analysis tools.

As a ‘proposed C-language extension’, TrapC would add:

  • memory-safe pointers.
  • constructors & destructors.
  • the trap and alias keywords.
  • Run-Time Type Information.

It would also remove:

  • the goto and union keywords.

The author, [Robin Rowe], freely admits to this extension being 'C++ like', which takes us right back to 1979, when a then-young Danish computer scientist ([Bjarne Stroustrup]) created a C-language extension cheekily called 'C++' to denote it as enhanced C. C++ adds many features from Simula, a language considered to be the first object-oriented (OO) programming language and an indirect descendant of ALGOL. These OO features include constructors and destructors. Together with (optional) smart pointers and the bounds-checked strings and containers from the Standard Template Library (STL), C++ is thus memory safe.

So what is the point of removing keywords like goto and union? The former is pretty much the most controversial keyword in the history of programming languages, even though it derives essentially directly from jumps in assembly language. The Ada programming language also has the goto keyword, where it is often used to provide flexibility where restrictive language choices would otherwise lead to convoluted loop constructs – some C-isms, like the continue keyword, do not exist in Ada at all.

The union keyword is similarly removed in TrapC, with the justification that both keywords are 'unsafe' and 'widely deprecated' – which makes one wonder how much real-life C and C++ code has been analyzed to come to this conclusion. In embedded and driver programming in particular, with their low-level memory (and register) access, union is widely used for the flexibility it offers.
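
As an illustration of the kind of code this would outlaw, here is a sketch of a common embedded idiom: a union overlaying a peripheral register's raw 32-bit value with its bit fields. The register layout here is invented purely for illustration:

#include <stdint.h>

typedef union {
    uint32_t raw;              /* whole-register access */
    struct {
        uint32_t enable : 1;   /* bit 0: peripheral enable */
        uint32_t mode   : 3;   /* bits 1-3: operating mode */
        uint32_t clkdiv : 8;   /* bits 4-11: clock divider */
        uint32_t        : 20;  /* reserved */
    } bits;
} ctrl_reg_t;

void configure(volatile ctrl_reg_t *reg)
{
    reg->bits.clkdiv = 42;     /* field-level access... */
    reg->raw |= 1u;            /* ...or raw access to the same memory */
}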

Of course, if you're doing low-level memory access you're also free to use whatever pointer offsets and type casting you require, together with very unsafe, but efficient, memcpy() and similar operations. There is a reason why C++ doesn't forbid low-level access without guardrails: sometimes it's necessary, and you're expected to know what you're doing. This freedom in choosing between strict memory safety and the untamed wilds of C is a deliberate design choice in C++. In embedded programming you tend to compile C++ with both RTTI and exceptions disabled as well, due to the overhead they bring.

Don’t Call It C++

Effectively, TrapC adds RTTI, exceptions (or 'traps'), OO classes, safe pointers, and similar C++ features to C, which raises the question of how it's any different – especially since the whitepaper touts TrapC and C++ code usually looking the same as a feature. Here the language seems to regard itself as a 'better C++', mostly in terms of exception handling and templates, using 'traps' and 'castplates'. Curiously there's not much focus on 'resource acquisition is initialization' (RAII), which is such a cornerstone of C++.

Meanwhile, castplates are advertised as a way to make C containers 'typesafe', but unlike C++ templates they are created implicitly, using RTTI and what one might argue is somewhat opaque (C++ template-like) syntax – and few people would argue that C++ template code is easy to read. As noted earlier, embedded projects tend to compile C++ with both RTTI and exceptions disabled due to their overhead; TrapC's extensive reliance on RTTI would seem to preclude such an option.

Circling back to the other added keyword, alias: this is TrapC's way of providing function overloading, and it works like a C preprocessor #define:

void puts(void* x) alias printf("{}\n", x);

Then there is the new trap keyword that’s apparently important enough to be captured in the extension’s name. These are offered as an alternative to C++ exceptions, but the description is rather confusing, other than that it’s supposedly less complicated and does not support cascading exceptions up the stack. Here I do not personally see much value either way, as like so many C++ developers I loathe C++ exceptions with the fire of a thousand Suns and do my utmost to avoid them.

My favorite approach here is found in Ada, which not only cleanly separates functions and procedures, but also requires, at compile time, that any return value from a function is handled, and implements exceptions in a way that is both light-weight and very informative – as I found, for example, while extensively using the Ada array type in the context of a lock-free ring buffer. During testing there were zero crashes, just the program bailing out with an exception due to a faulty offset into the array and listing the exact location and cause, as in Ada everything is bounds-checked by default.

Memory Safety

Much of the safety in TrapC would come from managed pointers, with its author describing TrapC’s memory management as ‘automatic’ in a recent presentation at an ISO C meeting. Pointers are lifetime-managed, but as the whitepaper states, the exact method used is ‘implementation defined’, instead of reference counting as in the C++ specification.

Yet none of this matters in the context of actual security issues. As I noted in 2024, the 'red herring' part refers to the real-life security issues that are captured in CVEs and their exploitation. Virtually all of the worst CVEs involve a lack of input validation, which allows users to access data in 'restricted' folders and gain access to databases and other resources. None of these involve memory safety in any way or form, and thus the onus lies on preventing logic errors, ensuring solid input validation, and keeping lazy or inattentive programmers from introducing the next world-famous CVE.

As a long-time C & C++ programmer, I have come to ‘love’ the warts in these languages as well as the lack of guardrails for the freedom they provide. Meanwhile I have learned to write test cases and harnesses to strap my code into for QA sessions, because the best way to validate code is by stressing it. Along the way I have found myself incredibly fond of Ada, as its focus on preventing ambiguity and logic errors is self-evident and regularly keeps me from making inattentive mistakes. Mistakes that in C++ would show up in the next test and/or Valgrind cycle followed by a facepalm moment and recompile, yet somehow programming in Ada doesn’t feel more restrictive than writing in C++.

Thus I’ll keep postulating that the issues with C were already solved in 1983 with the introduction of Ada, and accepting this fact is the only way out of this endless Groundhog Day purgatory.

Writing an OLED Display Driver in MicroZig

By: Maya Posch
9 March 2025 at 00:00

Although most people would use C, C++ or MicroPython for programming microcontrollers, there are a few more obscure options out there as well, with MicroZig being one of them. Recently [Andrew Conlin] wrote about how to use MicroZig with the Raspberry Pi RP2040 MCU, showing the process of writing an SSD1306 OLED display driver and running it. Although MicroZig has since gained a built-in SSD1306 driver, the blog post gives a good impression of what developing with MicroZig is like.

Zig is a programming language which seeks to improve on the C language, adding memory safety and safe pointers (via optional types) while keeping as much as possible of what makes C so useful for low-level development intact. The MicroZig project customizes Zig for use in embedded projects, targeting platforms including the Raspberry Pi MCUs and STM32. During [Andrew]'s usage of MicroZig it was less the language or supplied tooling that tripped him up, and more just the convoluted initialization of the SSD1306 controller, which is probably a good sign. The resulting project code can be found on his GitHub page.

A MicroPython Interpreter For Flipper Zero

3 March 2025 at 12:00
Screenshot of the REPL running on the Flipper, importing the flipper API library and calling the infrared receive function from it with the help of autocomplete

Got a Flipper Zero? Ever wanted to use a high-level but powerful scripting language on it? Thanks to [Oliver] we now have a MicroPython application for the Flipper, complete with a library for hardware and software feature support. Load it up, start it up, connect over USB, and you've got the ever-so-convenient REPL at your disposal. Or, upload a Python script to your Flipper and run it directly from the Flipper's UI at your convenience!

In the API docs, we’re seeing support for every single primitive you could want – GPIO (including the headers at the top, of course), a healthy library for LCD and LCD backlight control, button handling, SD card support, speaker library for producing tones, ADC and PWM, vibromotor, logging, and even infrared transmit/receive support. Hopefully, we get support for Flipper’s wireless capabilities at some point, too!

Check out the code examples, get the latest release from the Flipper app portal or GitHub, load it up, and play! Mp-flipper has existed for the better part of a year now, so it's a pretty mature application, and it adds quite a bit to the Flipper's use cases in our world of hardware hacking. Want to develop an app for the Flipper in Python or otherwise? Check out this small-screen UI design toolkit or this editor we've featured recently!

Learn Assembly the FFmpeg Way

24 February 2025 at 03:00

You want to learn assembly language. After all, understanding assembly unlocks the ability to understand what compilers are doing and it is especially important for time-critical code. But most tutorials are — well — boring. So you can print “Hello World” super fast. Who cares?

But decoding video data is something where assembly can really pay off, so why not study a real project like FFmpeg to see how they do things? Sounds like a pain, but thanks to the FFmpeg asm-lessons repository, it’s actually quite accessible.

According to the repo, you should already understand C – especially C pointers. They also expect you to understand some basic mathematics. Most of the FFmpeg code that uses assembly uses single instruction, multiple data (SIMD) instructions. These allow you to do something like "add 5 to these 200 data items" very quickly, compared to looping 200 times.
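
For a feel of what SIMD buys you, here is a small sketch in C using AVX2 intrinsics (the FFmpeg lessons themselves teach hand-written assembly, not intrinsics), processing eight 32-bit integers per instruction:

/* Add 5 to 200 ints, eight at a time. Build with -mavx2 on x86-64. */
#include <immintrin.h>
#include <stdint.h>

void add5(int32_t data[200])
{
    const __m256i fives = _mm256_set1_epi32(5);
    for (int i = 0; i < 200; i += 8) {   /* 200 is a multiple of 8 */
        __m256i v = _mm256_loadu_si256((__m256i *)&data[i]);
        v = _mm256_add_epi32(v, fives);  /* eight additions at once */
        _mm256_storeu_si256((__m256i *)&data[i], v);
    }
}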

There are three lessons so far. Of course, some of the material is a little introductory, but they do jump quickly into SIMD, including upcoming instruction sets like AVX10 and older ones like MMX and AVX512. It is no surprise that FFmpeg needs to understand all these variations, since it runs on behalf of (their words) "billions of users."

We enjoyed their link to a simplified instruction list. Not to mention the visual organizer for SIMD instructions.

The course’s goal is to prepare developers to contribute to FFmpeg. If you are more interested in using FFmpeg, you might enjoy this browser-based GUI. Then again, not all video playback needs high performance.

KiCad 9 Moves Up In The Pro League

22 February 2025 at 21:00
Demonstration of the multichannel design feature: being able to put identical blocks into your design, route only one of them, and have all the other blocks' routing duplicated

Do you do PCB design for a living? Has KiCad been just a tiny bit insufficient for your lightning-fast board routing demands? We've just been graced with the KiCad 9 release (blog post, there's a FOSDEM talk too), and it brings features of the caliber you'd expect from a professional-level monthly-subscription PCB design suite.

Of course, KiCad 9 has delivered a ton of polish and features for all sorts of PCB design, so everyone will have some fun new additions to work with – but if you live and breathe PCB track routing, this release is especially for you.

One of the most flashy features is multichannel design – essentially, if you have multiple identical blocks on your PCB, say, audio amplifiers, you can now route it once and then replicate the routing in all other blocks; a stepping stone for design blocks, no doubt.

Other than that, there’s a heap of additions – assigning net rules in the schematic, dragging multiple tracks at once, selectively removing soldermask from tracks and tenting from vias, a zone fill manager, in/decrementing numbers in schematic signal names with mousewheel scroll, alternate function display toggle on symbol pins, improved layer selection for layer switches during routing, creepage and acute angle DRC, DRC marker visual improvements, editing pad and via stacks, improved third-party imports (specifically, Eagle and Altium schematics), and a heap of other similar pro-level features big and small.

Regular hackers get a load of improvements to enjoy, too. Ever wanted to add a table into your schematic? Now that’s doable out of the box. How about storing your fonts, 3D models, or datasheets directly inside your KiCad files? This, too, is now possible in KiCad. The promised Python API for the board editor is here, output job templates are here (think company-wide standardized export settings), there’s significantly more options for tweaking your 3D exports, dogbone editor for inner contour milling, big improvements to footprint positioning and moving, improvements to the command line interface (picture rendering in mainline!), and support for even more 3D export standards, including STL. Oh, add to that, export of silkscreen and soldermask into 3D models – finally!

Apart from that, there's, of course, a ton of bugfixes and small features, ~1500 new symbols, ~750 new footprints, and documentation that has been upgraded to match and beyond. KiCad 10 already has big plans, too – mostly engine and infrastructure improvements, making KiCad faster, smarter, and future-proof, becoming even more of an impressive software suite and a mainstay on the average hacker's machine.

For example, KiCad 10 will bring delay matching, Git schematic and PCB integrations, PNG plot exports, improved diffpair routers, autorouter previews, design import wizard, DRC and length calculation code refactoring, part height support, and a few dozen other things!

We love that KiCad updates yearly now. Every FOSDEM, we get an influx of cool new features into the stable KiCad tree. We’re also pretty glad about the ongoing consistent funding they get – may they get even more, in fact. We’ve been consistently seeing hackers stop paying for proprietary PCB software suites and switching to KiCad, and hopefully some of them have redirected that money into a donation towards their new favorite PCB design tool.

Join the pro club, switch to the now-stable KiCad 9! If you really enjoy it and benefit from it, donate, or even get some KiCad merch. Want to learn more about the new features? Check out the release blog post (many cool animations and videos there!) or the running thread on the KiCad forums describing the new features & fixes at length; or, if you're up for a video, check out the KiCad 9 release talk recording (29m48s) from this year's FOSDEM – it's worth a watch.

A New 8-bit CPU for C

21 February 2025 at 12:00

It is easy to port C compilers to architectures that look like old minicomputers or bigger CPUs. However, as the authors of the Small Device C Compiler (SDCC) found, pushing C into a typical 8-bit CPU is challenging. Lessons learned from SDCC inspired a new 8-bit architecture, F8. This isn't just a theoretical architecture: you can find an example Verilog implementation in the SDCC project and on GitHub. The name choice may turn out to be unfortunate, as there was an F8 CPU from Fairchild back in the 1970s that apparently few people remember.

In the video from FOSDEM 2025, [Phillip Krause] provides a nice overview of the how and why of F8. While it might seem odd to create a new 8-bit CPU when you can get bigger CPUs for pennies, you have to consider that 8-bit machines are more than enough for many jobs, and if you can squeeze one into an FPGA, it might be a good choice as opposed to having to get a bigger FPGA to hold your design and a 32-bit CPU.

Many 8-bit computers struggle with efficient C code, mainly because the data size is smaller than the width of a pointer. Doing things like adding two numbers takes more code, even in common situations. For example, suppose you have a pointer to an array where each element is four bytes wide. To find the address of the n'th element, you need to compute: element_n = base_address + (n * 4). An 8086, say, with its 16-bit pointers and many 16-bit instructions and addressing modes, can do that calculation very succinctly.
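
As a sketch of what that costs (function name ours, purely illustrative), consider what this single line of C expands into on a CPU whose registers are narrower than its pointers:

#include <stdint.h>

/* Fetch the n'th element of an array of 4-byte values. */
uint32_t nth(const uint32_t *base, uint8_t n)
{
    /* address = base + (uint16_t)n * 4: a single indexed load on a
       16-bit CPU, but a chain of byte-wide shifts and adds-with-carry
       on a typical 8-bit machine. */
    return base[n];
}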

Other problems you frequently run into with compiling code for small CPUs include segmented address spaces, dedicated registers for memory indexing, and difficulties putting wider items on a stack (or, for some very small CPUs, even having a stack, at all).

The wish list included stack-relative addressing, hardware 8-bit multiplication, and BCD support to help enable an efficient printf implementation.

Keep in mind, it isn't that you can't compile C for strange 8-bit architectures – SDCC is proof that you can. The question is how efficient the generated code is. F8 provides features that facilitate efficient binaries for C programs.

We've seen other modern 8-bit CPUs use SDCC. Writing C code for the notorious PIC (with its banked memory, lack of stack, and other hardships) was truly a surreal experience.

Homebrew CPU Gets a Beautiful Rotating Cube Demo

19 February 2025 at 21:00

[James Sharman] designed and built his own 8-bit computer from scratch using TTL logic chips, including a VGA adapter, and you can watch it run a glorious rotating cube demo in the video below.

The rotating cube is the product of roughly 3,500 lines of custom assembly code and looks fantastic, running at 30 frames per second with shading effects from multiple light sources. These are great results considering the computing power of his system is roughly on par with vintage 8-bit home computers and its graphics capabilities are limited. [James]'s computer uses a tile map instead of a frame buffer, so getting 3D content rendered was a challenge.

The video is about 20 seconds of demo followed by a detailed technical discussion on how exactly one implements everything required for a 3D cube, from basic math to optimization. If a deep dive into that sort of thing is up your alley, give it a watch!

We’ve featured [James]’ fascinating work on his homebrew computer before. Here’s more detail on his custom VGA adapter, and his best shot at making it (kinda) run DOOM.

Get Ready For KiCAD 9!

By: Jenny List
18 February 2025 at 06:00

Rev up your browsers, package managers, or whatever other tool you use to avail yourself of new software releases, because the KiCAD team have announced that, barring any major bugs being found in the next few hours, tomorrow should see the release of version 9 of the open source EDA suite. Who knows, depending on where you are in the world, that could have already happened by the time you read this.

Skimming through the long list of enhancements brought into this version, one thing strikes us: this is now a list of upgrades and tweaks to a stable piece of software, rather than essential features bringing a rough and ready package towards usability. There was a time when using KiCAD was a frustrating experience of many quirks and interface annoyances, but successive versions have improved it beyond measure. We would pass comment that we wish all open source software was as polished, but the fact is that much of the commercial software in this arena is not as good as this.

So head on over and kick the tires on this new KiCAD release, assuming that it passes those final checks. We look forward to the community’s verdict on it.

How Hard is it to Write a Calculator App?

16 February 2025 at 21:00

How hard can it be to write a simple four-function calculator program? After all, computers are good at math, and making a calculator isn’t exactly blazing a new trail, right? But [Chad Nauseam] will tell you that it is harder than you probably think. His post starts with a screenshot of the iOS calculator app with a mildly complex equation. The app’s answer is wrong. Android’s calculator does better on the same problem.

What follows is a bit of a history lesson and a bit of a math lesson combined. As you might realize, the inherent problem with computers and math isn't that they aren't good at it. Floating point numbers have finite precision, and this leads to problems, especially when you do operations that combine large and small numbers.
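
Here is a minimal C demonstration of that large-plus-small failure mode (ours, not from [Chad]'s post): a double carries roughly 16 significant decimal digits, so adding 1.0 to 1e16 simply rounds away.

#include <stdio.h>

int main(void)
{
    double big = 1.0e16;
    /* (big + 1.0) rounds back to big, so this prints 0, not 1. */
    printf("%g\n", (big + 1.0) - big);
    return 0;
}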

Indeed, any floating point representation has a bigger infinity of numbers that it can’t represent than those that it can. But the same is true of a calculator. Think about how many digits you are willing to type in, and how many digits you want out. All you want is for each of them to be correct, and that’s a much smaller set of numbers.

Google developer [Hans-J. Boehm] tackled this problem by turning to recursive real arithmetic (RRA). Here, each math function is told how accurate it needs to be, and a set of rules determines the highest required accuracy.

But every solution brings a problem. With RRA, there is no way to tell very small numbers from zero. So computing "1-1" might give you "0.000000000", which is correct but upsetting because of all the excess precision. You could try to test if "0.000000000" was equal to "0" and simplify the output, but testing for equality of two numbers in RRA is not guaranteed to terminate: you can tell if two numbers are unequal by going to more and more precision until you find a difference, but if the numbers happen to be equal, this procedure never ends.

The key realization for [Boehm] and his collaborators was that you could use RRA only for cases where you deal with inexact numbers. Most of the time, the Android calculator deals with rationals. However, when an operation produces a potentially irrational result, it switches to RRA for the approximation – which is fine, since no finite representation could have been exactly right anyway. The result is a system that doesn't show excess precision, but correctly displays all of the digits that it does show.

We really like [Chad’s] step-by-step explanation. If you would rather dive into the math, you can read [Boehm’s] paper on the topic. If you ever wonder how many computer systems handle odd functions like sine and cosine, read about CORDIC. Or, avoid all of this and stick to your slide rule.

Assets AI

By: EasyWithAI
3 January 2023 at 14:16
Assets AI offers unique, original game assets for you to use in your game design and development. Prices start at $2.99 per asset and you only pay for what you need. The assets are of high quality and you will own them forever after downloading. New assets are added to their collection every week, including […]

Source

Eraser AI

By: EasyWithAI
6 May 2024 at 12:25
Eraser AI is a technical design copilot that’s able to streamline technical design workflows for developers and engineering teams. It can serve as a copilot for creating, editing, and documenting diagrams, architectures, and design documents using natural language prompts. Some more detailed use cases for this particular tool can be found on the main website, […]

Source
