
How to Build an STM32 Web Dashboard Using the Mongoose Wizard

26 May 2025 at 02:00
Screenshot of Mongoose Wizard.

Today from the team at Cesanta Software — the people who gave us the open-source Mongoose Web Server Library and Mongoose OS — we have an article covering how to build an STM32 web dashboard.

The article runs through setting up a development environment; creating the dashboard layout; implementing the dashboard, device settings, and firmware update pages; building and testing the firmware; and attaching UI controls to the hardware, before wrapping up with a conclusion.

The web dashboard is all well and good, but in our opinion the killer feature remains the Over-The-Air (OTA) update facility, which allows for authenticated wireless firmware updates via the web dashboard. The rest is just gravy. In the video you get to see how to use your development tools to create a firmware file suitable for OTA updates.

If you’re thinking this all looks a little familiar, that’s because we recently wrote about their web dashboard for the ESP32. This is the same again but emphasizing the STM32 support this time around. We originally heard about the Mongoose technology line all the way back in 2017!

Thanks to [Toly] for letting us know about this new how-to.

Version Control to the Max

14 May 2025 at 14:00

There was a time when version control was an exotic idea. Today, things like Git and a handful of other tools allow developers to easily rewind the clock or work on different versions of the same thing with very little effort. I’m here to encourage you not only to use version control but also to go even a step further, at least for important projects.

My First Job

The QDP-100 with — count ’em — two 8″ floppies (from an ad in Byte magazine)

I remember my first real job back in the early 1980s. We made a particular type of sensor that had a 6805 CPU onboard and, of course, had firmware. We did all the development on physically big CP/M machines with the improbable name of Quasar QDP-100s. No, not that Quasar. We’d generate a hex file, burn an EPROM, test, and eventually, the code would make it out in the field.

Of course, you always have to make changes. You might send a technician out with a tube full of EPROMs or, in an emergency, we’d buy the EPROMs space on a Greyhound bus. Nothing like today.

I was just getting started, and the guy who wrote the code for those sensors wasn’t much older than me. One day, we got a report that something was misbehaving out in the field. I asked him how we knew what version of the code was on the sensor. The blank look I got back worried me.

Seat of the Pants

Version control circa 1981 alongside a 3.5-inch floppy that held much more data

Turns out, he’d burn however many EPROMs were required and then plow forward developing code. We had no idea what code was really running in the field. After we fixed the issue, I asked for and received a new rule. Every time we shipped an EPROM, it got a version number sticker, and the entire development directory went on an 8″ floppy. The floppy got a write-protect tab and went up on the shelf.

I was young. I realize now that I needed to back those up, too, but it was still better than what we had been doing.

Enter Meta Version Control

Today, it would have been easy to tag a commit and, later, check it back out. But there is still a latent problem. Your source code is only part of the equation when you are writing code. There’s also your development environment, including the libraries, the compiler, and anything else that can add to or modify your code. How do you version control that? Then there’s the operating system, which could interact with your code or development tools, too.

Maybe it is a callback to my 8″ floppy days, but I have taken to doing serious development in a virtual machine. It doesn’t matter if you use QEMU or VirtualBox or VMware. Just use it. The reason is simple. When you do a release, you can back up the entire development environment.

When you need to change something five years from now, you might find the debugger no longer runs on your version of the OS. The compiler fixed some bugs that you rely on or added some that you now trip over. But if you are in your comfy five-year-old virtual environment, you won’t care. I’ve had a number of cases where I wish I had done that because my old DOS software won’t run anymore. Switched to Linux? Or NewOS 2100tm? No problem, as long as it can host a virtual machine.

Can’t decide on which one to use? [How to Simple] has some thoughts in the video below.

How About You?

How about it? Do you or will you virtualize and save? Do you use containers for this sort of thing? Or do you simply have faith that your version-controlled source code is sufficient? Let us know in the comments.

If you think Git is just for software, think again.

Web Dashboard and OTA Updates for the ESP32

10 May 2025 at 08:00
Mongoose Wizard new project dialog.

Today we are happy to present a web-based GUI for making a web-based GUI! If you’re a programmer then web front-end development might not be your bag. But a web-based graphical user interface (GUI) for administration and reporting for your microcontroller device can look very professional and be super useful. The Mongoose Wizard can help you develop a device dashboard for your ESP32-based project.

In this article (and associated video) the Mongoose developers run you through how to get started with their technology. They help you get your development environment set up, create your dashboard layout, add a dashboard page, add a device settings page, add an over-the-air (OTA) firmware update page, build and test the firmware, and attach the user-interface controls to the hardware. The generated firmware includes an embedded web server for serving your dashboard and delivering its REST interface, which is pretty handy.

You will find no end of ESP32-based projects here at Hackaday which you could potentially integrate with Mongoose. We think the OTA support is an excellent feature to have, but of course there are other ways of supporting that functionality.

Thanks to [Toly] for this tip.

Hardware Built For Executing Python (Not Pythons)

By: Lewin Day
6 May 2025 at 08:00

Lots of microcontrollers will accept Python these days, with CircuitPython and MicroPython becoming ever more popular in recent years. However, there’s now a new player in town. Enter PyXL, a project to run Python directly in hardware for maximum speed.

What’s the deal with PyXL? “It’s actual Python executed in silicon,” notes the project site. “A custom toolchain compiles a .py file into CPython ByteCode, translates it to a custom assembly, and produces a binary that runs on a pipelined processor built from scratch.” Currently, there isn’t a hard silicon version of PyXL — no surprise given what it costs to make a chip from scratch. For now, it exists as logic running on a Zynq-7000 FPGA on an Arty-Z7-20 devboard. There’s an ARM CPU helping out with setup and memory tasks for now, but the Python code is executed entirely in dedicated hardware.

The headline feature of PyXL is speed. A comparison video demonstrates this with a measurement of GPIO latency. In this test, PyXL runs at 100 MHz and achieves a round-trip latency of 480 nanoseconds, while MicroPython running on a PyBoard at 168 MHz manages a much slower 15,000 nanoseconds. Based on this result, the project site claims PyXL can be 30x faster than MicroPython (15,000 / 480 ≈ 31), or 50x faster once you normalize for the difference in clock speeds.

Python has never been the most real-time of languages, but efforts like this attempt to push it in that direction. The aim is that it may finally be possible to write performance-critical code in Python from the outset. We’ve taken a look at Python in the embedded world before, too, albeit in very different contexts.

A Gentle Introduction to COBOL

By: Maya Posch
30 April 2025 at 14:00

As the Common Business Oriented Language, COBOL has a long and storied history. To this day it’s quite literally the financial bedrock for banks, businesses, and financial institutions, running largely unnoticed by the world on mainframes and similar high-reliability computer systems. That said, as a domain-specific language targeting boring business things, it doesn’t quite get the attention or hype that general-purpose programming or scripting languages enjoy. Its main characteristic in the public eye appears to be that it’s ‘boring’.

Despite this, COBOL is a very effective language for writing data transactions, report generation, and related tasks. Due to its narrow focus on business applications, it gets one started with very little fuss and is highly self-documenting, while providing native support for decimal calculations and a range of I/O and database access types, even with mere files. Since the 2002 standard, COBOL has undergone a number of modernizations, such as free-form code, object-oriented programming, and more.

Without further ado, let’s fetch an open-source COBOL toolchain and run it through its paces with a light COBOL tutorial.

Spoiled For Choice

It used to be that if you wanted to tinker with COBOL, you pretty much had to either have a mainframe system with OS/360 or similar kicking around, or, starting in 1999, hurl yourself at setting up a mainframe system using the Hercules mainframe emulator. Things got a lot more hobbyist- and student-friendly in 2002 with the release of GnuCOBOL, formerly OpenCOBOL, which translates COBOL into C code before compiling it into a binary.

While serviceable, GnuCOBOL is a transpiler rather than a true compiler, and it does not claim any level of standard adherence despite scoring quite high against the NIST test suite. Fortunately, the GNU Compiler Collection (GCC) just got updated with a brand-new COBOL frontend (gcobol) in the 15.1 release. The only negative is that for now it is Linux-only, but if your distribution of choice already has it in the repository, you can fetch it there easily. The same goes for Windows folk who have WSL set up, or who can use GnuCOBOL with MSYS2.

With either compiler installed, you are now ready to start writing COBOL. The best part of this is that we can completely skip talking about the Job Control Language (JCL), which is an eldritch horror that one would normally be exposed to on IBM OS/360 systems and kin. Instead we can just use GCC (or GnuCOBOL) any way we like, including calling it directly on the CLI, via a Makefile, or integrated into an IDE if that’s your thing.

Hello COBOL

As is typical, we start with the ‘Hello World’ example as a first look at a COBOL application:

IDENTIFICATION DIVISION.
    PROGRAM-ID. hello-world.
PROCEDURE DIVISION.
    DISPLAY "Hello, world!".
    STOP RUN.

Assuming we put this in a file called hello_world.cob, this can then be compiled with e.g. GnuCOBOL: cobc -x -free hello_world.cob.

The -x indicates that an executable binary is to be generated, and -free that the provided source uses free format code, meaning that we aren’t bound to specific column use or sequence numbers. We’re also free to use lowercase for all the verbs, but having it as uppercase can be easier to read.

From this small example we can see the most important elements, starting with the identification division, which contains the program ID and optionally items like the author name, etc. The program code is found in the procedure division, which here contains a single display verb that outputs the example string. Of note is the use of the period (.) as a statement terminator.

At the end of the application we indicate this with stop run., which terminates the application, even if called from a subprogram.

Hello Data

As fun as a ‘hello world’ example is, it doesn’t give a lot of details about COBOL, other than that it’s quite succinct and uses plain English words rather than symbols. Things get more interesting when we start looking at the aspects which define this domain-specific language, and which make it so relevant today.

Few languages support decimal (fixed point) calculations, for example. In this COBOL Basics project I captured a number of examples of this and related features. The main change is the addition of the data division following the identification division:

DATA DIVISION.
WORKING-STORAGE SECTION.
01 A PIC 99V99 VALUE 10.11.      *> two digits, implied decimal point, two digits
01 B PIC 99V99 VALUE 20.22.
01 C PIC 99V99 VALUE 00.00.
01 D PIC $ZZZZV99 VALUE 00.00.   *> currency sign, Z suppresses leading zeroes
01 ST PIC $*(5).99 VALUE 00.00.  *> asterisks replace leading zeroes
01 CMP PIC S9(5)V99 USAGE COMP VALUE 04199.04.  *> signed, stored as binary
01 NOW PIC 99/99/9(4) VALUE 04102034.  *> edited into a date: 04/10/2034

The data division is unsurprisingly where you define the data used by the program. All variables used are defined within this division, contained within the working-storage section. While seemingly overwhelming, it’s fairly easily explained, starting with the two digits in front of each variable name. This is the data level and is how COBOL structures data, with 01 being the highest (root) level and up to 49 levels available below it to create hierarchical data.

This is followed by the variable name, up to 30 characters, and then the PICTURE (or PIC) clause. This specifies the type and size of an elementary data item. If we wish to define a decimal value, we can do so as two numeric characters (represented by 9) followed by an implied decimal point V, with two decimal numbers (99). As shorthand we can use e.g. S9(5) to indicate a signed value with 5 numeric characters. There are a few more special characters, such as the asterisk, which replaces leading zeroes, and Z for zero suppression.

The value clause does what it says on the tin: it assigns the value defined following it to the variable. There is, however, a gotcha here, as can be seen with the NOW variable: it gets a plain numeric value assigned, but due to the PIC format this is turned into a formatted date (04/10/2034).

Within the procedure division these variables are subjected to addition (ADD A TO B GIVING C.), subtraction with rounding (SUBTRACT A FROM B GIVING C ROUNDED.), multiplication (MULTIPLY A BY CMP.) and division (DIVIDE CMP BY 20 GIVING ST.).
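
Put together, a procedure division that exercises these verbs on the variables above could look something like the following sketch (the DISPLAY labels are purely illustrative):

PROCEDURE DIVISION.
    ADD A TO B GIVING C.                 *> C = A + B = 30.33
    DISPLAY "SUM:     " C.
    SUBTRACT A FROM B GIVING C ROUNDED.  *> C = B - A = 10.11, rounded
    DISPLAY "DIFF:    " C.
    MULTIPLY A BY CMP.                   *> CMP = CMP * A
    DIVIDE CMP BY 20 GIVING ST.          *> ST = CMP / 20, edited with $ and *
    DISPLAY "EDITED:  " ST.
    DISPLAY "THE DATE " NOW.             *> prints the formatted date 04/10/2034
    STOP RUN.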

Finally, there are a few different internal formats, as defined by USAGE: these are computational (COMP) and display (the default). Here COMP stores the data as binary, with a variable number of bytes occupied, somewhat similar to char, short and int types in C. These internal formats are mostly useful to save space and to speed up calculations.

Hello Business

In a previous article I went over the reasons why a domain specific language like COBOL cannot be realistically replaced by a general language. In that same article I discussed the Hello Business project that I had written in COBOL as a way to gain some familiarity with the language. That particular project should be somewhat easy to follow with the information provided so far. New are mostly file I/O, loops, the use of perform and of course the Report Writer, which is probably best understood by reading the IBM Report Writer Programmer’s Manual (PDF).

Going over the entire code line by line would take a whole article by itself, so I will leave it as an exercise for the reader unless there is somehow a strong demand by our esteemed readers for additional COBOL tutorial articles.

Suffice it to say that there is a lot more functionality in COBOL beyond these basics. The IBM ILE COBOL reference (PDF), the IBM Mainframer COBOL tutorial, the Wikipedia entry and others give a pretty good overview of many of these features, which includes object-oriented COBOL, database access, heap allocation, interaction with other languages and so on.

Despite being only a novice COBOL programmer at this point, I have found this DSL to be very easy to pick up once I understood some of the oddities about the syntax, such as the use of data levels and the PIC formats. It is my hope that with this article I was able to share some of the knowledge and experiences I gained over the past weeks during my COBOL crash course, and maybe inspire others to also give it a shot. Let us know if you do!

C64 Assembly in Parts

24 April 2025 at 05:00

[Michal Sapka] wanted to learn a new skill, so he decided on the Commodore 64 assembly language. We didn’t say he wanted to learn a new skill that might land him a job. But we get it and even applaud it. Especially since he’s written a multi-part post about what he’s doing and how you can do it, too. So far, there are four parts, and we’d bet there are more to come.

The series starts with the obligatory “hello world,” as well as some basic setup steps. By part 2, you are learning about registers and numbers. Part 3 covers some instructions, and by part 4, he finds that there are even more registers to contend with.

One of the great things about doing a project like this today is that you don’t have to have real hardware. Even if you want to eventually run on real hardware, you can edit in comfort, compile on a fast machine, and then debug and test on an emulator. [Michal] uses VICE.

The series is far from complete, and we hear part 5 will talk about branching, so this is a good time to catch up.

We love applying modern tools to old software development.

Abusing DuckDB-WASM To Create Doom In SQL

By: Maya Posch
23 April 2025 at 23:00

These days you can run Doom anywhere on just about anything, with things like porting Doom to JavaScript now about as interesting as writing Snake in BASIC on one’s graphing calculator. In a twist, [Patrick Trainer] had the idea to use SQL instead of JS to do the heavy lifting of the Doom game loop. Backed by the WebAssembly version of the analytical DuckDB database software, a Doom-lite clone was coded that demonstrates the principle that anything in life can be captured in a spreadsheet or database application.

Rather than having the game world state implemented in JavaScript objects, or pixels drawn to a Canvas/WebGL surface, this implementation models the entire world state in the database. To render the player’s view, the SQL VIEW feature is used to perform raytracing (in SQL, of course). Any events are defined as SQL statements, including movement. Bullets hitting a wall or impacting an enemy result in the bullet and possibly the enemy getting DELETE-ed.

The role of JavaScript in this Doom clone is reduced to gluing the chunks of SQL together and handling sprite Z-buffer checks as well as keyboard input. The result is a glorious ASCII-based game of Doom which you can experience yourself with the DuckDB-Doom project on GitHub. While not very practical, it was absolutely educational, showing that not only is it fun to make domain-specific languages do things they were never designed for, but you also get to learn a lot about them along the way.

Thanks to [Particlem] for the tip.

Porting COBOL Code and the Trouble With Ditching Domain Specific Languages

By: Maya Posch
16 April 2025 at 14:00

Whenever the topic is raised in popular media about porting a codebase written in an ‘antiquated’ programming language like Fortran or COBOL, very few people tend to object to this notion. After all, what could be better than ditching decades of crusty old code in a language that only your grandparents can remember as being relevant? Surely a clean and fresh rewrite in a modern language like Java, Rust, Python, Zig, or NodeJS will fix all ailments and make future maintenance a snap?

For anyone who has ever had to actually port large codebases or dealt with ‘legacy’ systems, their reflexive response to such announcements most likely ranges from a shaking of one’s head to mad cackling as traumatic memories come flooding back. The old idiom of “if it ain’t broke, don’t fix it”, purportedly coined in 1977 by Bert Lance, is a feeling that has been shared by countless individuals over millennia. Even worse, how can you ‘fix’ something if you do not even fully understand the problem?

In the case of languages like COBOL this is doubly true, as it is a domain specific language (DSL). This is a very different category from general purpose system programming languages like the aforementioned ‘replacements’. The suggestion of porting the DSL codebase is thus to effectively reimplement all of COBOL’s functionality, which should seem like a very poorly thought out idea to any rational mind.

Sticking To A Domain

The term ‘domain specific language’ is pretty much what it says it is, and there are many such DSLs around, ranging from PostScript and SQL to the shader language GLSL. Although it is definitely possible to push DSLs into doing things which they were never designed for, the primary point of a DSL is to explicitly limit its functionality to that one specific domain. GLSL, for example, is based on C and could be considered a very restricted version of that language, which raises the question of why one shouldn’t just write shaders in C.

Similarly, Fortran (Formula Translation) was designed as a DSL targeting scientific and high-performance computation. First used in 1957, it still ranks in the top 10 of the TIOBE index, and just about any code that has to do with high-performance computation (HPC) in science and engineering will be written in Fortran or strongly rely on libraries written in Fortran. The reason for this is simple: from the beginning Fortran was designed to make such computations as easy as possible, with subsequent revisions to the language standard adding features where needed.

Fortran’s latest standard update was published in November 2023, joining the COBOL 2023 standard; both DSLs are thus still very much alive and current today.

The strength of a DSL is often underestimated, as the whole point of a DSL is that you can teach this simpler, focused language to someone who can then become fluent in it, without requiring them to become fluent in a generic programming language and all the libraries and other baggage that entails. For those of us who already speak C, C++, or Java, it may seem appealing to write everything in that language, but not to those who have no interest in learning a whole generic language.

There are effectively two major reasons why a DSL is the better choice for said domain:

  • Easy to learn and teach, because it’s a much smaller language
  • Far fewer edge cases and simpler tooling

In the case of COBOL and Fortran this means only a fraction of the keywords (‘verbs’ for COBOL) to learn, and a language that’s streamlined for a specific task, whether it’s to allow a physicist to do some fluid-dynamic modelling, or a staff member at a bank or the social security offices to write a data processing application that churns through database data in order to create a nicely formatted report. Surely one could force both of these people to learn C++, Java, Rust or NodeJS, but this may backfire in many ways, the resulting code quality being one of them.

Tangentially, this is also one of the amazing things in the hardware design language (HDL) domain, where rather than using (System)Verilog or VHDL, there’s an amazing growth of alternative HDLs, many of them implemented in generic scripting and programming languages. That this prevents any kind of skill and code sharing, and repeatedly (and often poorly) reinvents the wheel, seems to be of little concern to many.

Non-Broken Code

A very nice aspect of these existing COBOL codebases is that they generally have been around for decades, during which time they have been carefully pruned, trimmed and debugged, requiring only minimal maintenance and updates while they happily keep purring along on mainframes as they process banking and government data.

One argument that has been made in favor of porting from COBOL to a generic programming language is ‘ease of maintenance’, pointing out that COBOL is supposedly very hard to read and write and thus maintaining it would be far too cumbersome.

Since it’s easy to philosophize about such matters from a position of ignorance and/or conviction, I recently decided to take up some COBOL programming from the position of both a COBOL newbie as well as an experienced C++ (and other language) developer. Cue the ‘Hello Business’ playground project.

For the tooling I used the GnuCOBOL transpiler, which converts the COBOL code to C before compiling it to a binary, but in a few weeks the GCC 15.1 release will bring a brand new COBOL frontend (gcobol) that I’m dying to try out. As language reference I used a combination of the Wikipedia entry for COBOL, the IBM ILE COBOL language reference (PDF) and the IBM COBOL Report Writer Programmer’s Manual (PDF).

My goal for this ‘Hello Business’ project was to create something that did actual practical work. I took the FileHandling.cob example from the COBOL tutorial by Armin Afazeli as a starting point, which I modified and extended to read in records from a file, employees.dat, before using the standard Report Writer feature to create a report file in which the employees and their salaries are listed, with page numbering and the total salary value summed in a report footing entry.
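
For a taste of what the file-handling half of such a program involves, below is a minimal free-format sketch along the same lines. Note that the record layout (EMP-NAME, EMP-SALARY) and the field widths are illustrative assumptions, not the actual layout used in the project:

IDENTIFICATION DIVISION.
PROGRAM-ID. hello-employees.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT EMPLOYEE-FILE ASSIGN TO "employees.dat"
        ORGANIZATION IS LINE SEQUENTIAL.
DATA DIVISION.
FILE SECTION.
FD EMPLOYEE-FILE.
01 EMPLOYEE-RECORD.
    05 EMP-NAME   PIC X(20).     *> fixed-width name field
    05 EMP-SALARY PIC 9(6)V99.   *> salary with implied decimal point
WORKING-STORAGE SECTION.
01 WS-EOF   PIC A VALUE "N".     *> end-of-file flag
01 WS-TOTAL PIC 9(8)V99 VALUE 0.
PROCEDURE DIVISION.
    OPEN INPUT EMPLOYEE-FILE.
    PERFORM UNTIL WS-EOF = "Y"
        READ EMPLOYEE-FILE
            AT END MOVE "Y" TO WS-EOF
            NOT AT END
                ADD EMP-SALARY TO WS-TOTAL
                DISPLAY EMP-NAME " " EMP-SALARY
        END-READ
    END-PERFORM.
    CLOSE EMPLOYEE-FILE.
    DISPLAY "TOTAL: " WS-TOTAL.
    STOP RUN.

The actual project hands each record to Report Writer instead of DISPLAY-ing it, which is what takes care of the page headings, the numbering, and the footing total.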

My impression was that although it takes a moment to learn the various divisions that the variables, files, I/O, and procedures are put into, it’s all extremely orderly and predictable. The compiler also will helpfully tell you if you did anything out of order or forgot something. While data level numbering to indicate data associations is somewhat quaint, after a while I didn’t mind at all, especially since this provides a whole range of meta information that other languages do not have.

The lack of semicolons everywhere is nice, with only a single period indicating the end of a scope, even if it concerns an entire loop (perform); the sketch below shows what that looks like. I used the modern free-style form of COBOL, which removes the need to use specific columns for parts of the code, which no doubt made things a lot easier. In total it only took me a few hours to create a semi-useful COBOL application.
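
As a quick illustration of that single-period scoping, the statements inside an inline perform loop carry no terminators at all; the one period after end-perform closes the entire sentence (a minimal sketch, assuming a counter declared as 01 I PIC 9 VALUE 0. in working-storage):

PERFORM VARYING I FROM 1 BY 1 UNTIL I > 3
    DISPLAY "Iteration " I
END-PERFORM.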

Would I opt to write a more extensive business application in C++ if I were put on a tight deadline? I don’t think so. If I had to do COBOL-like things in C++, I would be hunting for various libraries, get stuck up to my gills in complex configurations, and be scrambling to find replacements for things like Report Writer, or be forced to write my own. Meanwhile, in COBOL everything is already there, because it’s what the DSL is designed for. Replacing C++ with Java or the like wouldn’t help either, as you’d end up doing so much boilerplate work and dependency wrangling.

A Modern DSL

Perhaps the funniest thing about COBOL is that since the 2002 standard it has gained a whole range of features that push it closer to generic languages like Java. These include object-oriented programming, bit and boolean types, heap-based memory allocation, method overloading, and asynchronous messaging. Meanwhile the simple English, case-insensitive syntax – with allowance for various spellings and acronyms – means that you can rapidly type code without adding symbol soup, and reading it is obvious even for a beginner, as the code literally does what it says it does.

True, the syntax and naming feels a bit quaint at first, but that is easily explained by the fact that when COBOL appeared on the scene, ALGOL was still highly relevant and the C programming language wasn’t even a glimmer in Dennis Ritchie’s eyes yet. If anything, COBOL has proven itself – much like Fortran and others – to be a time-tested DSL that is truly a testament to Grace Hopper and everyone else involved in its creation.

Eraser AI

By: EasyWithAI
6 May 2024 at 12:25
Eraser AI is a technical design copilot that’s able to streamline technical design workflows for developers and engineering teams. It can serve as a copilot for creating, editing, and documenting diagrams, architectures, and design documents using natural language prompts. Some more detailed use cases for this particular tool can be found on the main website, […]


Devin

By: EasyWithAI
14 March 2024 at 18:09
Devin is an AI coding assistant that can function as a fully autonomous software engineer, able to plan and execute complex coding tasks requiring thousands of decisions. It offers impressive capabilities like learning new technologies on the fly, building and deploying full apps from scratch, automatically finding and fixing bugs, training its own AI models, […]


KaneAI

By: EasyWithAI
21 August 2024 at 12:36
KaneAI by LambdaTest is a first of its kind AI Test Assistant with industry-first AI features like test authoring, management and debugging capabilities built from the ground up for high-speed quality engineering teams. KaneAI enables you to create and evolve complex test cases using natural language, significantly reducing the time and expertise required to get […]

