If you wonder how Large Language Models (LLMs) work and aren’t afraid of getting a bit technical, don’t miss [Brendan Bycroft]’s LLM Visualization. It is an interactively animated, step-by-step walk-through of a GPT large language model, complete with an animated and interactive 3D block diagram of everything going on under the hood. Check it out!
The demonstration walks through a simple task and shows every step. The task is this: using the nano-gpt model, take a sequence of six letters and put them into alphabetical order.
A GPT model is a highly complex prediction engine, so the whole process begins with tokenizing the input (breaking up words and assigning numerical values to the chunks) and ends with choosing an appropriate output from a list of probabilities. There are of course many more steps in between, and different ways to adjust the model’s behavior. All of these are made quite clear by [Brendan]’s process breakdown.
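That last step, turning the model’s output scores into a single chosen token, is compact enough to sketch in a few lines of Python. This is a generic illustration of softmax sampling, not code from [Brendan]’s visualization:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    """Turn the model's raw output scores (logits) into probabilities and pick one token."""
    scaled = np.asarray(logits, dtype=float) / temperature  # temperature < 1 sharpens, > 1 flattens
    probs = np.exp(scaled - scaled.max())                   # softmax, shifted for numerical stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)            # index of the sampled token

# Toy vocabulary of six "letter" tokens with made-up logits
print(sample_next_token([2.0, 0.5, 0.1, -1.0, 0.0, 1.2], temperature=0.8))
```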
We’ve previously covered how LLMs work, explained without math, an approach which eschews the gritty technical details in favor of focusing on functionality, but it’s also nice to see an approach like this one, which embraces the technical elements of exactly what is going on.
SDRs have been a game changer for radio hobbyists, but for ham radio applications, they often need a little help. That’s especially true of SDR dongles, which don’t have a lot of selectivity in the HF bands. But they’re so darn cheap and fun to play with, what’s a ham to do?
[VK3YE] has an answer, in the form of this homebrew software-defined radio (SDR) helper. It’s got a few features that make using a dongle like the RTL-SDR on the HF bands a little easier and a bit more pleasant. Construction is dead simple, based on what was in the junk bin: a potentiometer for attenuating stronger signals, a high-pass filter to tamp down strong medium-wave broadcast stations, and a series-tuned LC circuit for each of the HF bands to provide some needed selectivity. Everything is wired together ugly-style in a metal enclosure, with a little jiggering needed to isolate the variable capacitor from ground.
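Those series-tuned LC circuits pick out each band via the usual resonance formula, f = 1/(2π√(LC)). A quick sketch with hypothetical component values (not [VK3YE]’s actual parts) shows how one inductor and a variable capacitor can sweep across the HF bands:

```python
import math

def resonant_frequency_hz(l_henry, c_farad):
    """Series LC resonant frequency: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# Hypothetical values: a 2.2 uH inductor with a 33-330 pF variable capacitor
for c_pf in (33, 100, 330):
    f = resonant_frequency_hz(2.2e-6, c_pf * 1e-12)
    print(f"C = {c_pf:>3} pF -> f0 = {f / 1e6:5.2f} MHz")
```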
The last two-thirds of the video below shows the helper in use on everything from the 11-meter (CB) band down to the AM bands. This would be a great addition to any ham’s SDR toolkit.
China’s new CHIEF hypergravity facility recently came online to begin research projects, after construction started in 2018. The name stands for Centrifugal Hypergravity and Interdisciplinary Experiment Facility, which covers basically what it is about: using centrifuges, immense accelerations can be generated. With gravity on Earth defined as an acceleration of 1 g, hypergravity is thus any gravity-like acceleration greater than 1 g. This is distinct from simple pressure, as in e.g. a hydraulic press, because the acceleration acts on the object directly and determines characteristics such as its effective weight. This is highly relevant for many disciplines, including space flight, deep-ocean exploration, materials science, and aeronautics.
While humans can take a sustained g-force (g0) of about 9 g0 (roughly 88 m/s²) in the case of trained fighter pilots, the acceleration generated by CHIEF’s two centrifuges is significantly above that, able to reach hundreds of g. For details of these centrifuges, this preprint article by [Jianyong Liu] et al. from April 2024 shows their construction and the engineering that goes into their operation, especially the aerodynamic characteristics. Both air pressure (30 – 101 kPa) and arm acceleration (200 – 1000 g) are considered, with the main risks being overpressure and resonance, which, if not designed for, can obliterate such a centrifuge.
The capacity of CHIEF is said to max out at 1,900 gravity tons (g·t, the payload mass in metric tons multiplied by the acceleration in g), which is significantly more than the 1,200 g·t of the US Army Corps of Engineers’ hypergravity facility.
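To put those numbers in perspective, the g-level at the end of a centrifuge arm is just centripetal acceleration divided by standard gravity, and the g·t rating multiplies that by the payload mass. Here is a rough sketch with made-up arm dimensions, since CHIEF’s actual specifications aren’t given in the article:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def g_level(radius_m, rpm):
    """Centripetal acceleration at the arm tip, expressed as a multiple of g."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular velocity in rad/s
    return omega ** 2 * radius_m / G0

def capacity_g_tons(g, payload_tonnes):
    """'Gravity ton' capacity: acceleration in g times payload mass in metric tons."""
    return g * payload_tonnes

# Hypothetical example: a 4.5 m arm at 250 rpm carrying a 6 t payload
g = g_level(4.5, 250)
print(f"{g:.0f} g at the arm tip, {capacity_g_tons(g, 6):.0f} g·t capacity")
```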
Tinkerers and tech enthusiasts, brace yourselves: the frontier of biohacking has just expanded. Picture implantable medical devices that don’t need batteries: no more surgeries for replacements or bulky contraptions. Though not all new (see below), ChemistryWorld recently shed new light on these innovations. It’s as exciting as it is unnerving; we, as hackers, know all too well that blending tech and biology treads a fine ethical line. Realising our bodies can be hacked is as thrilling as it is unsettling, and poses deeper questions about human-machine integration.
Since the first pacemaker hit the scene in 1958, powered by rechargeable nickel-cadmium batteries and induction coils, progress has been steady but bound by battery limitations. Now, researchers like Jacob Robinson from Rice University are flipping the script, moving to designs that harvest energy from within. Whether through mechanical heartbeats or lung inflation, these implants are shifting to a network of energy-harvesting nodes.
The leap from triboelectric nanogenerators made of flexible, biodegradable materials to piezoelectric devices tapping body motion is quite something. John Rogers at Northwestern University points out that the real challenge is extracting power without harming the body’s natural function. Energy isn’t free-flowing; overharvesting could strain or damage organs, a topic we also addressed in April of this year.
As we edge toward battery-free implants, these breakthroughs could redefine biomedical tech. A good starting point for diving into this paradigm shift is this article from 2023, which will get you up to speed on prior innovations in the field. Happy tinkering, and: stay critical! For we hackers know that there’s an alternative use for everything!
Who doesn’t like dial-up internet? Even if those who survived the dial-up years are happy to be on broadband, and those who are still on dial-up wish that they weren’t, there’s definitely a nostalgic factor to the experience. Yet recreating the experience can be a hassle, with signing up for a dial-up ISP or jumping through many (POTS) hoops to get a dial-up server up and running. An easier way is demonstrated by [Minh Danh] with a Viking DLE-200B telephone line simulator in a recent blog post.
This little device does all the work of making two telephones (or modems) think that they’re communicating via a regular old POTS network. After picking up one of these puppies for a mere $5 at a flea market, [Minh Danh] tested it first with two landline phones to confirm that yes, you can call one phone from the other and hold a conversation. The next step was thus to connect two PCs via their modems, with the other side of the line receiving the ‘call’. In this case a Windows XP system was configured to be the dial-up server, passing through its internet connection via the modem.
With this done, a 33.6 kbps dial-up connection was successfully established on the client Windows XP system, with a blistering 3.8 kB/s download speed. The connection tops out at 33.6 kbps because the DLE-200B does not support 56K; according to the manual it doesn’t even support anything above 28.8 kbps, so even reaching these speeds was lucky.
Last Thursday we were at Electronica, which is billed as the world’s largest electronics trade show, and it probably is! It fills up twenty airplane-hangar-sized halls in Munich, and only takes place every two years.
And what did we see on the wall in the Raspberry Pi department? One of the relatively new AI-enabled cameras running a real-time pose estimation demo, powered by nothing less than a brand-new Raspberry Pi Compute Module 5. And it seemed happy to be running without a heatsink, but we don’t know how much load it was put under – most of the AI processing is done in the camera module.
We haven’t heard anything about the CM5 yet from the Raspberry folks, but we can’t imagine there’s all that much to say except that they’re getting ready to start production soon. If you look really carefully, this CM5 seems to have mouse bites on it that haven’t been ground off, so we’re speculating that this is still a pre-production unit, but feel free to generate wild rumors in the comment section.
The test board looks very similar to the RP4 CM demo board, so we imagine that the footprint hasn’t changed. (Edit: Oh wait, check out the M.2 slot on the left-hand side!)
Last week I completed the SAO flower badge redrawing task, making a complete KiCad project. Most of the SAO petals have already been released as KiCad projects, except for the Petal Matrix. The design features 56 LEDs arranged in eight spiral arms radiating from the center. What it does not feature are straight lines, right angles, or parts placed on a regular grid.
Importing into KiCad
I followed the same procedures as the main flower badge with no major hiccups. This design didn’t have any released schematics, but backing out the circuits was straightforward. It also helped that user [sphereinabox] over on the Hackaday Discord server had rung out the LED matrix connections and gave me his notes.
Grep Those Positions
At first I only wanted to read the LED data for analysis, and I didn’t need the full KiCad + Python scripting for that. Running grep over the PCB file gives you a text file that can be easily parsed to get the numbers. I confirmed that the LED placements were truly as irregular as they looked.
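Since a .kicad_pcb file is just s-expression text, pulling the placements out takes little more than a regular expression. Here’s a minimal sketch of that parsing step; the file name is a placeholder and matching on “LED” in the footprint name is an assumption about this particular board:

```python
import re

# Grab the (at X Y [angle]) placement that follows each LED footprint header
# in the s-expression text of the board file.
text = open("sao_petal_matrix.kicad_pcb").read()  # placeholder file name

pattern = re.compile(
    r'\(footprint\s+"[^"]*LED[^"]*".*?\(at\s+([-\d.]+)\s+([-\d.]+)(?:\s+([-\d.]+))?\)',
    re.DOTALL,
)

for x, y, angle in pattern.findall(text):
    print(f"x={float(x):8.3f}  y={float(y):8.3f}  angle={float(angle or 0):7.2f}")
```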
My biggest worry was how to obtain and re-apply the positions and angles of the LEDs, given the irregular layout of the spiral arms. Just like with the random angles of the six SAO connectors on the badge board, [Voja] doesn’t disappoint on this board, either. I fired up Python and used Matplotlib to get a visual perspective on the randomness of the placements, as one does. Due to the overall shape of the arms, there is a general trend to the numbers, but no obvious equation is discernible.
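The Matplotlib step is nothing fancy: a scatter plot of the positions with a quiver overlay for the rotation of each LED. A sketch, assuming a list of (x, y, angle) tuples from the parsing step above:

```python
import math
import matplotlib.pyplot as plt

def plot_led_placements(positions):
    """Scatter-plot LED centers and draw a small arrow showing each LED's rotation."""
    xs = [p[0] for p in positions]
    ys = [-p[1] for p in positions]  # PCB Y axis points down, so flip it for a natural view
    us = [math.cos(math.radians(p[2])) for p in positions]
    vs = [math.sin(math.radians(p[2])) for p in positions]
    plt.scatter(xs, ys, s=12)
    plt.quiver(xs, ys, us, vs, width=0.003)
    plt.gca().set_aspect("equal")
    plt.title("Petal Matrix LED placements")
    plt.show()
```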
It was obvious that I needed a script of some sort to place 56 new KiCad LED footprints onto the board. (Spoiler: I was wrong.) Theoretically I could have processed the PCB text file with bash or Python, creating a modified file. Since I only needed to change a few numbers, this wasn’t completely out of the question, but that would have been inelegant. It was time to get familiar with the KiCad + Python scripting capabilities. I dug in with gusto, but came away baffled.
KiCad’s Python Console to the Rescue — NOT
This being a one-time task for one specific PCB, writing a KiCad plugin didn’t seem appropriate. Instead, hacking around in the KiCad Python console looked like the way to go. But it didn’t work well for quick experimenting. You open the KiCad PCB console within the PCB editor, but when the console boots up, it doesn’t know anything about the currently loaded PCB. You need to import the KiCad Python interface library and then open the PCB file. Also, the current state of the Python REPL and the command history are not maintained between restarts of KiCad. I don’t see any advantages of using the built-in Python console over just running a script in your usual Python environment.
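For reference, the incantation the built-in console needs before it will tell you anything useful looks roughly like this (the file path is a placeholder, and the method names reflect the SWIG-based pcbnew API as I understand it):

```python
# Inside KiCad's scripting console, or any Python that can import pcbnew:
import pcbnew

board = pcbnew.LoadBoard("/path/to/petal_matrix.kicad_pcb")  # placeholder path

for fp in board.GetFootprints():
    pos = fp.GetPosition()  # internal units (nanometres)
    print(fp.GetReference(),
          pcbnew.ToMM(pos.x), pcbnew.ToMM(pos.y),
          fp.GetOrientationDegrees())
```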
Clearly there is a use case for this console. By all appearances, a lot of effort has gone into building up this capability. It appears to be full of features that must be valuable to some users and/or developers. Perhaps I should have stuck with it longer and figured it out.
KiCad Python Script Outside KiCad
This seemed like the perfect solution. The buzz in the community is that modern KiCad versions interface very well with Python. I’ve also been impressed with the improved KiCad project documentation in recent years. “This is going to be easy”, I thought.
First thing to note: the KiCad v8 interface library works only with Python 3.9. I run pyenv on my computers and already have 3.9 installed — check. However, you cannot just do a pip install kicad-something-or-other... to get the KiCad Python interface library. These libraries come bundled within the KiCad distribution. Furthermore, they only work with a custom-built version of Python 3.9 that is also included in the bundle. While I hadn’t encountered this situation before, I figured out you can make pyenv point to a Python that has been installed outside of pyenv. But before I got that working, I made another discovery.
The Python API is not “officially” supported. KiCad has announced that the current Simplified Wrapper and Interface Generator (SWIG)-based Python bindings are slated to be deprecated, to be replaced by Inter-Process Communication (IPC)-based bindings in February 2026. This tidbit of news coincided with my discovery of a third-party library.
Introducing KiUtils
Many people were asking questions about including external pip-installed modules from within the KiCad Python console. This confounded my search results, until I hit upon someone using the KiUtils package to solve the same problem I was having. Armed with this tool, I was up and running in no time. To be fair, I suspect KiUtils may also break when KiCad switches from the SWIG to the IPC interface, but KiUtils was so much easier to get up and running that I stuck with it.
I wrote a Python script to extract all the information I needed for the LEDs. The next step was to apply those values to the 56 new KiCad LED footprints to place each one in the correct position and orientation. As I searched for an example of writing a PCB file from KiUtils, I saw issue #113, “Broken as of KiCAD 8?”, on the KiUtils GitHub repository. Looks like KiUtils is already broken for v8 files. While I was able to read data from my v8 PCB file, it is reported that KiCad v8 cannot read files written by KiUtils.
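Reading the board with KiUtils really is short. Below is the gist of my extraction script, though the attribute names are from my reading of the KiUtils docs, so treat them as assumptions rather than gospel:

```python
from kiutils.board import Board

board = Board.from_file("sao_petal_matrix.kicad_pcb")  # placeholder path

for fp in board.footprints:
    if "LED" in fp.libId:        # filter on the footprint's library link
        pos = fp.position        # Position object with X, Y, and angle (degrees)
        print(fp.libId, pos.X, pos.Y, pos.angle)
```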
Scripting Not Needed — DOH
At a dead end, I was about to place all the LEDs by hand when I realized I could do it from inside KiCad; my excursions into KiCad and Python scripting were all for naught. The LED footprints had been imported from Altium Circuit Maker as one single footprint per LED (as opposed to some parts, which convert as one footprint per pad). This single realization made the problem trivial: I just needed to update the footprints from the library. While this did require a few attempts to get the cathodes and anodes sorted out, it was basically solved with a single mouse click.
Those Freehand Traces
The imported traces on this PCB were harder to clean up than those on the badge board. There were a lot of discontinuities in the track segments. These artifacts would work fine if you made a real PCB, but because some segment endpoints don’t precisely line up, KiCad doesn’t know they belong to the same net. Here is how these were fixed:
Curved segment endpoints can’t be dragged like straight-line segment endpoints can. Solutions:
If the next track is a straight line, drag the line to connect to the curved segment.
If the next track is also a curve, manually route a very short track between the two endpoints.
If you route a track broadside into a curved track, it will usually not connect as far as KiCad is concerned. The solution is to break the curved track at the desired intersection, and those endpoints will accept a connection.
Some end segments were not connected to a pad. These were fixed by either dragging or routing a short trace.
Applying these rules over and over again, I finally cleared all the discontinuities. Frustratingly, the algorithm to do this task already exists in a KiCad function: Tools -> Cleanup Graphics... -> Fix Discontinuities in Board Outline, with an accompanying tolerance field specified as a length in millimeters. But this operation, as its name notes, is restricted to lines on the Edge.Cuts layer.
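For what it’s worth, the snapping logic itself is simple enough that an “any layer” version could probably be scripted. Here is a hedged, untested pcbnew sketch of the idea; it only touches straight track segments and leaves arcs alone:

```python
import pcbnew

def snap_track_ends(board, tol_mm=0.05):
    """Snap pairs of straight-track endpoints that lie within tol_mm of each other."""
    tol = pcbnew.FromMM(tol_mm)
    tracks = [t for t in board.GetTracks() if t.GetClass() == "PCB_TRACK"]
    ends = []
    for t in tracks:
        ends.append((t, "start", t.GetStart()))
        ends.append((t, "end", t.GetEnd()))
    for i, (ta, _, pa) in enumerate(ends):
        for tb, which, pb in ends[i + 1:]:
            if ta is tb or pa == pb:
                continue
            if abs(pa.x - pb.x) <= tol and abs(pa.y - pb.y) <= tol:
                # Move track b's endpoint onto track a's endpoint so KiCad sees one net
                (tb.SetStart if which == "start" else tb.SetEnd)(pa)
```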
PCB vs Picture
When I was all done, I noticed a detail in the photo of the Petal Matrix PCB assembly from the Hackaday reveal article. That board (sitting on a rock) has six debugging / expansion test points connected to the six pins of the SAO connector. But in the Altium Circuit Maker PCB design, there are only two pads, A and B. These connect to the two auxiliary input pins of the AS1115 chip. I don’t know which is correct. (Editor’s note: they were just there for debugging.) If you use this project to build one of these boards, edit it according to your needs.
Conclusion
The SAO Petal Matrix redrawn KiCad project can be found over at this GitHub repository. It isn’t easy to work backwards in KiCad from the PCB to the schematic. I certainly wouldn’t want to reverse engineer a 9U VME board this way, but for many smaller projects, it isn’t an unreasonable task, either. You can also use much simpler tools to get the job done. Earlier this year over on Hackaday.io, user [Skyhawkson] did a great job backing out schematics from an Apollo-era PCB with Microsoft Paint 3D — a tool released in 2017 and just discontinued last week.
[CentyLab]’s PocketPD isn’t just adorably tiny — it also boasts some pretty useful features. It offers a lightweight way to get a precisely adjustable 0 to 20 V output at up to 5 A on banana jacks, with a rotary encoder and OLED display for ease of use.
PocketPD leverages USB-C Power Delivery (PD), a technology with capabilities our own [Arya Voronova] has summarized nicely. In particular, PocketPD makes use of the Programmable Power Supply (PPS) functionality to precisely set and control voltage and current. Doing this does require a compatible USB-C charger or power bank, but that’s not too big of an ask these days.
Even if an attached charger doesn’t support PPS, PocketPD can still be useful. The device interrogates the attached charger on every bootup, and displays available options. By default PocketPD selects the first available 5 V output mode with chargers that don’t support PPS.
The latest hardware version is still in development and the GitHub repository has all the firmware, which is aimed at making it easy to modify or customize. Interested in some hardware? There’s a pre-launch crowdfunding campaign you can watch.
Over the years we’ve featured many projects which attempt to replicate the feel of physical media when playing music. Usually this involves some kind of token representation of the media, but here’s [Bas] with a different twist (Dutch language, Google Translate link). He’s using the CDs themselves in their cases, identifying them by their barcodes.
At its heart is a Raspberry Pi Pico W and a barcode scanner — after reading the barcode, the Pi calls Discogs to find the tracks, and then uses the Spotify API to find the appropriate links. From there, Home Assistant forwards them along to a smart speaker for playback. As a nice touch, [Bas] designed a 3D printed holder for the electronics which makes the whole thing a bit neater to use.
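The lookup chain is easy to sketch. Here’s roughly what the Discogs half looks like using its public search endpoint; the token and barcode are placeholders, and [Bas]’s actual code running on the Pico will of course differ:

```python
import requests

DISCOGS_TOKEN = "YOUR_DISCOGS_TOKEN"  # placeholder personal access token

def lookup_release(barcode):
    """Search Discogs for a release by barcode and return the title of the first hit."""
    r = requests.get(
        "https://api.discogs.com/database/search",
        params={"barcode": barcode, "token": DISCOGS_TOKEN},
        headers={"User-Agent": "cd-barcode-player/0.1"},
        timeout=10,
    )
    r.raise_for_status()
    results = r.json().get("results", [])
    return results[0]["title"] if results else None  # e.g. "Artist - Album"

print(lookup_release("0123456789012"))  # placeholder barcode
```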
Product teardowns are great, but getting an unfiltered one from the people who actually designed and built the product is a rare treat. In the lengthy video after the break, former Formlabs engineer [Shane Wighton] tears down the Form 4 SLA printer while [Alec Rudd], the engineering lead for the project, answers all his prying questions.
[Shane] was part of the team that brought all Form 4’s predecessors to life, so he’s intimately familiar with the challenges of developing such a complex product. This means he can spot the small design details that most people would miss, and dive into the story behind each one. These include the hinges and poka-yoke (error-proofing) designed into the lid, the leveling features in the build-plate mount, the complex prototyping challenges behind the LCD panel and backlight, and the mounting features incorporated into every component.
A considerable portion of the engineering effort went into mitigating all the ways things could go wrong in production, shipping, and operation. The fact that most of the parts on the Form 4 are user-replaceable makes this even harder. It’s apparent that both engineers speak from a deep well of hard-earned experience, and it’s well worth the watch if you dream of bringing a physical product to market.
On November 6th, Northwestern University introduced a groundbreaking leap in haptic technology, and it’s worth every bit of attention now, even two weeks later. Full details are in their original article. This innovation brings tactile feedback into the future with a hexagonal matrix of 19 mini actuators embedded in a flexible silicone mesh. It’s the stuff of dreams for hackers and tinkerers looking for the next big thing in wearables.
What makes this patch truly cutting-edge? First, it offers multi-dimensional feedback: pressure, vibration, and twisting sensations—imagine a wearable that can nudge or twist your skin instead of just buzzing. Unlike the simple, one-note “buzzers” of old devices, this setup adds depth and realism to interactions. For those in the VR community or anyone keen on building sensory experiences, this is a game changer.
But the real kicker is its energy management. The patch incorporates a ‘bistable’ mechanism, meaning it holds either of two stable positions without continuous power, saving energy by recycling elastic energy stored in the skin. Think of it like a rubber band that snaps back and releases stored energy during operation. The result? Longer battery life and efficient power usage—perfect for tinkering with extended use cases.
And it’s not all fun and games (though VR fans should rejoice). This patch turns sensory substitution into practical tech for the visually impaired, using LiDAR data and Bluetooth to translate the surroundings into tactile feedback. It’s like a white cane, but integrated with data-rich spatial awareness feedback—a boost for accessibility.
Fancy more stories like this? Earlier this year, we wrote about these lightweight haptic gloves, which (for those who noticed) feature a similar hexagonal array of 19 sensors. A pattern for success? You can read the original article on TechXplore here.
High-speed photography with the camera on a fast-moving robot arm has become all the rage at red-carpet events, but this GlamBOT setup comes with a hefty price tag. To get similar visual effects on a much lower budget [Henry Kidman] built a large, very fast camera slider. As is usually the case with such projects, it’s harder than it seems.
The original GlamBOT has a full six degrees of freedom, but many of the shots it’s famous for are just a slightly curved path between two points. That curve adds a few zeros to the required budget, so a straight slider was deemed good enough for [Henry]’s purposes. The first remaining challenge is speed. Version one used linear rails made from shower curtain rails, with 3D-printed sliders driven by a large stepper motor via a belt. The stepper motor wasn’t powerful enough to achieve the desired acceleration, so [Henry] upgraded to a more powerful 6 hp servo motor.
Unfortunately, the MDF and 3D-printed frame components were not rigid enough for the upgraded torque, which caused several crashes into the ends of the frame as the belt slipped and failed to stop the camera platform. The frame was rebuilt from steel, with square tubing for the rails and steel plates for the brackets. That provided the required rigidity, but the welding had warped the rails, which led to a bumpy ride for the camera, so he had to use active stabilization on the gimbal and camera. The project was filled with setbacks and challenges, but in the end the results look very promising, with great slow-motion shots on a mock red carpet.
How do you collect a lot of data about the ionosphere? Well, you could use sounding rockets or specialized gear. Or maybe you can just conscript a huge number of cell phones. That was the approach taken by Google researchers in a recent paper in Nature.
The idea is that GPS and similar navigation satellites measure transit time of the satellite signal, but the ionosphere alters the propagation of those signals. In fact, this effect is one of the major sources of error in GPS navigation. Most receivers have an 8-parameter model of the ionosphere that reduces that error by about 50%.
However, by measuring the difference in arrival time between signals of different frequencies, the phone can estimate the total electron content (TEC) of the ionosphere between the receiver and the satellite. This requires a dual-frequency receiver, of course.
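The underlying relation is a textbook one: the ionosphere delays each signal by roughly 40.3·TEC/f² meters, so the pseudorange difference between two carriers gives TEC directly. A back-of-the-envelope sketch, not Google’s pipeline:

```python
def slant_tec(p1_m, p2_m, f1_hz=1575.42e6, f2_hz=1176.45e6):
    """Slant total electron content (electrons/m^2) from dual-frequency pseudoranges.

    Each carrier picks up an ionospheric group delay of about 40.3 * TEC / f^2 meters,
    so the difference between the two measured ranges isolates TEC. Defaults are the
    GPS L1/L5 frequencies used by dual-frequency phones.
    """
    return (p2_m - p1_m) * (f1_hz**2 * f2_hz**2) / (40.3 * (f1_hz**2 - f2_hz**2))

# Toy numbers: a 3.5 m range difference works out to roughly 27 TECU
print(f"{slant_tec(0.0, 3.5) / 1e16:.1f} TECU")
```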
This isn’t a new idea. There are a large number of fixed-position stations that make this measurement to contribute to a worldwide database. However, the roughly 9,000 stations can’t compete with cell phones everywhere. The paper outlines how Android smartphones can do calculations on the GPS propagation delays to report the TEC numbers.
As impractical as most overclocking of computers is these days, there is still a lot of fun to be had along the way. Case in point being [Pieter-Jan Plaisier]’s recent liquid nitrogen-aided overclocking of an unsuspecting Raspberry Pi 5 and its BCM2712 SoC. Previous OCing attempts with air cooling by [Pieter] had left things off at a paltry 3 GHz from the default 2.4 GHz, with the power management IC (PMIC) circuitry on the SBC turning out to be the main limiting factor.
The main change here was thus to go for liquid nitrogen (LN2) cooling, with a small chipset LN2 pot to fit on the SBC. Another improvement was the application of a NUMA (non-uniform memory access) patch to force the BCM2712’s memory controller to make better use of RAM chip parallelism.
With these changes, the OC could now hit 3.6 GHz, but at 3.7 GHz, the system would always crash. It was time to further investigate the PMIC issues.
The PMIC imposes voltage configuration limitations and turns the system off at high power consumption levels. A solution there was to replace said circuitry with an ElmorLabs AMPLE-X1 power supply and definitively void the SBC’s warranty. This involves removing inductors as well as solder mask to attach the external power wires. Yet even with these changes, the SoC frequency had trouble scaling, which is why an external clock board was used to replace the 54 MHz oscillator on the PCB. Unfortunately, this also failed to improve the final overclock.
We covered the ease of OCing to 3 GHz previously, and no doubt some of us are wondering whether the new SoC stepping may OC better. Regardless, if you want to get a faster small system without jumping through all those hoops, there are definitely better (and cheaper) options. But you do miss out on the fun of refilling the LN2 pot every couple of minutes.
Normally, you think of things casting a shadow as being opaque. However, new research shows that under certain conditions, a laser beam can cast a shadow. This may sound like nothing more than a novelty, but it may have applications in using one laser beam to control another. If you want more details, you can read the actual paper online.
Typically, light passes through light without having an effect, but not when you add a ruby crystal and specific laser wavelengths. In particular, a green laser drives a non-linear response in the crystal that casts a shadow in a blue laser passing through the same crystal.
The green laser increases the crystal’s ability to absorb the blue beam, which creates a matching region in the blue beam that appears as a shadow.
If you read the article, there’s more to measuring shadows than you might think. We aren’t sure what we would do with this information, but if you figure it out, let us know.
Ruby has a long history with lasers, of course. That green laser pointer you have? It might not be all green, after all.
Collecting retrocomputers is fun, especially when you find fully functional examples that you can plug in, switch on, and start playing with. Meanwhile, others prefer to find the damaged examples and nurse them back to health. [polymatt] can count himself in that category, as evidenced by his heroic rescue of a 1993 IBM ThinkPad Tablet.
The tablet came to [polymatt] in truly awful condition. Having been dropped at least once, the LCD screen was cracked, the case battered, and all the plastics were very much the worse for wear. Many of us would consider it too far gone, especially considering that replacement parts for such an item are virtually unobtainable. And yet, [polymatt] took on the challenge nonetheless.
Despite its condition, there were some signs of life in the machine. The pen-based touch display seemed to respond to the pen itself, and the backlight sort of worked, too. Still, with the LCD so badly damaged, it had to be replaced. Boggling the mind, [polymatt] was actually able to find a 9.4″ dual-scan monochrome LCD that was close enough, size-wise, to sort-of fit. To make it work, though, it needed a completely custom mount to fit the original case and electromagnetic digitizer sheet. From there, there was plenty more to do—recapping, recabling, fixing the batteries, and repairing the enclosure, including a fresh set of nice decals.
The fact is, 1993 IBM ThinkPad Tablets just don’t come along every day. These rare specimens are absolutely worth this sort of heroic restoration effort if you do happen to score one on the retro market. Video after the break.
Nowadays, if you want to delay an audio signal for, say, an echo or a reverb, you’d probably just do it digitally. But it wasn’t long ago that wasn’t a realistic option. Some devices used mechanical means, but there were also ICs like the TCA350 “bucket brigade” device that [10maurycy10] shows us in a recent post.
In this case, bucket brigade is a metaphor, calling to mind how firemen would pass buckets down the line to put out a fire. It’s a bit of an analog analogy. The “bucket” is a MOSFET and capacitor. The “water” is electrical charge stored in the cap. All those charges are tiny snippets of an analog signal.
In practice, the chip has two clock signals that do not overlap. The first one gates the signal onto a small capacitor, which follows the input signal voltage. Then, when that gate clock closes, the second clock gates that charge onto another, identical capacitor, discharging the first one. The whole process repeats, sometimes through hundreds of stages.
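The two-phase shuffle is easy to mimic in software. Here’s a toy model of a bucket-brigade delay line that ignores the charge losses a real chip suffers; the stage count is illustrative, not the TCA350’s actual figure:

```python
def bucket_brigade(samples, stages=64):
    """Toy bucket-brigade delay line: each clock, every bucket hands its value to the next."""
    buckets = [0.0] * stages
    out = []
    for s in samples:
        out.append(buckets[-1])       # the last bucket spills into the output
        buckets = [s] + buckets[:-1]  # everything else shifts one stage down the line
    return out

# A unit impulse emerges `stages` samples later
print(bucket_brigade([1.0] + [0.0] * 9, stages=4))
```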
In addition to a test circuit and some signals going in and out, the post also shows photomicrographs of the chip’s insides. As you might expect, all those identical gates make for a very regular layout on the die.
You might think these devices are obsolete, and that’s true. However, the basic idea is still in use for CCD camera sensors.
Sometimes, those old delay lines were actually columns of mercury or coiled-up transmission lines. You could even use a garden hose or build your own delay line memory.
On today’s installment of UE1 vacuum tube computer construction, we join [David Lovett] once more on the Usagi Electric farm, as he determines just how much work remains before the project can be called done. When we last left off, the paper tape reader had been motorized, with the paper tape being pulled smoothly past the photodiodes. This left [David] with the tasks of creating a PCB to wire up these photodiodes, putting together an amplification circuit (with tubes, of course) to amplify the signal from said photodiodes, and adding some lighting (two 1-watt incandescents) to shine through the paper tape holes. All of this is now in place, but does it work?
The answer here is a definite kinda: although there are definitely lovely squiggles on the oscilloscope, bit 0 turns out to be missing in action. This shouldn’t have come as a major surprise, as Bendix engineers dealt with effectively the same problem back in the 1950s. They, too, used the 9th hole on the 8-bit tape as a clock signal, but with this hole being much smaller than the other holes, not enough light passes through to activate the photodiode.
Here, the Bendix engineers opted to solve this by biasing the photodiode to be significantly more sensitive. This seems to be the ready-made solution for the UE1’s tape reader, too. After all, if it worked for Bendix for decades, surely it’ll work in 2024.
Beyond this curveball, the rest of the challenges involve getting a tape punched with known data on it so that the tape reader’s output can actually be validated beyond acknowledging the presence of squiggles on the scope display. Although the tape guiding mechanism seems more stable now, it also needs to be guided around in an endless loop due to the way that the UE1 computer will use the tape. Much like delay line memory, the paper tape will run in an endless loop, and the processor will simply skip over sections until it hits the next code it needs as part of a loop or jump.
A couple of weeks back, we covered an interesting method for prototyping PCBs using a modified CNC mill to 3D print solder onto a blank FR4 substrate. The video showing this process generated a lot of interest and no fewer than 20 tips to the Hackaday tips line, which continued to come in dribs and drabs this week. In a world where low-cost, fast-turn PCB fabs exist, the amount of effort that went into this method makes little sense, and readers certainly made that known in the comments section. Given that the blokes who pulled this off are gearheads with no hobby electronics background, it kind of made their approach a little more understandable, but it still left a ton of practical questions about how they pulled it off. And now a new video from the aptly named Bad Obsession Motorsports attempts to explain what went on behind the scenes.
To be quite honest, although the amount of work they did to make these boards was impressive, especially the part where they got someone to create a custom roll of fluxless tin-silver solder, we have to admit to being a little let down by the explanation. The mechanical bits, where they temporarily modified the CNC mill with what amounts to a 3D printer extruder and hot end to melt and dispense the solder, wasn’t really the question we wanted answered. We were far more interested in the details of getting the solder traces to stick to the board as they were dispensed and how the board acted when components were soldered into the rivets used as vias. Sadly, those details were left unaddressed, so unless they decide to make yet another video, we suppose we’ll just have to learn to live with the mystery.
What do mushrooms have to do with data security? Until this week, we’d have thought the two were completely unrelated, but then we spotted this fantastic article on “Computers Are Bad” that spins the tale of Iron Mountain, which people in the USA might recognize as a large firm that offers all kinds of data security products, from document shredding to secure offsite storage and data backups. We always assumed the “Iron Mountain” thing was simply marketing, but the company did start in an abandoned iron mine in upstate New York, where during the early years of the Cold War, it was called “Iron Mountain Atomic Storage” and marketed document security to companies looking for business continuity in the face of atomic annihilation. As Cold War fears ebbed, the company gradually rebranded itself into the information management entity we know today. But what about the mushrooms? We won’t ruin the surprise, but suffice it to say that IT people aren’t the only ones that are fed shit and kept in the dark.
Do you like thick traces? We sure do, at least when it comes to high-current PCBs. We’ve seen a few boards with really impressive traces and even had a Hack Chat about the topic, so it was nice to see Mark Hughes’ article on design considerations for heavy copper boards. The conventional wisdom with high-current applications seems to be “the more copper, the better,” but Mark explains why that’s not always the case and how trace thickness and trace spacing both need to be considered for high-current applications. It’s pretty cool stuff that we hobbyists don’t usually have to deal with, but it’s good to see how it’s done.
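For the curious, the usual starting point for sizing such traces is the IPC-2221 fusing-current relation I = k·ΔT^0.44·A^0.725. Here’s a quick sketch of what that works out to for external traces; a rule-of-thumb calculation, not a substitute for Mark’s article:

```python
def min_trace_width_mm(current_a, temp_rise_c=10.0, copper_oz=2.0, internal=False):
    """Minimum trace width from the IPC-2221 formula I = k * dT^0.44 * A^0.725.

    A is the cross-sectional area in square mils; k is 0.048 for external layers
    and 0.024 for internal ones. One ounce of copper per square foot is ~1.37 mil thick.
    """
    k = 0.024 if internal else 0.048
    area_sq_mil = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
    width_mil = area_sq_mil / (copper_oz * 1.37)
    return width_mil * 0.0254  # mils to millimetres

for amps in (5, 10, 20):
    print(f"{amps:>2} A -> {min_trace_width_mm(amps):.1f} mm wide on 2 oz copper")
```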
We imagine that there aren’t too many people out there with fond memories of Visual Basic, but back when it first came out in the early 1990s, the idea that you could actually make a Windows PC do Windows things without having to learn anything more than what you already knew from high school computer class was pretty revolutionary. By all lights, it was an awful language, but it was enabling for many of us, so much so that some of us leveraged it into successful careers. Visual Basic 6 was pretty much the end of the line for the classic version of the language, before it got absorbed into the whole .NET thing. If you miss that 2008 feel, here’s a VB6 virtual machine to help you recapture the glory days.
And finally, in this week’s “Factory Tour” segment we have a look inside a Japanese aluminum factory. The video mostly features extrusion, a process we’ve written about before, as well as casting. All of it is fascinating stuff, but what really got us was the glow of the molten aluminum, which we’d never really seen before. We’re used to the incandescent glow of molten iron or even brass and copper, but molten aluminum has always just looked like — well, liquid metal. We assumed that was thanks to its relatively low melting point, but apparently, you really need to get aluminum ripping hot for casting processes. Enjoy.