Today — 10 July 2025 — Hackaday

Ask Hackaday: Are You Wearing 3D Printed Shoes?

10 July 2025 at 14:00

We love 3D printing. We’ll print brackets, brackets for brackets, and brackets to hold other brackets in place. Perhaps even a guilty-pleasure Benchy. But 3D printed shoes? That’s where we start to have questions.

Every few months, someone announces a new line of 3D-printed footwear. Do you really want your next pair of sneakers to come out of a nozzle? Most of the shoes are either limited editions or fail to become very popular.

First World Problem

You might be thinking, “Really? Is this a problem that 3D printing is uniquely situated to solve?” You might assume that this is just some funny designs on some of the 3D model download sites. But no. Adidas, Nike, and Puma have shoes that are at least partially 3D printed. We have to ask why.

We are pretty happy with our shoes just the way they are. But we will admit, if you insist on getting a perfectly fitting shoe, maybe having a scan of your foot and a custom or semi-custom shoe printed is a good idea. Zellerfeld lets you scan your feet with your phone, for example. [Stefan] at CNC Kitchen had a look at those in a recent video. The company also has many partnerships, so when you hear that Hugo Boss, Mallet London, and Sean Wotherspoon have a 3D-printed shoe, it might actually be their design from Zellerfeld.

Or, try a VivoBiome sandal. We aren’t sold on the idea that we can’t buy shoes off the rack, but custom fits might make a little sense. We aren’t sure about 3D-printed bras, though.

Maybe the appeal of 3D-printed shoes lies in their personalization? Printing your own shoes might make sense, since you could change their appearance or otherwise customize them, experimenting with different materials, colors, or subtle design changes. Nothing like 30 hours of printing and three filament changes to make one shoe. And that doesn’t explain why the majors are doing it.

Think of the Environment!

There is one possible plus to printing shoes. According to industry sources, more than 20 billion pairs of shoes are made every year, and almost all will end up in landfills. Up to 20% of these shoes will go straight to the dump without being worn even once.

So maybe you could argue that making shoes on demand would help reduce waste. We know of some shoe companies that offer you a discount if you send in an old pair for recycling, although we don’t know if they use them to make new shoes or not. Your tolerance for how much you are willing to pay might correlate to how much of a problem you think trash shoes really are.

But mass-market 3D-printed shoes? What’s the appeal? If you’re desperate for status, consider grabbing a pair of 3D-printed Gucci shoes for around $1,300. But for most of us, are you planning on dropping a few bucks on a pair of 3D-printed shoes? Why or why not? Let us know in the comments.

If you are imagining the big guys printing shoes on an Ender 3, that’s probably not the case. The shoes we’ve seen are made on big commercial printers.

An Emulated Stroll Down Macintosh Memory Lane

10 July 2025 at 11:00
Screenshot of "Frame of Preference"

If you’re into Macs, you’ll always remember your first. Maybe it was the revolutionary classic of 1984 fame, perhaps it was the adorable G3 iMac in 1998, or even a shiny OS X machine in the 21st century. Whichever it is, you’ll find it emulated in [Marcin Wichary]’s essay “Frame of preference: A history of Mac settings, 1984–2004” — an exploration of the control panel and its history.

Image of PowerBook showing the MacOS 8.0 desktop.
That’s not a photograph, it’s an emulator. (At least on the page. Here, it’s a screenshot.)

[Marcin] is a UI designer as well as an engineer and tech historian, and his UI chops come out in full force, commenting and critiquing Cupertino’s coercions. The writing is excellent, as you’d expect from the man who wrote the book on keyboards, and it provides a fascinating look at the world of retrocomputing through the eyes of a designer. That design-focused outlook is very apropos for Apple in particular. (And NeXT, of course, because you can’t tell the story of Apple without it.)

There are ten emulators on the page, provided by [Mihai Parparita] of Infinite Mac. It’s like a virtual museum with a particularly knowledgeable tour guide — and it’s a blast getting hands-on with the design changes being discussed. There’s a certain amount of gamification, with each system having suggested tasks and a completion score when you finish reading. There are even Easter eggs.

This is everything we wish the modern web was like: the passionate deep-dives of personal sites on the Old Web, but enhanced and enabled by modern technology. If you’re missing those vintage Mac days and don’t want to explore them in browser, you can 3D print your own full-size replica, or a doll-sized picoMac.

 

Generatively-Designed Aerospike Test Fired

10 July 2025 at 08:00

The aerospike engine holds great promise for spaceflight, but for various reasons, has remained slightly out of reach for decades. But thanks to Leap 71, the technology has moved one step closer to a spacecraft near you with the test fire of their generatively-designed, 3D printed aerospike.

We reported on the original design process of the engine, but at the time it hadn’t been given a chance to burn its liquid oxygen and kerosene fuel. The special sauce was the application of a computational physics model to tackle the complex issue of keeping the engine components cool enough to function while directing 3,500 °C exhaust around the eponymous spike.

Printed via a powder bed process out of CuCrZr, cleaned, heat treated, and then prepped by the University of Sheffield’s Race 2 Space Team, the rocket produced 5,000 Newtons (1,100 lbf) of thrust during its test fire. For comparison, VentureStar, the ill-fated aerospike single stage to orbit project from the 1990s, was projected to produce more than 1,917 kilonewtons (431,000 lbf) from each of its seven RS-2200 engines. Leap 71 obviously has some scaling up to do before this can propel any crewed spacecraft.
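To put those thrust figures side by side, here’s a quick back-of-the-envelope comparison using only the numbers quoted above:

```python
# Back-of-the-envelope scale comparison, using the figures quoted above
leap71_thrust_n = 5_000        # Leap 71 test fire: ~5,000 N (1,100 lbf)
rs2200_thrust_n = 1_917_000    # projected thrust of one VentureStar RS-2200

ratio = rs2200_thrust_n / leap71_thrust_n
print(f"One RS-2200 would out-thrust the test engine by ~{ratio:.0f}x")
```

That gap of a few hundred times is the “scaling up” in question.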

If you want to build your own aerospike or 3D printed rocket nozzles we encourage you to read, understand, and follow all relevant safety guidelines when handling your rockets. It is rocket science, after all!

Solder Smarts: Hands-Free Fume Extractor Hack

10 July 2025 at 05:00
fume extractor

[Ryan] purchased a large fume extractor designed to sit on the floor below the work area and pull solder fumes down into its filtering elements. The only drawback to this new filter was that its controls were located near his feet. Rather than kicking at his new equipment, he devised a way to automate it.

By adding a Wemos D1 Mini microcontroller running ESPHome, a relay board, and a small AC-to-DC transformer, [Ryan] can now control the single push button used to cycle through speed settings wirelessly. Including the small transformer inside was a clever touch, as it allows the unit to require only a single power cable while keeping all the newfound smarts hidden inside.

The relay is wired in parallel with the physical button, so the button still works. Now that the extractor is integrated with Home Assistant, he can automate it. The fan can be controlled from his phone, but even better, he automated it by monitoring the power draw on the smart outlet his soldering iron is plugged into. When he turns on his iron, the fume extractor kicks in automatically.
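The power-draw trigger boils down to simple hysteresis logic. Here is a minimal sketch of that logic in Python; the actual automation lives in Home Assistant, and the wattage thresholds below are illustrative guesses, not values from the build:

```python
class FumeExtractorAutomation:
    """Turn the extractor on when the soldering iron draws real power.

    Thresholds are illustrative: an idle smart plug reads near zero
    watts, while a heating iron draws tens of watts.
    """

    def __init__(self, on_watts=10.0, off_watts=2.0):
        self.on_watts = on_watts    # above this, the iron is considered on
        self.off_watts = off_watts  # below this, the iron is considered off
        self.fan_on = False

    def update(self, iron_power_watts: float) -> bool:
        # Hysteresis: separate on/off thresholds prevent rapid toggling
        # as the iron's heater cycles around a single threshold.
        if not self.fan_on and iron_power_watts >= self.on_watts:
            self.fan_on = True
        elif self.fan_on and iron_power_watts <= self.off_watts:
            self.fan_on = False
        return self.fan_on
```

The two-threshold approach matters because a temperature-controlled iron pulses its heater, so a single cutoff would flap the fan on and off.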

Check out some other great automations we’ve featured that take over mundane tasks.

Volume Controller Rejects Skeuomorphism, Embraces the Physical

10 July 2025 at 02:00

The volume slider on our virtual desktops is a skeuomorphic callback to the volume sliders on professional audio equipment on actual, physical desktops. [Maker Vibe] decided that this skeuomorphism was so last century, and made himself a physical audio control box for his PC.

Since he has three audio outputs he needs to consider, the peripheral he creates could conceivably be called a fader. It certainly has that look, anyway: each output is controlled by a volume slider — connected to a linear potentiometer — and a mute button. Seeing a linear potentiometer used for volume control threw us for a second, until we remembered this was for the computer’s volume control, not an actual volume control circuit. The computer’s volume slider already does the logarithmic conversion. A Seeed Studio Xiao ESP32S3 lives at the heart of this thing, emulating a Bluetooth gamepad using a library by LemmingDev. A trio of LEDs round out the electronics to provide an indicator for which audio channels are muted or active.

Those Bluetooth signals are interpreted by a Python script feeding software called Voicemeeter Banana, because [Maker Vibe] uses Windows, and Redmond’s finest operating system doesn’t expose audio controls in an easily accessible way. Voicemeeter Banana (and its attendant Python script) takes care of telling Windows what to do.
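Much of a glue script like that is just mapping raw slider positions onto gain values. A minimal sketch, assuming a 10-bit pot reading and a −60 dB floor (both assumptions, not details from the build); the actual parameter write goes through the Voicemeeter Remote API, so it is only hinted at in a comment:

```python
def slider_to_gain_db(raw: int, raw_max: int = 1023,
                      min_db: float = -60.0, max_db: float = 0.0) -> float:
    """Map a linear potentiometer reading onto a mixer gain in dB.

    The 10-bit ADC range and -60 dB floor are assumptions for this
    sketch, not values taken from [Maker Vibe]'s script.
    """
    raw = max(0, min(raw, raw_max))          # clamp out-of-range readings
    return min_db + (max_db - min_db) * (raw / raw_max)

# The resulting value would then be written to a Voicemeeter strip
# parameter (e.g. a strip's Gain) through whatever Voicemeeter Remote
# binding the script uses -- omitted here, as the exact call varies.
```

Since the OS-side slider handles the logarithmic feel, a plain linear mapping from the linear pot is all the script needs.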

The whole setup lives on [Maker Vibe]’s desk in a handsome 3D printed box. He used a Cricut vinyl cutter to cut out masks so he could airbrush different colours onto the print after sanding down the layer lines. That’s another one for the archive of how to make front panels.

If volume sliders aren’t doing it for you, perhaps you’d prefer to control your audio with a conductor’s baton. 

How To Train A New Voice For Piper With Only A Single Phrase

9 July 2025 at 23:00

[Cal Bryant] hacked together a home automation system years ago, which more recently utilizes Piper TTS (text-to-speech) voices for various undisclosed purposes. Not satisfied with the robotic-sounding standard voices available, [Cal] set about an experiment to fine-tune the Piper TTS AI voice model using a clone of a single phrase created by a commercial TTS voice as a starting point.

Before the release of Piper TTS in 2023, existing free-to-use TTS systems such as espeak and Festival sounded robotic and flat. Piper delivered much more natural-sounding output, without requiring massive resources to run. To change the voice style, the Piper AI model can be either retrained from scratch or fine-tuned with less effort. In the latter case, the problem to be solved first was how to generate the necessary volume of training phrases to run the fine-tuning of Piper’s AI model. This was solved using a heavyweight AI model, Chatterbox, which is capable of so-called zero-shot training. Check out the Chatterbox demo here.

As the loss function gets smaller, the model’s accuracy gets better

Training began with a corpus of test phrases in text format to ensure decent coverage of everyday English. [Cal] used ChatterBox to clone audio from a single test phrase generated by a ‘mystery TTS system’ and created 1,300 test phrases from this new voice. This audio set served as training data to fine-tune the Piper AI model on the lashed-up GPU rig.

To verify accuracy, [Cal] used OpenAI’s Whisper software to transcribe the audio back to text, in order to compare with the original text corpus. To overcome issues with punctuation and differences between US and UK English, the text was converted into phonemes using espeak-ng, resulting in a 98% phrase matching accuracy.
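The scoring step reduces to comparing phoneme sequences phrase by phrase. A minimal sketch, assuming both the reference corpus and the Whisper transcriptions have already been converted to phoneme strings (e.g. by espeak-ng):

```python
def phrase_match_accuracy(ref_phonemes: list[str],
                          hyp_phonemes: list[str]) -> float:
    """Fraction of phrases whose phoneme sequences match exactly.

    Comparing phonemes rather than raw text sidesteps punctuation
    differences and US/UK spelling variants, as described above.
    """
    if len(ref_phonemes) != len(hyp_phonemes):
        raise ValueError("phrase lists must be the same length")
    matches = sum(r == h for r, h in zip(ref_phonemes, hyp_phonemes))
    return matches / len(ref_phonemes)
```

A score near 1.0 indicates the synthesized audio is intelligible enough that transcription round-trips cleanly; [Cal]’s 98% corresponds to a return value of 0.98 here.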

After down-sampling the training set using SoX, it was ready for the Piper TTS training system. Despite all the preparation, running the software felt anticlimactic. A few inconsistencies in the dataset necessitated the removal of some data points. After five days of training, with the rig parked outside in the shade due to concerns about heat, TensorBoard indicated that the model’s loss function was converging. That’s AI-speak for: the model was tuned and ready for action! We think it sounds pretty slick.

If all this new-fangled AI speech synthesis is too complex and, well, a bit creepy for you, may we offer a more 1980s solution to making stuff talk? Finally, most people take the ability to speak for granted, until they can no longer do so. Here’s a team using cutting-edge AI to give people back that ability.

No Tension for Tensors?

9 July 2025 at 20:00

We always enjoy [FloatHeadPhysics] explaining any math or physics topic. We don’t know if he’s acting or not, but he seems genuinely excited about every topic he covers, and it is infectious. He also has entertaining imaginary conversations with people like Feynman and Einstein. His recent video on tensors begins by showing the vector form of Ohm’s law, making it even more interesting. Check out the video below.

If you ever thought you could use fewer numbers for many tensor calculations, [FloatHeadPhysics] had the same idea. Luckily, imaginary Feynman explains why this isn’t right, and the answer shows the basic nature of why people use tensors.

The spoiler: vectors and even scalars are just a special case of tensors, so you use tensors all the time, you just don’t realize it. He works through other examples, including an orbital satellite and a hydroelectric dam.

We love videos that help us have aha moments about complex math or physics. It is easy to spew formulas, but there’s no substitute for having a “feeling” about how things work.

The last time we checked in with [FloatHeadPhysics], he convinced us we were already travelling at the speed of light. We’ve looked at a simple tensor explainer before, if you want a second approach.

Yesterday — 9 July 2025 — Hackaday

Crunching The News For Fun And Little Profit

By: Jenny List
9 July 2025 at 14:00

Do you ever look at the news, and wonder about the process behind the news cycle? I did, and for the last couple of decades it’s been the subject of one of my projects. The Raspberry Pi on my shelf runs my word trend analysis tool for news content, and since my journey from curious geek to having my own large corpus analysis system has taken twenty years it’s worth a second look.

How Career Turmoil Led To A Two Decade Project

A hanging sign surrounded by ornate metalwork, with the legend "Cyder house".
This is very much a minority spelling. Colin Smith, CC BY-SA 2.0.

In the middle of the 2000s I had come out of the dotcom crash mostly intact, and was working for a small web shop. When they went bust I was casting around as one does, and spent a while as a Google quality rater while I looked for a new permie job. These teams are employed by the search giant through temporary employment agencies, and in loose terms their job is to be the trained monkeys against whom the algorithm is tested. The algorithm chose X, and if the humans also chose X, the algorithm is probably getting it right. Being a quality rater is not in any way a high-profile job, but with the big shiny G on my CV I soon found myself in demand from web companies seeking some white-hat search engine marketing expertise. What I learned mirrored my lesson from a decade earlier in the CD-ROM business, that on the web as in any other electronic publishing medium, good content well presented has priority over any black-hat tricks.

But what makes good content? Forget an obsession with stuffing bogus keywords into the text, and instead talk about the right things, and do so authoritatively. What are the right things in this context? If you are covering a subject, you need to do so using the right language: that which the majority uses, rather than language only you use. I can think of a bunch of examples which I probably shouldn’t talk about, but an example close to home for me comes in cider. In the UK, cider is a fermented alcoholic drink made from apples, and as a craft cidermaker of many years’ standing I have a good grasp of its vocabulary. The accepted spelling is “Cider”, but there’s an alternate spelling of “Cyder” used by some commercial producers of the drink. It doesn’t take long to realise that online, hardly anyone uses cyder with a Y, and thus pages concentrating on that word will do less well than those talking about cider.

A graph of the word football versus the word soccer in British news.
We Brits rarely use the word “soccer” unless there’s a story about the Club World Cup in America.

I started to build software to analyse language around a given topic, with the aim of discerning the metaphorical cider from the cyder. It was a great surprise a few years later to discover that I had invented for myself the already-existing field of computational linguistics, something that would have saved me a lot of time had I known about it when I began. I was taking a corpus of text and computing the frequencies and collocates (words that appear alongside each other) of the words within it, and from that I could quickly see which wording mattered around a subject, and which didn’t. This led seamlessly to an interest in what the same process would look like for news data with a time axis added, so I created a version which harvested its corpus from RSS feeds. Thus began my decades-long project.
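The core of frequency-and-collocate counting is compact. Here is a simplified sketch (naive whitespace tokenization, nothing like a full computational-linguistics pipeline):

```python
from collections import Counter


def frequencies(text: str) -> Counter:
    """Count word occurrences in a naively tokenized text."""
    return Counter(text.lower().split())


def collocates(text: str, target: str, window: int = 2) -> Counter:
    """Count words appearing within `window` positions of `target`.

    This is the 'words that appear alongside each other' measure:
    for each occurrence of the target, its neighbours are tallied.
    """
    words = text.lower().split()
    counts = Counter()
    for i, w in enumerate(words):
        if w == target:
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            counts.update(words[lo:i] + words[i + 1:hi])
    return counts
```

Run over a large enough corpus, the collocate counts are what separate the metaphorical cider from the cyder: the words that genuinely travel with a topic rise to the top.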

From Project Idea, To Corpus Appliance

In 2005 I knew how to create websites in the manner of the day, so I used the tools I had: PHP 5 and MySQL. I know PHP is unfashionable these days, but at the time this wasn’t too controversial, and aside from all the questionable-quality PHP code out there it remains a useful scripting language. Using MySQL, however, would cause me immense problems. I had done what seemed the right thing and created a structured database with linked tables, but I hadn’t fully appreciated just how huge the task I had taken on was. Harvesting the RSS firehose across multiple media outlets brings in thousands of stories every week, so queries which were near-instantaneous during my first development stages grew to take many minutes as my corpus expanded. It was time to come up with an alternative, and I found it in the most basic of OS features: the filesystem.

A graph of the words cat and dog in British news.
I have no idea why British news has more dog stories than cat stories.

Casting back to the 1990s, when you paid for web hosting it was priced in terms of the storage space it came with. The processing power required to run your CGI scripts, or later server-side interpreters such as ASP or PHP, wasn’t considered. It thus became normal practice to try to reduce storage use and not think about processing, and I had, without thinking, followed this path.

But by the 2000s the price of storage had dropped hugely while that of processing hadn’t. This was the decade in which cloud services such as AWS made an appearance, and as well as buying many-gigabyte hard disks for not a lot, you could also for the first time rent a cloud bucket for pennies. My corpus analysis system didn’t need to spend all its time computing if I could use a terabyte hard drive to make up for less processor usage, so I turned my system on its head. When collecting the RSS stories my retrieval script would pre-compute the final data and store it in a vast tree of tiny JSON files accessible at high speed through the filesystem, and then my analysis software could simply retrieve them and make its report. The system moved from a hard-working x86 laptop to a whisper-quiet and low powered Raspberry Pi with a USB hard disk, and there it has stayed in some form ever since.
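The precompute-and-store idea can be sketched in a few lines. The two-level directory sharding below is an illustrative layout, not necessarily the one the author’s system uses:

```python
import json
from pathlib import Path


def datapoint_path(root: Path, word: str, date: str) -> Path:
    """Shard files by the word's first letters so no directory gets huge.

    The two-level sharding here is an illustrative layout; the point is
    that millions of tiny files need spreading across directories.
    """
    first = word[:1] or "_"
    second = word[:2] or "_"
    return root / first / second / word / f"{date}.json"


def store_count(root: Path, word: str, date: str, count: int) -> None:
    """Precompute at harvest time: write one tiny JSON file per datapoint."""
    path = datapoint_path(root, word, date)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"word": word, "date": date, "count": count}))


def load_count(root: Path, word: str, date: str) -> int:
    # At report time, a single small-file read replaces what was once
    # a multi-minute SQL query over linked tables.
    return json.loads(datapoint_path(root, word, date).read_text())["count"]
```

The trade is exactly the one described above: cheap disk absorbs the precomputed results so the Raspberry Pi barely has to compute anything at query time, at the cost of watching your inode count.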

Just What Can This Thing Do?

A bubble cloud for the week of 2016-06-23, when the UK Brexit referendum happened. Big words are EU, Brexit, referendum, leave, and vote.
No prizes for guessing what happened this week.

So I have a news corpus that has taken me a long time to build. I can take one or more words, and I can compare their occurrence over time. I can watch the news cycle, I can see stories build up over time. I can even see trends which sometimes go against received opinion, such as spotting that the eventual winner of the 2016 UK Labour leadership race was likely to be Jeremy Corbyn early on while the herd were looking elsewhere. Sometimes as with the performance of the word “Brexit” over the middle of the last decade I can see the great events of our times in stark relief, but perhaps it’s in the non-obvious that there’s most value. If you follow a topic and it suddenly dries up for a couple of days, expect a really big story on day three, for example. I can also see which outlets cover one story more than another, something helpful when trying to ascertain if a topic is being pushed on behalf of a particular lobby.

My experiment in text analysis then turned into something much more, even dare I say it, something I find of help in figuring out what’s really going on in turbulent times. But from a tech point of view it’s taught me a huge amount, about statistics, about language, about text parsing, and even about watching the number of available inodes on a hard drive. Believe me, many millions of tiny files in a tree can become unwieldy. But perhaps most of all, after a lifetime of mucking about with all manner of projects but generating little of lasting significance, I can look at this one and say I created something useful. And that is something to be happy about.

PIC Burnout: Dumping Protected OTP Memory in Microchip PIC MCUs

By: Maya Posch
9 July 2025 at 11:00

Normally you can’t read out the One Time Programming (OTP) memory in Microchip’s PIC MCUs that have code protection enabled, but an exploit has been found that gets around the copy protection in a range of PIC12, PIC14 and PIC16 MCUs.

This exploit is called PIC Burnout, and was developed by [Prehistoricman], with the cautious note that although this process is non-invasive, it does damage the memory contents. This means that you likely will only get one shot at dumping the OTP data before the memory is ‘burned out’.

The copy protection normally returns scrambled OTP data, with an example of PIC Burnout provided for the PIC16LC63A. After entering programming mode by setting the ICSP CLK pin high, an excessively high programming voltage and duration are applied repeatedly while checking whether an area that normally reads as zero now reads back proper data. After this, the OTP should be read out repeatedly to ensure that the scrambling has been circumvented.

The trick appears to be that while there’s over-voltage and similar protections on much of the Flash, this approach can still be used to affect the entire flash bit column. Suffice it to say that this method isn’t very kind to the Flash memory cells and can take hours to get a good dump. Even after this you need to know the exact scrambling method used, which is fortunately often documented by Microchip datasheets.
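The post-processing side of such a dump can be sketched in Python. Both the bit-reordering descramble and the majority vote over repeated reads below are illustrative stand-ins: the real scrambling scheme is whatever the Microchip datasheet documents for your particular part:

```python
def descramble_word(word: int, bit_order: list[int]) -> int:
    """Reassemble a 14-bit program word from scrambled bit positions.

    bit_order[i] names the scrambled bit position that maps to output
    bit i. The table passed in is hypothetical -- consult the datasheet
    for the actual scheme used by a given PIC.
    """
    out = 0
    for i, src in enumerate(bit_order):
        out |= ((word >> src) & 1) << i
    return out


def majority_vote(dumps: list[list[int]]) -> list[int]:
    """Combine several noisy read-outs of the same memory, bit by bit.

    Since the burnout process takes hours and degrades the cells,
    voting across repeated dumps helps recover a consistent image.
    """
    n_words = len(dumps[0])
    result = []
    for addr in range(n_words):
        word = 0
        for bit in range(14):  # 14-bit program words on these PICs
            ones = sum((d[addr] >> bit) & 1 for d in dumps)
            if ones * 2 > len(dumps):
                word |= 1 << bit
        result.append(word)
    return result
```
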

Thanks to [DjBiohazard] for the tip.

Programming Like It’s 1986, For Fun and Zero Profit

9 July 2025 at 08:00
screenshot of C programming on Macintosh Plus

Some people slander retrocomputing as an old man’s game, just because most of those involved are more ancient than the hardware they’re playing with. But there are veritable children involved too — take [ComputerSmith], who is recreating Conway’s Game of Life on a Macintosh Plus that could very well be as old as his parents. If there’s any nostalgia here, it’s at least a generation removed — thus proving to the haters that there’s more than a misplaced desire to relive one’s youth in exploring these ancient machines.

So what does a young person get out of programming on a 1980s Mac? Well, aside from internet clout and possible YouTube monetization, there’s the sheer intellectual challenge of the thing. You can’t go sniffing around StackExchange or LLMs for code to copy-paste when writing C for a 1986 machine, not if you’re going to be fully authentic. ANSI C only dates to 1987, after all, and figuring out the quirks and foibles of the specific C implementation is both half the fun and not easily outsourced. Object Pascal would also have been an option (and quite likely more straightforward — at least the language was clearly defined), but [ComputerSmith] seems to think the exercise will improve his chops with C, and he’s likely to be right.

Apparently [ComputerSmith] brought this project to VCF Southwest, so anyone who was there doesn’t have to wait for Part 2 of the video to see how this turns out, or to snag a copy of the code (which was apparently available on diskette). If you were there, let us know if you spotted the youngest Macintosh Plus programmer, and if you scored a disk from him.

If the idea of coding in this era tickles the dopamine receptors, check out this how-to for a prizewinning Amiga demo.  If you think pre-ANSI C isn’t retro enough, perhaps you’d prefer programming by card?

Five-minute(ish) Beanie is the Fastest We’ve Seen Yet

9 July 2025 at 05:00

Yes, you read that right– not benchy, but beanie, as in the hat. A toque, for those of us under the Maple Leaf. It’s not 3D printed, either, except perhaps by the loosest definition of the word: it is knit, by [Kevr102]’s motorized turbo knitter.

The turbo-knitter started life as an Addi Express King knitting machine. These circular knitting machines are typically crank-operated, functioning with a cam that turns around to raise and lower special hooked needles that grab and knit the yarn. This particular example was not in good working order when [Kevr102] got a hold of it. Rather than a simple repair, they opted to improve on it.

A 12 volt motor with a printed gear and mount served to motorize the machine. The original stitch counter proved a problem, so it was replaced with an Arduino Nano and a Hall effect sensor driving a 7-digit display. In theory, the Arduino could be interfaced with the motor controller and set to run the motor for a specific number of stitches, but in practice there’s no point, as the machine needs to be babysat to maintain tension and avoid dropping stitches and the like. Especially, we imagine, when it runs fast enough to crank out a hat in under six minutes. Watch it go in the oddly cropped demo video embedded below.

Five minutes would still be a very respectable time for benchy, but it’s not going to get you on the SpeedBoatRace leaderboards against something like the minuteman we covered earlier.

If you prefer to take your time, this knitting machine clock might be more your fancy. We don’t see as many fiber arts hacks as perhaps we should here, so if you’re tangled up in anything interesting in that scene, please drop us a line.

 

Oscillator Negativity is a Good Thing

9 July 2025 at 02:00

Many people who get analog electronics still struggle a bit to design oscillators. Even common simulators often need a trick to simulate some oscillating circuits. The Barkhausen criteria state that for stable oscillation, the loop gain must be one, and the phase shift around the feedback loop must be a multiple of 360 degrees. [All Electronics Channel] provides a thorough exploration of oscillators and, specifically, negative resistance, which is punctuated by practical measurements using a VNA. Check it out in the video below.

The video does have a little math and even mentions differential equations, but don’t worry. He points out that the universe solves the equation for you.

In an LC circuit, you can consider the losses in the circuit as a resistor. That makes sense. No component is perfect. But if you could provide a negative resistance, it would cancel out the parasitic resistance. With no loss, the inductor and capacitor will go back and forth, electrically, much like a pendulum.
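You can watch this happen numerically. The sketch below integrates a series RLC loop with semi-implicit Euler; with the parasitic resistance left in, the swing decays, while a perfectly cancelled (net-zero) resistance leaves the pendulum ringing:

```python
def lc_final_amplitude(R: float, L: float = 1.0, C: float = 1.0,
                       q0: float = 1.0, dt: float = 1e-3,
                       steps: int = 20_000) -> float:
    """Simulate a series RLC loop and return the late-time charge swing.

    Equations: L*di/dt + R*i + q/C = 0,  dq/dt = i.
    Semi-implicit Euler (update i, then q) keeps the lossless case
    from artificially gaining or losing energy.
    """
    q, i = q0, 0.0
    peak = 0.0
    for n in range(steps):
        i += dt * (-(R * i + q / C) / L)
        q += dt * i
        if n > steps * 3 // 4:          # track the swing near the end
            peak = max(peak, abs(q))
    return peak


# Parasitic loss makes the oscillation die away; a cancelling negative
# resistance (net R = 0) leaves the amplitude essentially untouched.
lossy = lc_final_amplitude(R=0.1)
cancelled = lc_final_amplitude(R=0.0)
```

The component values here are normalized (L = C = 1) purely for illustration; the comparison between the two runs, not the absolute numbers, is the point.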

So, how do you get a negative resistance? You’ll need an active device. He presents some example oscillator architectures and explains how they generate negative resistances.

Crystals are a great thing to look at with a VNA. That used to be a high-dollar piece of test gear, but not anymore.

View a Beehive Up Close with this 3D Printed Hive

By: Ian Bos
8 July 2025 at 23:00
3 yellow modules are connected with bees filling 2 out of 3

Bees are incredible insects that live and die for their hive, producing rich honey in complicated hive structures. The problem is that, as the average beekeeper, you wouldn’t see much of these intricate structures without disturbing the hive. So why not 3D print an observation hive? With [Teddy Hatcher]’s 3D printing creativity, that is exactly what he did.

A yellow 3D printed hexagonal panel

Hexagonal sections allow for viewing of entire panels of hexagonal cells, growing new workers, and storing the rich syrup we all enjoy. Each module has two cell panels, giving depth to the hive for heat/humidity gradients. The rear of a module has a plywood backing and an acrylic front for ample viewing. [Teddy] uses three modules plus a Flow Hive for a single colony, enough room for more bees than we here at Hackaday would ever consider letting in the front door.

As with many 3D printed projects involving food or animals, there are questions about health down the line. Plastic can bio-accumulate in hives, which is a valid concern for anyone wanting to add the honey to their morning coffee. On the other hand, the printed plastic is not what the honey is added to, nor what the actual cell panels are made from; the honey itself is collected from the connected Flow Hive rather than from anything directly in contact with 3D printed plastic.

Beehives might not always need a fancy 3D printed enclosure; the standard wooden crates seem to work just fine for most, but there’s a time and place for some bio-ingenuity. Conditions in a hive might vary, creating problems for your honey production, so you’d better check out this monitoring system dedicated to just that!

Thanks to [George Graves] for the tip!

Better Solid State Heat Pumps Through Science

8 July 2025 at 20:00

If you need to cool something, the gold standard is using a gas compressor arrangement. Of course, there are definite downsides to that, like weight, power consumption, and vibrations. There are solid-state heat pumps — the kind you see in portable coolers, for example. But they are not terribly efficient and have limited performance.

However, researchers at Johns Hopkins, working with Samsung, have developed a new thin-film thermoelectric heat pump, which they claim is easy to fabricate, scalable, and significantly more efficient. You can see a video about the new research below.

Manufacturing requires similar processes to solar cells, and the technology can make tiny heat pumps or — in theory — coolers that could provide air conditioning for large buildings. You can read the full paper in Nature.

CHESS stands for Controlled Hierarchically Engineered Superlattice Structures. These are nano-engineered thin-film superlattices (around 25 μm thick) whose design optimizes their performance in this application.

The new devices claim to be 100% more efficient at room temperature than traditional devices. In practical devices, thermoelectric devices and the systems using them have improved by around 70% to 75%. The material can also harvest power from heat differences, such as body heat. The potential small size of devices made with this technology would make them practical for wearables.

We’ve looked at the traditional modules many times. They sometimes show up in cloud chambers.

Day Before Yesterday — Hackaday

The End Of The Hackintosh Is Upon Us

By: Lewin Day
8 July 2025 at 14:00

From the very dawn of the personal computing era, the PC and Apple platforms have gone very different ways. IBM compatibles surged in popularity, while Apple was able to more closely guard the Macintosh from imitators wanting to duplicate its hardware and run its software.

Things changed when Apple announced it would hop aboard the x86 bandwagon in 2005. Soon enough was born the Hackintosh. It was difficult, yet possible, to run MacOS on your own computer built with the PC parts your heart desired.

Only, the Hackintosh era is now coming to an end. With the transition to Apple Silicon all but complete, MacOS will abandon the Intel world once more.

End Of An Era

macOS Tahoe is slated to drop later this year. Credit: Apple

2025 saw the 36th Worldwide Developers Conference take place in June, and with it came the announcement of macOS Tahoe. The latest version of Apple’s full-fat operating system will offer more interface customization, improved search features, and an attractive new ‘Liquid Glass’ design language. More critically, however, it will also be the last version of modern macOS to support Apple’s now-aging line of x86-based computers.

The latest OS will support Apple Silicon machines as well as a small list of older Intel Macs. Namely, if you’ve got anything with an M1 or newer, you’re on board. If you’re Intel-based, though, you might be out of luck. It will run on the 16-inch MacBook Pro from 2019, as well as the 13-inch MacBook Pro from 2020, but only the model with four Thunderbolt 3 ports. It will also support iMacs and Mac Minis from 2020 or later. As for the Mac Pro, you’ll need one from 2019 or later, or 2022 or later for the Mac Studio.

Basically, beyond the release of Tahoe, Apple will stop releasing versions of its operating system for x86 systems. Going forward, it will only be compiling macOS for ARM-based Apple Silicon machines.

How It Was Done

Of course, it’s worth remembering that Apple never wanted random PC builders to be able to run macOS to begin with. Yes, it will eventually stop making an x86 version of its operating system, but it had already gone to great lengths trying to stop macOS from running on non-authorized hardware. The dream of a Hackintosh was to build a powerful computer on the cheap, without having to pay Apple’s exorbitant prices for things like hard drive, CPU, and memory upgrades. However, you always had to jump through hoops, using hacks to fool macOS into running on a computer that Apple never built.

Installing macOS on a PC takes some doing.

Getting a Hackintosh running generally involved pulling down special patches crafted by a dedicated community of hackers. Soon after Apple started building x86 machines, hackers rushed to circumvent security features in what was then called Mac OS X, allowing it to run on non-Apple-approved machines. The first patches landed just over a month after the first x86 Macs shipped. Each subsequent Apple update to OS X locked things down further, only for the community to release new patches unlocking the operating system in quick succession. Sometimes this involved emulating the EFI subsystem that contemporary Macs used in place of a traditional PC’s BIOS. Sometimes it involved tweaking the kernel to stick to older SSE2 instructions when Apple’s use of SSE3 instructions stopped the operating system from running on older hardware. Depending on the precise machine you were building, and the version of OS X or macOS that you hoped to run, you’d use different patches or hacks to get your machine booting, installing, and running the operating system.

Hackintosh communities maintain lists of bugs and things that don’t work quite right—no surprise given Apple’s developers put little thought into making their OS work on unofficial hardware. Credit: eliteMacx86.com via Screenshot

Running a Hackintosh often involved dealing with limitations. Apple’s operating system was never intended to run on just any hardware, after all. Typical hurdles included having to use specific GPUs or WiFi cards, for example, since broad support for the wide range of PC parts just wasn’t there. Similarly, sometimes certain motherboards wouldn’t work, or would require specific workarounds to make Apple’s operating system happy in a particularly unfamiliar environment.

Of course, you can still build a Hackintosh today. Instructions exist for installing and running macOS Sequoia (macOS 15), macOS Sonoma (macOS 14), as well as a whole host of earlier versions all the way back to when it was still called Mac OS X. When macOS Tahoe drops later this year, the community will likely work to make the x86 version run on any old PC hardware. Beyond that, though, the story will end, as Apple continues to walk farther into its ARM-powered future.

Ultimately, what the Hackintosh offered was choice. It wasn’t convenient, but if you were in love with macOS, it let you do what Apple said was verboten. You didn’t have to pay for expensive first party parts, and you could build your machine in the manner to which you were accustomed. You could have your cake and eat it too, which is to say that you could run the Mac version of Photoshop because that apparently mattered to some people. Now, all that’s over, so if you love weird modifier keys on your keyboard and a sleek, glassy operating system, you’ll have to pay the big bucks for Apple hardware again. The Hackintosh is dead. Long live Apple Silicon, so it goes.


Touch Lamp Tracks ISS with Style

8 Julio 2025 at 11:00

In the comments of a recent article, the question came up as to where to find projects from the really smart kids the greybeards remember being in the 70s. In the case of [Will Dana] the answer is YouTube, where he’s done an excellent job of producing an ISS-tracking lamp, especially considering he’s younger than almost all of the station’s major components.*

There’s nothing ground-breaking here, and [Will] is honest enough to call out his inspiration in the video. Choosing to make a ground-track display with an off-the-shelf globe is a nice change from the pointing devices we’ve featured most recently. Inside the globe is a pair of stepper motors configured for alt/az control, which means the device must reset every orbit, since [Will] didn’t have slip rings or a 360-degree stepper on hand. A pair of magnets couples the motion system inside the globe to the 3D printed ISS model (with a lovely paint job thanks to [Will]’s girlfriend, who may or may not be from Canada, but did show up in the video to banish your doubts as to her existence), letting it slide magically across the surface. (Skip to the end of the embedded video for a timelapse of the globe in action.) The lamp portion is provided by some LEDs in the base, which are touch-activated thanks to some conductive tape inside the 3D printed base.

It’s all controlled by an ESP32, which fetches the ISS position with a NASA API. Hopefully it doesn’t go the way of the sighting website, but if it does there’s more than enough horsepower to calculate the position from orbital parameters, and we are confident [Will] can figure out the code for that. That should be pretty easy compared to the homebrew relay computer or the animatronic sorting hat we featured from him last year.
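Calculating the position from orbital parameters really is within reach of an ESP32. As a back-of-the-envelope sketch (our own simplification, assuming a circular orbit with rough ISS numbers and ignoring perturbations), the ground track falls out of a little spherical trigonometry:

```python
import math

INCLINATION = math.radians(51.6)  # ISS orbital inclination
PERIOD = 92.9 * 60                # approximate orbital period, seconds
SIDEREAL_DAY = 86164.1            # Earth's rotation period, seconds

def ground_track(t, node_lon=0.0):
    """Approximate sub-satellite latitude/longitude (degrees) at t seconds
    after an ascending-node crossing at longitude node_lon (degrees)."""
    theta = 2 * math.pi * t / PERIOD  # angle travelled along the orbit
    lat = math.asin(math.sin(INCLINATION) * math.sin(theta))
    # Longitude in the (non-rotating) orbital plane...
    lon_inertial = math.atan2(math.cos(INCLINATION) * math.sin(theta),
                              math.cos(theta))
    # ...minus however far Earth has rotated since the node crossing.
    lon = lon_inertial - 2 * math.pi * t / SIDEREAL_DAY + math.radians(node_lon)
    return math.degrees(lat), (math.degrees(lon) + 180) % 360 - 180

lat, lon = ground_track(PERIOD / 4)  # a quarter-orbit past the node
print(round(lat, 1))                 # latitude peaks near the inclination
```

The real fix would propagate published TLEs with SGP4, but this shows why the globe's latitude never exceeds about 51.6 degrees.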

Our thanks to [Will] for the tip. The tip line is for hackers of all ages, but we admit that it’s great to see what the new generation is up to.

*Only the Roll Out Solar Array, unless you only count on-orbit age, in which case the Nauka module would qualify as well.

Managing Temperatures for Ultrafast Benchy Printing

8 Julio 2025 at 08:00
A blue 3DBenchy is visible on a small circular plate extending up through a cutout in a flat, reflective surface. Above the Benchy is a roughly triangular metal 3D printer extruder, with a frost-covered ring around the nozzle. A label below the Benchy reads “2 MIN 03 SEC.”

Commercial 3D printers keep getting faster and faster, but we can confidently say that none of them is nearly as fast as [Jan]’s Minuteman printer, so named for its goal of eventually printing a 3DBenchy in less than a minute. The Minuteman uses an air bearing as its print bed, feeds four streams of filament into one printhead for faster extrusion, and in [Jan]’s latest video, printed a Benchy in just over two minutes at much higher quality than previous two-minute Benchies.

[Jan] found that the biggest speed bottleneck was in cooling a layer quickly enough that it would solidify before the printer laid down the next layer. He was able to get down to about 0.4 to 0.6 seconds per layer, but had trouble going beyond that. He was able to improve the quality of his prints, however, by varying the nozzle temperature throughout the print. For this he used [Salim BELAYEL]’s postprocessing script, which increases hotend temperature when the volumetric flow rate is high, and decreases it when the flow rate is low. This keeps the plastic coming out of the nozzle at an approximately constant temperature. With this, [Jan] could print quite good sub-four and sub-three minute Benchies, with almost no print degradation from the five-minute version. [Jan] predicts that this will become a standard feature of slicers, and we have to agree that this could help even less speed-obsessed printers.
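The core idea of a flow-aware postprocessor is simple enough to sketch: walk the G-code, estimate volumetric flow from each move’s length, extrusion, and feedrate, and splice in M104 temperature changes. This is our own minimal illustration of the concept, not [Salim BELAYEL]’s actual script; the `base`, `per_mm3s`, and `max_temp` tuning values are made up for the sketch:

```python
import re
import math

FILAMENT_AREA = math.pi * (1.75 / 2) ** 2  # mm^2, for 1.75 mm filament

def retemp(gcode, base=210, per_mm3s=3.0, max_temp=250):
    """Insert M104 commands so hotend temperature tracks volumetric flow."""
    out, x, y, feed, last_temp = [], 0.0, 0.0, 1800.0, None
    for line in gcode.splitlines():
        if re.match(r"G1\b", line):
            coords = dict(re.findall(r"([XYEF])([-\d.]+)", line))
            feed = float(coords.get("F", feed))
            nx, ny = float(coords.get("X", x)), float(coords.get("Y", y))
            e = float(coords.get("E", 0))
            dist = math.hypot(nx - x, ny - y)
            if dist > 0 and e > 0:
                secs = dist / (feed / 60)        # duration of this move
                flow = e * FILAMENT_AREA / secs  # volumetric flow, mm^3/s
                temp = min(max_temp, round(base + per_mm3s * flow))
                if temp != last_temp:            # only emit on change
                    out.append(f"M104 S{temp}")
                    last_temp = temp
            x, y = nx, ny
        out.append(line)
    return "\n".join(out)

# A fast perimeter move followed by a slow detail move:
print(retemp("G1 X50 Y0 E4 F3000\nG1 X50 Y1 E0.1 F600"))
```

A production version would also need to account for the hotend’s thermal lag by issuing the temperature change a few moves ahead of time, which is where the real cleverness lives.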

Now onto less generally-applicable optimizations: [Jan] still needed stronger cooling to get faster prints, so he designed a circular duct that directed a plane of compressed air horizontally toward the nozzle, in the manner of an air knife. This wasn’t quite enough, so he precooled his compressed air with dry ice. This made it both colder and denser, both of which made it a better coolant. The thermal gradient this produced in the print bed seemed to cause it to warp, making bed adhesion inconsistent. However, it did increase build quality, and [Jan]’s confident that he’s made the best two-minute Benchy yet.

If you’re curious about Minuteman’s motion system, we’ve previously looked at how that was built. Of course, it’s also possible to speed up prints by simply adding more extruders.

When is a synth a woodwind? When it’s a Pneumatone

8 Julio 2025 at 04:47

Ever have one of those ideas that’s just so silly, you just need to run with it? [Chris] from Sound Workshop ran into that when he had the idea that became the Pneumatone: a woodwind instrument that plays like a synth.

In its 3D printed case, it looks like a giant polyphonic analog synth, but under the plastic lies a pneumatic heart: the sound is actually being made by slide whistles. We always thought of the slide whistle as a bit of a gag instrument, but this might change our minds. The sliders on the synth-box obviously couple to the sliders in the whistles. The ‘volume knobs’ are actually speed controllers for computer fans that feed air into the whistles. The air path is possibly not ideal (there’s a bit of warbling in the whistles at some pitches), but the idea is certainly a fun one. Notes are played by unblocking the air path out of the whistle, as you can see in the video embedded below.
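For the curious, a slide whistle is acoustically a stopped pipe (one closed end), so the slider position maps to pitch roughly as a quarter-wave resonator. A simplified sketch of that relationship, ignoring end corrections:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def whistle_freq(length_m):
    """Fundamental frequency of a quarter-wave (stopped-pipe) resonator."""
    return SPEED_OF_SOUND / (4 * length_m)

# Halving the effective tube length raises the pitch one octave:
print(round(whistle_freq(0.30)))  # ~286 Hz
print(round(whistle_freq(0.15)))  # ~572 Hz
```

This is also why the slider feels so sensitive near the top of its travel: frequency goes as 1/L, so equal slider movements produce bigger pitch jumps at shorter lengths.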

Since the fans are always on, this is an example of a drone instrument, like bagpipes or the old hacker’s favourite, the hurdy-gurdy. [Chris] actually says in his tip (for which we are very thankful) that this project takes inspiration not from those instruments but from Indian instruments like the shruti box and tanpura. We haven’t seen those on Hackaday yet, but if you know of any hacks involving them, please leave a tip.

IR Point and Shoot Has a Raspberry Heart in a 35mm Body

7 Julio 2025 at 18:30

Photography is great, but sometimes it can get boring just reusing the same wavelengths over and over again. There are other options, though, and when [Malcolm Wilson] decided he wanted to explore them, he built a (near) IR camera.

The IR images are almost ethereal.
Image: Malcolm Wilson.

The housing is an old Yashica Electro 35 (apparently this model was prone to electrical issues, so there are a lot of broken camera bodies floating around), which hides a Pi NoIR Camera v3. That camera module, paired with an IR pass filter, makes for infrared photography like the old Yashica used to do with special film. The camera module is plugged into a Pi Zero 2 W, and it’s powered by a PiSugar battery. There’s a tiny (0.91″) OLED display, but it’s only for status messages. The viewfinder is 100% optical, as the designers of this camera intended. Point, shoot, shoot again.

There’s something pure in that experience; we sometimes find stopping to look at previews pulls one out of the creative zone of actually taking pictures. This camera won’t let you do that, though of course you do get to skip developing photos. [Malcolm] has the Pi set up to connect to his Wi-Fi when he gets home, and he grabs the RAW (he is a photographer, after all) image files via SSH. Follow the link above to [Malcolm]’s Substack, and you’ll get some design details and his Python code.

The Raspberry Pi Foundation’s NoIR camera shows up on these pages from time to time, though rarely so artistically. We’re more likely to see it spying on reptiles, or making magic wands work. So we are quite grateful to [Malcolm] for the tip, via PetaPixel. Yes, photographers and artists of all stripes are welcome to use the tips line to tell us about their work.

Follow the links in this article for more images like this.
Image: Malcolm Wilson
