
Swapping Batteries Has Never Looked This Cool

We don’t know much more than what we see with [Kounotori_DIY]’s battery loader design (video embedded below) but it just looks so cool we had to share. Watch it in action, it’ll explain itself.

Before 3D printers made it onto hobbyist workbenches, prototyping something like this would have been much more work.

[Kounotori_DIY] uses a small plastic linear guide as an interface for an 18650 battery holder and as you can see, it’s pretty slick. A little cylindrical container slides out of the assembly, allowing a spent cell to drop out. Loading a freshly charged cell consists of just popping a new one into the cylinder, then snapping it closed. The electrical connection is made by two springy metal tabs on either end that fit into guides in the cylindrical holder.

It’s just a prototype right now, and [Kounotori_DIY] admits that the assembly is still a bit big and there’s no solid retention — a good bump will pop the battery out — but we think this is onto something. We can’t help but imagine how swapping batteries in such style with a nice solid click would go very nicely on a cyberdeck build.

It’s not every day that someone tries to re-imagine a battery holder, let alone with such style. Any ideas how it could be improved? Have your own ideas about reimagining how batteries are handled? Let us know in the comments!

Would an Indexing Feature Benefit Your Next Hinge Design?

[Angus] of Maker’s Muse has a video with a roundup of different 3D-printable hinge designs, and he points out that a great thing about 3D printing objects is that adding printable features to them is essentially free.

These hinges have an indexing feature that allows them to lock into place, no additional parts needed.

A great example of this is his experimental print-in-place butt hinge with indexing feature, a hinge that can lock in place without any additional parts. The whole video is worth a watch, but he shows off the experimental design at the 7:47 mark. The hinge can swing normally, but when positioned just right, the squared-off pin inside it slots into a tapered track, locking the part in place.

Inspired by a handheld shopping basket with a lockable handle, [Angus] worked out a design of his own and demonstrates it with a small GoPro tripod whose legs can fold and lock in place. He admits it’s a demonstration of the concept more than a genuinely useful tripod, but it does show what’s possible with some careful design. Being entirely 3D printed in a single piece and requiring no additional hardware is awfully nice.

3D printing is very well-suited to this sort of thing, and it’s worth playing to a printer’s strengths to do for pennies what one would otherwise need dollars to accomplish.

Want some tips on designing things in a way that takes full advantage of what a 3D printer can achieve? Check out printing enclosures at an angle with minimal supports, leveraging the living hinge to print complex shapes flat (and fold them up for assembly), or even printing a one-piece hinge that can actually withstand a serious load. All of those are full of tips, so keep them in mind the next time you design a part.

Life Found On Ryugu Asteroid Sample, But It Looks Very Familiar

Samples from the returned piece of asteroid Ryugu were collected and prepared under strict anti-contamination controls. Inside the cleanest of clean rooms, a tiny particle was picked from the returned material with sterilized tools in a nitrogen atmosphere and stored in airtight containers before being embedded in an epoxy block for scanning electron microscopy.

It’s hard to imagine what more one could do, but despite all the precautions taken, the samples were rapidly colonized by terrestrial microorganisms. The colonization reached only the upper few microns of the sample’s surface, but it happened. That’s what the images above show.

The surface of Ryugu from Rover 1B’s camera. Source: JAXA

Obtaining a sample from asteroid Ryugu was a triumph. Could this organic matter have come from the asteroid itself? In a word, no. Researchers have concluded the microorganisms are almost certainly terrestrial bacteria that contaminated the sample during collection, despite the precautions taken.

You can read the study to get all the details, but it seems that microorganisms — our world’s greatest colonizers — can circumvent contamination controls. No surprise, in a way. Every corner of our world is absolutely awash in microbial life. Opening samples on Earth comes with challenges.

As for off-Earth exploration, robots may be doing the legwork, but despite NASA assembling landers in clean room environments, we may have already inadvertently exported terrestrial microbes to the Moon and Mars. The search for life to which we are not related is one of science and humanity’s greatest quests, but it seems life found in space-returned samples will end up looking awfully familiar until we step up our game.

Your Undocumented Project May Also Baffle People Someday

What’s life without a little mystery? There’s one less rolling around after historians finally identified a donated mystery machine that had been in storage for years.

Feeding dough through this machine may have been faster, but probably not safer.

The main pieces of the machine are about a century old and any staff who may have known more about the undocumented device were no longer around to ask. The historical society finally posted pictures and asked for any insights, which eventually led to solving the mystery.

The machine is in all likelihood a beaten biscuit maker; beaten biscuits were a dense baked good once popular in the American South. Making them called for a long and labor-intensive process of pounding and working the dough, and the society says this machine was likely created by a fellow trying to help his aunt streamline her business, offloading the labor of working the dough to a machine.

The machine had no branding of any sort and lacked any identifying marks. Its purpose was doubtless obvious at the time, but no records remained and quite possibly none existed in the first place. Sound familiar? Perhaps someday our own undocumented projects and prototypes will mystify people. It’s certainly happened in the case of the Roman dodecahedrons, which remain a head-scratching mystery to this day.

The Junk Machine Prints Corrupted Advertising On Demand

[ClownVamp]’s art project The Junk Machine is an interactive and eye-catching machine that, on demand, prints out an equally eye-catching and unique yet completely meaningless (one may even say corrupted) AI-generated advertisement for nothing in particular.

The machine is an artistic statement on how powerful software tools with genuine promise and usefulness to creative types are finding their way into marketers’ hands, resulting in a deluge of, well, junk. This machine simplifies and magnifies that in a physical way.

We can’t help but think that The Junk Machine is in a way highlighting Sturgeon’s Law (paraphrased as ‘ninety percent of everything is crud’) which happens to be particularly applicable to the current AI landscape. In short, the ease of use of these tools means that crud is also being effortlessly generated at an unprecedented scale, swamping any positive elements.

As for the hardware and software, we’re very interested in what’s inside. Unfortunately there’s no deep technical details, but the broad strokes are that The Junk Machine uses an embedded NVIDIA Jetson loaded up with Stable Diffusion’s SDXL Turbo, an open source AI image generator that can be installed and run locally. When and if a user mashes a large red button, the machine generates a piece of AI junk mail in real time without any need for a network connection of any kind, and prints it from an embedded printer.
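
There are no published internals, so here is a minimal, hypothetical sketch of how one might wire up a similar generate-on-demand step using the same openly available SDXL Turbo model via Hugging Face’s diffusers library. None of this is [ClownVamp]’s actual code; the prompt, file name, and printing step are placeholders.

```python
# Hypothetical sketch, not The Junk Machine's firmware: generate one "junk ad"
# locally with SDXL Turbo each time the function is called.
import torch
from diffusers import AutoPipelineForText2Image

# SDXL Turbo is distilled for single-step inference, which is what makes
# near-real-time generation on an embedded GPU like a Jetson plausible.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

def generate_junk_ad(path="junk_ad.png"):
    prompt = "a glossy magazine advertisement for a nonexistent product, garish, surreal"
    image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
    image.save(path)
    # A real build would hand the saved image off to the embedded printer here.
    return path

if __name__ == "__main__":
    print("printed", generate_junk_ad())
```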

Watch it in action in the video embedded below, just under the page break. There are a few more photos on [ClownVamp]’s X account.

3D Space Can Be Tiled With Corner-free Shapes

Tiling a space with a repeated pattern that has no gaps or overlaps (a structure known as a tessellation) is what led mathematician [Gábor Domokos] to ponder a question: how few corners can a shape have and still fully tile a space? In 2D the answer is two, and a 3D space can be tiled with shapes that have no corners at all, called soft cells.
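
For the curious, the informal “no gaps or overlaps” condition has a standard formal reading (our gloss, not [Domokos]’s notation): a tiling of n-dimensional space is a countable family of closed cells whose union covers the whole space and whose interiors never overlap.

```latex
\[
\mathcal{T} = \{T_1, T_2, \dots\}, \qquad
\bigcup_{i} T_i = \mathbb{R}^n, \qquad
\operatorname{int}(T_i) \cap \operatorname{int}(T_j) = \emptyset \quad (i \neq j).
\]
```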

These shapes can be made in a few different ways, and some are shown here. While they may have sharp edges, there are no corners, or points where two or more line segments meet. Shapes capable of tiling a 2D space need a minimum of two corners, but in 3D the rules are different.

A great example of a natural soft cell is found in the chambers of a nautilus shell, but this turned out to be far from obvious. A cross-section of a nautilus shell shows a cell structure with obvious corners, but it turns out that’s just an artifact of looking at a 2D slice. When viewed in full 3D — which the team could do thanks to a micro CT scan available online — there are no visible corners in the structure. Once they knew what to look for, it was clear that soft cells are present in a variety of natural forms in our world.

[Domokos] not only seeks a better mathematical understanding of these shapes that seem common in our natural world but also wonders how they might relate to aperiodicity, or the ability of a shape to tile a space without making a repeating pattern. Penrose Tiles are probably the most common example.

An Animated Walkthrough of How Large Language Models Work

If you wonder how Large Language Models (LLMs) work and aren’t afraid of getting a bit technical, don’t miss [Brendan Bycroft]’s LLM Visualization. It is a step-by-step walk-through of a GPT large language model, complete with an animated, interactive 3D block diagram of everything going on under the hood. Check it out!

nano-gpt has only around 85,000 parameters, but the operating principles are all the same as for larger models.

The demonstration walks through a simple task and shows every step. The task is this: using the nano-gpt model, take a sequence of six letters and put them into alphabetical order.

A GPT model is a highly complex prediction engine, so the whole process begins with tokenizing the input (breaking up words and assigning numerical values to the chunks) and ends with choosing an appropriate output from a list of probabilities. There are of course many more steps in between, and different ways to adjust the model’s behavior. All of these are made quite clear by [Brendan]’s process breakdown.
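
To make the first and last of those steps concrete, here is a toy Python sketch (ours, not [Brendan]’s code) of what tokenizing a six-letter input and picking an output token from a probability list might look like; the vocabulary and logits are made up for illustration.

```python
import numpy as np

# Toy "tokenizer": the sorting demo works on single letters, so a simple
# character-to-ID lookup is enough to illustrate the idea.
vocab = {ch: i for i, ch in enumerate("ABCDEF")}
tokens = [vocab[ch] for ch in "CBAFED"]   # e.g. [2, 1, 0, 5, 4, 3]

def softmax(logits):
    e = np.exp(logits - np.max(logits))   # subtract max for numerical stability
    return e / e.sum()

# Pretend the model produced these raw scores (logits) for the next token.
logits = np.array([2.0, 0.5, -1.0, 0.1, 1.2, -0.3])
probs = softmax(logits)

next_greedy = int(np.argmax(probs))                        # always take the most likely token
next_sampled = int(np.random.choice(len(probs), p=probs))  # or sample from the distribution
print(tokens, probs.round(3), next_greedy, next_sampled)
```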

We’ve previously covered how LLMs work, explained without math, which eschews gritty technical details in favor of focusing on functionality, but it’s also nice to see an approach like this one, which embraces the technical elements of exactly what is going on.

We’ve also seen a much higher-level peek at how a modern AI model like Anthropic’s Claude works when it processes requests, extracting human-understandable concepts that illustrate what’s going on under the hood.

Power Supply With Benchtop Features Fits In Your Pocket

[CentyLab]’s PocketPD isn’t just adorably tiny — it also boasts some pretty useful features. It offers a lightweight way to get a precisely adjustable 0 to 20 V output at up to 5 A on banana jacks, with a rotary encoder and OLED display for ease of use.

PocketPD leverages USB-C Power Delivery (PD), a technology with capabilities our own [Arya Voronova] has summarized nicely. In particular, PocketPD makes use of the Programmable Power Supply (PPS) functionality to precisely set and control voltage and current. Doing this does require a compatible USB-C charger or power bank, but that’s not too big of an ask these days.

Even if an attached charger doesn’t support PPS, PocketPD can still be useful. The device interrogates the attached charger on every bootup and displays the available options; with chargers that don’t support PPS, it defaults to the first available 5 V output mode.
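
As a rough illustration of that selection logic (not PocketPD’s firmware, and with data structures invented for the example), the prefer-PPS-then-fall-back behavior might look something like this:

```python
# Hypothetical sketch of choosing a USB-C PD source capability: prefer a PPS
# profile when the charger advertises one, otherwise fall back to the mandatory
# fixed 5 V profile. Real firmware parses raw PDO words; these dataclasses are stand-ins.
from dataclasses import dataclass

@dataclass
class SourceCapability:
    kind: str          # "fixed" or "pps"
    voltage_mv: int    # fixed voltage, or maximum programmable voltage for PPS
    current_ma: int

def pick_output(caps, want_pps=True):
    if want_pps:
        for cap in caps:
            if cap.kind == "pps":
                return cap
    for cap in caps:
        # Every PD source must offer a fixed 5 V profile, so this always matches.
        if cap.kind == "fixed" and cap.voltage_mv == 5000:
            return cap
    return None

caps = [SourceCapability("fixed", 5000, 3000),
        SourceCapability("fixed", 9000, 3000),
        SourceCapability("pps", 21000, 5000)]
print(pick_output(caps))                   # PPS profile: finely adjustable output
print(pick_output(caps, want_pps=False))   # fallback: the fixed 5 V profile
```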

The latest hardware version is still in development and the GitHub repository has all the firmware, which is aimed at making it easy to modify or customize. Interested in some hardware? There’s a pre-launch crowdfunding campaign you can watch.

AI Face Anonymizer Masks Human Identity in Images

We’re all pretty familiar with AI’s ability to create realistic-looking images of people who don’t exist, but here’s an unusual application of that technology: masking people’s identity without altering the substance of the image itself. The result is that the content and “purpose” (for lack of a better term) of the photo remain unchanged, while it becomes impossible to identify the actual person in it. This invites some interesting privacy-related applications.

Originals on left, anonymized versions on the right. The substance of the images has not changed.

The paper for Face Anonymization Made Simple has all the details, but the method boils down to using diffusion models to take an input image, automatically pick out identity-related features, and alter them in a way that looks more or less natural. For this purpose, identity-related features essentially means key parts of a human face. Other elements of the photo (background, expression, pose, clothing) are left unchanged. As a concept it’s been explored before, but researchers show that this versatile method is both simpler and better-performing than others.
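
The paper’s pipeline is its own thing, but the broad idea of “regenerate only the identity-bearing pixels” can be roughly approximated with off-the-shelf parts. The sketch below is our crude stand-in, not the authors’ method (and far worse at preserving expression and pose): it detects a face with OpenCV and lets a diffusion inpainting model repaint just that region.

```python
# Rough illustration only: classic face detection plus diffusion inpainting.
# The actual "Face Anonymization Made Simple" method is considerably more refined.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

image = Image.open("person.jpg").convert("RGB").resize((512, 512))

# Haar-cascade face detection stands in for the paper's identity-feature extraction.
gray = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2GRAY)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mask = np.zeros((512, 512), dtype=np.uint8)
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
    mask[y:y + h, x:x + w] = 255   # white = region the model is allowed to repaint

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")
result = pipe(
    prompt="a photo of a person's face",
    image=image,
    mask_image=Image.fromarray(mask),
).images[0]
result.save("anonymized.jpg")   # background, pose, and clothing stay untouched
```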

Diffusion models are at the heart of AI image generators like Stable Diffusion. The fact that they can be run locally on personal hardware has opened the doors to all kinds of interesting experimentation, like this haunted mirror and other interactive experiments. Forget tweaking dull sliders like “brightness” and “contrast” for an image. How about altering the level of “moss”, “fire”, or “cookie” instead?

The Constant Monitoring and Work That Goes into JWST’s Optics

The James Webb Space Telescope’s array of eighteen hexagonal mirrors went through an intricate (and lengthy) alignment and calibration process before it could begin its mission — but the process is far from being a one-and-done. Keeping the telescope aligned and performing optimally requires constant work from its own team dedicated to the purpose.

Alignment of JWST’s optical elements is so fine, and the instrument so sensitive, that even small temperature variations have an effect on results. For about twenty minutes every other day, the monitoring program uses a set of lenses that intentionally de-focus images of stars by a known amount. These distortions contain measurable features that the team uses to build a profile of changes over time. Each of the mirror segments is also checked by being imaged selfie-style every three months.

This monitoring and maintenance work pays off. The team has made over 25 corrections since the mission began, and JWST’s optics continue to exceed specifications, which translates directly into better data from faint celestial objects.

JWST was fantastically ambitious and is extremely successful, and as a science instrument it is jam-packed with amazing bits, not least of which are the actuators responsible for adjusting the mirrors.

Here’s Code for that AI-Generated Minecraft Clone

A little while ago Oasis was showcased on social media, billing itself as the world’s first playable “AI video game” that responds to complex user input in real time. Code is available on GitHub for a down-scaled local version if you’d like to take a look. There’s a bit more detail and background in the accompanying project write-up, which talks about both the potential and the numerous limitations.

We suspect the focus on supporting complex user input (such as mouse look and an item inventory) is what the creators feel distinguishes it meaningfully from AI-generated DOOM. The latter was a concept that demonstrated AI image generators could (kinda) function as real-time game engines.

Image generators are, in a sense, prediction machines. The idea is that by providing a trained model with a short history of what just happened plus the user’s input as context, it can generate a pretty usable prediction of what should happen next, and do it quickly enough to be interactive. Run that in a loop, and you get some pretty impressive clips to put on social media.
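
In pseudocode terms, that loop is easy to picture. The sketch below is purely illustrative and has nothing to do with the Oasis codebase: the “model” here just perturbs the previous frame, which also happens to demonstrate why predictions fed back on themselves drift.

```python
import numpy as np
from collections import deque

HISTORY_LEN = 8   # how many recent frames the model would see as context

def predict_next_frame(frames, user_input, rng=np.random.default_rng()):
    """Stand-in for the trained model: given recent frames and the player's input,
    return the next frame. Here we just add noise to the latest frame, which crudely
    mimics how prediction errors compound when there is no real game state."""
    return np.clip(frames[-1] + rng.normal(0, 2, frames[-1].shape), 0, 255)

def game_loop(initial_frame, read_input, display, ticks=100):
    history = deque([initial_frame], maxlen=HISTORY_LEN)
    for _ in range(ticks):
        action = read_input()                         # mouse/keyboard state this tick
        frame = predict_next_frame(list(history), action)
        display(frame)                                # show the prediction to the player
        history.append(frame)                         # ...and feed it back in as context
```

Because each new frame is conditioned only on earlier predictions rather than any ground-truth game state, errors accumulate, which is exactly the failure mode described below.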

It is a neat idea, and we certainly applaud the creativity of bending an image generator to this kind of application, but we can’t help but notice the limitations. Sit and stare at something, or walk through dark or repetitive areas, and the system loses its grip; things rapidly go into a downward spiral we can only describe as “dreamily broken”.

It may be more a demonstration of a concept than a properly functioning game, but it’s still a very clever way to leverage image generation technology. Although, if you’d prefer AI to keep the game itself untouched take a look at neural networks trained to use the DOOM level creator tools.

Nix + Automated Fuzz Testing Finds Bug in PDF Parser

[Michael Lynch]’s adventures in configuring Nix to automate fuzz testing are a lot of things rolled into one: a primer on fuzz testing (a method of finding bugs), a how-to on automating the setup using Nix (which is many things, including a kind of package manager), and useful guidance on effectively automating software processes.

[Michael] not only walks through how he got it all up and running in a simplified and usefully portable way, but he actually found a buffer overflow in pdftotext in the process! (Turns out someone else had reported the same bug a few weeks before he found it, but it proves the process works regardless.)

[Michael] chose fuzz testing because using it to find security vulnerabilities is conceptually simple, but actually doing it tends to require setting up a test environment with a complex workflow and a lot of dependencies. The result is usually highly task-specific and not very portable or reusable. Nix allowed him to really simplify the process while also making it more adaptable. Be sure to check out part two, which goes into detail about how exactly one goes from discovering an input that crashes a program to tracking down (and patching) the reason it happened.
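
For readers who have never seen a fuzz harness, here is a minimal, self-contained example in Python using Google’s atheris. It has nothing to do with [Michael]’s Nix setup or pdftotext; it just shows the core idea of throwing mutated inputs at a target until something crashes.

```python
# Minimal coverage-guided fuzzing harness using atheris (pip install atheris).
# The "parser" is deliberately buggy so the fuzzer has something to find.
import sys
import atheris

@atheris.instrument_func
def fragile_parse(data: bytes):
    text = data.decode("utf-8", errors="ignore")
    if text.startswith("%PDF") and "\x00" in text:
        raise RuntimeError("parser confused by embedded NUL")   # the planted bug

def test_one_input(data: bytes):
    fragile_parse(data)   # any uncaught exception is reported as a crash

atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```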

Making fuzz testing easier (and in a sense, cheaper) is something people have been interested in for a long time, even going so far as to see whether pressing a stack of single-board computers into service as dedicated fuzz testers made economic sense.

Split-Flap Clock Flutters Its Way to Displaying Time Without Numbers

Here’s a design for a split-flap clock that doesn’t do it the usual way. Instead of the flaps showing numbers, Klapklok has a bit more in common with flip-dot displays.

Klapklok updates every 2.5 minutes.

It’s an art piece that uses custom-made split-flaps which flutter away to update the display as time passes. An array of vertically-mounted flaps creates a sort of low-res display, emulating an analog clock. These are no ordinary actuators, either. The visual contrast and cleanliness of the mechanism is fantastic, and the sound they make is less of a chatter and more of a whisper.

The sound the flaps create and the sight of the high-contrast flaps in motion are intended to be a relaxing and calming way to connect with the concept of time passing. There’s some interactivity built in as well, as Klapklok also allows one to simply draw on it wirelessly via a mobile phone.

Klapklok has a total of 69 elements, all of them handmade. We imagine there was really no other way to get exactly what the designer had in mind, something many of us can relate to.

Split-flap mechanisms are wonderful for a number of reasons, and if you’re considering making your own be sure to check out this easy and modular DIY reference design before you go about re-inventing the wheel. On the other hand, if you do wish to get clever about actuators maybe check out this flexible PCB that is also its own actuator.
