SpaceX’s Starship is the most powerful launch system ever built, dwarfing even the mighty Saturn V in terms of both mass and total thrust. The scale of the vehicle is such that concerns have been raised about the impact each launch of the megarocket may have on the local environment, which is why a team from Brigham Young University measured the sound produced during Starship’s fifth test flight and compared it to other launch vehicles.
Published in JASA Express Letters, the paper explains the team’s methodology for measuring the sound of a Starship launch at distances ranging from 10 to 35 kilometers (6 to 22 miles). Interestingly, measurements were also made of the Super Heavy booster as it returned to the launch pad and was ultimately caught — which included several sonic booms as well as the sound of the engines during the landing maneuver.
The paper goes into considerable detail on how the sound produced by Starship’s launch and recovery propagates, but the short version is that it’s just as incredibly loud as you’d imagine. Even at a distance of 10 km, the roar of the 33 Raptor engines at ignition came in at approximately 105 dBA — which the paper compares to a rock concert or chainsaw. Double that distance to 20 km, and the launch is still about as loud as a table saw. On the way back in, the sonic boom from the falling Super Heavy booster was enough to set off car alarms 10 km from the launch pad, which the paper says comes out to a roughly 50% increase in loudness over the Concorde zooming by.
OK, so it’s loud. But how does it compare with other rockets? Running the numbers, the paper estimates that the noise produced during a Starship launch is at least ten times greater than that of the Falcon 9. Of course, this isn’t hugely surprising given the vastly different scales of the two vehicles. A somewhat closer comparison would be with the Space Launch System (SLS); the data indicates Starship is between four and six times as loud as NASA’s homegrown super heavy-lift rocket.
That last bit is probably the most surprising fact uncovered by this research. While Starship is the larger and more powerful of the two launch vehicles, the SLS is still putting out around half the total energy at liftoff. So shouldn’t Starship only be twice as loud? To explain this discrepancy, the paper points to an earlier study by two of the same authors which compared the SLS with the Saturn V. In that paper, it was theorized that the arrangement of rocket nozzles on the bottom of the booster may play a part in the measured result.
Hydrogen! It’s a highly flammable gas that seems way too cool to be easy to come by. And yet, it’s actually trivial to make it out of water if you know how. [Maciej Nowak] has shown us how to do just that with his latest build.
The project in question is a simple hydrogen generator that relies on the electrolysis of water. Long story short, run a current through water and you can split H2O molecules up and make H2 and O2 molecules instead. From water, you get both hydrogen to burn and the oxygen to burn it in! Even better, when you do burn the hydrogen, it combines with the oxygen to make water again! It’s all too perfect.
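For the record, the stoichiometry works out neatly — and it explains why rigs like this collect twice the volume of hydrogen as oxygen:

$$2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}$$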
This particular hydrogen generator uses a series of acrylic tanks. Each is fitted with electrodes assembled from threaded rods to pass current through water. The tops of the tanks have barbed fittings which allow the gas produced to be plumbed off to another storage vessel for later use. The video shows us the construction of the generator, but we also get to see it in action—both in terms of generating gas from the water, and that gas later being used in some fun combustion experiments.
Pedants will point out this isn’t really just a hydrogen generator, because it’s generating oxygen too. Either way, it’s still cool. We’ve featured a few similar builds before as well.
As [Maurycy] explains, clues to how a fluxgate magnetometer works can be found right in the name. We all know what happens when a current is applied to a coil of wire wrapped around an iron or ferrite core — it makes an electromagnet. Wrap another coil around the same core, and you’ve got a simple transformer.
Now, power the first coil, called the drive coil, with alternating current and measure the induced current on the second, or sense coil. Unexpected differences between the current in the drive coil and the sense coil are due to any external magnetic field. The difference indicates the strength of the field. Genius!
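If you want to convince yourself of the principle, here’s a minimal numerical sketch (ours, not [Maurycy]’s — we model the core’s saturation with a tanh curve). An external field biases the drive field so the core saturates earlier in one half-cycle than the other, and in a practical fluxgate that asymmetry shows up as a second harmonic of the drive frequency:

```python
import numpy as np

# Minimal fluxgate model: the core's magnetization saturates (tanh curve).
# An external field biases the drive field, so the core saturates earlier
# in one half-cycle than the other -- that asymmetry is what the sense
# coil picks up, appearing as a second harmonic of the drive frequency.
t = np.linspace(0, 1e-3, 10000)           # 1 ms window
f_drive = 10e3                            # 10 kHz drive
drive = np.sin(2 * np.pi * f_drive * t)   # normalized drive field

def sense_voltage(external_field):
    B = np.tanh(3 * (drive + external_field))  # saturating core flux
    return np.gradient(B, t)                   # induced EMF ~ dB/dt

def second_harmonic(v):
    spectrum = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(len(v), t[1] - t[0])
    return spectrum[np.argmin(np.abs(freqs - 2 * f_drive))]

print(second_harmonic(sense_voltage(0.0)))   # ~0: no external field
print(second_harmonic(sense_voltage(0.2)))   # nonzero, grows with the field
```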
For [Maurycy]’s homebrew version, binocular ferrite cores were stacked one on top of each other and strung together with a loop of magnet wire passing through the lined-up holes in the stack. That entire assembly formed the drive coil, which was wrapped with copper foil to thwart eddy currents. The sense coil was made by wrapping another length of magnet wire around the drive coil package; [Maurycy] found that this orthogonal arrangement of coils worked better than an antiparallel setup at reducing interference from the powerful drive coil field.
Driving the magnetometer required adding a MOSFET amp to give a function generator a little more oomph. [Maurycy] mentions that scope probes will attenuate the weak sense coil current, so we assume that the sense coil output goes right into the oscilloscope via coax. Calibrating the instrument was accomplished with a homebrew coil and some simple calculations.
This was a great demo of magnetometry methods and some of the intricacies of measuring weak fields with simple instruments. We’ve covered fluxgate magnetometer basics before and even talked about how they made pre-GPS car navigation possible.
You might say that the worst LEGO to step on is any given piece that happens to get caught underfoot, but have you ever thought about what the worst one would really be? For us, those little caltrops come to mind most immediately, and we’d probably be satisfied with believing that was the answer. But not [Nate Scovill]. He had to quantitatively find out one way or another.
And no, the research did not involve stepping on one of each of the thousands of LEGO pieces in existence. [Nate] started by building a test rig that approximated the force of his own 150 lb. frame stepping on each piece under scrutiny and seeing what it did to a cardboard substrate.
And how did [Nate] narrow down which pieces to try? He took to the proverbial streets and asked redditors and Discordians to help him come up with a list of subjects.
If you love LEGO to the point where you can’t bear to see it destroyed, then this video is not for you. But if you need to know the semi-scientific answer as badly as we did, then go for it. The best part is round two, when [Nate] makes a foot out of ballistics gel to rate the worst from the first test. So, what’s the worst LEGO to step on? The answer may surprise you.
And what’s more dangerous than plain LEGO? A LEGO Snake, we reckon.
A conventional tube amplifier has a circuit whose fundamentals were well in place around a hundred years ago, so there are few surprises to be found in building one today. Nevertheless, building one is still a challenge, as [Mike Freda] shows us with a stereo amplifier in the video below the break.
The tubes in question are the 12AU7 double triode and 6L6 tetrode, in this case brand-new PSVANE parts from China. The design is a very conventional single-ended class A circuit, with both sides of the double triode being used for extra gain driving the tetrode. The output uses a tapped transformer with the tap going to the screen grid of the tetrode, something we dimly remember as being an “ultra-linear” circuit.
There’s an element of workshop entertainment in the video, but aside from that we think it’s the process of characterising the amp and getting its voltages right which is the take-away here. It’s not something many of us do these days, so despite the apparent simplicity of the circuit it’s worth a look.
For a time, pocketwatches were all the rage, but they were eventually supplanted by the wristwatch. [abe] built this cyberpunk Lock’n’Watch to explore an alternate history for the once trendy device.
The build was inspired by the chunky looks of Casio sport watches and other plastic consumer electronics from the 1980s and 90s. The electronics portion of this project relies heavily on a 1.28″ Seeed Studio Round Display and a Seeed Studio XIAO RP2040 microcontroller board. The final product features a faux segmented display for information in almost the same color scheme as your favorite website.
[abe] spent a good deal of the time on this project iterating on the bezel and case to hold the electronics in this delightfully anachronistic enclosure. We appreciated the brief aside on the philosophical differences between Blender, TinkerCAD, and Fusion360. Once everything was assembled, he walks us through some of the joys of debugging hardware issues with a screen flicker problem. We think the end result really fulfills the vision of a 1980s pocketwatch and that it might be just the thing to go with your cyberdeck.
This week, Jonathan Bennett, Randal Schwartz, and Aaron Newcomb chat about Linux, the challenges with using system modules like the Raspberry Pi, challenges with funding development, and more!
Did you know you can watch the live recording of the show right on our YouTube channel? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Take a look at the schedule here.
If you wonder how Large Language Models (LLMs) work and aren’t afraid of getting a bit technical, don’t miss [Brendan Bycroft]’s LLM Visualization. It is an interactively-animated step-by-step walk-through of a GPT large language model complete with animated and interactive 3D block diagram of everything going on under the hood. Check it out!
The demonstration walks through a simple task and shows every step. The task is this: using the nano-gpt model, take a sequence of six letters and put them into alphabetical order.
A GPT model is a highly complex prediction engine, so the whole process begins with tokenizing the input (breaking up words and assigning numerical values to the chunks) and ends with choosing an appropriate output from a list of probabilities. There are of course many more steps in between, and different ways to adjust the model’s behavior. All of these are made quite clear by [Brendan]’s process breakdown.
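As a toy illustration of the bookends of that process (our sketch, nothing to do with [Brendan]’s actual code — real tokenizers work on sub-word chunks, not single letters):

```python
import numpy as np

# Toy bookends of the GPT pipeline: tokenize the input, then turn the
# model's output logits into a sampled token. The transformer layers in
# between are the part the visualization animates.
vocab = {ch: i for i, ch in enumerate("ABCDEF")}
tokens = [vocab[ch] for ch in "CBABBC"]   # tokenization: text -> ids
print(tokens)                             # [2, 1, 0, 1, 1, 2]

# Pretend logits for the next token; softmax turns them into the
# probability list shown at the end of the walk-through.
logits = np.array([2.0, 1.0, 0.1, -1.0, -1.0, -2.0])
probs = np.exp(logits - logits.max())
probs /= probs.sum()
next_token = np.random.choice(len(vocab), p=probs)
print(next_token, probs.round(3))
```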
We’ve previously covered how LLMs work, explained without math — an approach that eschews the gritty technical details in favor of focusing on functionality — but it’s also nice to see an approach like this one, which embraces the technical elements of exactly what is going on.
SDRs have been a game changer for radio hobbyists, but for ham radio applications, they often need a little help. That’s especially true of SDR dongles, which don’t have a lot of selectivity in the HF bands. But they’re so darn cheap and fun to play with, what’s a ham to do?
[VK3YE] has an answer, in the form of this homebrew software-defined radio (SDR) helper. It’s got a few features that make using a dongle like the RTL-SDR on the HF bands a little easier and a bit more pleasant. Construction is dead simple, based on what was in the junk bin: a potentiometer for attenuating stronger signals, a high-pass filter to tamp down stronger medium-wave broadcast stations, and a series-tuned LC circuit for each of the HF bands to provide some needed selectivity. Everything is wired together ugly-style in a metal enclosure, with a little jiggering needed to isolate the variable capacitor from ground.
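Each of those LC sections is just a resonator following f = 1/(2π√(LC)); here’s a quick sanity check with plausible junk-bin values (our numbers, not [VK3YE]’s):

```python
import math

# Series LC resonance: f = 1 / (2*pi*sqrt(L*C))
def resonant_freq(L, C):
    return 1 / (2 * math.pi * math.sqrt(L * C))

# Example: 2.2 uH and 220 pF land near the 40 m ham band
print(f"{resonant_freq(2.2e-6, 220e-12) / 1e6:.2f} MHz")  # ~7.24 MHz
```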
The last two-thirds of the video below shows the helper in use on everything from the 11-meter (CB) band down to the AM bands. This would be a great addition to any ham’s SDR toolkit.
China’s new CHIEF hypergravity facility recently came online to begin research projects, after construction began in 2018. Standing for Centrifugal Hypergravity and Interdisciplinary Experiment Facility, the name covers basically what it is about: using centrifuges, immense accelerations can be generated. With gravity defined as an acceleration on Earth of 1 g, hypergravity is thus a force of gravity >1 g. This is distinct from simple pressure as in e.g. a hydraulic press, as gravitational acceleration directly affects the object and defines characteristics such as its effective mass. This is highly relevant for many disciplines, including space flight, deep ocean exploration, materials science, and aeronautics.
While trained fighter pilots can take a sustained g-force (g0) of about 9 g0 (88 m/s²), the acceleration generated by CHIEF’s two centrifuges is significantly above that, able to reach hundreds of g. For details of these centrifuges, this preprint article by [Jianyong Liu] et al. from April 2024 shows their construction and the engineering that goes into their operation, especially the aerodynamic characteristics. Both air pressure (30 – 101 kPa) and g-level (200 – 1000 g) are considered, with the risks being overpressure and resonance, which can obliterate such a centrifuge if not designed for.
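To get a feel for the numbers, centripetal acceleration is just a = ω²r; here is a back-of-the-envelope sketch with illustrative figures (not CHIEF’s actual specs):

```python
import math

# Centripetal acceleration a = omega^2 * r, expressed in units of g
def g_level(rpm, radius_m):
    omega = rpm * 2 * math.pi / 60    # angular velocity in rad/s
    return omega**2 * radius_m / 9.81

# A 4.5 m arm spinning at 300 RPM already pulls ~450 g at the bucket
print(f"{g_level(300, 4.5):.0f} g")
```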
CHIEF’s capacity is said to max out at 1,900 gravity-tons (gt: the payload mass in tons multiplied by the acceleration in g), which is significantly more than the 1,200 gt of the US Army Corps of Engineers’ hypergravity facility.
Tinkerers and tech enthusiasts, brace yourselves: the frontier of biohacking has just expanded. Picture implantable medical devices that don’t need batteries — no more surgeries for replacements or bulky contraptions. Though not all new (see below), ChemistryWorld recently shed new light on these innovations. It’s as exciting as it is unnerving; we, as hackers, know too well that tech and biology walk a fine ethical line. Realising our bodies can be hacked both excites and unsettles us, posing deeper questions about human-machine integration.
Since the first pacemaker hit the scene in 1958, powered by rechargeable nickel-cadmium batteries and induction coils, progress has been steady but bound by battery limitations. Now, researchers like Jacob Robinson from Rice University are flipping the script, moving to designs that harvest energy from within. Whether through mechanical heartbeats or lung inflation, these implants are shifting to a network of energy-harvesting nodes.
It’s quite a leap from triboelectric nanogenerators made of flexible, biodegradable materials to piezoelectric devices tapping body motion. John Rogers at Northwestern University points out that the real challenge is extracting power without harming the body’s natural function. Energy isn’t free-flowing; overharvesting could strain or damage organs. It’s a topic we also addressed in April of this year.
As we edge toward battery-free implants, these breakthroughs could redefine biomedical tech. A good start for diving into this paradigm shift is this article from 2023, which will get you up to speed on some prior innovations in this field. Happy tinkering, and: stay critical! For we hackers know that there’s an alternative use for everything!
Who doesn’t like dial-up internet? Even if those who survived the dial-up years are happy to be on broadband, and those who are still on dial-up wish that they weren’t, there’s definitely a nostalgic factor to the experience. Yet recreating the experience can be a hassle, with signing up for a dial-up ISP or jumping through many (POTS) hoops to get a dial-up server up and running. An easier way is demonstrated by [Minh Danh] with a Viking DLE-200B telephone line simulator in a recent blog post.
This little device does all the work of making two telephones (or modems) think that they’re communicating via a regular old POTS network. After picking up one of these puppies for a mere $5 at a flea market, [Minh Danh] tested it first with two landline phones to confirm that yes, you can call one phone from the other and hold a conversation. The next step was thus to connect two PCs via their modems, with the other side of the line receiving the ‘call’. In this case a Windows XP system was configured to be the dial-up server, passing through its internet connection via the modem.
With this done, a 33.6 kbps dial-up connection was successfully established on the client Windows XP system, with a blistering 3.8 kB/s download speed — which tracks, since 33,600 bits per second works out to roughly 4.2 kB/s before protocol overhead. The connection topped out at 33.6 kbps because the DLE-200B does not support 56k; in fact, according to the manual it doesn’t even support anything above 28.8 kbps, so even reaching these speeds was lucky.
Last Thursday we were at Electronica, which is billed as the world’s largest electronics trade show, and it probably is! It fills up twenty airplane-hangar-sized halls in Munich, and only takes place every two years.
And what did we see on the wall in the Raspberry Pi department? One of the relatively new AI-enabled cameras running a real-time pose estimation demo, powered by nothing less than a brand-new Raspberry Pi Compute Module 5. And it seemed happy to be running without a heatsink, but we don’t know how much load it was put under – most of the AI processing is done in the camera module.
We haven’t heard anything about the CM5 yet from the Raspberry folks, but we can’t imagine there’s all that much to say except that they’re getting ready to start production soon. If you look really carefully, this CM5 seems to have mouse bites on it that haven’t been ground off, so we’re speculating that this is still a pre-production unit, but feel free to generate wild rumors in the comment section.
The test board looks very similar to the RP4 CM demo board, so we imagine that the footprint hasn’t changed. (Edit: Oh wait, check out the M2 slot on the left-hand side!)
Last week I completed the SAO flower badge redrawing task, making a complete KiCad project. Most of the SAO petals have already been released as KiCad projects, except for the Petal Matrix. The design features 56 LEDs arranged in eight spiral arms radiating from the center. What it does not feature are straight lines, right angles, or parts placed on a regular grid.
Importing into KiCad
I followed the same procedures as the main flower badge with no major hiccups. This design didn’t have any released schematics, but backing out the circuits was straightforward. It also helped that user [sphereinabox] over on the Hackaday Discord server had rung out the LED matrix connections and gave me his notes.
Grep Those Positions
I first wanted only to read the data from the LEDs for analysis, and I didn’t need the full KiCad + Python scripting for that. Using grep on the PCB file, you get a text file that can be easily parsed to extract the numbers. I confirmed that the LED placements were truly as irregular as they looked.
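Since the .kicad_pcb format is plain s-expression text, with each footprint carrying an (at X Y ROT) placement token, something along these lines does the job (a sketch — the file name and regex details are ours):

```python
import re

# Pull each LED footprint's (at X Y [ROT]) placement token out of the
# s-expression PCB file -- a poor man's grep. File name is hypothetical.
pcb_text = open("petal_matrix.kicad_pcb").read()
pattern = re.compile(
    r'\(footprint\s+"[^"]*LED[^"]*".*?\(at\s+([-\d.]+)\s+([-\d.]+)(?:\s+([-\d.]+))?\)',
    re.DOTALL,
)
for x, y, rot in pattern.findall(pcb_text):
    print(x, y, rot or "0")
```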
My biggest worry was how to obtain and re-apply the positions and angles of the LEDs, given the irregular layout of the spiral arms. Just like with the random angles of the six SAO connectors on the badge board, [Voja] doesn’t disappoint on this board, either. I fired up Python and used Matplotlib to get a visual perspective on the randomness of the placements, as one does. Due to the overall shape of the arms, there is a general trend to the numbers. But no obvious equation is discernible.
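The plot itself is only a few lines; here’s a sketch of the approach (the two coordinate triples are placeholders standing in for the real 56):

```python
import numpy as np
import matplotlib.pyplot as plt

# (x_mm, y_mm, angle_deg) triples from the extraction step; these two
# placeholder entries stand in for the real 56.
leds = np.array([(132.5, 98.2, 17.0), (128.1, 95.6, 42.5)])

fig, ax = plt.subplots()
ax.scatter(leds[:, 0], -leds[:, 1], s=12)   # flip Y: KiCad's axis points down
for x, y, a in leds:                        # short arrow shows each rotation
    dx, dy = 2 * np.cos(np.radians(a)), 2 * np.sin(np.radians(a))
    ax.annotate("", xy=(x + dx, -y + dy), xytext=(x, -y),
                arrowprops={"arrowstyle": "->"})
ax.set_aspect("equal")
plt.show()
```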
It was obvious that I needed a script of some sort to place 56 new KiCad LED footprints onto the board. (Spoiler: I was wrong.) Theoretically I could have processed the PCB text file with bash or Python, creating a modified file. Since I only needed to change a few numbers, this wasn’t completely out of the question. But that is inelegant. It was time to get familiar with the KiCad + Python scripting capabilities. I dug in with gusto, but came away baffled.
KiCad’s Python Console to the Rescue — NOT
This being a one-time task for one specific PCB, writing a KiCad plugin didn’t seem appropriate. Instead, hacking around in the KiCad Python console looked like the way to go. But it didn’t work well for quick experimenting. You open the KiCad Python console within the PCB editor, but when the console boots up, it doesn’t know anything about the currently loaded PCB. You need to import the KiCad Python interface library, and then open the PCB file. Also, the current state of the Python REPL and the command history are not maintained between restarts of KiCad. I don’t see any advantages of using the built-in Python console over just running a script in your usual Python environment.
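For reference, the boilerplate each console session needs looks something like this (a sketch; exact method names vary a bit between KiCad versions, and the file path is hypothetical):

```python
# Inside KiCad's scripting console, the board must be loaded by hand:
import pcbnew

board = pcbnew.LoadBoard("petal_matrix.kicad_pcb")   # hypothetical path
for fp in board.GetFootprints():
    pos = fp.GetPosition()
    print(fp.GetReference(), pcbnew.ToMM(pos.x), pcbnew.ToMM(pos.y),
          fp.GetOrientationDegrees())
```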
Clearly there is a use case for this console. By all appearances, a lot of effort has gone into building up this capability. It appears to be full of features that must be valuable to some users and/or developers. Perhaps I should have stuck with it longer and figured it out.
KiCad Python Script Outside KiCad
This seemed like the perfect solution. The buzz in the community is that modern KiCad versions interface very well with Python. I’ve also been impressed with the improved KiCad project documentation in recent years. “This is going to be easy”, I thought.
First thing to note: the KiCad v8 interface library works only with Python 3.9. I run pyenv on my computers and already have 3.9 installed — check. However, you cannot just do a pip install kicad-something-or-other... to get the KiCad Python interface library. These libraries come bundled within the KiCad distribution. Furthermore, they only work with a custom-built version of Python 3.9 that is also included in the bundle. While I hadn’t encountered this situation before, I figured out you can make pyenv point to a Python that has been installed outside of pyenv. But before I got that working, I made another discovery.
The Python API is not “officially” supported. KiCad has announced that the current Simplified Wrapper and Interface Generator (SWIG)-based Python bindings are slated to be deprecated, to be replaced by Inter-Process Communication (IPC)-based bindings in Feb 2026. This tidbit of news coincided with learning of a similar 3rd-party library.
Introducing KiUtils
Many people were asking questions about including external pip-installed modules from within the KiCad Python console. This confounded my search results, until I hit upon someone using the KiUtils package to solve the same problem I was having. Armed with this tool, I was up and running in no time. To be fair, I suspect KiUtils may also break when KiCad switches from the SWIG to the IPC interface, but KiUtils was so much easier to get up and running that I stuck with it.
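Reading a board with KiUtils is refreshingly direct. Here’s a sketch of the idea, based on my reading of the KiUtils docs (attribute names approximate, file name hypothetical):

```python
from kiutils.board import Board

# Read placements straight from the file -- no KiCad process required.
board = Board.from_file("petal_matrix.kicad_pcb")    # hypothetical path
for fp in board.footprints:
    if "LED" in fp.libraryLink:          # keep just the LED footprints
        pos = fp.position                # Position with X, Y, angle fields
        print(fp.libraryLink, pos.X, pos.Y, pos.angle)
```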
I wrote a Python script to extract all the information I needed for the LEDs. The next step was to apply those values to the 56 new KiCad LED footprints to place each one in the correct position and orientation. As I searched for an example of writing a PCB file from KiUtils, I saw issue #113, “Broken as of KiCAD 8?”, on the KiUtils GitHub repository. Looks like KiUtils is already broken for v8 files. While I was able to read data from my v8 PCB file, it is reported that KiCad v8 cannot read files written by KiUtils.
Scripting Not Needed — DOH
At a dead end, I was about to hand-place all the LEDs when I realized I could do it from inside KiCad. My excursions into KiCad and Python scripting were all for naught. The LED footprints had been imported from Altium Circuit Maker as one single footprint per LED (as opposed to some parts, which convert as one footprint per pad). This single realization made the problem trivial. I just needed to update the footprints from the library. While this did require a few attempts to get the cathodes and anodes sorted out, it was basically solved with a single mouse click.
Those Freehand Traces
The imported traces on this PCB were harder to clean up than those on the badge board. There were a lot of discontinuities in track segments. These artifacts would work fine if you made a real PCB, but because some segment endpoints don’t precisely line up, KiCad doesn’t know they belong to the same net. Here is how these were fixed:
Curved segment endpoints can’t be dragged the way straight segment endpoints can. Solutions:
If the next track is a straight line, drag the line to connect to the curved segment.
If the next track is also a curve, manually route a very short track between the two endpoints.
If you route a track broadside into a curved track, it will usually not connect as far as KiCad is concerned. The solution is to break the curved track at the desired intersection, and those endpoints will accept a connection.
Some end segments were not connected to a pad. These were fixed by either dragging or routing a short trace.
Applying these rules over and over again, I finally cleared all the discontinuities. Frustratingly, the algorithm to do this task already exists in a KiCad function: Tools -> Cleanup Graphics... -> Fix Discontinuities in Board Outline, with an accompanying tolerance field specified as a length in millimeters. But this operation, as its name notes, is restricted to lines on the Edge.Cuts layer.
PCB vs Picture
When I was all done, I noticed a detail in the photo of the Petal Matrix PCB assembly from the Hackaday reveal article. That board (sitting on a rock) has six debugging / expansion test points connected to the six pins of the SAO connector. But in the Altium Circuit Maker PCB design, there are only two pads, A and B. These connect to the two auxiliary input pins of the AS1115 chip. I don’t know which is correct. (Editor’s note: they were just there for debugging.) If you use this project to build one of these boards, edit it according to your needs.
Conclusion
The SAO Petal Matrix redrawn KiCad project can be found over at this GitHub repository. It isn’t easy to work backwards in KiCad from the PCB to the schematic. I certainly wouldn’t want to reverse engineer a 9U VME board this way. But for many smaller projects, it isn’t an unreasonable task, either. You can also use much simpler tools to get the job done. Earlier this year over on Hackaday.io, user [Skyhawkson] did a great job backing out schematics from an Apollo-era PCB with Microsoft Paint 3D — a tool released in 2017 and just discontinued last week.
[CentyLab]’s PocketPD isn’t just adorably tiny — it also boasts some pretty useful features. It offers a lightweight way to get a precisely adjustable output of 0 to 20 V at up to 5 A with banana jack output, integrating a rotary encoder and OLED display for ease of use.
PocketPD leverages USB-C Power Delivery (PD), a technology with capabilities our own [Arya Voronova] has summarized nicely. In particular, PocketPD makes use of the Programmable Power Supply (PPS) functionality to precisely set and control voltage and current. Doing this does require a compatible USB-C charger or power bank, but that’s not too big of an ask these days.
Even if an attached charger doesn’t support PPS, PocketPD can still be useful. The device interrogates the attached charger on every bootup, and displays available options. By default PocketPD selects the first available 5 V output mode with chargers that don’t support PPS.
The latest hardware version is still in development and the GitHub repository has all the firmware, which is aimed at making it easy to modify or customize. Interested in some hardware? There’s a pre-launch crowdfunding campaign you can watch.
Over the years we’ve featured many projects which attempt to replicate the feel of physical media when playing music. Usually this involves some kind of token representation of the media, but here’s [Bas] with a different twist (Dutch language, Google Translate link). He’s using the CDs themselves in their cases, identifying them by their barcodes.
At its heart is a Raspberry Pi Pico W and a barcode scanner — after reading the barcode, the Pi calls Discogs to find the tracks, and then uses the Spotify API to find the appropriate links. From there, Home Assistant forwards them along to a smart speaker for playback. As a nice touch, [Bas] designed a 3D printed holder for the electronics which makes the whole thing a bit neater to use.
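The chain is easy to prototype on a desktop before committing it to a Pico W. Here’s a rough sketch (ours, not [Bas]’s code — the endpoints are the real Discogs and Spotify search APIs, but the tokens and barcode are placeholders):

```python
import requests

DISCOGS_TOKEN = "your-discogs-token"      # placeholder credentials
SPOTIFY_TOKEN = "your-spotify-oauth-token"
barcode = "0724384260958"                 # example barcode from a CD case

# 1. Barcode -> release metadata via the Discogs search API
r = requests.get("https://api.discogs.com/database/search",
                 params={"barcode": barcode, "token": DISCOGS_TOKEN})
release = r.json()["results"][0]

# 2. Release title -> playable album URI via the Spotify search API
r = requests.get("https://api.spotify.com/v1/search",
                 params={"q": release["title"], "type": "album", "limit": 1},
                 headers={"Authorization": f"Bearer {SPOTIFY_TOKEN}"})
album_uri = r.json()["albums"]["items"][0]["uri"]

# 3. Hand album_uri to Home Assistant's media_player.play_media service
print(album_uri)
```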
Product teardowns are great, but getting an unfiltered one from the people who actually designed and built the product is a rare treat. In the lengthy video after the break, former Formlabs engineer [Shane Wighton] tears down the Form 4 SLA printer while [Alec Rudd], the engineering lead for the project, answers all his prying questions.
[Shane] was part of the team that brought all Form 4’s predecessors to life, so he’s intimately familiar with the challenges of developing such a complex product. This means he can spot the small design details that most people would miss, and dive into the story behind each one. These include the hinges and poka-yoke (error-proofing) designed into the lid, the leveling features in the build-plate mount, the complex prototyping challenges behind the LCD panel and backlight, and the mounting features incorporated into every component.
A considerable portion of the engineering effort went into mitigating all the ways things could go wrong in production, shipping, and operation. The fact that most of the parts on the Form 4 are user-replaceable makes this even harder. It’s apparent that both engineers speak from a deep well of hard-earned experience, and it’s well worth the watch if you dream of bringing a physical product to market.
On November 6th, Northwestern University introduced a groundbreaking leap in haptic technology, and it’s worth every bit of attention now, even two weeks later. Full details are in their original article. This innovation brings tactile feedback into the future with a hexagonal matrix of 19 mini actuators embedded in a flexible silicone mesh. It’s the stuff of dreams for hackers and tinkerers looking for the next big thing in wearables.
What makes this patch truly cutting-edge? First, it offers multi-dimensional feedback: pressure, vibration, and twisting sensations—imagine a wearable that can nudge or twist your skin instead of just buzzing. Unlike the simple, one-note “buzzers” of old devices, this setup adds depth and realism to interactions. For those in the VR community or anyone keen on building sensory experiences, this is a game changer.
But the real kicker is its energy management. The patch incorporates a ‘bistable’ mechanism, meaning it stays in two stable positions without continuous power, saving energy by recycling elastic energy stored in the skin. Think of it like a rubber band that snaps back and releases stored energy during operation. The result? Longer battery life and efficient power usage—perfect for tinkering with extended use cases.
And it’s not all fun and games (though VR fans should rejoice). This patch turns sensory substitution into practical tech for the visually impaired, using LiDAR data and Bluetooth to translate the surroundings into tactile feedback. It’s like a white cane, but integrated with data-rich spatial awareness feedback — a boost for accessibility.
Fancy more stories like this? Earlier this year, we wrote about these lightweight haptic gloves—for those who notice, featuring a similar hexagonal array of 19 sensors—a pattern for success? You can read the original article on TechXplore here.
High-speed photography with the camera on a fast-moving robot arm has become all the rage at red-carpet events, but this GlamBOT setup comes with a hefty price tag. To get similar visual effects on a much lower budget [Henry Kidman] built a large, very fast camera slider. As is usually the case with such projects, it’s harder than it seems.
The original GlamBOT has a full 6 degrees of freedom, but many of the shots it’s famous for are just a slightly curved path between two points. That curve adds a few zeros to the required budget, so a straight slider was deemed good enough for [Henry]’s purposes. The first challenge was speed. The first version used linear rails made from shower curtain rails, with 3D-printed sliders driven by a large stepper motor via a belt. The stepper motor wasn’t powerful enough to achieve the desired acceleration, so [Henry] upgraded to a more powerful 6 hp servo motor.
Unfortunately, the MDF and 3D-printed frame components were not rigid enough for the upgraded torque, which caused several crashes into the ends of the frame as the belt slipped and failed to stop the camera platform. The frame was rebuilt from steel, with square tubing for the rails and steel plates for the brackets. This provided the required rigidity, but the welding had warped the rails, leading to a bumpy ride for the camera, so he had to use active stabilization on the gimbal and camera. The project was filled with setbacks and challenges, but in the end the results look very promising, with great slow-motion shots on a mock red carpet.