Hackaday

Polaris Dawn, and the Prudence of a Short Spacewalk

By: Tom Nardi
3 October 2024 at 14:00

For months before liftoff, the popular press had been hyping up the fact that the Polaris Dawn mission would include the first-ever private spacewalk. Not only would this be the first time anyone who wasn’t a professional astronaut would be opening the hatch of their spacecraft and venturing outside, but it would also be the first real-world test of SpaceX’s own extravehicular activity (EVA) suits. Whether you considered it a billionaire’s publicity stunt or an important step forward for commercial spaceflight, one thing was undeniable: when that hatch opened, it was going to be a moment for the history books.

But if you happened to have been watching the live stream of the big event earlier this month, you’d be forgiven for finding the whole thing a bit…abrupt. After years of training and hundreds of millions of dollars spent, crew members Jared Isaacman and Sarah Gillis both spent less than eight minutes outside of the Dragon capsule. Even then, you could argue that calling it a spacewalk would be a bit of a stretch.

Neither crew member ever fully exited the spacecraft, they simply stuck their upper bodies out into space while keeping their legs within the hatch at all times. When it was all said and done, the Dragon’s hatch was locked up tight less than half an hour after it was opened.

Likely, many armchair astronauts watching at home found the whole thing rather anticlimactic. But those who know a bit about the history of human spaceflight were probably on the edge of their seats until that hatch locked into place and every crew member was safely strapped back in.

Flying into space is already one of the most mind-bogglingly dangerous activities a human can engage in, but opening the hatch and floating out into the infinite black is riskier still. Thankfully the Polaris Dawn EVA appeared to go off without a hitch, but not everyone has been so lucky on their first trip outside the capsule.

A High Pressure Situation

The first-ever EVA took place during the Voskhod 2 mission in March of 1965. Through the use of an ingenious inflatable airlock module, cosmonaut Alexei Leonov was able to exit the Voskhod 3KD spacecraft and float freely in space at the end of a 5.35 m (17.6 ft) tether. He attached a camera to the outside of the airlock, providing a visual record of yet another space “first” achieved by the Soviet Union.

This very first EVA had two mission objectives, one of which Leonov had accomplished when he successfully rigged the external camera. The last thing he had to do was turn around and take pictures of the Voskhod spacecraft flying over the Earth — a powerful propaganda image that the USSR was eager to get their hands on. But when he tried to activate his suit’s camera using the trigger mounted to his thigh, he found he couldn’t reach it. It was then that he realized the suit had begun to balloon around him, and that moving his arms and legs was taking greater and greater effort due to the suit’s material stiffening.

After about ten minutes in space Leonov attempted to re-enter the airlock, but to his horror found that the suit had expanded to the point that it would no longer fit into the opening. As he struggled to cram himself into the airlock, his body temperature started to climb. Soon he was sweating profusely, and the sweat pooled around his body within the confines of the suit.

Unable to cope with the higher than anticipated internal temperature, the suit’s primitive life support system started to fail, making matters even worse. The runaway conditions in the suit caused his helmet’s visor to fog up, which he had no way to clear as he was now deep into a failure mode that the Soviet engineers had simply not anticipated. Not that they hadn’t provided him with a solution of sorts. Decades later, Leonov would reveal that there was a suicide pill in the helmet that he could have opted to use if need be.

With his core temperature now elevated by several degrees, Leonov was on the verge of heat stroke. His last option was to open a vent in his suit, which would hopefully cause it to deflate enough for him to fit inside the airlock. He noted that the suit was currently at 0.4 atmosphere, and started reducing the pressure. The safety minimum was 0.27 atm, but even at that pressure, he couldn’t fit. It wasn’t until the pressure fell to 0.25 atm that he was able to flex the suit enough to get his body back into the airlock, and from there back into the confines of the spacecraft.

In total, Alexei Leonov spent 12 minutes and 9 seconds in space. But it must have felt like an eternity.

Gemini’s Tricky Hatch

In classic Soviet style, nobody would know about the trouble Leonov ran into during his spacewalk for years. So when American astronaut Ed White was preparing to step out of the Gemini 4 capsule three months later in June of 1965, he believed he really had his work cut out for him. Not only had the Soviets pulled off a perfect EVA, but as far as anyone knew, they had made it look easy.

So it’s not hard to imagine how White must have felt when he pulled the lever to open the hatch on the Gemini spacecraft, only to find it refused to budge. As it so happens, this wasn’t the first time the hatch failed to open. During vacuum chamber testing back on the ground, the hatch had refused to lock because a spring-loaded gear in the mechanism failed to engage properly. Luckily the second astronaut aboard the Gemini capsule, James McDivitt, was present when they had this issue on the ground and knew how the latch mechanism functioned.

Ed White

McDivitt felt confident that he could get the gear to engage and allow White to open the hatch, but was concerned about getting it closed. Failing to open the hatch and calling off the EVA was one thing, but not being able to secure the hatch afterwards meant certain death for the two men. Knowing that Mission Control would almost certainly have told them to abort the EVA if they were informed about the hatch situation, the astronauts decided to go ahead with the attempt.

As he predicted, McDivitt was able to fiddle with the latching mechanism and get the hatch open for White. Although there were some communication issues during the spacewalk due to problems with the voice-operated microphones, the EVA went very well, with White demonstrating a hand-held maneuvering thruster that allowed him to fly around the spacecraft at the end of his tether.

White was having such a good time that he kept making excuses to extend the spacewalk. Finally, after approximately 23 minutes, he begrudgingly returned to the Gemini capsule — informing Mission Control that it was “the saddest moment of my life.”

The hatch had remained open during the EVA, but now that White was strapped back into the capsule, it was time to close it back up. Unfortunately, just as McDivitt feared, the latches wouldn’t engage. To make matters worse, it took White so long to get back into the spacecraft that they were now shadowed by the Earth and working in the dark. Reaching blindly inside the mechanism, White was once again able to coax it into engaging, and the hatch was securely closed.

But there was still a problem. The mission plan called for the astronauts to open the hatch so they could discard unnecessary equipment before attempting to reenter the Earth’s atmosphere. As neither man was willing to risk opening the hatch again, they instead elected to stow everything aboard the capsule for the remainder of the flight.

Overworked, and Underprepared

At this point the Soviet Union and the United States had successfully conducted EVAs, but both had come dangerously close to disaster. Unfortunately, between the secretive nature of the Soviets and the reluctance of the Gemini 4 crew to communicate their issues to Mission Control, NASA administration started to underestimate the difficulties involved.

NASA didn’t even schedule EVAs for the next three Gemini missions, and the ambitious spacewalk planned for Gemini 8 never happened because the mission was cut short by technical issues with the spacecraft. It wouldn’t be until Gemini 9A that another human stepped out of their spacecraft.

The plan was for astronaut Gene Cernan to spend an incredible two hours outside of the capsule, during which time he would make his way to the rear of the spacecraft where a prototype Astronaut Maneuvering Unit (AMU) was stored. Once there, Cernan was to disconnect himself from the Gemini tether and don the AMU, which was essentially a small self-contained spacecraft in its own right.

Photo of the Gemini spacecraft taken by Gene Cernan

But as soon as he left the capsule, Cernan reported that his suit had started to swell and that movement was becoming difficult. To make matters worse, there were insufficient handholds installed on the outside of the Gemini spacecraft, making it difficult for him to navigate his way along its exterior. After eventually reaching the AMU and struggling desperately to put it on, Mission Control noted his heart rate had climbed to 180 beats per minute. The flight surgeon was worried he would pass out, so Mission Control asked him to take a break while they debated whether he should continue with the AMU demonstration.

At this point Cernan noted that his helmet’s visor had begun to fog up, and just as Alexei Leonov had discovered during his own EVA, the suit had no system to clear it up. The only way he was able to see was by stretching forward and clearing off a small section of the glass by rubbing his nose against it. Realizing the futility of continuing, Commander Thomas Stafford decided not to wait on Mission Control and ordered Cernan to abort the EVA and get back into the spacecraft.

Cernan slowly made his way back to the Gemini’s hatch. The cooling system in his suit had by now been completely overwhelmed, which caused the visor to fog up completely. Effectively blind, Cernan finally arrived at the spacecraft’s hatch, but was too exhausted to continue. Stafford held onto Cernan’s legs while he rested and finally regained the strength to lower himself into the capsule and close the hatch.

When they returned to Earth the next day, a medical examination revealed Cernan had lost 13 pounds (5.9 kg) during his ordeal. The close call during his spacewalk led NASA to completely reassess its EVA training and procedures, and the decision was made to limit the workload on all future Gemini spacewalks, as the current air-cooled suit clearly wasn’t suitable for long-duration use. It wasn’t until the Apollo program introduced a liquid-cooled suit that American astronauts would spend any significant time working outside of their spacecraft.

The Next Giant Leap

Thanks to the magic of live streaming video, we know that the Polaris Dawn crew was able to complete their brief EVA without incident: no shadowy government cover-ups, cowboy heroics, or near death experiences involved.

With the benefit of improved materials and technology, not to mention the knowledge gained over the hundreds of spacewalks that have been completed since the early days of the Space Race, the first private spacewalk looked almost mundane in comparison to what had come before it.

But there’s still much work to be done. SpaceX needs to perform further tests of their new EVA suit, and will likely want to demonstrate that crew members can actually get work done while outside of the Dragon. So it’s safe to assume that when the next Polaris Dawn mission flies, its crew will do a bit more than just stick their heads out the hatch.

Mining and Refining: Lead, Silver, and Zinc

2 October 2024 at 14:00

If you are in need of a lesson on just how much things have changed in the last 60 years, an anecdote from my childhood might suffice. My grandfather was a junk man, augmenting the income from his regular job by collecting scrap metal and selling it to metal recyclers. He knew the current scrap value of every common metal, and his garage and yard were stuffed with barrels of steel shavings, old brake drums and rotors, and miles of copper wire.

But his most valuable scrap was lead, specifically the weights used to balance car wheels, which he’d buy as waste from tire shops. The weights had spring steel clips that had to be removed before the scrap dealers would take them, which my grandfather did by melting them in a big cauldron over a propane burner in the garage. I clearly remember hanging out with him during his “melts,” fascinated by the flames and simmering pools of molten lead, completely unconcerned by the potential danger of the situation.

Fast forward a few too many decades and in an ironic twist I find myself living very close to the place where all that lead probably came from, a place that was also blissfully unconcerned by the toxic consequences of pulling this valuable industrial metal from tunnels burrowed deep into the Bitterroot Mountains. It didn’t help that the lead-bearing ores also happened to be especially rich in other metals including zinc and copper. But the real prize was silver, present in such abundance that the most productive silver mine in the world was once located in a place that is known as “Silver Valley” to this day. Together, these three metals made fortunes for North Idaho, with unfortunate side effects from the mining and refining processes used to win them from the mountains.

All Together Now

Thanks to the relative abundance of their ores and their physical and chemical properties, lead, silver, and zinc have been known and worked since prehistoric times. Lead, in fact, may have been the first metal our ancestors learned to smelt. It’s primarily the low melting points of these metals that made this possible; lead, for instance, melts at only 327°C, well within the range of a simple wood fire. It’s also soft and ductile, making it easy enough to work with simple tools that lead beads and wires dating back over 9,000 years have been found.

Unlike many industrial metals, minerals containing lead, silver, and zinc generally aren’t oxides of the metals. Rather, these three metals are far more likely to combine with sulfur, so their ores are mostly sulfide minerals. For lead, the primary ore is galena or lead (II) sulfide (PbS). Galena is a naturally occurring semiconductor, crystals of which lent their name to the early “crystal radios” which used a lump of galena probed with a fine cat’s whisker as a rectifier or detector for AM radio signals.

Geologically, galena is found in veins within various metamorphic rocks, and in association with a wide variety of sulfide minerals. Exactly what minerals those are depends greatly on the conditions under which the rock formed. Galena crystallized out of low-temperature geological processes is likely to be found in limestone deposits alongside other sulfide minerals such as sphalerite, or zincblende, an ore of zinc. When galena forms under higher temperatures, such as those associated with geothermal processes, it’s more likely to be associated with iron sulfides like pyrite, or Fool’s Gold. Hydrothermal galenas are also more likely to have silver dissolved into the mineral, classifying them as argentiferous ores. In some cases, such as the mines of the Silver Valley, the silver is at high enough concentrations that the lead is considered the byproduct rather than the primary product, despite galena not being a primary ore of silver.

Like a Lead Bubble

How galena is extracted and refined depends on where the deposits are found. In some places, galena deposits are close enough to the surface that open-cast mining techniques can be used. In the Silver Valley, though, and in other locations in North America with commercially significant galena deposits, galena deposits follow deep fissures left by geothermal processes, making deep tunnel mining more likely to be used. The scale of some of the mines in the Silver Valley is hard to grasp. The galena deposits that led to the Bunker Hill stake in the 1880s were found at an elevation of 3,600′ (1,100 meters) above sea level; the shafts and workings of the Bunker Hill Mine are now 1,600′ (488 meters) below sea level, requiring miners to take an elevator ride one mile straight down to get to work.

Ore veins are followed into the rock using a series of tunnels or stopes that branch out from vertical shafts. Stopes are cut with the time-honored combination of drilling and blasting, freeing up hundreds of tons of ore with each blasting operation. Loose ore is gathered with a slusher, a bucket attached to a dragline that pulls ore back up the stope, or using mining loaders, low-slung payloaders specialized for operation in tight spaces.

Ore plus soap equals metal bubbles. Froth flotation of copper sulfide is similar to the process for extracting zinc sulfide. Source: Geomartin, CC BY-SA 4.0

Silver Valley galena typically assays at about 10% lead, making it a fairly rich ore. It’s still not rich enough, though, and needs to be concentrated before smelting. Most mines do the initial concentration on site, starting with the usual crushing, classifying, washing, and grinding steps. Ball mills are used to reduce the ore to a fine powder, which is mixed with water and surfactants to form a slurry and pumped into a broad, shallow tank. Air pumped into the bottom of the tank creates bubbles in the slurry that carry the fine lead particles up to the surface while letting the waste rock particles, or gangue, sink to the bottom. It seems counterintuitive to separate lead by floating it, but froth flotation is quite common in metal refining; we’ve seen it used to concentrate everything from lightweight graphite to ultradense uranium. It’s also important to note that this is not yet elemental lead, but rather still the lead sulfide that made up the bulk of the galena ore.
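Those grade figures make for a quick sanity check. Here is a back-of-the-envelope mass balance using the 10% feed grade and 80% PbS concentrate figures from the text, assuming (unrealistically) that every bit of lead reports to the froth:

```python
# Back-of-the-envelope flotation mass balance. Illustrative assumptions:
# perfect lead recovery, grades taken from the figures in the text.
M_PB, M_S = 207.2, 32.06                 # atomic masses, g/mol
pbs_pb_fraction = M_PB / (M_PB + M_S)    # mass fraction of Pb in PbS

feed_grade = 0.10          # ore assays at about 10% lead
conc_grade_pbs = 0.80      # dried froth is about 80% PbS
conc_grade_pb = conc_grade_pbs * pbs_pb_fraction

# Tonnes of concentrate produced per tonne of ore, assuming all lead
# reports to the concentrate:
conc_mass_per_tonne = feed_grade / conc_grade_pb

print(f"Pb fraction in PbS:           {pbs_pb_fraction:.3f}")
print(f"Concentrate grade:            {conc_grade_pb:.1%} Pb")
print(f"Concentrate per tonne of ore: {conc_mass_per_tonne:.3f} t")
```

In other words, flotation shrinks roughly seven tonnes of mined rock down to about one tonne of smelter feed, which is why it pays to do it at the mine rather than ship raw ore.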

Once the froth is skimmed off and dried, it’s about 80% pure lead sulfide and ready for smelting. The Bunker Hill Mine used to have the largest lead smelter in the world, but that closed in 1982 after decades of operation that left an environmental and public health catastrophe in its wake. Now, concentrate is mainly sent to smelters located overseas for final processing, which begins with roasting the lead sulfide in a blast of hot air. This converts the lead sulfide to lead oxide and gaseous sulfur dioxide as a waste product:

2\,PbS + 3\,O_2 \rightarrow 2\,PbO + 2\,SO_2

After roasting, the lead oxide undergoes a reduction reaction to free up the elemental lead by adding everything to a blast furnace fueled with coke:

2\,PbO + C \rightarrow 2\,Pb + CO_2

Any remaining impurities float to the top of the batch while the molten lead is tapped off from the bottom of the furnace.
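The two smelting reactions also fix the yields. A short stoichiometry sketch, assuming a pure lead sulfide feed and complete conversion in both the roasting and reduction steps:

```python
# Stoichiometry of the two smelting steps: roasting (PbS -> PbO + SO2)
# followed by reduction (PbO + C -> Pb + CO2). A sketch that assumes a
# pure PbS feed and complete reactions.
M = {"Pb": 207.2, "S": 32.06, "O": 16.00, "C": 12.01}  # g/mol
M_PbS = M["Pb"] + M["S"]
M_SO2 = M["S"] + 2 * M["O"]

tonnes_pbs = 1.0
mol = tonnes_pbs * 1e6 / M_PbS    # moles of PbS in one tonne

# Roasting produces one mole of SO2 per mole of PbS:
so2_t = mol * M_SO2 / 1e6
# Reduction yields one mole of Pb per mole of PbO (and hence per mole of PbS):
pb_t = mol * M["Pb"] / 1e6

print(f"Per tonne of PbS: {pb_t:.3f} t of lead and {so2_t:.3f} t of SO2")
```

Better than a quarter tonne of sulfur dioxide per tonne of concentrate goes a long way toward explaining the environmental legacy of the old smelters.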

Zinc!

A significant amount of zinc is also located in the ore veins of the Silver Valley, enough to become a major contributor to the district’s riches. The mineral sphalerite is the main zinc ore found in this region; like galena, it’s a sulfide mineral, but it’s a mixture of zinc sulfide and iron sulfide instead of the more-or-less pure lead sulfide in galena. Sphalerite also tends to be relatively rich in industrially important contaminants like cadmium, gallium, germanium, and indium.

Most sphalerite ore isn’t this pretty. Source: Ivar Leidus, CC BY-SA 4.0.

Extraction of sphalerite occurs alongside galena extraction and uses mostly the same mining processes. Concentration also uses the froth flotation method used to isolate lead sulfide, albeit with different surfactants specific for zinc sulfide. Concentration yields a material with about 50% zinc by weight, with iron, sulfur, silicates, and trace metals making up the rest.

Purification of zinc from the concentrate is via a roasting process similar to that used for lead, and results in zinc oxide and more sulfur dioxide:

2\,ZnS + 3\,O_2 \rightarrow 2\,ZnO + 2\,SO_2

Originally, the Bunker Hill smelter just vented the sulfur dioxide out into the atmosphere, resulting in massive environmental damage in the Silver Valley. My neighbor relates his arrival in Idaho in 1970, crossing over the Lookout Pass from Montana on the then brand-new Interstate 90. Descending into the Silver Valley was like “a scene from Dante’s Inferno,” with thick smoke billowing from the smelter’s towering smokestacks trapped in the valley by a persistent inversion. The pine trees on the hillsides had all been stripped of needles by the sulfuric acid created when the sulfur dioxide mixed with moisture in the stale air. Eventually, the company realized that sulfur was too valuable to waste and started capturing it, and even built a fertilizer plant to put it to use. But the damage was done, and it took decades for the area to bounce back.

Recovering metallic zinc from zinc oxide is performed by reduction, again in a coke-fired blast furnace which collects the zinc vapors and condenses them to the liquid phase, which is tapped off into molds to create ingots. An alternative is electrowinning, where zinc oxide is converted to zinc sulfate using sulfuric acid, often made from the sulfur recovered from roasting. The zinc sulfate solution is then electrolyzed, and metallic zinc is recovered from the cathodes, melted, further purified if necessary, and cast into ingots.
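Electrowinning throughput follows directly from Faraday’s law of electrolysis: each zinc ion needs two electrons to plate out. A quick sketch, where the cell current and plating time are made-up illustrative values:

```python
# Faraday's-law sketch of zinc electrowinning: Zn(2+) + 2e- -> Zn at the
# cathode. The current and time values are hypothetical, for illustration.
F = 96485.0          # Faraday constant, C/mol
M_ZN = 65.38         # molar mass of zinc, g/mol
n = 2                # electrons transferred per zinc ion

current_a = 500.0    # assumed cell current, amps
hours = 24.0         # assumed plating time

charge_c = current_a * hours * 3600     # total charge passed, coulombs
mass_g = charge_c * M_ZN / (n * F)      # grams of zinc deposited (100% efficiency)

print(f"{mass_g / 1000:.1f} kg of zinc in {hours:.0f} h at {current_a:.0f} A")
```

Real cells run below 100% current efficiency, so this is an upper bound, but it shows why electrowinning plants are measured in megawatts.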

Silver from Lead

If the original ore was argentiferous, as most of the Silver Valley’s galena is, now’s the time to recover the silver through the Parkes process, a solvent extraction technique. In this case, the solvent is the molten lead, in which silver is quite soluble. The dissolved silver is precipitated by adding molten zinc, which has the useful property of combining with silver while being immiscible with lead. Zinc also has a higher melting point than lead, meaning that as the temperature of the mixture drops, the zinc solidifies, carrying along any silver it combined with while in the molten state. The zinc-silver particles float to the top of the desilvered lead, where they can be skimmed off. The zinc, which has a lower boiling point than silver, is then driven off by vaporization, leaving behind relatively pure silver.

To further purify the recovered silver, cupellation is often employed. Cupellation is a pyrometallurgical process used since antiquity to purify noble metals by exploiting the different melting points and chemical properties of metals. In this case, silver contaminated with zinc is heated to the point where the zinc oxidizes in a shallow, porous vessel called a cupel. Cupels were traditionally made from bone ash or other calcium-rich materials, which gradually absorb the zinc oxide, leaving behind a button of purified silver. Cupellation can also be used to purify silver directly from argentiferous galena ore, by differentially absorbing lead oxide from the molten solution, with the obvious disadvantage of wasting the lead:

Ag + 2\,Pb + O_2 \rightarrow 2\,PbO + Ag

Cupellation can also be used to recover small amounts of silver directly from refined lead, such as that in wheel weights.

If my grandfather had only known.

Static Electricity And The Machines That Make It

By: Lewin Day
30 September 2024 at 14:00

Static electricity often seems like nothing more than an everyday annoyance, as when a wool sweater crackles as you pull it off or a doorknob delivers an unexpected zap. Yet the phenomenon is far more fascinating and complex than these simple examples suggest. In fact, static electricity is directly observable evidence of the actions of subatomic particles and the charges they carry.

While zaps from a fuzzy carpet or playground slide are funny, humanity has learned how to harness this naturally occurring force in far more deliberate and intriguing ways. In this article, we’ll dive into some of the most iconic machines that generate static electricity and explore how they work.

What Is It?

Before we look at the fancy science gear, we should actually define what we’re talking about here. In simple terms, static electricity is the result of an imbalance of electric charges within or on the surface of a material. While positively-charged protons tend to stay put, electrons, with their negative charges, can move between materials when they come into contact or rub against one another. When one material gains electrons and becomes negatively charged, and another loses electrons and becomes positively charged, a static electric field is created. The most visible result of this is when those charges are released—often in the form of a sudden spark.

Since it forms so easily on common materials, humans have been aware of static electricity for quite some time. One of the earliest recorded studies of the phenomenon came from the ancient Greeks. Around 1000 BC, they noticed that rubbing amber with fur would then allow it to attract small objects like feathers. Little came of this discovery, which was written off as a curious property of amber itself. Fast forward to the 17th century, though, and scientists were creating the first machines designed to intentionally store or generate static electricity. These devices helped shape our understanding of electricity and paved the way for the advanced electrical technologies we use today. Let’s explore a few key examples of these machines, each of which demonstrates a different approach to building and manipulating static charge.

The Leyden Jar

An 1886 drawing of Andreas Cunaeus experimenting with his apparatus. In this case, his hand is helping to store the charge. Credit: public domain

Though not exactly a machine for generating static electricity, the Leyden jar is a critical part of early electrostatic experiments. Effectively a static electricity storage device, it was independently discovered twice: first by a German, Ewald Georg von Kleist, in 1745, and then by the Dutch physicist Pieter van Musschenbroek of Leiden, from whose city it takes its common name, sometime between 1745 and 1746. The earliest versions were very simple, consisting of water in a glass jar that was charged with static electricity conducted to it via a metal rod. The experimenter’s hand holding the jar served as one plate of what was a rudimentary capacitor, the water being the other. The Leyden jar thus stored static electricity in the water and the experimenter’s hand.

Eventually the common design became a glass jar with layers of metal foil both inside and outside, separated by the glass. Early experimenters would charge the jar using electrostatic generators, and then discharge it with a dramatic spark.

The Leyden jar is one of the first devices that allowed humans to store and release static electricity on command. It demonstrated that static charge could be accumulated and held for later use, which was a critical step in understanding the principles that would lead to modern capacitors. The Leyden jar can still be used in demonstrations of electrostatic phenomena and continues to serve as a fascinating link to the history of electrical science.
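To get a feel for how much a foil-lined jar can actually store, here is a rough estimate that treats the jar as a parallel-plate capacitor rolled around the glass. All of the dimensions and the glass permittivity are assumed for illustration:

```python
# Rough capacitance of a foil-lined Leyden jar, approximated as a
# parallel-plate capacitor wrapped around the glass wall. Every dimension
# here is an assumed, plausible value, not a measured one.
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 5.0           # typical relative permittivity of soda-lime glass

radius = 0.05         # jar radius, m (assumed)
foil_height = 0.15    # height of the foil layers, m (assumed)
thickness = 0.002     # glass wall thickness, m (assumed)

area = 2 * math.pi * radius * foil_height   # foil area wrapping the jar
C = eps_r * EPS0 * area / thickness         # parallel-plate approximation

print(f"Capacitance ≈ {C * 1e12:.0f} pF")
```

Around a nanofarad, which squares with why experimenters ganged several jars together when they wanted a truly memorable discharge.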

The Van de Graaff Generator

A Van de Graaff generator can be configured to run in either polarity, depending on the materials chosen and how it is set up. Here, we see the generator being used to feed negative charges into an attached spherical conductor. Credit: Omphalosskeptic, CC BY-SA 3.0

Perhaps the most iconic machine associated with generating static electricity is the Van de Graaff generator. Developed in the 1920s by American physicist Robert J. Van de Graaff, this machine became a staple of science classrooms and physics demonstrations worldwide. The device is instantly recognizable thanks to its large, polished metal sphere that often causes hair to stand on end when a person touches it.

The Van de Graaff generator works by transferring electrons through mechanical movement. It uses a motor-driven belt made of insulating material, like rubber or nylon, which runs between two rollers. At the bottom roller, which is plastic in this example, a comb or brush (called the lower electrode) is placed very close to the belt. As the belt moves, electrons are transferred from the lower roller onto the belt due to friction in what is known as the triboelectric effect. This leaves the lower roller positively charged and the belt carrying excess electrons, giving it a negative charge. The electric field surrounding the positively charged roller tends to ionize the surrounding air and attracts more negative charges from the lower electrode.

As the belt moves upward, it carries these electrons to the top of the generator, where another comb or brush (the upper electrode) is positioned near the large metal sphere. The upper roller is usually metal in these cases, which stays neutral rather than becoming intensely charged like the bottom roller. The upper electrode pulls the electrons off the belt, and they are transferred to the surface of the metal sphere. Because the metal sphere is insulated and not connected to anything that can allow the electrons to escape, the negative charge on the sphere keeps building up to very high voltages, often in the range of hundreds of thousands of volts. Alternatively, the whole thing can be reversed in polarity by changing the belt or roller materials, or by using a high voltage power supply to charge the belt instead of the triboelectric effect.

The result is a machine capable of producing massive static charges and dramatic sparks. In addition to its use as a demonstration tool, Van de Graaff generators have applications in particle physics. Since they can generate incredibly high voltages, they were once used to accelerate particles to high speeds for physics experiments. These days, though, our particle accelerators are altogether more complex. 
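The voltage ceiling comes down to simple electrostatics: an isolated sphere of radius R holding charge Q sits at potential V = Q / (4πε₀R), and the surrounding air breaks down once the surface field reaches roughly 3 MV/m. A small sketch under those textbook approximations:

```python
# Why sphere size matters on a Van de Graaff generator: the achievable
# voltage is capped by air breakdown at the sphere's surface, roughly
# V_max = E_breakdown * R. Textbook approximations throughout.
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_BREAKDOWN = 3e6     # approximate dielectric strength of air, V/m

def max_voltage(radius_m):
    """Rough ceiling on sphere potential before corona and sparking."""
    return E_BREAKDOWN * radius_m

def charge_at(voltage, radius_m):
    """Charge needed to hold an isolated sphere at a given potential."""
    return 4 * math.pi * EPS0 * radius_m * voltage

r = 0.15                        # classroom-size 30 cm diameter... radius, m
v_max = max_voltage(r)          # roughly 450 kV ceiling
q = charge_at(v_max, r)

print(f"{v_max / 1e3:.0f} kV ceiling, holding about {q * 1e6:.1f} µC")
```

A few microcoulombs at hundreds of kilovolts: tiny charge, huge voltage, which is exactly the hair-raising, mostly harmless regime these demonstrators live in.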

The Whimsical Wimshurst Machine

Two disks with metal sectors spin in opposite directions upon turning the hand crank. A small initial charge is able to induce charge in other sectors as the machine is turned. Credit: public domain

Another fascinating machine for generating static electricity is the Wimshurst machine, invented in the late 19th century by British engineer James Wimshurst. While less famous than the Van de Graaff generator, the Wimshurst machine is equally impressive in its operation and design.

The key functional parts of the machine are the two large, circular disks made of insulating material—originally glass, but plastic works too. These disks are mounted on a shared axle, but they rotate in opposite directions when the hand crank is turned. The surfaces of the disks have small metal sectors—typically aluminum or brass—which play a key role in generating static charge. As the disks rotate, brushes made of fine metal wire or other conductive material lightly touch their surfaces near the outer edges. These brushes don’t generate the initial charge but help to collect and amplify it once it is present.

The key to the Wimshurst machine’s operation lies in a process called electrostatic induction, which is essentially the influence that a charged object can exert on nearby objects, even without touching them. At any given moment, one small area of the rotating disk may randomly pick up a small amount of charge from the surrounding air or by friction. This tiny initial charge is enough to start the process. As this charged area on the disk moves past the metal brushes, it induces an opposite charge in the metal sectors on the other disk, which is rotating in the opposite direction.

For example, if a positively charged area on one disk passes by a brush, it will induce a negative charge on the metal sectors of the opposite disk at the same position. These newly induced charges are then collected by a pair of metal combs located above and below the disks. The combs are typically connected to Leyden jars to store the charge, until the voltage builds up high enough to jump a spark over a gap between two terminals.

It is common to pair a Wimshurst machine with Leyden jars to store the generated charge. Credit: public domain

The Wimshurst machine doesn’t create static electricity out of nothing; rather, it amplifies small random charges through the process of electrostatic induction as the disks rotate. As the charge is collected by brushes and combs, it builds up on the machine’s terminals, resulting in a high-voltage output that can produce dramatic sparks. This self-amplifying loop is what makes the Wimshurst machine so effective at generating static electricity.
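
To get a feel for that self-amplifying loop, here is a toy model in Python. The gain and threshold numbers are invented purely for illustration; they don't correspond to any measured machine:

```python
# Toy model of the Wimshurst machine's self-amplifying loop. The numbers
# below are invented for illustration, not measurements of a real machine.

def turns_until_spark(seed_charge_nC, gain_per_turn, spark_threshold_nC):
    """Count crank turns until the stored charge could jump the gap.
    gain_per_turn must be > 1, reflecting induction amplifying the charge."""
    charge = seed_charge_nC
    turns = 0
    while charge < spark_threshold_nC:
        charge *= gain_per_turn  # each turn, induction multiplies the charge
        turns += 1
    return turns

# Even a 0.001 nC speck of random charge grows geometrically, reaching a
# hypothetical 100 nC spark threshold within a few dozen turns.
print(turns_until_spark(0.001, 1.5, 100))  # → 29
```

The point is the shape of the growth: because each pass multiplies the charge rather than adding to it, the machine reaches sparking voltage after logarithmically few turns, no matter how tiny the seed charge.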

The Wimshurst machine is seen largely as a curio today, but it did have genuine scientific applications back in the day. Beyond simply using it to investigate static electricity, its output could be discharged into Crookes tubes to create X-rays in a very rudimentary way.

The Electrophorus: Simple Yet Ingenious

One of the simplest machines for working with static electricity is the electrophorus, a device that dates back to 1762. Invented by Swedish scientist Johan Carl Wilcke, the electrophorus consists of two key parts: a flat dielectric plate and a metal disk with an insulating handle. The dielectric plate was originally made of resinous material, but plastic works too. Meanwhile, the metal disk is naturally conductive.

An electrophorus device, showing the top metal disk, and the bottom dielectric material, at times referred to as the “cake.” The lower dielectric was classically charged by rubbing with fur. Credit: public domain

To generate static electricity with the electrophorus, the dielectric plate is first rubbed with a cloth to create a static charge through friction. This is another example of the triboelectric effect, as also used in the Van de Graaff generator. Once the plate is charged, the metal disk is placed on top of it. The disk then becomes charged by induction. It’s much the same principle as the Wimshurst machine, with the electrostatic field of the dielectric plate pushing around the charges in the metal disk until it too has a distinct charge.

For example, if the dielectric plate has been given a negative charge by rubbing, it will repel negative charges in the metal disk to the opposite side, giving the near surface a positive charge and the far surface a negative charge. The net charge, though, remains neutral. But if the metal disk is then grounded—for example, by briefly touching it with a finger—the negative charge on the disk can be drained away, leaving it positively charged as a whole. This process does not deplete the charge on the dielectric, so it can be used to charge the metal disk multiple times, though the dielectric’s charge will slowly leak away over time.

Though it’s simple in design, the electrophorus remains a remarkable demonstration of static electricity generation and was widely used in early electrostatic experiments. A particularly well-known example is that of Georg Lichtenberg. He used a version a full two meters in diameter to create large discharges for his famous Lichtenberg figures. Overall, it’s an excellent tool for teaching the basic principles of electrostatics and charge separation—particularly given how simple it is in construction compared to some of the above machines.

Zap

Static electricity, once a mysterious and elusive force, has long since been tamed and turned into a valuable tool for scientific inquiry and education. Humans have developed numerous machines to generate, manipulate, and study static electricity—these are just some of the stars of the field. Each of these devices played an important role in furthering humanity’s understanding of electrostatics, and to a degree, physics in general.

Today, these machines continue to serve as educational tools and historical curiosities, offering a glimpse into the early days of electrical science—and they still spark fascination on the regular, quite literally. Static electricity may be an everyday phenomenon, but the machines that harness its power are still captivating today. Just go to any local science museum for the proof!

 

What’s the Deal with AI Art?

28 Septiembre 2024 at 14:00

A couple weeks ago, we had a kerfuffle here on Hackaday: A writer put out a piece with AI-generated headline art. It was, honestly, pretty good, but it was also subject to all of the usual horrors that get generated along the way. If you have played around with any of the image generators you know the AI-art uncanny style, where it looks good enough at first glance, but then you notice limbs in the wrong place if you look hard enough. We replaced it shortly after an editor noticed.

The story is that the writer couldn’t find any nice visuals to go with the blog post, which was about encoding data in QR codes and printing them out for storage. This is a problem we have frequently here, actually. When people write up a code hack, for instance, there’s usually just no good image to go along with it. Our writers have to get creative. In this case, he tossed it off to Stable Diffusion.

Some commenters were afraid that this meant that we were outsourcing work from our fantastic, and very human, art director Joe Kim, whose trademark style you’ve seen on many of our longer-form original articles. Of course we’re not! He’s a genius, and when we tell him we need some art about topics ranging from refining cobalt to Wimshurst machines to generate static electricity, he comes through. I think that all of us probably have wanted to make a poster out of one or more of his headline art pieces. Joe is a treasure.

But for our daily blog posts, which cover your works, we usually just use a picture of the project. We can’t ask Joe to make ten pieces of art per day, and we never have. At least as far as Hackaday is concerned, AI-generated art is just as good as finding some cleared-for-use clip art out there, right?

Except it’s not. There is a lot of uncertainty about the data that the algorithms are trained on, whether the copyright of the original artists was respected or needed to be, ethically or legally. Some people even worry that the whole thing is going to bring about the end of Art. (They worried about this at the introduction of the camera as well.) But then there’s also the extra limbs, and AI-generated art’s cliche styles, which we fear will get old and boring after we’re all saturated with them.

So we’re not using AI-generated art as a policy for now, but that’s not to say that we don’t see both the benefits and the risks. We’re not Luddites, after all, but we are also in favor of artists getting paid for their work, and of respect for the commons when people copyleft license their images. We’re very interested to see how this all plays out in the future, but for now, we’re sitting on the sidelines. Sorry if that means more headlines with colorful code!

This article is part of the Hackaday.com newsletter, delivered every seven days for each of the last 200+ weeks. It also includes our favorite articles from the last seven days that you can see on the web version of the newsletter. Want this type of article to hit your inbox every Friday morning? You should sign up!

3D Printed Jellyfish Lights Up

27 Septiembre 2024 at 23:00

[Ben] may be 15 years old, but he’s got the knack for 3D printing and artistic mechanical design. When you see his 3D-printed mechanical jellyfish lamp, we think you’ll agree. Honestly, it is hardly fair to call it a lamp. It is really — as [Ben] points out — a kinetic sculpture.

One of the high points of the post is the very detailed documentation. Not only is everything explained, but there is quite a bit of background information on jellyfish, different types of gears, and optimizing 3D prints along with information on how to recreate the sculpture.

There is quite a bit of printing, including the tentacles. There are a few options, like Arduino-controlled LEDs. However, the heart of the operation is a geared motor.

All the design files for 3D printing and the Arduino code are in the post. There’s also a remote control. The design allows you to have different colors for various pieces and easily swap them with a screwdriver.

One major concern was how noisy the thing would be with a spinning motor. According to [Ben], the noise level is about 33 dB, which is about what a whisper sounds like. However, he mentions you could consider using ball bearings, quieter motors, or different types of gears to get the noise down even further.

We imagine this jellyfish will come in at well under $6 million. If you don’t want your jellyfish to be art, maybe you’d prefer one that creates art.

Is That A Coaster? No, It’s An LED Matrix!

19 Septiembre 2024 at 23:00

I’m sure you all love to see some colorful blinkenlights every now and then, and we are of course no exception. While these might look like coasters at a distance, do not be deceived! They’re actually [bitluni]’s latest project!

[bitluni]’s high-fidelity LED matrix started life as some 8×8 LED matrices lying on the shelf for 10 years taunting him – admit it, we’re all guilty of this – before he finally decided to make something with them. That idea took the form of a tileable display with the help of some magnets and pogo pins, which is certainly a very satisfying way to connect these oddly futuristic blinky coasters together.

It all starts with some schematics and a PCB. Because the CH32V208 has an annoying package to solder, [bitluni] opted to have the PCB fab do placement for him. Unfortunately, though, and like any good prototype, it needed a bodge! [bitluni] had accidentally mirrored a chip in the schematic, meaning he had to solder one of the SMD chips on upside-down, “dead bug mode”. Fortunately, the rest was seemingly more successful, because with a little 3D-printed case and some fancy programming, the tiny tiles came to life in all of their rainbow-barfing glory. Sure, the pogo pins were less reliable than desired, but [bitluni] has some ideas for a future version we’re very much looking forward to.

Video after the break.

Has your hunger for blinkenlights not been satiated? More posts about [bitluni] perhaps? How about the time [bitluni] made a very blinkenlight-y “super”computer?

Creating a Twisted Grid Image Illusion With a Diffusion Model

Por: Maya Posch
19 Septiembre 2024 at 02:00

Images that can be interpreted in a variety of ways have existed for many decades, with the classical example being Rubin’s vase — which some viewers see as a vase, and others a pair of human faces.

When the duck becomes a bunny, if you ignore the graphical glitches that used to be part of the duck. (Credit: Steve Mould, YouTube)

Where things get trickier is if you want to create an image that changes into something else that looks realistic when you rotate each section of it within a 3×3 grid. In a video by [Steve Mould], he explains how this can be accomplished, by using a diffusion model to identify similar characteristics of two images and to create an output image that effectively contains essential features of both images.

Naturally, this process can be done by hand too, with the goal always being to create a plausible image in either orientation that has enough detail to trick the brain into filling in the details, nudging it down the path of interpreting what the eye sees as a duck, a bunny, a vase, or the outline of faces.

Using a diffusion model to create such illusions is quite a natural fit, as it works with filling in noise until a plausible enough image begins to appear. Of course, whether it is a viable image is ultimately not determined by the model, but by the viewer, as humans are susceptible to such illusions while machine vision still struggles to distinguish a cat from a loaf and a raisin bun from a spotted dog. The imperfections of diffusion models would seem to be a benefit here, as it will happily churn through abstractions and iterations with no understanding or interpretive bias, while the human can steer it towards a viable interpretation.

Catching The BOAT: Gamma-Ray Bursts and The Brightest of All Time

18 Septiembre 2024 at 14:00

Down here at the bottom of our ocean of air, it’s easy to get complacent about the hazards our universe presents. We feel safe from the dangers of the vacuum of space, where radiation sizzles and rocks whizz around. In the same way that a catfish doesn’t much care what’s going on above the surface of his pond, so too are we content that our atmosphere will deflect, absorb, or incinerate just about anything that space throws our way.

Or will it? We all know that there are things out there in the solar system that are more than capable of wiping us out, and every day holds a non-zero chance that we’ll take the same ride the dinosaurs took 65 million years ago. But if that’s not enough to get you going, now we have to worry about gamma-ray bursts, searing blasts of energy crossing half the universe to arrive here and dump unimaginable amounts of energy on us, enough to not only be measurable by sensitive instruments in space but also to affect systems here on the ground, and in some cases, to physically alter our atmosphere.

Gamma-ray bursts are equal parts fascinating physics and terrifying science fiction. Here’s a look at the science behind them and the engineering that goes into detecting and studying them.

Collapsars and Neutron Stars

Although we now know that gamma-ray bursts are relatively common, it wasn’t all that long ago that we were ignorant of their existence, thanks in part to our thick, protective atmosphere. The discovery of GRBs had to wait for the Space Race to couple with Cold War paranoia, which resulted in Project Vela, a series of early US Air Force satellites designed in part to watch for Soviet compliance with the Partial Test Ban Treaty, which forbade everything except underground nuclear tests. In 1967, gamma ray detectors on satellites Vela 3 and Vela 4 saw a flash of gamma radiation that didn’t match the signature of any known nuclear weapon. Analysis of the data from these and subsequent flashes revealed that they came from space, and the race to understand these energetic cosmic outbursts was on.

Trust, but verify. Vela 4, designed to monitor Soviet nuclear testing, was among the first satellites to detect cosmic gamma-ray bursts. Source: ENERGY.GOV, Public domain, via Wikimedia Commons

Gamma-ray bursts are the most energetic phenomena known, with energies that are almost unfathomable. Their extreme brightness, primarily as gamma rays but across the spectrum and including visible light, makes them some of the most distant objects ever observed. To put their energetic nature into perspective, a GRB in 2008, dubbed GRB 080319B, was bright enough in the visible part of the spectrum to just be visible to the naked eye even though it was 7.5 billion light years away. That’s more than halfway across the observable universe, 3,000 times farther away than the Andromeda galaxy, normally the farthest naked-eye visible object.

For all their energy, GRBs tend to be very short-lived. GRBs break down into two rough groups. Short GRBs last for less than about two seconds, with everything else falling into the long GRB category. About 70% of GRBs we see fall into the long category, but that might be due to the fact that the short bursts are harder to see. It could also be that the events that precipitate the long variety, hypernovae, or the collapse of extremely massive stars and the subsequent formation of rapidly spinning black holes, greatly outnumber the progenitor event for the short category of GRBs, which is the merging of binary neutron stars locked in a terminal death spiral.

The trouble is, the math doesn’t work out; neither of these mind-bogglingly energetic events could create a burst of gamma rays bright enough to be observed across half the universe. The light from such a collapse would spread out evenly in all directions, and the tyranny of the inverse square law would attenuate the signal into the background long before it reached us. Unless, of course, the gamma rays were somehow collimated. The current thinking is that a disk of rapidly spinning material called an accretion disk develops outside the hypernova or the neutron star merger. The magnetic field of this matter is tortured and twisted by its rapid rotation, with magnetic lines of flux getting tangled and torn until they break. This releases all the energy of the hypernova or neutron star merger in the form of gamma rays in two tightly focused jets aligned with the pole of rotation of the accretion disk. And if one of those two jets happens to be pointed our way, we’ll see the resulting GRB.
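
A back-of-the-envelope solid-angle calculation shows why collimation rescues the math. Assuming a jet half-angle of around five degrees (a commonly quoted ballpark, not a measured value for any particular burst), the concentration factor works out like this:

```python
import math

# Back-of-the-envelope solid-angle math for GRB beaming. The 5-degree jet
# half-angle below is a commonly assumed ballpark, not a measured value.

def beaming_factor(half_angle_deg):
    """How much brighter a burst appears when its energy is funneled into
    two opposed cones of the given half-angle rather than radiating
    isotropically: 4*pi steradians divided by the jets' solid angle."""
    theta = math.radians(half_angle_deg)
    omega_two_jets = 2 * (2 * math.pi * (1 - math.cos(theta)))  # two cones
    return 4 * math.pi / omega_two_jets

# A 5-degree jet concentrates the same energy a few hundred times over
# compared to an isotropic blast.
print(round(beaming_factor(5)))  # → 263
```

Funneling the same energy into two narrow cones makes the burst appear hundreds of times brighter to anyone sitting in the beam, which is exactly what it takes to remain visible across billions of light years.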

Crystals and Shadows

But how exactly do we detect gamma-ray bursts? The first trick is to get to space, or at least above the bulk of the atmosphere. Our atmosphere does a fantastic job shielding us from all forms of cosmic radiation, which is why the field of gamma-ray astronomy in general and the discovery of GRBs in particular had to wait until the 1960s. A substantial number of GRBs have been detected by gamma-ray detectors carried aloft on high-altitude balloons, especially in the early days, but most dedicated GRB observatories are now satellite-borne.

Gamma-ray detection technology has advanced considerably since the days of Vela, but a lot of the tried and true technology is still used today. Scintillation detectors, for example, use crystals that release photons of visible light when gamma rays of a specific energy pass through them. The photons can then be amplified by photomultiplier tubes, resulting in a pulse of current proportional to the energy of the incident gamma ray. This is the technology used by the Gamma-ray Burst Monitor (GBM) aboard the Fermi Gamma-Ray Space Telescope, a satellite that was launched in 2008. The GBM’s sensors are mounted around the main chassis of Fermi, giving it a complete view of the sky. It consists of twelve sodium iodide detectors, each of which is directly coupled to a 12.7-cm diameter photomultiplier tube. Two additional sensors are made from cylindrical bismuth germanate scintillators, each of which is sandwiched between two photomultipliers. Together, the fourteen sensors cover from 8 keV to 30 MeV, and used in concert they can tell where in the sky a gamma-ray burst has occurred.

The coded aperture for Swift’s BAT. Each tiny lead square casts a unique shadow pattern on the array of cadmium-zinc-telluride (CZT) ionization sensors, allowing an algorithm to work out the characteristics of the gamma rays falling on it. Source: NASA.

Ionization methods are also used as gamma-ray detectors. The Neil Gehrels Swift Observatory, a dedicated GRB hunting satellite that was launched in 2004, has an instrument known as the Burst Alert Telescope, or BAT. This instrument has a very large field of view and is intended to monitor a huge swath of sky. It uses 32,768 cadmium-zinc-telluride (CZT) detector elements, each 4 x 4 x 2 mm, to directly detect the passage of gamma rays. CZT is a direct-bandgap semiconductor in which electron-hole pairs are formed across an electric field when hit by ionizing radiation, producing a current pulse. The CZT array sits behind a fan-shaped coded aperture, which has thousands of thin lead tiles arranged in an array that looks a little like a QR code. Gamma rays hit the coded aperture first, casting a pattern on the CZT array below. The pattern is used to reconstruct the original properties of the radiation beam mathematically, since conventional mirrors and lenses don’t work with gamma radiation. The BAT is used to rapidly detect the location of a GRB and to determine if it’s something worth looking at. If it is, it rapidly slews the spacecraft to look at the burst with its other instruments and instantly informs other gamma observatories about the source so they can take a look too.
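
The reconstruction idea is easier to see in one dimension. In this stripped-down sketch, a point source casts a cyclically shifted copy of the mask onto the detector, and cross-correlating the counts with the known mask recovers the shift. (Real instruments like the BAT use 2-D masks and far more careful deconvolution; everything here is a toy.)

```python
import random

# Toy 1-D coded-aperture reconstruction. Real instruments use 2-D masks and
# more careful deconvolution; this only sketches the core correlation trick.

def shadow(mask, source_pos):
    """Detector counts for a point source: the mask pattern, cyclically
    shifted by the source's position on the sky."""
    n = len(mask)
    return [mask[(i - source_pos) % n] for i in range(n)]

def locate(mask, counts):
    """Recover the source position as the shift that best correlates the
    recorded counts with the known mask pattern."""
    n = len(mask)
    def correlation(s):
        return sum(counts[i] * mask[(i - s) % n] for i in range(n))
    return max(range(n), key=correlation)

random.seed(1)
mask = [random.randint(0, 1) for _ in range(64)]  # random open/closed tiles
counts = shadow(mask, 17)    # a burst at sky position 17 shadows the mask
print(locate(mask, counts))  # the correlation peak recovers position 17
```

This is why the mask pattern matters: it must have a sharp autocorrelation peak so that only the true source position lines up with all the open tiles at once.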

The B.O.A.T.

On October 9, 2022, both Swift and Fermi, along with dozens of other spacecraft and even some ground observatories, would get to witness a cataclysmically powerful gamma-ray burst. Bloodlessly named GRB 221009A but later dubbed “The BOAT,” for “brightest of all time,” the initial GRB lasted for an incredible ten minutes with a signal that remained detectable for hours. Coming from the direction of the constellation Sagittarius from a distance of 2.4 billion light years, the burst was powerful enough to saturate Fermi’s sensors and was ten times more powerful than any signal yet received by Swift.

The BOAT. A ten-hour time-lapse of data from the Fermi Large Area Telescope during GRB 221009A on October 8, 2022. Source: NASA/DOE/Fermi LAT Collaboration, Public domain

Almost everything about the BOAT is fascinating, and the superlatives are too many to list. The gamma-ray burst was so powerful that it showed up in the scientific data of spacecraft that aren’t even equipped with gamma-ray detectors, including orbiters at Mars and Voyager 1. Ground-based observatories noted the burst, too, with observatories in Russia and China noting very high-energy photons in the range of tens to hundreds of TeV arriving at their detectors.

The total energy released by GRB 221009A is hard to gauge with precision, mainly because it swamped the very instruments designed to measure it. Estimates range from 10⁴⁸ to 10⁵⁰ joules, either of which dwarfs the total output of the Sun over its entire 10 billion-year lifespan. So much energy was thrown in our direction in such a short timespan that even our own atmosphere was impacted. Lightning detectors in India and Germany were triggered by the burst, and the ionosphere suddenly started behaving as if a small solar flare had just occurred. Most surprising was that the ionospheric effects showed up on the daylight side of the Earth, swamping the usual dampening effect of the Sun.
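
That comparison is quick to sanity-check with round textbook numbers for the Sun (ballpark figures, not precise values):

```python
# Quick sanity check on the claim above, using round numbers: a solar
# luminosity of 3.8e26 W and a 10-billion-year lifespan.

SOLAR_LUMINOSITY_W = 3.8e26
SECONDS_PER_YEAR = 3.156e7
sun_lifetime_output_J = SOLAR_LUMINOSITY_W * 10e9 * SECONDS_PER_YEAR

low_estimate_J = 1e48  # the low end of the range quoted for GRB 221009A
print(f"Sun over 10 Gyr: {sun_lifetime_output_J:.1e} J")  # ~1.2e44 J
print(f"BOAT (low) / Sun: {low_estimate_J / sun_lifetime_output_J:.0f}x")
```

Even the conservative end of the estimate exceeds the Sun's entire lifetime output by a factor of thousands, all released in the span of minutes.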

When the dust had settled from the initial detection of GRB 221009A, the question remained: What happened to cause such an outburst? To answer that, the James Webb Space Telescope was tasked with peering into space, off in the direction of Sagittarius, where it found pretty much what was expected — the remains of a massive supernova. In fact, the supernova that spawned this GRB doesn’t appear to have been particularly special when compared to other supernovae from similarly massive stars, which leaves the question of how the BOAT got to be so powerful.

Does any of this mean that a gamma-ray burst is going to ablate our atmosphere and wipe us out next week? Probably not, and given that this recent outburst was estimated to be a one-in-10,000-year event, we’re probably good for a while. It seems likely that there’s plenty that we don’t yet understand about GRBs, and that the data from GRB 221009A will be pored over for decades to come. It could be that we just got lucky this time, both in that we were in the right place at the right time to see the BOAT, and that it didn’t incinerate us in the process. But given that on average we see one GRB per day somewhere in the sky, chances are good that we’ll have plenty of opportunities to study these remarkable events.

Hack On Self: Collecting Data

16 Septiembre 2024 at 14:00

A month ago, I talked about using computers to hack on our day-to-day existence, specifically, augmenting my sense of time (or rather, lack thereof). Collecting data has been super helpful – and it’s best to automate it as much as possible. Furthermore, an augment can’t be annoying beyond the level you expect, and making it context-sensitive is important – the augment needs to understand whether it’s the right time to activate.

I want to talk about context sensitivity – it’s one of the aspects that brings us closest to the sci-fi future; currently, in some good ways and many bad ways. Your device needs to know what’s happening around it, which means that you need to give it data beyond what the augment itself is able to collect. Let me show you how you can extract fun insights from collecting data, with an example of a data source you can easily tap while on your computer, talk about implications of data collections, and why you should do it despite everything.

Started At The Workplace, Now We’re Here

Around 2018-2019, I was doing a fair bit of gig work – electronics, programming, electronics and programming, sometimes even programming and electronics. Of course, for some, I billed per hour, and I was asked to provide estimates. How many hours does it take for me to perform task X?

I decided to collect data on what I do on my computer – to make sure I can bill people as fairly as possible, and also to try and improve my estimate-making skills. Fortunately, I do a lot of my work on a laptop – surely I could monitor it very easily? Indeed, and unlike Microsoft Recall, neither LLMs nor people were harmed during this quest. What could be a proxy for “what I’m currently doing”? For a start, currently focused window names.

All these alt-tabs, it feels like a miracle I manage to write articles sometimes

Thankfully, my laptop runs Linux, a hacker-friendly OS. I quickly wrote a Python script that polls the currently focused window, writing every change into a logfile, each day a new file. A fair bit of disk activity, but nothing that my SSDs can’t handle. Initially, I just let the script run 24/7, writing its silly little logs every time I Alt-Tabbed or opened a new window, checking them manually when I needed to give a client a retrospective estimate.
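
The heart of such a script is a small polling loop. Here's a minimal sketch: the real version would ask the window manager for the focused window title (via xdotool or the EWMH properties, say), but the title getter is injected below so the loop can run without a desktop session. All names here are my own guesses, not the actual code.

```python
import time
from datetime import datetime

# Minimal sketch of a focused-window logger. The title getter is injected;
# on a real desktop it would query the window manager instead.

def watch_windows(get_active_title, writeline, poll_s=0.5, iterations=None):
    """Poll the focused window title, emitting a timestamped line on change."""
    last = None
    polls = 0
    while iterations is None or polls < iterations:
        title = get_active_title()
        if title != last:  # only log actual switches, not every poll
            writeline(f"{datetime.now().isoformat()}\t{title}")
            last = title
        polls += 1
        if iterations is None or polls < iterations:
            time.sleep(poll_s)

# Demo with a canned sequence standing in for a real window manager:
titles = iter(["editor", "editor", "browser", "terminal", "terminal"])
log = []
watch_windows(lambda: next(titles), log.append, poll_s=0, iterations=5)
print([line.split("\t")[1] for line in log])  # only the switches are kept
```

Logging only on change is what keeps the files small despite 24/7 polling – an unchanged focus costs nothing but a comparison.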

I Alt-Tab a lot more than I expected, while somehow staying on course and making progress. Also, as soon as I started trying to sort log entries into types of activity, I was quickly reminded that categorizing data is a whole project in itself – it’s no wonder big companies outsource it to the Global South for pennies. In the end, I can’t tell you a lot about data processing here, but only because I ended up not bothering with it much, thinking that I would do it One Day – and I likely will mention it later on.

Collect Data, And Use Cases Will Come

Instead, over time, I came up with other uses for this data. As it ran in an always-open commandline window, I could always scroll up and see the timestamps. Of course, this meant I could keep tabs on things like my gaming habits – at least, after the fact. I fall asleep with my laptop by my side, and usually my laptop is one of the first things I check when I wake up. Quickly, I learned to scroll through the data to figure out when I went to sleep, when I woke up, and check how long I slept.

seriously, check out D-Feet – turns out there’s so, so much you can find on D-Bus!

I also started tacking features on the side. One thing I added was monitoring media file playback, logging it alongside window title changes. Linux systems expose this information over D-Bus, and there’s a ton of other useful stuff there too! And D-Bus is way easier to work with than I’d heard, especially when you use a GUI explorer like D-Feet to help you learn the ropes.

The original idea was figuring out how much time I was spending actively watching YouTube videos, as opposed to watching them passively in the background, and trying to notice trends. Another idea was to keep an independent YouTube watch history, since the YouTube-integrated one is notoriously unreliable. I never actually did either of these, but the data is there whenever I feel the need to do so.

Of course, having the main loop modifiable meant that I could add some hardcoded on-window-switch actions, too. For instance, at some point I was participating in a Discord community and I had trouble remembering a particular community rule. No big deal – I programmed the script to show me a notification whenever I switched into that server, reminding me of the rule.

whenever I wish, I have two years’ worth of data to learn from!

There is no shortage of information you can extract even from this simple data source. How much time do I spend talking to friends, and at which points in the day; how does that relate to my level of well-being? When I spend all-nighters on a project, how does the work graph look? Am I crashing by getting distracted into something unrelated, not asleep, but too sleepy to get up and get myself to bed? Can I estimate my focus levels at any point simply by measuring my Alt-Tab-bing frequency, then perhaps, measure my typing speed alongside and plot them together on a graph?
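
Mining that log afterwards is mostly a matter of pairing consecutive timestamps. Assuming a tab-separated timestamp-and-title line per focus change (my guess at the layout, not the actual format), tallying time per window might look like this:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Tally how long each window stayed focused, assuming one tab-separated
# "ISO-timestamp<TAB>title" line per focus change (a guessed format).

def time_per_window(lines):
    """Attribute the gap between consecutive entries to the earlier window."""
    totals = defaultdict(timedelta)
    events = [(datetime.fromisoformat(ts), title)
              for ts, title in (line.split("\t", 1) for line in lines)]
    for (start, title), (end, _) in zip(events, events[1:]):
        totals[title] += end - start
    return dict(totals)

log = [
    "2024-09-16T01:00:00\teditor",
    "2024-09-16T01:20:00\tbrowser",
    "2024-09-16T01:25:00\teditor",
    "2024-09-16T02:00:00\tterminal",  # open-ended: no duration attributed yet
]
for title, spent in sorted(time_per_window(log).items()):
    print(title, spent)
```

Alt-Tab frequency falls out of the same data for free: it's just the number of log entries per unit of time.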

Window title switches turned out to be a decent proxy for “what I’m currently doing with my computer”. Plus, it gives me a wonderful hook, of the “if I do X, I need to remember to do Y” variety – there can never be enough of those! Moreover, it provides me with sizeable amounts of data about myself, data that I now store. Some of you will be iffy about collecting such data – there are some good reasons for it.

Taking Back Power

We emit information just like we emit heat. As long as we are alive, there’s always something being digitized; even your shed in the woods is being observed by a spy satellite. The Internet revolution has made information emissivity increase exponentially, and the Internet now feeds on that data to grow itself: your data pays for online articles, songs, and YouTube videos. There are now entire databanks containing various small parts of your personality, far more than you could ever be comfortable with, enough to track your moves before you’re aware you’re making them.

:¬)

Cloning is not yet here, but the Internet already contains your clone – it can answer your bank’s security questions, with enough of your voice to impersonate you while doing so, not to mention all the little tidbits used to sway your purchasing power and voting preferences alike. When it comes to protections, all we have are pretenses like “privacy policies” and “data anonymization”. The EU is trying to move in the right direction through directives like the GDPR, with the Snowden revelations having left a deep mark, but it’s barely enough and not a consistent trend.

Just like with heat signatures, not taking care of your information signature gives you zero advantages and a formidable threat profile, but if you are tapped into it, you can protect people – or preserve dictatorships. Now, if anyone deserves to have power over you, it’s you, as opposed to an algorithm currently tracking your toilet paper purchases, which might be used tomorrow to catch weed smokers when it notices an increase in late-night snack runs. It’s already likely to be used to ramp up prices during an emergency, or just because of increased demand – that’s where all these e-ink pricetags come into play!

Isn’t It Ridiculous?

Your data will be collected by others no matter your preference, and it will not be shared with you, so you have to collect it yourself. Once you have it, you can use it to understand yourself better, compensate for your weaknesses, build healthier relationships with others, and live a more fulfilling and fun life overall. Collecting your own data also means knowing what others might collect and the power it provides, and this can help you fight and offset the damage you are bound to suffer because of datamining. Why are we not doing more of this, again?

We’ve got a lot to catch up on. Our conversations can get recorded with the ever-present networked microphones and then datamined, but you don’t get a transcript of that one phone call where you made a doctor’s appointment and forgot to note the appointment time. Your store knows how often you buy toilet paper, what with the loyalty cards we use to get discounts while linking our purchases to our identities, but they are not kind enough to send you a notification saying it might be time to restock. Ever looked back on a roadtrip you did and wished you had a GPS track saved? Your telco operators know your location well enough, now even better with 5G towers, but you won’t get a log. Oh, also, your data can benefit us all, in a non-creepy way.

Unlike police departments, scientists are bound by ethics codes and can’t just buy data without the data owner’s consent – but scientific research is where our data could seriously shine. In fact, scientific research thrives when we can provide it with data we’ve collected ourselves – just look at Apple Health. In particular, the social sciences could really use a boost in available data, as the reproducibility crisis has no end in sight – research does turn out to skew a certain way when your survey respondents are other social science students.

Grab the power that you’re owed, collect your own data, store it safely, and see where it gets you – you will find good uses for it, whether it’s self-improvement, scientific research, or just building a motorized rolling chair that brings you to your bed when it notices you’ve become too tired after hacking away all night. Speaking of which, my clock tells me it’s 5 AM.

Works, Helps, Grows

The code is on GitHub, for whatever purposes you find for it. This kind of program is a useful data source, and you could build it into other things you might want to create. This year, I slapped some websocket server code on top of the window monitoring code – now, other programs on my computer can connect to the websocket server, listen for messages, and make decisions based on my currently open windows and currently playing media. If you want to start tracking your computer activity right now, there are some promising programs you should consider – ActivityWatch looks really nice in particular.
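The publish-to-listeners pattern described above is simple to sketch. This is not the code from the repository – just a minimal stdlib illustration of the idea, with made-up field names: one producer emits a JSON message per window-focus change, and every connected listener receives a copy. In my case the actual transport is a websocket (e.g. via the third-party websockets package), but the message format and fan-out logic are the same.

```python
import asyncio
import json
import time

def window_event(app: str, title: str) -> str:
    # One JSON message per focus change; listeners dispatch on "type".
    return json.dumps({"type": "window_focus", "app": app,
                       "title": title, "ts": time.time()})

class Broadcaster:
    """Fans messages from one producer out to every connected listener."""
    def __init__(self) -> None:
        self.listeners: set[asyncio.Queue] = set()

    def subscribe(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self.listeners.add(q)
        return q

    def publish(self, message: str) -> None:
        # Non-blocking: slow listeners just accumulate a backlog.
        for q in self.listeners:
            q.put_nowait(message)

async def demo() -> str:
    hub = Broadcaster()
    inbox = hub.subscribe()
    hub.publish(window_event("firefox", "Hackaday – Fresh Hacks Every Day"))
    return await inbox.get()
```

A consumer then only needs to `json.loads()` each message and check the `"type"` field before acting on it.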

I have plans for computer activity tracking beyond today – from tracking typing on the keyboard, to condensing this data into ongoing activity summaries. When storing data you collect, make sure you include a version number from the start and increment it on every data format change. You will improve upon your data formats and you will want to parse them all, and you’ll be thankful for having a version number to refer to.
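One concrete way to do that versioning: stamp every record with the format version at write time, and upgrade older records at read time. The field names below are hypothetical, chosen just for illustration – the pattern is what matters.

```python
import json

FORMAT_VERSION = 2  # bump this on every change to the record layout

def encode_record(app: str, title: str, ts: float) -> str:
    # Every record carries the version it was written with.
    return json.dumps({"v": FORMAT_VERSION, "app": app,
                       "title": title, "ts": ts})

def decode_record(line: str) -> dict:
    rec = json.loads(line)
    version = rec.pop("v", 1)  # records predating versioning count as v1
    if version < 2:
        # In this sketch, v1 didn't record the application name,
        # so upgrade old records to the v2 shape with a placeholder.
        rec.setdefault("app", "unknown")
    return rec
```

With this in place, one parser handles your entire history: each `if version < N` branch upgrades older records step by step to the current shape.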

The GitHub-published portion is currently being used for a bigger project, where the window monitoring code plays a crucial part. Specifically, I wanted to write a companion program that would help me stay on track when working on specific projects on my laptop. In a week’s time, I will show you that program, talk about how I’ve come to create it and how it hooks into my brain, how much it helps me in the end, share the code, and give you yet another heap of cool things I’ve learned.

What other kinds of data could one collect?

A Look At The Small Web, Part 1

By: Jenny List
10 September 2024 at 14:00

In the early 1990s I was privileged enough to be immersed in the world of technology during the exciting period that gave birth to the World Wide Web, and I can honestly say I managed to completely miss those first stirrings of the information revolution in favour of CD-ROMs, a piece of technology which definitely didn’t have a future. I’ve written in the past about that experience and what it taught me about confusing the medium with the message, but today I’m returning to that period in search of something else. How can we regain some of the things that made that early Web good?

We All Know What’s Wrong With The Web…

It’s likely most Hackaday readers could recite a list of problems with the web as it exists here in 2024. Cory Doctorow coined a word for it, enshittification, referring to the shift of web users from being the consumers of online services to being the product of those services, squeezed by a few Internet monopolies. A few massive corporations control so much of our online experience, from the server to the browser, that for many people there is very little they touch outside those confines.

A screenshot of the first ever web page
The first ever web page is maintained as a historical exhibit by CERN.

Contrasting the enshittified web of 2024 with the early web, it’s not difficult to see how some of the promise was lost. Perhaps not the web of Tim Berners-Lee and his NeXT cube, but the one of a few years later, when Netscape was the new kid on the block to pair with your Trumpet Winsock. CD-ROMs were about to crash and burn, and I was learning how to create simple HTML pages.

The promise then was of a decentralised information network in which we would all have our own websites, or homepages as the language of the time put it, on our own servers. Microsoft even gave their users the tools to do this with Windows, in that the least technical of users could put a Frontpage Express web site on their Personal Web Server instance. This promise seems fanciful to modern ears, as fanciful perhaps as keeping the overall size of each individual page under 50k, but at the time it seemed possible.

With such promise then, just how did we end up here? I’m sure many of you will chip in in the comments with your own takes, but of course, setting up and maintaining a web server is either hard, or costly. Anyone foolish enough to point their Windows Personal Web Server directly at the Internet would find their machine compromised by script kiddies, and having your own “proper” hosting took money and expertise. Free stuff always wins online, so in those early days it was the likes of Geocities or Angelfire which drew the non-technical crowds. It’s hardly surprising that this trend continued into the early days of social media, starting the inevitable slide into today’s scene described above.

…So Here’s How To Fix It

If there’s a ray of hope in this wilderness then, it comes in the shape of the Small Web. This is a movement in reaction to a Facebook or Google internet, an attempt to return to that mid-1990s dream of a web of lightweight self-hosted sites. It’s a term which encompasses both lightweight use of traditional web technologies and some new ones designed more specifically to deliver lightweight services, and it’s fair to say that while it’s not going to displace those corporations any time soon it does hold the interesting prospect of providing an alternative. From a Hackaday perspective we see Small Web technologies as ideal for serving and consuming through microcontroller-based devices, for instance, such as event badges. Why shouldn’t a hacker camp badge have a Gemini client which picks up the camp schedule, for example? Because the Small Web is something of a broad term, this is the first part of a short series providing an introduction to the topic. We’ve set out here what it is and where it comes from, so it’s now time to take a look at some of those 1990s beginnings in the form of Gopher, before looking at what some might call its spiritual successors today.

A screenshot of a browser with a very plain text page.
An ancient Firefox version shows us a Gopher site. Ph0t0phobic, MPL 1.1.

It’s odd to return to Gopher after three decades, as it’s one of those protocols which was, for most of us, immediately lost as the Web gained traction. Particularly as at the time I associated Gopher with CLI-based clients and the Web with the then-new NCSA Mosaic, I’d retained that view somehow. It’s interesting then to come back and look at how the first generation of web browsers rendered Gopher sites, and see that they did a reasonable job of making them look a lot like the more texty web sites of the day. In another universe perhaps Gopher would have evolved further into something more like the web, but instead it remains an ossified glimpse of 1992, even if there are still a surprising number of active Gopher servers to be found. There’s a re-imagined version of the Veronica search engine, and some fun can be had browsing this backwater.

With the benefit of a few decades of the Web, it’s immediately clear that while Gopher is very fast indeed in the days of 64-bit desktops and gigabit fibre, the limitations of what it can do are rather obvious. We’re used to consuming information as pages instead of as files, and it just doesn’t meet those expectations. Happily, though Gopher never made those modifications, there’s something like what it might have become in Gemini. This is a lightweight protocol like Gopher, but with a page format that allows hyperlinking. Intentionally, it’s not simply trying to re-implement the web and HTML; instead it’s trying to preserve the simplicity while giving users the hyperlinking that makes the web so useful.

A Kennedy search engine Gemini search page for "Hackaday".
It feels a lot like the early 1990s Web, doesn’t it?

The great thing about Gemini is that it’s easy to try. The Gemini protocol website has a list of known clients, but if even that’s too much, find a Gemini-to-HTTP proxy (I’m not linking to one, to avoid swamping someone’s low-traffic web server). I was soon up and running and exploring the world of Gemini sites. Hackaday doesn’t have a presence there… yet.
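If you’d rather poke at the protocol itself, a toy client fits in a few lines. A Gemini request is simply the absolute URL followed by CRLF, sent over TLS on port 1965, and the response begins with a header like `20 text/gemini` before the page body. The sketch below skips certificate handling entirely (real clients use trust-on-first-use pinning, since Gemini servers typically present self-signed certificates), so treat it as an illustration, not a proper client.

```python
import socket
import ssl
from urllib.parse import urlparse

def build_request(url: str) -> bytes:
    # A request is the absolute URL plus CRLF; URLs are capped at 1024 bytes.
    encoded = url.encode("utf-8")
    if len(encoded) > 1024:
        raise ValueError("Gemini URLs are limited to 1024 bytes")
    return encoded + b"\r\n"

def parse_header(header: str) -> tuple[int, str]:
    # Response header: two-digit status code, a space, then the meta field
    # (the MIME type on success, or an explanatory string otherwise).
    status, _, meta = header.partition(" ")
    return int(status), meta

def fetch(url: str) -> str:
    host = urlparse(url).hostname
    context = ssl.create_default_context()
    context.check_hostname = False       # self-signed certs are the norm,
    context.verify_mode = ssl.CERT_NONE  # so skip CA verification in this toy
    with socket.create_connection((host, 1965)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(build_request(url))
            response = tls.makefile("rb").read()
    header, _, body = response.partition(b"\r\n")
    status, meta = parse_header(header.decode("utf-8"))
    return body.decode("utf-8") if status == 20 else f"[{status}] {meta}"
```

That’s the entire request/response cycle – no headers to negotiate, no cookies, no state, which goes a long way toward explaining the speed discussed below.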

We’ve become so used to web pages taking a visible time to load that the lightning-fast response of Gemini is a bit of a shock at first. It’s normal for a web page to contain many megabytes of images, JavaScript, CSS, and other resources, so what is in effect the Web stripped down to only the information is unexpected. The pages are only a few kilobytes in size and load, in effect, instantaneously. This may not be how the Web should be, but it’s certainly how fast and efficient hypertext information should be.

This has been part 1 of a series on the Small Web. In looking at the history and the Gemini protocol from a user perspective, we know we’ve only scratched the surface of the topic. Next time we’ll be looking at how to create a Gemini site of your own, by learning how ourselves.

Shedding New Light on the Voynich Manuscript With Multispectral Imaging

10 September 2024 at 11:00

The Voynich Manuscript is a medieval codex written in an unknown alphabet and is replete with fantastic illustrations as unusual and bizarre as they are esoteric. It has captured interest for hundreds of years, and expert [Lisa Fagin Davis] shared interesting results from using multispectral imaging on some pages of this highly unusual document.

We should make it clear up front that the imaging results have not yielded a decryption key (nor a secret map or anything of the sort), but the detailed write-up and freely-downloadable imaging results are fascinating reading for anyone interested in either the manuscript itself, or just how exactly multispectral imaging is applied to rare documents. Modern imaging techniques might get leveraged into things like authenticating sealed packs of Pokémon cards, but that’s not all they can do.

Because multispectral imaging involves things outside our normal perception, the results require careful analysis rather than intuitive interpretation. Here is one example: multispectral imaging may reveal faded text “between the lines” of other text and invite leaping to conclusions about hidden or erased content. But the faded text could be the result of show-through (content from the opposite side of the page being picked up) or an offset (a page picking up ink and pigment from the facing page after being closed for centuries).

[Lisa] provides a highly detailed analysis of specific pages, and explains the kind of historical context and evidence this approach yields. Make some time to give it a read if you’re at all interested, we promise it’s worth your while.

Getting Root on Cheap WiFi Repeaters, the Long Way Around

5 September 2024 at 11:00

What can you do with a cheap Linux machine with limited flash and only a single free GPIO line? Probably not much, but sometimes, just getting root to prove you can is the main goal of a project. If that happens to lead somewhere useful, well, that’s just icing on the cake.

Like many interesting stories, this one starts on AliExpress, where [Easton] spied some low-cost WiFi repeaters, the ones that plug directly into the wall and extend your wireless network another few meters or so. Unable to resist the siren song, a few of these dongles showed up in the mailbox, ripe for the hacking. Spoiler alert: although the attempt on the first device had some success by getting a console session through the UART port and resetting the root password, [Easton] ended up bricking the repeater while trying to install an OpenWRT image.

The second attempt, this time on a different but similar device, proved more fruitful. The rudimentary web UI provided no easy path in, although it did a pretty good job enumerating the hardware [Easton] was working with. With the UART route only likely to provide temptation to brick this one too, [Easton] turned to a security advisory about a vulnerability that allows remote code execution through a specially crafted SSID. That means getting root on these dongles is as simple as a curl command — no hardware hacks needed!

As for what to do with a bunch of little plug-in Linux boxes with WiFi, we’ll leave that up to your imagination. We like [Easton]’s idea of running something like Pi-Hole on them; maybe Home Assistant would be possible, but these are pretty resource-constrained machines. Still, the lessons learned here are valuable, and at this price point, let the games begin.

Ultra-Black Material, Sustainably Made from Wood

2 September 2024 at 08:00

Researchers at the University of British Columbia leveraged an unusual discovery into an ultra-black material made from wood. The deep, dark black is not the result of any sort of dye or surface coating; it’s a structural change to the wood itself that causes it to swallow up at least 99% of incoming light.

One of a number of prototypes for watch faces and jewelry.

The discovery was partially accidental, as researchers happened upon it while looking at using high-energy plasma etching to machine the surface of wood in order to improve its water resistance. In the process of doing so, they discovered that with the right process applied to the right thickness and orientation of wood grain, the plasma treatment resulted in a surprisingly dark end result. Fresh from the plasma chamber, a wood sample has a thin coating of white powder that, once removed, reveals an ultra-black surface.

The resulting material has been dubbed Nxylon (the name comes from mashing together Nyx, the Greek goddess of darkness, with xylon the Greek word for wood) and has been prototyped into watch faces and jewelry. It’s made from natural materials, the treatment doesn’t create or involve nasty waste, and it’s an economical process. For more information, check out UBC’s press release.

You have probably heard about Vantablack (and how you can’t buy any) and artist Stuart Semple’s ongoing efforts at making ever-darker and accessible black paint. Blacker than black has applications in optical instruments and is a compelling thing in the art world. It’s also very unusual to see an ultra-black anything that isn’t the result of a pigment or surface coating.

It Turns Out, A PCB Makes A Nice Watch Dial

By: Lewin Day
26 August 2024 at 20:00

Printed circuit boards are typically only something you’d find in a digital watch. However, as [IndoorGeek] demonstrates, you can put them to wonderful use in a classical analog watch, too. They can make the perfect watch dial!

Here’s the thing. A printed circuit board is fundamentally some fiberglass coated in soldermask, some copper, maybe a layer of gold plating, and with some silk screen on top of that. As we’ve seen a million times, it’s possible to do all kinds of artistic things with PCBs; a watch dial seems almost obvious in retrospect!

[IndoorGeek] steps through using Altium Designer and AutoCAD to lay out the watch face. The guide also covers the assembly of the watch face into an actual wrist watch, including the delicate placement of the movement and hands. They note that there are also opportunities to go further—such as introducing LEDs into the watch face, given that it is a PCB, after all!

It’s a creative way to make a hardy and accurate watch face, and we’re surprised we haven’t seen more of this sort of thing before. That’s not to say we haven’t seen other kinds of watch hacks, though; for those, there have been many. Video after the break.

How Jurassic Park’s Dinosaur Input Device Bridged the Stop-Motion and CGI Worlds

By: Maya Posch
21 August 2024 at 11:00
Raptor DID. Photo by Matt Mechtley.

In a double-blast from the past, [Ian Failes]’ 2018 interview with [Phil Tippett] and others who worked on Jurassic Park is a great look at how the dinosaurs in this 1993 blockbuster movie came to be. Originally conceived as stop-motion animatronics with some motion blurring applied using a method called go-motion, a large team of puppeteers was hard at work turning the book into a movie when [Steven Spielberg] decided to go in a different direction after seeing a computer-generated Tyrannosaurus rex test made by Industrial Light and Magic (ILM).

Naturally, this left [Phil Tippett] and his crew rather flabbergasted, leading to a range of puppeteering-related extinction jokes. Of course, it was the early 90s, with computer-generated imagery (CGI) animators still very scarce. This led to an interesting hybrid solution where [Tippett]’s team was put in charge of the dinosaur motion using a custom gadget called the Dinosaur Input Device (DID). This was effectively like a stop-motion puppet, but tricked out with motion capture sensors.

This way the puppeteers could provide motion data for the CG dinosaurs using their stop-motion skills, albeit with the computer handling a lot of interpolation. Meanwhile, ILM could handle the integration and sprucing up of the final result using their existing pool of artists. As a bridge between the old and the new, DIDs provided the means for both puppeteers and CGI artists to cooperate, creating the first major CGI production that still holds up today.

Even if DIDs went the way of the non-avian dinosaurs, their legacy will forever leave their dino-sized footprints on the movie industry.

Thanks to [Aaron] for the tip.


Top image: Raptor DID. Photo by Matt Mechtley.

Australia Didn’t Invent WiFi, Despite What You’ve Heard

By: Lewin Day
20 August 2024 at 14:00

Wireless networking is all-pervasive in our modern lives. Wi-Fi technology lives in our smartphones, our laptops, and even our watches. Internet access is available to be plucked out of the air in virtually every home across the country. Wi-Fi has been one of the grand computing revolutions of the past few decades.

It might surprise you to know that Australia proudly claims the invention of Wi-Fi as its own. It had good reason to, as well, given the money that would surely be due to the creators of the technology. However, dig deeper, and you’ll find things are altogether more complex.

Big Ideas

The official Wi-Fi logo.

It all began at the Commonwealth Scientific and Industrial Research Organization, or CSIRO. The government agency has a wide-ranging brief to pursue research goals across many areas. In the 1990s, this extended to research into various radio technologies, including wireless networking.

The CSIRO is very proud of what it achieved, crediting itself with “Bringing WiFi to the world.” It’s a common piece of trivia thrown around the pub as a bit of national pride—it was scientists Down Under that managed to cook up one of the biggest technologies of recent times!

This might sound a little confusing to you if you’ve looked into the history of Wi-Fi at all. Wasn’t it the IEEE that established the working group for 802.11? And wasn’t it that standard that was released to the public in 1997? Indeed, it was!

The fact is that many groups were working on wireless networking technology in the 1980s and 1990s. Notably, the CSIRO was among them, but it wasn’t the first by any means—nor was it involved with the group behind 802.11. That group formed in 1990, while the precursor to 802.11 was actually developed by NCR Corporation/AT&T in a lab in the Netherlands in 1991. The first standard of what would later become Wi-Fi—802.11-1997—was established by the IEEE based on a proposal by Lucent and NTT, with a bitrate of just 2 Mbit/s and operating at 2.4 GHz. This standard operated using either frequency-hopping or direct-sequence spread spectrum technology. It later developed into the popular 802.11b standard in 1999, which upped the speed to 11 Mbit/s. 802.11a came later, switching to 5 GHz and using a modulation scheme based around orthogonal frequency division multiplexing (OFDM).

A diagram from the CSIRO patent for wireless LAN technology, dated 1993.

Given that we apparently know who invented Wi-Fi, why are Australians allegedly taking credit? Well, it all comes down to patents. A team at the CSIRO had long been developing wireless networking technologies of its own. In fact, the group filed a patent on 19 November 1993 entitled “Invention: A Wireless Lan.” The crux of the patent was the idea of using multicarrier modulation to get around a frustrating problem—that of multipath interference in indoor environments. This was followed up with a later US patent in 1996 along the same lines.

The patents were filed because the CSIRO team reckoned they’d cracked wireless networking at rates of many megabits per second. But the details differ quite significantly from the modern networking technologies we use today. Read the patents, and you’ll see repeated references to “operating at frequencies in excess of 10 GHz.” Indeed, the diagrams in the patent documents refer to transmissions in the 60 to 61 GHz range. That’s rather different from the mainstream Wi-Fi standards established by the IEEE. The CSIRO tried over the years to find commercial partners with which to establish its technology; however, little came of it, barring a short-lived start-up called Radiata that was swallowed up by Cisco, never to be seen again.

Steve Jobs shocked the crowd with a demonstration of the first mainstream laptop with wireless networking in 1999. Funnily enough, the CSIRO name didn’t come up.

Based on the fact that the CSIRO wasn’t in the 802.11 working group, and that its patents don’t correspond to the frequencies or specific technologies used in Wi-Fi, you might assume that the CSIRO wouldn’t have any right to claim the invention of Wi-Fi. And yet, the agency’s website could very much give you that impression! So what’s going on?

The CSIRO had been working on wireless LAN technology at the same time as everyone else. It had, by and large, failed to directly commercialize anything it had developed. However, the agency still had its patents. Thus, in the 2000s, it contested that it effectively held the rights to the techniques developed for effective wireless networking, and that those techniques were used in Wi-Fi standards. After writing to multiple companies demanding payment, it came up short. The CSIRO started taking wireless networking companies to court, charging that various companies had violated its patents and demanding heavy royalties, up to $4 per device in some cases. It argued that its scientists had come up with a unique combination of OFDM multiplexing, forward error correction, and interleaving that was key to making wireless networking practical.

An excerpt from the CSIRO’s Australian patent filing in 1993. The agency’s 1996 US patent covers much of the same ground.

A first test case against a Japanese company called Buffalo Technology went the CSIRO’s way. A follow-up case in 2009 targeted a group of 14 companies. After four days of testimony, the case would have gone to a jury decision, many members of which would not have been particularly well versed in the finer points of radio communications. The matter was instead settled for $205 million in the CSIRO’s favor. 2012 saw the Australian group go again, taking on a group of nine companies including T-Mobile, AT&T, Lenovo, and Broadcom. This case ended in a further $229 million settlement paid to the CSIRO.

We know little about what went on in these cases, nor the negotiations involved. Transcripts from the short-lived 2009 case had defence lawyers pointing out that the modulation techniques used in the Wi-Fi standards had been around for decades prior to the CSIRO’s later wireless LAN patent.  Meanwhile, the CSIRO stuck to its guns, claiming that it was the combination of techniques that made wireless LAN possible, and that it deserved fair recompense for the use of its patented techniques.

Was this valid? Well, to a degree, that’s how patents work. If you patent an idea, and it’s deemed unique and special, you can generally demand payment from others who would like to use it. For better or worse, the CSIRO was granted a US patent for its combination of techniques for wireless networking. Other companies may have come to similar conclusions on their own, but they didn’t get a patent for it, and that left them open to very expensive litigation from the CSIRO.

However, there’s a big caveat here. None of this means that the CSIRO invented Wi-Fi. These days, the agency’s website is careful with the wording, noting that it “invented Wireless LAN.”

The CSIRO has published several comics about the history of Wi-Fi, which might confuse some as to the agency’s role in the standard. This paragraph is a more reserved explanation, though it accuses other companies of having “less success”—a bold statement given that 802.11 was commercially successful, and the CSIRO’s 60 GHz ideas weren’t. Credit: CSIRO website via screenshot

It’s certainly valid to say that the CSIRO’s scientists did invent a wireless networking technique. The problem is that in the mass media, this has commonly been simplified into saying that the agency invented Wi-Fi, which it obviously did not. Of course, this misconception doesn’t hurt the agency’s public profile one bit.

Ultimately, the CSIRO did file some patents. It did come up with a wireless networking technique in the 1990s. But did it invent Wi-Fi? Certainly not. And many will contest that the agency’s patents should not have earned it any money from equipment built to standards it had no role in developing. Still, the myth will persist for some time to come. At least until someone writes a New York Times bestseller on the true and exact history of the real Wi-Fi standards. Can’t wait.

Laser Art Inspired by the Ford Motor Company

17 August 2024 at 08:00

Have you ever heard of Fordite? It’s a man-made, agate-like stone that originated in the Ford auto factories in the 1920s. Multiple layers of paint would build up as cars were painted different colors, and when the buildup was thick enough, workers would cut it, polish it, and use it in jewelry. [SheltonMaker] uses a similar technique to create artwork using a laser engraver, and shares how it works by showing off a replica of [Van Gogh’s] “Starry Night.”

A piece of Fordite on a pendant

The technique does have some random variation, so the result isn’t a perfect copy, but hey, it is art, after all. While true Fordite has random color layers, this technique uses specific colors layered from the lightest to the darkest. Each layer of paint is applied to a canvas. Only after all the layers are in place does the canvas go under the laser.

The first few layers of paint are white and serve as a backer. Each subsequent layer is darker until the final black layer. The idea is that the laser will cut at different depths depending on the desired lightness. A program called ImagR prepared the image as a negative image. Adjustments to the brightness, contrast, and gamma will impact the final result.
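The per-pixel math behind that preparation is simple enough to sketch yourself. The project used ImagR for this step, so the code below is only a hedged illustration of the general idea, not its actual processing: light areas of the target image must be engraved deepest to expose the light paint layers underneath, so the grayscale value is inverted first, and a gamma adjustment then biases the midtones one way or the other.

```python
def prepare_pixel(value: int, gamma: float = 1.0) -> int:
    """Map a 0-255 grayscale value to an engraving value.

    Light target pixels need the deepest cut (down to the white base
    layers), so the image is inverted: in the output, darker means more
    laser power. Gamma != 1.0 then shifts the midtones.
    """
    inverted = 255 - value
    return round(255 * (inverted / 255) ** gamma)

def prepare_image(pixels: list[int], gamma: float = 1.0) -> list[int]:
    # Apply the same mapping to a flat list of grayscale pixels.
    return [prepare_pixel(p, gamma) for p in pixels]
```

Adjusting brightness and contrast before this step, as the write-up suggests, changes which paint layer each midtone ultimately lands on.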

Of course, getting the exact power settings right is tricky. The best results came from starting at a relatively low power and then making more passes at an even lower power until things looked right. In between passes, compressed air cleared away the debris, although you have to be careful not to move the piece, of course.

There are pictures of each pass, and the final product looks great. If art’s not your thing, you can also do chip logos. While the laser used in this project is a 40-watt unit, we’ve noted before that wattage isn’t everything. You could do this—probably slower—with a lower-powered engraver.

Fordite image By [Rhonda]  CC BY-SA 2.0.
