As we watched the latest SpaceX Starship rocket test end in a spectacular explosion, we might have missed the news from Japan of a different rocket passing a successful test. We all know Honda as a car company but it seems they are in the rocket business too, and they successfully tested a reusable rocket. It’s an experimental 900 kg model that flew to a height of 300 m before returning itself to the pad, but it serves as a valuable test platform for Honda’s take on the technology.
It’s a research project as it stands, but it’s being developed with an eye towards future low-cost satellite launches rather than as a crew launch platform. As a news story though it’s of interest beyond its technology, because it’s too easy to miss news from the other side of the world when all eyes are looking at Texas. It’s the latest in a long line of interesting research projects from the company, and we hope that this time they resist the temptation to kill their creation rather than bring it to market.
We take it for granted that we almost always have cell service, no matter where we go around town. But there are places — the desert, the forest, or the ocean — where you might not have cell service. In addition, there are certain jobs where you must be able to make a call even if the cell towers are down, for example, after a hurricane. Recently, a combination of technological advancements has made it possible for your ordinary cell phone to connect to a satellite for at least some kind of service. But before that, you needed a satellite phone.
On TV and in movies, these are simple. You pull out your cell phone that has a bulkier-than-usual antenna, and you make a call. But the real-life version is quite different. While some satellite phones were connected to something like a ship, I’m going to consider a satellite phone, for the purpose of this post, to be a handheld device that can make calls.
History
Satellites have been relaying phone calls for a very long time. Early satellites carried voice transmissions in the late 1950s. But it would be 1979 before Inmarsat would provide MARISAT for phone calls from sea. It was clear that the cost of operating a truly global satellite phone system would be too high for any single country, but it would be a boon for ships at sea.
Inmarsat started as a UN organization meant to create a satellite network for maritime operations. It would grow to operate 15 satellites and become a private British-based company in 1998. However, by the late 1990s, there were competing companies like Thuraya, Iridium, and GlobalStar.
An IsatPhone-Pro (CC-BY-SA-3.0 by [Klaus Därr])
The first commercial satellite phone call was in 1976. The oil platform “Deep Sea Explorer” had a call with Phillips Petroleum in Oklahoma from the coast of Madagascar. Keep in mind that these early systems were not what we think of as mobile phones. They were more like portable ground stations, often with large antennas.
For example, here was part of a press release for a 1989 satellite terminal:
…small enough to fit into a standard suitcase. The TCS-9200 satellite terminal weighs 70lb and can be used to send voice, facsimile and still photographs… The TCS-9200 starts at $53,000, while Inmarsat charges are $7 to $10 per minute.
Keep in mind, too, that in addition to the briefcase, you needed an antenna. If you were lucky, your antenna folded up and, when deployed, looked a lot like an upside-down umbrella.
However, Iridium launched specifically to bring a handheld satellite phone service to the market. The first call? In late 1998, U.S. Vice President Al Gore dialed Gilbert Grosvenor, the great-grandson of Alexander Graham Bell. The phones looked like very big “brick” phones with a very large antenna that swung out.
Of course, all of this was during the Cold War, so the USSR also had its own satellite systems: Volna and Morya, in addition to military satellites.
Location, Location, Location
The earliest satellite phone systems used satellites that make one orbit of the Earth each day, which means they orbit at a very specific height. Higher orbits would cause the Earth to appear to move under the satellite, while lower orbits would have the satellite racing around the Earth.
That means that, from the ground, it looks like they never move. This gives reasonable coverage as long as you can “see” the satellite in the sky. However, it means you need better transmitters, receivers, and antennas.
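If you want to put a number on that “very specific height”, Kepler’s third law gets you there in a couple of lines. This is just standard orbital mechanics, not anything from the phone operators themselves:

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378_137         # m, equatorial radius
T_SIDEREAL = 86_164.1       # s, one rotation of the Earth relative to the stars

# Kepler's third law: a^3 = mu * T^2 / (4 * pi^2)
a = (MU_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"Geostationary altitude: {(a - R_EARTH) / 1000:,.0f} km")   # ~35,786 km
```

That 36,000 km each way is also why geosynchronous phone links carry a noticeable delay and why the handsets need beefier antennas.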
This is how Inmarsat and Thuraya worked. Unless there is some special arrangement, a geosynchronous satellite only covers about 40% of the Earth.
Getting a satellite into such a high orbit is challenging, and there are only so many available “slots” in the exact orbit required to be geosynchronous. That’s why other companies like Iridium and Globalstar wanted an alternative.
That alternative is to have satellites in lower orbits. It is easier to talk to them, and you can blanket the Earth. However, for full coverage of the globe, you need at least 40 or 50 satellites.
The system is also more complex. Each satellite is only overhead for a few minutes, so you have to switch between orbiting “cell towers” all the time. If there are enough satellites, it can be an advantage because you might get blocked from one satellite by, say, a mountain, and just pick up a different one instead.
Globalstar used 48 satellites, but couldn’t cover the poles. They eventually switched to a constellation of 24 satellites. Iridium, on the other hand, operates 66 satellites and claims to cover the entire globe. The satellites can beam signals to the Earth or each other.
The Problems
There are a variety of issues with most, if not all, satellite phones. First, geosynchronous satellites won’t work if you are too far north or south, since the satellite will sit so low on the horizon that trees and mountains get in the way. Of course, they don’t work if you are on the wrong side of the world, either, unless there is a network of them.
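To put rough numbers on the too-far-north-or-south problem, here is a quick sketch of how high a geostationary satellite sits in the sky as you move away from the equator. It assumes you are standing on the satellite’s own longitude, which is the best case:

```python
import math

R_EARTH = 6378.0   # km
R_GEO = 42_164.0   # km, geostationary orbital radius

def elevation_deg(lat_deg):
    """Best-case elevation of a geostationary satellite for an observer
    at the given latitude, directly on the satellite's longitude."""
    g = math.radians(lat_deg)
    return math.degrees(math.atan2(math.cos(g) - R_EARTH / R_GEO, math.sin(g)))

for lat in (30, 50, 60, 70, 80):
    print(f"{lat:2d}° latitude: satellite {elevation_deg(lat):4.1f}° above the horizon")
```

By the time you are in the far north, the satellite is only a degree or two above the horizon, and any tree line or ridge will do the blocking for you.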
Getting a signal indoors is tricky. Sometimes, it is tricky outdoors, too. And this isn’t cheap. Prices vary, but soon after handheld service launched, phones started at around $1,300, and then you paid $7 a minute to talk. The geosynchronous satellites, in particular, are subject to getting blocked momentarily by just about anything. The same can happen if you have too few satellites in the sky above you.
Modern pricing is a bit harder to figure out because of all the different plans. However, expect to pay between $50 and $150 a month, plus per-minute charges ranging from $0.25 to $1.50 per minute. In general, networks with less coverage are cheaper than those that work everywhere. Text messages are extra. So, of course, is data.
If you want to see what it really looked like to use a 1990s-era Iridium phone, check out the video from [saveitforparts] below.
If you prefer to see an older non-phone system, check him out with an even older Inmarsat station in this video:
Time series of O2 (blue) and VGADM (red). (Credit: Weijia Kuang, Science Advances, 2025)
In an Earth-sized take on the age-old ‘correlation or causality’ question, researchers have come across a fascinating match between Earth’s magnetic field and its oxygen levels since the Cambrian explosion, about 500 million years ago. The full results by [Weijia Kuang] et al. were published in Science Advances, where the authors speculate that this high correlation between the geomagnetic dipole and oxygen levels as recorded in the Earth’s geological mineral record may be indicative of the Earth’s geological processes affecting the evolution of lifeforms in its biosphere.
As with any such correlation, one has to entertain the notion that it might be spurious or only indirectly related before assuming a strong causal link. It is already known, for example, that the solar wind affects the Earth’s atmosphere and with it the geomagnetic field: more intense solar wind increases the loss of oxygen into space, but it changes only the shape of the geomagnetic field, not its strength. The question is thus whether there is a mechanism that affects the field strength itself and consequently causes the loss of oxygen to the solar wind to spike.
Here the authors suggest that the Earth’s core dynamics – critical to generating the geomagnetic field – may play a major role, with core-mantle interactions over the course of millions of years conceivably driving both. As supercontinents like Pangea formed, broke up, and partially reformed again, this material solidifying and melting could have been the underlying cause of these fluctuations in oxygen levels and magnetic field strength.
Although it is hard to say at this point, it may very well be that the link is real, with both oxygen levels and field strength being symptoms of the same activity in the Earth’s core and mantle.
We like scale models here, but how small can you shrink the very large? If you’re [Frans], it’s pretty small indeed: his Micro Tellurium fits the orbit of the Earth on top of an ordinary pencil. While you’ll often see models of Earth, Moon and Sun’s orbital relationship called “Orrery”, that’s word should technically be reserved for models of the solar system, inclusive of at least the classical planets, like [Frans]’s Gentleman’s Orrery that recently graced these pages. When it’s just the Earth, Moon and Sun, it’s a Tellurium.
The whole thing is made out of brass, save for the ball-bearings for the Earth and Moon. Construction was done by a combination of manual milling and CNC machining, as you can see in the video below. It is a very elegant device, and almost functional: the Earth-Moon system rotates, simulating the orbit of the moon when you turn the ring to make the Earth orbit the sun. This is accomplished by carefully-constructed rods and a rubber O-ring.
Unfortunately, it seems [Frans] had to switch to a thicker axle than originally planned, so the tiny moon does not orbit Earth at the correct speed relative to the solar orbit: it’s about half what it ought to be. That’s unfortunate, but perhaps that’s the cost one pays when chasing smallness. It might be possible to fix in a future iteration, but right now [Frans] is happy with how the project turned out, and we can’t blame him; it’s a beautiful piece of machining.
It should be noted that there is likely no tellurium in this tellurium — the metal and the model share the same root, but are otherwise unrelated. We have featured hacks with that element, though.
Thanks to [Frans] for submitting this hack. Don’t forget: the tips line is always open, and we’re more than happy to hear you toot your own horn, or sing the praises of someone else’s work.
When we were kids, it was a rite of passage to read the newly arrived Edmund catalog and dream of building our own telescope. One of our friends lived near a University, and they even had a summer program that would help you measure your mirrors and ensure you had a successful build. But most of us never ground mirrors from glass blanks and did all the other arcane steps required to make a working telescope. However, [La3emedimension] wants to tempt us again with a 3D-printable telescope kit.
Before you fire up the 3D printer, be aware that PLA is not recommended, and, of course, you are going to need some extra parts. There is supposed to be a README with a bill of parts, but we didn’t see it. However, there is a support page in French and a Discord server, so we have no doubt it can be found.
It is possible to steal the optics from another telescope or, of course, buy new. You probably don’t want to grind your own mirrors, although good on you if you do! You can even buy the entire kit if you don’t want to print it and gather all the parts yourself.
The scope is made to be ultra-portable, and it looks like it would be a great travel scope. Let us know if you build one or a derivative.
This telescope looks much different than other builds we’ve seen. If you want to do it all old school, we’ve seen a great guide.
Where’s the best place for a datacenter? It’s an increasing problem as the AI buildup continues seemingly without pause. It’s not just a problem of NIMBYism; earthly power grids are having trouble coping, to say nothing of the demand for cooling water. Regulators and environmental groups alike are raising alarms about the impact that powering and cooling these massive AI datacenters will have on our planet.
While Sam Altman fantasizes about fusion power, one obvious response to those who say “think about the planet!” is to ask, “Well, what if we don’t put them on the planet?” Just as Gerard O’Neill asked over 50 years ago when our technology was merely industrial, the question remains:
“Is the surface of a planet really the right place for expanding technological civilization?”
O’Neill’s answer was a resounding “No.” The answer has not changed, even though our technology has. Generative AI is the latest and greatest technology on offer, but it turns out it may be the first one to make the productive jump to Earth Orbit. Indeed, it already has, but more on that later, because you’re probably scoffing at such a pie-in-the-sky idea.
There are three things needed for a datacenter: power, cooling, and connectivity. The people at companies like Starcloud, Inc., formerly Lumen Orbit, make a good, solid case that all of these can be more easily met in orbit – one that includes hard numbers.
Sure, there’s also more radiation in orbit than here on Earth, but our electronics turn out to be a lot more resilient than was once thought, as all the cell-phone cubesats have proven. Starcloud budgets only 1 kg of shielding per kW of compute power in their whitepaper, as an example. If we can provide power, cooling, and connectivity, the radiation environment won’t be a showstopper.
Power
There’s a great big honkin’ fusion reactor already available for anyone to use to power their GPUs: the sun. Of course, on Earth we have tricky things like weather, and the planet has an annoying habit of occluding the sun for half the day, but there are no clouds in LEO. Depending on your choice of orbit, you do have that annoying 45 minutes of darkness – but a battery to run things for 45 minutes is not a big UPS, by professional standards. Besides, the sun-synchronous orbits are right there, just waiting for us to soak up that delicious, non-stop solar power.
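How “not a big UPS” is it? A one-line energy budget makes the point; the 100 kW load here is a made-up figure just for illustration:

```python
# Eclipse battery sizing is just energy = power * time.
load_kw = 100            # hypothetical compute + bus load, purely illustrative
eclipse_hours = 45 / 60  # worst-case darkness per orbit, per the figure above

print(f"{load_kw * eclipse_hours:.0f} kWh of storage per {load_kw} kW of load")   # 75 kWh
```

That’s roughly one electric-car pack worth of cells per 100 kW of load, which is modest compared to the batteries datacenters already keep on the floor.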
Sun Synchronous Orbit, because nights are for squats. Image by Brandir via Wikimedia.
Sun-synchronous orbits (SSOs) are polar orbits whose planes precess around the Earth once per year, so that they always maintain the same angle to the sun. For example, you might have an SSO that crosses the equator 12 times a day, each time at local 15:00, or 10:43, or whatever other time is set by the orbital parameters. With SSOs, you don’t have to worry about ever losing solar power to some silly, primitive, planet-bound concept like nighttime.
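The trick that keeps an SSO locked to the sun is the Earth’s equatorial bulge (the J2 term), which torques the orbital plane around by just under a degree per day. Here is a minimal sketch of the standard relation, assuming a circular orbit; it also shows why sun-synchronous orbits are all slightly retrograde:

```python
import math

MU = 3.986004418e14   # m^3/s^2
R_E = 6_378_137.0     # m
J2 = 1.08263e-3       # Earth's oblateness coefficient

def sso_inclination_deg(altitude_km):
    """Inclination at which J2 precesses a circular orbit's plane 360° per year."""
    a = R_E + altitude_km * 1e3
    n = math.sqrt(MU / a**3)                        # mean motion, rad/s
    needed = 2 * math.pi / (365.2422 * 86400)       # one lap around the sun per year
    cos_i = -needed / (1.5 * n * J2 * (R_E / a)**2)
    return math.degrees(math.acos(cos_i))

print(f"{sso_inclination_deg(600):.1f}° inclination for a 600 km SSO")   # ~97.8°
```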
Without the atmosphere in the way, solar panels are also considerably more effective per unit area, something the Space Solar Power people have been pointing out since O’Neill’s day. The problem with Space Solar Power has always been the efficiencies and regulatory hurdles of beaming the power back to Earth – but if you use the power to train an AI model, and send only the data down, that’s no longer an issue. Given that the 120 kW array on the ISS has been trouble-free for decades now, we can consider it a solved problem. Sure, solar panels degrade, but the rate is in fractions of a percent per year, and it happens on Earth too. By the time solar panel replacement becomes necessary, the rest of the hardware is likely to be totally obsolete.
Cooling
This is where skepticism creeps in. After all, cooling is the greatest challenge with high-performance computing hardware here on Earth, and heat rejection is the great constraint of space operations. The “icy blackness of space” you see in popular culture is as realistic as warp drive; space is a thermos, and shedding heat is no trivial issue. It is also, from an engineering perspective, not a complex issue. We’ve been cooling spacecraft and satellites using radiators to shed heat via infrared emission for decades now. It’s pretty easy to calculate that if you have X watts of heat to reject at Y degrees, you will need a radiator of area Z. The Stefan-Boltzmann law isn’t exactly rocket science.
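Here’s that X-watts-at-Y-degrees-needs-Z-area calculation as a minimal sketch. It is idealized: a double-sided panel radiating to deep space, a guessed emissivity, and no absorbed sunlight or Earthshine, so treat the output as a floor rather than a design figure:

```python
SIGMA = 5.670374419e-8   # W/(m^2*K^4), Stefan-Boltzmann constant

def radiator_area_m2(heat_w, temp_k, emissivity=0.9, sides=2):
    """Idealized radiator area: emission to deep space only, nothing absorbed."""
    return heat_w / (sides * emissivity * SIGMA * temp_k**4)

# Rejecting 100 kW of server heat from a double-sided panel running at 300 K
print(f"{radiator_area_m2(100e3, 300):.0f} m^2")   # roughly 120 m^2
```

Run the coolant hotter and the T⁴ term shrinks the panel quickly.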
Photons go out, liquid cools down. It might be rocket science, but it’s a fairly mature technology. (Image: EEATCS radiator deployment during ISS Flight 5A, NASA)
Even better, unlike on Earth where you have changeable things like seasons and heat waves, in an SSO you need only account for throttling – and if your data center is profitable, you won’t be doing much of that. So while you need a cooling system, it won’t be difficult to design. Liquid or two-phase cooling on server hardware? Not new. Plumbing a cooling loop to a radiator in the vacuum of space? That’s been part of satellite buses for years.
Aside from providing you with a stable thermal environment, the other advantage of an SSO is that if one chooses the dawn/dusk orbit along the terminator, while the solar panels always face the sun, the radiators can always face black space, letting them work to their optimal potential. This would also simplify the satellite bus, as no motion system would be required to keep the solar panels and radiators aligned into/out of the sun. Conceivably the whole thing could be stabilized by gravity gradient, minimizing the need to use reaction wheels.
Connectivity
One word: Starlink. That’s not to say that future data centers will necessarily be hooking into the Starlink network, but high-bandwidth operations on orbit are already proven, as long as you consider 100 gigabits per second sufficient bandwidth. An advantage not often thought of for this sort of space-based communications is that light in glass fiber travels about 31% slower than it does in a vacuum, while the circumference of a low Earth orbit is nowhere near 31% greater than the circumference of the planet. That reduces ping times between elements of free-flying clusters, or between clusters and whatever communications satellite is overhead of the user. It is conceivable, but by no means a sure thing, that a user in the EU might have faster access to orbital data than they would to a data center in the US.
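Here is a toy comparison of that claim. Every distance below is a rough guess (real fiber paths, slant ranges, and routing all vary), so this only shows that the two numbers land in the same ballpark:

```python
C = 299_792          # km/s, light in vacuum
C_FIBER = 0.69 * C   # km/s, roughly 31% slower in glass

great_circle_km = 5_570                    # e.g. London to New York, roughly
fiber_route_km = great_circle_km * 1.3     # real cables meander
space_route_km = 2 * 1_200 + great_circle_km * 1.1   # up to LEO, along the constellation, back down

print(f"fiber: {fiber_route_km / C_FIBER * 1e3:.0f} ms one way")   # ~35 ms
print(f"space: {space_route_km / C * 1e3:.0f} ms one way")         # ~28 ms
```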
The Race
This hypothetical European might want to use European-owned servers. Well, the European Commission is on it; in the ASCEND study (Advanced Space Cloud for European Net zero Emission and Data sovereignty), you can tell from the title that they put as much emphasis on keeping European data European as they do on the environmental aspects mentioned in the introduction. ASCEND imagines a 32-tonne, 800 kW data center lofted by a single super-heavy booster (sadly not Ariane 6), and proposes it could be ready by the 2030s. There’s no hint in this proposal that the ASCEND Consortium or the EC would be willing to stop at one, either. European efforts have already put AI in orbit, with missions like PhiSat-2 using on-board AI image processing for Earth observation.
You know Italians were involved because it’s so stylish. No other proposal has that honeycomb aesthetic for their busy AI bees. Image ASCEND.
AWS Snowcone after ISS delivery. The future is here and it’s wrapped in Kapton. (Image NASA)
There are other American companies chasing venture capital for this purpose, like Google-founder-backed Relativity Space or the wonderfully-named Starcloud mentioned above. Starcloud’s whitepaper is incredibly ambitious, talking about building an up to 5 GW cluster whose double-sided solar/radiator array would be by far the largest object ever built in orbit at 4 km by 4 km. (Only a few orders of magnitude bigger than the ISS. No big deal.) At least it is a modular plan that could be built up over time, and they are planning to start with a smaller standalone proof-of-concept, Starcloud-2, in 2026.
You can’t accuse Starcloud of thinking small. (Image: Starcloud via YouTube)
A closeup of one of the twelve “Stars” in the Three Body Computing Constellation. This times 2,800. (Image: ADA Space)
Once they get up there, the American and European AIs are going to find someone else has already claimed the high ground, and that that someone else speaks Chinese. A startup called ADA Space launched 12 satellites in May 2025 to begin building out the world’s first orbital supercomputer, called the Three Body Computing Constellation. (You can’t help but love the poetry of Chinese naming conventions.)
Unlike the American startups, they aren’t shy about its capabilities: 100 Gb/s optical datalinks, with the most powerful satellite in the constellation capable of 744 trillion operations per second. (TOPS, not FLOPS. FLOPS specifically refers to floating point operations, whereas TOPS could be any operation but usually refers to operations on 8-bit integers.)
For comparison, Microsoft requires an “AI PC” like the Copilot+ laptops to have 40 TOPS of AI-crunching capacity. The 12 satellites must not be identical, as the constellation together has a quoted capability of 5 POPS (peta-operations per second), and a storage capacity of 30 TB. That seems pretty reasonable for a proof-of-concept. You don’t get a sense of the ambition behind it until you hear that these 12 are just the first wave of a planned 2,800 satellites. Now that’s what I’d call a supercluster!
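The arithmetic behind that “must not be identical” inference is short enough to show; the numbers are the ones quoted above:

```python
flagship_tops = 744     # most powerful satellite in the constellation
quoted_pops = 5         # quoted total capability
ai_pc_tops = 40         # Microsoft's bar for an "AI PC"

print(f"If all 12 matched the flagship: {12 * flagship_tops / 1000:.1f} POPS vs. quoted {quoted_pops} POPS")
print(f"Flagship vs. an 'AI PC': {flagship_tops / ai_pc_tops:.0f}x")
```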
A man can dream, can’t he? Image NASA.
High-performance computing in space? It’s no AI hallucination, it’s already here. There is a network forming in the sky. A sky-net, if you will, and I for one welcome our future AI overlords. They already have the high ground, so there’s no point fighting now. Hopefully this datacenter build-out will just be the first step on the road Gerry O’Neill and his students envisioned all those years ago: a road that ends with Earth’s surface as parkland, and civilization growing onwards and upwards. Ad astra per AI? There are worse futures.
If you were alive when 2001: A Space Odyssey was in theaters, you might have thought it didn’t really go far enough. After all, in 1958, the US launched its first satellite. The first US astronaut went up in 1961. Eight years later, Armstrong put a boot on the moon’s surface. That was a lot of progress for 11 years. The movie came out in 1968, so what would happen in 33 years? Turns out, not as much as you would have guessed back then. [The History Guy] takes us through a trip of what could have been if progress had marched on after those first few moon landings. You can watch the video below.
The story picks up way before NASA. Each of the US military branches felt like it should take the lead on space technology. Sputnik changed everything and spawned both ARPA and NASA. The Air Force, though, had an entire space program in development, and many of the astronauts for that program became NASA astronauts.
The Army also had its own stymied space program. They eventually decided it would be strategic to develop an Army base on the moon for about $6 billion. The base would be a large titanium cylinder buried on the moon that would house 12 people.
The plan called for forty launches in a single year before sending astronauts, and then a stunning 150 Saturn V launches to supply building materials for the base. Certainly ambitious, and probably overly so, in retrospect.
There were other moon base plans. Most languished with little support or interest. The death knell, though, was the 1967 Outer Space Treaty, which forbids military bases on the moon.
While we’d love to visit a moon base, we are fine with it not being militarized. We also want our jet packs.
Starting on June 12, 2025, the NASA Spot the Station website will no longer provide ISS sighting information, per a message recently sent out. This means no more sighting opportunities provided on the website, nor will users subscribed via the website receive email or text notifications. Instead, anyone interested in this kind of information will have to download the mobile app for iOS or Android.
Obviously this has people like [Keith Cowing] over at NASA Watch rather disappointed, as the website has been an easy-to-use resource that anyone could access, even without a smartphone. Although the assumption is often made that everyone carries their own personal iOS- or Android-powered glass slab, one can think of communal settings where an internet café is the sole form of internet access, and for children a website like this would be much easier to reach. They would now see this opportunity vanish.
With smart phone apps hardly a replacement for a website of this type, it’s easy to see how the app-ification of the WWW continues, at the cost of us users.
Have you heard that author Andy Weir has a new book coming out? Very exciting, we know, and according to a syndicated reading list for Summer 2025, it’s called The Last Algorithm, and it’s a tale of a programmer who discovers a dark and dangerous secret about artificial intelligence. If that seems a little out of sync with his usual space-hacking fare such as The Martian and Project Hail Mary, that’s because the book doesn’t exist, and neither do most of the other books on the list.
The list was published in a 64-page supplement that ran in major US newspapers like the Chicago Sun-Times and the Philadelphia Inquirer. The feature listed fifteen must-read books, only five of which exist, and it’s no surprise that AI is behind the muck-up. Writer Marco Buscaglia took the blame, saying that he used an LLM to produce the list without checking the results. Nobody else in the editorial chain appears to have reviewed the list either, resulting in the hallucination getting published. Readers are understandably upset about this, but for our part, we’re just bummed that Andy doesn’t have a new book coming out.
In equally exciting but ultimately fake news, we had more than a few stories pop up in our feed about NASA’s recent discovery of urban lights on an exoplanet. AI isn’t to blame for this one, though, at least not directly. Ironically, the rumor started with a TikTok video debunking a claim of city lights on a distant planet. Social media did what social media does, though, sharing only the parts that summarized the false claim and turning a debunking into a bunking. This is why we can’t have nice things.
That wasn’t the only story about distant lights, though, with this report of unexplained signals from two nearby stars. This one is far more believable, coming as it does from retired JPL scientist Richard H. Stanton, who has been using a 30″ telescope to systematically search for optical SETI signals for the past few years. These searches led to seeing two rapid pulses of light from HD 89389, an F-type star located in the constellation Ursa Major. The star rapidly brightened, dimmed, brightened again, then returned to baseline over a fraction of a second; the same pattern repeated itself about 4.4 seconds later.
Intrigued, he looked back through his observations and found a similar event from a different star, HD 217014 in Pegasus, four years previously. Interestingly, this G-type star is known to have at least one exoplanet. Stanton made the first observation in 2023, and he’s spent much of the last two years ruling out things like meteor flashes or birds passing through his field of view. More study is needed to figure out what this means, and while it’s clearly not aliens, it’s fun to imagine it could be some kind of technosignature.
And one last space story, this time with the first observation of extra-solar ice. The discovery comes from the James Webb Space Telescope, which caught the telltale signature of ice crystals in a debris ring circling HD 181327, a very young star only 155 light-years away. Water vapor had been detected plenty of times outside our solar system, but not actual ice crystals until now. The ice crystals seem to be coming from collisions between icy bodies in the debris field, an observation that has interesting implications for planetary evolution.
And finally, if like us you’re impressed anytime someone busts out a project with a six-layer PCB design, wait till you get a load of this 124-layer beast. The board comes from OKI Circuit Technologies and is intended for high-bandwidth memory for AI accelerators. The dielectric for each layer is only 125-μm thick, and the board is still only 7.6 mm thick overall. At $4,800 per square meter, it’s not likely we’ll see our friends at JLC PCB offering these anytime soon, but it’s still some pretty cool engineering.
This particular video is a bit over ten minutes long and is basically a montage; there is no narration or explanation given, but you can watch clear progress being made and the ultimate success of the backyard facility.
Obviously the coolest thing about this building is that the roof can be moved, but those telescope mounts look pretty sexy too. About halfway through the video the concrete slab that was supporting one metal mounting pole gets torn up so that two replacements can be installed, thereby doubling the capacity of the observatory from one telescope to two.
Some hacks are so great that when you die you receive the rare honor of both an obituary in the New York Times and an in memoriam article at Hackaday.
The recently deceased [Ed Smylie] was the NASA engineer who led the effort to save the crew of Apollo 13 with a makeshift gas conduit made from plastic bags and duct tape back in 1970. [Ed] died recently, on April 21, in Crossville, Tennessee, at the age of 95.
This particular hack, another in the long and storied history of duct tape, literally required putting a square peg in a round hole. After an explosion crippled the command module, the astronauts needed to take refuge in the lunar excursion module. But the lunar module was only designed to support two people, not three.
The problem was that there was only enough lithium hydroxide onboard the lunar module to filter the air for two people. The astronauts could salvage lithium hydroxide canisters from the command module, but those canisters were square, whereas the canisters for the lunar module were round.
[Ed] and his team famously designed the required adapter from a small inventory of materials available on the space craft. This celebrated story has been told many times, including in the 1995 film, Apollo 13.
Thank you, [Ed], for one of the greatest hacks of all time. May you rest in peace.
Header: Gas conduit adapter designed by [Ed Smylie], NASA, Public domain.
As with all aging bodies, clogged tubes become an increasing issue. So too with the 47-year-old Voyager 1 spacecraft and its hydrazine thrusters. Over the decades, silicon dioxide from an aging rubber diaphragm in the fuel tank has been depositing on the inside of the fuel tubes. By switching between the primary, backup, and trajectory thrusters, the Voyager team has been managing this issue and kept the spacecraft oriented towards Earth. Now this team has performed another amazing feat by reviving the primary thrusters that had been deemed a loss since a heater failure back in 2004.
Unlike the backup thrusters, the trajectory thrusters do not provide roll control, so reviving the primary thrusters would buy the mission a precious Plan B if the backup thrusters were to fail. Back in 2004 engineers had determined that the heater failure was likely unfixable, but over twenty years later the team was willing to give it another shot. Analyzing the original failure data indicated that a glitch in the heater control circuit was likely to blame, so they might actually still work fine.
To test this theory, the team remotely jiggled the heater controls, enabled the primary thrusters, and waited for the spacecraft’s star tracker to drift off course so that the thrusters would be engaged by the onboard computer. Making this extra exciting was scheduled maintenance on the Deep Space Network coming up in a matter of weeks, which would make troubleshooting impossible for months.
To their relief, the changes appear to have worked, with the heaters clearly working again, as are the primary thrusters. With this fix in place, it seems that Voyager 1 will be with us for a while longer, even as we face the inevitable end of the amazing Voyager program.
You normally think of ELINT — Electronic Intelligence — as something done in secret by shadowy three-letter agencies or the military. The term usually means gathering intelligence from signals that don’t contain speech (since that’s COMINT). But [Nukes] was looking at public data from NASA’s SMAP satellite and made an interesting discovery. Despite the satellite’s mission to measure soil moisture, it also provided data on strange happenings in the radio spectrum.
While 1.4 GHz is technically in the L-band, it is reserved (from 1.400–1.427 GHz) for specialized purposes. The frequency is critical for radio astronomy, so it is typically kept clear apart from low-power, safety-critical data systems that benefit from the low potential for interference. SMAP, coincidentally, listens on 1.41 GHz and maps where there is interference.
Since there aren’t supposed to be any high-power transmitters at that frequency, you can imagine that anything showing up there is probably something unusual and interesting. In particular, it is often a signature for military jamming since nearby frequencies are often used for passive radar and to control drones. So looking at the data can give you a window on geopolitics at any given moment.
The data is out there, and a simple Python script can pull it. We imagine this is the kind of data that only a spook in a SCIF would have had just a decade or two ago.
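We haven’t traced [Nukes]’s exact workflow, so here is only a hedged sketch of the general approach: grab an SMAP L1B brightness-temperature granule (HDF5) from NASA Earthdata, open it with h5py, and look at the quality/RFI flags. The filename and dataset paths below are placeholders, not the real product layout, so check the SMAP documentation before trusting any of it:

```python
# Sketch only: the granule name and HDF5 paths are hypothetical placeholders.
import h5py
import numpy as np

GRANULE = "SMAP_L1B_TB_example.h5"   # a granule you've already downloaded

with h5py.File(GRANULE, "r") as f:
    f.visit(print)   # dump the group/dataset tree so you can find the real field names
    tb = np.array(f["Brightness_Temperature/tb_h"])                 # assumed field name
    flags = np.array(f["Brightness_Temperature/tb_qual_flag_h"])    # assumed RFI/quality flags

print("flagged samples:", np.count_nonzero(flags), "of", flags.size)
```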
A group of students from Lancing College in the UK have sent in their Critical Design Review (CDR) for their entry in the UK CanSat project.
Per the competition guidelines, the UK CanSat project challenges students aged 14 to 19 to build a satellite that can relay telemetry data about atmospheric conditions of the sort that could help with space exploration. The students’ primary mission is to collect temperature and pressure readings, and these students picked their secondary mission to be collection of GPS data, for use on planets where GPS infrastructure is available, such as on Earth. This CDR follows their Preliminary Design Review (PDR).
The six students in the group bring a range of relevant skills. Their satellite transmits six metrics every second: temperature, pressure, altitude reading 1, altitude reading 2, latitude, and longitude. The main processor is an Arduino Nano Every, a BMP388 sensor provides the first three metrics, and a BE880 GPS module provides the following three metrics. The RFM69HCW module provides radio transmission and reception using LoRa.
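The CDR summary here doesn’t spell out the packet format, so purely as a hypothetical illustration, here is how six metrics a second could be packed into a compact LoRa payload and decoded on a Python ground station:

```python
import struct

# Hypothetical layout for the six metrics: temperature, pressure, two altitude
# readings, latitude, longitude -- six little-endian floats in 24 bytes.
PACKET_FMT = "<6f"
FIELDS = ("temp_c", "press_hpa", "alt_baro_m", "alt_gps_m", "lat_deg", "lon_deg")

def unpack_telemetry(payload: bytes) -> dict:
    """Decode one received payload into named fields (assumed format, not the team's)."""
    return dict(zip(FIELDS, struct.unpack(PACKET_FMT, payload)))

# Round-trip test with made-up values
example = struct.pack(PACKET_FMT, 21.4, 1009.2, 182.0, 185.5, 50.83, -0.32)
print(unpack_telemetry(example))
```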
The students present their plan and progress in a Gantt chart, catalog their inventory of relevant skills, assess risks, prepare mechanical and electrical designs, breadboard the satellite circuitry and receiver wiring, design a PCB in KiCad, and develop flow charts for the software. The use of Blender for data visualization was a nice hack, as was using ChatGPT to generate an example data file for testing purposes. Mechanical details such as parachute design and composition are worked out along with a shiny finish for high visibility. The students conduct various tests to ensure the suitability of their design and then conduct an outreach program to advertise their achievements to their school community and the internet at large.
We here at Hackaday would like to wish these talented students every success with their submission and we hope you had good luck on launch day, March 4th!
The backbone of this project is the LoRa technology and if you’re interested in that we’ve covered that here at Hackaday many times before, such as in this rain gauge and these soil moisture sensors.
Telescopes are great tools for observing the heavens, or even surrounding landscapes if you have the right vantage point. You don’t have to be a professional to build one though; you can make all kinds of telescopes as an amateur, as this guide from the Springfield Telescope Makers demonstrates.
The guide is remarkably deep and rich; no surprise given that the Springfield Telescope Makers club dates back to the early 20th century. It starts out with the basics—how to select a telescope, and how to decide whether to make or buy your desired instrument. It also explains in good detail why you might want to start with a simple Newtonian reflector setup on Dobsonian mounts if you’re crafting your first telescope, in no small part because mirrors are so much easier to craft than lenses for the amateur. From there, the guide gets into the nitty gritty of mirror production, right down to grinding and polishing techniques, as well as how to test your optical components and assemble your final telescope.
It’s hard to imagine a better place to start than here as an amateur telescope builder. It’s a rich mine of experience and practical advice that should give you the best possible chance of success. You might also like to peruse some of the other telescope projects we’ve covered previously. And, if you succeed, you can always tell us of your tales on the tipsline!
Last week, the mainstream news was filled with headlines about K2-18b — an exoplanet some 124 light-years away from Earth that 98% of the population had never even heard about. Even astronomers weren’t aware of its existence until the Kepler Space Telescope picked it out back in 2015, just one of the more than 2,700 planets the now-defunct observatory was able to identify during its storied career. But now, thanks to recent observations by the James Webb Space Telescope, this obscure planet has been thrust into the limelight by the discovery of what researchers believe are the telltale signs of life in its atmosphere.
Artist’s rendition of planet K2-18b.
Well, maybe. As you might imagine, being able to determine if a planet has life on it from 124 light-years away isn’t exactly easy. We haven’t even been able to conclusively rule out past, or even present, life in our very own solar system, which in astronomical terms is about as far off as the end of your block.
To be fair, the University of Cambridge’s Institute of Astronomy researchers, led by Nikku Madhusudhan, aren’t claiming to have definitive proof that life exists on K2-18b. We probably won’t get undeniable proof of life on another planet until a rover literally runs over it. Rather, their paper proposes that abundant biological life, potentially some form of marine phytoplankton, is one of the strongest explanations for the concentrations of dimethyl sulfide and dimethyl disulfide that they’ve detected in the atmosphere of K2-18b.
As you might expect, there are already challenges to that conclusion. Which is of course exactly how the scientific process is supposed to work. Though the findings from Cambridge are certainly compelling, adding just a bit of context can show that things aren’t as cut and dried as we might like. There’s even an argument to be made that we wouldn’t necessarily know what the signs of extraterrestrial life would look like even if it was right in front of us.
Life as We Know It
Credit where credit is due, most of the news outlets have so far treated this story with the appropriate amount of skepticism. Reading through the coverage, Cambridge’s findings are commonly described as the “strongest evidence yet” of potential extraterrestrial life, rather than being treated as definitive proof. Well, other than the Daily Mail anyway. They decided to consult with ChatGPT and other AI tools in an effort to find out what lifeforms on K2-18b would look like.
So, AI-generated frogmen renders notwithstanding, what makes these findings so difficult to interpret? For one thing, we have very little idea of what extraterrestrial life would actually be like, so proving that it exists is exceptionally difficult. Scientists have precisely one data point for what constitutes life, and you’re sitting on it. We only know what life on Earth looks like, and while there’s an incredible amount of biodiversity on our home planet, it all still tends to play by the same established rules.
On Earth, dimethyl sulfide (DMS) is produced by phytoplankton.
We assume those rules to be a constant on other planets, but that’s only because we don’t know what else to look for. Consider that the bulk of our efforts in the search for extraterrestrial intelligence (SETI) thus far have been based on the idea that other sentient beings would develop some form of radio technology similar to our own, and that if we simply pointed a receiver at their star, we would be able to pick up their version of I Love Lucy.
This is a preposterous presupposition, which doesn’t even make much sense when compared to humanity’s history. Consider the science, literature, and art that humankind was able to produce before the advent of the electric light. Now imagine that Proxima Centauri’s answer to Beethoven is putting the finishing touches on their latest masterpiece as our radio telescope silently checks their planet off the list of inhabited worlds because it wasn’t emanating any RF transmissions we recognize.
Similarly, here on Earth dimethyl sulfide (DMS) and dimethyl disulfide (DMDS) are produced exclusively by biological processes. DMS specifically is so commonly associated with marine phytoplankton that we often associate its smell with being in proximity of the sea. This being the case, you could see how finding large quantities of these gases in the atmosphere of an alien planet would seem to indicate that it must be teeming with aquatic life.
But just because that’s true on Earth doesn’t mean it’s true on K2-18b. We know these gases can be created abiotically in the laboratory, which means there are alternative explanations to how they could be produced on another planet — even if we can’t explain them currently. Further, a paper released in November 2024 pointed out that DMS was detected on comet 67P/Churyumov–Gerasimenko by the European Space Agency’s Rosetta spacecraft, indicating there’s some unknown method by which it can be produced in the absence of any biological activity.
Finding What You’re Looking For
All that being said, let’s assume for the sake of argument that the presence of dimethyl sulfide and dimethyl disulfide was indeed enough to confirm there was life on the planet. You’d still need to confirm beyond a shadow of a doubt that those gases were present in the atmosphere. So how do you do that?
Within our own solar system, you could send a probe. Which is what’s been suggested to investigate the possibility that phosphine gas exists on Venus. But remember, we’re talking about a planet that’s 124 light-years away. In this case, the only way to study the atmosphere is through spectroscopy — that is, examining the degree to which various wavelengths of light (visible and otherwise) are blocked as they pass through it.
This is, as you may have guessed, easier said than done. The amount of data you can collect from such a distant object, even with an instrument as powerful as the James Webb Space Telescope, is minuscule. You need to massage the data with various models to extract any useful information from the noise, and according to some critics, that’s when bias can creep in.
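To get a feel for just how minuscule, here is a back-of-the-envelope estimate of the transit signal using approximate published values for K2-18b and the usual one-scale-height rule of thumb; none of this reproduces the team’s actual retrieval:

```python
# Approximate published values -- illustrative only.
R_EARTH, R_SUN = 6.371e6, 6.957e8            # m
K_B, M_H, G = 1.381e-23, 1.673e-27, 6.674e-11

Rp = 2.6 * R_EARTH        # planet radius
Mp = 8.6 * 5.972e24       # planet mass, kg
Rs = 0.45 * R_SUN         # the host star is a small red dwarf
T, mu = 270, 2.3          # temperature (K) and mean molecular weight of an H2-rich atmosphere

g = G * Mp / Rp**2
H = K_B * T / (mu * M_H * g)     # atmospheric scale height, ~80 km
depth = (Rp / Rs) ** 2           # how much starlight the planet blocks
signal = 2 * Rp * H / Rs**2      # extra blocking from one scale height of atmosphere

print(f"transit depth ~{depth*1e6:.0f} ppm, atmosphere adds ~{signal*1e6:.0f} ppm per scale height")
```

Tens of parts per million is the kind of signal the models have to dig out of the noise, which is exactly where the choice of model starts to matter.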
In a recently released paper, Jake Taylor from the University of Oxford argues that the only reason Nikku Madhusudhan and his team found signs of DMS and DMDS in the spectrographic data is because that’s what they were looking for. Given their previous research that potentially detected methane and carbon dioxide in the atmosphere of K2-18b, it’s possible the team was already primed to find further evidence of biological processes on the planet, and were looking a bit too hard to find evidence to back up their theory.
When analyzing the raw data without any preconceived notion of what you’re looking for, Taylor says there’s “no strong statistical evidence” to support the detection of DMS and DMDS in the atmosphere of K2-18b. This conclusion itself will need to be scrutinized, of course, though it does have the benefit of Occam’s razor on its side.
In short, there may or may not be dimethyl sulfide and dimethyl disulfide gases in the atmosphere of K2-18b, and that may or may not mean there’s potentially some form of biological life in the planet’s oceans…which it may or may not actually have. If you’re looking for anything more specific than that, the science is still out.
SpaceX Starship firing its many Raptor engines. The Raptor pioneered the new generation of methalox engines. (Image: SpaceX)
Go back a generation of development, and excepting the shuttle-derived systems, all liquid rockets used RP-1 (aka kerosene) for their first stage. Now it seems everybody and their dog wants to fuel their rockets with methane. What happened? [Eager Space] was eager to explain in a recent video, which you’ll find embedded below.
At first glance, it’s a bit of a wash: the density and specific impulse of kerolox (kerosene-oxygen) and methalox (methane-oxygen) rockets are very similar. So there’s no immediate performance improvement or volumetric disadvantage, like you would see with hydrogen fuel. Instead, it is a series of small factors that all add up to a meaningful design benefit when engineering the whole system.
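If you want numbers behind that, a quick mass-weighted density comparison at ballpark mixture ratios shows methalox sitting near kerolox and nowhere near the hydrogen problem. All figures here are rough textbook values, not those of any particular engine:

```python
# Rough propellant densities in kg/m^3 -- ballpark values only.
DENSITY = {"RP-1": 810, "LCH4": 423, "LH2": 71, "LOX": 1141}

def bulk_density(fuel, of_ratio):
    """Density of the combined fuel + LOX load at oxidizer-to-fuel mass ratio of_ratio."""
    return (1 + of_ratio) / (1 / DENSITY[fuel] + of_ratio / DENSITY["LOX"])

for fuel, of in (("RP-1", 2.3), ("LCH4", 3.6), ("LH2", 6.0)):
    print(f"{fuel}/LOX at O/F {of}: ~{bulk_density(fuel, of):.0f} kg/m^3")
```

Methalox tanks end up somewhat larger than kerolox ones, but nothing like the balloon a hydrogen first stage needs.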
Methane also has the advantage of being a gas when it warms up, and rocket engines tend to be warm. So the injectors don’t have to worry about atomizing a thick liquid, and mixing fuel and oxidizer inside the engine does tend to be easier. [Eager Space] calls RP-1 “a soup”, while methane’s simpler combustion chemistry makes the simulation of these engines quicker and easier as well.
There are other factors as well, like the fact that methane is much closer in temperature to LOX, and does cost quite a bit less than RP-1, but you’ll need to watch the whole video to see how they all stack up.
We write about rocketry fairly often on Hackaday, seeing projects with both liquid-fueled and solid-fueled engines. We’ve even highlighted at least one methalox rocket, way back in 2019. Our thanks to space-loving reader [Stephen Walters] for the tip. Building a rocket of your own? Let us know about it with the tip line.
If you’ve ever fumbled through circuit simulation and ended up with a flatline instead of a sine wave, this video from [saisri] might just be the fix. In this walkthrough she demonstrates simulating a Colpitts oscillator – a deceptively simple analog circuit known for generating stable sine waves – using NI Multisim 14.3. Her video not only shows how to place and wire components, but it demonstrates why precision matters, even in virtual space.
You’ll notice the emphasis on wiring accuracy at multi-node junctions, something many tutorials skim over. [saisri] points out that a single misconnected node in Multisim can cause the circuit to output zilch. She guides viewers step-by-step, starting with component selection via the “Place > Components” dialog, through to running the simulation and interpreting the sine wave output on Channel A. The manual included at the end of the video is a neat bonus, bundling theory, waveform visuals, and circuit diagrams into one handy PDF.
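It also helps to know what frequency to expect before you hit run, so you can tell a working oscillator from a misbehaving one. The tank values below are illustrative, not the ones from the video:

```python
import math

# Colpitts tank: L resonates with C1 and C2 in series.
L = 10e-6          # 10 µH
C1 = C2 = 1e-9     # 1 nF each

C_series = C1 * C2 / (C1 + C2)
f = 1 / (2 * math.pi * math.sqrt(L * C_series))
print(f"Expected oscillation frequency: {f / 1e6:.2f} MHz")   # ~2.25 MHz
```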
If you’re into precision hacking, retro analogue joy, or just love watching a sine wave bloom onscreen, this is worth your time. You can watch the original video here.
NASA astronaut Catherine Coleman gives ESA astronaut Paolo Nespoli a haircut in the Kibo laboratory on the ISS in 2011. (Credit: NASA)
Although we tend to see mostly the glorious and fun parts of hanging out in a space station, the human body will not cease to do its usual things, whether it involves the digestive system, or even something as mundane as the hair that sprouts from our heads. After all, we do not want our astronauts to return to Earth after a half-year stay in the ISS looking as if they got marooned on an uninhabited island. Introducing the onboard barbershop on the ISS, and the engineering behind making sure that after a decade the ISS doesn’t positively look like it got the 1970s shaggy wall carpet treatment.
The basic solution is rather straightforward: an electric hair clipper attached to a vacuum that will whisk the clippings safely into a container rather than being allowed to drift around. In a way this is similar to the vacuums you find on routers and saws in a woodworking shop, just with more keratin rather than cellulose and lignin.
On the Chinese Tiangong space station they use a similar approach, with the video showing how simple the system is, little more than a small handheld vacuum cleaner attached to the clippers. Naturally, you cannot just tape the vacuum cleaner to some clippers and expect it to get most of the clippings, which is where both the ISS and Tiangong solutions seem to have a carefully designed construction to maximize hair removal. You can see the ISS system in action in this 2019 video from the Canadian Space Agency.
Of course, this system is not perfect, but amidst the kilograms of shed skin particles from the crew, a few small hair clippings can likely be handled by the ISS’ air treatment systems just fine. The goal after all is to not have a massive expanding cloud of hair clippings filling up the space station.