
Fukushima Daiichi: Cleaning Up After a Nuclear Accident

By: Maya Posch
23 September 2024 at 14:00

On 11 March, 2011, a massive magnitude 9.1 earthquake shook the east coast of Japan, with the epicenter located at a shallow depth of 32 km, a mere 72 km off the coast of the Oshika Peninsula in the Tōhoku region. Following this earthquake, an equally massive tsunami made its way towards Japan’s eastern shores, flooding many kilometers inland. Over 20,000 people were killed by the tsunami and earthquake, thousands of whom were dragged into the ocean when the tsunami retreated. This Tōhoku earthquake was the most devastating in Japan’s history, both in human and economic cost and in the effect it had on one of Japan’s nuclear power plants: the six-unit Fukushima Daiichi plant.

In the subsequent Investigation Commission report by the Japanese Diet, a lack of safety culture at the plant’s owner (TEPCO) was noted, along with significant corruption and poor emergency preparation, all of which resulted in the preventable meltdown of three of the plant’s reactors and a botched evacuation. Although afterwards TEPCO was nationalized, and a new nuclear regulatory body established, this still left Japan with the daunting task of cleaning up the damaged Fukushima Daiichi nuclear plant.

Removal of the damaged fuel rods is the biggest priority, as this will take care of the main radiation hazard. This year TEPCO has begun work on removing the damaged fuel inside the cores, the outcome of which will set the pace for the rest of the clean-up.

Safety Cheese Holes

Overview of a GE BWR as at Fukushima Daiichi. (Credit: WNA)

The Fukushima Daiichi nuclear power plant was built between 1967 and 1979, with the first unit coming online in 1970 and the third unit by 1975. It features three generations of General Electric-designed boiling water reactors of a 1960s (Generation II) design, housed in what is known as a Mark I containment structure. At the time of the earthquake only units 1, 2 and 3 were active, with the quake triggering the safety systems that shut down these reactors as designed. The quake itself did not cause significant damage to the reactors, but three TEPCO employees at the Fukushima Daiichi and Daini plants died as a result of the earthquake.

A mere 41 minutes later the first tsunami hit, followed by a second tsunami 8 minutes later, leading to the events of the Fukushima Daiichi accident. The seawall was too low to contain the tsunami, allowing water to submerge the land behind it. This damaged the seawater pumps for the main and auxiliary condenser circuits, while also flooding the turbine hall basements containing the emergency diesel generators and electrical switching gear. The backup batteries for units 1 and 2 were also knocked out by the flooding, disabling instrumentation, controls and lighting.

One hour after the emergency shutdown of units 1 through 3, they were still producing about 1.5% of their nominal thermal power. With no way to shed the heat externally, the hot steam, and eventually hydrogen from hot steam interacting with the zirconium-alloy fuel rod cladding, was diverted into the dry primary containment and then the wetwell, with the Emergency Core Cooling System (ECCS) injecting replacement water. This kept the cores mostly intact over the course of three days, with seawater eventually injected from outside, though the fuel rods would still melt as core water levels dropped, before solidifying inside the reactor pressure vessel (RPV) as well as on the concrete below it.
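
To put that percentage in perspective, the classic Way-Wigner approximation for decay heat gives roughly the same order of magnitude. The sketch below is an illustration only (TEPCO's actual figures come from detailed isotope inventories), but it shows why a freshly scrammed core still needs megawatts of cooling.

```python
# Way-Wigner approximation for decay heat as a fraction of rated thermal power.
# Illustrative only; real decay-heat curves come from detailed isotope inventories.
def decay_heat_fraction(t_after_shutdown_s: float, t_at_power_s: float) -> float:
    return 0.066 * (t_after_shutdown_s ** -0.2
                    - (t_after_shutdown_s + t_at_power_s) ** -0.2)

one_year = 365 * 24 * 3600
frac = decay_heat_fraction(3600, one_year)        # one hour after shutdown
print(f"{frac:.1%} of rated thermal power")        # ~1%
# For a unit in the ~1,400-2,400 MWt class like those at Fukushima Daiichi,
# that is still tens of megawatts of heat with nowhere to go.
```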

An attempt was made to vent the steam pressure in unit 1, but this caused the hydrogen-rich air to flow onto the service floor, where it found an ignition source and blew off the roof. To prevent this with unit 2, a blow-out panel was opened, but unit 3 suffered a similar hydrogen explosion on the service floor, with part of the hydrogen also making it into the defueled unit 4 via ducts and similarly blowing off its roof.

The hydrogen issue was later resolved by injecting nitrogen into the RPVs of units 1 through 3, along with external cooling and power being supplied to the reactors. This stabilized the three crippled reactors to the point where clean-up could be considered after the decay of the short-lived isotopes present in the released air. These isotopes consisted mostly of iodine-131, with a half-life of 8 days, but also cesium-137, with a half-life of 30 years, and a number of other isotopes.
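
The difference those half-lives make is easy to see with the textbook decay law N(t) = N₀ · 2^(−t/T½); a minimal sketch:

```python
# Fraction of an isotope remaining after time t, given its half-life (same units).
def remaining(t: float, t_half: float) -> float:
    return 0.5 ** (t / t_half)

# After one year, essentially all of the iodine-131 is gone,
# while the cesium-137 has barely started to decay.
print(f"I-131 after 1 year:  {remaining(365, 8):.1e}")    # ~2e-14 of the original
print(f"Cs-137 after 1 year: {remaining(1, 30):.2f}")     # ~0.98 of the original
```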

Nuclear Pick-up Sticks

Before the hydrogen explosions ripped out the service floors and the building roofs, the clean-up would probably have been significantly easier. As things stood, the first tasks consisted of clearing tangled metal from the service floors and erecting temporary roof covers to keep the elements out and any radioactive particles inside. These roof covers are fitted with cameras as well as radiation and hydrogen sensors. They also provide the means for a crane to remove fuel rods from the spent fuel pools at the top of the reactors, as most of the original cranes were destroyed in the hydrogen explosions.

Photo of the damaged unit 1 of Fukushima Daiichi and a schematic overview of the status. (Credit: TEPCO)

This means that the next task is to remove all spent fuel from these spent fuel pools, with the status being tracked on the TEPCO status page. As units 5 and 6 were undamaged, they are not part of these clean-up efforts and will be retained after the clean-up and decommissioning of units 1-4 for training purposes.

Meanwhile, the spent fuel rods have already been removed from units 3 and 4. For unit 1, a cover still has to be constructed as has been done for unit 3, while for the more intact unit 2 a fuel handling facility is being constructed on the side of the building. Currently much of the hold-up with unit 1 is the removal of debris on the service floor without disturbing it too much, like a gigantic game of pick-up sticks. Within a few years, these last spent fuel rods can then be safely transported off-site for storage, reprocessing and the manufacturing of fresh reactor fuel. That’s projected to be 2026 for unit 2 and 2028 for unit 1.

This spent fuel removal stage will be followed by removing the remnants of the fuel rods from inside the RPVs, which is the trickiest part as the normal way to defuel these three boiling-water reactors was rendered impossible due to the hydrogen explosions and the melting of fuel rods into puddles of corium mostly outside of the RPVs. The mostly intact unit number 2 is the first target of this stage of the clean-up.

Estimated corium distribution in Fukushima Daiichi units 1 through 3. (Credit: TEPCO)

To develop an appropriate approach, TEPCO relies heavily on exploration using robotic systems. These can explore the insides of the units, even in areas deemed unsafe for humans, and can be made to fit into narrow tubes and vents to explore even the insides of the RPVs. This is how we have some idea of where the corium ended up, allowing a plan to be formed for extracting this corium for disposal.

Detailed updates on the progress of the clean-up can be found as monthly reports, which also provide updates on any changes noted inside the damaged units. Currently the cores are completely stable, but there is the ongoing issue of ground- and rainwater making it into the buildings, which causes radioactive particles to be carried along into the soil. This is why groundwater at the site has for years now been pumped up and treated with the ALPS radioactive isotope removal system. This leaves just water with some tritium, which after mixing with seawater is released into the ocean. The effective tritium release this way is lower than when the Fukushima Daiichi plant was operating.

TEPCO employees connect pipes that push the ‘Telesco’ robot into the containment of Unit 2 for core sample retrieval. (Credit: TEPCO)

In these reports we also get updates on the robotic exploration, but the most recent update here involves a telescoping robot nicknamed ‘Telesco’ (because it can extend by 22 meters) which is tasked with retrieving a corium sample of a few grams from the unit 2 reactor, in the area underneath the RPV where significant amounts of corium have collected. This can then be analyzed and any findings factored into the next steps, which would involve removing the tons of corium. This debris consists of the ceramic uranium fuel, the zirconium-alloy cladding, the RPV steel, the transuranics and minor actinides like plutonium, and fission products like Cs-137 and Sr-90, making it radiologically quite ‘hot’.

Looking Ahead

Although the clean-up of Fukushima Daiichi may seem slow, with a projected completion date decades from now, the fact of the matter is that time is in our favor, as the issue of radiological contamination lessens with every passing day. Although the groundwater contamination is probably the issue that gets the most attention, courtesy of the highly visible storage tanks, this is now fully contained including with sea walls, and there is even an argument to be made that dilution of radioisotopes into the ocean would make it a non-issue.

Regardless of the current debate about radiological overreaction and safe background levels, most of the exclusion zone around the Fukushima Daiichi plant has already been reopened, with only some zones still marked as ‘problematic’, despite having background radiation levels that are no higher than the natural levels in other inhabited regions of the world. This is also the finding of UNSCEAR in their 2020 status report (PDF), which found that levels of Cs-137 in marine foods had already dropped sharply by 2015, that there have been no radiation-related health effects among evacuees or workers in the exclusion zone, and that there are no observed effects on the local fauna and flora.

Along with the rather extreme top soil remediation measures that continue in the exclusion zone, it seems likely that within a few years this exclusion zone will be mostly lifted, and the stricken plant itself devoid of spent fuel rods, even as the gradual removal of the corium will have begun. It will start with small samples, then larger pieces, until all that is left inside units 1-3 is some radioactive dust, clearing the way to demolish the buildings. But it’s a long road.

Catching The BOAT: Gamma-Ray Bursts and The Brightest of All Time

18 September 2024 at 14:00

Down here at the bottom of our ocean of air, it’s easy to get complacent about the hazards our universe presents. We feel safe from the dangers of the vacuum of space, where radiation sizzles and rocks whizz around. In the same way that a catfish doesn’t much care what’s going on above the surface of his pond, so too are we content that our atmosphere will deflect, absorb, or incinerate just about anything that space throws our way.

Or will it? We all know that there are things out there in the solar system that are more than capable of wiping us out, and every day holds a non-zero chance that we’ll take the same ride the dinosaurs took 65 million years ago. But if that’s not enough to get you going, now we have to worry about gamma-ray bursts, searing blasts of energy crossing half the universe to arrive here and dump unimaginable amounts of energy on us, enough to not only be measurable by sensitive instruments in space but also to affect systems here on the ground, and in some cases, to physically alter our atmosphere.

Gamma-ray bursts are equal parts fascinating physics and terrifying science fiction. Here’s a look at the science behind them and the engineering that goes into detecting and studying them.

Collapsars and Neutron Stars

Although we now know that gamma-ray bursts are relatively common, it wasn’t all that long ago that we were ignorant of their existence, thanks in part to our thick, protective atmosphere. The discovery of GRBs had to wait for the Space Race to couple with Cold War paranoia, which resulted in Project Vela, a series of early US Air Force satellites designed in part to watch for Soviet compliance with the Partial Test Ban Treaty, which forbade everything except underground nuclear tests. In 1967, gamma ray detectors on satellites Vela 3 and Vela 4 saw a flash of gamma radiation that didn’t match the signature of any known nuclear weapon. Analysis of the data from these and subsequent flashes revealed that they came from space, and the race to understand these energetic cosmic outbursts was on.

Trust, but verify. Vela 4, designed to monitor Soviet nuclear testing, was among the first satellites to detect cosmic gamma-ray bursts. Source: ENERGY.GOV, Public domain, via Wikimedia Commons

Gamma-ray bursts are the most energetic phenomena known, with energies that are almost unfathomable. Their extreme brightness, primarily as gamma rays but across the spectrum and including visible light, makes them some of the most distant objects ever observed. To put their energetic nature into perspective, a GRB in 2008, dubbed GRB 080319B, was bright enough in the visible part of the spectrum to just be visible to the naked eye even though it was 7.5 billion light years away. That’s more than halfway across the observable universe, 3,000 times farther away than the Andromeda galaxy, normally the farthest naked-eye visible object.

For all their energy, GRBs tend to be very short-lived. GRBs break down into two rough groups. Short GRBs last for less than about two seconds, with everything else falling into the long GRB category. About 70% of GRBs we see fall into the long category, but that might be due to the fact that the short bursts are harder to see. It could also be that the events that precipitate the long variety, hypernovae, or the collapse of extremely massive stars and the subsequent formation of rapidly spinning black holes, greatly outnumber the progenitor event for the short category of GRBs, which is the merging of binary neutron stars locked in a terminal death spiral.

The trouble is, the math doesn’t work out; neither of these mind-bogglingly energetic events could create a burst of gamma rays bright enough to be observed across half the universe. The light from such a collapse would spread out evenly in all directions, and the tyranny of the inverse square law would attenuate the signal into the background long before it reached us. Unless, of course, the gamma rays were somehow collimated. The current thinking is that a disk of rapidly spinning material called an accretion disk develops outside the hypernova or the neutron star merger. The magnetic field of this matter is tortured and twisted by its rapid rotation, with magnetic lines of flux getting tangled and torn until they break. This releases all the energy of the hypernova or neutron star merger in the form of gamma rays in two tightly focused jets aligned with the pole of rotation of the accretion disk. And if one of those two jets happens to be pointed our way, we’ll see the resulting GRB.
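
A rough sense of how much beaming helps can be had from the solid-angle fraction covered by the two jets: the true energy requirement is the isotropic-equivalent figure scaled by that fraction. The numbers in this sketch are assumptions for illustration, not measurements of any particular burst.

```python
import math

def beaming_fraction(half_angle_deg: float) -> float:
    """Fraction of the full sky covered by two opposed jets of the given half-angle."""
    return 1.0 - math.cos(math.radians(half_angle_deg))

E_iso = 1e47           # joules: an assumed isotropic-equivalent energy for a bright GRB
theta = 5.0            # degrees: an assumed, fairly typical jet half-angle
E_true = E_iso * beaming_fraction(theta)
print(f"Beaming fraction: {beaming_fraction(theta):.4f}")   # ~0.004 of the sky
print(f"True energy needed: {E_true:.1e} J")                # a few hundred times less than E_iso
```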

Crystals and Shadows

But how exactly do we detect gamma-ray bursts? The first trick is to get to space, or at least above the bulk of the atmosphere. Our atmosphere does a fantastic job shielding us from all forms of cosmic radiation, which is why the field of gamma-ray astronomy in general and the discovery of GRBs in particular had to wait until the 1960s. A substantial number of GRBs have been detected by gamma-ray detectors carried aloft on high-altitude balloons, especially in the early days, but most dedicated GRB observatories are now satellite-borne.

Gamma-ray detection technology has advanced considerably since the days of Vela, but a lot of the tried and true technology is still used today. Scintillation detectors, for example, use crystals that release photons of visible light when gamma rays of a specific energy pass through them. The photons can then be amplified by photomultiplier tubes, resulting in a pulse of current proportional to the energy of the incident gamma ray. This is the technology used by the Gamma-ray Burst Monitor (GBM) aboard the Fermi Gamma-Ray Space Telescope, a satellite that was launched in 2008. The GBM’s sensors are mounted around the main chassis of Fermi, giving it a complete view of the sky. It consists of twelve sodium iodide detectors, each of which is directly coupled to a 12.7-cm diameter photomultiplier tube. Two additional sensors are made from cylindrical bismuth germanate scintillators, each of which is sandwiched between two photomultipliers. Together, the fourteen sensors cover from 8 keV to 30 MeV, and used in concert they can tell where in the sky a gamma-ray burst has occurred.
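
Conceptually, the readout chain is simple: each photomultiplier pulse is digitized and mapped to a gamma-ray energy through a calibration, then binned into a spectrum. The toy sketch below illustrates that idea only; the calibration constants are invented and have nothing to do with the real GBM pipeline.

```python
import random

GAIN_KEV_PER_COUNT = 0.75   # invented calibration: energy per ADC count
OFFSET_KEV = 5.0            # invented calibration offset

def pulse_to_energy(adc_counts: int) -> float:
    """Map a digitized PMT pulse height to a gamma-ray energy in keV."""
    return OFFSET_KEV + GAIN_KEV_PER_COUNT * adc_counts

# Simulated pulse heights, binned into a coarse 100 keV spectrum.
spectrum: dict[int, int] = {}
for _ in range(10_000):
    energy = pulse_to_energy(random.randint(10, 2000))
    bin_kev = int(energy // 100) * 100
    spectrum[bin_kev] = spectrum.get(bin_kev, 0) + 1

for bin_kev in sorted(spectrum)[:5]:
    print(f"{bin_kev:4d}-{bin_kev + 100:4d} keV: {spectrum[bin_kev]} counts")
```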

The coded aperture for Swift’s BAT. Each tiny lead square casts a unique shadow pattern on the array of cadmium-zinc-telluride (CZT) ionization sensors, allowing an algorithm to work out the characteristics of the gamma rays falling on it. Source: NASA.

Ionization methods are also used as gamma-ray detectors. The Neil Gehrels Swift Observatory, a dedicated GRB hunting satellite that was launched in 2004, has an instrument known as the Burst Alert Telescope, or BAT. This instrument has a very large field of view and is intended to monitor a huge swath of sky. It uses 32,768 cadmium-zinc-telluride (CZT) detector elements, each 4 x 4 x 2 mm, to directly detect the passage of gamma rays. CZT is a direct-bandgap semiconductor in which electron-hole pairs are formed across an electric field when hit by ionizing radiation, producing a current pulse. The CZT array sits behind a fan-shaped coded aperture, which has thousands of thin lead tiles arranged in an array that looks a little like a QR code. Gamma rays hit the coded aperture first, casting a pattern on the CZT array below. The pattern is used to reconstruct the original properties of the radiation beam mathematically, since conventional mirrors and lenses don’t work with gamma radiation. The BAT is used to rapidly detect the location of a GRB and to determine if it’s something worth looking at. If it is, it rapidly slews the spacecraft to look at the burst with its other instruments and instantly informs other gamma-ray observatories about the source so they can take a look too.
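
The recovery step is essentially a correlation: because the mask pattern is known, sliding it against the recorded shadowgram and looking for the best match reveals where the source sat on the sky. Here is a deliberately tiny one-dimensional sketch of that idea, using a random toy mask rather than Swift’s actual pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=64)        # toy 1-D coded aperture: 1 = open, 0 = lead tile

# A point source at a given sky position casts a shifted copy of the mask on the detector.
true_position = 20
shadowgram = np.roll(mask, true_position).astype(float)
shadowgram += rng.normal(0, 0.1, size=shadowgram.size)   # add detector noise

# Cross-correlate the shadowgram with every shift of the mask; the peak marks the source.
correlation = [np.dot(shadowgram, np.roll(mask, shift)) for shift in range(mask.size)]
print("Reconstructed position:", int(np.argmax(correlation)))   # -> 20
```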

The B.O.A.T.

On October 9, 2022, both Swift and Fermi, along with dozens of other spacecraft and even some ground observatories, would get to witness a cataclysmically powerful gamma-ray burst. Bloodlessly named GRB 221009A but later dubbed “The BOAT,” for “brightest of all time,” the initial GRB lasted for an incredible ten minutes with a signal that remained detectable for hours. Coming from the direction of the constellation Sagitta at a distance of 2.4 billion light years, the burst was powerful enough to saturate Fermi’s sensors and was ten times more powerful than any signal yet received by Swift.

The BOAT. A ten-hour time-lapse of data from the Fermi Large Area Telescope during GRB 221009A on October 9, 2022. Source: NASA/DOE/Fermi LAT Collaboration, Public domain

Almost everything about the BOAT is fascinating, and the superlatives are too many to list. The gamma-ray burst was so powerful that it showed up in the scientific data of spacecraft that aren’t even equipped with gamma-ray detectors, including orbiters at Mars and Voyager 1. Ground-based observatories noted the burst too, with instruments in Russia and China recording very high-energy photons in the range of tens to hundreds of TeV arriving at their detectors.

The total energy released by GRB 221009A is hard to gauge with precision, mainly because it swamped the very instruments designed to measure it. Estimates range from 10⁴⁸ to 10⁵⁰ joules, either of which dwarfs the total output of the Sun over its entire 10 billion-year lifespan. So much energy was thrown in our direction in such a short timespan that even our own atmosphere was impacted. Lightning detectors in India and Germany were triggered by the burst, and the ionosphere suddenly started behaving as if a small solar flare had just occurred. Most surprising was that the ionospheric effects showed up on the daylight side of the Earth, swamping the usual dampening effect of the Sun.
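
The comparison with the Sun is easy to sanity-check: the Sun’s luminosity multiplied by a ten-billion-year lifespan comes to roughly 10⁴⁴ joules, so even the low end of the estimate exceeds it by a factor of thousands.

```python
# Back-of-the-envelope check of the "dwarfs the Sun's lifetime output" claim.
SOLAR_LUMINOSITY_W = 3.8e26                   # watts
LIFETIME_S = 10e9 * 365.25 * 24 * 3600        # ten billion years in seconds

sun_total = SOLAR_LUMINOSITY_W * LIFETIME_S
print(f"Sun over 10 Gyr: {sun_total:.1e} J")                       # ~1.2e44 J
print(f"Low GRB estimate vs Sun: {1e48 / sun_total:.0f}x larger")  # ~8,000x
```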

When the dust had settled from the initial detection of GRB 221009A, the question remained: What happened to cause such an outburst? To answer that, the James Webb Space Telescope was tasked with peering into space, off in the direction of Sagitta, where it found pretty much what was expected — the remains of a massive supernova. In fact, the supernova that spawned this GRB doesn’t appear to have been particularly special when compared to other supernovae from similarly massive stars, which leaves the question of how the BOAT got to be so powerful.

Does any of this mean that a gamma-ray burst is going to ablate our atmosphere and wipe us out next week? Probably not, and given that this recent outburst was estimated to be a one-in-10,000-year event, we’re probably good for a while. It seems likely that there’s plenty that we don’t yet understand about GRBs, and that the data from GRB 221009A will be pored over for decades to come. It could be that we just got lucky this time, both in that we were in the right place at the right time to see the BOAT, and that it didn’t incinerate us in the process. But given that on average we see one GRB per day somewhere in the sky, chances are good that we’ll have plenty of opportunities to study these remarkable events.

A Look At The Small Web, Part 1

By: Jenny List
10 September 2024 at 14:00

In the early 1990s I was privileged enough to be immersed in the world of technology during the exciting period that gave birth to the World Wide Web, and I can honestly say I managed to completely miss those first stirrings of the information revolution in favour of CD-ROMs, a piece of technology which definitely didn’t have a future. I’ve written in the past about that experience and what it taught me about confusing the medium with the message, but today I’m returning to that period in search of something else. How can we regain some of the things that made that early Web good?

We All Know What’s Wrong With The Web…

It’s likely most Hackaday readers could recite a list of problems with the web as it exists here in 2024. Cory Doctorow coined a word for it, enshittification, referring to the shift of web users from being the consumers of online services to the product of those services, squeezed by a few Internet monopolies. A few massive corporations control so much of our online experience from the server to the browser, to the extent that for so many people there is very little they touch outside those confines.

The first ever web page is maintained as a historical exhibit by CERN.

Contrasting the enshittified web of 2024 with the early web, it’s not difficult to see how some of the promise was lost. Perhaps not the web of Tim Berners-Lee and his NeXT cube, but the one of a few years later, when Netscape was the new kid on the block to pair with your Trumpet Winsock. CD-ROMs were about to crash and burn, and I was learning how to create simple HTML pages.

The promise then was of a decentralised information network in which we would all have our own websites, or homepages as the language of the time put it, on our own servers. Microsoft even gave their users the tools to do this with Windows, in that the least technical of users could put a FrontPage Express web site on their Personal Web Server instance. This promise seems fanciful to modern ears, as fanciful perhaps as keeping the overall size of each individual page under 50k, but at the time it seemed possible.

With such promise then, just how did we end up here? I’m sure many of you will chip in in the comments with your own takes, but of course, setting up and maintaining a web server is either hard, or costly. Anyone foolish enough to point their Windows Personal Web Server directly at the Internet would find their machine compromised by script kiddies, and having your own “proper” hosting took money and expertise. Free stuff always wins online, so in those early days it was the likes of Geocities or Angelfire which drew the non-technical crowds. It’s hardly surprising that this trend continued into the early days of social media, starting the inevitable slide into today’s scene described above.

…So Here’s How To Fix It

If there’s a ray of hope in this wilderness then, it comes in the shape of the Small Web. This is a movement in reaction to a Facebook or Google internet, an attempt to return to that mid-1990s dream of a web of lightweight self-hosted sites. It’s a term which encompasses both lightweight use of traditional web technologies and some new ones designed more specifically to deliver lightweight services, and it’s fair to say that while it’s not going to displace those corporations any time soon it does hold the interesting prospect of providing an alternative. From a Hackaday perspective we see Small Web technologies as ideal for serving and consuming through microcontroller-based devices, for instance, such as event badges. Why shouldn’t a hacker camp badge have a Gemini client which picks up the camp schedule, for example? Because the Small Web is something of a broad term, this is the first part of a short series providing an introduction to the topic. We’ve set out here what it is and where it comes from, so it’s now time to take a look at some of those 1990s beginnings in the form of Gopher, before looking at what some might call its spiritual successors today.

An ancient Firefox version shows us a Gopher site. Ph0t0phobic, MPL 1.1.

It’s odd to return to Gopher after three decades, as it’s one of those protocols which was for most of us immediately lost as the Web gained traction. Particularly as at the time I associated Gopher with CLI-based clients and the Web with the then-new NCSA Mosaic, I’d retained that view somehow. It’s interesting then to come back and look at how the first generation of web browsers rendered Gopher sites, and see that they did a reasonable job of making them look a lot like the more texty web sites of the day. In another universe perhaps Gopher would have evolved further to something more like the web, but instead it remains an ossified glimpse of 1992, even if there are still a surprising number of active Gopher servers to be found. There’s a re-imagined version of the Veronica search engine, and some fun can be had browsing this backwater.

With the benefit of a few decades of the Web it’s immediately clear that while Gopher is very fast indeed in the days of 64-bit desktops and gigabit fibre, the limitations of what it can do are rather obvious. We’re used to consuming information as pages instead of as files, and it just doesn’t meet those expectations. Happily, though Gopher never made those modifications, there’s something like what it might have become in Gemini. This is a lightweight protocol like Gopher, but with a page format that allows hyperlinking. Intentionally it’s not simply trying to re-implement the web and HTML; instead it’s trying to preserve the simplicity while giving users the hyperlinking that makes the web so useful.

A Kennedy search engine Gemini search page for "Hackaday".
It feels a lot like the early 1990s Web, doesn’t it.

The great thing about Gemini is that it’s easy to try. The Gemini protocol website has a list of known clients, but if even that’s too much, find a Gemini to HTTP proxy (I’m not linking to one, to avoid swamping someone’s low traffic web server). I was soon up and running, and exploring the world of Gemini sites. Hackaday doesn’t have a presence there… yet.
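
If you’d rather poke at the protocol itself than install a client, the whole exchange is simple enough to do by hand: open a TLS connection to port 1965, send the URL followed by CRLF, and read back a one-line status header and the text/gemini body. Here’s a minimal Python sketch; the capsule address is just an example, and a real client should do trust-on-first-use certificate pinning rather than skipping verification entirely.

```python
import socket
import ssl

def gemini_fetch(url: str) -> str:
    """Fetch a Gemini URL: TLS to port 1965, send the URL + CRLF, read the response."""
    host = url.split("/")[2]
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE   # Gemini servers commonly use self-signed certs
    with socket.create_connection((host, 1965)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall((url + "\r\n").encode("utf-8"))
            response = b""
            while chunk := tls.recv(4096):
                response += chunk
    header, _, body = response.partition(b"\r\n")
    print(header.decode())                # e.g. "20 text/gemini"
    return body.decode("utf-8", errors="replace")

print(gemini_fetch("gemini://geminiprotocol.net/")[:300])
```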

We’ve become so used to web pages taking a visible time to load that the lightning-fast response of Gemini is a bit of a shock at first. It’s normal for a web page to contain many megabytes of images, JavaScript, CSS, and other resources, so what is in effect the Web stripped down to only the information is unexpected. The pages are only a few K in size and load, in effect, instantaneously. This may not be how the Web should be, but it’s certainly how fast and efficient hypertext information should be.

This has been part 1 of a series on the Small Web; in looking at the history and the Gemini protocol from a user perspective, we know we’ve only scratched the surface of the topic. Next time we’ll be looking at how to create a Gemini site of your own, through learning it ourselves.

Reinforcing Plastic Polymers With Cellulose and Other Natural Fibers

By: Maya Posch
9 September 2024 at 14:00

While plastics are very useful on their own, they can be much stronger when reinforced and mixed with a range of fibers. Not surprisingly, this includes the thermoplastic polymers which are commonly used with FDM 3D printing, such as polylactic acid (PLA) and polyamide (PA, also known as nylon). Although the most well-known fibers used for this purpose are probably glass fiber (GF) and carbon fiber (CF), these come with a range of issues, including their high abrasiveness when printing and potential carcinogenic properties in the case of carbon fiber.

So what other reinforcing fiber options are there? As it turns out, cellulose is one of these, along with basalt. The former has received a lot of attention recently, as the addition of cellulose and similar materials to thermoplastic polymers such as PLA can create so-called biocomposites: plastics without the brittleness of PLA, while also being made fully out of plant-based materials.

Regardless of the chosen composite, the goal is to enhance the properties of the base polymer matrix with the reinforcement material. Is cellulose the best material here?

Cellulose Nanofibers

Plastic objects created by fused deposition modeling (FDM) 3D printing are quite different from their injection-molding counterparts. In the case of FDM objects, the relatively poor layer adhesion and presence of voids means that 3D-printed PLA parts only have a fraction of the strength of the molded part, while also affecting the way that any fiber reinforcement can be integrated into the plastic. This latter aspect can also be observed with the commonly sold CF-containing FDM filaments, where small fragments of CF are used rather than long strands.

According to a study by Tushar Ambone et al. (2020) as published (PDF) in Polymer Engineering and Science, FDM-printed PLA has a 49% lower tensile strength and 41% lower modulus compared to compression molded PLA samples. The addition of a small amount of sisal-based cellulose nanofiber (CNF) at 1% by weight to the PLA subsequently improved these parameters by 84% and 63% respectively, with X-ray microtomography showing a reduction in voids compared to the plain PLA. Here the addition of CNF appears to significantly improve the crystallization of the PLA with corresponding improvement in its properties.
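
Taken together, those relative figures suggest the nanofiber-loaded prints claw back most of the gap to molded material. The quick check below uses only the percentages from the study, with no absolute strength values assumed:

```python
# Relative arithmetic only; compression-molded PLA is taken as the baseline of 1.0.
printed_strength = 1.0 * (1 - 0.49)       # FDM PLA: 49% lower tensile strength than molded
printed_modulus  = 1.0 * (1 - 0.41)       # FDM PLA: 41% lower modulus than molded

cnf_strength = printed_strength * 1.84    # +84% with 1 wt% cellulose nanofiber
cnf_modulus  = printed_modulus * 1.63     # +63% with 1 wt% cellulose nanofiber

print(f"Printed PLA + CNF vs molded PLA, strength: {cnf_strength:.2f}")  # ~0.94
print(f"Printed PLA + CNF vs molded PLA, modulus:  {cnf_modulus:.2f}")   # ~0.96
```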

Fibers Everywhere

Incidentally a related study by Chuanchom Aumnate et al. (2021) as published in Cellulose used locally (India) sourced kenaf cellulose fibers to reinforce PLA, coming to similar results. This meshes well with the findings by Usha Kiran Sanivada et al. (2020) as published in Polymers, who mixed flax and jute fibers into PLA, although since they used fairly long fibers in compression and injection molded samples, a direct comparison with the FDM results in the Aumnate et al. study is somewhat complicated.

Meanwhile the use of basalt fibers (BF) is already quite well-established alongside glass fibers (GF) in insulation, where it replaced asbestos due to the latter’s rather unpleasant reputation. BF has some advantages over GF in composite materials, as per e.g. Li Yan et al. (2020), including better chemical stability and lower moisture absorption rates. As basalt is primarily composed of silicate, this does raise the specter of it being another potential cause of silicosis and related health risks.

With the primary health risk of mineral fibers like asbestos coming from the jagged, respirable fragments that these can create when damaged in some way, this is probably a very pertinent issue to consider before putting certain fibers quite literally everywhere.

A 2018 review by Seung-Hyun Park in Saf Health Work titled “Types and Health Hazards of Fibrous Materials Used as Asbestos Substitutes” provides a good overview of the relative risks of a range of asbestos-replacements, including BF (mineral wool) and cellulose. Here mineral wool fibers got rated as IARC Group 3 (insufficient evidence of carcinogenicity) except for the more biopersistent types (Group 2B, possibly carcinogenic), while cellulose is considered to be completely safe.

Finally, related to cellulose, there is also ongoing research on using lignin (present in plants next to cellulose as cell reinforcement) to improve the properties of PLA in combination with cellulose. An example is found in a 2021 study by Diana Gregor-Svetec et al. as published in Polymers, in which PLA composites were created with lignin and surface-modified nanofibrillated (nanofiber) cellulose (NFC). A 2023 study by Sofia P. Makri et al. (also in Polymers) examined methods to improve the dispersion of the lignin nanoparticles. The benefit of lignin in a PLA/NFC composite appears to be in UV stabilization most of all, which should make objects FDM printed using this material last significantly longer when placed outside.

End Of Life

Another major question with plastic polymers is what happens with them once they inevitably end up discarded in the environment. There should be little doubt about what happens with cellulose and lignin in this case, as every day many tons of cellulose and lignin are happily devoured by countless microorganisms around the globe. This means that the only consideration for cellulose-reinforced plastics in an end-of-life scenario is that of the biodegradability of PLA and other base polymers one might use for the polymer composite.

Today, many PLA products end up discarded in landfills or polluting the environment, where PLA’s biodegradability is consistently shown to be poor, similar to other plastics, as it requires an industrial composting process involving microbial and hydrolytic treatments. Although incinerating PLA is not a terrible option due to its chemical composition, it is perhaps an ironic thought that the PLA in cellulose-reinforced PLA might actually be the most durable component in such a composite.

That said, if PLA is properly recycled or composted, it seems to pose few issues compared to other plastics, and any cellulose components would likely not interfere with the process, unlike CF-reinforced PLA, where incinerating it is probably the easiest option.

Do you print with hybrid or fiber-mixed plastics yet?

Olympic Sprint Decided By 40,000 FPS Photo Finish

17 August 2024 at 20:00
40,000 FPS Omega camera captures Olympic photo-finish

Advanced technology played a crucial role in determining the winner of the men’s 100-meter final at the Paris 2024 Olympics. In a historically close race, American sprinter Noah Lyles narrowly edged out Jamaica’s Kishane Thompson by just five-thousandths of a second. The final decision relied on an image captured by an Omega photo finish camera that shoots an astonishing 40,000 frames per second.

This cutting-edge technology, originally reported by PetaPixel, ensured the accuracy of the result in a race where both athletes recorded a time of 9.78 seconds. If SmartThings’ shot pourer from the 2012 Olympics were still around, it could once again fulfill its intended role of celebrating US medals.

Omega, the Olympics’ official timekeeper for decades, has continually innovated to enhance performance measurement. The Omega Scan ‘O’ Vision Ultimate, the camera used for this photo finish, is a significant upgrade from its 10,000 frames per second predecessor. The new system captures four times as many frames per second and offers higher resolution, providing a detailed view of the moment each runner’s torso touches the finish line. This level of detail was crucial in determining that Lyles’ torso touched the line first, securing his gold medal.
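
Some rough numbers show why the extra frame rate matters; the sprinter speed here is an assumed round figure, not an official measurement:

```python
margin_s = 0.005            # five thousandths of a second between Lyles and Thompson
sprinter_speed = 10.0       # m/s: assumed round figure near top sprint speed

print(f"Gap at the line: ~{sprinter_speed * margin_s * 100:.0f} cm")   # ~5 cm
for fps in (10_000, 40_000):
    print(f"{fps:>6} fps: {1e6 / fps:.0f} µs per frame, "
          f"~{sprinter_speed / fps * 1000:.2f} mm of movement per frame")
```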

This camera is part of Omega’s broader technological advancements for the Paris 2024 Olympics, which include advanced Computer Vision systems utilizing AI and high-definition cameras to track athletes in real-time. For a closer look at how technology decided this historic race, watch the video by Eurosport that captured the event.

Australia’s Controlled Loads Are In Hot Water

By: Lewin Day
15 August 2024 at 14:00

Australian grids have long run a two-tiered pricing scheme for electricity. In many jurisdictions, regular electricity is charged at a standard rate, while you can get cheaper electricity for certain applications if your home is set up with a “controlled load.” Typically, this involves high-energy equipment like pool heaters or hot water heaters.

This scheme has long allowed Australians to save money while keeping their water piping-hot at the same time. However, the electrical grid has changed significantly in the last decade, and these controlled loads are starting to look increasingly out of step with what the grid and the consumer need. What is to be done?

Controlled What Now?

Hot water heaters can draw in excess of 5 kW for hours on end when warming up. Electrical authorities figured that it would be smart to take this huge load on the grid, and shift it to night time, a period of otherwise low demand. Credit: Lewin Day

In Australia, the electricity grid has long relied on a system of “controlled loads” to manage the energy demand from high-consumption appliances, particularly electric hot water heaters. These controlled loads were designed to take advantage of periods when overall electricity demand was lower, traditionally at night. By scheduling energy-intensive activities like heating water during these off-peak hours, utilities could balance the load on the grid and reduce the need for additional power generation capacity during peak times. In turn, households would receive cheaper off-peak electricity rates for energy used by their controlled load.

This system was achieved quite simply. Households would have a special “controlled load” meter in their electrical box. This would measure energy use by the hot water heater, or whatever else the electrical authority had allowed to be hooked up in this manner. The controlled load meter would be set on a timer so the attached circuit would only be powered in the designated off-peak times. Meanwhile, the rest of the home’s electrical circuits would be connected to the main electrical meter which would provide power 24 hours a day.

By and large, this system worked well. However, it did lead to more than a few larger families running out of hot water on the regular. For example, you might have had a 250 liter hot water heater. Hooked up as a controlled load, it would heat up overnight and switch off around 7 AM. Two or three showers later, the hot water heater would have delivered all its hot water, and you’d be stuck without any more until it switched back on at night.
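
The arithmetic behind that morning shortfall is simple enough; with assumed round figures for tank size, temperature rise and element power, a full reheat takes hours, and under a controlled load it simply isn’t allowed to start until the off-peak window opens:

```python
# All figures are assumptions for illustration.
tank_litres = 250
delta_t_k = 40            # heating from ~20 C mains water to ~60 C storage
specific_heat = 4.186     # kJ per kg per kelvin
element_kw = 3.6          # a common single-element heater size

energy_kwh = tank_litres * specific_heat * delta_t_k / 3600
print(f"Energy for a full reheat: {energy_kwh:.1f} kWh")                      # ~11.6 kWh
print(f"Time on a {element_kw} kW element: {energy_kwh / element_kw:.1f} h")  # ~3.2 h
```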

Historically, most electric hot water heaters were set to run during the low-demand night period, typically after 10 PM. Demand for electricity was low at this time, while peak demand came during the daytime. It made sense to take the huge load from everyone’s hot water system and move all that demand to the otherwise quiet night period. This lowered the daytime peak, reducing demand on the grid, in turn slashing infrastructure and generation costs. It had the effect of keeping the demand curve flatter throughout the whole 24-hour period.

This strategy was particularly effective in a grid predominantly powered by coal-fired power stations, which operated most efficiently when running continuously at a stable output. By shifting the hot water heating load to nighttime, utilities could maintain a more consistent demand for electricity throughout the day and night, reducing the need for sudden increases in generation capacity during peak times.

Everything Changed

The Australian grid now sees large peaks in solar generation during the day. Credit: APVI.org.au via screenshot

However, the energy landscape in Australia has undergone a significant transformation in recent years. This has been primarily driven by the rapid growth of renewable energy sources, particularly home solar generation. As a result, the dynamics of electricity supply and demand have changed, prompting a reevaluation of the traditional approach to controlled loads.

Renewable energy has completely changed the way supply and demand works in the Australian grid. These days, energy is abundant while the sun is up. During the middle of the day, wholesale energy prices routinely plummet below $0.10 / kWh as the sun bears down on thousands upon thousands of solar panels across the country. Energy becomes incredibly cheap. Meanwhile, at night, energy is now very expensive. The solar panels are all contributing nothing, and it becomes the job of coal and gas generators to carry the majority of the burden. Fossil fuels are increasingly expensive, and spikes in the wholesale price are not uncommon, at times exceeding $10 / kWh.

Solar power generation peaks are now so high that Australian cities often produce more electricity than is needed to meet demand. This excess solar energy has led to periods where electricity prices can be very low, or even negative, due to the abundance of renewable energy on the grid. As a result, there is a growing argument that it now makes more sense to shift controlled loads, such as hot water heaters, to run during the daytime rather than at night.

The rise of home solar generation has created unexpected flow-on effects for Australia’s power grid. Credit: Wayne National Forest, CC BY 2.0

Shifting controlled loads to the daytime would help absorb the surplus solar energy. This would reduce the need for grid authorities to kick renewable generators off the grid in times of excess. It would also help mitigate the so-called “duck curve” effect, where the demand for electricity sharply increases in the late afternoon and early evening as solar generation declines, leading to a steep ramp-up in non-renewable generation. By using excess solar energy to power controlled loads during the day, the overall demand on the grid would be more balanced, and the reliance on fossil fuels during peak times could be reduced.

Implementing this shift would require adjustments to the current tariff structures and perhaps the installation of smart meters capable of dynamically managing when controlled loads are activated based on real-time grid conditions. In a blessed serendipity, some Australian states—like Victoria—have already achieved near-100% penetration of smart meters. Others are still in the process of rollout, aiming for near 100% coverage by 2030. While these changes would involve some initial investment, the long-term benefits, including greater integration of renewable energy, reduced carbon emissions, and potentially lower electricity costs for consumers, make it a compelling option.
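
At its core, a dynamically managed controlled load is a small optimization problem: given forecast interval prices (or expected solar output), pick the cheapest window long enough to reheat the tank. The sketch below illustrates the idea with invented hourly prices; real smart-meter schemes are considerably more involved.

```python
# Invented hourly prices in $/kWh, cheapest around midday when solar floods the grid.
prices = [0.30, 0.28, 0.25, 0.12, 0.05, 0.02, 0.01, 0.02, 0.06, 0.18, 0.35, 0.40]
run_hours = 3   # hours needed for a full reheat

window_costs = [sum(prices[i:i + run_hours]) for i in range(len(prices) - run_hours + 1)]
start = window_costs.index(min(window_costs))
print(f"Cheapest window: hours {start}-{start + run_hours - 1}, "
      f"average ${min(window_costs) / run_hours:.2f}/kWh")
```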

Fundamentally, it makes no sense for controlled loads to continue running as they have done for decades. Millions of Australians are now paying to heat their water during higher-demand periods when energy is more expensive. This can be particularly punitive for those on regularly-updated live tariffs that change with the current wholesale energy price. Those customers will sit by, watching cheap solar energy effectively go to waste during a sunny day, before their water heater finally kicks in at night when the coal generators are going their hardest.

While the traditional approach to controlled loads in Australia has served the grid well in the past, the rise of renewable energy has changed things. The abundance of solar generation necessitates a rethinking of when these loads are scheduled. By shifting the operation of controlled loads like hot water heaters to the daytime, Australia can make better use of its abundant renewable energy resources, improve grid stability, and move closer to its sustainability goals. It’s a simple idea that makes a lot of sense. Here’s waiting for the broader power authorities to step up and make the change.
