
AI Face Anonymizer Masks Human Identity in Images

14 November 2024 at 03:00

We’re all pretty familiar with AI’s ability to create realistic-looking images of people who don’t exist, but here’s an unusual implementation of that technology for a different purpose: masking people’s identity without altering the substance of the image itself. The result is that the content and “purpose” (for lack of a better term) of the photo remain unchanged, while it becomes impossible to identify the actual person in it. This invites some interesting privacy-related applications.

Originals on left, anonymized versions on the right. The substance of the images has not changed.

The paper for Face Anonymization Made Simple has all the details, but the method boils down to using diffusion models to take an input image, automatically pick out identity-related features, and alter them in a way that looks more or less natural. For this purpose, identity-related features essentially mean key parts of a human face. Other elements of the photo (background, expression, pose, clothing) are left unchanged. As a concept it’s been explored before, but the researchers show that this versatile method is both simpler and better-performing than others.
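As a rough illustration of the general flavor (and emphatically not the paper's actual method), masking the identity-bearing facial region and letting a diffusion model repaint it could look something like the sketch below. The face_recognition and diffusers usage, the checkpoint name, and the prompt are our own assumptions for the example.

```python
# Rough illustration only -- NOT the paper's method. Assumes the
# face_recognition, diffusers, and Pillow packages are installed and a
# Stable Diffusion inpainting checkpoint is available locally.
import face_recognition
import numpy as np
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline

def anonymize(path: str) -> Image.Image:
    image = Image.open(path).convert("RGB")
    # Each detected face comes back as a (top, right, bottom, left) box.
    boxes = face_recognition.face_locations(np.array(image))

    # White-on-black mask covering only the faces; background, pose and
    # clothing stay outside the mask and are left untouched.
    mask = Image.new("L", image.size, 0)
    draw = ImageDraw.Draw(mask)
    for top, right, bottom, left in boxes:
        draw.rectangle([left, top, right, bottom], fill=255)

    # Repaint the masked regions with a plausible (but different) face.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting")   # assumed checkpoint name
    return pipe(prompt="a photo of a person's face",
                image=image, mask_image=mask).images[0]
```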

Diffusion models are the essence of AI image generators like Stable Diffusion. The fact that they can be run locally on personal hardware has opened the doors to all kinds of interesting experimentation, like this haunted mirror and other interactive experiments. Forget tweaking dull sliders like “brightness” and “contrast” for an image. How about altering the level of “moss”, “fire”, or “cookie” instead?


The Constant Monitoring and Work That Goes into JWST’s Optics

11 November 2024 at 12:00

The James Webb Space Telescope’s array of eighteen hexagonal mirrors went through an intricate (and lengthy) alignment and calibration process before it could begin its mission — but the process was far from a one-and-done. Keeping the telescope aligned and performing optimally requires constant work from a team dedicated to that very purpose.

Alignment of JWST’s optical elements is so fine, and the instrument so sensitive, that even small temperature variations have an effect on results. For about twenty minutes every other day, the monitoring program uses a set of lenses that intentionally de-focus images of stars by a known amount. These distortions contain measurable features that the team uses to build a profile of changes over time. Each of the mirror segments is also checked by being imaged selfie-style every three months.

This monitoring and maintenance work pays off. The team has made over 25 corrections since the mission began, and JWST’s optics continue to exceed specifications. The improved performance has a direct payoff: better data can be gathered from faint celestial objects.

JWST was fantastically ambitious and is extremely successful, and as a science instrument it is jam-packed with amazing bits, not least of which are the actuators responsible for adjusting the mirrors.

Here’s Code for that AI-Generated Minecraft Clone

10 November 2024 at 00:00

A little while ago Oasis was showcased on social media, billing itself as the world’s first playable “AI video game” that responds to complex user input in real-time. Code is available on GitHub for a down-scaled local version if you’d like to take a look. There’s a bit more detail and background in the accompanying project write-up, which talks about both the potential as well as the numerous limitations.

We suspect the focus on supporting complex user input (such as mouse look and an item inventory) is what the creators feel distinguishes it meaningfully from AI-generated DOOM. The latter was a concept that demonstrated AI image generators could (kinda) function as real-time game engines.

Image generators are, in a sense, prediction machines. The idea is that by providing a trained model with a short history of what just happened plus the user’s input as context, it can generate a pretty usable prediction of what should happen next, and do it quickly enough to be interactive. Run that in a loop, and you get some pretty impressive clips to put on social media.
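In very rough terms, the loop might look something like the sketch below; `model`, `capture_input`, and `display` are hypothetical stand-ins rather than anything from Oasis’s actual code.

```python
# Conceptual sketch of a frame-prediction game loop. `model`,
# `capture_input`, and `display` are hypothetical stand-ins.
from collections import deque

CONTEXT_FRAMES = 16   # how much recent history the model is conditioned on

def run(model, first_frame, capture_input, display):
    history = deque([first_frame], maxlen=CONTEXT_FRAMES)
    while True:
        action = capture_input()        # mouse look, movement keys, inventory...
        # Condition on recent frames plus the player's input and predict
        # the next frame -- the "game engine" is just this prediction.
        frame = model.predict(frames=list(history), action=action)
        display(frame)
        history.append(frame)           # predictions become the new history
```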

It is a neat idea, and we certainly applaud the creativity of bending an image generator to this kind of application, but we can’t help but notice the limitations. Sit and stare at something, or walk through dark or repetitive areas, and the system loses its grip; things rapidly go into a downward spiral we can only describe as “dreamily broken”.

It may be more a demonstration of a concept than a properly functioning game, but it’s still a very clever way to leverage image generation technology. Although, if you’d prefer AI to keep the game itself untouched, take a look at neural networks trained to use the DOOM level creator tools.

Nix + Automated Fuzz Testing Finds Bug in PDF Parser

9 November 2024 at 12:00

[Michael Lynch]’s adventures in configuring Nix to automate fuzz testing are a lot of things all rolled into one. The write-up is not only a primer on fuzz testing (a method of finding bugs), but also a how-to on automating the setup using Nix (which is a lot of things, including a kind of package manager), as well as useful info on effectively automating software processes.

[Michael] not only walks through how he got it all up and running in a simplified and usefully-portable way, but he actually found a buffer overflow in pdftotext in the process! (It turns out someone else had reported the same bug a few weeks before he found it, but it demonstrates the whole process regardless.)

[Michael] chose fuzz testing because, while using it to find security vulnerabilities is conceptually simple, actually doing it tends to require setting up a test environment with a complex workflow and a lot of dependencies. The result has a high degree of task specificity, and isn’t very portable or reusable. Nix allowed him to really simplify the process while also making it more adaptable. Be sure to check out part two, which goes into detail about how exactly one goes from discovering an input that crashes a program to tracking down (and patching) the reason it happened.
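To make the fuzzing part concrete, here’s a bare-bones sketch of the underlying idea: mutate an input, run the target, and watch for crashes. It’s a naive illustration rather than the actual tooling from [Michael]’s write-up, and the seed filename is a placeholder.

```python
# Naive mutation fuzzer sketch: flip random bytes in a seed PDF and watch
# pdftotext for crashes. Purely illustrative -- real fuzzers are
# coverage-guided and far more effective. "seed.pdf" is a placeholder.
import random
import subprocess
import tempfile

seed = open("seed.pdf", "rb").read()

for i in range(10_000):
    data = bytearray(seed)
    for _ in range(random.randint(1, 16)):        # corrupt a few random bytes
        data[random.randrange(len(data))] = random.randrange(256)

    with tempfile.NamedTemporaryFile(suffix=".pdf") as f:
        f.write(data)
        f.flush()
        result = subprocess.run(["pdftotext", f.name, "/dev/null"],
                                capture_output=True)

    if result.returncode < 0:                     # killed by a signal = crash
        with open(f"crash_{i}.pdf", "wb") as out:
            out.write(data)
        print(f"crash (signal {-result.returncode}) saved as crash_{i}.pdf")
```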

Making fuzz testing easier (and in a sense, cheaper) is something people have been interested in for a long time, even going so far as to see whether pressing a stack of single-board computers into service as dedicated fuzz testers made economic sense.

Split-Flap Clock Flutters Its Way to Displaying Time Without Numbers

5 November 2024 at 21:00

Here’s a design for a split-flap clock that doesn’t do it the usual way. Instead of the flaps showing numbers, Klapklok has a bit more in common with flip-dot displays.

Klapklok updates every 2.5 minutes.

It’s an art piece that uses custom-made split-flaps which flutter away to update the display as time passes. An array of vertically-mounted flaps creates a sort of low-res display, emulating an analog clock. These are no ordinary actuators, either. The visual contrast and cleanliness of the mechanism is fantastic, and the sound they make is less of a chatter and more of a whisper.

The sound the flaps create and the sight of the high-contrast flaps in motion are intended to be a relaxing and calming way to connect with the concept of time passing. There’s some interactivity built in as well, as the Klapklok also allows one to simply draw on it wirelessly via a mobile phone.

Klapklok has a total of 69 elements which are all handmade. We imagine there was really no other way to get exactly what the designer had in mind; something many of us can relate to.

Split-flap mechanisms are wonderful for a number of reasons, and if you’re considering making your own be sure to check out this easy and modular DIY reference design before you go about re-inventing the wheel. On the other hand, if you do wish to get clever about actuators maybe check out this flexible PCB that is also its own actuator.

DIY Laser Tag Project Does it in Style

26 October 2024 at 14:00

This DIY lasertag project designed by [Nii], which he brought to Tokyo Maker Faire back in September, is a treasure trove. It’s all in Japanese and you’ll need to visit X (formerly Twitter) to see it, but the images do a fine job of getting the essentials across and your favorite translator tool will do a fair job of the rest.

There’s a whole lot to admire in this project. The swing-out transparent OLED display is super slick, the electronics are housed on a single PCB, the back half of the grip is in fact a portable USB power bank that slots directly in to provide power, and there’s a really smart use of a short RGB LED strip for effects.

The optical elements show some inspired design, as well. An infrared LED points forward, and with the help of a lens, focuses the beam tightly enough to make aiming meaningful. For detecting hits, the top of the pistol conceals a custom-made reflector that directs any IR downward into a receiver, making it omnidirectional in terms of hit sensing but only needing a single sensor.

Want to know more? Check out [Nii]’s earlier prototypes on his website. It’s clear this has been in the works for a while, so if you like seeing how a project develops, you’re in for a treat.

As for the choice of transparent OLED displays? They are certainly cool, and we remember how wild it looks to have several stacked together.

Behold a First-Person 3D Maze, Vintage Atari Style

20 October 2024 at 08:00

[Joe Musashi] was inspired by discussions about 3D engines and decided to create a first-person 3D maze of his own. The really neat part? It could have been done on vintage Atari hardware. Well, mostly.

He does admit he had to do a little cheating to make this work; he relies on code running on the ARM processor in the modern Atari VCS to do the ray casting work, and the 6507 chip just handles the display kernel. Still, running his demo on a vintage Atari 2600 console could be possible, but would definitely require a Melody or Harmony cartridge, which are special reprogrammable cartridges popular for development and homebrew.

Ray casting is a conceptually simple method of generating a 3D view from a given perspective, and here’s a tutorial that will tell you all you need to know about how it works, and how to implement your own.
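To give a feel for how compact the core idea is, here’s a minimal (and thoroughly unoptimized) sketch that marches one ray per screen column through a tile map and returns hit distances. It’s generic ray casting in Python, not [Joe]’s 6507/ARM code.

```python
import math

# 1 = wall, 0 = open floor
MAP = ["1111111111",
       "1000000001",
       "1001100001",
       "1000000001",
       "1111111111"]

def cast(px, py, angle, step=0.02, max_dist=20.0):
    """March a single ray from (px, py) until it hits a wall or leaves the map."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        dist += step
        x, y = int(px + dx * dist), int(py + dy * dist)
        if not (0 <= y < len(MAP) and 0 <= x < len(MAP[0])):
            break
        if MAP[y][x] == "1":
            break                 # on screen, wall height ~ 1 / dist
    return dist

def render(px, py, heading, fov=math.pi / 3, columns=80):
    # One ray per screen column, fanned out across the field of view.
    return [cast(px, py, heading - fov / 2 + fov * c / columns)
            for c in range(columns)]

print(["%.2f" % d for d in render(2.5, 2.5, heading=0.0)[:5]])
```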

[Joe]’s demo is just a navigable 3D maze rather than a game, but it’s pretty wild to see what could in theory have run on such an old platform, even if a few modern cheats are needed to pull it off. And if you agree that it’s neat, then hold onto your hats because a full 3D ray casting game — complete with a micro physics engine — was perfectly doable on the Commodore PET, which even had the additional limitation of a monochrome character-based display.

Make Your Own Remy the Rat This Halloween

19 October 2024 at 20:00

[Christina Ernst] executed a fantastic idea just in time for Halloween: her very own Remy the rat (from the 2007 film Ratatouille). Just like in the film, Remy perches on her head and appears to guide her movements by pulling on hair as though operating a marionette. It’s a great effect, and we love the hard headband used to anchor everything, which also offers a handy way to route the necessary wires.

Behind Remy are hidden two sub-micro servos, one for each arm. [Christina] simply ties locks of her hair to Remy’s hands, and lets the servos do the rest. Part of what makes the effect work so well is that Remy is eye-catching, and the relatively small movements of Remy’s hands are magnified and made more visible in the process of moving the locks of hair.

Originally Remy’s movements were random, but [Christina] added an MPU6050 accelerometer board to measure vertical movements of her own arm. She uses that sensor data to make Remy’s motions reflect her own. The MPU6050 is economical and easy to work with, readily available on breakout boards from countless overseas sellers, and we’ve seen it show up in all kinds of projects such as this tiny DIY drone and self-balancing cube.
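In spirit, the mapping from sensor to servo can be as simple as the sketch below; `read_vertical_accel()` and `set_arm_angle()` are hypothetical stand-ins for whatever MPU6050 and servo driver code the real build uses, and the gain and smoothing constants are made up.

```python
# Hypothetical sketch: turn vertical acceleration from an MPU6050 into a
# servo angle, with a little smoothing so Remy's arm tracks the wearer's
# arm without jitter. The two stubs stand in for real driver code; the
# gain and limits are made-up illustrative values.
import time

def read_vertical_accel():
    """Stub: return vertical acceleration in g, gravity removed."""
    return 0.0

def set_arm_angle(degrees):
    """Stub: command one of Remy's arm servos."""
    print(f"servo -> {degrees:.0f} deg")

GAIN = 60.0      # degrees of servo travel per g
CENTER = 90.0    # rest position
smoothed = 0.0

while True:
    smoothed = 0.8 * smoothed + 0.2 * read_vertical_accel()  # low-pass filter
    angle = max(30.0, min(150.0, CENTER + GAIN * smoothed))  # keep in range
    set_arm_angle(angle)
    time.sleep(0.02)                                          # ~50 Hz updates
```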

Want to make your own Remy, or put your own spin on the idea? The 3D models and code are all on GitHub and if you want to see more of it in action, [Christina] posts videos of her work on TikTok and Instagram.

[via CBC]

All System Prompts for Anthropic’s Claude, Revealed

13 October 2024 at 05:00

For as long as AI Large Language Models have been around (well, for as long as modern ones have been accessible online, anyway), people have tried to coax the models into revealing their system prompts. The system prompt is essentially the model’s fundamental directives on what it should do and how it should act. Such healthy curiosity is rarely welcomed, however, and creative efforts at making a model cough up its instructions are frequently met with a figurative glare and stern tapping of the Terms & Conditions sign.

Anthropic have bucked this trend by making system prompts public for the web and mobile interfaces of all three incarnations of Claude. The prompt for Claude Opus (their flagship model) is well over 1500 words long, with different sections specifically for handling text and images. The prompt does things like help ensure Claude communicates in a useful way, taking into account the current date and an awareness of its knowledge cut-off, or the date after which Claude has no knowledge of events. There’s some stylistic stuff in there as well, such as Claude being specifically told to avoid obsequious-sounding filler affirmations, like starting a response with any form of the word “Certainly.”

While the source code (and more importantly, the training data and resulting model weights) for Claude remain under wraps, Anthropic have been rather more forthcoming than others when it comes to sharing other details about inner workings, showing how human-interpretable features and concepts can be extracted from LLMs (which uses Claude Sonnet as an example).

Naturally, safety is a concern with LLMs, which is as good an opportunity as any to remind everyone of Goody-2, undoubtedly the world’s safest AI.

Remembering John Wheeler: You’ve Definitely Heard of His Work

12 October 2024 at 23:00

Physicist John Archibald Wheeler made groundbreaking contributions to physics, and [Amanda Gefter] has a fantastic writeup about the man. He was undeniably brilliant, and if you haven’t heard of him, you have certainly heard of some of his students, not to mention his work.

Ever heard of wormholes? Black holes? How about the phrase “It from Bit”? Then you’ve heard of his work. All of those terms were coined by Wheeler; a knack for naming things was one of his talents. His students included Richard Feynman and Kip Thorne (if you enjoyed Interstellar, you at least indirectly know of Kip Thorne), among others. He never won a Nobel prize, but his contributions were lifelong and varied.

One thing that set Wheeler apart was the highly ambitious nature of his research and inquiries. He was known for pushing theories to (and past) their absolute limits, always seeking deeper insights into the nature of reality. The progress of new discoveries in the fields of general relativity (for which his textbook, Gravitation, remains highly relevant), space-time, and quantum mechanics frequently left Wheeler feeling as though more questions had been raised than answered. He pursued a greater understanding of the nature of reality until his death in 2008. He pondered not just the ultimate nature of our universe, but also why we seem to have the same basic experience of it. Wheeler saw these questions as having answers that were far from self-evident.

Wheeler’s relentless curiosity pushed the boundaries, reminding us that the search for knowledge never truly ends. If that inspires you, then take the time to check out the full article and see whether his questions inspire and challenge your own perspective.

Scientists can now make black holes — sort of. You can even make your own wormhole. Sort of.

Memristors Are Cool, Radiation-Resistant Memristors Even More So

6 October 2024 at 02:00

Space is a challenging environment for semiconductors, but researchers have shown that a specific type of memristor (the hafnium oxide memristor, to be exact) actually reacts quite usefully when exposed to gamma radiation. In fact, it’s even able to leverage this behavior as a way to measure radiation exposure. In essence, it’s able to act as both memory and a sensor.

Being able to resist radiation exposure is highly desirable for space applications. Efficient ways to measure radiation exposure are just as valuable. The hafnium oxide memristor looks like it might be able to do both, but before going into how that works, let’s take a moment for a memristor refresher.

A memristor is essentially two conductive plates between which conductive bridges can be formed; applying a voltage “writes” to the device by setting it to a particular resistance. A positive voltage causes bridging to occur between the two ends, lowering the device’s resistance, and a negative voltage reverses the process, increasing the resistance. The exact formulation of a memristor can vary. The memristor was conceived in the 1970s by Leon Chua, and HP Labs created a working one in 2008. An (expensive) 16-pin DIP was first made available in 2015.
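To get a feel for that write behavior, here’s a toy simulation based on the classic linear ion-drift memristor model; the constants are illustrative rather than taken from any real device.

```python
# Toy memristor using the classic linear ion-drift model: the internal
# state w grows under positive current (resistance drops) and shrinks
# under negative current (resistance rises). Constants are illustrative.
R_ON, R_OFF = 100.0, 16_000.0   # fully-bridged vs. unbridged resistance (ohms)
D = 10e-9                       # film thickness (m)
MU = 1e-14                      # dopant mobility (m^2 / (V*s))
DT = 1e-3                       # time step (s)

def simulate(voltages, w=1e-9):
    """Apply a sequence of voltage samples, return the resistance over time."""
    trace = []
    for v in voltages:
        x = w / D                            # normalized state, 0..1
        r = R_ON * x + R_OFF * (1.0 - x)     # current resistance
        i = v / r
        w = min(max(w + MU * (R_ON / D) * i * DT, 0.0), D)  # drift, clamped
        trace.append(r)
    return trace

# Positive pulses "write" (resistance falls), negative pulses reverse it.
trace = simulate([1.0] * 2000 + [-1.0] * 2000)
print(round(trace[0]), round(trace[1999]), round(trace[-1]))
```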

A hafnium oxide memristor is a bit different. Normally it would be write-once, meaning a negative voltage does not reset the device, but researchers discovered that exposing it to gamma radiation appears to weaken the bridging, allowing a negative voltage to reset the device as expected. Exposure to radiation also caused a higher voltage to be required to set the memristor, a behavior the researchers were able to leverage to measure radiation exposure. Given time, a hafnium oxide memristor that had been exposed to radiation (and therefore required higher-than-normal voltages to be “set”) eventually lost this attribute. After 30 days, the exposed memristors appeared to recover completely from the effects of radiation exposure and no longer required an elevated voltage for writing. This is the behavior the article refers to as “self-healing”.

The research paper has all the details, and it’s interesting to see new things relating to memristors. After all, when it comes to electronic components it’s been quite a long time since we’ve seen something genuinely new.

See the “Pause-and-Attach” Technique for 3D Printing in Action

5 October 2024 at 20:00

[3DPrintBunny] is someone who continually explores new techniques and designs in 3D printing, and her latest is one she calls “pause-and-attach”, which she demonstrates by printing a vase with elements of the design splayed out onto the print bed.

The splayed-out elements get peeled up and attached to the print during a pause.

At a key point, the print is paused and one peels up the extended bits, manually attaching them to sockets on the main body of the print. Then the print resumes and seals everything in. The result is something that appears to defy the usual 3D printer constraints, as you can see here.

Pausing a 3D print to insert hardware (like nuts or magnets) is one thing, but we can’t recall seeing anything quite like this approach. It’s a little bit reminiscent of printing foldable structures to avoid supports in that it prints all of its own self-connecting elements, but at the same time it’s very different.
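If you want to experiment with the pause half of the trick yourself, post-processing the sliced G-code is one approach. The sketch below inserts a pause command at a chosen layer; the `;LAYER:` comment format and the M600 command are assumptions that depend entirely on your slicer and firmware, and the filenames are placeholders.

```python
# Sketch: insert a pause at a chosen layer by post-processing sliced G-code.
# The ";LAYER:" comment style and the M600 pause command are assumptions --
# check what your slicer and firmware actually emit and support.
PAUSE_LAYER = 42
PAUSE_GCODE = "M600 ; pause so parts can be peeled up and attached\n"

with open("vase.gcode") as src, open("vase_with_pause.gcode", "w") as dst:
    for line in src:
        if line.strip() == f";LAYER:{PAUSE_LAYER}":
            dst.write(PAUSE_GCODE)
        dst.write(line)
```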

We’ve seen [3DPrintBunny]’s innovative approaches before, with intentional stringing used as a design element, and like the rest of her work it’s both highly visual and definitely its own thing. You can see the whole process in a video she posted to social media, embedded below.

I tried out another 'pause-and-attach' type print today using some strings. The strings give it extra flexibility and allow me to add a twist😁 pic.twitter.com/gIytsb8NEm

— 3DPrintBunny (@3DPrintBunny) October 3, 2024

Shoot Smooth Video From Your Phone With the Syringe Slider

30 September 2024 at 05:00

We love the idea [Btoretsukuru] shared that uses a simple setup called the Syringe Slider to take smoothly-tracked video footage of small scenes like model trains in action. The post is in Japanese, but the video is very much “show, don’t tell” and it’s perfectly clear how it all works. The results look fantastic!

Suited to filming small subjects.

The device consists of a frame that forms a sort of enclosed track in which one’s mobile phone can slide horizontally. The phone butts up against the plunger of an ordinary syringe built into the frame. As the phone is pushed along, it depresses the plunger, which puts up enough resistance to turn the phone’s slide into a slow, even, and smooth glide. Want to fine-tune the resistance, and therefore the performance? Simply attach different diameter tips to the syringe.

The results speak for themselves, and it’s a fantastically clever bit of work. There are plenty of DIY slider designs (some of which get amazingly complex), but they are rarely small enough to get up close and personal with small subjects like miniature train scenery.

The smooth-shooting technique using the “syringe slider” has evolved!!
I made a frame to hold the syringe and smartphone, so it can now slide through all sorts of angles!
Be sure to check out the example footage starting at the 10-second mark of the video! #鉄道模型 #Nゲージ #Bトレ #ジオラマ pic.twitter.com/57uVTeHOxq

— B作 (@Btoretsukuru) September 24, 2024

See the Hands-on Details Behind Stunning Helmet Build

28 September 2024 at 08:00

[Zibartas] recently created wearable helmets from the game Starfield that look fantastic, and we’re happy to see he made a video showcasing the whole process of design, manufacture, and assembly. The video really highlights just how much good old-fashioned manual work like sanding goes into getting good results, even in an era where fancy modern equipment like 3D printing is available to just about anyone.

The secret to perfectly-tinted and glassy-smooth clear visors? Lots and lots of sanding and polishing.

The visor is one such example. The usual approach to making a custom helmet visor (like for Daft Punk helmet builds) is some kind of thermoforming. However, the Starfield helmet visors were poor candidates due to their shape and color. [Zibartas]’s solution was to 3D print the whole visor in custom-tinted resin, followed by lots and lots of sanding and polishing to obtain a clear and glassy-smooth end product.

A lot of patient sanding ended up being necessary for other reasons as well. Each helmet has a staggering number of individual parts, most of which are 3D printed with resin, and these parts didn’t always fit together perfectly well.

[Zibartas] also ended up spending a lot of time troubleshooting an issue that many of us might have had an easier time recognizing and addressing. The helmet cleverly integrates a faux-neon style RGB LED strip for internal lighting, but the LED strip would glitch out when the ventilation fan was turned on. The solution after a lot of troubleshooting ended up being simple decoupling capacitors, helping to isolate the microcontrollers built into the LED strip from the inductive load of the motors.

What [Zibartas] may lack in the finer points of electronics, he certainly makes up for in practical experience when it comes to wearable pieces like these. The helmets look solid but are in fact full of open spaces and hollow, porous surfaces. This makes them more challenging to design and assemble, but it pays off in spades when worn. The helmets not only look great, but allow a huge amount of airflow. This, along with the fans, makes them comfortable to wear and keeps the face shield from misting up from the wearer’s breathing. It’s a real work of art, so check out the build video, embedded just below.

Hands-on With New iPhone’s Electrically-Released Adhesive

23 September 2024 at 05:00

There’s a wild new feature making repair jobs easier (not to mention less messy) and iFixit covers it in their roundup of the iPhone 16’s repairability: electrically-released adhesive.

Here’s how it works. The adhesive looks like a curved strip with what appears to be a thin film of aluminum embedded into it. It’s applied much like any other adhesive strip: peel away the film, and press it between whatever two things it needs to stick together. Releasing it is where the magic happens. One applies a voltage (a 9 V battery will do the job) between the aluminum frame of the phone and a special tab on the battery. In about a minute the battery will come away with no force, and residue-free.

There is one catch: make sure the polarity is correct! The adhesive releases because applying voltage oxidizes aluminum a small amount, causing Al3+ to migrate into the adhesive and debond it. One wants the adhesive debonded from the phone’s frame (negative) and left on the battery. Flipping the polarity will debond the adhesive the wrong way around, leaving the adhesive on the phone instead.

Some months ago we shared that Apple was likely going to go in this direction, but it’s great to see some hands-on coverage and see it in action. This adhesive does seem to match the electrical debonding technology offered by a company called Tesa, and there’s a research paper describing it.

A video embedded below goes through the iPhone 16’s repairability innovations, but if you’d like to skip straight to the nifty new battery adhesive, that starts at the 2:36 mark.

Robotic Touch Using a DIY Squishy Magnetic Pad

22 September 2024 at 14:00

There are a number of ways to give a robotic actuator a sense of touch, but the AnySkin project aims to make it an overall more reliable and practical process. The idea is twofold: create modular grippy “skins” that can be slipped onto actuators, and separate the sensing electronics from the skins themselves. The whole system ends up being quite small, as shown here.

Cast skins can be installed onto bases as easily as slipping a phone case onto a phone.

The skins are cast in whatever shape is called for from silicone (an off-the-shelf formulation from Smooth-On) mixed with iron particles. This skin is then slipped onto a base that contains the electronics, but first it is magnetized with a pulse magnetizer. It’s the magnetic field that is at the heart of how the system works.

The base contains five MLX90393 triple-axis magnetometers, each capable of sensing tiny changes in magnetic fields. When the magnetized skin over the base is deformed — no matter how slightly — its magnetic field changes in distinct ways that paint an impressively detailed picture of exactly what is happening at the sensor. As a bonus, slippage of the skin against the sensor (a kind of shearing) can also be distinctly detected with a high degree of accuracy.
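Processing-wise, the core of it can be pictured as comparing live readings against a baseline captured with the skin at rest. In the sketch below, `read_magnetometers()` is a hypothetical stand-in for the real MLX90393 driver and the threshold value is made up.

```python
# Sketch of touch detection by comparing live magnetometer readings to a
# resting baseline. read_magnetometers() is a hypothetical stand-in for
# the real MLX90393 driver; the contact threshold is made up.
import math
import time

NUM_SENSORS = 5

def read_magnetometers():
    """Stub: return [(bx, by, bz), ...] for each of the five sensors."""
    return [(0.0, 0.0, 0.0)] * NUM_SENSORS

def capture_baseline(samples=100):
    """Average a burst of readings taken with nothing touching the skin."""
    totals = [[0.0, 0.0, 0.0] for _ in range(NUM_SENSORS)]
    for _ in range(samples):
        for i, reading in enumerate(read_magnetometers()):
            for axis in range(3):
                totals[i][axis] += reading[axis]
        time.sleep(0.001)
    return [[value / samples for value in sensor] for sensor in totals]

baseline = capture_baseline()
while True:
    # Field change per sensor relative to the resting skin.
    deltas = [math.dist(live, base)
              for live, base in zip(read_magnetometers(), baseline)]
    if max(deltas) > 5.0:                       # arbitrary contact threshold
        print("contact nearest sensor", deltas.index(max(deltas)))
    time.sleep(0.01)
```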

The result is a durable and swappable robotic skin that can be cast in whatever shape is needed, itself contains no electronics, and can even be changed without needing to re-calibrate everything. Cameras can also sense touch with a high degree of accuracy, but camera-based sensors put constraints on the size and shape of the end result.

AnySkin builds on another project called ReSkin and in fact uses the same sensor PCB (design files and bill of materials available here) but provides a streamlined process to create swappable skins, and has pre-made models for a variety of different robot arms.

An Espresso Machine for the DIY Crowd

15 September 2024 at 11:00

Want to build your own espresso machine, complete with open-source software to drive it? The diyPresso might be right up your alley.

diyPresso parts, laid out and ready for assembly.

It might not be the cheapest road to obtaining an espresso machine, but it’s probably the most economical way to turn high-quality components (including a custom-designed boiler) and sensors into a machine of a proven design.

Coffee and the machines that turn it into a delicious beverage are fertile ground for the type of folk who like to measure, modify, and optimize. We’ve seen DIY roasters, grinders, and even a manual lever espresso machine. There are also many efforts at modifying existing machines with improved software-driven controls, but this is the first time we’ve seen such a focused effort at bringing the DIY angle to a ground-up espresso machine specifically offered as a kit.

Curious to know more? Browse the assembly manual or take a peek at the software’s GitHub repository. You might feel some ideas start to flow for your next coffee hack.

Watch NASA’s Solar Sail Reflect Brightly in the Night Sky

15 September 2024 at 08:00

NASA’s ACS3 (Advanced Composite Solar Sail System) is currently fully deployed in low Earth orbit, and stargazers can spot it if they know what to look for. It’s actually one of the brightest things in the night sky. When the conditions are right, anyway.

ACS3’s sail is as thin as it is big.

What conditions are those? Orientation, mostly. ACS3 is currently tumbling across the sky while NASA takes measurements about how it acts and moves. Once that’s done, the spacecraft will be stabilized. For now, it means that visibility depends on ACS3’s orientation relative to someone on the ground. At its brightest, it appears as bright as Sirius, the brightest star in the night sky.

ACS3 is part of NASA’s analysis and testing of solar sail technology for use in future missions. Solar sails represent a way of using reflected photons (from sunlight, but also possibly from a giant laser) for propulsion.

This perhaps doesn’t have much in the way of raw thrust compared to traditional thrusters, but it offers low cost and high efficiency (not to mention considerably lower complexity and weight) compared to propellant-based solutions. That makes it very worth investigating. There’s even a project that aims to use solar sail technology to send a probe to Alpha Centauri within the next twenty years.

Want to try to spot ACS3 with your own eyes? There’s a NASA app that can alert you to sighting opportunities in your local time and region, and even guide you toward the right region of the sky to look. Check it out!

Shedding New Light on the Voynich Manuscript With Multispectral Imaging

10 September 2024 at 11:00

The Voynich Manuscript is a medieval codex written in an unknown alphabet and is replete with fantastic illustrations as unusual and bizarre as they are esoteric. It has captured interest for hundreds of years, and expert [Lisa Fagin Davis] shared interesting results from using multispectral imaging on some pages of this highly unusual document.

We should make it clear up front that the imaging results have not yielded a decryption key (nor a secret map or anything of the sort) but the detailed write-up and freely-downloadable imaging results are fascinating reading for anyone interested in either the manuscript itself, or just how exactly multispectral imaging is applied to rare documents. Modern imaging techniques might get leveraged into things like authenticating sealed packs of Pokémon cards, but that’s not all it can do.

Because multispectral imaging involves things outside our normal perception, the results require careful analysis rather than intuitive interpretation. Here is one example: multispectral imaging may yield faded text visible “between the lines” of other text and invite leaping to conclusions about hidden or erased content. But the faded text could be the result of show-through (content from the opposite side of the page is being picked up) or an offset (when a page picks up ink and pigment from its opposing page after being closed for centuries).

[Lisa] provides a highly detailed analysis of specific pages, and explains the kind of historical context and evidence this approach yields. Make some time to give it a read if you’re at all interested; we promise it’s worth your while.
