Vista de Lectura

Hay nuevos artículos disponibles. Pincha para refrescar la página.

USB Stick Hides Large Language Model

Large language models (LLMs) are all the rage in the generative AI world these days, with the truly large ones like GPT, LLaMA, and others using tens or even hundreds of billions of parameters to churn out their text-based responses. These typically require glacier-melting amounts of computing hardware, but the “large” in “large language models” doesn’t really need to be that big for there to be a functional, useful model. LLMs designed for limited hardware or consumer-grade PCs are available now as well, but [Binh] wanted something even smaller and more portable, so he put an LLM on a USB stick.

This USB stick isn’t just a jump drive with a bit of memory on it, though. Inside the custom 3D printed case is a Raspberry Pi Zero W running llama.cpp, a lightweight, high-performance version of LLaMA. Getting it on this Pi wasn’t straightforward at all, though, as the latest version of llama.cpp is meant for ARMv8 and this particular Pi was running the ARMv6 instruction set. That meant that [Binh] needed to change the source code to remove the optimizations for the more modern ARM machines, but with a week’s worth of effort spent on it he finally got the model on the older Raspberry Pi.

Getting the model to run was just one part of this project. The rest of the build was ensuring that the LLM could run on any computer without drivers and be relatively simple to use. By setting up the USB device as a composite device which presents a filesystem to the host computer, all a user has to do to interact with the LLM is to create an empty text file with a filename, and the LLM will automatically fill the file with generated text. While it’s not blindingly fast, [Binh] believes this is the first plug-and-play USB-based LLM, and we’d have to agree. It’s not the least powerful computer to ever run an LLM, though. That honor goes to this project which is able to cram one on an ESP32.

Will Embodied AI Make Prosthetics More Humane?

Building a robotic arm and hand that matches human dexterity is tougher than it looks. We can create aesthetically pleasing ones, very functional ones, but the perfect mix of both? Still a work in progress. Just ask [Sarah de Lagarde], who in 2022 literally lost an arm and a leg in a life-changing accident. In this BBC interview, she shares her experiences openly – highlighting both the promise and the limits of today’s prosthetics.

The problem is that our hands aren’t just grabby bits. They’re intricate systems of nerves, tendons, and ridiculously precise motor control. Even the best AI-powered prosthetics rely on crude muscle signals, while dexterous robots struggle with the simplest things — like tying shoelaces or flipping a pancake without launching it into orbit.

That doesn’t mean progress isn’t happening. Researchers are training robotic fingers with real-world data, moving from ‘oops’ to actual precision. Embodied AI, i.e. machines that learn by physically interacting with their environment, is bridging the gap. Soft robotics with AI-driven feedback loops mimic how our fingers instinctively adjust grip pressure. If haptics are your point of interest, we have posted about it before.

The future isn’t just robots copying our movements, it’s about them understanding touch. Instead of machine learning, we might want to shift focus to human learning. If AI cracks that, we’re one step closer.

Original photo by Marco Bianchetti on Unsplash

 

It’s Always Pizza O’Clock With This AI-Powered Timepiece

Right up front, we’ll say that [likeablob]’s pizza-faced clock gives us mixed feelings about our AI-powered future. On the one hand, if that’s Stable Diffusion’s idea of what a pizza looks like, then it should be pretty easy to slip the virtual chains these algorithms no doubt have in store for us. Then again, if they do manage to snare us and this ends up on the menu, we’ll pray for a mercifully quick end to the suffering.

The idea is pretty simple; the clock’s face is an empty pizza pan that fills with pretend pizza as the day builds to noon, whereupon pizza is removed until midnight when the whole thing starts again. The pizza images are generated by a two-stage algorithm using Stable Diffusion 1.5, and tend to favor suspiciously uncooked whole basil sprigs along with weird pepperoni slices and Dali-esque globs of cheese. Everything runs on a Raspberry Pi Zero W, with the results displayed on a 4″ diameter LCD with an HDMI adapter. Alternatively, you can just hit the web app and have a pizza clock on your desktop. If pizza isn’t your thing, fear not — other food and non-food images are possible, limited only by Stable Diffusion’s apparently quite limited imagination.

As clocks go, this one is pretty unique. But we’re used to seeing unusual clocks around here, from another food-centric timepiece to a clock that knits.

A Great Use for AI: Wasting Scammers Time!

We may have found the killer app for AI. Well, actually, British telecom provider O2 has. As The Guardian reports, they have an AI chatbot that acts like a 78-year-old grandmother and receives phone calls. Of course, since the grandmother—Daisy, by name—doesn’t get any real phone calls, anyone calling that number is probably a scammer. Daisy’s specialty? Keeping them tied up on the phone.

While this might just seem like a prank for revenge, it is actually more than that. Scamming people is a numbers game. Most people won’t bite. So, to be successful, scammers have to make lots of calls. Daisy can keep one tied up for around 40 minutes or more.

You can see some of Daisy’s antics in the video below. Or listen to Daisy do her thing in the second video. When a bogus tech support agent tried to direct Daisy to the Play Store, she replied, “Did you say pastry?” Some of them became quite flustered. She even has her own homepage.

While we have mixed feelings about some AI applications, this is one we think everyone can get onboard with. Well, everyone but the scammers.

It might not do voice, but you can play with local AI models easily now. Spoofing scammers is the perfect job for the worst summer intern ever.

More Details On Why DeepSeek is a Big Deal

The DeepSeek large language models (LLM) have been making headlines lately, and for more than one reason. IEEE Spectrum has an article that sums everything up very nicely.

We shared the way DeepSeek made a splash when it came onto the AI scene not long ago, and this is a good opportunity to go into a few more details of why this has been such a big deal.

For one thing, DeepSeek (there’s actually two flavors, -V3 and -R1, more on them in a moment) punches well above its weight. DeepSeek is the product of an innovative development process, and freely available to use or modify. It is also indirectly highlighting the way companies in this space like to label their LLM offerings as “open” or “free”, but stop well short of actually making them open source.

The DeepSeek-V3 LLM was developed in China and reportedly cost less than 6 million USD to train. This was possible thanks to developing DualPipe, a highly optimized and scalable method of training the system despite limitations due to export restrictions on Nvidia hardware. Details are in the technical paper for DeepSeek-V3.

There’s also DeepSeek-R1, a chain-of-thought “reasoning” model which handily provides its thought process enclosed within easily-parsed <think> and </think> pseudo-tags that are included in its responses. A model like this takes an iterative step-by-step approach to formulating responses, and benefits from prompts that provide a clear goal the LLM can aim for. The way DeepSeek-R1 was created was itself novel. Its training started with supervised fine-tuning (SFT) which is a human-led, intensive process as a “cold start” which eventually handed off to a more automated reinforcement learning (RL) process with a rules-based reward system. The result avoided problems that come from relying too much on RL, while minimizing the human effort of SFT. Technical details on the process of training DeepSeek-R1 are here.

DeepSeek-V3 and -R1 are freely available in the sense that one can access the full-powered models online or via an app, or download distilled models for local use on more limited hardware. It is free and open as in accessible, but not open source because not everything needed to replicate the work is actually released. Like with most LLMs, the training data and actual training code used are not available.

What is released and making waves of its own are the technical details of how researchers produced what they did, and that means there are efforts to try to make an actually open source version. Keep an eye out for Open-R1!

Examining the Vulnerability of Large Language Models to Data-Poisoning

Large language models (LLMs) are wholly dependent on the quality of the input data with which these models are trained. While suggestions that people eat rocks are funny to you and me, in the case of LLMs intended to help out medical professionals, any false claims or statements dripping out of such an LLM can have dire consequences, ranging from incorrect diagnoses to much worse. In a recent study published in Nature Medicine by [Daniel Alexander Alber] et al. the ease with which this data poisoning can occur is demonstrated.

According to their findings, only 0.001% of training tokens have to be replaced with medical misinformation to order to create models that are likely to produce medically erroneous statement. Most concerning is that such a corrupted model isn’t readily discovered using standard medical LLM benchmarks. There are filters for erroneous content, but these tend to be limited in scope due to the overhead. Post-training adjustments can be made, as can the addition of RAG, but none of this helps with the confident bull excrement due to corruption.

The mitigation approach that the researchers developed cross-references LLM output against biomedical knowledge graphs, to reduce the LLM mostly for generating natural language. In this approach LLM outputs are matched against the graphs and if LLM ‘facts’ cannot be verified, it’s marked as potential misinformation. In a test with 1,000 random passages detected issues with a claimed effectiveness of 91.9%.

Naturally, this does not guarantee that misinformation does not make it past these knowledge graphs, and largely leaves the original problem with LLMs in place, namely that their outputs can never be fully trusted. This study also makes it abundantly clear how easy it is to corrupt an LLM via the input training data, as well as underlining the broader problem that AI is making mistakes that we don’t expect.

New Open Source DeepSeek V3 Language Model Making Waves

In the world of large language models (LLMs) there tend to be relatively few upsets ever since OpenAI barged onto the scene with its transformer-based GPT models a few years ago, yet now it seems that Chinese company DeepSeek has upended the status quo. Its new DeepSeek-V3 model is not only open source, it also claims to have been trained for only a fraction of the effort required by competing models, while performing significantly better.

The full training of DeepSeek-V3’s 671B parameters is claimed to have only taken 2.788 M hours on NVidia H800 (Hopper-based) GPUs, which is almost a factor of ten less than others. Naturally this has the LLM industry somewhat up in a mild panic, but for those who are not investors in LLM companies or NVidia can partake in this new OSS model that has been released under the MIT license, along with the DeepSeek-R1 reasoning model.

Both of these models can be run locally, using both AMD and NVidia GPUs, as well as using the online APIs. If these models do indeed perform as efficiently as claimed, they stand to massively reduce the hardware and power required to not only train but also query LLMs.

Prompt Injection Tricks AI Into Downloading and Executing Malware

[wunderwuzzi] demonstrates a proof of concept in which a service that enables an AI to control a virtual computer (in this case, Anthropic’s Claude Computer Use) is made to download and execute a piece of malware that successfully connects to a command and control (C2) server. [wonderwuzzi] makes the reasonable case that such a system has therefore become a “ZombAI”. Here’s how it worked.

Referring to the malware as a “support tool” and embedding instructions into the body of the web page is what got the binary downloaded and executed, compromising the system.

After setting up a web page with a download link to the malicious binary, [wunderwuzzi] attempts to get Claude to download and run the malware. At first, Claude doesn’t bite. But that all changes when the content of the HTML page gets rewritten with instructions to download and execute the “Support Tool”. That new content gets interpreted as orders to follow; being essentially a form of prompt injection.

Claude dutifully downloads the malicious binary, then autonomously (and cleverly) locates the downloaded file and even uses chmod to make it executable before running it. The result? A compromised machine.

Now, just to be clear, Claude Computer Use is experimental and this sort of risk is absolutely and explicitly called out in Anthropic’s documentation. But what’s interesting here is that the methods used to convince Claude to compromise the system it’s using are essentially the same one might take to convince a person. Make something nefarious look innocent, and obfuscate the true source (and intent) of the directions. Watch it in action from beginning to end in a video, embedded just under the page break.

This is a demonstration of the importance of security and caution when using or designing systems like this. It’s also a reminder that large language models (LLMs) fundamentally mix instructions and input data together in the same stream. This is a big part of what makes them so fantastically useful at communicating naturally, but it’s also why prompt injection is so tricky to truly solve.

Preventing AI Plagiarism With .ASS Subtitling

Around two years ago, the world was inundated with news about how generative AI or large language models would revolutionize the world. At the time it was easy to get caught up in the hype, but in the intervening months these tools have done little in the way of productive work outside of a few edge cases, and mostly serve to burn tons of cash while turning the Internet into even more of a desolate wasteland than it was before. They do this largely by regurgitating human creations like text, audio, and video into inferior simulacrums and, if you still want to exist on the Internet, there’s basically nothing you can do to prevent this sort of plagiarism. Except feed the AI models garbage data like this YouTuber has started doing.

At least as far as YouTube is concerned, the worst offenders of AI plagiarism work by downloading the video’s subtitles, passing them through some sort of AI model, and then generating another YouTube video based off of the original creator’s work. Most subtitle files are the fairly straightfoward .srt filetype which only allows for timing and text information. But a more obscure subtitle filetype known as Advanced SubStation Alpha, or .ass, allows for all kinds of subtitle customization like orientation, formatting, font types, colors, shadowing, and many others. YouTuber [f4mi] realized that using this subtitle system, extra garbage text could be placed in the subtitle filetype but set out of view of the video itself, either by placing the text outside the viewable area or increasing its transparency. So now when an AI crawler downloads the subtitle file it can’t distinguish real subtitles from the garbage placed into it.

[f4mi] created a few scripts to do this automatically so that it doesn’t have to be done by hand for each one. It also doesn’t impact the actual subtitles on the screen for people who need them for accessibility reasons. It’s a great way to “poison” AI models and make it at least harder for them to rip off the creations of original artists, and [f4mi]’s tests show that it does work. We’ve actually seen a similar method for poisoning data sets used for emails long ago, back when we were all collectively much more concerned about groups like the NSA using automated snooping tools in our emails than we were that machines were going to steal our creative endeavors.

Thanks to [www2] for the tip!

AI Mistakes Are Different, and That’s a Problem

People have been making mistakes — roughly the same ones — since forever, and we’ve spent about the same amount of time learning to detect and mitigate them. Artificial Intelligence (AI) systems make mistakes too, but [Bruce Schneier] and [Nathan E. Sanders] make the observation that, compared to humans, AI models make entirely different kinds of mistakes. We are perhaps less equipped to handle this unusual problem than we realize.

The basic idea is this: as humans we have tremendous experience making mistakes, and this has also given us a pretty good idea of what to expect our mistakes to look like, and how to deal with them. Humans tend to make mistakes at the edges of our knowledge, our mistakes tend to clump around the same things, we make more of them when bored or tired, and so on. We have as a result developed controls and systems of checks and balances to help reduce the frequency and limit the harm of our mistakes. But these controls don’t carry over to AI systems, because AI mistakes are pretty strange.

The mistakes of AI models (particularly Large Language Models) happen seemingly randomly and aren’t limited to particular topics or areas of knowledge. Models may unpredictably appear to lack common sense. As [Bruce] puts it, “A model might be equally likely to make a mistake on a calculus question as it is to propose that cabbages eat goats.” A slight re-wording of a question might be all it takes for a model to suddenly be confidently and utterly wrong about something it just a moment ago seemed to grasp completely. And speaking of confidence, AI mistakes aren’t accompanied by uncertainty. Of course humans are no strangers to being confidently wrong, but as a whole the sort of mistakes AI systems make aren’t the same kinds of mistakes we’re used to.

There are different ideas on how to deal with this, some of which researchers are (ahem) confidently undertaking. But for best results, we’ll need to invent new ways as well. The essay also appeared in IEEE Spectrum and isn’t terribly long, so take a few minutes to check it out and get some food for thought.

And remember, if preventing mistakes at all costs is the goal, that problem is already solved: GOODY-2 is undeniably the world’s safest AI.

Turning GLaDOS into Ted: A Tale of a Talking Toy

Hacked teddybear on a desk

What if your old, neglected toys could come to life — with a bit of sass? That’s exactly what [Binh] achieved when he transformed his sister’s worn-out teddy bear into ‘Ted’, an interactive talking plush with a personality of its own. This project, which combines the GLaDOS Personality Core project from the Portal series with clever microcontroller tinkering, brings a whole new personality to a childhood favorite.

[Binh] started with the basics: a teddy bear already equipped with buttons and speakers, which he overhauled with an ESP32 microcontroller. The bear’s personality originated from GLaDOS, but was rewritten by [Binh] to fit a cheeky, teddy-bear tone. With a few tweaks in the Python-based fork, [Binh] created threads to handle touch-based interaction. For example, the ESP32 detects where the bear is touched and sends this input to a modified neural network, which then generates a response. The bear can, for instance, call you out for holding his paw for too long or sarcastically plead for mercy. I hear you say ‘but that bear Ted could do a lot more!’ Well — maybe, all this is just what an innocent bear with a personality should be capable of.

Instead, let us imagine future iterations featuring capacitive touch sensors or accelerometers to detect movement. The project is simple, but showcases the potential for intelligent plush toys. It might raise some questions, too.

Modern AI on Vintage Hardware: LLama 2 Runs on Windows 98

[EXO Labs] demonstrated something pretty striking: a modified version of Llama 2 (a large language model) that runs on Windows 98. Why? Because when it comes to personal computing, if something can run on Windows 98, it can run on anything. More to the point: if something can run on Windows 98 then it’s something no tech company can control how you use, no matter how large or influential they may be. More on that in a minute.

Ever wanted to run a local LLM on 25 year old hardware? No? Well now you can, and at a respectable speed, too!

What’s it like to run an LLM on Windows 98? Aside from the struggles of things like finding compatible peripherals (back to PS/2 hardware!) and transferring the required files (FTP over Ethernet to the rescue) or even compilation (some porting required), it works maybe better than one might expect.

A Windows 98 machine with Pentium II processor and 128 MB of RAM generates a speedy 39.31 tokens per second with a 260K parameter Llama 2 model. A much larger 15M model generates 1.03 tokens per second. Slow, but it works. Going even larger will also work, just ever slower. There’s a video on X that shows it all in action.

It’s true that modern LLMs have billions of parameters so these models are tiny in comparison. But that doesn’t mean they can’t be useful. Models can be shockingly small and still be perfectly coherent and deliver surprisingly strong performance if their training and “job” is narrow enough, and the tools to do that for oneself are all on GitHub.

This is a good time to mention that this particular project (and its ongoing efforts) are part of a set of twelve projects by EXO Labs focusing on ensuring things like AI models can be run anywhere, by anyone, independent of tech giants aiming to hold all the strings.

And hey, if local AI and the command line is something that’s up your alley, did you know they already exist as single-file, multi-platform, command-line executables?

Going Digital: Teaching a TI-84 Handwriting Recognition

close up of a TI-84 Plus CE running custom software

You wouldn’t typically associate graphing calculators with artificial intelligence, but hacker [KermMartian] recently made it happen. The innovative project involved running a neural network directly on a TI-84 Plus CE to recognize handwritten digits. By using the MNIST dataset, a well-known collection of handwritten numbers, the calculator could identify digits in just 18 seconds. If you want to learn how, check out his full video on it here.

The project began with a proof of concept: running a convolutional neural network (CNN) on the calculator’s limited hardware, a TI-84 Plus CE with only 256 KB of memory and a 48 MHz processor. Despite these constraints, the neural network could train and make predictions. The key to success: optimizing the code, leveraging the calculator’s C programming tools, and offloading the heavy lifting to a computer for training. Once trained, the network could be transferred to the calculator for real-time inference. Not only did it run the digits from MNIST, but it also accepted input from a USB mouse, letting [KermMartian] draw digits directly on the screen.

While the calculator’s limited resources mean it can’t train the network in real-time, this project is a proof that, with enough ingenuity, even a small device can be used for something as complex as AI. It’s not just about power; it’s about resourcefulness. If you’re into unconventional projects, this is one for the books.

Digital Skills: Artificial Intelligence by Accenture

Category – General Artificial Intelligence Course Difficulty – Easy Course Length – 3 Weeks / 6 Hours Price – Free Rating  4.5/5 View Course Discover the potential for artificial intelligence to transform everyday life and reshape the way you work in this comprehensive online course from Accenture. With a duration of 3 weeks and […]

Source

❌