Vista Normal

Hay nuevos artículos disponibles. Pincha para refrescar la página.
AnteayerHackaday

Hackaday Links: June 16, 2024

16 Junio 2024 at 23:00
Hackaday Links Column Banner

Attention, slackers — if you do remote work for a financial institution, using a mouse jiggler might not be the best career move. That’s what a dozen people learned this week as they became former employees of Wells Fargo after allegedly being caught “simulating keyboard activity” while working remotely. Having now spent more than twice as many years working either hybrid or fully remote, we get it; sometimes, you’ve just got to step away from the keyboard for a bit. But we’ve never once felt the need to create the “impression of active work” during those absences. Perhaps that’s because we’ve never worked in a regulated environment like financial services.

For our part, we’re curious as to how the bank detected the use of a jiggler. The linked article mentions that regulators recently tightened rules that require employers to treat an employee’s home as a “non-branch location” subject to periodic inspection. More than enough reason to quit, in our opinion, but perhaps they sent someone snooping? More likely, the activity simulators were discovered by technical means. The article contains a helpful tip to avoid powering a jiggler from the computer’s USB, which implies detecting the device over the port. Our guess is that Wells tracks mouse and keyboard activity and compares it against a machine-learning model to look for signs of slacking.

Speaking of the intersection of soulless corporate giants and AI, what’s this world coming to when AI walks you right into an online scam? That’s what happened to a Canadian man recently when he tried to get help moving Facebook to his new phone. He searched for a customer service number for Facebook and found one listed, but thought it would be wise to verify the number. So he pulled up the “Meta AI”-powered search tool in Facebook Messenger and asked if the number was legit. “No problem,” came the reply, so he called the number and promptly got attacked by the scammers on the other end, who within minutes used his PayPal account to buy $500 worth of Apple gift cards. From the sound of it, the guy did everything he should have to protect himself, at least up to a point. But when a company’s chatbot system gives you bad information about their own customer support, things like this are going to happen.

Just a reminder that we’re deep into con season now. Open Sauce should be just about wrapped up by the time this gets published, and coming up the week after is Teardown 2024 in Portland. The schedule for that has been released, which includes a workshop on retrocomputing with the “Voja4” Supercon badge. A little further on into the summer and back on the East Coast will be HOPE XV, which still has some tickets left. The list of speakers for that one looks pretty good, as does the workshop roundup.

And finally, if you have some STL models in need of a little creative mutilation, try out this STL twister online tool. It’s by our friend [Andrew Sink], who has come up with a couple of other interesting 3D tools, like the Banana for Scale tool and the 3D Low-Poly Generator. The STL Twister does pretty much what it says and puts the screws to whatever STL model you drop on it. The MakerBot Gnome mascot that pops up by default is a particularly good model for screwifying. Enjoy!

Australian Library Uses Chatbot To Imitate Veteran With Predictable Results

Por: Lewin Day
27 Abril 2024 at 02:00

The educational sector is usually the first to decry large language models and AI, due to worries about cheating. The State Library of Queensland, however, has embraced the technology in controversial fashion. In the lead-up to Anzac Day, the primarily Australian war memorial holiday, the library released a chatbot intended to imitate a World War One veteran. It went as well as you’d expect.

The highlighted line was apparently added to the chatbot’s instructions later on to help shut down tomfoolery.

Twitter users immediately chimed in with dismay at the very concept. Others showed how easy it was to “jailbreak” the AI, convincing Charlie he was actually supposed to teach Python, imitate Frasier Crane, or explain laws like Elle from Legally Blonde. One person figured out how to get Charlie to spit out his initial instructions; these were patched later in the day to try and stop some of the shenanigans.

From those instructions, it’s clear that this was supposed to be educational, rather than some sort of macabre experiment. However, Charlie didn’t do a great job here, either. As with any Large Language Model, Charlie had no sense of objective truth. He routinely spat out incorrect facts regarding the war, and regularly contradicted himself.

Generally, any plan that includes the words “impersonate a veteran” is a foolhardy one at best. Throwing a machine-generated portrait and a largely uncontrolled AI into the mix didn’t help things. Regardless, the State Library has left the “Virtual Veterans” experience up at the time of writing.

The problem with AI is that it’s not a magic box that gets things right all the time. It never has been. As long as organizations keep putting AI to use in ways like this, the same story will keep playing out.

❌
❌