Android Studio Bot

By: EasyWithAI
5 October 2023 at 13:40
Android’s Studio Bot is an AI coding assistant integrated into Android Studio. It uses natural language processing to understand development questions and provide helpful code snippets, best practice recommendations, and relevant documentation. To get started, download the latest Android Studio preview, enable data sharing, and launch Studio Bot from the tool window. Studio Bot is […]


This Week in Security: Chat Control, Vulnerability Extortion, and Emoji Malware

21 June 2024 at 14:00

Way back in 2020, I actually read the proposed US legislation known as EARN IT, and with some controversy, concluded that much of the criticism of that bill was inaccurate. Well what’s old is new again, except this time it’s the European Union that’s wrestling with how to police online Child Sexual Abuse Material (CSAM). And from what I can tell of reading the actual legislation (pdf), this time it really is that bad.

The legislation lays out two primary goals, both of them problematic. The first is detection, or what some are calling “upload moderation”. The technical details are completely omitted here; the text simply states that services “… take reasonable measures to mitigate the risk of their services being misused for such abuse …” The implication is that providers would do some sort of automated scanning to detect illicit text or visuals, but exactly what constitutes “reasonable measures” is left unspecified.

The second goal is the detection order. It’s worth pointing out that interpersonal communication services are explicitly mentioned as required to implement these goals. From the bill:

Providers of hosting services and providers of interpersonal communications services that have received a detection order shall execute it by installing and operating technologies approved by the Commission to detect the dissemination of known or new child sexual abuse material or the solicitation of children…

This bill is careful not to prohibit end-to-end encryption, nor require that such encryption be backdoored. Instead, it requires that the apps themselves be backdoored, to spy on users before encryption happens. No wonder Meredith Whittaker has promised to pull the Signal app out of the EU if it becomes law. As this scanning is done prior to encryption, it’s technically not breaking end-to-end encryption.

You may wonder why that’s such a big deal. Why is it non-negotiable that the Signal app not look for CSAM in messages prior to encryption? For starters, it’s a violation of user trust and an intentional weakening of the security of the Signal system. But maybe most importantly, it puts a mechanism in place that will undoubtedly prove too tempting for future governments. If Signal can be forced into looking for CSAM in the EU, why not anti-government speech in China?

This story is ongoing, with the latest news that the EU has delayed the next step in attempting to ratify the proposal. It’s great news, but the future is still uncertain. For more background and analysis, see our conversation with the minds behind Matrix on this very topic.

Bounty or Extortion?

A bit of drama played out over Twitter this week. The Kraken cryptocurrency exchange had a problem where a deposit could be interrupted, and funds added to the Kraken account without actually transferring funds to back the deposit. A security research group, which turned out to be the CertiK company, discovered and disclosed the flaw via email.

Kraken Security Update:

On June 9 2024, we received a Bug Bounty program alert from a security researcher. No specifics were initially disclosed, but their email claimed to find an “extremely critical” bug that allowed them to artificially inflate their balance on our platform.

— Nick Percoco (@c7five) June 19, 2024

All seemed well, and the Kraken team managed to roll out a hotfix in an impressive 47 minutes. But things got weird when they cross-referenced the flaw to see whether anyone had exploited it. Three accounts had used it to duplicate money. The first use was for all of four dollars, which is consistent with legitimate research. But there were further instances from two other users, totaling close to $3 million in faked transfers, not to mention transfers of *real* money back out of those accounts. Kraken asked for the details and the money back.
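To make the bug class concrete, here is a minimal, hypothetical Python sketch; the names and structure are illustrative assumptions, not Kraken’s actual code. The flawed flow credits the account balance when a deposit is initiated, so an interrupted transfer leaves unbacked funds behind, while the fixed flow only credits once the transfer has actually settled.

```python
from dataclasses import dataclass


@dataclass
class Account:
    balance: float = 0.0


class Transfer:
    """Stand-in for an on-chain deposit that may be interrupted mid-flight."""

    def __init__(self, settles: bool):
        self._settles = settles

    def settled(self) -> bool:
        # In reality this would poll the chain; here it just reports the outcome.
        return self._settles


def credit_flawed(account: Account, amount: float, transfer: Transfer) -> None:
    # The flaw: credit the exchange balance as soon as the deposit is initiated,
    # and never roll it back if the backing transfer is interrupted.
    account.balance += amount
    transfer.settled()  # result ignored


def credit_fixed(account: Account, amount: float, transfer: Transfer) -> None:
    # Correct ordering: only credit once the backing funds are confirmed.
    if transfer.settled():
        account.balance += amount


if __name__ == "__main__":
    victim = Account()
    credit_flawed(victim, 1_000_000, Transfer(settles=False))
    print(victim.balance)  # 1000000.0 -- balance inflated with no real deposit
```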

According to the Kraken account, the researchers refused, and instead wanted to arrange a call with their “business development team”. The implication is that the transferred money was serving as a bargaining chip to request a higher bug bounty payout. According to Kraken, that’s extortion.

There is a second side to this story, of course. CertiK has a response on their x.com account, where they claim they always intended to return the transferred money and were simply testing Kraken’s risk control systems. There are things about this story that seem odd. At the very least, it’s unwise to transfer stolen currency in this way. At worst, this was an attempt at real theft that was thwarted. The end result is that the funds were eventually returned.

There are two fundamental problems with vuln disclosure/bounty:
#1 companies think security researchers are trying to extort them when they are not
#2 security researchers trying to extort companies https://t.co/I7vnk3oXi5

— Robert Graham 𝕏 (@ErrataRob) June 20, 2024

Report Bug, Get Nastygram

For the other side of the coin, [Lemon] found a trivial flaw in a traffic controller system. After turning it in, he was rewarded with an odd letter that was part “thank you” and part warning that his work “may have constituted a violation of the Computer Fraud and Abuse Act”. This is not how you respond to responsible disclosure.

I received my first cease and desist for responsibly disclosing a critical vulnerability that gives a remote unauthenticated attacker full access to modify a traffic controller and change stoplights. Does this make me a Security Researcher now? pic.twitter.com/ftW35DxqeF

— Lemon (@Lemonitup) June 18, 2024

Emoji Malware

We don’t talk much about malware in South Asia, but this is an interesting one. DISGOMOJI is malware attributed to a Pakistani group, mainly targeting government Linux machines in India. What really makes it notable is that the command and control system uses emoji in Discord channels. A camera emoji instructs the malware to take a screenshot, a fox emoji triggers a hoovering-up of the Firefox profiles, and so on. Cute!
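As a rough illustration of the emoji-as-command idea, the control channel boils down to a lookup table from emoji to actions. This is a hypothetical sketch, not DISGOMOJI’s actual code, and only the camera and fox mappings come from the write-up above.

```python
# Hypothetical emoji-to-command table; the camera and fox mappings are the ones
# described above, the dispatch logic is made up for illustration.
EMOJI_COMMANDS = {
    "\N{CAMERA}": "take_screenshot",
    "\N{FOX FACE}": "collect_firefox_profiles",
}


def dispatch(emoji: str) -> str:
    """Translate an incoming emoji into an internal command name."""
    return EMOJI_COMMANDS.get(emoji, "ignore")


if __name__ == "__main__":
    print(dispatch("\N{CAMERA}"))   # take_screenshot
    print(dispatch("\N{SKULL}"))    # ignore (unmapped in this sketch)
```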

Using Roundcube to break PHP

This is a slow-moving vulnerability, given that the core issue is a 24-year-old buffer overflow in iconv() in glibc. [Charles Fol] found this issue, which can pop up when using iconv() to convert to the ISO-2022-CN-EXT character set, and has been working on how to actually trigger the bug in a useful way. Enter PHP. OK, that’s not entirely accurate, since the crash was originally found in PHP. It’s more like giving up on finding something else and going back to PHP.

The core vulnerability can only overwrite one, two, or three bytes past the end of a buffer. To make use of that, the PHP bucket structure can be used. This is a growable doubly-linked list that is used for data handling. Chunked HTTP messages can be used to build a multi-bucket structure, and triggering the iconv() flaw overwrites one of the pointers in that structure. Bumping that pointer by a few bytes lands it in attacker-controlled data, which can masquerade as a fake bucket structure, and continuing the dechunking procedure gives us an arbitrary memory write. At that point, a function pointer just has to be pointed at system() for code execution.

That’s a great theoretical attack chain, but actually getting there in the wild is less straightforward. One notable web application has been identified as vulnerable: Roundcube. Upon sending an email, the user can specify the addresses as well as the character set parameter. Roundcube makes an iconv() call, triggering the core vulnerability. And thus an authenticated user has a path to remote code execution.

Bits and Bytes

Speaking of email, do you know which characters are allowed in an email address? Did you know that the local part of an email address can be a quoted string, with many special characters allowed? Does every mail server and email security device realize that quirk? Apparently not, at least in the case of MailCleaner, which had a set of flaws allowing such an email to lead to full appliance takeover. Keep an eye out for other devices and applications falling to this same quirk.
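A quick way to see the quirk is to compare quoted local parts, which are valid per RFC 5321, against a deliberately naive address filter. The filter below is an assumption for illustration, not MailCleaner’s actual logic; the point is that code written without quoted local parts in mind tends to mishandle them.

```python
import re

# A deliberately naive address filter of the kind many validators use.
NAIVE_ADDRESS = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+$")

addresses = [
    "alice@example.com",                # ordinary address
    '"alice bob"@example.com',          # quoted local part containing a space
    '"tricky;$(payload)"@example.com',  # quoted local part with shell metacharacters
]

for addr in addresses:
    verdict = "accepted" if NAIVE_ADDRESS.match(addr) else "rejected"
    print(f"{addr!r}: {verdict}")
```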

Nextcloud has a pair of vulnerabilities to pay attention to. The first is an issue where a user with read and share permissions on an object could reshare it with additional permissions. The second is more troubling, giving an attacker a potential method to bypass a two-factor authentication requirement. Fixes are available.
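The resharing issue is the classic “grant what was requested, not what the sharer holds” bug class. Here is a minimal, hypothetical sketch of the flawed check versus the correct one; this is not Nextcloud’s actual code, just an illustration with permissions modeled as sets of strings.

```python
def reshare_flawed(sharer_perms: set[str], requested: set[str]) -> set[str]:
    # The flaw: requested permissions are granted without checking the sharer.
    return set(requested)


def reshare_fixed(sharer_perms: set[str], requested: set[str]) -> set[str]:
    # Correct behavior: never grant more than the sharer already holds.
    return requested & sharer_perms


if __name__ == "__main__":
    sharer = {"read", "share"}
    asked = {"read", "share", "write"}
    print(sorted(reshare_flawed(sharer, asked)))  # ['read', 'share', 'write'] -- escalation
    print(sorted(reshare_fixed(sharer, asked)))   # ['read', 'share']
```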

Pointed out by [Herr Brain] on Hackaday’s Discord, we have a bit of bad news about the Arm Memory Tagging Extension (MTE) security feature. Namely, speculative execution can reveal the needed MTE tags about 95% of the time. While this is significant, there is a bit of a chicken-and-egg problem for attackers, as MTE is primarily useful for preventing arbitrary code execution in the first place, which is the most straightforward way to mount a speculative attack to start with.

And finally, over at Google Project Zero, [Seth Jenkins] has a report on a trio of Android devices and the vulnerabilities found in their respective kernel drivers. In each case, the vulnerable drivers can be accessed from unprivileged applications. [Seth]’s opinion is that as the Android core code gets tighter and more secure, these third-party drivers of potentially questionable code quality will quickly become the targets of choice for attack.

Lindroid Promises True Linux on Android

19 June 2024 at 02:00

Since Android uses Linux, you’d think it would be easier to run Linux apps on your Android phone or tablet. There are some solutions out there, but the experience is usually less than stellar. A new player, Lindroid, claims to provide real Linux distributions with hardware-accelerated Wayland on phones. How capable is it? The suggested window manager is KDE’s KWin, software that is fairly difficult to run on anything but a full-blown system with dbus, hardware acceleration, and similar features.

There are, however, a few problems. First, you need a rooted phone, which isn’t totally surprising. Second, there are no clear instructions yet about how to install the software. The bulk of the information available is on an X thread. You can go about 4 hours into the very long video below to see a slide presentation about Lindroid.

While it appears Linux is running inside a container, it looks like they’ve opened up device access, which allows a full Linux experience even though Linux is technically, in this case, an Android app.

We are interested in seeing how this works, and when the instructions show up, we might root an old phone to try it out. Of course, there are other methods. Termux seems to be the most popular, but running GUI programs on it isn’t always the best experience. Not that we haven’t done it.

Google Removes RISC-V Support From Android

By: Maya Posch
4 May 2024 at 02:00

Last year the introduction of RISC-V support to the Android-specific, Linux-derived Android Common Kernel (ACK) made it seem that before long Android devices might be using SoCs based around the RISC-V ISA, but those hopes now appear to be dashed. As reported by Android Authority, a series of recently accepted patches has stripped this RISC-V support from the ACK again. While this doesn’t mean that Android cannot be made to work on RISC-V, any company interested would have to do all of the heavy lifting themselves. That might include Qualcomm, with its recently announced RISC-V-based smartwatch Snapdragon SoC.

No reason was provided by Google for this change, and the official statement from Google to Android Authority says that Google is not ready to provide a single supported Android Generic Kernel Image (GKI), but that ‘Android will continue to support RISC-V’. This change, however, removes RISC-V kernel support from the ACK, and since Google only certifies Android builds which ship with a GKI featuring an ACK, this effectively means that RISC-V is not supported at this point, and likely won’t be for the foreseeable future.

As discussed on Hacker News, a potential reason might be the very fragmented nature of the RISC-V ISA, which makes a standard RISC-V kernel very complicated if you want to support more than a (barebones) profile. This is also supported by a RISC-V mailing list thread, where ‘expensive maintenance’ is mentioned as a reason why Google doesn’t want to support RISC-V.
