Using Integer Addition to Approximate Float Multiplication

By: Maya Posch
11 April 2025 at 02:00

Once the domain of esoteric scientific and business computing, floating point calculations are now practically everywhere. From video games to large language models and kin, it would seem that a processor without floating point capabilities is pretty much a brick at this point. Yet the truth is that integer-based approximations can be good enough to hit the required accuracy. One example is approximating floating point multiplication with integer addition, which [Malte Skarupke] recently had a poke at, based on the integer-addition-only LLM approach suggested by [Hongyin Luo] and [Wei Sun].

As for how this works, it does pretty much what it says on the tin: add the bit patterns of the two floating point inputs as integers, then adjust the exponent. This adjustment factor is what gets you close to the right answer, but as the article and its comments illustrate, there are plenty of issues and edge cases to worry about. These include under- and overflow, as well as certain special floating point inputs.
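As a taste of the trick, here is a minimal Python sketch, assuming 32-bit floats and positive, normal inputs; the function names are ours, and the small correction constant that sharpens the approximation is left out:

```python
import struct

def f32_bits(x: float) -> int:
    """Reinterpret a 32-bit float's bit pattern as an unsigned integer."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_f32(b: int) -> float:
    """Reinterpret an unsigned integer as a 32-bit float."""
    return struct.unpack('<f', struct.pack('<I', b & 0xFFFFFFFF))[0]

def approx_mul(a: float, b: float) -> float:
    """Approximate a * b for positive, normal floats.

    Adding the bit patterns adds the exponent fields (an exact multiply
    of the powers of two) and the mantissa fields (an approximation of
    multiplying the mantissas, since log2(1+m) is roughly m). Subtracting
    0x3F800000, the bit pattern of 1.0, removes the doubled exponent bias.
    A small extra correction constant can reduce the worst-case error;
    it is omitted here, as is any handling of under-/overflow, zero,
    infinities, and NaN.
    """
    return bits_f32(f32_bits(a) + f32_bits(b) - 0x3F800000)

print(approx_mul(2.0, 3.0))   # 6.0, exact for this pair
print(approx_mul(1.5, 1.5))   # 2.0 vs. the true 2.25: the worst case
                              # without a correction term
```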

Unlike scientific calculations, where even minor inaccuracies tend to propagate and cause much larger errors down the line, graphics and LLMs do not care that much about floating point precision, so the roughly 7.5% error of the integer approach is good enough. The question is whether it’s truly more efficient, as the paper suggests, rather than a fallback as seen with e.g. integer-only audio decoders for platforms without an FPU.

Since one of the nice things about FP-focused vector processors like GPUs and their derivatives (tensor, ‘neural’, etc.) is that they churn through a lot of data very efficiently, expecting (energy) improvements from shifting this work onto a CPU’s ALU seems rather optimistic.

How Hard is it to Write a Calculator App?

16 February 2025 at 21:00

How hard can it be to write a simple four-function calculator program? After all, computers are good at math, and making a calculator isn’t exactly blazing a new trail, right? But [Chad Nauseam] will tell you that it is harder than you probably think. His post starts with a screenshot of the iOS calculator app with a mildly complex equation. The app’s answer is wrong. Android’s calculator does better on the same problem.

What follows is part history lesson and part math lesson. As you might realize, the inherent problem with computers and math isn’t that they aren’t good at it. Floating point numbers have finite precision, which leads to problems, especially in operations that combine very large and very small numbers.
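A quick Python session makes the problem concrete:

```python
# Doubles carry roughly 15-17 significant decimal digits, so a small
# term can vanish entirely next to a large one.
print((1e16 + 1) - 1e16)    # 0.0: the 1 is absorbed by rounding
print(0.1 + 0.2 == 0.3)     # False: none of these decimal values is
                            # exactly representable in binary floating point
```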

Indeed, any floating point format can represent only finitely many numbers, leaving infinitely more that it cannot represent than those it can. But the same is true of a calculator. Think about how many digits you are willing to type in, and how many digits you want out. All you need is for each of those digits to be correct, and that’s a much smaller set of numbers.

Google developer [Hans-J. Boehm] tackled this problem by turning to recursive real arithmetic (RRA). Here, each math function is told how accurate its result needs to be, and a set of rules propagates the required accuracy down to its inputs.
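To get a feel for the idea, here is a toy precision-on-demand sketch in Python. It is our own illustration, not [Boehm]’s code: a real number is modeled as a function that, given a precision p, returns an integer n such that n/2^p lies within 2^-p of the true value:

```python
from fractions import Fraction

def const(q):
    """Lift an exact rational to a precision-on-demand real."""
    q = Fraction(q)
    return lambda p: round(q * 2**p)

def add(x, y):
    """Sum of two on-demand reals.

    Each operand is asked for two extra bits, so its error is at most
    2**-(p+2); those two errors plus the final rounding step keep the
    result within the 2**-p accuracy the caller asked for.
    """
    return lambda p: round(Fraction(x(p + 2) + y(p + 2), 4))

third = const(Fraction(1, 3))
print(third(10))               # 341 -> 341/1024, about 1/3
print(add(third, third)(10))   # 682 -> 682/1024, about 2/3
```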

But every solution brings a problem. With RRA, there is no way to tell very small numbers from zero. So computing “1-1” might give you “0.0000000000”, which is correct but upsetting because of all the excess precision. You could try to test whether “0.0000000000” equals “0” and simplify the output. But testing two numbers for equality in RRA is not guaranteed to terminate: you can tell two numbers are unequal by going to more and more precision until you find a difference, but if the numbers happen to be equal, this procedure never ends.
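Sticking with the toy model above (and repeating its `const` helper so this runs on its own), the sketch below shows why equality is the hard case: the loop can prove two numbers different, but proving them equal would mean never stopping.

```python
from fractions import Fraction

def const(q):
    """A precision-on-demand real, same helper as the sketch above."""
    q = Fraction(q)
    return lambda p: round(q * 2**p)

def provably_unequal(x, y, max_p=64):
    """Try to prove x != y by asking for ever more precision.

    Each approximation is within one ulp at precision p, so a gap of
    more than two ulps proves the underlying reals differ. Hitting
    max_p proves nothing: the numbers may be equal, or merely closer
    than 2**-max_p. Without the max_p cutoff, equal inputs would make
    this loop run forever.
    """
    for p in range(1, max_p + 1):
        if abs(x(p) - y(p)) > 2:
            return True       # definitely unequal
    return None               # unknown: the non-termination trap

print(provably_unequal(const(1), const('1.000001')))  # True (around p = 22)
print(provably_unequal(const(1), const(1)))           # None
```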

The key realization for [Boehm] and his collaborators was that RRA is only needed for inexact numbers. Most of the time, the Android calculator deals with exact rationals. Only when an operation produces a potentially irrational result does it switch to RRA for the approximation, which is fine because no finite string of digits could display such a result exactly anyway. The result is a system that doesn’t show excess precision, but correctly displays all of the digits that it does show.
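As a loose analogy in Python, with our own simplifications (the real calculator switches to RRA rather than a plain float, and covers far more than square roots):

```python
from fractions import Fraction
import math

def calc_sqrt(q: Fraction):
    """Keep the result exact when it is rational; approximate otherwise.

    Assumes q >= 0. If the integer square roots of numerator and
    denominator reproduce q exactly, the answer is rational and we keep
    it as an exact Fraction; otherwise it is irrational, and any finite
    display would be approximate anyway.
    """
    root = Fraction(math.isqrt(q.numerator), math.isqrt(q.denominator))
    if root * root == q:
        return root            # exact: no spurious trailing zeros to show
    return math.sqrt(q)        # irrational: fall back to an approximation

print(calc_sqrt(Fraction(9, 4)))   # 3/2, kept exact
print(calc_sqrt(Fraction(2, 1)))   # 1.4142135623730951, an approximation
```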

We really like [Chad’s] step-by-step explanation. If you would rather dive into the math, you can read [Boehm’s] paper on the topic. If you ever wonder how many computer systems handle odd functions like sine and cosine, read about CORDIC. Or, avoid all of this and stick to your slide rule.
