Floating Point Math


Your language isn't broken, it's doing floating point math. Computers can only natively store integers, so they need some way of representing decimal numbers. This representation is not perfectly accurate. This is why, more often than not, 0.1 + 0.2 != 0.3.

Why does this happen?

It's actually rather interesting. A base-10 system (like ours) can cleanly express only those fractions whose denominators' prime factors are also prime factors of the base. The prime factors of 10 are 2 and 5, so 1/2, 1/4, 1/5, 1/8, and 1/10 can all be expressed cleanly because their denominators use only 2s and 5s. In contrast, 1/3, 1/6, 1/7, and 1/9 are all repeating decimals because their denominators include 3 or 7 as a prime factor.

In binary (or base-2), the only prime factor is 2, so you can only cleanly express fractions whose denominator has only 2 as a prime factor. In binary, 1/2, 1/4, and 1/8 would all be expressed cleanly, while 1/5 and 1/10 would be repeating fractions. So 0.1 and 0.2 (1/10 and 1/5), while clean decimals in a base-10 system, are repeating fractions in the base-2 system the computer uses. When you perform math on these repeating fractions, you end up with leftovers that carry over when you convert the computer's base-2 (binary) number into a more human-readable base-10 representation.
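To make the rounding concrete, here is a small Python 3 sketch (assuming the usual 64-bit IEEE 754 doubles) that prints the exact value actually stored for 0.1 and for 0.1 + 0.2:

from decimal import Decimal
from fractions import Fraction

# The double nearest to 0.1 is not exactly 1/10; converting it to Decimal
# reveals the exact value that gets stored.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The stored value is really a binary fraction: an integer over a power of two.
print(Fraction(0.1))
# 3602879701896397/36028797018963968 (the denominator is 2**55)

# Adding two such approximations and rounding the sum lands just above 0.3.
print(Decimal(0.1 + 0.2))
# 0.3000000000000000444089209850062616169452667236328125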

Below are some examples of sending .1 + .2 to standard output in a variety of languages.


🔗 PowerShell

PowerShell uses the double type by default, but because it runs on .NET it has access to the same types as C#. Thanks to that, the Decimal type can be used, either by giving the type name [decimal] explicitly or via the d suffix.

More about that in the C# section.

🔗 ABAP

WRITE / CONV f( '.1' + '.2' ).
WRITE / CONV decfloat16( '.1' + '.2' ).
0.30000000000000004
0.3

🔗 APL

0.1 + 0.2
⎕PP ← 17
0.1 + 0.2
0.3 = 0.1 + 0.2
⎕CT ← 0
0.3 = 0.1 + 0.2
⎕FR ← 1287
⎕PP ← 34
0.1 + 0.2
⎕FR ← 1287
⎕DCT ← 0
0.3 = 0.1 + 0.2
0.3
0.30000000000000004
1
0
0.3
1

APL has a default printing precision of 10 significant digits. Setting ⎕PP to 17 shows the error; however, 0.3 = 0.1 + 0.2 is still true (1) because there is a default comparison tolerance of about 10⁻¹⁴. Setting ⎕CT to 0 exposes the inequality. Dyalog APL also supports 128-bit decimal numbers (activated by setting the float representation, ⎕FR, to 1287, i.e. 128-bit decimal), where the equation holds true even with the decimal comparison tolerance (⎕DCT) set to zero. Try it online! Multi-precision floats, unlimited precision rationals, and ball arithmetic are available in NARS2000.
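The tolerance-based comparison performed by APL's ⎕CT is the usual workaround in other languages as well. A minimal Python sketch of the same idea, using the standard math.isclose (the tolerance values shown are only illustrative):

import math

# Exact comparison fails because of the representation error.
print(0.1 + 0.2 == 0.3)                                        # False

# Comparing within a small relative tolerance succeeds, roughly what
# APL's default ⎕CT does.
print(math.isclose(0.1 + 0.2, 0.3))                            # True (rel_tol defaults to 1e-09)

# Forcing both tolerances to zero is the analogue of ⎕CT ← 0.
print(math.isclose(0.1 + 0.2, 0.3, rel_tol=0.0, abs_tol=0.0))  # False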

🔗 Ada

with Ada.Text_IO; use Ada.Text_IO;
procedure Sum is
  A : Float := 0.1;
  B : Float := 0.2;
  C : Float := A + B;
begin
  Put_Line(Float'Image(C));
  Put_Line(Float'Image(0.1 + 0.2));
end Sum;
3.00000E-01  
3.00000E-01

🔗 AutoHotkey

MsgBox, % 0.1 + 0.2
0.3

🔗 C

#include <stdio.h>

int main(int argc, char** argv) {
  printf("%.17f\n", .1 + .2);
  return 0;
}
0.30000000000000004

🔗 C#

Console.WriteLine("{0:R}", .1 + .2);
Console.WriteLine("{0:R}", .1f + .2f);
Console.WriteLine("{0:R}", .1m + .2m);
0.30000000000000004
0.3
0.3

C# has support for 128-bit decimal numbers, with 28-29 significant digits of precision. Their range, however, is smaller than that of both the single and double precision floating point types. Decimal literals are denoted with the m suffix.

🔗 C++

#include <iomanip>
#include <iostream>

int main() {
  std::cout << std::setprecision(17) << 0.1 + 0.2;
}
0.30000000000000004

🔗 Clojure

(+ 0.1 0.2)
0.30000000000000004

Clojure supports arbitrary precision and ratios. (+ 0.1M 0.2M) returns 0.3M, while (+ 1/10 2/10) returns 3/10.

🔗 ColdFusion

<cfset foo = .1 + .2>
<cfoutput>#foo#</cfoutput>
0.3

🔗 Common Lisp

(+ .1 .2)
(+ 1/10 2/10)
(+ 0.1d0 0.2d0)
(- 1.2 1.0)
0.3
3/10
0.30000000000000004d0
0.20000005

CL's spec doesn't actually even require radix-2 floats (let alone specifically 32-bit singles and 64-bit doubles), but the high-performance implementations all seem to use IEEE floats with the usual sizes. This was tested on SBCL and ECL in particular.

🔗 Crystal

puts 0.1 + 0.2
puts 0.1_f32 + 0.2_f32
0.30000000000000004
0.3

🔗 D

import std.stdio;

void main(string[] args) {
  writefln("%.17f", .1+.2);
  writefln("%.17f", .1f+.2f);
  writefln("%.17f", .1L+.2L);
}
0.29999999999999999  
0.30000001192092896  
0.30000000000000000

🔗 Dart

print(.1 + .2);
0.30000000000000004

🔗 Delphi XE5

writeln(0.1 + 0.2);
0.3

🔗 Elixir

IO.puts(0.1 + 0.2)
0.30000000000000004

🔗 Elm

0.1 + 0.2
0.30000000000000004

🔗 Elvish

+ .1 .2
0.30000000000000004

Elvish uses Go's double for numerical operations.

🔗 Emacs Lisp

(+ .1 .2)
0.30000000000000004

🔗 Erlang

io:format("~w~n", [0.1 + 0.2]).
io:format("~f~n", [0.1 + 0.2]).
io:format("~e~n", [0.1 + 0.2]).
io_lib:format("~.1f~n", [0.1 + 0.2]).
io_lib:format("~.2f~n", [0.1 + 0.2]).
0.30000000000000004
0.300000
3.00000e-1
"0.3\n"
"0.30\n"

🔗 FORTRAN

program FLOATMATHTEST
  real(kind=4) :: x4, y4
  real(kind=8) :: x8, y8
  real(kind=16) :: x16, y16
  ! REAL literals are single precision, use _8 or _16
  ! if the literal should be wider.
  x4 = .1; x8 = .1_8; x16 = .1_16
  y4 = .2; y8 = .2_8; y16 = .2_16
  write (*,*) x4 + y4, x8 + y8, x16 + y16
end
0.300000012  
0.30000000000000004  
0.300000000000000000000000000000000039

🔗 Fish

math .1 + .2
0.3

🔗 GHC (Haskell)

0.1 + 0.2 :: Double
0.1 + 0.2 :: Float
0.1 + 0.2 :: Rational
0.30000000000000004
0.3
3 % 10

If you need real numbers, packages like exact-real give you the correct answer.

🔗 GNU Octave

0.1 + 0.2
single(0.1)+single(0.2)
double(0.1)+double(0.2)
0.1+single(0.2)
0.1+double(0.2)
sprintf('%.17f',0.1+0.2)
0.3
0.3
0.3
0.3
0.3
0.30000000000000004

🔗 Gforth

0.1e 0.2e f+ f.
0.1e 0.2e f+ 0.3e f= .
0.3e 0.3e f= .
0.3
0
-1

In Gforth, 0 means false and -1 means true. The first example prints 0.3, but the value is not actually equal to 0.3.

🔗 Go

package main
import "fmt"

func main() {
  fmt.Println(.1 + .2)
  var a float64 = .1
  var b float64 = .2
  fmt.Println(a + b)
  fmt.Printf("%.54f\n", .1 + .2)
}
0.3  
0.30000000000000004  
0.299999999999999988897769753748434595763683319091796875

Go numeric constants have arbitrary precision, so the untyped constant expression .1 + .2 is evaluated exactly and only rounded to a float64 when printed; once the values are stored in float64 variables, the familiar error appears.

🔗 Groovy

println 0.1 + 0.2
0.3

Literal decimal values in Groovy are instances of java.math.BigDecimal.

🔗 Guile

(+ 0.1 0.2)
(+ 1/10 2/10)
0.30000000000000004
3/10

🔗 Hugs (Haskell)

0.1 + 0.2
0.3

🔗 Io

(0.1 + 0.2) print
0.3

🔗 Java

System.out.println(.1 + .2);
System.out.println(.1F + .2F);
0.30000000000000004
0.3

Java has built-in support for arbitrary-precision numbers using the BigDecimal class.

🔗 JavaScript

console.log(.1 + .2);
0.30000000000000004

The decimal.js library provides an arbitrary-precision Decimal type for JavaScript.

🔗 Julia

.1 + .2
0.30000000000000004

Julia has built-in rational numbers support and also a built-in arbitrary-precision BigFloat data type. To get the math right, 1//10 + 2//10 returns 3//10.

🔗 K (Kona)

0.1 + 0.2
0.3

🔗 Kotlin

println(.1 + .2)
println(.1F + .2F)
0.30000000000000004
0.3

See Reference documentation.

🔗 Lua

print(.1 + .2)
print(string.format("%0.17f", 0.1 + 0.2))
0.3
0.30000000000000004

🔗 MATLAB

0.1 + 0.2
sprintf('%.17f', 0.1 + 0.2)
0.3
0.30000000000000004

🔗 MIT/GNU Scheme

(+ 0.1 0.2)
(+ #e0.1 #e0.2)
0.30000000000000004
3/10

The Scheme specification has a concept of exactness.

🔗 Mathematica

0.1 + 0.2
0.3

Mathematica has a fairly thorough internal mechanism for dealing with numerical precision and supports arbitrary precision.

By default, the inputs 0.1 and 0.2 in the example are taken to have MachinePrecision. At a common MachinePrecision of 15.9546 digits, 0.1 + 0.2 actually has a FullForm of 0.30000000000000004, but is printed as 0.3.

Mathematica supports rational numbers: 1/10 + 2/10 is 3/10 (which has a FullForm of Rational[3, 10]).

🔗 MySQL

SELECT .1 + .2;
0.3

🔗 Nim

echo(0.1 + 0.2)
0.3

🔗 OCaml

0.1 +. 0.2;;
float = 0.300000000000000044

🔗 Objective-C

#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
  @autoreleasepool {
    NSLog(@"%.17f\n", .1+.2);
  }
  return 0;
}
0.30000000000000004

🔗 PHP

echo .1 + .2;
var_dump(.1 + .2);
var_dump(bcadd(.1, .2, 1));
0.3
float(0.30000000000000004441)
string(3) "0.3"

PHP echo converts 0.30000000000000004441 to a string and shortens it to "0.3". To achieve the desired floating-point result, adjust the precision setting: ini_set("precision", 17).

🔗 Perl

perl -E 'say 0.1+0.2'
perl -e 'printf q{%.17f}, 0.1+0.2'
perl -MMath::BigFloat -E 'say Math::BigFloat->new(q{0.1}) + Math::BigFloat->new(q{0.2})'
0.3
0.30000000000000004
0.3

The addition of float primitives only appears to give the right answer because not all of the 17 digits are printed by default. The core Math::BigFloat module allows true arbitrary-precision floating point operations by never using numeric primitives.

🔗 PicoLisp

[load "frac.min.l"]
[println (+ (/ 1 10) (/ 2 10))]
(/ 3 10)

You must load the file "frac.min.l" first.

🔗 PostgreSQL

SELECT 0.1::float + 0.2::float;
SELECT 0.1 + 0.2;
0.30000000000000004
0.3

PostgreSQL treats decimal literals as arbitrary precision numbers with fixed point. Explicit type casts are required to get floating-point numbers.

PostgreSQL 11 and earlier outputs 0.3 as a result for query SELECT 0.1::float + 0.2::float;, but the result is rounded only for display, and under the hood it is still good old 0.30000000000000004.

In PostgreSQL 12, the default behavior for textual output of floats was changed from a more human-readable rounded format to the shortest-precise format. The format can be customized via the extra_float_digits configuration parameter.

🔗 Prolog (SWI-Prolog)

?- X is 0.1 + 0.2.
X = 0.30000000000000004.

🔗 Pyret

0.1 + 0.2
~0.1 + ~0.2
0.3
~0.30000000000000004

Pyret has built-in support for both rational numbers and floating points. Numbers written normally are assumed to be exact. In contrast, RoughNums are represented by floating points, and are written prefixed with a ~, indicating that they are not precise answers; the ~ is meant to visually evoke hand-waving. A user who sees a computation produce ~0.30000000000000004 knows to treat the value with skepticism. RoughNums cannot be compared directly for equality; they can only be compared up to a given tolerance.

🔗 Python 2

print .1 + .2
.1 + .2
float(decimal.Decimal(".1") + decimal.Decimal(".2"))
float(fractions.Fraction('0.1') + fractions.Fraction('0.2'))
0.3
0.30000000000000004
0.3
0.3

Python 2's print statement converts 0.30000000000000004 to a string and shortens it to "0.3". To achieve the desired floating point result, use print repr(.1 + .2). This was fixed in Python 3 (see below).

🔗 Python 3

print(.1 + .2)
.1 + .2
float(decimal.Decimal('.1') + decimal.Decimal('.2'))
float(fractions.Fraction('0.1') + fractions.Fraction('0.2'))
0.30000000000000004
0.30000000000000004
0.3
0.3

Python (both 2 and 3) supports decimal arithmetic with the decimal module, and true rational numbers with the fractions module.
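As a small illustration of those two modules (a sketch; note that the values are constructed from strings, since Decimal(0.1) or Fraction(0.1) would inherit the binary rounding error):

from decimal import Decimal
from fractions import Fraction

# Constructed from strings, both types represent 0.1 and 0.2 exactly,
# so the sums really are 0.3 and 3/10.
print(Decimal('0.1') + Decimal('0.2'))                           # 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))         # True
print(Fraction('1/10') + Fraction('2/10'))                       # 3/10
print(Fraction('1/10') + Fraction('2/10') == Fraction('3/10'))   # True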

🔗 R

print(.1 + .2)
print(.1 + .2, digits=18)
0.3
0.30000000000000004

🔗 Racket (PLT Scheme)

(+ .1 .2)
(+ 1/10 2/10)
0.30000000000000004
3/10

🔗 Raku

raku -e 'say 0.1 + 0.2'
raku -e 'say (0.1 + 0.2).fmt("%.17f")'
raku -e 'say 1/10 + 2/10'
raku -e 'say 0.1e0 + 0.2e0'
0.3
0.30000000000000000
0.3
0.30000000000000004

Raku uses rationals by default, so .1 is stored something like { numerator => 1, denominator => 10 }. To actually trigger the behavior, you must force the numbers to be of type Num (double in C terms) and use the base function instead of the sprintf or fmt functions (since those functions have a bug that limits the precision of the output).

🔗 Regina REXX

say 0.1 + 0.2
0.3

🔗 Ruby

puts 0.1 + 0.2
puts 1/10r + 2/10r
0.30000000000000004
3/10

Ruby has supported rational number literals (such as 1/10r) directly since version 2.1; for older versions, use Rational. Ruby also has a library specifically for decimals: BigDecimal.

🔗 Rust

extern crate num;
use num::rational::Ratio;

fn main() {
  println!("{}", 0.1 + 0.2);
  println!("{}", 0.1_f32 + 0.2_f32);
  println!("1/10 + 2/10 = {}", Ratio::new(1, 10) + Ratio::new(2, 10));
}
0.30000000000000004
0.3
1/10 + 2/10 = 3/10

Rust has rational number support from the num crate.

🔗 SageMath

.1 + .2
RDF(.1) + RDF(.2)
RBF('.1') + RBF('.2')
QQ('1/10') + QQ('2/10')
0.3
0.30000000000000004
["0.300000000000000 +/- 1.64e-16"]
3/10

SageMath supports various fields for arithmetic: Arbitrary Precision Real Numbers, RealDoubleField, Ball Arithmetic, Rational Numbers, etc.

🔗 Scala

scala -e 'println(0.1 + 0.2)'
scala -e 'println(0.1F + 0.2F)'
scala -e 'println(BigDecimal("0.1") + BigDecimal("0.2"))'
0.30000000000000004
0.3
0.3

🔗 Smalltalk

(1/10) + (2/10).
0.1 + 0.2.
0.1s17 + 0.2s17.
(3/10)
0.30000000000000004
0.30000000000000000s17

Smalltalk uses fractions by default in most operations; in fact, standard division results in fractions, not floating point numbers. Squeak and similar Smalltalks provide "scaled decimals" that allow fixed-point real numbers (the s suffix indicating the number of decimal places).

🔗 Swift

0.1 + 0.2
Decimal(0.1) + Decimal(0.2)
0.30000000000000004
0.3

Swift supports decimal arithmetic with the Foundation module.

🔗 TCL

puts [expr .1 + .2]
0.30000000000000004

🔗 Turbo Pascal 7.0

writeln(0.1 + 0.2);
0.3

🔗 Vala

static int main(string[] args) {
  stdout.printf("%.17f\n", 0.1 + 0.2);
  return 0;
}
0.30000000000000004

🔗 Visual Basic 6

a# = 0.1 + 0.2: b# = 0.3
Debug.Print Format(a - b, "0." & String(16, "0"))
Debug.Print a = b
0.0000000000000001  
False

Appending the identifier type character # to any identifier forces it to Double.

🔗 WebAssembly (WAST)

(func $add_f32 (result f32)
  f32.const 0.1
  f32.const 0.2
  f32.add)
(export "add_f32" (func $add_f32))
(func $add_f64 (result f64)
  f64.const 0.1
  f64.const 0.2
  f64.add)
(export "add_f64" (func $add_f64))
0.30000001192092896
0.30000000000000004

See demo.

🔗 awk

awk 'BEGIN { print 0.1 + 0.2 }'
0.3

🔗 bc

0.1 + 0.2
0.3

🔗 dc

0.1 0.2 + p
0.3

🔗 zsh

echo "$((.1 + .2))"
0.30000000000000004

I am Erik Wiffin. You can contact me at: erik.wiffin.com or [email protected].

This project is on GitHub. If you think this page could be improved, send me a pull request.

