Where's the decimal type in Racket? (hint: right there)
Everyone knows that floating-point numbers shouldn’t be used to represent financial data (like prices of products or an order total) in software, because floats may introduce rounding errors.
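For example, adding 0.1 to itself ten times in binary floating point does not come out to exactly 1.0 (a quick REPL check of my own; the printed digits may vary slightly, but the comparison will be false):
> (for/fold ([total 0.0]) ([_ (in-range 10)]) (+ total 0.1))
0.9999999999999999
> (= 1.0 (for/fold ([total 0.0]) ([_ (in-range 10)]) (+ total 0.1)))
#f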
For this reason databases and some programming languages have special data types for representing and doing calculations on decimals - either to fixed precision specified upfront in terms of decimal places (like the NUMERIC type in databases) or with arbitrary precision (limited only by system memory).
How does Racket handle such numbers? If you are coming from languages that don’t natively do things like (+ 3/7 12/15), you may be expecting to find some kind of decimal type in Racket. I was, and I didn’t find one.
Here’s the thing though: 3/7 is not a float in Racket. It is an exact number:
> (exact? 3/7)
#t
> (exact? (+ 3/7 12/15))
#t
What is an exact number? According to the Racket reference:
The precision and size of exact numbers is limited only by available memory (and the precision of operations that can produce irrational numbers). In particular, adding, multiplying, subtracting, and dividing exact numbers always produces an exact result.
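For illustration (my own REPL examples, not from the reference), exact integers can grow arbitrarily large and exact rational arithmetic stays exact:
> (* 111111111 111111111)
12345678987654321
> (exact? (expt 10 100))
#t
> (+ 1/3 1/6)
1/2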
How does this help us with decimals? Well, non-integer decimals are themselves fractional numbers where the denominator is a power of 10. You don’t need any special additional type to represent decimals if you have precise fractional numbers. The decimal 0.1 is simply 1/10, 7.75 is 775/100, 345.25 is 34525/100, and 5.0 is 5/1 or just 5.¹
Essentially, Racket’s number? has built-in support for arbitrary-precision fractions, including decimals. Therefore:
> (exact? 0.1)
#f
> (+ 0.1 0.2)
0.30000000000000004 ; aha, binary approximation!
But:
> (+ 1/10 2/10)
3/10 ; precisely!
> (exact? 3/10)
#t ; indeed
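As a quick cross-check of the fraction forms listed above (my own REPL examples), the #e prefix makes the reader parse a literal as an exact number; note that the results come back reduced to lowest terms:
> #e0.1
1/10
> #e7.75
31/4
> (= #e7.75 775/100)
#t
> #e5.0
5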
Usage in PostgreSQL
This type of number is also used by the PostgreSQL connector in the db module:
A SQL value of type decimal is converted to either an exact rational or +nan.0. When converting Racket values to SQL decimal, exact rational values representable by finite decimal strings are converted without loss of precision. (Precision may be lost, of course, if the value is then stored in a database field of lower precision.)
Let’s verify:
> (define row (query-row my-conn "SELECT 0.1::decimal, 0.2::decimal"))
> (for/sum ([i row]) i)
3/10
So adding together decimals 0.1 and 0.2 after receiving them from PostgreSQL yields the correct result. How about inserting values and calculating their sum on PostgreSQL’s side, then querying the result?
> (query-exec my-conn "CREATE TEMPORARY TABLE decimals (a_decimal NUMERIC(10, 2))")
> (query-exec my-conn "INSERT INTO decimals VALUES($1), ($2)" 1/10 2/10)
> (query-row my-conn "SELECT SUM(a_decimal) FROM decimals")
'#(3/10)
Great, no binary rounding so far.
Let’s do another test, out of curiosity:
> (query-exec my-conn "TRUNCATE TABLE decimals")
> (query-exec my-conn "INSERT INTO decimals VALUES ($1)" 1/3)
> (query-rows my-conn "SELECT * FROM decimals WHERE a_decimal = $1" 1/3)
'()
> (query-rows my-conn "SELECT * FROM decimals")
'(#(33/100))
This happens because 1/3 has no finite decimal representation - it would require an infinite sequence of 3s (0.3333...). When creating the table we declared that the a_decimal field should have only 2 digits in the fractional part, so 1/3 gets rounded to 0.33 on insertion and is no longer equal to 1/3. Fair enough!
Converting from and to strings
You can control how string->number reads decimals either via the read-decimal-as-inexact parameter or directly via the function’s decimal-mode argument:
> (string->number "3.33")
3.33 ; inexact
> (parameterize ([read-decimal-as-inexact #f])
(string->number "3.33"))
333/100 ; exact
> (string->number "3.33" 10 'number-or-false 'decimal-as-exact)
333/100 ; also exact
To format a fractional number (or any real number) as a decimal, you can use real->decimal-string; this will output the number rounded to a fixed number of decimal places (defaults to 2):
> (real->decimal-string (+ 1/10 2/10))
"0.30"
Let’s check that it behaves as we expect by printing an exact and an inexact number and comparing:
;; an exact case
> (real->decimal-string (+ 1/10 2/10) 40)
"0.3000000000000000000000000000000000000000"
;; an inexact number for reference
> (real->decimal-string (+ 0.1 0.2) 40)
"0.3000000000000000444089209850062616169453"
JSON
The IETF JSON standard doesn’t require the number data type to have arbitrary precision. Instead, it suggests assuming limited precision and range:
This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision.
This means that two different implementations can have different ideas about the value of “number” in the following:
{"number": 0.1}
The default Racket JSON implementation reads numbers as inexact if they have a fractional part and, in the other direction, permits only integers or inexact numbers as JSON values. If you want to use arbitrary-precision numbers instead, there’s an “exact json” library available. However, be careful about assuming that other implementations will also maintain arbitrary precision on the other end of the wire. (For instance, Python’s json module also reads 0.1 as a float by default.)
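A quick way to see this default behaviour with the json module (my own REPL examples):
> (require json)
> (string->jsexpr "{\"number\": 0.1}")
'#hasheq((number . 0.1))
> (exact? (hash-ref (string->jsexpr "{\"number\": 0.1}") 'number))
#f
> (jsexpr? 1/10)
#f
> (jsexpr? 0.1)
#t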
Some APIs convey numeric values in JSON as strings in order to explicitly manage the parsing and formatting of such values.
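Combining that pattern with the string conversion section above, you can read such a string-encoded value exactly (a sketch continuing the same REPL session; the "number" field name is just for illustration):
> (parameterize ([read-decimal-as-inexact #f])
    (string->number
     (hash-ref (string->jsexpr "{\"number\": \"0.1\"}") 'number)))
1/10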
A non-conclusion
I hope this was useful, and please let me know if I have messed something up or missed something, or simply if you have a comment.
¹ Note that Racket will not necessarily represent these numbers as decimal fractions if they have some simpler fractional representation:
> (parameterize ([read-decimal-as-inexact #f])
    (map string->number '("0.1" "7.75" "345.25" "5.0")))
'(1/10 31/4 1381/4 5)
They represent the same number nevertheless.