
Python Dictionary Floats

I came across a strange behavior in Python (2.6.1) dictionaries. The code I have is:

```python
new_item = {'val': 1.4}
print new_item['val']
print new_item
```

And the result is:

```
1.4
{'val': 1.3999999999999999}
```

Why does printing the value directly show 1.4, while printing the dictionary shows something different?

Solution 1:

This is not Python-specific, the issue appears with every language that uses binary floating point (which is pretty much every mainstream language).

From the Floating-Point Guide:

Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.

When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.

Some values can be represented exactly as binary fractions, and output formatting routines will often display the shortest number that is closer to the actual value than to any other floating-point number, which masks some of the rounding errors.
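To see the rounding the guide describes, you can inspect the exact binary value a float literal actually stores (a quick sketch in Python 3 syntax; the original question used Python 2.6, whose `repr()` printed 17 significant digits):

```python
from decimal import Decimal

x = 1.4
# Decimal(float) reveals the exact binary value the literal was rounded to,
# which is slightly below 1.4
print(Decimal(x))
# Python 2.6's repr() used 17 significant digits, exposing the error --
# this is what printing the dictionary showed
print('%.17g' % x)   # 1.3999999999999999
# str() (and repr() in modern Pythons) picks the shortest string that
# round-trips to the same float, masking the error
print(str(x))        # 1.4
```

This is why `print new_item['val']` (which uses `str()`) showed `1.4`, while `print new_item` (which uses `repr()` on the values) showed the longer form.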

Solution 2:

This problem is related to floating point representations in binary, as others have pointed out.

But I thought you might want something that would help you solve your implied problem in Python.

It's unrelated to dictionaries, so if I were you, I would remove that tag.

If you can use a fixed-precision decimal number for your purposes, I would recommend you check out the Python decimal module. From the page (emphasis mine):

  • Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.” – excerpt from the decimal arithmetic specification.

  • Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.

  • The exactness carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants.
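A short sketch of the difference (Python 3 syntax; `decimal` is part of the standard library), including one way it could apply to the asker's dictionary:

```python
from decimal import Decimal

# Binary floating point accumulates a tiny error:
print(0.1 + 0.1 + 0.1 - 0.3)   # a value near, but not equal to, zero

# Decimals constructed from strings are exact, so the same sum is zero:
print(Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3'))

# Applied to the original question: storing a Decimal in the dict keeps
# the value exactly as written
new_item = {'val': Decimal('1.4')}
print(new_item['val'])   # 1.4
```

Note that `Decimal` should be built from strings (or integers), not floats; `Decimal(0.1)` would faithfully capture the binary rounding error rather than avoid it.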
