
Communications of the ACM

ACM TechNews

A New Number Format For Computers Could Nuke Approximation Errors For Good



High-performance computing scientist John Gustafson has proposed a universal number format that permits the various "fields" within a binary floating-point number representation to grow and shrink according to required precision.

Credit: Mclek/Shutterstock

Computers have a persistent problem representing fractional numerical values, because doing so requires a way to encode exactly where the decimal point falls within a string of digits. The larger the number, the less room is left to represent its fractional part, but high-performance computing scientist John Gustafson has proposed a universal number (unum) format to eliminate this approximation-error problem.
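To see the kind of approximation error at issue, a quick check in any language that uses binary floating point shows that a value as simple as 0.1 cannot be stored exactly (Python is used here purely for illustration):

```python
from fractions import Fraction

# 0.1 has no exact binary representation, so a 64-bit float stores the
# nearest representable value instead; this is the kind of approximation
# error the unum format is meant to address.
print(Fraction(0.1))        # 3602879701896397/36028797018963968
print(0.1 + 0.2 == 0.3)     # False
```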

An unum permits the various "fields" within a binary floating-point number representation to grow and shrink according to required precision.

The exponent values needed to scale most decimal numbers typically require far fewer than the eight bits allocated in a 32-bit float, but the current standard provisions for the worst case. Gustafson says an exponent field that can contract when less range is needed leaves more digits free to represent actual numerical content.
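As a rough illustration of how much of that fixed exponent field goes unused, the sketch below (an assumption for illustration, not taken from Gustafson's work) compares the 8-bit exponent field a 32-bit IEEE 754 float always stores against the number of bits a given value's binary exponent actually needs:

```python
import math
import struct

def ieee754_exponent_bits(x: float) -> str:
    # Raw 8-bit exponent field of a 32-bit IEEE 754 float.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return format((bits >> 23) & 0xFF, "08b")

def minimal_exponent_bits(x: float) -> int:
    # Bits actually needed to hold this value's binary exponent,
    # plus one bit for the exponent's sign.
    e = math.frexp(x)[1] - 1          # x = m * 2**e with 0.5 <= m < 1
    return max(1, abs(e).bit_length() + 1)

for x in (1.0, 6.5, 0.1, 1e-3):
    print(x, ieee754_exponent_bits(x), minimal_exponent_bits(x))
```

For everyday values such as these, the exponent fits comfortably in five bits or fewer, yet the fixed-width format spends eight bits on it every time.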

He notes an unum contains three extra fields that make the number self-descriptive: a flag indicating whether the number is exact or lies between exact values, the size of the exponent in bits, and the size of the fraction in bits. "So not only does the binary point float, the number of significant digits also floats," Gustafson says.
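The structure Gustafson describes can be sketched as a simple data type. The layout and decoding rule below are a simplification; the field names, the IEEE-style bias of 2**(es-1) - 1, and the hidden leading 1 bit are assumptions based on how unums are commonly presented, not details given in this article:

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass
class Unum:
    """A simplified unum-like value: the usual sign/exponent/fraction
    fields plus the three self-descriptive fields the article mentions."""
    sign: int        # 0 or 1
    exponent: int    # raw exponent bits, es bits wide
    fraction: int    # raw fraction bits, fs bits wide
    ubit: int        # 0 = exact, 1 = value lies between exact values
    es: int          # size of the exponent field, in bits
    fs: int          # size of the fraction field, in bits

    def bit_width(self) -> int:
        # Total storage grows and shrinks with es and fs, unlike a fixed
        # 32-bit float; a real encoding also spends a few bits storing
        # es and fs themselves.
        return 1 + self.es + self.fs + 1

    def exact_value(self) -> Fraction:
        """Decode an exact, normalized unum (ubit == 0, exponent != 0),
        assuming an IEEE-style bias and a hidden leading 1 bit."""
        assert self.ubit == 0, "inexact unums denote an interval, not a point"
        bias = 2 ** (self.es - 1) - 1
        magnitude = (1 + Fraction(self.fraction, 2 ** self.fs)) * Fraction(2) ** (self.exponent - bias)
        return -magnitude if self.sign else magnitude

# 6.5 = 1.625 * 2**2 needs only a 3-bit exponent and a 3-bit fraction here,
# instead of the fixed 8 + 23 bits a 32-bit float always spends.
x = Unum(sign=0, exponent=5, fraction=0b101, ubit=0, es=3, fs=3)
print(x.exact_value(), x.bit_width())   # 13/2 8
```

Because the exponent and fraction sizes travel with the number itself, a value like 6.5 above fits in eight bits plus the small size fields, while a 32-bit float would spend its full fixed width on it regardless.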

From Motherboard

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
