Swift: String representation for floating-point numbers should be compact and bijective like in e.g. JavaScript
| Originator: | pyry.jahkola | | |
| Number: | rdar://20186548 | Date Originated: | 2015-03-17 |
| Status: | Open | Resolved: | |
| Product: | Developer Tools | Product Version: | |
| Classification: | Feature (new) | Reproducible: | Always |
Summary:
I'd like Swift to have a well-defined String representation for floating-point numbers.
There is a widely used algorithm that could be used: http://www.cs.indiana.edu/~dyb/pubs/FP-Printing-PLDI96.pdf – "Printing Floating-Point Numbers Quickly and Accurately" by Burger & Dybvig.
In Cocoa, I'm only aware of JavaScriptCore making use of that algorithm.
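For illustration, here is a brute-force sketch of the same "shortest round-tripping decimal" criterion. The helper name `shortestRoundTrip` is hypothetical and mine; Burger & Dybvig's algorithm computes the correct digits directly instead of retrying like this:
```
import Foundation

// Brute-force sketch: try 1...17 significant digits and return the first
// decimal string that parses back to the exact same Double. 17 significant
// digits always suffice to round-trip a Double, so the loop terminates.
func shortestRoundTrip(x: Double) -> String {
    for digits in 1...17 {
        let s = NSString(format: "%.\(digits)g", x) as String
        if (s as NSString).doubleValue == x { return s }
    }
    return toString(x) // not reached for finite x
}

shortestRoundTrip(nextafter(1, Double.infinity)) //=> "1.0000000000000002"
shortestRoundTrip(DBL_TRUE_MIN)                  //=> "5e-324"
```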
* * *
Currently, the REPL, Playgrounds, the debugger, and the toString functions all print or show floating-point numbers differently. And quite often, two distinct Double values print as if they were equal.
A number of programming languages (Python, Java, JavaScript; see e.g. http://www.ecma-international.org/ecma-262/5.1/#sec-9.8.1) require that the default conversion from floating-point number to string not lose precision, while still producing the decimal representation with the fewest significant digits.
Doing so has the benefit that converting a number to String and back (think JSON, CSV) doesn't lose precision compared to storing the floating-point number as bytes in memory and reading them later:
```
import Foundation

// Produces a random finite Double by reinterpreting 64 random bits,
// retrying until the bit pattern is neither an infinity nor a NaN.
func arbitrary() -> Double {
    while true {
        let u = UInt64(arc4random()) << 32 | UInt64(arc4random())
        let x = unsafeBitCast(u, Double.self)
        if x.isFinite { return x }
    }
}

// Parses a String back into a Double via NSString.doubleValue.
func toDouble(string: String) -> Double {
    return (string as NSString).doubleValue
}

for _ in 0 ..< 100000 {
    let x: Double = arbitrary()
    println(x)
    assert(toDouble(toString(x)) == x)                      // FAILS!
    assert(toString(toDouble(toString(x))) == toString(x))  // FAILS!
}
```
Steps to Reproduce:
Swift (as well as Objective-C and Foundation) has varying conventions for representing floating-point numbers. On the one hand, the conversion may lose precision:
```
let moreThanOne = nextafter(1, Double.infinity) //=> 1.0000000000000002
assert(moreThanOne == toDouble(toString(moreThanOne))) // FAILS!
toString(moreThanOne) //=> "1.0"
toString(moreThanOne as NSNumber) //=> "1"
NSString(format: "%lg", moreThanOne) //=> "1"
let lessThanOne = nextafter(1, -Double.infinity) //=> 0.99999999999999988
assert(lessThanOne == toDouble(toString(lessThanOne))) // FAILS!
toString(lessThanOne) //=> "1.0"
toString(lessThanOne as NSNumber) //=> "0.9999999999999999"
NSString(format: "%lg", lessThanOne) //=> "1"
```
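For contrast, forcing the maximum of 17 significant digits (enough to round-trip any Double) never loses precision, but it pads the output with noise that the shortest representation would avoid. A quick check, reusing toDouble from above:
```
let noisy = NSString(format: "%.17g", moreThanOne) as String
noisy                                   //=> "1.0000000000000002"
assert(moreThanOne == toDouble(noisy))  // pass
NSString(format: "%.17g", lessThanOne)  //=> "0.99999999999999989"
```
Note the trailing noise in the last case; the shortest round-tripping form would be "0.9999999999999999".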
On the other hand, none of the conversions use decimal digits sparingly either:
```
let oneAndSome = 1 + 1e-15 //=> 1.0000000000000011 // why not just "1.000000000000001"?
assert(oneAndSome == toDouble(toString(oneAndSome))) // FAILS!
toString(oneAndSome) //=> "1.0"
toString(oneAndSome as NSNumber) //=> "1.000000000000001"
NSString(format: "%lg", oneAndSome) //=> "1"
let almostZero = DBL_TRUE_MIN //=> 4.9406564584124654E-324 // why not just "5e-324"?
assert(almostZero == toDouble(toString(almostZero))) // pass
toString(almostZero) // "4.94065645841247e-324"
toString(almostZero as NSNumber) // "4.940656458412465e-324"
NSString(format: "%lg", almostZero) // "4.94066e-324"
```
All of the above string representations of almostZero can be seen as "wrong", because the immediate successor of the least positive Double value 5e-324 is:
```
let stillTiny = nextafter(almostZero, Double.infinity) //=> 9.8813129168249309E-324 // why not just "1e-323"
toString(stillTiny) //=> "9.88131291682493e-324"
```
which means that neither of these values requires more than one significant digit to be distinguishable and to round-trip the conversion to and from String perfectly.
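Indeed, a single significant digit already round-trips both values, reusing toDouble from above:
```
assert(almostZero == toDouble("5e-324")) // pass
assert(stillTiny == toDouble("1e-323"))  // pass
```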
Expected Results:
```
toString(1.0) //=> "1"
toString(nextafter(1, Double.infinity)) //=> "1.0000000000000002"
toString(nextafter(1, -Double.infinity)) //=> "0.9999999999999999"
toString(1 + 1e-15) //=> "1.000000000000001"
toString(DBL_TRUE_MIN) //=> "5e-324"
toString(2 * DBL_TRUE_MIN) //=> "1e-323"
```
I'd also expect there to be a function for converting a String to a Float or Double:
```
Double(string: "1") //=> Optional(1.0)
Double(string: "nan") //=> Optional(NaN)
Double(string: "1e-323") //=> Optional(1e-323)
Double(string: "-Infinity") //=> Optional(-Inf)
Float(string: "+Infinity") //=> Optional(+Inf)
```
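Until then, one possible sketch of such an initializer could be built on Foundation's NSScanner. The `string:` label just mirrors the proposal above, and the handling of spellings like "nan" or "Infinity", and of extreme values, would need more care than this:
```
import Foundation

// A sketch of the proposed failable initializer, built on NSScanner,
// which reports both the parsed value and whether it consumed the
// whole string. Semantics here are approximate, not a real stdlib API.
extension Double {
    init?(string: String) {
        let scanner = NSScanner(string: string)
        var value = 0.0
        if scanner.scanDouble(&value) && scanner.atEnd {
            self = value
        } else {
            return nil
        }
    }
}

Double(string: "1")    //=> Optional(1.0)
Double(string: "0.25") //=> Optional(0.25)
Double(string: "x1")   //=> nil
```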
Actual Results:
```
toString(1.0) //=> "1.0"
toString(nextafter(1, Double.infinity)) //=> "1.0"
toString(nextafter(1, -Double.infinity)) //=> "1.0"
toString(1 + 1e-15) //=> "1.0"
toString(DBL_TRUE_MIN) //=> "4.94065645841247e-324"
toString(2 * DBL_TRUE_MIN) //=> "9.88131291682493e-324"
```
```
error: extra argument 'string' in call
Double(string: "1.0")
^ ~~~~~
```
Version:
Notes:
As a workaround, JavaScriptCore could be used like so:
```
import JavaScriptCore

let ctx = JSContext()

// Converts a Double to String via JavaScriptCore's number-to-string
// conversion, which produces the shortest round-tripping representation.
func toJSString(double: Double) -> String {
    return JSValue(double: double, inContext: ctx).toString()
}

// All as expected (using the values and toDouble defined above):
toJSString(1.0)         //=> "1"
toJSString(moreThanOne) //=> "1.0000000000000002"
toJSString(lessThanOne) //=> "0.9999999999999999"
toJSString(oneAndSome)  //=> "1.000000000000001"
toJSString(almostZero)  //=> "5e-324"
toJSString(stillTiny)   //=> "1e-323"

assert(moreThanOne == toDouble(toJSString(moreThanOne))) // pass
assert(lessThanOne == toDouble(toJSString(lessThanOne))) // pass
assert(oneAndSome  == toDouble(toJSString(oneAndSome)))  // pass
assert(almostZero  == toDouble(toJSString(almostZero)))  // pass
assert(stillTiny   == toDouble(toJSString(stillTiny)))   // pass
```
Wouldn't it be a good idea to make this conversion the default in the Swift standard library?
Configuration:
Attachments: