Oh my, but this rabbit hole goes deep. Split from the Project Statistics thread.
The concept at play here is that of cumulative rounding errors when working with finite-precision floating-point numbers.
The values in the last post were calculated using 64-bit double floats. Using 128-bit long doubles instead, we don't see a difference within 15 decimal places. Whether using parentheses or not, we get 7.985000019074437 mi. The loss of precision becomes too small for us to (easily) see again. For a while, at least.

Edit: Wait, no, scratch that. I only used 128-bit long doubles for the simple arithmetic, but still used sin, cos & acos, which receive & return 64-bit doubles. Switching to sinl, cosl & acosl, which receive & return 128-bit long doubles, we get 7.984999988845444 with parentheses and 7.984999988841631 without. Our data start to differ in the 12th decimal place rather than the 8th, and we are now firmly below the 7.985 mark.
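For anyone who hasn't been following along: the with-vs-without-parentheses business refers to how the Deg->Rad conversion is written. A minimal sketch, assuming conversion code along these lines:

#include <math.h>

// With parentheses, (M_PI/180) folds to a single constant at compile time:
// one multiplication, one rounding step.
double deg2rad_parens(double deg)   { return deg * (M_PI/180); }

// Without, it's multiply-then-divide: two operations, two roundings,
// and the low bits can come out differently.
double deg2rad_noparens(double deg) { return deg * M_PI / 180; }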
Here's Jim's formula for distance calculation, rewritten in C:
acos(cos(rlat1)*cos(rlng1)*cos(rlat2)*cos(rlng2) + cos(rlat1)*sin(rlng1)*cos(rlat2)*sin(rlng2) + sin(rlat1)*sin(rlat2)) * 3963.1 * 1.02112

(The 3963.1 is the Earth's radius in miles; the 1.02112 is the CHM fudge factor.)

Factor out cos(rlat1)*cos(rlat2), substitute the cosine angle difference identity (cos(a-b) = cos(a)*cos(b) + sin(a)*sin(b)), and we get:

acos(cos(rlat1)*cos(rlat2)*cos(rlng1-rlng2) + sin(rlat1)*sin(rlat2)) * 3963.1 * 1.02112

This is the "spherical law of cosines" formula for great-circle distance given here, and it takes us from performing 11 trig functions down to 6. (Both versions are collected as C functions after the list below.)
• Factoring out the cos(rlat1)*cos(rlat2) alone causes no change in the resulting value. The author of this page suggests compiler optimizations; sounds legit. Thus, it probably won't affect speed either.
• OTOH, substituting the cosine identity does yield a slightly different segment length. And it is slightly faster.
• There's less difference, using 64-bit double-precision floats, between the results with & without parentheses. My theory is that, in cutting a couple of computations out of the process, there are fewer remaining steps to add to the cumulative rounding error.
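Here are the two versions side by side as C functions (a sketch: the rlat/rlng arguments are assumed to be already converted to radians, and EARTH_RADIUS_MI & CHM_FUDGE are just names I'm giving the two constants):

#include <math.h>

#define EARTH_RADIUS_MI 3963.1   /* Earth's radius in miles */
#define CHM_FUDGE       1.02112  /* the CHM fudge factor */

/* Jim's original formula: 11 trig calls per segment */
double dist_orig(double rlat1, double rlng1, double rlat2, double rlng2)
{
    return acos(cos(rlat1)*cos(rlng1)*cos(rlat2)*cos(rlng2)
              + cos(rlat1)*sin(rlng1)*cos(rlat2)*sin(rlng2)
              + sin(rlat1)*sin(rlat2))
         * EARTH_RADIUS_MI * CHM_FUDGE;
}

/* Factored & simplified via the cosine angle difference identity: 6 trig calls */
double dist_slc(double rlat1, double rlng1, double rlat2, double rlng2)
{
    return acos(cos(rlat1)*cos(rlat2)*cos(rlng1-rlng2)
              + sin(rlat1)*sin(rlat2))
         * EARTH_RADIUS_MI * CHM_FUDGE;
}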
However!... From the Wikipedia article on the haversine formula:

"On computer systems with low floating-point precision, the spherical law of cosines formula can have large rounding errors if the distance is small (if the two points are a kilometer apart on the surface of the Earth, the cosine of the central angle is near 0.99999999). For modern 64-bit floating-point numbers, the spherical law of cosines formula, given above, does not have serious rounding errors for distances larger than a few meters on the surface of the Earth. The haversine formula is numerically better-conditioned for small distances."
asin(sqrt(pow(sin((rlat2-rlat1)/2),2) + cos(rlat1) * cos(rlat2) * pow(sin((rlng2-rlng1)/2),2))) * 7926.2 * 1.02112

• This is a tiny bit faster (9.3s vice 9.7s for Creating per-traveler stats logs and augmenting data structure) than Jim's formula that we're currently using: we've gotten rid of some trig functions, but added a couple of pows and a sqrt into the mix. I suspect that the spherical law of cosines formula, simplified via the cosine angle difference identity, may be a smidgen faster, but have yet to test.
Edit: Tested. 0.21s faster average, 0.05s slower median. Essentially the same speed. In that case, why not use the haversine formula? In C++, at least; it's possible that the compiler is doing some SIMD optimization, and results may be different for Python.
• It does seem at first glance to be more precise; in this case, we don't get a parens/no parens difference until the 13th decimal place, and the difference from the 64-bit to the 128-bit version is also pretty small.
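As a function in the same style (a sketch; the 7926.2 is 2 * 3963.1, i.e. the haversine formula's 2r rolled into one constant):

#include <math.h>

/* Haversine: numerically better-conditioned for the small central angles
   that make up most of the data. 7926.2 = 2 * 3963.1, i.e. 2r as one constant. */
double dist_hav(double rlat1, double rlng1, double rlat2, double rlng2)
{
    return asin(sqrt(pow(sin((rlat2-rlat1)/2), 2)
              + cos(rlat1) * cos(rlat2) * pow(sin((rlng2-rlng1)/2), 2)))
         * 7926.2 * 1.02112;
}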
yakra@BiggaTomato:~/proj/tmtools/me103oscar$ ./me103oscar
using 64-bit precision:
-------------------------------------------
Jim's method with parens: 7.985000010811417
Sph Law Cos with parens: 7.985000008192277
Vincenty with parens: 7.984999988839341
Haversine with parens: 7.984999988839563
Jim's method sans parens: 7.984999986832708
Sph Law Cos sans parens: 7.984999999901063
Vincenty sans parens: 7.984999988839544
Haversine sans parens: 7.984999988839754
using 128-bit precision:
-------------------------------------------
Jim's method with parens: 7.984999988845444
Sph Law Cos with parens: 7.984999988839575
Vincenty with parens: 7.984999988839748
Haversine with parens: 7.984999988839748
Jim's method sans parens: 7.984999988841631
Sph Law Cos sans parens: 7.984999988839235
Vincenty sans parens: 7.984999988839748
Haversine sans parens: 7.984999988839748
But then, I've only been looking at one traveled segment, and can't generalize from that one piece of info to the whole dataset...

It may be interesting to run some tests with the different formulas & see what produces the smallest cumulative error across all segments (by introducing "noise" via comparing the Deg->Rad conversions with & without parentheses). A kinda useless low-priority task, but just the kind of thing I like to get into. A rough sketch of what I have in mind is below.
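Roughly this, reusing the deg2rad helpers and distance functions sketched above ('segments' and its layout are hypothetical stand-ins for however the real data get loaded):

#include <math.h>

/* Sum a set of segments twice -- once with each Deg->Rad style -- and
   report the accumulated drift. Each row of the hypothetical 'segments'
   array is {lat1, lng1, lat2, lng2} in degrees. */
double cumulative_drift(double (*dist)(double, double, double, double),
                        const double segments[][4], int num_segments)
{
    double sum_p = 0, sum_np = 0;
    for (int i = 0; i < num_segments; i++) {
        const double *s = segments[i];
        sum_p  += dist(deg2rad_parens(s[0]),   deg2rad_parens(s[1]),
                       deg2rad_parens(s[2]),   deg2rad_parens(s[3]));
        sum_np += dist(deg2rad_noparens(s[0]), deg2rad_noparens(s[1]),
                       deg2rad_noparens(s[2]), deg2rad_noparens(s[3]));
    }
    return fabs(sum_p - sum_np); /* cumulative parens/no-parens difference */
}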
My hypothesis is that the haversine formula will produce the most precise results, with the modified spherical law of cosines formula being fastest. Then, it'll be a matter of deciding on the speed-vs-precision trade-off. Or maybe not; more on this below...
Pretty neat how a small rounding error of less than 1/20 millimeter works out to a difference in Oscar's reported mileage of almost the length of a 53' semi trailer.
If we take the greater accuracy of the haversine figure as gospel, then it may be that Oscar has just under 7.985 mi on ME103 after all, or "7.98" mi, if you will.
Postscript: Wikipedia also mentions a "special case of the Vincenty formula for an ellipsoid with equal major and minor axes" that doesn't suffer from rounding errors...
• as points approach 0 distance, as affects the spherical law of cosines method, or
• as points approach antipodes, as affects the haversine method. This shouldn't be too big a problem, as the TM dataset primarily deals with shorter-length segments.
• This should be considerably slower, as it uses even more trig functions than the original, and still has a couple of pows and a sqrt.
Edit: Not that much worse, actually. 0.02s slower average over 10 trials; 0.1s slower median. A pleasant surprise.
• ...and a bunch more addition, subtraction & multiplication on top of that.
• With all these extra opportunities to introduce noise into the process, I suspect that this would end up less precise for our data than the haversine method, with more cumulative rounding error.
• I may give it a go anyway, just for giggles.
• Edit: Done. Results are in the code block above; a sketch of the formula as a C function is at the end of this post. Note how, with 128-bit long doubles converted back to 64-bit doubles before display, we get the same value for both Vincenty & haversine, both with & without parentheses. At 64-bit, the two methods' results are pretty close to one another.
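For reference, here's my reading of that Wikipedia special case as a C function in the same style as the others (a sketch; same radians-in conventions and constants as above):

#include <math.h>

/* Vincenty special case for a sphere: stable both near zero distance and
   near antipodes, at the cost of extra trig plus the pows and sqrt. */
double dist_vincenty(double rlat1, double rlng1, double rlat2, double rlng2)
{
    double dlng = rlng2 - rlng1;
    double y = sqrt(pow(cos(rlat2) * sin(dlng), 2)
             + pow(cos(rlat1) * sin(rlat2)
                 - sin(rlat1) * cos(rlat2) * cos(dlng), 2));
    double x = sin(rlat1) * sin(rlat2) + cos(rlat1) * cos(rlat2) * cos(dlng);
    return atan2(y, x) * 3963.1 * 1.02112;
}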