Author Topic: Rounding errors & point-to-point great circle distance: the curious case of Oscar's mileage on ME103

Offline yakra

This is really obscure, but I think it's pretty neat:

Oscar's mileage on ME103 is just about exactly 7.985. His stats in oscar.log are, according to...
Python: 7.99 (or 7.985000010811417 to be more precise)
   C++: 7.98 (or 7.984999986832708 to be more precise)
The difference works out to less than 0.04 mm.

This is the only instance in the whole database of the difference being just enough to throw things off all the way to the 2nd decimal place.
HighwayData @ 0c0573b | UserData @ 207c8eb

Why did this happen?
In C++ I was doing the degrees->radians conversion by
double rlat1 = lat * pi/180;
Changing it to
double rlat1 = lat * (pi/180);
yields the same result as Python. Presto.
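To see the effect in isolation, here's a minimal standalone sketch (not the siteupdate code; the latitude is just a made-up example). Depending on the input, the two orderings can agree exactly or differ by a bit or two in the last place:
Code: [Select]
// Standalone sketch only -- not the siteupdate code. The latitude is a made-up example.
#include <cstdio>

int main()
{   const double pi = 3.14159265358979323846;
    double lat = 43.089101;

    double r1 = lat * pi/180;    // evaluated left-to-right: (lat*pi) rounds, then /180 rounds
    double r2 = lat * (pi/180);  // pi/180 is folded to a constant first, then one multiply rounds
    printf("lat * pi/180   = %.17g\n", r1);
    printf("lat * (pi/180) = %.17g\n", r2);
    printf("difference     = %g\n", r1 - r2);   // may be 0 or a few ULPs, depending on lat
}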

I'll commit the change to the C++ siteupdate program. Oscar's put a lot of time & effort into racking up all his mileage over the years, and deserves to not be shortchanged. :)
« Last Edit: April 28, 2019, 02:47:59 am by yakra »

Offline yakra

Oh my, but this rabbit hole goes deep.
Split from the Project Statistics thread.
The concept at play here is that of cumulative rounding errors when working with finite-precision floating-point numbers.
The values in the last post were calculated using 64-bit double floats. Using 128-bit long doubles instead, we don't see a difference within 15 decimal places. Whether using parentheses or not, we get 7.985000019074437 mi. The loss of precision becomes too small for us to (easily) see again. For a while, at least.
Edit: Wait, no, scratch that. I only used 128-bit long doubles for the simple arithmetic, but still used sin, cos & acos, which receive & return 64-bit doubles. Switching to sinl, cosl & acosl, which receive & return 128-bit long doubles, we get 7.984999988845444 with parentheses and 7.984999988841631 without. Our data start to differ in the 12th decimal place rather than the 8th, and we are now firmly below the 7.985 mark.
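For anyone playing along at home, the distinction between the two trig paths is roughly this (standalone sketch with a made-up coordinate, not the actual test program):
Code: [Select]
// Standalone sketch, not the actual test program; the coordinate is made up.
#include <stdio.h>
#include <math.h>

int main()
{   double      lat = 43.089101;
    double      pi  = 3.141592653589793;
    long double piL = 3.141592653589793238462643383279503L;

    // 64-bit path: double deg->rad conversion, double cos()
    printf("cos : %.20f\n",  cos ( lat * (pi /180)    ));
    // extended-precision path: long double conversion, cosl()
    printf("cosl: %.20Lf\n", cosl( lat * (piL/180.0L) ));
}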

Here's Jim's formula for distance calculation, rewritten in C:
acos(cos(rlat1)*cos(rlng1)*cos(rlat2)*cos(rlng2) + cos(rlat1)*sin(rlng1)*cos(rlat2)*sin(rlng2) + sin(rlat1)*sin(rlat2)) * 3963.1 * 1.02112
(The 3963.1 is the Earth's radius in miles; the 1.02112 is the CHM fudge factor.)

Factor out cos(rlat1)*cos(rlat2), substitute the cosine angle-difference identity, and we get:
acos(cos(rlat1)*cos(rlat2)*cos(rlng1-rlng2) + sin(rlat1)*sin(rlat2)) * 3963.1 * 1.02112
This is the "spherical law of cosines" formula for great-circle distance given here, and it takes us from performing 11 trig functions down to 6.
    • Factoring out the cos(rlat1)*cos(rlat2) alone causes no change in the resulting value. The author of that page suggests the compiler already makes this optimization on its own; sounds legit. Thus, it probably won't affect speed either.
    • OTOH, substituting the cosine identity does yield slightly different segment lengths. And it is slightly faster.
    • Using 64-bit double-precision floats, there's less difference between the results with & without parentheses. My theory is that, with a couple of computations cut out of the process, there are fewer remaining steps left to add to the cumulative rounding errors.
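Written out as standalone functions, the two variants look something like this (just a sketch: the function names, and the convention that the arguments are already in radians, are mine, not the actual Waypoint::distance_to code):
Code: [Select]
// Sketch only -- names & calling convention are illustrative, not the real code.
#include <cmath>

const double EARTH_RADIUS_MI = 3963.1;   // Earth's radius in miles
const double CHM_FUDGE       = 1.02112;  // CHM fudge factor

// Jim's original formula, as given above (all arguments in radians)
double dist_orig(double rlat1, double rlng1, double rlat2, double rlng2)
{   return acos( cos(rlat1)*cos(rlng1)*cos(rlat2)*cos(rlng2)
               + cos(rlat1)*sin(rlng1)*cos(rlat2)*sin(rlng2)
               + sin(rlat1)*sin(rlat2) ) * EARTH_RADIUS_MI * CHM_FUDGE;
}

// Spherical law of cosines: cos(rlat1)*cos(rlat2) factored out, then the
// cosine angle-difference identity applied
double dist_slc(double rlat1, double rlng1, double rlat2, double rlng2)
{   return acos( cos(rlat1)*cos(rlat2)*cos(rlng1-rlng2)
               + sin(rlat1)*sin(rlat2) ) * EARTH_RADIUS_MI * CHM_FUDGE;
}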

However!...
Haversine formula:
Quote from: Wikipedia
On computer systems with low floating-point precision, the spherical law of cosines formula can have large rounding errors if the distance is small (if the two points are a kilometer apart on the surface of the Earth, the cosine of the central angle is near 0.99999999). For modern 64-bit floating-point numbers, the spherical law of cosines formula, given above, does not have serious rounding errors for distances larger than a few meters on the surface of the Earth. The haversine formula is numerically better-conditioned for small distances
asin(sqrt(pow(sin((rlat2-rlat1)/2),2) + cos(rlat1) * cos(rlat2) * pow(sin((rlng2-rlng1)/2),2))) * 7926.2 * 1.02112
    • This is a tiny bit faster (9.3s vice 9.7s for Creating per-traveler stats logs and augmenting data structure) than Jim's formula that we're currently using: we've gotten rid of some trig functions, but added a couple of pows and a sqrt into the mix. I suspect that the spherical law of cosines formula, simplified via the cos angle diff identity, may be a smidgen faster, but have yet to test. Edit: Tested. 0.21s faster average, 0.05s slower median; essentially the same speed. In that case, why not use the haversine formula? In C++, at least. It's possible that the compiler is doing some SIMD optimization; results may be different for Python.
    • It does seem at first glance to be more precise; in this case, we don't get a parens/no parens difference until the 13th decimal place, and the difference from the 64-bit to the 128-bit version is also pretty small.
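In the same style, a haversine sketch (again illustrative only; I've written the pows as plain multiplications, which shouldn't change anything):
Code: [Select]
// Sketch only, same conventions as above (arguments in radians).
#include <cmath>

double dist_haversine(double rlat1, double rlng1, double rlat2, double rlng2)
{   double sdlat = sin((rlat2-rlat1)/2);
    double sdlng = sin((rlng2-rlng1)/2);
    // 7926.2 = Earth's diameter in miles (2 * 3963.1); 1.02112 = CHM fudge factor
    return asin( sqrt( sdlat*sdlat + cos(rlat1)*cos(rlat2)*sdlng*sdlng ) )
         * 7926.2 * 1.02112;
}
The results for all four methods on this segment, at both precisions: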
Code: [Select]
yakra@BiggaTomato:~/proj/tmtools/me103oscar$ ./me103oscar

using 64-bit precision:
-------------------------------------------
Jim's method with parens: 7.985000010811417
 Sph Law Cos with parens: 7.985000008192277
    Vincenty with parens: 7.984999988839341
   Haversine with parens: 7.984999988839563

Jim's method sans parens: 7.984999986832708
 Sph Law Cos sans parens: 7.984999999901063
    Vincenty sans parens: 7.984999988839544
   Haversine sans parens: 7.984999988839754


using 128-bit precision:
-------------------------------------------
Jim's method with parens: 7.984999988845444
 Sph Law Cos with parens: 7.984999988839575
    Vincenty with parens: 7.984999988839748
   Haversine with parens: 7.984999988839748

Jim's method sans parens: 7.984999988841631
 Sph Law Cos sans parens: 7.984999988839235
    Vincenty sans parens: 7.984999988839748
   Haversine sans parens: 7.984999988839748
But then, I've only been looking at one traveled segment, and can't generalize from that one piece of info to the whole dataset...

It may be interesting to run some tests with the different formulas & see what produces the smallest cumulative error across all segments (by introducing "noise" via comparing the Deg->Rad conversions with & without parentheses). A kinda useless low-priority task, but just the kind of thing I like to get into. :)
My hypothesis is that the haversine formula will produce the most precise results, with the modified spherical law of cosines formula being fastest. Then, it'll be a matter of deciding on the speed-vs-precision trade-off. Or maybe not; more on this below...

Pretty neat how a small rounding error of less than 1/20 millimeter works out to a difference in Oscar's reported mileage of almost the length of a 53' semi trailer (7.99 vs 7.98; that 0.01 mi gap is 52.8 ft).
If we take the greater accuracy of the haversine figure as gospel, then it may be that Oscar has just under 7.985 mi on ME103 after all, or "7.98" mi, if you will.

Postscript:
Wikipedia also mentions a "special case of the Vincenty formula for an ellipsoid with equal major and minor axes" that doesn't suffer from rounding errors...
    • as points approach 0 distance, as affects the spherical law of cosines method, or
    • as points approach antipodes, as affects the haversine method. This shouldn't be much of a problem anyway, as the TM dataset primarily deals with shorter segments.

• This should be considerably slower, as it uses even more trig functions than the original, and still has a couple pows and a sqrt.
   Edit: Not that much worse, actually. 0.02s slower average over 10 trials; 0.1s slower median. A pleasant surprise.
• ...and a bunch more addition, subtraction & multiplication on top of that.
• With all these extra opportunities to introduce noise into the process, I suspect that this would end up less precise for our data than the haversine method, with more cumulative rounding error.
• I may give it a go anyway, just for giggles. :P
Edit: Done. Results are in the code block above. Note how, with 128-bit long doubles converted back to 64-bit doubles before display, we get the same value for both Vincenty & haversine, both with & without parentheses. At 64 bits, the two methods' results are pretty close to one another.
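For reference, Wikipedia's special-case formula translated into the same style as the sketches above (the intermediate names a, b, c are just for readability):
Code: [Select]
// Sketch of the special case of the Vincenty formula for a sphere, same
// conventions as the other sketches (arguments in radians).
#include <cmath>

double dist_vincenty(double rlat1, double rlng1, double rlat2, double rlng2)
{   double dlng = rlng2 - rlng1;
    double a = cos(rlat2)*sin(dlng);
    double b = cos(rlat1)*sin(rlat2) - sin(rlat1)*cos(rlat2)*cos(dlng);
    double c = sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(dlng);
    return atan2( sqrt(a*a + b*b), c ) * 3963.1 * 1.02112;
}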
« Last Edit: April 30, 2019, 01:57:17 am by yakra »

Offline yakra

Quote
My hypothesis is that the haversine formula will produce the most precise results, with the modified spherical law of cosines formula being fastest. Then, it'll be a matter of deciding on the speed-vs-precision trade-off.
Or maybe not:
While Creating per-traveler stats logs and augmenting data structure, we iterate thru hundreds of travelers & thousands of their segments, calculating segment length multiple times.
We already iterate thru HighwaySegments and measure them while Computing stats. We can just store the computed value (should only take an additional 6.4 MB) with the HighwaySegment & retrieve it later.
Better still, we could calculate & store segment length during object construction, making use of idle CPU time while waiting for disk I/O.
This may even render the question of optimizing the measurement formula for speed a moot point.
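Something along these lines, roughly (placeholder types only; the real Waypoint & HighwaySegment classes have a lot more going on):
Code: [Select]
// Rough sketch of the idea; placeholder types, not the actual classes.
#include <cmath>

struct Waypoint_
{   double lat, lng;   // degrees

    double distance_to(const Waypoint_& o) const
    {   // haversine, as discussed upthread
        const double pi = 3.141592653589793;
        double r1 = lat*(pi/180), r2 = o.lat*(pi/180);
        double sdlat = sin((r2-r1)/2), sdlng = sin((o.lng-lng)*(pi/180)/2);
        return asin(sqrt(sdlat*sdlat + cos(r1)*cos(r2)*sdlng*sdlng)) * 7926.2 * 1.02112;
    }
};

struct HighwaySegment_
{   Waypoint_ *w1, *w2;
    double length;   // measured once at construction, retrieved everywhere else

    HighwaySegment_(Waypoint_* a, Waypoint_* b)
      : w1(a), w2(b), length(a->distance_to(*b)) {}
};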

Offline Jim

Interesting stuff to consider, especially computing once and storing to do that work while waiting for disk I/O anyway.

I admit not to have thought at all about how to do this, as the formulas I used were taken directly from old CHM code.

The only thing I wonder about a bit here is that any change might mean some NMP FP entries could be invalidated.

Offline yakra

Quote
It may be interesting to run some tests with the different formulas & see what produces the smallest cumulative error across all segments (by introducing "noise" via comparing the Deg->Rad conversions with & without parentheses). A kinda useless low-priority task, but just the kind of thing I like to get into. :)
Wasn't even hard to do, actually. I'd already written most of the necessary code when figuring out the above post. :P

Quote
My hypothesis is that the haversine formula will produce the most precise results, with the modified spherical law of cosines formula being fastest.
Hypothesis confirmed. The results of Oscar's travels on ME103 do extend to the dataset as a whole:
cumul_jim_error = 0.000575115280650134709
cumul_slc_error = 0.000453815759200803669
cumul_vin_error = 2.026717349801763e-07
cumul_hav_error = 1.96066878074526232e-07
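For the record, the comparison is structured roughly like this (a sketch, not the actual test harness; dist stands in for whichever formula is under test):
Code: [Select]
// Sketch of the "noise" comparison, not the actual test harness.
// For each segment, measure it twice -- once with the deg->rad conversion
// written as lat*pi/180, once as lat*(pi/180) -- and accumulate the
// absolute difference between the two results.
#include <cmath>
#include <vector>

struct Seg { double lat1, lng1, lat2, lng2; };   // degrees; placeholder type

double cumulative_error(const std::vector<Seg>& segs,
                        double (*dist)(double, double, double, double))  // formula under test; takes radians
{   const double pi = 3.141592653589793;
    double total = 0;
    for (const Seg& s : segs)
    {   double with_parens = dist(s.lat1*(pi/180), s.lng1*(pi/180),
                                   s.lat2*(pi/180), s.lng2*(pi/180));
        double sans_parens = dist(s.lat1*pi/180, s.lng1*pi/180,
                                   s.lat2*pi/180, s.lng2*pi/180);
        total += std::fabs(with_parens - sans_parens);
    }
    return total;
}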


Quote from: Jim
Interesting stuff to consider, especially computing once and storing to do that work while waiting for disk I/O anyway.
In Python's case, we actually don't benefit from overlapping I/O & computation when multithreading. I've noticed since my early days of getting into the code that siteupdate.py will run noticeably slower with 4 threads than with 1, on both BiggaTomato & noreaster. Makes sense to me; there's enough computation going on anyway when reading WPTs to redline one CPU, and managing more threads just increases the overhead.
Still, once we're past Computing stats, we should see a time benefit when Creating per-traveler stats log entries and augmenting data structure.

Quote from: Jim
I admit not to have thought at all about how to do this, as the formulas I used were taken directly from old CHM code.
I already had my eye on speeding up stats creation, then got into the Waypoint::distance_to code to investigate that one odd diff in oscar.log. Looking into combining things via trig identities seemed worthwhile so I headed to Wikipedia for a refresher. From there, the haversine formula was a serendipitous discovery.

Quote from: Jim
The only thing I wonder about a bit here is that any change might mean some NMP FP entries could be invalidated.
Not NMPs; those are based entirely on lat/lng. But for regular datachecks, yes, there's that possibility.
DIFFs of datacheck.log, nearmatchfps.log, & unmatchedfps.log look good; the only differences are the file creation dates/times.
I can easily check that again once I make any changes to siteupdate.py.

Yikes. This is what happens when I get hung up on something...

Offline yakra

Storing & retrieving segment lengths didn't save as much time as I'd hoped, only about half a second. At least it's something.
The results may be more significant in Python, where the trig functions have more of a time premium (if area graph generation is any indication), and this section of the code takes a lot more time overall.
I'll do #203 first though.

Offline yakra

Python on BiggaTomato
Median time out of 10 trials, in seconds, without storing & retrieving segment lengths yet:

                      Vincenty   Orig formula   Haversine   SphLawCos
   Computing stats       33.05          31.5        31          30.35
Per-traveler stats       64.05          56.3        55.3        50.65
          subtotal       97.1           87.8        86.3        81
  area data graphs      212.95          68.35       62.8        52
             total      310.05         156.15      149.1       133

Here, with Python taking longer than C++ to perform the calculations, and without the potential benefit of compiler optimizations, the differences in algorithm speed become more apparent.
We can see that the Vincenty formula does take quite a bit longer here, especially for the area graphs, which compare the distance to every vertex in the graph structure for every graph. It's included only for reference purposes; I have no interest in implementing it, as I expect its results to be marginally less precise than the haversine formula's.
The spherical law of cosines formula does yield a not-insignificant speed improvement.
The haversine formula is a wee bit faster than the original, but not as fast as SphLawCos.

The stats' subtotal of time for each formula is listed separately from area graph generation time, because stats and graphs rely on different parts of the code. If we want, we can tune stats for precision using haversine, and tune area graphs for speed using SphLawCos.

Segment length storage & retrieval:
For stats, do we use SphLawCos for speed, or haversine for precision?
As discussed upthread, storing & retrieving segment lengths can save us some time when Creating per-traveler stats log entries and augmenting data structure. Doing so, the time for this task should be the same whatever formula is used.
In Python, reading .WPTs (which includes HighwaySegment construction) is CPU-bound. Any time saved by retrieving while Computing stats would be time added while reading .WPTs, so let's assume the same Computing stats time for simplicity's sake.
The haversine formula's performance hit while Computing stats will only be about 0.5-0.75s (or ~2%) depending on how we want to measure things, or about 2.1s worst-case, making it easier to justify using this method.

Whatever we end up doing, best case scenario, I'd like to see C++ & Python producing identical, DIFFable values for the SQL tables. I haven't checked on this since adding the parentheses to the C++ deg->rad conversion.
« Last Edit: May 18, 2019, 10:52:40 pm by yakra »

Offline yakra

Quote
Whatever we end up doing, best case scenario, I'd like to see C++ & Python producing identical, DIFFable values for the SQL tables. I haven't checked on this since adding the parentheses to the C++ deg->rad conversion.
C++ and Python SQL files are identical up to the routes table.
From there, roughly half the time, Python will round otherwise-identical values off to one less significant figure, for reasons I can't really fathom. If I have the C++ version output a couple more digits, there's no indication things are straddling the 0.5*10^-n boundary. Besides, it's happening far too often. And when differences in reported value did occur as a result of the formula used, they showed up well before the 16th sig fig.
If I have my heart set on DIFFable SQL files, I may want to look into formatted strings.
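One option, if I go the C++17 route (a sketch, assuming <charconv> floating-point support is available): std::to_chars with no precision argument produces the shortest decimal string that round-trips back to the same double, which is also what Python 3's repr()/str() produce, so for values in this range the two programs should agree digit-for-digit.
Code: [Select]
// Sketch only, assuming C++17 <charconv> with floating-point support.
#include <charconv>
#include <cstdio>

void print_shortest(double x)
{   char buf[32];                                             // plenty for a double
    auto res = std::to_chars(buf, buf + sizeof(buf) - 1, x);  // shortest round-trip form
    *res.ptr = '\0';
    printf("%s\n", buf);
}

int main()
{   print_shortest(7.985000010811417);   // values from earlier in the thread
    print_shortest(15.52);
}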

Offline yakra

https://github.com/TravelMapping/DataProcessing/pull/224
• Stores & retrieves segment length
• Switches to Spherical law of cosines formula for checking area graph membership.
Saves 30 s altogether in siteupdate.py on noreaster.

New great-circle distance formulas have also been added to Waypoint.distance_to, commented out; we're still using the original formula for now.

Offline yakra

Jim, thanks for the merge of #224.
Shall I go ahead and switch over to the haversine formula for Waypoint.distance_to, for best precision?
Now that we're storing & retrieving segment lengths, as of HighwayData @ f3c9d9c284bed1773dbde38d5af3bf4996d78431 and UserData @ 2083294b98e85119845df3dc7c7e934cbf6921ac, this method only takes about 0.6s longer on noreaster than the simplified Spherical Law of Cosines method. This is pretty darn close to our margin of error, and still a wee bit faster than the current original formula.

Other than log creation times:
• No DIFF to datacheck.log
• No DIFF to highwaydatastats.log
• No DIFF to stats CSVs
• The only user log DIFF is:
Code: [Select]
diff -r logs/users.old/oscar.log logs/users/oscar.log
7664c7664
< ME103: 7.99 of 15.52 mi (51.4%)
---
> ME103: 7.98 of 15.52 mi (51.4%)

Offline Jim

I'd say go for it.
