I came up with another debugging tool for the floating point error case ...
How many bits are needed to store the ratio of total milliseconds from 1970 (1685632697000) divided by those two minutes (118458)?
1685632697000 / 118458 ~ 14,229,791.9
2^10 = 1024, so 16,000,000 ~ 2^4 * 2^10 * 2^10, meaning 16,000,000 is roughly 2^24. Since 14,229,791.9 falls between 2^23 and 2^24, just the integer part of the ratio takes about 24 bits.
A single precision floating point number's mantissa is 24 bits (23 stored, plus one implicit leading bit).
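This back-of-the-envelope count can be checked with a short Python sketch (stdlib only; the constants are the ones from the log above). Round-tripping the quotient through IEEE 754 single precision via struct shows the fractional part does not survive:

```python
import math
import struct

total_ms = 1685632697000   # milliseconds since 1970, from above
span_ms = 118458           # the roughly-two-minute span, from above

ratio = total_ms / span_ms
print(ratio)               # about 14.23 million
print(math.log2(ratio))    # between 23 and 24 -> ~24 bits of mantissa needed

# Round-trip through single precision (24-bit mantissa). At this
# magnitude the spacing between adjacent float32 values is 1.0, so
# the fractional part of the ratio gets rounded away.
as_f32 = struct.unpack('f', struct.pack('f', ratio))[0]
print(as_f32, ratio - as_f32)
```

So a 24-bit mantissa holds the integer part of the ratio but nothing after the decimal point, which matches the estimate above.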
How does that number of bits compare to the mantissa length of an AI2 floating point number?
Unfortunately, according to
AI2 floats are 64 bits.
A 64-bit IEEE 754 double carries a 53-bit mantissa, far more than the ~24 bits needed here, so I can't blame the difference on mantissa length.
That still does not rule out order of operations: depending on how the formula is evaluated, precision could be lost in intermediate steps before the final result is produced.
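To see why the order of intermediate rounding could matter, here is a sketch (my own illustration, not the actual formula) that rounds the big timestamp to single precision before dividing. The rounding error in the timestamp alone is thousands of milliseconds, and it propagates into the quotient:

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE 754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

total_ms = 1685632697000   # milliseconds since 1970, from above
span_ms = 118458           # the roughly-two-minute span, from above

# Rounding the big timestamp to single precision before dividing
# already throws away real milliseconds: the float32 spacing near
# 1.7e12 is 2^17 = 131072, so the timestamp can be off by tens of
# thousands of ms before the division even happens.
rounded_ts = f32(total_ms)
print(rounded_ts - total_ms)

# Quotient from the pre-rounded timestamp vs. the exact one:
early = rounded_ts / span_ms
exact = total_ms / span_ms
print(early - exact)
```

The lesson is that an intermediate result can be rounded at a much larger magnitude than the final answer, so reordering the formula to divide (shrink the magnitude) before multiplying can preserve precision.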