Newsgroups: comp.risks
X-issue: 17.08
Date: Sun, 23 Apr 1995 22:02:42 +0059 (EDT)
From: Robert J Horn <rjh@world.std.com>
Subject: Floating-Point Time

The opponents of floating-point representation for time have done an
insufficient analysis.  About twenty years ago I was part of a research
group doing extensive time series analysis of weather and related data.  We
needed a good way to represent time.  Fortunately we had a few astronomers
on the team, so time was reasonably well understood.

We chose "second of century", using a double precision floating point
representation.  Analysis showed that this would preserve millisecond
accuracy for the span of interest.  (Actually for all of recorded history
and more.)  Since we were usually satisfied with one-minute accuracy, this
seemed sufficient.  There was a brief debate about using a better time base,
but 12:00:01 AM GMT, 1 January, 1901 was easy to explain to everyone.  There
are a few applications that need better than millisecond precision, but for
most of the world's applications double precision floating point will provide
enough precision for the next few millennia.  (A simple test for those who
are unsure about their needs: do you compensate for the variations in the
rate of the Earth's rotation?  If not, you probably don't need millisecond
accuracy.)
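
As a rough check of that claim, here is a short Python sketch (an
illustration only; the epoch arithmetic is simplified, and our group's
code was certainly not Python).  math.ulp gives the spacing between
adjacent doubles, i.e. the best resolution a "second of century" value
can have at a given magnitude:

    import math

    SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year; close enough here

    # Resolution of a double holding "seconds since 1 January 1901"
    # at various distances from the epoch.
    for years in (1, 100, 1000, 142000):
        t = years * SECONDS_PER_YEAR
        print("%7d years: resolution = %.2e s" % (years, math.ulp(t)))

    #       1 years: resolution = 3.73e-09 s
    #     100 years: resolution = 4.77e-07 s
    #    1000 years: resolution = 3.81e-06 s
    #  142000 years: resolution = 9.77e-04 s

The spacing only reaches a millisecond some 140,000 years from the
epoch, which is the sense in which millisecond accuracy holds for all
of recorded history and more.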

This notation had some interesting side effects.  At the time, floating 
point turned out to be somewhat faster than 64-bit integers due to a 
quirk of hardware.  It also led to excellent compatibility with the 
other time series processing.  Time was just another well-behaved
variable.  This notation eliminated a lot of the mistakes made by the
typical programmer who is ignorant of traditional time notations and 
their problems.  There could have been some round-off issues, but we 
rarely did any arithmetic other than addition or subtraction of two 
times, where millisecond accuracy is maintained.  It even led to a 
simple notation for interval time span data, e.g. "0.01 inches of rain 
fell between 1633 and 1647 on ...", which is how many meteorological 
measurements are made.
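
A minimal sketch of that interval notation (the numbers here are
invented for illustration):

    # Seconds since 1 January 1901 GMT; the base value is made up.
    t_start = 2.9e9 + 16*3600 + 33*60    # 1633 on some day
    t_end   = 2.9e9 + 16*3600 + 47*60    # 1647 the same day

    # Subtraction of two doubles is correctly rounded, so the span
    # errs by at most half an ULP of the result -- well under a
    # millisecond for spans like this.
    print(t_end - t_start)               # 840.0 seconds, i.e. 14 minutes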

The difficult problems were in translation to and from local time.  The most
severe problem was the inherent ambiguity of local time in recent decades.
There are two true times corresponding to each time in the one-hour
overlap when Daylight Saving Time shifts back to Standard Time.  Correctly
resolving this ambiguity was always a headache.  Fortunately most 
professional measurements have been recorded in UTC, or GMT before UTC 
was defined.
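
Modern libraries make the ambiguity concrete.  A sketch with Python's
zoneinfo (a tool that obviously postdates this work): the wall clock
reads 1:30 AM twice in America/New_York on the 2024 fall-back date,
and an explicit "fold" flag is what disambiguates the two:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    tz = ZoneInfo("America/New_York")
    first  = datetime(2024, 11, 3, 1, 30, tzinfo=tz)           # still EDT
    second = datetime(2024, 11, 3, 1, 30, fold=1, tzinfo=tz)   # now EST

    print(first.astimezone(timezone.utc))    # 2024-11-03 05:30:00+00:00
    print(second.astimezone(timezone.utc))   # 2024-11-03 06:30:00+00:00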

A word of caution: double precision floating point is suitable for an
internal representation of UTC, or "absolute" time.  You have to do your 
own analysis if you are interested in timing relative to some event.

Rob Horn    rjh@world.std.com

P.S.  The turn-of-the-century problem has made The NY Times.  It may be so
widely hyped that almost all the problems will be fixed by the time it arrives.
