
LabVIEW Timestamp Overview

What is the LabVIEW timestamp?

The LabVIEW timestamp is a 128-bit data type that represents absolute time. You can interpret this data type as a signed, 128-bit fixed-point number with the radix point fixed between the upper and lower 64 bits:

{
   (i64) seconds since the epoch 01/01/1904 00:00:00.00 UTC (using the Gregorian calendar and ignoring leap seconds),
   (u64) positive fractions of a second
}
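
As a rough sketch, the two components might be modeled in C as shown below. The type and field names are made up for illustration, and the field order reflects the logical interpretation above rather than a guaranteed in-memory layout, so consult the LabVIEW documentation for the exact representation your platform passes to external code.

#include <stdint.h>

/* Illustrative model of the LabVIEW timestamp described above.
 * The names and field order are hypothetical; they mirror the logical
 * { seconds, fraction } interpretation, not necessarily the byte layout
 * seen by a DLL. */
typedef struct {
    int64_t  seconds;   /* whole seconds since 01/01/1904 00:00:00 UTC        */
    uint64_t fraction;  /* positive fraction of a second, in units of 2^-64 s */
} LVTimestamp;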

When is this information important?

This information is important when passing a timestamp to a Call Library Function Node or a custom library. The most significant 64 bits should be interpreted as a 64-bit signed two's complement integer. It represents the number of whole seconds after the Epoch 01/01/1904 00:00:00.00 UTC. The least significant 64 bits should be interpreted as a 64-bit unsigned integer. It represents the number of 2^-64 seconds after the whole seconds specified in the most significant 64 bits. Each tick of this integer represents 0.05421010862427522170... attoseconds.

The absolute time represented by this 128-bit data type is the sum of the two 64-bit components.
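
To make that arithmetic concrete, here is a small C sketch that sums the two components and converts the result to Unix time. The function name and example values are illustrative; the 2082844800-second offset between the 1904 Epoch and the Unix epoch (01/01/1970 UTC) is a commonly used constant. Folding everything into a double is fine for display, but for present-day dates it discards precision below roughly a microsecond.

#include <stdint.h>
#include <stdio.h>

/* Seconds from the LabVIEW Epoch (01/01/1904 UTC) to the Unix epoch
 * (01/01/1970 UTC). */
#define LV_TO_UNIX_EPOCH_OFFSET 2082844800.0

/* Whole seconds plus fraction * 2^-64, collapsed into a double.
 * Good enough for display; not for precise timing. */
static double lv_timestamp_to_seconds(int64_t seconds, uint64_t fraction)
{
    return (double)seconds + (double)fraction / 18446744073709551616.0; /* 2^64 */
}

int main(void)
{
    /* { -1, 2^63 } is 0.5 s before the Epoch (see the table below). */
    int64_t  sec  = -1;
    uint64_t frac = (uint64_t)1 << 63;

    double lv_seconds   = lv_timestamp_to_seconds(sec, frac);
    double unix_seconds = lv_seconds - LV_TO_UNIX_EPOCH_OFFSET;

    printf("LabVIEW time: %.6f s  (Unix time: %.6f s)\n", lv_seconds, unix_seconds);
    return 0;
}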

Interpretation Examples

Date                          Decimal Representation                         Hex Representation
01/01/1904 00:00:00.000 UTC   { 0, 0 }                                       { 0x0, 0x0 }
12/31/1903 23:59:59.500 UTC   { -1, (0.5 * 2^64) == 9223372036854775808 }    { 0xFFFFFFFFFFFFFFFF, 0x8000000000000000 }
12/31/1903 23:59:54.800 UTC   { -6, (0.8 * 2^64) == 14757395258967641293 }   { 0xFFFFFFFFFFFFFFFA, 0xCCCCCCCCCCCCCCCD }
01/01/2002 00:00:00.000 UTC   { 3092688000, 0 }                              { 0xB856AC80, 0x0 }

The second example in the table above represents 0.5 seconds before the Epoch. The whole-seconds field is set to the largest whole second at or below the target time (-1), and the fractional field is then set to 0.5 seconds, so that the sum is -0.5 seconds.

In the third example, some rounding has taken place because 0.8 cannot be represented exactly in binary. Also, if you try to construct some of these values from a double, they will not match exactly, because a double carries only 53 bits of mantissa rather than the 64 bits of the fractional field. In that case, the fractional part of the third example would instead be 0xCCCCCCCCCCCCC000.
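
A short C sketch of that rounding behavior, using a hypothetical helper that splits a double number of seconds into the two fields by flooring toward minus infinity, reproduces both the -0.5 s row exactly and the 0xCCCCCCCCCCCCC000 result for -5.2 s.

#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define TWO_POW_64 18446744073709551616.0  /* 2^64 as a double */

/* Split a time in seconds (relative to the Epoch) into the two timestamp
 * components, flooring toward minus infinity so the fraction stays positive,
 * as in the table above. Illustration only; not a LabVIEW API. */
static void split_seconds(double t, int64_t *sec, uint64_t *frac)
{
    double whole = floor(t);
    *sec  = (int64_t)whole;
    *frac = (uint64_t)((t - whole) * TWO_POW_64);
}

int main(void)
{
    int64_t sec; uint64_t frac;

    /* -5.2 s corresponds to the third row; starting from a double gives
     * 0xCCCCCCCCCCCCC000 rather than 0xCCCCCCCCCCCCCCCD. */
    split_seconds(-5.2, &sec, &frac);
    printf("{ %lld, 0x%016llX }\n", (long long)sec, (unsigned long long)frac);

    /* -0.5 s is exactly representable: { -1, 0x8000000000000000 }. */
    split_seconds(-0.5, &sec, &frac);
    printf("{ %lld, 0x%016llX }\n", (long long)sec, (unsigned long long)frac);
    return 0;
}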

Earlier versions of LabVIEW

LabVIEW 7.0 or earlier used a 64-bit double (DBL) to represent time, which provides roughly 15 significant decimal digits of precision. The number of seconds between 1st Jan 1904 (the timestamp Epoch, or year zero) and 1st Jan 2000 is 3029529600. Representing this as a DBL consumes 10 of those 15 digits, so simply counting from 1904 to the present uses up most of the available precision and leaves very little resolution for hardware timing. Representing time as a DBL was therefore not ideal, since it could not meet industry requirements for timestamp resolution.
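
As a rough illustration of that precision argument, the following C sketch prints the spacing between adjacent double values near the year-2000 timestamp. It comes out to roughly half a microsecond, which is the best resolution a DBL-based absolute timestamp can offer at that magnitude.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Approximately the number of seconds from 1904 to 2000. */
    double t = 3029529600.0;

    /* Spacing between adjacent doubles at this magnitude: about 0.5 us. */
    double resolution = nextafter(t, INFINITY) - t;

    printf("DBL resolution near the year 2000: %.9g s\n", resolution);
    return 0;
}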

 
