2026-04-18 · 9 min read

Understanding Unix Timestamps: What They Are and Why They Matter

A practical guide to epoch time, how Unix timestamps work across programming languages, and common pitfalls to avoid.

James Whitfield

Founder & Lead Editor

What Is a Unix Timestamp?

A Unix timestamp is a single integer that represents a moment in time as the number of seconds elapsed since midnight on January 1, 1970, Coordinated Universal Time (UTC). This reference point is called the Unix epoch. The timestamp 0 corresponds to exactly that moment. The timestamp 1,000,000,000 — one billion seconds later — corresponds to September 9, 2001 at 01:46:40 UTC.

The appeal of Unix timestamps is that they reduce an arbitrary date and time to a single number that is easy to store, compare, and calculate with. Finding the duration between two events is just subtraction. Sorting events chronologically is just sorting by timestamp. No time zone, locale, or calendar system complicates the arithmetic.
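
To make this concrete, here is a minimal Python sketch using only the standard library. The two converted values are the ones discussed above; the subtraction example uses arbitrary illustrative timestamps:

```python
from datetime import datetime, timezone

# Timestamp 0 is the epoch; 1,000,000,000 is one billion seconds later.
print(datetime.fromtimestamp(0, tz=timezone.utc))              # 1970-01-01 00:00:00+00:00
print(datetime.fromtimestamp(1_000_000_000, tz=timezone.utc))  # 2001-09-09 01:46:40+00:00

# Durations are plain subtraction; chronological order is numeric order.
start, end = 1_713_600_000, 1_713_603_600  # arbitrary example values
print(end - start)              # 3600 seconds, i.e. one hour
print(sorted([end, 0, start]))  # [0, 1713600000, 1713603600]
```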

Where Unix Timestamps Come From

Unix timestamps originate from the Unix operating system, developed at Bell Labs in the late 1960s and early 1970s. The choice of January 1, 1970 as the epoch was largely practical — it was close to when the system was being developed, it produced reasonably small integers for historical dates, and no special significance was attached to it.

The standard has since been adopted far beyond Unix systems. The C standard library, Python, JavaScript, Java, Go, and virtually every other major programming language have built-in functions for working with Unix timestamps. HTTP headers use them. Database systems store them. Log files are full of them. Understanding what a Unix timestamp is saves hours of confusion when reading system output or debugging time-related behavior.

Seconds, Milliseconds, and Microseconds

The original Unix timestamp is measured in seconds. But many modern systems use milliseconds (thousandths of a second) or microseconds (millionths of a second) for higher precision. JavaScript's Date.now() returns milliseconds. Many database systems store timestamps in microseconds. Some distributed systems use nanoseconds.

This creates a common confusion: a timestamp like 1,713,600,000 is April 20, 2024 when read as seconds. But 1,713,600,000,000, the same number with three extra zeros, is April 20, 2024 only if you interpret it as milliseconds. Accidentally treating a millisecond timestamp as seconds produces a date around the year 56,000. In application code the mistake is usually obvious at a glance, but in configuration files or analytics queries it can slip through.

When in doubt, check whether a timestamp falls in a plausible range. Unix second timestamps for dates from 2000 to 2030 fall between roughly 946 million and 1,893 million. If a timestamp has more than 10 digits, it is almost certainly milliseconds or finer.
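
A simple range check captures that heuristic. This is a sketch, not a standard: guess_unit is a hypothetical helper, and the bounds assume the 2000-to-2030 window described above.

```python
def guess_unit(ts: int) -> str:
    """Heuristic only: classify a timestamp by its plausible range.

    Assumes the dates of interest fall roughly between 2000 and 2030;
    adjust the bounds for your own data.
    """
    if 946_000_000 <= ts <= 1_893_000_000:
        return "seconds"
    if 946_000_000_000 <= ts <= 1_893_000_000_000:
        return "milliseconds"
    if 946_000_000_000_000 <= ts <= 1_893_000_000_000_000:
        return "microseconds"
    return "unknown"

print(guess_unit(1_713_600_000))      # seconds
print(guess_unit(1_713_600_000_000))  # milliseconds
```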

The Year 2038 Problem

On 32-bit systems, Unix timestamps are stored as signed 32-bit integers. The maximum value of a signed 32-bit integer is 2,147,483,647, which corresponds to 03:14:07 UTC on January 19, 2038. One second later, a 32-bit signed timestamp overflows and wraps around to -2,147,483,648, which corresponds to December 13, 1901.
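
The wraparound is easy to demonstrate in a few lines of Python; the to_int32 helper below is just an illustration of how a signed 32-bit register reinterprets the overflowed value:

```python
from datetime import datetime, timezone

MAX_INT32 = 2**31 - 1  # 2,147,483,647

def to_int32(n: int) -> int:
    """Interpret the low 32 bits of n as a signed 32-bit integer."""
    n &= 0xFFFFFFFF
    return n - 0x1_0000_0000 if n >= 0x8000_0000 else n

# The last moment a signed 32-bit time_t can represent:
print(datetime.fromtimestamp(MAX_INT32, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# One second later the value wraps to the most negative 32-bit integer:
wrapped = to_int32(MAX_INT32 + 1)  # -2,147,483,648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00 (negative timestamps can raise OSError
# on some platforms, notably Windows)
```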

This is the Unix equivalent of the Y2K problem, and it is not fully resolved. While most modern systems use 64-bit integers for timestamps (which will not overflow for roughly 292 billion years), embedded systems, IoT devices, and legacy software still use 32-bit representations. Infrastructure running on such systems will encounter the 2038 problem if not updated.

For most application developers, 64-bit timestamps have been standard for over a decade. But if you are working with embedded firmware, legacy databases, or binary file formats that define timestamp fields as 32-bit integers, 2038 is a real deadline that requires planning.

Time Zones and Unix Timestamps

One of the most useful properties of Unix timestamps is that they are inherently UTC-based and time-zone-agnostic. The same moment in time has exactly one Unix timestamp regardless of where in the world the observer is. Converting to local time happens at the display layer, not the storage layer.

This design principle — store UTC, display local — is the recommended pattern for any system that operates across time zones. Storing local timestamps creates ambiguity during DST transitions: the same local timestamp can occur twice when clocks fall back, making it impossible to determine which of the two identical timestamps came first without additional context.
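
Python's zoneinfo module makes this ambiguity concrete: the fold attribute selects between the two occurrences of a repeated local time, and each maps to a distinct Unix timestamp. The date and zone below are illustrative (the 2024 US fall-back occurred on November 3):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

tz = ZoneInfo("America/New_York")

# 01:30 local time on 2024-11-03 occurs twice: once in EDT, once in EST.
first = datetime(2024, 11, 3, 1, 30, tzinfo=tz, fold=0)   # first pass (EDT)
second = datetime(2024, 11, 3, 1, 30, tzinfo=tz, fold=1)  # second pass (EST)

# Same wall-clock reading, two different Unix timestamps, one hour apart.
print(first.timestamp(), second.timestamp())  # 1730611800.0 1730615400.0
```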

When you see a Unix timestamp in a log file, convert it to UTC first, then apply the relevant local time zone if needed. Never assume a Unix timestamp is in local time: by definition it counts seconds from the UTC epoch, and a system that emits epoch values offset to local time is misconfigured, not following a different convention.
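
As a sketch of that workflow (the timestamp and the Europe/Berlin zone are arbitrary examples):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts = 1_713_600_000  # e.g. a value pulled from a log line

utc = datetime.fromtimestamp(ts, tz=timezone.utc)
local = utc.astimezone(ZoneInfo("Europe/Berlin"))  # only at the display step

print(utc.isoformat())    # 2024-04-20T08:00:00+00:00
print(local.isoformat())  # 2024-04-20T10:00:00+02:00
```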

Working With Timestamps in Common Languages

Python

import time gives you time.time() for the current timestamp in seconds, returned as a float. The datetime module's datetime.fromtimestamp(ts) converts to local time; datetime.utcfromtimestamp(ts) returns a naive UTC datetime, but it is deprecated as of Python 3.12. The preferred modern approach is datetime.fromtimestamp(ts, tz=timezone.utc), which returns a timezone-aware object.
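
A minimal round trip with those pieces:

```python
import time
from datetime import datetime, timezone

ts = time.time()  # current Unix timestamp as a float, in seconds

# Timezone-aware UTC datetime (the preferred modern form)
dt = datetime.fromtimestamp(ts, tz=timezone.utc)

# And back: .timestamp() returns seconds since the epoch
assert abs(dt.timestamp() - ts) < 1e-6

print(int(ts), dt.isoformat())
```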

JavaScript

Date.now() returns the current timestamp in milliseconds. new Date(ts * 1000) converts a second-based timestamp to a Date object (note the multiplication). Calling getTime() on a Date instance returns its timestamp in milliseconds. Always verify whether your source timestamp is in seconds or milliseconds before converting.
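
For example (the second-based input is the illustrative value used earlier in this post):

```javascript
const nowMs = Date.now();                 // milliseconds since the epoch
const nowSeconds = Math.floor(nowMs / 1000);

// Converting a second-based timestamp requires multiplying by 1000
const d = new Date(1713600000 * 1000);
console.log(d.toISOString());             // "2024-04-20T08:00:00.000Z"
console.log(d.getTime());                 // 1713600000000 (milliseconds)
```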

SQL

Most SQL databases have functions like FROM_UNIXTIME() (MySQL) or to_timestamp() (PostgreSQL) to convert Unix timestamps to datetime values. The output time zone depends on the session or server time zone setting, so make the conversion explicit: to_timestamp(ts) AT TIME ZONE 'UTC'.
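
A sketch in both dialects (the input timestamp is illustrative, and exact output formatting varies by client):

```sql
-- PostgreSQL: to_timestamp() returns a timestamptz; AT TIME ZONE 'UTC'
-- pins the result to UTC regardless of the session time zone.
SELECT to_timestamp(1713600000) AT TIME ZONE 'UTC';   -- 2024-04-20 08:00:00

-- MySQL: FROM_UNIXTIME() uses the session time zone, so set it explicitly.
SET time_zone = '+00:00';
SELECT FROM_UNIXTIME(1713600000);                     -- 2024-04-20 08:00:00
```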

Using a Converter for Quick Lookups

For quick inspection of timestamps in logs, API responses, or configuration files, an epoch converter is faster than writing a one-off script. Paste the timestamp, confirm whether it is in seconds or milliseconds, and get the human-readable UTC equivalent instantly. This is especially useful when debugging time-related issues where a timestamp that looks plausible at a glance may actually point to 1970 or the year 56,000.