Timestamp Converter

Convert a Unix timestamp to a human-readable date — or pick a date and get the epoch value back. Both fields stay in sync. Supports seconds and milliseconds precision.

What a Unix timestamp is

A Unix timestamp (also called epoch time or POSIX time) is an integer counting the number of seconds that have elapsed since 00:00:00 UTC on January 1, 1970 — the Unix epoch. It has no timezone embedded; it always represents a fixed, unambiguous point in time regardless of where you are.

The format is universal: every major language, database, and operating system understands it. This makes Unix timestamps the standard mechanism for storing and comparing times across heterogeneous systems.

Because it is a plain integer, time arithmetic is trivial: add 3600 to advance one hour, subtract two timestamps to get elapsed seconds, compare two timestamps with a simple integer comparison.

Epoch zero: 0 → 1970-01-01 00:00:00 UTC
Typical current (s): 1700000000 → 2023-11-14 22:13:20 UTC
Add 1 hour: 1700000000 + 3600 = 1700003600 → 2023-11-14 23:13:20 UTC
Elapsed time: 1700003600 − 1700000000 = 3600 seconds = 1 hour
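
To make the arithmetic concrete, here is the same sequence as a JavaScript sketch (plain integer operations, no library needed):

const t = 1700000000;       // 2023-11-14 22:13:20 UTC, in seconds
const later = t + 3600;     // advance one hour → 1700003600
const elapsed = later - t;  // elapsed seconds → 3600
const ordered = t < later;  // ordering is a plain integer comparison → true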

10-digit vs 13-digit: seconds or milliseconds?

The digit count is the fastest way to tell seconds from milliseconds at a glance.

A 10-digit value is seconds. Current epoch time in seconds has been 10 digits since 2001 and will remain 10 digits until the year 2286. Server-side runtimes (the Unix kernel, Go's time.Now().Unix(), Python's time.time(), PostgreSQL's EXTRACT(epoch …)) all work in seconds (Python's time.time() returns a float, but the unit is still seconds).

A 13-digit value is milliseconds. JavaScript's Date.now(), browser Performance APIs, and many JavaScript libraries return milliseconds. A 13-digit value is simply the 10-digit value multiplied by 1000.

If you see year 1970 (or another wildly early date) when you expected 2020+, you most likely passed a seconds timestamp to code expecting milliseconds — multiply by 1000. Conversely, if you see year 55000+, you passed a millisecond timestamp to code expecting seconds — divide by 1000.

Auto-detection rule: values above 1 × 10¹¹ (100 billion) are treated as milliseconds. Values at or below that threshold are treated as seconds.

10 digits — seconds: 1700000000 → 2023-11-14 22:13:20 UTC
13 digits — milliseconds: 1700000000000 → 2023-11-14 22:13:20 UTC
ms read as s (wrong): 1700000000000 seconds → year 55812 ← divide by 1000
s read as ms (wrong): 1700000000 ms → 1970-01-20 ← multiply by 1000
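
Both failure modes are easy to reproduce in JavaScript, where the Date constructor expects milliseconds (a quick sketch using the values from the table):

new Date(1700000000)         // seconds fed to ms-expecting code → 1970-01-20T16:13:20.000Z
new Date(1700000000 * 1000)  // corrected by multiplying → 2023-11-14T22:13:20.000Z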

Getting the current timestamp

When you need the current epoch time — to compare against a stored value, set a token expiry, or verify a cron schedule — each environment exposes it differently.

JavaScript gives millisecond precision with Date.now(). Divide by 1000 and floor to get seconds, which is what most server-side systems and JWT fields (exp, iat) expect.

JavaScript (ms): Date.now() // → 1700000000000
JavaScript (s): Math.floor(Date.now() / 1000) // → 1700000000
Python: import time; int(time.time()) # → 1700000000
Go: time.Now().Unix() // → 1700000000
PostgreSQL: EXTRACT(epoch FROM now())::integer
MySQL: UNIX_TIMESTAMP()
Bash / shell: date +%s
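
To set a relative expiry (a JWT lifetime, a cache TTL) from the current time, derive seconds once and add. A minimal JavaScript sketch (the one-hour lifetime is an arbitrary example):

const now = Math.floor(Date.now() / 1000);  // current epoch, in seconds
const expiry = now + 3600;                  // one hour from now, still seconds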

JWT exp and iat claims

JSON Web Tokens use Unix timestamps in seconds for two standard claims: iat (issued at) records when the token was created; exp (expires at) records when it becomes invalid. Both are plain integers in the JWT payload.

To check whether a token has expired: extract the exp value from the JWT payload, paste it here, and compare the resulting date and time to now. If exp is in the past, the token is expired and any request using it will be rejected.

A common source of bugs: generating exp by calling Date.now() (which returns milliseconds) without dividing by 1000. The resulting token carries a value like 1700000000000 as the expiry, which the server interprets as seconds — placing expiry in the year 55812. The token never expires during testing, masking the bug until production code uses a strict validator.

JWT payload: { "sub": "user_abc", "iat": 1700000000, "exp": 1700003600 }
iat decoded: 1700000000 → 2023-11-14 22:13:20 UTC (issued at)
exp decoded: 1700003600 → 2023-11-14 23:13:20 UTC (valid for 1 hour)
Wrong exp (ms): Date.now() → 1700000000000 → server reads as year 55812
Correct exp (s): Math.floor(Date.now() / 1000) + 3600 → 1700003600
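
To run the same expiry check in code rather than by pasting, here is a minimal JavaScript sketch (token is assumed to hold the JWT string; this decodes the payload only and does not verify the signature):

const b64 = token.split(".")[1].replace(/-/g, "+").replace(/_/g, "/"); // base64url → base64
const payload = JSON.parse(atob(b64));                                 // { sub, iat, exp, ... }
const expired = payload.exp < Math.floor(Date.now() / 1000);           // exp is in seconds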

Timestamps in API responses and server logs

REST API responses and server logs are the most common places developers encounter raw timestamps they need to interpret on the fly.

APIs: when a JSON field named created_at, updated_at, timestamp, or issued_at contains a large integer, it is almost always a Unix timestamp. The digit count tells you seconds (10 digits) vs milliseconds (13 digits). Stripe and many AWS services use Unix seconds; GitHub's rate-limit headers (X-RateLimit-Reset) are Unix seconds even though its JSON bodies use ISO 8601 strings; Twitter's Snowflake IDs embed millisecond timestamps.

Server logs: nginx (the $msec variable), Apache (the %{sec}t and %{msec}t format codes), and most structured JSON loggers can record event times as Unix timestamps. When tracing an incident, convert the target time window to a Unix range first, then filter logs by numeric range — far faster than parsing date strings.

Databases: PostgreSQL's TIMESTAMPTZ type stores an absolute instant; EXTRACT(epoch …) converts it to a Unix integer. MySQL's UNIX_TIMESTAMP() does the same. SQLite has no native timestamp type — values are typically stored as Unix integers directly, and queries rely on integer comparison.

API response: {"id": "ev_123", "created_at": 1700000000, "amount": 4200}
Nginx log field: 1700000000.123 → 2023-11-14 22:13:20 UTC
PostgreSQL extract: SELECT EXTRACT(epoch FROM created_at)::integer FROM orders;
Log range filter: awk '$1 >= 1700000000 && $1 < 1700003600' app.log
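
To build the numeric range for a filter like the awk line above, convert the window's endpoints first. A JavaScript sketch assuming a one-hour incident window starting 2023-11-14 22:00 UTC:

const from = Math.floor(new Date("2023-11-14T22:00:00Z").getTime() / 1000); // → 1699999200
const to = from + 3600;                                                     // → 1700002800
// then: awk '$1 >= 1699999200 && $1 < 1700002800' app.log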

UTC vs local time

UTC output is stable and timezone-independent. Use it when comparing timestamps across machines or systems in different regions.

Local time output reflects your browser's configured timezone. This is what you see on your machine, and it may differ by hours from what a server in another region records for the same timestamp.

The tool shows both formats simultaneously so you can verify the relationship without converting manually.

Practical rule: store and transmit timestamps as UTC integers. Convert to local time only at the display layer, as close to the user as possible.
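
In JavaScript this split falls out of two methods on the same Date object (the local output shown is an example for US Eastern; yours will reflect your browser's timezone):

const d = new Date(1700000000 * 1000);
d.toISOString()     // "2023-11-14T22:13:20.000Z" (same on every machine)
d.toLocaleString()  // e.g. "11/14/2023, 5:13:20 PM" (browser timezone)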

Common mistakes and off-by-1000 errors

Off-by-1000: the most frequent timestamp bug. Occurs when a millisecond value is used where seconds are expected, or vice versa. Symptoms: dates that land in 1970, or dates tens of thousands of years in the future.

Timezone assumption: constructing "2024-01-15T00:00:00" without a timezone suffix and passing it to new Date() in JavaScript results in local-timezone interpretation. This produces a different Unix timestamp on every machine in a different timezone. (Date-only strings like "2024-01-15" are specified to parse as UTC, but engines have historically disagreed.) Always append Z or +00:00 when the intent is UTC.

Float truncation: Python's time.time() returns fractional seconds, and JavaScript's Date.now() / 1000 yields a fractional result. If you need an integer (most storage contexts do), truncate explicitly with int()/math.floor() or Math.floor() — rounding instead may bump the value up to the next second.

DST gaps and overlaps: when converting a wall-clock time in a DST-observing timezone to UTC, one hour per year is ambiguous (clock goes back) or does not exist (clock goes forward). Storing UTC integers avoids this class of bug entirely.

Off-by-1000 example: new Date(1700000000000 * 1000) → year 55812 — value was already ms, not seconds
Timezone trap: new Date("2024-01-15T00:00:00").getTime() / 1000 → varies by timezone; use "2024-01-15T00:00:00Z"
Float truncation: int(time.time()) → 1700000000 (correct floor); round() may give 1700000001

Epoch to Timestamp Conversion Explained

Epoch time — also called Unix time or POSIX time — is the number of seconds elapsed since 1970-01-01 00:00:00 UTC, the moment chosen as the Unix epoch. Every conversion from epoch to timestamp anchors on this single fixed origin, which is why the format is unambiguous across machines, languages, and timezones.

The epoch was set when Unix was being developed in the early 1970s, and 1970-01-01 was chosen as a clean, recent boundary that would let the operating system store dates in a small signed integer for decades. The choice has stuck because it removed timezone ambiguity at the storage layer: an epoch value never needs an offset attached, so two systems comparing two integers will always agree on which one is earlier.

Negative epoch values are valid and represent times before 1970. For example, −86400 is 1969-12-31 00:00:00 UTC and −31536000 is 1969-01-01 00:00:00 UTC. Some historical archives, birth-date fields, and scientific datasets contain negative epoch values, so any robust converter should accept them rather than treating them as invalid input.

Epoch origin: 0 → 1970-01-01 00:00:00 UTC
One day before epoch: -86400 → 1969-12-31 00:00:00 UTC
One year before epoch: -31536000 → 1969-01-01 00:00:00 UTC
Why no timezone: epoch values are always UTC — no offset, no DST, no ambiguity
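
JavaScript's Date handles negative values without any special casing, as a quick check shows:

new Date(-86400 * 1000).toISOString()     // → "1969-12-31T00:00:00.000Z"
new Date(-31536000 * 1000).toISOString()  // → "1969-01-01T00:00:00.000Z"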

Common Epoch Conversion Examples

A handful of epoch values appear repeatedly in documentation, test fixtures, log files, and bug reports. Recognising them at a glance speeds up debugging when one shows up in an API response or a stack trace.

The reference table below covers the epoch origin, the clean billion-second mark from 2001, the round 1.5-billion and 2-billion marks, and the signed 32-bit maximum that triggers the Year 2038 problem. Together they bracket the range that production systems realistically operate within today.

The signed 32-bit maximum (2,147,483,647) is the last second a 32-bit time_t can represent. The next second wraps to a large negative number and decodes to 1901. 64-bit systems are not affected, but legacy C, embedded firmware, and old database columns may still rely on 32-bit storage.

Epoch 0: 0 → 1970-01-01 00:00:00 UTC
Epoch 1,000,000,000: 1000000000 → 2001-09-09 01:46:40 UTC
Epoch 1,500,000,000: 1500000000 → 2017-07-14 02:40:00 UTC
Epoch 2,000,000,000: 2000000000 → 2033-05-18 03:33:20 UTC
Epoch 2,147,483,647 (32-bit max): 2147483647 → 2038-01-19 03:14:07 UTC — Year 2038 problem boundary
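
You can simulate the wraparound in JavaScript, whose bitwise operators coerce to signed 32-bit integers (a demonstration of the overflow, not of any real time_t implementation):

const wrapped = (2147483647 + 1) | 0;   // | 0 coerces to signed 32-bit → -2147483648
new Date(wrapped * 1000).toISOString()  // → "1901-12-13T20:45:52.000Z"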

JavaScript, Python, and Unix CLI Conversion

Every mainstream language ships a built-in for epoch-to-date conversion. The snippets below convert the same epoch value (1700000000) to a human-readable UTC string so you can sanity-check output across environments before wiring conversion logic into your code.

JavaScript: Date.now() returns the current epoch in milliseconds. Multiply a seconds value by 1000 before passing to the Date constructor, then format with toISOString() for a stable UTC string. Use toLocaleString() if you instead want the browser's configured timezone.

Python: datetime.fromtimestamp accepts seconds. Pass tz=timezone.utc to keep the result anchored to UTC instead of the host machine's local timezone. Pair with .isoformat() to produce an ISO 8601 string suitable for logs or API responses.

Unix CLI: GNU date accepts the @ prefix to read an epoch value. On macOS and other BSD systems use date -r instead. Both print the timestamp in the shell's configured timezone — set TZ=UTC or pass -u for portable output.

JavaScript — current epoch (ms): Date.now() // → 1700000000000
JavaScript — epoch to date: new Date(1700000000 * 1000).toISOString() // → "2023-11-14T22:13:20.000Z"
JavaScript — date to epoch: Math.floor(new Date("2023-11-14T22:13:20Z").getTime() / 1000) // → 1700000000
Python — epoch to date: from datetime import datetime, timezone; datetime.fromtimestamp(1700000000, tz=timezone.utc) # → 2023-11-14 22:13:20+00:00
Python — date to epoch: int(datetime(2023, 11, 14, 22, 13, 20, tzinfo=timezone.utc).timestamp()) # → 1700000000
Unix CLI — GNU date: date -u -d @1700000000 # → Tue Nov 14 22:13:20 UTC 2023
Unix CLI — BSD/macOS date: date -u -r 1700000000 # → Tue Nov 14 22:13:20 UTC 2023
Unix CLI — current epoch: date +%s # → 1700000000
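
One convenient cross-environment sanity check is a round trip: epoch to ISO string and back should be lossless at second precision. In JavaScript:

const s = 1700000000;
const iso = new Date(s * 1000).toISOString();            // "2023-11-14T22:13:20.000Z"
const back = Math.floor(new Date(iso).getTime() / 1000); // → 1700000000
console.assert(back === s);                              // should never fire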

Milliseconds vs Seconds: Which Epoch Format?

Different ecosystems default to different precisions, and mixing them is the single most common source of timestamp bugs. The fastest rule of thumb: a 13-digit integer is milliseconds, a 10-digit integer is seconds. This holds for any value between roughly 2001 and 2286, which covers every realistic production timestamp.

Milliseconds by default: JavaScript (Date.now), Java (System.currentTimeMillis), C# (DateTimeOffset.ToUnixTimeMilliseconds), and the Web Performance API. Anything that originated in the browser or the JVM tends to use milliseconds because sub-second precision matters for UI and profiling.

Seconds by default: the Unix kernel (time(2)), Python (time.time and datetime.timestamp return seconds, with fractional precision available), Go (time.Now().Unix), Ruby (Time.now.to_i), Rust (SystemTime), and PostgreSQL (EXTRACT(epoch FROM …)). Most JWT libraries, OAuth flows, and HTTP cache headers also expect seconds.

Quick check before debugging further: count the digits. 1700000000 (10 digits) is seconds — November 2023. 1700000000000 (13 digits) is milliseconds — also November 2023. If you swap them by mistake, the seconds value gets read as 1970-01-20 (a few weeks after epoch) and the milliseconds value gets read as the year 55812.

When integrating two systems with different defaults — for example a Node.js backend talking to a Go service — normalise to seconds at the boundary and document the choice in the API schema. Inconsistent units inside a single payload are far harder to debug than a one-time conversion at the edge.

Seconds — 10 digits: 1700000000 → 2023-11-14 22:13:20 UTC
Milliseconds — 13 digits: 1700000000000 → 2023-11-14 22:13:20 UTC
Convert ms → s: Math.floor(ms / 1000)
Convert s → ms: seconds * 1000
Detection rule: value > 1e11 ? milliseconds : seconds
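
A boundary normaliser can be a one-liner. A JavaScript sketch (the function name is illustrative; the 1e11 cutoff is the same heuristic this tool uses):

function toEpochSeconds(value) {
  // above 1e11 the value can only plausibly be milliseconds (1e11 seconds ≈ year 5138)
  return value > 1e11 ? Math.floor(value / 1000) : Math.floor(value);
}
toEpochSeconds(1700000000000)  // → 1700000000 (milliseconds input)
toEpochSeconds(1700000000)     // → 1700000000 (already seconds)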

Frequently Asked Questions

I entered a timestamp and got a date in 1970 — why?
Either the value is 0 or very small (close to the epoch), or a seconds value was divided by 1000 somewhere upstream because it was mistaken for milliseconds. A value of 1000 seconds is 1970-01-01 00:16:40 UTC. If you expected a 2020s date, check the digit count: enter the full 10-digit seconds value or the 13-digit milliseconds value directly (anything above 10¹¹ is auto-detected as milliseconds).
How do I check if a JWT has expired?
Extract the exp field from the JWT payload — it is a Unix timestamp in seconds. Paste it into this tool and compare the date to the current time. If the date is in the past, the token is expired. You can also use the JWT Decoder tool, which decodes and displays exp in human-readable form automatically.
Why does my JWT exp show a date in year 55000?
The exp value was set using Date.now() (milliseconds) instead of Math.floor(Date.now() / 1000) (seconds). For example, Date.now() returns ~1700000000000, but JWT exp expects ~1700000000. Fix: Math.floor(Date.now() / 1000) + <lifetime in seconds>.
How do I convert an ISO date string to a Unix timestamp?
JavaScript: Math.floor(new Date("2024-01-15T00:00:00Z").getTime() / 1000). Always include Z or +00:00 — without it, JavaScript uses local time, producing different results on different machines. Python: int(datetime.fromisoformat("2024-01-15T00:00:00+00:00").timestamp()). PostgreSQL: EXTRACT(epoch FROM TIMESTAMPTZ '2024-01-15 00:00:00 UTC')::integer.
What is the difference between Unix timestamp and ISO 8601?
A Unix timestamp is an integer counting seconds (or milliseconds) from January 1, 1970 UTC — compact and easy to compare mathematically. ISO 8601 is a human-readable string such as 2024-03-15T14:30:00Z — easier to read but requires a parser. APIs use both. The digit count identifies a Unix timestamp; anything with dashes and colons is likely ISO 8601.
What is the Year 2038 problem?
Systems that store Unix timestamps as signed 32-bit integers overflow on January 19, 2038 at 03:14:07 UTC, wrapping to a large negative number. Modern systems use 64-bit integers and are not affected. If you maintain legacy C code or embedded firmware, verify the integer width.
What is the maximum timestamp this handles?
JavaScript's Date object handles timestamps up to ±8.64 × 10¹⁵ milliseconds from the epoch, covering years from −271,821 to 275,760. Any realistic timestamp — including far-future scheduling values — falls well within that range.
What is epoch time?
Epoch time (also called Unix time or POSIX time) is the number of seconds elapsed since 1970-01-01 00:00:00 UTC. It is a single integer with no embedded timezone, which makes it the standard format for storing and comparing times across operating systems, programming languages, and database engines. A value of 1700000000, for example, represents 2023-11-14 22:13:20 UTC — exactly the same instant on every machine in the world.
How do I convert epoch to human-readable date in JavaScript?
For a seconds value, multiply by 1000 and pass to the Date constructor: new Date(epochSeconds * 1000).toISOString() returns a UTC string like "2023-11-14T22:13:20.000Z". For a milliseconds value, skip the multiplication: new Date(epochMs).toISOString(). Use toLocaleString() instead of toISOString() if you want the browser's configured timezone formatting. Going the other direction, Math.floor(new Date("2023-11-14T22:13:20Z").getTime() / 1000) converts an ISO date string back to an epoch in seconds.
What is the maximum Unix timestamp?
On a system using a signed 32-bit integer for time_t — the original Unix design — the maximum value is 2,147,483,647, which represents 2038-01-19 03:14:07 UTC. The next second overflows to a large negative number and decodes to 1901. This is the Year 2038 problem. Modern 64-bit systems use a 64-bit time_t and can represent dates billions of years into the future, so the practical limit becomes whatever the application or language enforces — JavaScript's Date object, for example, accepts ±8.64 × 10¹⁵ milliseconds from epoch.
Why is my epoch timestamp showing 1970?
You are almost certainly passing a seconds value to code that expects milliseconds, or feeding a very small number close to zero. Quick check: count the digits. A 10-digit value (e.g. 1700000000) is seconds; a 13-digit value (e.g. 1700000000000) is milliseconds. JavaScript's Date constructor expects milliseconds, so multiply a seconds value by 1000: new Date(seconds * 1000). The opposite mistake (milliseconds where seconds are expected) lands you in the year 55812 instead of 1970: Date.now() returns milliseconds, so divide by 1000 with Math.floor(Date.now() / 1000) before sending values to APIs, JWT exp claims, Unix CLI tools, or any system that defaults to seconds.
