There are exactly 13 DNS root server identities. Not 13 machines — 13 named identities, operated by 12 organizations, distributed across more than 1,700 physical instances on every inhabited continent. Every single DNS query that isn't already cached ultimately traces back to one of these 13 clusters. They don't store your domain records. They don't know the IP address of google.com. But they know who does, and that one job makes them the most critical infrastructure on the internet.
What Are DNS Root Servers?
DNS root servers sit at the top of the DNS hierarchy. When a recursive resolver needs to look up a domain name it has never seen before, the root server is the first stop. But calling it a "first stop" understates how narrow their role actually is.
A root server does exactly one thing: it answers the question "who is responsible for this TLD?" When your resolver asks a root server about docs.example.com, the root server doesn't try to resolve the full name. It looks at the rightmost label — .com — and responds with a referral: "I don't know about docs.example.com, but here are the authoritative name servers for .com. Go ask them."
That referral response contains NS records and glue records (IP addresses) for the TLD's name servers. The resolver takes that information and continues the chain, asking the .com servers about example.com, then asking the example.com authoritative servers about docs.example.com. The root server's job is done after that first referral.
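The referral behavior can be sketched as a toy lookup table. The delegation data and the shape of the response here are invented for illustration — the real root zone is wire-format DNS, not a Python dict:

```python
# Toy model of a root server's referral logic. The delegation table below
# is illustrative, not the actual root zone contents.
ROOT_DELEGATIONS = {
    "com": ["a.gtld-servers.net", "b.gtld-servers.net"],
    "org": ["a0.org.afilias-nst.info"],
}

def root_referral(qname: str) -> dict:
    """Answer like a root server: ignore everything except the TLD."""
    tld = qname.rstrip(".").rsplit(".", 1)[-1]   # rightmost label only
    if tld in ROOT_DELEGATIONS:
        # A referral, not an answer: "go ask these servers instead."
        return {"referral": tld, "ns": ROOT_DELEGATIONS[tld]}
    return {"rcode": "NXDOMAIN"}                 # TLD not delegated

print(root_referral("docs.example.com"))
# → {'referral': 'com', 'ns': ['a.gtld-servers.net', 'b.gtld-servers.net']}
```

Note that the query name beyond the rightmost label is never inspected — that narrowness is the whole design.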
This delegation model is what makes DNS scalable. No single system needs to know every domain. The root servers only need to track roughly 1,500 TLD delegations — a tiny dataset compared to the 300+ million registered domains below them.
How Root Servers Fit Into DNS Resolution
To understand where root servers sit in the resolution process, walk through what happens when you type docs.example.com into a browser:
Browser
→ Stub Resolver (OS-level)
→ Recursive Resolver (ISP or public like 8.8.8.8)
→ Root Server "Who handles .com?"
→ TLD Server (.com) "Who handles example.com?"
→ Authoritative NS "docs.example.com = 93.184.216.34"
The recursive resolver is the one doing the heavy lifting. It walks down the delegation chain, starting at the root, collecting referrals at each level until it reaches an authoritative answer. The root server only participates in step one of that chain.
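That walk can be modeled with a few dictionaries standing in for each server's delegation data. Everything here is a simplification — server names are invented, and real resolvers handle NS sets, glue, and caching — but the control flow matches the chain above:

```python
# Toy recursive resolution: follow referrals from the root down until a
# server returns a final answer instead of another referral.
ZONES = {
    ".":           {"com": "tld-servers"},                    # root refers to .com
    "tld-servers": {"example.com": "example-ns"},             # .com refers down
    "example-ns":  {"docs.example.com": "93.184.216.34"},     # authoritative
}

def resolve(name: str) -> str:
    server = "."                    # every uncached lookup starts at the root
    while True:
        zone = ZONES[server]
        # Find the most specific entry this server knows about.
        match = next(k for k in zone if name == k or name.endswith("." + k))
        nxt = zone[match]
        if nxt not in ZONES:        # not another server: this is the answer
            return nxt
        server = nxt                # follow the referral one level down

print(resolve("docs.example.com"))  # walks root -> TLD -> authoritative
```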
If you want to see this process in detail, I wrote a full walkthrough in How DNS Queries Work that traces a query from browser to authoritative answer. The root server interaction is the opening move in every uncached resolution.
You can also observe this firsthand with the +trace flag in dig. Running dig +trace docs.example.com forces your resolver to walk the entire delegation chain from root to authoritative, showing every referral along the way. My dig command guide covers this and other diagnostic techniques.
The 13 Root Server Clusters
The most common question about root servers is: why 13? The answer is a protocol constraint from the early 1990s.
A DNS response over UDP must fit within 512 bytes without EDNS extensions. The root zone's NS response needs to include 13 NS records (the hostnames like a.root-servers.net) plus their IPv4 glue records (the actual IP addresses). Thirteen entries with their associated data is the practical maximum for a single 512-byte UDP packet; a fourteenth would push the response over the limit, forcing TCP fallback on every priming query — an unacceptable overhead at root-server scale.
When IPv6 was added later, the response size grew beyond 512 bytes. EDNS(0) extensions (RFC 6891) solved this by allowing larger UDP payloads, but the 13-identity design was already entrenched. There was no compelling reason to add more root server identities when anycast could multiply capacity behind each one.
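The EDNS(0) mechanism is compact: a resolver appends an OPT pseudo-record (RFC 6891) to its query, reusing the CLASS field to advertise how large a UDP payload it can accept. A hand-built sketch of that record, using only the standard library:

```python
import struct

# Wire format of an EDNS(0) OPT pseudo-record (RFC 6891). The CLASS field
# is repurposed to advertise the sender's maximum UDP payload size, which
# is how modern DNS escapes the original 512-byte UDP ceiling.
def build_opt_record(udp_payload_size: int = 4096) -> bytes:
    return (
        b"\x00"                                 # NAME: root domain
        + struct.pack("!H", 41)                 # TYPE 41 = OPT
        + struct.pack("!H", udp_payload_size)   # CLASS: advertised UDP size
        + struct.pack("!I", 0)                  # TTL: ext-rcode/version/flags
        + struct.pack("!H", 0)                  # RDLENGTH: no options
    )

opt = build_opt_record()
assert len(opt) == 11   # the whole pseudo-record is just 11 bytes
```

With that record attached, a priming response carrying all 13 identities plus IPv4 and IPv6 glue fits comfortably in a single UDP packet.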
Here are all 13 root server clusters with their operators and network details:
| Letter | Hostname | Operator | IPv4 | IPv6 | Approx. Instances |
|---|---|---|---|---|---|
| A | a.root-servers.net | Verisign | 198.41.0.4 | 2001:503:ba3e::2:30 | ~6 |
| B | b.root-servers.net | USC-ISI | 170.247.170.2 | 2801:1b8:10::b | ~6 |
| C | c.root-servers.net | Cogent Communications | 192.33.4.12 | 2001:500:2::c | ~10 |
| D | d.root-servers.net | University of Maryland | 199.7.91.13 | 2001:500:2d::d | ~150 |
| E | e.root-servers.net | NASA Ames Research Center | 192.203.230.10 | 2001:500:a8::e | ~290 |
| F | f.root-servers.net | Internet Systems Consortium (ISC) | 192.5.5.241 | 2001:500:2f::f | ~300+ |
| G | g.root-servers.net | US DoD (DISA) | 192.112.36.4 | 2001:500:12::d0d | ~6 |
| H | h.root-servers.net | US Army Research Lab | 198.97.190.53 | 2001:500:1::53 | ~6 |
| I | i.root-servers.net | Netnod | 192.36.148.17 | 2001:7fe::53 | ~90 |
| J | j.root-servers.net | Verisign | 192.58.128.30 | 2001:503:c27::2:30 | ~200+ |
| K | k.root-servers.net | RIPE NCC | 193.0.14.129 | 2001:7fd::1 | ~90 |
| L | l.root-servers.net | ICANN | 199.7.83.42 | 2001:500:9f::42 | ~200+ |
| M | m.root-servers.net | WIDE Project | 202.12.27.33 | 2001:dc3::35 | ~10 |
A few things stand out in this table. Verisign operates two root server identities (A and J), making them the only organization running more than one. The operator mix is deliberately diverse — government agencies, academic institutions, nonprofits, commercial entities, and international organizations. This diversity is intentional: no single organizational failure, legal jurisdiction, or policy decision should be able to compromise all 13 roots.
The instance counts vary wildly. F-root (ISC) and E-root (NASA) each have hundreds of instances, while A-root, B-root, G-root, and H-root have single-digit deployments. The difference comes down to each operator's anycast deployment strategy and partnerships with hosting providers and IXPs (Internet Exchange Points) willing to host root server instances.
How Anycast Makes It Work
If there are only 13 IP addresses but over 1,700 physical machines, how does a query from Tokyo reach a machine in Tokyo instead of one in Virginia? The answer is anycast routing.
In a normal (unicast) setup, an IP address maps to exactly one machine in one location. Anycast breaks this assumption: the same IP address is announced via BGP from hundreds of different locations simultaneously. When a resolver in Tokyo sends a packet to 198.41.0.4 (A-root), BGP routing naturally delivers it to the nearest location announcing that prefix — which might be an A-root instance in a Tokyo data center.
The benefits of anycast for root servers are substantial:
Low latency. Queries are served by the geographically nearest instance. A resolver in Nairobi hits an African instance rather than crossing the Atlantic.
DDoS resilience. A volumetric attack against a root server IP gets distributed across all instances announcing that address. An attacker would need to overwhelm hundreds of geographically dispersed machines simultaneously — the attack traffic itself gets "absorbed" by the anycast distribution.
Automatic failover. If an instance goes offline, it stops announcing the BGP route. Traffic automatically reroutes to the next nearest instance with no manual intervention and no DNS-level changes. The IP address doesn't change; only the BGP routing table shifts.
Capacity scaling. Adding capacity means deploying a new instance and announcing the same prefix from the new location. No configuration changes needed on any resolver worldwide.
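These properties fall out of one mechanism: traffic follows the best currently-announced route. A toy simulation (instance names and "distances" are invented; real BGP best-path selection weighs far more than path length, but the failover behavior is the same):

```python
# Toy anycast failover: several instances announce the same prefix, and a
# query is delivered to whichever announcing instance is "closest".
announcements = {   # instance -> hypothetical path length from a Tokyo resolver
    "a-root-tokyo": 1,
    "a-root-singapore": 3,
    "a-root-virginia": 9,
}

def route_query(routes: dict) -> str:
    # BGP-ish: pick the shortest path among live announcements
    return min(routes, key=routes.get)

assert route_query(announcements) == "a-root-tokyo"

# An instance goes offline: it withdraws its BGP announcement...
del announcements["a-root-tokyo"]
# ...and traffic reroutes with no DNS-level change at all.
assert route_query(announcements) == "a-root-singapore"
```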
This is why the "13 root servers" framing is misleading. There are 13 logical services, but the physical infrastructure behind them is a globally distributed mesh of over 1,700 machines. The number keeps growing as operators partner with more IXPs and hosting providers to extend their anycast footprint into underserved regions.
The Root Zone File
Every root server serves the same data: the root zone file. This is the authoritative list of every TLD and its name servers. Here is what a snippet of it looks like:
com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
com. 172800 IN NS c.gtld-servers.net.
a.gtld-servers.net. 172800 IN A 192.5.6.30
a.gtld-servers.net. 172800 IN AAAA 2001:503:a83e::2:30
For every TLD, the root zone contains NS records (which name servers are authoritative) and glue records (the IP addresses of those name servers). That's it. No A records for individual domains, no MX records, no TXT records — just the delegation pointers for each TLD.
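Records in this format are simple enough to pull apart by hand. A minimal, illustrative parser for lines like the snippet above — a real zone-file parser must also handle $TTL directives, comments, and multi-line records, all of which this sketch ignores:

```python
# Parse "name TTL class type rdata" zone-file lines into dicts.
SNIPPET = """\
com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
a.gtld-servers.net. 172800 IN A 192.5.6.30
a.gtld-servers.net. 172800 IN AAAA 2001:503:a83e::2:30
"""

def parse_zone(text: str) -> list[dict]:
    records = []
    for line in text.splitlines():
        name, ttl, rclass, rtype, rdata = line.split()
        records.append({"name": name, "ttl": int(ttl),
                        "type": rtype, "rdata": rdata})
    return records

records = parse_zone(SNIPPET)
ns = [r["rdata"] for r in records if r["type"] == "NS"]
print(ns)   # the .com delegation: which servers to ask next
```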
The root zone is managed by IANA (a function of ICANN) and published by Verisign in their role as Root Zone Maintainer. The process for changes is deliberate: a TLD operator requests a change, IANA verifies it, and Verisign publishes the updated zone. Updates happen multiple times per day as TLD delegations change.
The file itself is surprisingly small — roughly 2MB. That's the entire top level of the global DNS hierarchy compressed into a file smaller than a typical JPEG photo. You can download the current root zone from internic.net, or query it directly:
dig . NS
This returns the 13 root server NS records and their glue addresses — the same data every recursive resolver needs to begin resolution.
The roughly 1,500 TLDs in the root zone represent every active top-level domain, from legacy TLDs like .com and .net to newer gTLDs like .app and .dev, plus country-code TLDs like .uk and .jp. You can explore the full directory of TLDs and their delegation details on the TLD directory.
Root Hints and Priming Queries
A recursive resolver needs to know the root server addresses before it can resolve anything. This creates a bootstrapping problem: how do you look up a DNS name when you don't have DNS yet?
The solution is the root hints file — a static file shipped with every recursive resolver containing the names and IP addresses of all 13 root servers. This file is built into BIND, Unbound, PowerDNS Recursor, and every other resolver implementation. It's not fetched from the network; it's compiled into the software.
When a resolver starts up for the first time, it sends a priming query — a direct NS query for the root zone (. IN NS) to one of the root server IPs from its hints file. The root server responds with the current, authoritative list of root servers. The resolver caches this response and uses it for all subsequent root referrals.
RFC 8109 formally defines this priming process. The key insight is that the hints file doesn't need to be perfectly current — it just needs to contain at least one working root server IP address. As long as the resolver can reach any root server, the priming query will return the complete, current set.
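For illustration, here is roughly what that priming query looks like on the wire — a 17-byte packet asking ". IN NS", built by hand (the transaction ID is arbitrary; RFC 8109 recommends sending it without the recursion-desired bit, since root servers are authoritative and don't offer recursion):

```python
import struct

# Hand-built DNS priming query (". IN NS") per the RFC 1035 wire format.
def build_priming_query(txid: int = 0x1234) -> bytes:
    header = struct.pack("!HHHHHH",
                         txid,    # transaction ID
                         0,       # flags: QR=0 (query), RD=0 (no recursion)
                         1,       # QDCOUNT: one question
                         0, 0, 0) # no answer/authority/additional records
    question = (
        b"\x00"                   # QNAME: the root zone "."
        + struct.pack("!H", 2)    # QTYPE 2 = NS
        + struct.pack("!H", 1)    # QCLASS 1 = IN
    )
    return header + question

pkt = build_priming_query()
assert len(pkt) == 17   # the smallest meaningful DNS question
```

Sending this packet to any address in the hints file returns the complete, current root server set.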
The root hints file changes rarely. The most recent change came in late 2023, when B-root's IPv4 address moved to 170.247.170.2; B-root's previous address change, in 2017, was the last one before that. This stability is by design — changing a root server IP address requires every resolver on the internet to eventually update.
Do Root Servers Handle Every DNS Query?
No, and this is one of the most important things to understand about root servers. Aggressive caching means the vast majority of DNS queries never reach a root server.
Here's why: when a recursive resolver receives a root referral, it caches the response. The root zone NS records carry a TTL of 518,400 seconds — that's six days. The individual TLD NS records carry TTLs of 172,800 seconds — two days. A busy resolver that handles thousands of queries per second may only contact root servers once every couple of days per TLD, because the referral data stays cached.
RFC 8767 (Serving Stale Data) further reduces root server dependency. It allows resolvers to continue serving expired cached data when authoritative servers are unreachable, rather than returning SERVFAIL. This means that even if all root servers became temporarily unavailable, most DNS resolution would continue working for hours or even days on cached data.
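A toy cache illustrating the serve-stale idea — the structure and timings are simplified far beyond what RFC 8767 actually specifies (which bounds how long stale data may be served, among other things):

```python
import time

# Minimal serve-stale cache: expired entries normally trigger a re-query,
# but if the upstream is unreachable the stale answer is returned anyway
# rather than failing with SERVFAIL.
class StaleCache:
    def __init__(self):
        self.store = {}   # name -> (answer, expiry timestamp)

    def put(self, name, answer, ttl):
        self.store[name] = (answer, time.time() + ttl)

    def get(self, name, upstream_reachable: bool):
        answer, expiry = self.store[name]
        if time.time() <= expiry:
            return answer         # fresh: ordinary cache hit
        if not upstream_reachable:
            return answer         # stale, but better than no answer at all
        return None               # expired: caller should re-query upstream

cache = StaleCache()
cache.put("com.", ["a.gtld-servers.net"], ttl=-1)   # already expired
assert cache.get("com.", upstream_reachable=True) is None
assert cache.get("com.", upstream_reachable=False) == ["a.gtld-servers.net"]
```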
Despite this caching, root servers collectively still handle substantial traffic — on the order of hundreds of thousands of queries per second across all 13 clusters. Studies of root server traffic consistently show that the majority of queries they receive are unnecessary: junk traffic from misconfigured devices, automated scanners, botnets, and applications that bypass local resolver caching. Legitimate, cache-miss queries from properly configured resolvers represent a relatively small fraction of total root server load.
This relationship between caching and root server load is directly connected to DNS propagation. When TTLs expire and resolvers re-query the delegation chain, that's the mechanism behind propagation delays. Root server referrals have long TTLs specifically to minimize the frequency of these re-queries.
Security and DNSSEC at the Root
The root zone was signed with DNSSEC on July 15, 2010 — a date that represents one of the most carefully coordinated infrastructure changes in internet history. From that point forward, every response from a root server includes DNSSEC signatures that resolvers can validate.
DNSSEC at the root works through a chain of trust:
- The root zone has a Key Signing Key (KSK) and a Zone Signing Key (ZSK)
- The root KSK is the trust anchor — it's hardcoded into validating resolvers
- Each TLD in the root zone has a DS (Delegation Signer) record that links the TLD's DNSSEC keys to the root's signature
- Resolvers can verify the chain from root KSK to root ZSK to TLD DS record to TLD keys, and so on down to individual domains
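The chain can be modeled loosely in code: treat each DS record as a digest of the child zone's key, with the root key as the configured trust anchor. The keys here are placeholder strings, and real validation also checks RRSIG signatures over every record set — this sketch captures only the linkage:

```python
import hashlib

# Toy chain of trust: a DS record is (essentially) a digest of the child
# zone's key, so each parent vouches for its child's key material.
def ds_digest(key: bytes) -> bytes:
    return hashlib.sha256(key).digest()

root_ksk    = b"root-ksk-placeholder"   # the trust anchor shipped with resolvers
com_key     = b"com-zone-key"
example_key = b"example-com-key"

# What the zones publish: each parent holds a DS record for its child.
root_zone = {"com.":         {"DS": ds_digest(com_key)}}
com_zone  = {"example.com.": {"DS": ds_digest(example_key)}}

def chain_valid(trust_anchor: bytes) -> bool:
    if trust_anchor != root_ksk:                        # anchor mismatch
        return False
    if root_zone["com."]["DS"] != ds_digest(com_key):   # root vouches for .com
        return False
    # .com vouches for example.com, and so on down the tree.
    return com_zone["example.com."]["DS"] == ds_digest(example_key)

assert chain_valid(root_ksk)
assert not chain_valid(b"wrong-anchor")
```

Break any single link — a tampered key, a mismatched DS record — and validation fails for everything below that point.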
The root KSK is maintained through key ceremonies — quarterly events held at two secure facilities in the United States (El Segundo, California, and Culpeper, Virginia). These ceremonies involve multiple trusted community representatives, hardware security modules (HSMs), and physical safeguards. The ceremonies are filmed and publicly streamed as a transparency measure.
In October 2018, ICANN performed the first-ever root KSK rollover — replacing the original 2010 key with a new one. This required coordinating with every validating resolver on the internet to recognize the new trust anchor. The rollover was originally scheduled for October 2017 but was delayed by a year after telemetry suggested a significant number of resolvers hadn't yet picked up the new key. The successful rollover in 2018 demonstrated that the process works, but also highlighted how operationally complex root-level DNSSEC management is.
You can check whether a specific TLD has DNSSEC configured by looking up its DS records in the root zone, or use the DNS Inspector to query DS and DNSKEY records for any domain.
Common Misconceptions About Root Servers
"If root servers go down, the internet stops." This is the most persistent myth. Because of aggressive caching with multi-day TTLs and stale-serving mechanisms (RFC 8767), the internet would continue functioning for hours to days even if every root server simultaneously became unreachable. Domains with cached delegations would resolve normally. Only queries for TLDs not already in a resolver's cache would fail.
"There are only 13 root servers." There are 13 identities — 13 IP addresses operated by 12 organizations. Behind those 13 addresses are over 1,700 physical machines distributed globally via anycast. The "13 servers" framing dramatically undersells the redundancy of the system.
"Root servers are all in the United States." While the 12 operating organizations are predominantly US-based (10 of 12), the physical anycast instances are distributed worldwide. F-root alone has instances in over 100 countries. K-root (RIPE NCC, based in the Netherlands) and I-root (Netnod, based in Sweden) are the two non-US operators, but every operator's anycast network extends globally.
"Root servers store all domain records." Root servers know nothing about individual domains. They store only TLD delegations — roughly 1,500 entries. The root zone file is about 2MB. The actual domain records for .com alone would be hundreds of gigabytes.
"Querying a root server is slow." With anycast, a root server query typically completes in under 10 milliseconds. The nearest instance is usually at a local IXP or within the same country. And since resolvers cache root referrals for days, the query almost never happens during normal operation anyway.
