People don't usually notice that DNS exists. They use their browsers and send email, and they assume that they'll reach the expected destination. When they don't, it's alarming. The 2016 DDoS attack on Dyn's DNS servers brought a large portion of the Internet's traffic to a grinding halt. No one expected it, and few people understood it.
Domain name servers are surprisingly vulnerable. Even ICANN, the organization responsible for the Internet's domain structure, once had its websites defaced for twenty minutes through an attack on its DNS records by a Turkish group. Nobody is entirely safe.
The protocol for converting domains to IP addresses is one of the oldest parts of the Internet. The first domains were created in 1985, years before there was a World Wide Web. Security wasn't a high concern in those days, and the protocols for DNS show it. There have been major improvements, but the system remains more fragile than anyone would like. Understanding the risks in DNS systems is essential to avoiding security problems.
Types of servers
The two kinds of DNS servers are authoritative and recursive. Authoritative servers are the source for IP addresses for one or more domains. Recursive servers locate the authoritative servers for a domain and query them. They cache the results to prevent overloading of the authoritative servers. They're called recursive because a query may pass through more than one server before reaching the ultimate authority for a domain's address.
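The caching behavior described above can be sketched in a few lines. This is a hypothetical illustration rather than a real resolver: the names and addresses are invented, and a production cache would also handle negative answers, record types, and size limits.

```python
import time

class ResolverCache:
    """Toy sketch of a recursive resolver's answer cache with TTL expiry."""
    def __init__(self):
        self._store = {}  # name -> (ip, expiry timestamp)

    def put(self, name, ip, ttl):
        # Store the answer along with when it stops being valid.
        self._store[name] = (ip, time.monotonic() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None          # cache miss: must query upstream
        ip, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[name]
            return None          # TTL expired: treat as a miss
        return ip

cache = ResolverCache()
cache.put("example.com", "93.184.216.34", ttl=300)
print(cache.get("example.com"))   # cached answer, no upstream query needed
print(cache.get("example.org"))   # None: would go to the authoritative server
```

The TTL is what keeps authoritative servers from being overloaded, but it is also why poisoned entries (discussed later) linger until they time out.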
Each type carries its own risks. An authoritative server that crashes or slows down makes it harder to access its domain. Most domains have a handful of servers, so the failure of one isn't catastrophic, but an attack that paralyzes all of them cuts off all access.
A recursive server may be tricked with false information, in which case it could direct requests to a rogue server instead of the one that belongs to the domain. Attacks on recursive servers require more resources to do noticeable damage, since alternative paths are available.
Because the protocols are so antiquated, DNS requests from clients usually travel over UDP, falling back to TCP for large responses, without encryption. This leaves them vulnerable to interception and spoofing, especially when the servers are poorly configured.
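A minimal sketch of the RFC 1035 wire format shows why interception is so easy: the queried name travels as readable bytes inside the packet. The query ID and hostname below are arbitrary examples.

```python
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Build a minimal DNS A-record query in RFC 1035 wire format."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question: each label is length-prefixed, terminated by a zero byte,
    # followed by QTYPE=A (1) and QCLASS=IN (1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

packet = build_dns_query("example.com")
# The name is cleartext: anyone on the path can read or forge it.
print(b"example" in packet and b"com" in packet)  # True
```

Nothing in the packet authenticates the sender, which is the root of most of the attacks described below.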
Denial of service
Overloading DNS servers with requests, as in the Dyn attack, can make them stop working without having to break into any systems. Both authoritative and recursive servers are vulnerable, but attacks on authoritative ones are especially effective at damaging a particular site. On-premises servers intended to handle a relatively low volume of queries for a few domains aren't very hard to overwhelm.
Some attacks simply hit the server with a high volume of traffic, without any finesse. DNS relies mostly on connectionless UDP rather than TCP, which makes a flood attack easier to mount and its source addresses easier to forge.
Others query randomly generated subdomains, such as xypqfghm.example.com, so that the results can't be cached. This is called an NXDOMAIN attack. The queries typically arrive through legitimate recursive servers, and the originating addresses are spoofed, so the authoritative server can't stop the attack with IP address filtering.
A scalable, cloud-based server provides reserve capacity that can absorb most such attacks. Another protective measure is to filter domain requests, excluding ones that are probably bogus; the filter can grow stricter as the volume of traffic increases.
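One way such a filter might guess that a label is machine-generated is by measuring its character entropy. This is purely a hypothetical heuristic sketch: the threshold and length cutoff are invented and would need tuning against real traffic before use.

```python
import math
from collections import Counter

def label_entropy(label):
    """Shannon entropy, in bits per character, of a subdomain label."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_random(label, threshold=3.0):
    # Hypothetical heuristic: machine-generated labels such as "xypqfghm"
    # tend to have higher entropy than human-chosen names like "www" or
    # "mail". The threshold could be tightened when query volume spikes.
    return len(label) >= 8 and label_entropy(label) >= threshold

print(looks_random("www"))        # False: short, human-chosen
print(looks_random("xypqfghm"))   # True: eight distinct characters
```

A real deployment would combine a signal like this with rate data per source, since legitimate names can also score high.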
DNS amplification attack
A poorly configured DNS server can become a tool, rather than a target, for a DNS attack. Here the tactic is to send lots of requests to one or more servers, spoofing the IP address of the intended victim as the source of the request. The forged packets ask for the largest amount of data possible with each request. The victim didn't make any of the requests but is overloaded with the responses. This is called a DNS amplification attack.
Open recursive resolvers, which accept a request from anywhere, are the favorite vectors for this type of attack, and Internet experts discourage deploying them for just that reason. It's safer to accept requests only from internal sources and customers. Implementing rate limiting on a server also reduces its usefulness to attackers.
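Rate limiting of that kind is often implemented as a per-source token bucket. The sketch below is hypothetical, with invented rates, but shows the mechanism: each source earns query credit over time and is refused once its burst allowance is spent.

```python
import time

class TokenBucket:
    """Per-source rate limiter sketch: each source IP may send `rate`
    queries per second, with short bursts up to `burst`."""
    def __init__(self, rate=10.0, burst=20.0):
        self.rate, self.burst = rate, burst
        self._buckets = {}  # source ip -> (tokens left, last refill time)

    def allow(self, source_ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self._buckets.get(source_ip, (self.burst, now))
        # Refill tokens for the time elapsed, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self._buckets[source_ip] = (tokens, now)
            return False            # over limit: drop or truncate
        self._buckets[source_ip] = (tokens - 1.0, now)
        return True

limiter = TokenBucket(rate=1.0, burst=2.0)
print(limiter.allow("203.0.113.9", now=0.0))  # True
print(limiter.allow("203.0.113.9", now=0.0))  # True (burst allowance)
print(limiter.allow("203.0.113.9", now=0.0))  # False: bucket empty
```

Since amplification traffic spoofs the victim's address, limiting by source caps how much response volume any single forged address can draw out of the server.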
Cache poisoning
Several techniques can feed client machines inaccurate IP addresses, redirecting them to rogue servers. Some of these methods fall under the category of "cache poisoning": the aim is to plant false information in recursive servers, where it will stay until it times out.
The "Kaminsky bug," named for its discoverer in 2008, takes advantage of weaknesses in the protocol. There is no actual authentication, and only a 16-bit query ID confirms that a response matches a request. A machine that spoofs a DNS server can use tricks to send a large number of bogus responses that guess the query ID, eventually matching it by chance. Today's servers use techniques that make it harder to do this, but the problem hasn't completely gone away.
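A back-of-the-envelope calculation shows why a 16-bit ID alone is weak. If an off-path attacker fires n forged responses before the real one arrives, the chance that at least one guesses the ID is 1 - (1 - 1/65536)^n. The figures below are illustrative; the port count is an approximation of the usable ephemeral range.

```python
def spoof_success_probability(n_forged, id_space=65536):
    """Chance that at least one of n forged responses guesses the ID."""
    return 1.0 - (1.0 - 1.0 / id_space) ** n_forged

# With only a 16-bit ID, ~65k forged packets win a single race
# about 63% of the time.
print(round(spoof_success_probability(65536), 2))

# Source-port randomization multiplies the space the attacker must
# guess (roughly 2**16 IDs times tens of thousands of ports), which
# is the main hardening modern servers deploy.
print(spoof_success_probability(65536, id_space=65536 * 64000) < 0.001)
```

This is why the mitigations raise the guessing space rather than add real authentication; only DNSSEC, discussed below, does the latter.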
Some servers intentionally give incorrect DNS information for certain sites. If a government wants to block a site, it may direct the national service providers to substitute a dead-end IP address for the site's real one. This false information spreads to other recursive servers and has been known to leak outside the country that imposed the block; it can take hours to get the caches cleaned up. This happened in 2010, when a service provider outside China connected to Chinese DNS servers.
Domain hijacking
An attacker could break into a domain registration account or even directly breach a registrar's servers. Someone who got in could modify the DNS server list or reassign the domain to another registrar, directing all the domain's traffic to a rogue server; fixing the problem can be very time-consuming. The hijacker might impersonate the site, display a protest message, or simply keep anyone from reaching it.
A subtler trick is to intercept all traffic to the site and then forward it to the real site. That allows man-in-the-middle attacks which read email or insert malware. It sometimes takes a long time to notice this is happening.
A would-be hijacker may try to intercept the administrative email account and issue a "forgot password" request to the registrar. The registrar will send a link which lets the administrator reset the password. If the hijacker gets the email first, that's enough to change the password and take control of the account.
Methods of protection
Businesses that have a lot invested in their domain and site need to take extra measures against DNS risks. Many registrars offer a premium registration package with extra protection. Customers with that option need to give extra information before the registrar will accept any changes to the domain records. Hijacking their domains is much harder.
It would seem that using SSL/TLS would be an excellent way to improve DNS security, but that approach hasn't gained much of a foothold. There is a "DNS over HTTPS" protocol, but it's still experimental. Political issues as well as technical ones stand in the way of widespread adoption.
Using HTTPS throughout a website won't prevent DNS attacks, but it will make them harder to exploit. If a domain hijacker tries to duplicate the site, it won't pass HTTPS verification, and browsers will display a stern warning that there isn't a valid certificate.
The best way to give extra security to DNS is to use software that supports the DNSSEC extensions. DNSSEC digitally signs records, so that a client can confirm they came from a legitimate server, stopping the large majority of spoofing attempts.
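Most clients don't verify the signatures themselves; they rely on a validating resolver, which sets the AD (authenticated data) flag in responses it has verified. A sketch of checking that flag in a raw response header follows; the sample header is hand-built for illustration, not taken from a live server.

```python
import struct

def header_flags(packet):
    """Decode key flag bits from a DNS response header (RFC 1035/4035)."""
    (flags,) = struct.unpack(">H", packet[2:4])
    return {
        "qr": bool(flags & 0x8000),  # this is a response
        "ad": bool(flags & 0x0020),  # Authenticated Data: DNSSEC-validated
        "cd": bool(flags & 0x0010),  # Checking Disabled by the client
    }

# Hand-built 12-byte header with the QR and AD bits set, as a
# validating resolver would return for a verified answer.
sample = struct.pack(">HHHHHH", 0x1234, 0x8020, 1, 1, 0, 0)
print(header_flags(sample)["ad"])  # True
```

Note that the AD bit is only trustworthy if the path between client and resolver is itself trusted, which is part of why interest in encrypted DNS transports persists.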
Having a DNS server with reserve capacity will increase resistance to DoS attacks. A cloud server can be an economical way to do this, since you pay for the extra capacity only when you use it.
Monitoring network traffic helps to discover problems early. It can identify DNS problems, give early warnings of DoS attacks, and catch suspicious traffic. Catching issues early is the key to stopping them from doing serious damage.
Ordinary users may take DNS for granted, but the people who run networks don't have that luxury. It's necessary to keep a close eye on the servers to keep them safe.