Yes, resolv.conf (or its equivalent) holds the client end of DNS, so when apps call gethostbyname() they know where to go besides the hosts file and such. Being dumb clients, they try the DNS servers listed there, usually in order (some resolvers rotate among them), until one answers, possibly with their domain tacked on the right end of the name. Names ending in '.' are absolute and do not get domains tried on the right end.
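The search-domain behavior can be sketched like this. `candidate_names` is a hypothetical helper, not libc code, and real resolvers also honor an "ndots" threshold that decides whether the bare name is tried first; this sketch just shows the trailing-dot rule:

```python
# Sketch of how a stub resolver might build candidate names from a
# resolv.conf-style "search" list (hypothetical helper, not the libc logic).
def candidate_names(name, search_domains):
    if name.endswith("."):            # absolute name: never append search domains
        return [name.rstrip(".")]
    # try the name with each search domain appended, then the bare name
    return [f"{name}.{d}" for d in search_domains] + [name]

print(candidate_names("ftp", ["boulder.ibm.com", "ibm.com"]))
print(candidate_names("ftp.boulder.ibm.com.", ["ibm.com"]))  # dot short-circuits
```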
That first DNS server probably does not know the answer, unless it serves that domain, so since the poor client asked for recursion, it will keep asking other DNS servers (without recursion, so it can build its cache) until it has an answer. Then it will cache the answer for its TTL lifetime. For
ftp.boulder.ibm.com, knowing nothing, it would start at a root server for the top level. (Normally your recursive server is your ISP's; if you are the ISP, you need to keep your list of real root servers up to date.) The root refers you to the "com" servers; the "com" servers will say go bother the name servers for "ibm.com" and give a list; the "ibm.com" name servers may say go ask the "boulder.ibm.com" name servers, and give a list. One of those will answer you. You cache all of these answers for their TTL lifetimes. The real root servers are piles of computers in two tiers: the first tier forwards each query to the right second-tier host based on database segmentation, and the second tier sends its answer directly back to the asker, a triangular circuit, which works because UDP is connectionless!
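That referral chase can be modeled as a toy, with each "server" either answering or pointing one zone deeper. The server names, delegation table, and IP are all made up for illustration:

```python
# Toy model of chasing referrals for ftp.boulder.ibm.com (made-up servers/IPs).
# Each server either answers the name or refers us to a more specific zone.
REFERRALS = {
    "root":        {"com.": "com-server"},
    "com-server":  {"ibm.com.": "ibm-server"},
    "ibm-server":  {"boulder.ibm.com.": "boulder-server"},
}
ANSWERS = {"boulder-server": {"ftp.boulder.ibm.com.": "192.0.2.10"}}

def resolve(name, server="root"):
    # an authoritative answer ends the chase
    if server in ANSWERS and name in ANSWERS[server]:
        return ANSWERS[server][name]
    # otherwise follow the longest matching referral downward
    zones = REFERRALS[server]
    best = max((z for z in zones if name.endswith(z)), key=len)
    return resolve(name, zones[best])

print(resolve("ftp.boulder.ibm.com."))  # root -> com -> ibm -> boulder
```

A real recursive server would also cache every referral and answer along the way, so the next query for anything under ibm.com skips straight to the right servers.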
That is a DNS server's life on the client side.
The DNS server side involves a parent that delegates you control of some subtree of the world's namespace and knows your master's and your slaves' names and IPs; zone transfers from your master to your slaves; and the domain and host information for the zones you control. BIND puts this in simple text files, but some implementations use an RDBMS, LDAP, or even the Windows name server thing that I forget already! Any domain can have many servers, but only one should be the master and be the one updated.
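BIND's "simple text files" look roughly like this. This is a hypothetical zone for example.com with made-up addresses and timers; the SOA serial is what tells slaves a transfer is due:

```
$TTL 3600
example.com.  IN SOA  ns1.example.com. hostmaster.example.com. (
                  2024010101 ; serial - slaves transfer when this increases
                  7200       ; refresh
                  900        ; retry
                  1209600    ; expire
                  3600 )     ; negative-caching TTL
              IN NS   ns1.example.com.   ; master
              IN NS   ns2.example.com.   ; slave
ns1           IN A    192.0.2.1
ns2           IN A    192.0.2.2
ftp           IN A    192.0.2.20
```

You edit the master's file (bumping the serial), and the slaves pull the updated zone.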
DNS is very simple for the query: UDP packets to port 53 on one unconnected socket, and one packet in drives one packet out, generally. Lost packets are not a big deal, as the end client will time out and resend its query. DNS server internal state involves remembering recursion requests not yet filled, so that when final or partial answers arrive, the answer or the next question, respectively, can be sent. Zone transfers move the zone data from master to slave on the same port number but over TCP port 53 (slaves pull, as I recall). Security gets hacked when unsolicited bogus packets arrive and are trustingly accepted, poisoning the cache. DNSSEC ensures the records really came from the zone's owner, trusted by a chain of signatures from the root down.
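To see how little is on the wire, here is a sketch that builds a minimal query packet by hand: a 12-byte header, then the name as length-prefixed labels, then type and class. The query ID is arbitrary:

```python
import struct

# Build a minimal DNS query packet by hand: 12-byte header, then the name
# encoded as length-prefixed labels, then QTYPE and QCLASS.
def build_query(name, qid=0x1234):
    # flags 0x0100 = recursion desired; one question, no other records
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    labels = name.rstrip(".").split(".")
    qname = b"".join(bytes([len(l)]) + l.encode() for l in labels) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

pkt = build_query("ftp.boulder.ibm.com")
print(pkt[12:].hex())  # the length-prefixed labels, ending in a zero byte
```

Sending that single packet via sendto() to a server's UDP port 53 and reading one packet back is, give or take retries, the whole client protocol.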
Firewall DNS is common: the hosts inside, either end clients or internal DNS servers supporting the end clients (and sparing the firewall that load), possibly on unroutable addresses like 10.*, are not exposed as they seek IP addresses on the Internet. Your hosts reachable from the Internet can have their names hosted out there, although you need an outside-visible backup server or two for reliability if not bandwidth. Internal DNS can tell lies, sending internal apps to a firewall for proxy access to the real hosts outside. Since firewall tasks involve a lot of reverse DNS, having a server handy speeds things up and reduces network load.
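The "telling lies" trick (split-horizon DNS) amounts to a simple override: internal names get real answers, anything else gets the proxy's address. All names and addresses here are made up:

```python
# Sketch of split-horizon ("lying") internal DNS: internal names resolve
# normally, outside names resolve to the firewall proxy. Made-up data.
INTERNAL_ZONE = {"intranet.corp.": "10.0.0.5"}
PROXY_IP = "10.0.0.1"   # the firewall proxy internal apps must go through

def internal_lookup(name):
    if name in INTERNAL_ZONE:
        return INTERNAL_ZONE[name]   # honest answer for an internal host
    return PROXY_IP                  # the "lie": outside names -> the proxy

print(internal_lookup("www.ibm.com."))  # -> 10.0.0.1, the proxy
```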
DNS can provide failover reliability if each app server is also a DNS server for itself. Clients skip over dead DNS servers looking for a live one, and each live DNS server answers that it is the app server. Since server choice is effectively random, the load spreads somewhat evenly across all the live servers.
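That client-side behavior can be sketched as: shuffle the server list, skip the dead ones, take the first answer. The resolvers here are stand-in callables, not real sockets:

```python
import random

# Sketch of the failover trick: clients try DNS servers in random order and
# skip dead ones. The resolvers are stand-in callables, not real servers.
def query_any(resolvers, name):
    for r in random.sample(resolvers, len(resolvers)):  # random order spreads load
        try:
            return r(name)        # a live server answers with its own address
        except OSError:           # dead server: "time out" and try the next
            continue
    raise OSError("no DNS server answered")

def dead(name):
    raise OSError("timeout")

def alive(name):
    return "192.0.2.7"

print(query_any([dead, alive], "app.example.com."))  # always 192.0.2.7
```

The randomness does double duty: it spreads query load across the live servers and guarantees no single dead server blocks everyone.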
See, DNS is beautiful, elegant, and not so hard. Did I miss anything Google and the man pages cannot fill in?