From Wikitech

DNS Discovery is a simple dynamic service discovery system to get the closest active endpoint of a given service that is running in multiple data centers.

This solution is meant only for simple discovery entries; if more complex data needs to be dynamically driven, a Confd/etcd-managed configuration is required instead.

Active/active services

If a service is running in active/active mode, it can be contacted in any data center. In this case the entry service-name.discovery.wmnet will return the IP of the endpoint in the same data center as the host performing the resolution, if that endpoint is pooled.

For example, with both data centers pooled, a host in eqiad that resolves service-name.discovery.wmnet will get the IP of service-name.svc.eqiad.wmnet, while a host in codfw will get the IP of service-name.svc.codfw.wmnet.

If the codfw data center entry is depooled, a host in codfw will get the IP of the endpoint in eqiad, provided that one is pooled.

An active/active service requires a geoip record in gdnsd.

[Figure: Dns-discovery active-active.png]
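The selection logic described above can be sketched as follows. This is only an illustration of the behavior, not the actual gdnsd geoip plugin; the endpoint IPs are made up:

```python
# Simplified sketch of active/active resolution: prefer the endpoint in the
# client's own data center, fall back to any other pooled data center.
# (Illustrative only -- the real selection is done by gdnsd's geoip plugin.)

ENDPOINTS = {  # hypothetical IPs for service-name.svc.<dc>.wmnet
    "eqiad": "10.2.2.1",
    "codfw": "10.2.1.1",
}

def resolve_active_active(client_dc, pooled):
    """Return the endpoint IP for service-name.discovery.wmnet."""
    if pooled.get(client_dc):
        return ENDPOINTS[client_dc]   # local DC is pooled: use it
    for dc, is_pooled in pooled.items():
        if is_pooled:
            return ENDPOINTS[dc]      # otherwise, any other pooled DC
    return None                       # no DC pooled: failoid takes over

# Both DCs pooled: each host gets its local endpoint.
print(resolve_active_active("codfw", {"eqiad": True, "codfw": True}))   # 10.2.1.1
# codfw depooled: a codfw host gets the eqiad endpoint.
print(resolve_active_active("codfw", {"eqiad": True, "codfw": False}))  # 10.2.2.1
```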

Active/passive services

If a service is running in active/passive mode, it can be contacted only in the primary data center, not in the passive one. In this case the entry service-name.discovery.wmnet will always return the IP of the endpoint in the primary data center.

An active/passive service requires a metafo record in gdnsd.

[Figure: Dns-discovery active-passive.png]
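The active/passive case can be sketched in the same spirit. Again this is only an illustration, not the actual gdnsd metafo plugin, and the IPs and the fixed failover order are assumptions:

```python
# Simplified sketch of active/passive resolution: always return the pooled
# primary data center's endpoint, regardless of where the client is.
# (Illustrative only -- the real failover is done by gdnsd's metafo plugin.)

ENDPOINTS = {"eqiad": "10.2.2.1", "codfw": "10.2.1.1"}  # hypothetical IPs

def resolve_active_passive(pooled):
    """Return the endpoint IP for service-name.discovery.wmnet."""
    for dc in ("eqiad", "codfw"):     # assumed fixed failover order
        if pooled.get(dc):
            return ENDPOINTS[dc]      # first pooled DC is the primary
    return None                       # nothing pooled: failoid takes over

# Every client gets the primary, wherever it is:
print(resolve_active_passive({"eqiad": True, "codfw": False}))  # 10.2.2.1
print(resolve_active_passive({"eqiad": False, "codfw": True}))  # 10.2.1.1
```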

Read-only and read-write

If a service can handle reads in an active/active way but writes only in an active/passive way, two DNS Discovery records can be created, service-name-ro and service-name-rw, so they can be treated as two different services: one active/active and the other active/passive.

Failure scenario

To handle the failure case in which no data center is pooled for a given service, a failoid service was created that always closes the connection on any TCP port. DNS Discovery uses the failoid IPs as a fallback, so it can always return an IP and avoid negative DNS caching and similar issues. The failoid service is present in both the eqiad and codfw data centers, and the IP of the local one is returned.
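The observable behavior of failoid (accept a TCP connection, close it immediately, so clients fail fast) can be demonstrated with a minimal local sketch; the real service of course runs on dedicated hosts, not localhost:

```python
# Minimal sketch of failoid's behavior: accept any TCP connection and close it
# right away, so clients get a fast, clean failure instead of a hang.
import socket
import threading

def failoid_server(sock):
    """Accept connections on sock forever, closing each one immediately."""
    while True:
        try:
            conn, _ = sock.accept()
        except OSError:       # listening socket was closed; stop serving
            return
        conn.close()          # refuse service by closing the connection

# Listen on an ephemeral localhost port for demonstration purposes.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]
threading.Thread(target=failoid_server, args=(srv,), daemon=True).start()

# A client connecting to "failoid" sees the peer close with no data sent.
client = socket.create_connection(("127.0.0.1", port))
data = client.recv(1024)
print(data == b"")  # True: connection was closed immediately
client.close()
srv.close()
```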

How to manage a DNS Discovery service

The DNS configuration is managed in Puppet while the current pooled/depooled state and the TTL are stored in etcd and can be managed via Conftool, either from the CLI or using it as a library. For example:

  • Get the current live state of the three main MediaWiki discovery entries:
$ confctl --quiet --object-type discovery select 'dnsdisc=(appservers|api|imagescaler)-rw' get
{"codfw": {"pooled": false, "references": [], "ttl": 300}, "tags": "dnsdisc=imagescaler-rw"}
{"eqiad": {"pooled": true, "references": [], "ttl": 300}, "tags": "dnsdisc=imagescaler-rw"}
{"eqiad": {"pooled": true, "references": [], "ttl": 300}, "tags": "dnsdisc=api-rw"}
{"codfw": {"pooled": false, "references": [], "ttl": 300}, "tags": "dnsdisc=api-rw"}
{"codfw": {"pooled": false, "references": [], "ttl": 300}, "tags": "dnsdisc=appservers-rw"}
{"eqiad": {"pooled": true, "references": [], "ttl": 300}, "tags": "dnsdisc=appservers-rw"}
  • Get the current live state of the parsoid entry:
$ confctl --quiet --object-type discovery select 'dnsdisc=parsoid' get
{"eqiad": {"pooled": true, "references": [], "ttl": 300}, "tags": "dnsdisc=parsoid"}
{"codfw": {"pooled": true, "references": [], "ttl": 300}, "tags": "dnsdisc=parsoid"}
  • Depool the codfw entry of the imagescaler-ro service:
$ confctl --object-type discovery select 'dnsdisc=imagescaler-ro,name=codfw' set/pooled=false
  • Display the state of all services defined in codfw:
$ confctl --object-type discovery select name=codfw get
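The JSON-lines output of confctl is easy to post-process. As a sketch, assuming the output format shown above (here pasted as a captured string; in practice you would pipe `confctl ... get` into a script):

```python
# Sketch: turn confctl's JSON-lines output (one object per DC per service, as
# shown above) into a {service: [pooled DCs]} map.
import json

confctl_output = """\
{"codfw": {"pooled": false, "references": [], "ttl": 300}, "tags": "dnsdisc=imagescaler-rw"}
{"eqiad": {"pooled": true, "references": [], "ttl": 300}, "tags": "dnsdisc=imagescaler-rw"}
{"eqiad": {"pooled": true, "references": [], "ttl": 300}, "tags": "dnsdisc=parsoid"}
{"codfw": {"pooled": true, "references": [], "ttl": 300}, "tags": "dnsdisc=parsoid"}
"""

def pooled_dcs(lines):
    """Map each discovery service name to the list of DCs where it is pooled."""
    services = {}
    for line in lines.splitlines():
        obj = json.loads(line)
        service = obj.pop("tags").removeprefix("dnsdisc=")
        for dc, state in obj.items():     # remaining key is the DC name
            if state["pooled"]:
                services.setdefault(service, []).append(dc)
    return services

print(pooled_dcs(confctl_output))
# {'imagescaler-rw': ['eqiad'], 'parsoid': ['eqiad', 'codfw']}
```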

Add a service to production

In order to add a service named example to production:

  1. In operations/puppet, add an entry for example to conftool-data/discovery/services.yaml and hieradata/common/service.yaml. Pay particular attention to the value of the boolean active_active. Example. Merge the change and run puppet on all authdns servers: sudo -i cumin 'A:dns-auth' run-puppet-agent
  2. Add discovery entries to templates/wmnet in operations/dns. Make sure you add a metafo entry for active/passive services, and a geoip entry for active/active ones. Add an entry to utils/mock_etc/discovery-metafo-resources (active/passive) or utils/mock_etc/discovery-geo-resources (active/active). Compare this active/passive example with an active/active one.
  3. Pool one DC for active/passive services: confctl --object-type discovery select 'dnsdisc=example,name=eqiad' set/pooled=true, or both in case of active/active: confctl --object-type discovery select 'dnsdisc=example' set/pooled=true
  4. Merge the DNS change. Choose one authoritative DNS server (e.g. authdns1001.wikimedia.org) and run sudo -i authdns-update

Remove a service from production

With the goal of removing a service named example from production:

  1. Remove any reference to example.discovery.wmnet and example.svc.{eqiad,codfw}.wmnet from the configuration of other services
  2. Remove the discovery entries for example.discovery.wmnet as well as the geo-config-test part from DNS. Example
  3. Remove discovery's hieradata entries from the puppet repo (hieradata/common/discovery.yaml). Example
  4. Downtime the LVS endpoints in icinga
  5. Remove the lvs configuration from hiera (Example) and then EITHER:
    1. move the hosts to role::spare, or
    2. remove role::lvs::realserver from the hosts configuration
  6. (Optional) Run puppet on icinga.wikimedia.org
  7. Run puppet on the affected load balancers and rolling-restart pybal. To identify which load balancers need to be restarted, look at the class attribute of the service being removed on hieradata/common/lvs/configuration.yaml and see which lvs hosts belong to that class.
  8. (Optional) PyBal does not automatically remove ipvsadm services once they're gone from configuration. That can be done by hand with ipvsadm
  9. Remove conftool-data entries. Example
  10. Remove example.svc.{eqiad,codfw}.wmnet entries from DNS Example


When adding a new DNS Discovery service, the "discrepancy check" alert might fire; in this case the check's code needs updating to cater for the new service. See for example Ib20391fdb25

If you reached this page from Icinga and you want to know which services to check, you have two ways:

  • Click on the Icinga alert to expand it and you should see the complete message, containing the services list.
  • Execute the Nagios check directly on one of the authdns servers.

The discrepancy check might also fire during regular maintenance tasks. This alert IS NOT necessarily an actionable one. Yes, we know that non-actionable alerts are not good per se. The reasoning behind its addition is that we don't currently have any other way of knowing that our discovery state is diverging from what we had codified as the "optimum".

Please DO NOT ACK this alert under any circumstances. Rather, reach out to the relevant teams to figure out why things are diverging; there's probably a task already.