In the vein of infrastructure as code, OctoDNS provides a set of tools & patterns that make it easy to manage your DNS records across multiple providers. The resulting config can live in a repository and be deployed just like the rest of your code, maintaining a clear history and using your existing review & workflow.
The architecture is pluggable and the tooling is flexible, making it applicable to a wide variety of use-cases. Effort has been made to make adding new providers as easy as possible. In the simple case that involves writing a single class and a couple hundred lines of code, most of which is translating between the provider's schema and OctoDNS's. More on some of the ways we use it and how to go about extending it below and in the /docs directory.
It is similar to Netflix/denominator.
Running through the following commands will install the latest release of OctoDNS and set up a place for your config files to live. To determine whether provider-specific requirements are necessary, see the Supported providers table below.
```shell
$ mkdir dns
$ cd dns
$ virtualenv env
...
$ source env/bin/activate
$ pip install octodns <provider-specific-requirements>
$ mkdir config
```
We start by creating a config file to tell OctoDNS about our providers and the zone(s) we want it to manage. Below we're setting up a `YamlProvider` to source records from our config files and both a `DynProvider` and a `Route53Provider` to serve as the targets for those records. You can have any number of zones set up, and any number of sources of data and targets for records for each. You can also have multiple config files that make use of separate accounts and each manage a distinct set of zones. We'll focus on a single config file, `./config/production.yaml`.
```yaml
---
manager:
  max_workers: 2

providers:
  config:
    class: octodns.provider.yaml.YamlProvider
    directory: ./config
    default_ttl: 3600
    enforce_order: True
  dyn:
    class: octodns.provider.dyn.DynProvider
    customer: 1234
    username: 'username'
    password: env/DYN_PASSWORD
  route53:
    class: octodns.provider.route53.Route53Provider
    access_key_id: env/AWS_ACCESS_KEY_ID
    secret_access_key: env/AWS_SECRET_ACCESS_KEY

zones:
  example.com.:
    sources:
      - config
    targets:
      - dyn
      - route53
```
`class` is a special key that tells OctoDNS which Python class should be loaded. Any other keys will be passed as configuration values to that provider. In general, any sensitive or frequently rotated values should come from environment variables. When OctoDNS sees a value that starts with `env/` it will look up that name in the process's environment and pass the result along.
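That `env/` substitution can be sketched in a few lines of Python; `resolve` here is a hypothetical helper illustrating the behavior described above, not OctoDNS's actual implementation:

```python
import os

def resolve(value):
    # Values prefixed with "env/" are looked up in the process's
    # environment; everything else passes through unchanged.
    # (Sketch of the behavior described above, not octodns's real code.)
    if isinstance(value, str) and value.startswith('env/'):
        return os.environ[value[len('env/'):]]
    return value

os.environ['DYN_PASSWORD'] = 'hunter2'
print(resolve('env/DYN_PASSWORD'))  # -> hunter2
print(resolve(1234))                # -> 1234, non-env values are untouched
```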
Further information can be found in the docstring of each source and provider class.
The `max_workers` key in the `manager` section of the config enables threading to parallelize the planning portion of the sync.
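A rough sketch of what that parallelism amounts to, with hypothetical names rather than OctoDNS internals: each zone's plan is computed on a worker thread from a pool sized by `max_workers`.

```python
from concurrent.futures import ThreadPoolExecutor

def plan_zone(zone):
    # Stand-in for the real planning step, which queries each target
    # provider for the zone's current state and diffs it against the
    # desired records from the sources.
    return (zone, 'plan for %s' % zone)

zones = ['example.com.', 'example.net.', 'example.org.']

# max_workers: 2 in the config corresponds to a pool like this.
with ThreadPoolExecutor(max_workers=2) as pool:
    plans = dict(pool.map(plan_zone, zones))
```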
Now that we have something to tell OctoDNS about our providers & zones, we need to tell it about our records. We'll keep it simple for now and just create a single `A` record at the top-level of the domain.
```yaml
---
'':
  ttl: 60
  type: A
  values:
    - 1.2.3.4
    - 1.2.3.5
```
Further information can be found in Records Documentation.
We're ready to do a dry-run with our new setup to see what changes it would make. Since we're pretending here, we'll act like there are no existing records for `example.com.` in our accounts on either provider.
```shell
$ octodns-sync --config-file=./config/production.yaml
...
********************************************************************************
* example.com.
********************************************************************************
* route53 (Route53Provider)
*   Create <ARecord A 60, example.com., [u'1.2.3.4', u'1.2.3.5']>
*   Summary: Creates=1, Updates=0, Deletes=0, Existing Records=0
* dyn (DynProvider)
*   Create <ARecord A 60, example.com., [u'1.2.3.4', u'1.2.3.5']>
*   Summary: Creates=1, Updates=0, Deletes=0, Existing Records=0
********************************************************************************
...
```
There will be other logging information presented on the screen, but successful runs of sync will always end with a summary like the above for any providers & zones with changes. If there are no changes, a message saying so will be printed instead. Above we're creating a new zone in both providers, so they show the same change, but that doesn't always have to be the case. If one of them had started out with a different state, you would see the changes OctoDNS intends to make to sync them up.
WARNING: OctoDNS assumes ownership of any domain you point it to. When you tell it to act it will do whatever is necessary to try and match up states including deleting any unexpected records. Be careful when playing around with OctoDNS. It's best to experiment with a fake zone or one without any data that matters until you're comfortable with the system.
Now it's time to tell OctoDNS to make things happen. We'll invoke it again with the same options and add `--doit` on the end to tell it that this time we actually want it to try and make the specified changes.
```shell
$ octodns-sync --config-file=./config/production.yaml --doit
...
```
The output here would be the same as before, with a few more log lines at the end as it makes the actual changes. After that, the config in Route53 and Dyn should match what's in the YAML file.
In the above case we manually ran OctoDNS from the command line. That works and it's better than heading into the provider GUIs and making changes by clicking around, but OctoDNS is designed to be run as part of a deploy process. The implementation details are well beyond the scope of this README, but here is an example of the workflow we use at GitHub. It follows the way GitHub itself is branch deployed.
The first step is to create a PR with your changes.
Assuming the code tests and config validation statuses are green the next step is to do a noop deploy and verify that the changes OctoDNS plans to make are the ones you expect.
After that comes a set of reviews. One from a teammate who should have full context on what you're trying to accomplish and visibility into the changes you're making to do it. The other is from a member of the team here at GitHub that owns DNS, mostly as a sanity check and to make sure that best practices are being followed. As much of that as possible is baked into `octodns-validate`.
After the reviews it's time to branch deploy the change.
If that goes smoothly, you again see the expected changes, and verify them with `octodns-report`, you're good to hit the merge button. If there are problems you can quickly do a `.deploy dns/master` to go back to the previous state.
Very few situations will involve starting with a blank slate which is why there's tooling built in to pull existing data out of providers into a matching config file.
```shell
$ octodns-dump --config-file=config/production.yaml --output-dir=tmp/ example.com. route53
2017-03-15T13:33:34 INFO  Manager __init__: config_file=tmp/production.yaml
2017-03-15T13:33:34 INFO  Manager dump: zone=example.com., sources=('route53',)
2017-03-15T13:33:36 INFO  Route53Provider[route53] populate: found 64 records
2017-03-15T13:33:36 INFO  YamlProvider[dump] plan: desired=example.com.
2017-03-15T13:33:36 INFO  YamlProvider[dump] plan: Creates=64, Updates=0, Deletes=0, Existing Records=0
2017-03-15T13:33:36 INFO  YamlProvider[dump] apply: making changes
```
The above command pulled the existing data out of Route53 and placed the results into `tmp/example.com.yaml`. That file can be inspected and moved into `config/` to become the new source. If things are working as designed, a subsequent noop sync should show zero changes.
| Provider | Requirements | Record Support | Dynamic | Notes |
|----------|--------------|----------------|---------|-------|
| AzureProvider | azure-mgmt-dns | A, AAAA, CAA, CNAME, MX, NS, PTR, SRV, TXT | No | |
| Akamai | edgegrid-python | A, AAAA, CNAME, MX, NAPTR, NS, PTR, SPF, SRV, SSHFP, TXT | No | |
| CloudflareProvider | | A, AAAA, ALIAS, CAA, CNAME, MX, NS, SPF, SRV, TXT | No | CAA tags restricted |
| ConstellixProvider | | A, AAAA, ALIAS (ANAME), CAA, CNAME, MX, NS, PTR, SPF, SRV, TXT | No | CAA tags restricted |
| DigitalOceanProvider | | A, AAAA, CAA, CNAME, MX, NS, TXT, SRV | No | CAA tags restricted |
| DnsMadeEasyProvider | | A, AAAA, ALIAS (ANAME), CAA, CNAME, MX, NS, PTR, SPF, SRV, TXT | No | CAA tags restricted |
| DnsimpleProvider | | All | No | CAA tags restricted |
| DynProvider | dyn | All | Both | |
| EtcHostsProvider | | A, AAAA, ALIAS, CNAME | No | |
| GoogleCloudProvider | google-cloud-dns | A, AAAA, CAA, CNAME, MX, NAPTR, NS, PTR, SPF, SRV, TXT | No | |
| MythicBeastsProvider | Mythic Beasts | A, AAAA, ALIAS, CNAME, MX, NS, SRV, SSHFP, CAA, TXT | No | |
| Ns1Provider | ns1-python | All | Yes | No CNAME support, missing NA geo target |
| OVH | ovh | A, AAAA, CAA, CNAME, MX, NAPTR, NS, PTR, SPF, SRV, SSHFP, TXT, DKIM | No | |
| PowerDnsProvider | | All | No | |
| Rackspace | | A, AAAA, ALIAS, CNAME, MX, NS, PTR, SPF, TXT | No | |
| Route53 | boto3 | A, AAAA, CAA, CNAME, MX, NAPTR, NS, PTR, SPF, SRV, TXT | Both | CNAME health checks don't support a Host header |
| Selectel | | A, AAAA, CNAME, MX, NS, SPF, SRV, TXT | No | |
| Transip | transip | A, AAAA, CNAME, MX, SRV, SPF, TXT, SSHFP, CAA | No | |
| AxfrSource | | A, AAAA, CNAME, MX, NS, PTR, SPF, SRV, TXT | No | read-only |
| ZoneFileSource | | A, AAAA, CNAME, MX, NS, PTR, SPF, SRV, TXT | No | read-only |
| TinyDnsFileSource | | A, CNAME, MX, NS, PTR | No | read-only |
| YamlProvider | | All | Yes | config |
You can check out the source and provider directory to see what's currently supported. Sources act as a source of record information. AxfrSource and TinyDnsFileSource are currently the only OSS sources, though we have several others internally that are specific to our environment. These include something to pull host data from gPanel, and a similar provider that sources information about our network gear to create both `A` & `PTR` records for their interfaces. Things that might make good OSS sources include an `ElbSource` that pulls information about AWS Elastic Load Balancers and dynamically creates `CNAME`s for them, or an `Ec2Source` that pulls instance information so that records can be created for hosts, similar to how our gPanel source works.
Most of the things included in OctoDNS are providers, the obvious difference being that they can serve as both sources and targets of data. We'd really like to see this list grow over time so if you use an unsupported provider then PRs are welcome. The existing providers should serve as reasonable examples. Those that have no GeoDNS support are relatively straightforward. Unfortunately most of the APIs involved to do GeoDNS style traffic management are complex and somewhat inconsistent so adding support for that function would be nice, but is optional and best done in a separate pass.
The `class` key in the providers config section can be used to point to arbitrary classes in the Python path, so internal or third-party providers can easily be included with no coordination beyond getting them into `PYTHONPATH`, most likely installed into the virtualenv with OctoDNS.
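Loading a provider from a dotted path like that boils down to a standard dynamic import. A minimal sketch (the helper name is hypothetical, not OctoDNS's actual loader):

```python
import importlib

def load_class(dotted_path):
    # 'octodns.provider.yaml.YamlProvider' splits into the module
    # 'octodns.provider.yaml' and the class name 'YamlProvider'.
    # Any importable class works, which is what lets third-party
    # providers plug in via PYTHONPATH.
    module_name, _, class_name = dotted_path.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Using a stdlib class so the sketch runs anywhere:
cls = load_class('collections.OrderedDict')
```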
While the primary use-case is to sync a set of yaml config files up to one or more DNS providers, OctoDNS has been built in such a way that you can easily source and target things arbitrarily. As a quick example, the config below would sync `githubtest.net.` from Route53 to Dyn.
```yaml
---
providers:
  route53:
    class: octodns.provider.route53.Route53Provider
    access_key_id: env/AWS_ACCESS_KEY_ID
    secret_access_key: env/AWS_SECRET_ACCESS_KEY
  dyn:
    class: octodns.provider.dyn.DynProvider
    customer: env/DYN_CUSTOMER
    username: env/DYN_USERNAME
    password: env/DYN_PASSWORD

zones:
  githubtest.net.:
    sources:
      - route53
    targets:
      - dyn
```
Internally we use custom sources to create records based on dynamic data that changes frequently without direct human intervention. An example of that might look something like the following. For hosts this mechanism is janitorial, run periodically, making sure the correct records exist as long as the host is alive and ensuring they are removed after the host is destroyed. The host provisioning and destruction processes do the actual work to create and destroy the records.
```yaml
---
providers:
  gpanel-site:
    class: github.octodns.source.gpanel.GPanelProvider
    host: 'gpanel.site.github.foo'
    token: env/GPANEL_SITE_TOKEN
  powerdns-site:
    class: octodns.provider.powerdns.PowerDnsProvider
    host: 'internal-dns.site.github.foo'
    api_key: env/POWERDNS_SITE_API_KEY

zones:
  hosts.site.github.foo.:
    sources:
      - gpanel-site
    targets:
      - powerdns-site
```
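At its core, a custom source like the gPanel one above only needs to be able to populate a zone with records. The sketch below uses a plain dict in place of a real zone/record model to stay self-contained; none of these names are OctoDNS's actual base classes:

```python
class HostDatabaseSource:
    """Hypothetical source that turns a host inventory into A records."""

    def __init__(self, hosts):
        # hosts: mapping of short hostname -> IP address, standing in
        # for whatever gPanel-like inventory system you'd query.
        self._hosts = hosts

    def populate(self, zone):
        # zone: a plain dict of record name -> record data here, rather
        # than octodns's Zone/Record objects.
        for name, ip in self._hosts.items():
            zone[name] = {'type': 'A', 'ttl': 60, 'values': [ip]}

source = HostDatabaseSource({'web-1': '10.0.0.1', 'web-2': '10.0.0.2'})
zone = {}
source.populate(zone)
```

Run periodically against a target provider, a source like this gives you the janitorial behavior described above: records exist while the host is in the inventory and disappear once it's removed.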
Please see our contributing document if you would like to participate!
OctoDNS is licensed under the MIT license.
The MIT license grant is not for GitHub's trademarks, which include the logo designs. GitHub reserves all trademark and copyright rights in and to all GitHub trademarks. GitHub's logos include, for instance, the stylized designs that include "logo" in the file title in the following folder: https://github.com/github/octodns/tree/master/docs/logos/
GitHub® and its stylized versions and the Invertocat mark are GitHub's Trademarks or registered Trademarks. When using GitHub's logos, be sure to follow the GitHub logo guidelines.