
PowerDNS: 2016 in review


Hi everyone,

As 2016 draws to a close, we’d like to share a few words on what has been achieved over the past year, our second year within Open-Xchange. This post will cover both our technical and commercial efforts, including the PowerDNS Platform which provides per-subscriber malware filtering & parental control. And, we are hiring!

At the end of 2015, we released ‘Technology Preview Releases’ of PowerDNS Authoritative Server 4, PowerDNS Recursor 4 and dnsdist 1.0. This was done to somewhat keep our promise of releasing those versions in 2015, but fell short of what we had hoped to achieve.

Now at the end of 2016 the news is a lot better. The actual 4.0 and 1.0 (dnsdist) releases have happened and are being deployed far faster than we’d been hoping for. This is probably due to some of the exciting new features:

  • RPZ for security & DNS filtering purposes (including IXFR)
  • dnsdist for reliability, flexibility and DoS protection
  • pdnsutil edit-zone for a pretty awesome way to edit DNS zones
  • DNSSEC validation in Recursor
  • Vastly more powerful Lua engines
  • ALIAS record type that now powers DNSSEC for many .GOV domains (including the White House!)

A notable DNSSEC deployment is over at our friends of xs4all who not only sign domains with the PowerDNS Authoritative Server, but recently have also turned on validation on their PowerDNS Recursors for their large userbase.

4.0 and dnsdist were both part of a ‘spring cleaning’ exercise. It is good to realize how rare it is for a software project to go through such an exercise. 4.0 and dnsdist are based on a much cleaned up and improved codebase.

We are also very grateful for our community that stepped up to contribute to 4.x in the form of code, great bug reports, design ideas, documentation and actual bug fixes. Our meagre offering of ‘PowerDNS Crew’ mugs is the least we could do!

Some stats that bear out the community involvement: in 2016, our GitHub repository was forked over 100 times, yielding almost 1000 pull requests, most of which were merged, for a total of over 2500 new commits. These commits closed 1300 issue tickets.

As you may recall, since 2015 PowerDNS is part of OX, together with our cousins from Dovecot. When we announced the merger, some voiced fear about what this would mean for PowerDNS. We can now safely say that the state of the PowerDNS source in 2016 is way stronger than it was in 2015.

Besides finishing the spring cleaning of our open source products, 2016 also saw the release of the PowerDNS Platform which, unusually for us, is not fully open source. We explained this in our blog post as follows:

Putting it more strongly: we have learned that many organizations simply no longer have the time or desire to assemble all the technologies themselves around our Open Source products.

We will therefore be marketing the additional functionalities we have been delivering to our customers as a product tentatively called the “PowerDNS Platform”

The “PowerDNS Platform” as we ship it consists of our core unmodified Open Source products, plus loads of other open source technologies, combined with a management shell that is not Open Source and that we in fact sell.

The PowerDNS Platform is described here. Feedback on the move to supply the Platform has been good, both from our commercial users and from the PowerDNS development  and wider DNS community, for which we are grateful.

Now at the end of 2016 we can report that the PowerDNS Platform has been selected to provide a malware & parental control enabled DNS solution for over 10 million Internet subscribers in Europe. We will be displacing a fully closed solution, which is a win for an open internet.

In addition, this commercial progress provides a healthy & sustainable basis on which to continue to develop the PowerDNS nameservers and dnsdist.

PowerDNS.org

We have regained control over powerdns.org. As outlined in our blogpost:

Recently we decided it was time to get the .org back anyhow and after negotiating for a few days we finally paid up, and shortly after that we were back in control of powerdns.org, at a cost of $1000.

This personally left me with a bad aftertaste since effectively we have paid a chain of people that specialise in taking over domains for ransom purposes.


To compensate for all this, we’ve decided to donate €1000 to the Doctors without Borders charity.

Mugs

We have shipped close to 500 PowerDNS Release mugs to contributors, friends and conference visitors. If you missed out on our giveaway, you can order PowerDNS mugs online from our friends over at Mugbug, who have been an absolute joy to work with.

Root-server speedup

We also had a good time working with the fine people of the RIPE NCC. Anand Buddhdev there decided to do some benchmarking to determine the root-server suitability of a bunch of nameservers. And lo, during his testing, he found that PowerDNS 4.0 was not very suitable. After a good month of investigations & improvements, we managed to achieve a 400% speedup in the PowerDNS Authoritative Server which actually also helped the PowerDNS Recursor.

We shared our learnings on modern optimization in this Medium post which, at >10k visits, is the second best-read post we have ever done. These speedups will be available in the 4.1 releases of our software.

People

PowerDNS grew this year! Open-Xchange gained a product manager (Alexander ter Haar) and we are also benefiting greatly from Nico Cartron (previously of EfficientIP) and Andrea Tosatto who are helping with automation, deployability and pre-sales work. In addition, we continue to work happily with members of the extended PowerDNS family who we contract with for development, training, documentation and professional services.

But… it is not enough. We are still looking to fill two permanent positions, one in professional services, one in front-end development with a smattering of backend. For more details, please head to our careers page.

Finally

Thank you for being involved with PowerDNS, the software and the community. Reading this post to the end means you really care.🙂

We wish you a great 2017!



PowerDNS Jobs, 4.1 roadmap, DNSSEC research


Hi everyone,

In this post, we want to mention a few things: PowerDNS Jobs, 4.1 plans & some DNSSEC research.

First, PowerDNS is growing rapidly as more and more large scale service providers replace closed DNS systems with PowerDNS, especially for security-enhanced DNS and “parental control”. More on this PowerDNS Platform product can be found on the Open-Xchange website and here.

To support this growth, we have two job openings currently. Full details are here, brief descriptions:

Solution Engineer

Daily activities alternate between working on customer issues and actual Professional Services for customer implementations (both on-site and off-site). As Solution Engineer (with a focus on PowerDNS) you will work closely with the PowerDNS development team, as well as with other parts of Open-Xchange and Dovecot development, sales, and Product Management teams from within a European Services team.

We think Support & Implementation is a great step into a promising career. We are specifically looking for employees willing to learn quickly while delivering great support and service, while keeping an eye towards growing within the Global Services department or into different roles in the larger Open-Xchange organisation.

Versatile frontend developer with moderate middleware skills

We are looking for people with one or more of the following skills:

  • Modern web development (key words are AngularJS, JSON, RESTful, D3.js, Backbone and other frameworks that aren’t TOO hip)
  • Django
  • Ability to enhance middleware in Python
  • Ability to propose changes to core C++ code and make small additions
  • Automated UI testing

Full details and how to apply can be found here.

4.1 plans

We have started the process of 4.1 release planning. We have identified a number of areas that need to be addressed, but your input is most welcome. The 4.0 roadmap process was rather successful, but only because users vocally reminded us of what was missing.

So please let us know: what are we simply not talking about that you think is vital for PowerDNS? If we are not doing something, it is probably because we don’t know that you need it! Please send whatever you are missing to powerdns-ideas@powerdns.com.

DNSSEC research

We wrote some perhaps interesting stuff on DNSSEC here:
https://ds9a.nl/hypernsec3/

With this technique, we’ve been able to measure the DNSSEC penetration on all top level domains (including co.uk and com.br). The list is here: https://powerdns.org/dnssec-stats/, and here are the top domains:

[Figure: the top DNSSEC-signed domains]

All in all we have found there are around 7.4 million signed DNSSEC domains.

Given what we know of the zones involved (.se, .nl, .de, .be), it looks like the majority of these are signed and mostly served by PowerDNS.

 


PowerDNS Recursor 4.1 Development Plans


Hi everyone,

In this message, we ask you to look at our intended PowerDNS Recursor 4.1 development plan. The 4.0 release train has been very successful and reliable for a major ‘.0’ release and is seeing wide production use, including DNSSEC validation for millions of clients.

However, we have found some things that need improving for the 4.1 release.  This is the focus for 4.1: general improvement of quality, rounding out of features, and adding a few specific new features.

We ask you to take a REAL good look at what we intend to do. It is entirely possible that you are running into issues and challenges you are sure we know about already, when we in fact don’t. So if the PowerDNS Recursor is somehow not making you happy, and what ails you is not in the list below, we would LOVE to hear from you!

We are aiming for a June release of Recursor 4.1, but depending on developments this might be earlier or later, and possibly not with all features communicated below. This post is not a roadmap you can rely on. If you need to rely on certain features appearing by a certain time, please head to www.powerdns.com/contactform.html – for commercially supported customers we regularly commit to dates & features.

Already addressed since the last 4.0 release, so no need to ask for these:

github.com/PowerDNS/pdns/issues/

#4988 – Add `use-incoming-edns-subnet` to process and pass along ECS
#4990 – Native SNMP support for Recursor
#5058 – Faster RPZ updates
#4873 – Ed25519 algorithm support
#4972 – 2017 root KSK added
#4924 – EDNS Client Subnet tuning & length configuration

All issues scheduled for 4.1 can be viewed on the rec-4.1.0 milestone on GitHub github.com/PowerDNS/pdns/milestone/7

Important highlights:

Improvements:
#5077 – DNSSEC validation is in need of a refactor (ongoing)
#4000 – And other tickets: more love & performance for RPZ

New features:
#5079 – EDNS Client Subnet port number
#5076 – RPZ persistency
#440 – DNS prefetching
#4662 – Continue serving expired cache data if all auths are down

If you want to help, please check out the full milestone listing github.com/PowerDNS/pdns/milestone/7 and see if (your) older issues might have been addressed by now.

Also, if you have an opinion on certain fixes, features or improvements, please add them to the GitHub issues so we learn about your concerns! You can also weigh in on our mailing lists.

Thanks!


OX Summit & other conferences


Hi everyone,

As we are working on the 4.1 & 1.2 releases, please know you can also meet us in real life! We are just back from the IETF in Prague; here is a list of other places where we will be present:

We hope to meet you there!

DNS performance metrics: the logarithmic percentile histogram


DNS performance is always a hot topic. No DNS-OARC, RIPE or IETF conference is complete without new presentations on DNS performance measurements.

Most of these benchmarks focus on denial-of-service resistance: what is the maximum query load that can be served, and this is indeed a metric that is good to know.

Less discussed however is performance under normal conditions. Every time a nameserver is slow, a user somewhere is waiting. And not only is a user waiting, some government agencies, notably the UK’s OFCOM, take a very strong interest in DNS latencies.  In addition, in contractual relations, there is frequently the desire to specify guaranteed performance levels.

So how well is a nameserver doing?

“There are three kinds of lies: lies, damned lies, and statistics.” – unknown

It is well known that when Bill Gates walks into a bar, on average everyone inside becomes a billionaire. The average alone is therefore not sufficient to characterize the wealth distribution in a bar.

A popular and frequently better statistic is the median. 50% of numbers will be below the median, 50% will be above.  So for our hypothetical bar, if most people in there made x  a year, this would also be the median (more or less). Now that Bill Gates is there, the median shifts up only a little. In many cases, the median is a great way to describe a distribution, but it is not perfect for DNS performance. The way DNS performance impacts user experience makes it useful to compare it to ambulance arrival times.

If on average an ambulance arrives within 10 minutes of being called, this is rather good. But if this is achieved by arriving within 1 minute 95% of the time, and after 200 minutes 5% of the time, it is pretty bad news for those one in twenty cases. In other words, being very late is a lot worse than being early is good. 

The median in this case is somewhat less than one minute, and the median therefore also does not  show that we let 5% of cases wait for more than three hours.

To do better, for the ambulance, a simple histogram works well:

[Figure: histogram of ambulance arrival times]

This graph immediately makes it clear there is a problem, and that our ’10 minute average arrival time’ is misleading.

Although a late DNS answer is of course far less lethal than a late ambulance (unless you are doing the DNS for the ambulance dispatchers!), the analogy is apt. A 2 second late DNS response is absolutely useless.

Sadly, it turns out that making an arrival time graph of a typical recursive nameserver is not very informative:

[Figure: arrival-time histogram for a typical recursive nameserver, linear scale]

From this, we can see that almost all traffic arrives in one bin, likely somewhere near 0.1 milliseconds, but otherwise it doesn’t teach us a lot.

A common enough trick is to use logarithmic scales, and this does indeed show far more detail:

[Figure: the same histogram on a logarithmic scale]

From this, we can see quite a lot of structure – it appears we have a bunch of answers coming in very quickly, and also somewhat of a peak around 10 milliseconds.

But the question remains, how happy are our users? This is what we spent an outrageous amount of time on, inspired by a blog post we can no longer find. We proudly present:

The logarithmic percentile histogram

[Figure: logarithmic percentile histogram for two deployments]

So what does this graph mean? On the x-axis are the “slowest percentiles”. So for example, at x=1 we find the 1% of answers that were slowest. On the y-axis we find the average latency of the answers in that “slowest 1%” bin: around 8 milliseconds for the KPN fiber in our office, and around 90 milliseconds for a PowerDNS installation in the Middle East.

As another example, the 0.01 percentile represents the “slowest 1/10,000” of queries, and we see that these get answered in around 1200 milliseconds – at the outer edge of being useful.

On the faster side, we see that on the KPN fiber installation, 99% of queries are answered within 0.4 milliseconds on average – enough to please any regulator! The PowerDNS user in the Middle East is faring a lot less well, taking around 60 milliseconds at that point.
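For the curious, here is a minimal sketch of how such a plot can be computed. This is not the actual dnsscope implementation, and the exact binning may differ: sort the response times slowest first, cut the sorted list at logarithmically spaced percentile boundaries, and average the latencies inside each slice.

-- A sketch, not the dnsscope code: compute average latency per 'slowest percentile' bin
local function logPercentileHistogram(latenciesMsec)
  table.sort(latenciesMsec, function(a, b) return a > b end)  -- slowest first
  local n, rows, prevIdx = #latenciesMsec, {}, 0
  for _, pct in ipairs({0.001, 0.01, 0.1, 1, 10, 100}) do     -- the 'slowest percentiles'
    local idx = math.floor(n * pct / 100)
    if idx > prevIdx then
      local sum = 0
      for i = prevIdx + 1, idx do sum = sum + latenciesMsec[i] end
      rows[#rows + 1] = { percentile = pct, avgMsec = sum / (idx - prevIdx) }
      prevIdx = idx
    end
  end
  return rows
end

-- Usage with synthetic data: 99% fast answers, 1% slow outliers
local lat = {}
for i = 1, 100000 do lat[i] = (math.random() < 0.99) and 0.4 or 80 end
for _, r in ipairs(logPercentileHistogram(lat)) do
  print(string.format("slowest %g%%: %.2f ms average", r.percentile, r.avgMsec))
end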

Finally, we can spruce up the graph further with a cumulative average:

[Figure: the same graph with a cumulative average line added]

From this we see clearly that even though latencies go up for the slower percentiles, this has little impact on the average latency, ending up at 2.3 milliseconds for our KPN office fiber and 4.5 milliseconds for the Middle East installation.

So what can we do with these graphs?

Through a ton of measurements in various places, we have found the logarithmic percentile histogram to be incredibly robust. Over time, the shape of the graph barely moves, unless something really changes, for example by adding a dnsdist caching layer:

[Figure: logarithmic percentile histogram with and without a dnsdist caching layer]

We can see that dnsdist speeds up both the fastest and slowest response times, but as could be expected does not make cache misses (in the middle) any faster. The reason the slowest response times are better is that the dnsdist caching layer frees up the PowerDNS Recursor to fully focus on problematic (slow) domains.
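For context, a caching layer like the one measured above needs only a few lines of dnsdist’s Lua configuration. A minimal sketch follows; the addresses are examples and cache parameters and option syntax vary a bit between dnsdist versions:

-- dnsdist.conf sketch: listen locally, cache answers, forward misses to the Recursor
addLocal('192.0.2.53:53')          -- address the clients query (example)
pc = newPacketCache(100000)        -- cache up to 100k entries
getPool(''):setCache(pc)           -- attach the cache to the default pool
newServer('127.0.0.1:5300')        -- the PowerDNS Recursor behind dnsdist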

Another fun plot is the “worst case’ impact of DNSSEC, measured from a cold cache:

[Figure: worst-case impact of DNSSEC validation, measured from a cold cache]

As we can see from this graph, for the vast majority of cases, the impact of DNSSEC validation using the PowerDNS Recursor 4.1 is extremely limited. A rerun on a hot cache shows no difference in performance at all (which is so surprising we repeated the measurement at other deployments where we learned the same thing).

Monitoring/alerting based on logarithmic percentile histogram

As noted, the shape of these graphs is very robust. Temporary outliers barely show up for example. Only real changes in network or server conditions make the graph move. This makes these percentiles exceptionally suitable for monitoring. Setting limits on ‘1%’ and ‘0.1%’ slowest performance is both sensitive and specific: it detects all real problems, and everything it detects is a real problem.
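As an illustration only (the thresholds below are made up, not recommendations), an alert built on top of the rows produced by the histogram sketch earlier in this post could look like:

-- Sketch of an alert: flag a problem when the average latency of the slowest
-- 1% or 0.1% exceeds a chosen budget (example numbers, tune for your network)
local budgets = { [1] = 50, [0.1] = 250 }        -- percentile -> max average msec
local function latencyAlert(rows)
  for _, r in ipairs(rows) do
    local budget = budgets[r.percentile]
    if budget and r.avgMsec > budget then
      return string.format("slowest %g%% averages %.1f ms, budget is %g ms",
                           r.percentile, r.avgMsec, budget)
    end
  end
  return nil                                      -- within budget
end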

How to get these graphs and numbers

In our development branch, dnsreplay and dnsscope have gained a “--log-histogram” feature which will output data suitable for plotting. Helpfully, a gnuplot script is included in the data output that will generate graphs as shown above. A useful output mode is svg, which creates graphs suitable for embedding in web pages:

 

[Figure: example SVG output, with the median response time marked]

Note that this graph also plots the median response time, which here comes in at 21 microseconds.

Now that we have the code available to calculate these numbers, they might show up in the dnsdist webinterface, or in the metrics we generate. But for now, dnsreplay and dnsscope are where it is at.

Enjoy!

PowerDNS Authoritative: Lua Records


Hi everyone,

We are happy to share a new development with you, one that we hinted at over a year ago: Lua resource records. In this post, we ask for your help: did we get the feature right? Are we missing important things? Lua records will be part of Authoritative Server 4.2, and we need your testing and feedback! At the end of this post you will find exact instructions on how to test the new LUA records.

Note: The fine authors of the Lua programming language insist that it is Lua and not LUA. Lua means ‘moon’ in Portuguese, and it is not an abbreviation. Sadly, it is DNS convention for record types to be all uppercase. Sorry.

While PowerDNS ships with a powerful geographical backend (geoip), there was demand for broader solutions that include uptime monitoring and that can run from existing zones.

After several trials, we have settled on “LUA” resource records, which look like this:

 @   IN   LUA   A   "ifportup(443, {'52.48.64.3', '45.55.10.200'})"

When inserted in a zone with LUA records enabled, any lookups for your domain name will now return one of the listed IP addresses that listens on port 443. If one is down, only the other gets returned. If both are down, both get returned.

But if both are up, wouldn’t it be great if we could return the ‘best’ IP address for that client? Say no more:

@    IN   LUA A ( "ifportup(443, {'52.48.64.3', '45.55.10.200'}, "
                  "{selector='closest'})                          ")

This will pick the IP address closest to that of the client, according to the MaxMind database as loaded in the geoip backend. This of course also takes the EDNS Client Subnet option into account if present.

But why stop there? Merely checking if a port is open may not be enough, so how about:

@ IN LUA A ( "ifurlup('https://powerdns.com/' ,                    "
             "{'52.48.64.3', '45.55.10.200'}, {selector='closest', "
             "stringmatch='founded in the late 1990s'})            ")

This will check if the IP addresses listed actually want to serve the powerdns.com website for us, and if the content served lists a string that should be there.

The ‘closest’ selector relies on third party data, and if you are a large access provider, you may have more precise ideas where your users should go. There are various ways of doing that. One way goes like this:

www IN LUA CNAME (";if(netmask('130.161.0.0/16', '213.244.0.0/24')) "
                  "then return 'local.powerdns.com' else            "
                  "return 'generic.powerdns.com' end                ")
local IN LUA A    "ifportup(443, {'192.0.2.1', '192.0.2.2'})        "
generic IN LUA A ("ifportup(443, {'192.0.2.1', '192.0.2.2',         "
                  "'198.51.100.1'}, {selector='closest'})           ")

Note: the starting semicolon tells the Lua record that this is a multi-statement record that does not directly return record content. More specifically, PowerDNS will prepend “return ” to your statement normally.
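To illustrate, using the os.date() example that appears later in this post, the following two records should behave identically; the first relies on PowerDNS prepending the “return ” itself:

date1 IN LUA TXT "os.date()"
date2 IN LUA TXT ";return os.date()"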

Another way which works without CNAMEs, and thus at the apex, goes like this:

@ IN LUA A (";if(netmask('130.161.0.0/16', '213.244.0.0/24')      " 
            "then return ifportup(443, {'192.0.2.1', '192.0.2.2'})"
            "else return ifportup(443, {'192.0.2.1', '192.0.2.2'},"
            "'198.51.100.1'}, {selector='closest'}                ")

Doing dynamic responses at apex level is a common problem of other GSLB solutions.

To steer based on AS numbers, use if(asnum{286, 1136}), for example. Countries can be selected based on their two-letter ISO code using if(country{'BE', 'NL', 'LU'}), as sketched below.
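A sketch of what such steering could look like as a record; the AS numbers are the ones mentioned above, the IP addresses are documentation examples:

@ IN LUA A (";if(asnum{286, 1136}) then                       "
            "return ifportup(443, {'192.0.2.1', '192.0.2.2'}) "
            "else return ifportup(443, {'198.51.100.1'}) end  ")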

In the examples above we have been typing the same IP addresses a lot. To make this easier, other records can be included to define variables:

config    IN    LUA    LUA (";settings={stringmatch='Programming in Lua'} "
                            "EUips={'192.0.2.1', '192.0.2.2'}             "
                            "USAips={'198.51.100.1'}                      ")

www       IN    LUA    CNAME ( ";if(continent('EU')) then return 'west.powerdns.org' "
                               "else return 'usa.powerdns.org' end" )

usa       IN    LUA    A    ( ";include('config')                              "
                              "return ifurlup('https://www.lua.org/',        "
                              "{USAips, EUips}, settings)                    " )

west      IN    LUA    A    ( ";include('config')                              "
                              "return ifurlup('https://www.lua.org/',        "
                              "{EUips, USAips}, settings)                    " )

This shows off another feature of ifurlup: it knows about IP groups, where it prefers to give an answer from the first set of IP addresses and, if all of those are down, tries the second set, and so on. In this example, the ‘local’ set of IP addresses is listed first for both regions.

More possibilities

We use LUA records to power our ‘lua.powerdns.org’, ‘v4.powerdns.org’ and ‘v6.powerdns.org’ zones:

$ dig -t aaaa whoami.v6.powerdns.org +short
2a02:a440:b085:1:20d:b9ff:fe3f:8018
$ dig -t txt whoami-ecs.v6.powerdns.org +short @8.8.8.8
"ip: 2a00:1450:4013:c02::10a, netmask: 86.82.68.0/24"
$ dig -t loc latlon.v4.powerdns.org +short
51 37 15.236 N 5 26 31.920 E 0.00m 1m 10000m 10m
$ dig -t txt whoami.lua.powerdns.org +short
"2a02:a440:b085:1:20d:b9ff:fe3f:8018"

These queries deliver, respectively:

  • IPv6 address of your resolver (will not resolve without IPv6)
  • Any EDNS Client Subnet details over IPv6 (also works on v4.powerdns.org)
  • LOC record of where Maxmind thinks your resolver (or ECS address) is
  • A ‘pick your protocol’ equivalent of the v4 or v6 specific whoami queries

The actual records look like this:

whoami.lua     IN LUA TXT  "who:toString()"
whoami-ecs.lua IN LUA TXT  "'ip: '..who:toString()..', netmask: '..(ecswho and ecswho:toString() or 'no ECS')"
latlon.lua     IN LUA LOC  "latlonloc()"
whoami.v6      IN LUA AAAA "who:toString()"
whoami.v4      IN LUA A    "who:toString()"

Further details

Full documentation for this feature can be found here. To test, packages can be found on https://repo.powerdns.com/ where you should pick the ‘master’ repository for your distribution.

Setting up PowerDNS & Lua

Set up gsqlite3 as described here (or gmysql, gpgsql), then edit pdns.conf to include:

launch=gsqlite3,geoip
gsqlite3-database=/location/of/powerdns.sqlite
local-address=0.0.0.0
local-ipv6=::
edns-subnet-processing
log-dns-queries
loglevel=9
geoip-database-files=/usr/share/GeoIP/GeoIPCity.dat,/usr/share/GeoIP/GeoIPASNum.dat
enable-lua-record

Most of this is generic to PowerDNS. Specific for our use is loading the geoip backend and its database files, enabling the LUA record, EDNS Client Subnet processing, and some debug logging so you see what is happening. The geoip-database-files path may be different depending on your operating system.

Next up, generate a test zone, and edit it:

$ pdnsutil create-zone geo.example.com ns1.example.com
Creating empty zone 'geo.example.com'
Also adding one NS record
$ pdnsutil edit-zone geo.example.com

This will fire up an editor, and allows you to insert your first LUA record. For fun, try:

geo.example.com 3600 IN LUA TXT "os.date()"

Save, and pdnsutil will ask you if you want to apply this change. Do so, and then query your PowerDNS:

$ dig -t txt geo.example.com @127.0.0.1 +short
"Thu Dec 14 21:49:00 2017"

After this you can try the zonefiles listed above, or paste from the ‘lua.powerdns.org’, ‘v4.powerdns.org’ and ‘v6.powerdns.org’ zones.

If this does not work for you (even after reading the documentation), please find us through our Open Source page. In addition, if it does work for you but you have feedback or features you need, please also let us know through powerdns.ideas@powerdns.com.

Thanks & enjoy!

PowerDNS end of year post: Thank you!


Greetings!

2017 has been a great year for PowerDNS and Open-Xchange. In this post, we want to thank everyone that contributed, and highlight some specific things we are happy about.

HackerOne bug bounty program

After some initial problems with over-reporting of non-issues, our experience with HackerOne is awesome right now. We are very happy we have a clean process for receiving and rewarding security bugs. Various PowerDNS security releases this year have originated as HackerOne reports.

Our community

PowerDNS continues to be a vibrant community. Our IRC channel has around 240 members and our mailing lists have 1225 subscribers. Even though we are now tougher in enforcing our ‘support, out in the open’ policies, we continue to see many user queries being resolved every day, often leading to improvements in PowerDNS.

As in earlier years, 2017 has seen huge contributions from the community, not only in terms of small patches or constructive bug reports, but also in the revamping of whole subsystems. Specifically Kees Monshouwer was so important for Authoritative Server 4.1 that we would not have been able to do it without him. We hope to continue as a healthy community in 2018!

Facebook bug bounty program


PowerDNS is an active participant in keeping the internet secure. As part of our work we found a potential security problem in an important Facebook product which we reported to their bug bounty program. The bug was fixed quickly, and led to an award of $1500, with the option to turn that into a $3000 charitable donation. We have done so and supported Doctors without Borders in their work.

Our Open Source DNS friends

The DNS community is tight, and it has to be: all our software has to interoperate. New standards are developed cooperatively and problems are discussed together. We love the friendly competition that we have with our friends of CZNIC (Knot, Knot Resolver), ISC (BIND), NLNetLabs (NSD, Unbound, libraries) and others.

To a huge extent, DNS runs exclusively on Open Source software, sometimes repackaged and rebadged by commercial companies that close that Open Source software down again.

PowerDNS is proud to be part of the open DNS community, and we are grateful for the smooth & fun cooperation we experienced in 2017!

 

Open-Xchange

Since 2015, PowerDNS has been part of Open-Xchange, previously mostly known for the OX AppSuite email platform. The famous Dovecot IMAP project also joined Open-Xchange in 2015. The goal of these mergers was to allow us to focus on technology, while getting the legal, sales and marketing support to get our software out there.

In 2017 we have truly started to harvest the fruits of the merger, by simultaneously delivering important software releases as well as satisfying the needs of some very large new deployments.

We are very happy that PowerDNS not only survived the merger, but is now an important part of Open-Xchange, where we contribute to the mission of keeping the internet open.

Our users

Even without or before contributing code, operators can improve PowerDNS through great bug reports. We specifically want to thank Quad9 (a collaboration of Packet Clearing House, IBM and the Global Cyber Alliance) for taking a year-long journey with us, running dnsdist and Recursor “straight from GitHub”. Deployments sharing their experiences and problems with the PowerDNS community are vital to creating quality reliable software. Thanks!

Mattermost, the Open Source private Slack Alternative

As PowerDNS grows, we could no longer rely solely on IRC as our communication channel with developers, users and customers. Instead of moving to a third party cloud service that admits to datamining communications, we are very happy to host our own Mattermost instance. And because of PowerDNS user & contributor @42Wim, we can continue our IRC habit with matterircd.

4.1 evolution, dnsdist

In 2016 we released the 4.0 versions of the PowerDNS Authoritative Server and Recursor. As you may recall, the 4.0 releases represented a giant cleanup from the decade old frameworks found in 3.x. The 4.0 versions were a step ahead in functionality and sometimes performance, but the true gains of the new fresher codebase have now been realized in the 4.1 releases.

4.1 represents a big overhaul in caching (both Recursor and Authoritative) and DNSSEC processing (mostly Recursor). Both of these overhauls have been tested over the year by large PowerDNS deployments, and the huge amount of feedback has delivered a near flawless “battle tested” 4.1 release.

Specifically xs4all and two huge European incumbent operators have been instrumental in maturing dnsdist and our 4.1-era DNSSEC and EDNS Client Subnet implementations.

On to 2018!

In 2018 we hope to continue to improve our software and the state of the internet. See you there!

 

“The DNS Camel”, or, the rise in DNS complexity


This week was my first IETF visit. Although I’ve been active in several IETF WGs for nearly twenty years, I had never bothered to show up in person. I now realize this was a very big mistake – I thoroughly enjoyed meeting an extremely high concentration of capable and committed people. While RIPE, various NOG/NOFs and DNS-OARC are great venues as well, nothing is quite the circus of activity that an IETF meeting is. Much recommended!


Before visiting I read up on recent DNS standardization activity, and I noted a ton of stuff was going on. In our development work, I had also been noticing that many of the new DNS features interact in unexpected ways. In fact, there appears to be somewhat of a combinatorial explosion going on in terms of complexity.

As an example, DNAME and DNSSEC are separate features, but it turns out DNAME can only work with DNSSEC with special handling. And every time a new outgoing feature is introduced, like, for example, DNS cookies, new probing is required to detect authoritative servers that get confused by such newfangled stuff.

This led me to propose a last minute talk (video!) to the DNSOP Working Group, which I tentatively called “The DNS Camel, or, how many features can we add to this protocol before it breaks”. This ended up on the agenda as “The DNS Camel” (with no further explanation) which intrigued everyone greatly. I want to thank DNSOP chairs Suzanne and Tim for accommodating my talk which was submitted at the last moment!

Note: My “DNS is too big” story is far from original! Earlier work includes “DNS Complexity” by Paul Vixie in the ACM Queue and RFC 8324 “DNS Privacy, Authorization, Special Uses, Encoding, Characters, Matching, and Root Structure: Time for Another Look” by John Klensin. Randy Bush presented on this subject in 2000 and even has a slide describing DNS as a camel!

Based on a wonderful chart compiled by ISC, I found that DNS is now described by at least 185 RFCs. Some shell-scripting and HTML scraping later, I found that this adds up to 2781 printed pages, comfortably more than two copies of “The C++ Programming Language (4th edition)”. This book is not known for its brevity.

[Figure: artist impression of DNS complexity over time]

In graph form, I summarised the rise of DNS complexity as above. My claim is that this rise is not innocent. As DNS becomes more complex, the number of people that “get it” also goes down. Notably, the advent of DNSSEC caused a number of implementations to drop out (MaraDNS, MyDNS, for example).

Also, with the rise in complexity and the decrease in the number of capable contributors, the inevitable result is a drop in quality:

[Figure: orange = number of people that “get it”; green = perceived implementation quality; also lists work in the pipeline]

And in fact, with the advent of DNSSEC this is what we found. For several years, security & stability bugs in popular nameserver implementations were absolutely dominated by DNSSEC and cryptography related issues.

My claim is that we are heading for that territory again.

So how did this happen? We all love DNS and we don’t want to see it harmed in any way. Traditionally, protocol or product evolution is guided by forces pulling and pushing on it.

[Figure: actual number of RFC pages over time, growing at around 2 pages/week; the shutdown of DNSEXT is barely visible]

Requirements from operators ‘pull’ DNS in the direction of greater complexity. Implementors meanwhile usually push back on such changes because they fear future bugs, and because they usually have enough to do already. Operators, additionally, are wary of complexity: they are the ones on call 24/7 to fix problems. They don’t want their 3AM remedial work to be any harder than it has to be.

Finally, the standardization community may also find things that need fixing. Standardizers work hard to make the internet better (the new IETF motto I think), and they find lots of things that could be improved – either practically or theoretically.

In the DNS world, we have the unique situation that (resolver) operator feedback is largely absent. Only a few operators manifest themselves in the standardization community (Cloudflare, Comcast, Google, Salesforce being notably present). Specifically, almost no resolver operator (access provider) ever speaks at WG meetings or writes on mailing lists. In reality, large scale resolver operators are exceptionally wary of new DNS features and turn off whatever features they can to preserve their night time rest.


On the developer front, the DNS world is truly blessed with some of the most gifted programmers in the world. The current crop of resolvers and authoritative servers is truly excellent. DNS may well be the best served protocol in existence today. This high level of skill also has a downside however. DNS developers frequently see immense complexity not as a problem but as a welcome challenge to be overcome. We say yes to things we should say no to. Less gifted developer communities would have to say no automatically since they simply would not be able to implement all that new stuff. We do not have this problem. We’re also too proud to say we find something (too) hard.

Finally, the standardization community has its own issues. A ‘show of hands’ made it clear that almost no one in the WG session was actually on call for DNS issues. Standardizers enjoy complexity but do not personally bear the costs of that complexity. Standardizers are not on 24/7 call as there rarely is a need for an emergency 3AM standardization session!

Notably, a few years ago I was informed by RFC authors that ‘NSEC3’ was easy. We in the implementation community meanwhile were pondering that the ‘3’ in NSEC3 probably stood for the number of people that understood this RRTYPE! I can also report that as of 2018, the major DNSSEC validator implementations still encounter NSEC3 corner cases where it is not clear what the intended behaviour is.

Note that our standardizers, like our developers, are extremely smart people. This however is again a mixed blessing – this talent creates at the very least an acceptance of complexity and a desire to conquer really hard problems, possibly in very clever ways.

The net result of the various forces on DNS not being checked is obvious: more and more complex features.

Orthogonality of features

As noted above, adding a lot of features can lead to a combinatorial explosion. DNSSEC has to know about DNAME. CZNIC related the following gem they discovered during the implementation of ‘aggressive NSEC for NXDOMAIN detection’: it collides with trust-anchor signalling. The TA signalling happens in the form of a query to the root that leads to an NXDOMAIN, with associated NSEC records. These NSEC records then suppress further TA signalling, as no TA-related names apparently exist! And here two unrelated features now need to know about each other: aggressive NSEC needs to be disabled for TA signalling.

If even a limited number of features overlap (ie, are not fully orthogonal), soon the whole codebase consists of features interacting with each other.

We’re well on our way there, and this will lead to a reduction in quality, likely followed by a period of stasis where NO innovation is allowed anymore. And this would be bad. DNS is still not private and there is a lot of work to do.

Suggestions

I rounded off my talk with a few simple suggestions:

[Slide: the suggestions from the talk]

Quickly a 20 person long queue formed at the mic. It turns out that while I may have correctly diagnosed a problem, and that there is wide agreement that we are digging a hole for ourselves, I had not given sufficient thought about any solutions.

IETF GROW WG chair Job Snijders noted that the BGP-related WGs have implemented different constituencies (vendors, operators) that all have to agree. In addition, interoperable implementations are a requirement before a draft can progress to standard. This alone would cut back significantly on the flow of new standards.

Other speakers with experience in hardware and commercial software noted that in their world the commercial vendors provided ample feedback to not make life too difficult, or that such complexity would at least come at huge monetary cost. Since in open source features are free, we do not “benefit” from that feedback.

There was enthusiasm for the idea of going through the “200 DNS RFCs” and deprecating stuff we no longer thought was a good idea. This enthusiasm was more in theory than in practice though as it is known to be soul crushing work.

The concept however of reducing at least the growth in DNS complexity was very well received. And in fact, in subsequent days, there was frequent discussion about the “DNS Camel”.


And in fact, a draft has even been written that simplifies DNS by specifying DNS implementations no longer need to probe for EDNS0 support. The name of the draft? draft-spacek-edns-camel-diet-00!

I’m somewhat frightened of the amount of attention my presentation got, but happy to conclude it apparently struck a nerve that needed to be struck.

Next steps

So what are the next steps? There is a lot to ponder.

I’ve been urged by several very persuasive people to not only rant about the problem but to also contribute to the solution, and I’ve decided these people are right. So please watch this space!

 


On Firefox moving DNS to a third party


DNS lookups occur for every website visited. The processor of DNS requests gets a complete picture of what a household or phone is doing on the internet. In addition, DNS can be used to block sites or to discover if devices are accessing malware or are part of a botnet.

(for the tl;dr, please skip right to the summary at the end)

Recently, we’ve seen Cloudflare (rumoured to be heading to IPO soon) get interested in improving your DNS privacy. Through a collaboration with Mozilla, Cloudflare is offering to move Firefox DNS lookups from the subscriber’s service provider straight onto its own systems. From a variety of blog posts it appears that Mozilla is aiming to make this the new default, although we also hear the decision has not yet been taken and that other organizations beyond Cloudflare may be involved. This new DNS service will be encrypted, using a protocol called DNS over HTTPS.

We are currently living in strange times where companies are willing to offer us services for “free” in return for access to our data. This data can then be used for profiling purposes (targeted advertising) or competitive analysis (market intelligence, for example what kinds of people visit what sites etc). In this way, if you are getting something for free, you frequently aren’t the customer, you are the product.

In addition, once our data flows through a third party, it is possible for that third party to influence what we see or how well things work: Gmail moving your school newsletter to the never opened ‘Promotional’ tab, Facebook suddenly no longer displaying your updates to users unless you pay up, Outlook.com deciding that most independent email providers should end up in the spam folder.

At Open-Xchange and PowerDNS, we think further centralization of the internet is a bad thing in and of itself, so we are not happy about the idea of moving DNS to a large, central, third party. Centralization means permissionless innovation becomes harder, when it was this very permissionless innovation that gave us the internet as we know it today.

We do of course applaud giving users a choice of encrypted DNS providers. Our worry is about the plan being mulled to switch users over by default, or to ask users to make an uninformed choice to switch to “better, more private DNS”, without making sure consumers know what is going on. Because that ‘OK, Got It’ button will frequently just get clicked.

Good thing it is encrypted and secure

Beyond our worries about centralization however there are concrete reasons to think twice before changing the DNS trust model & moving queries to a third party by default.

What will change?

When a user wants to visit ‘www.wikipedia.org’, the browser first looks up the IP address for this site. As it stands, by default, the service provider nameserver is consulted for this purpose. The setting for this is hidden in the Cable/DSL/FTTH-modem or phone. In the newly proposed world, the browser would ask Cloudflare for the IP address of ‘www.wikipedia.org’. Cloudflare says it takes your privacy more seriously than telecommunication service providers do because this DNS query will be encrypted, unlike regular DNS. They also promise not to sell your data or engage in user profiling.

Cloudflare and Mozilla have set out a privacy policy that rules out any form of customer profiling. Their story is that many ISPs are doing user profiling and marketing, and that moving your DNS to Cloudflare is therefore a win for your privacy.

Interestingly, this claim cannot be true in Europe. The EU GDPR and telecom regulations greatly limit what ISPs can do with the data. Selling it on is absolutely forbidden. Service providers would be risking 4% revenue fines, because doing this secretly would be in stark violation of the GDPR, Europe’s privacy regulation.

In other countries, service providers do indeed study and use their user’s traffic patterns for marketing purposes.

So given this, under what circumstances would it be ok for Cloudflare (or any other third party) to take over our DNS by default?

Neutrality

Cloudflare is a Content Delivery Network (CDN). CDNs serve website content & videos from servers across the globe, so that content is closer to the end-user. As it stands, large scale CDNs like Akamai, Fastly, Google, Level3 and Cloudflare cooperate and coordinate intimately with service providers, to the point of co-locating caches within ISP networks to guarantee rapid delivery of content. When connecting to ‘www.whitehouse.gov’ for example, it is entirely possible to end up on an Akamai server hosted within your own service provider in the city you live in.  Only two companies were then involved in delivering that page to you: your ISP and Akamai. Neither your request, nor the response ever left your own country.

In the proposed future where Cloudflare does our DNS, all queries go through their networks first before we reach content hosted by them, or their competitors. We can legitimately wonder if Cloudflare will diligently work to protect the interests of its competitors and deliver the best service it can.

Interestingly enough, as of today, at least for KPN (a national service provider in The Netherlands) and www.whitehouse.gov this is not true: the IP address we mostly get from the KPN servers is 20% closer in terms of latency, and is reached through Internet peering. The IP address we get via Cloudflare is slower and additionally reached through IP transit, which is more expensive for both KPN and Akamai. Cloudflare is therefore slowing down access to an Akamai hosted website, at higher cost for everyone involved. Cloudflare, incidentally, explains that this is because of privacy reasons.

Any new default DNS provider should commit to working with all its competitors to deliver service that is as good as would have been provided through the service providers’ DNS.

Blocking

Any chokepoint of communications is susceptible to government blocking orders and legal procedures. In some countries the government shows up with a (long) list of what domains to block, in other countries this happens only after a series of long-winded lawsuits. In addition, child pornography researchers (& law enforcement organizations) frequently provide lists of domains they think should be blocked, and these often are.  

Local service providers typically fight attempts to block popular content, since their subscribers don’t like it. Once an international DNS provider is the default for lookups, it can also expect government orders and other legal efforts aimed to get domain names blocked.

A new default DNS provider should document its policies on how it will deal with lawsuits and government orders commanding it to block traffic. At the very least, blocks should be constrained regionally. It should also document what content it would block of its own accord.

Government interception

Without going all “Snowden” on this subject, many governments grant themselves rights to intercept foreign communications with far less oversight than if they were intercepting national traffic. In other words, citizens of country X enjoy far less privacy protection in country Y. This is not a controversial statement and is explicitly written out in many countries’ interception laws and regulations. But the upshot is that for privacy, it pays to keep DNS within the country where you are a citizen.

In addition, most countries have legislated that communications service providers can and must break their own contracts, terms and conditions to comply with government interception orders. In other words, even though a company has committed in writing to not share your data with anyone, if the government shows up, they can be forced to do so anyhow.

It may well be that a third party DNS provider operates under a regime that has an interest in the DNS traffic that gets sent to it from all over the world.

New centralised DNS providers should document which governments have interception powers over them and be honest about their chances of standing up to such interception.

Losing control

DNS is currently under control of your network provider – which could be your employer, your coffee shop or frequently, your (Internet) service provider. Enterprise environments often filter DNS for malware related traffic, blocking requests for known harmful domain names. They will also use query logs to spot infected devices. Increasingly, large scale service providers are also offering DNS based malware filtering, especially in the UK.

When moving DNS to a centralised provider, such local filtering no longer functions. Enterprise network administrators will also lose visibility into what traverses their network. From the standpoint of the individual employee this may be great but it is not what the network operator wanted.

Interestingly enough, DNS over HTTPS has specifically been designed to be hard to block, as the designers envisioned that network operators would attempt to use firewall rules to disable forms of DNS they could not monitor or control.

When asking users if they should move their DNS to a new provider, they should be reminded they may be losing protection that was previously provided to them by their service provider or employer network administrators.

Is your service provider actually spying on you?

If we want to assess the benefit of moving DNS to a third party by default, it is important to know if we are being spied upon in the first place. In some cases and in some countries, this is definitely true. In Russia and China, DNS is routinely intercepted and even changed. Also, some providers replace ‘this domain does not exist’ DNS answers by the IP address of a ‘search page’ with advertisements.

But in many places, local service providers are bound by stringent rules that forbid any spying or profiling, mostly countries that fall under the European GDPR or GDPR inspired legislation.

[Figure: a non-scientific Twitter poll]

It has been argued that users are not sophisticated enough to reason about this subject and that the DNS move should happen by default, with an opt-out for those that care. Another idea that has been raised is a startup dialogue that proposes a more secure internet experience and a ‘Got it!’ button. This clearly does not go far enough in educating users about the change they will be authorizing.

Before moving DNS to a third party, users should be surveyed if they feel their current provider is spying on them or not, and if they think the new third party DNS provider would be an improvement. The outcome will likely be different per region. This survey could then lead to a well-designed, localized, opt-in procedure.

Summarising

Having a choice of (encrypted) DNS providers is good. Mozilla is pondering moving DNS resolution to a third party by default, initially Cloudflare. Before doing so, any third party should commit to:

  • Network neutrality: promise to work with competitors to ensure performance for other CDNs does not deteriorate compared to when the service provider DNS was used
  • A policy on blocking: how will the provider deal with government blocking requests or lawsuits demanding that content be blocked?
  • Warning users the new DNS may not offer safety features they got from the network DNS provider
  • Being clear about the legislation it operates under: which governments could force it into large scale interception?

Finally, Mozilla should survey its users to find out their attitudes towards moving DNS from their current service provider to Cloudflare. To do so, those users must first be well informed about what such a move would mean. Based on the survey results, an honest consent page can be generated that makes sure users know what they are agreeing to.

We want to thank Rudolf van der Berg and Remco van Mook for their comments & input for this post. These opinions are ours alone though. 

Spoofing DNS with fragments


With some care, it turns out to be possible to spoof fake DNS responses using fragmented datagrams. While preparing a presentation for XS4ALL back in 2009, I found out how this could be done, but I never got round to formally publishing the technique. The presentation was however made available.

Update: this “discovery” has now been dated back to at least 2008, when Florian Weimer knew about it; he tells us it was communicated clearly and widely back then.

In 2013, Amir Herzberg & Haya Shulman (while at Bar Ilan University) published a paper called Fragmentation Considered Poisonous. In this paper they explain how fragmented DNS responses can be used for cache poisoning. Later that year CZNIC presented about this paper and its techniques at RIPE 67.

A stunning 72 papers cite the original article, but as of 2018 not too many people know about this cache poisoning method.

More recently, The Register reported that another team, also involving Dr Shulman (now at Fraunhofer Institute for Secure Information Technology), has been able to use fragmented DNS responses to acquire certificates for domain names whose nameservers they do not control. They were able to demonstrate this in real life, which is a remarkable achievement. Incidentally, this team includes Amit Klein who in 2008 discovered & reported a weakness in PowerDNS.

Full details will be presented at the ACM Conference on Computer and Communications Security in Toronto, October 18. This presentation will also propose countermeasures.

Meanwhile, in this post, I hope to explain a (likely) part of their technique.

Whole datagram DNS spoofing

To match bona fide DNS responses to their corresponding queries, resolvers and operating systems check:

  • Name of the query
  • Type of the query
  • Source/destination address
  • Destination port (16 bits)
  • DNS transaction ID (16 bits)

The first three items can be predictable; the last two aren’t supposed to be. Spoofing in a false response therefore means we need to guess 32 bits of randomness. To do so, the attacker needs to send the resolver lots and lots of fake answers with guesses for the destination port and the transaction ID. Over (prolonged) time, their chosen response arrives ahead of the authentic response, is accepted, and they are able to spoof a domain name. Profit.
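To make that concrete, here is a small illustrative sketch of the checks performed on every incoming response; the field names are made up and this is not code from any actual resolver:

-- Everything below must line up before a response is accepted for a pending query
local function matchesPendingQuery(q, resp)
  return q.qname         == resp.qname
     and q.qtype         == resp.qtype
     and q.serverAddress == resp.sourceAddress
     and q.localPort     == resp.destinationPort  -- 16 unpredictable bits
     and q.transactionId == resp.transactionId    -- another 16 unpredictable bits
end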


In practice this turns out to be very hard to do. The 32 bit requirement plus the short timeframe in which to send false responses means that, as far as I know, this has been demonstrated in a lab setting just once. Anecdotal reports of blindly spoofing a resolver with a fully randomized source port have not been substantiated.

Fragments

DNS queries and responses can be carried in UDP datagrams. A UDP datagram can be many kilobytes in size – far larger than what most networks can carry in a single packet. This means that a sufficiently large UDP response datagram can get split up into multiple packets. These are then called fragments.

Such fragments travel the network separately, to be joined together again on receipt.

Fragmented DNS responses happen occasionally with DNSSEC, for example in this case:

$ dig -t mx  isc.org @ams.sns-pb.isc.org +dnssec -4 +bufsize=16000
43.028963 IP 192.168.1.228.44751 > 199.6.1.30.53: 20903+ [1au] MX? isc.org. (48)
43.035379 IP 199.6.1.30.53 > 192.168.1.228.44751: 20903*- 3/5/21 
  MX mx.ams1.isc.org. 20, MX mx.pao1.isc.org. 10, RRSIG (1472)
43.035391 IP 199.6.1.30 > 192.168.1.228: ip-proto-17

The final line represents a fragment, which only notes it is UDP (protocol 17).

Matching fragments together is quite comparable to matching DNS queries to responses. Every IP packet, even a fragment, carries a 16 bit number called the IPID. This IPID is not copied from the query to the response; it is picked by the DNS responder.


On receipt, fragments are grouped by IPID, after which the checksum of the reassembled datagram is checked. If correct, the DNS response gets forwarded to the resolver process.

If we want to spoof a DNS response, we could pick a DNS query that leads to a fragmented datagram, and then try to spoof only the second fragment. On first sight, this does not appear to be much easier as we now need to guess the IPID (16 bits) and we also need to make sure the checksum of the whole datagram matches (another 16 bits). This then also requires a 32 bit guess to succeed.

However, if we send a server a DNS query, it will most of the time send the same DNS response to everyone who asks (also for fragmented answers). In other words, if the attacker wants to spoof a certain response, it will know exactly what that response looks like – with the exception of the destination port and the DNS transaction ID (32 bits).

But note that both of these unpredictable parts are in the first fragment. The second fragment is completely static, except for the IPID. Now for the clever bit.

The ‘internet checksum’ is literally just a sum. So the checksum of the entire datagram consists of the checksum of the first fragment plus the checksum of the second fragment (added together in 16-bit ones’-complement arithmetic).

This means that to make sure the whole reassembled datagram passes the checksum test, all we have to do is make sure that our fake second fragment has the same known partial checksum as the original. We can tune the checksum of our fake second fragment easily through the TTL of our chosen response record.

This leaves us with only 16 bits to guess, which given the birthday paradox is not that hard.
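
To show the additive property that makes this work, here is a toy Python sketch of my own, using stand-in byte strings rather than real DNS fragments, of the RFC 1071 internet checksum:

def ones_complement_sum(data: bytes) -> int:
    """16-bit ones'-complement sum (RFC 1071), before the final inversion."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold in the end-around carry
    return total

# Stand-in byte strings for the two fragments of a large DNS response.
# Real fragment boundaries are 8-byte aligned, so the 16-bit words line up.
frag1 = bytes(range(100))
frag2 = bytes(range(100, 180))

# The sum over the reassembled datagram equals the ones'-complement addition of
# the per-fragment sums, so a spoofed second fragment only has to keep its own
# partial sum identical to the original's, for instance by tweaking a TTL field.
whole = ones_complement_sum(frag1 + frag2)
parts = ones_complement_sum(frag1) + ones_complement_sum(frag2)
parts = (parts & 0xFFFF) + (parts >> 16)
assert whole == parts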

 

Randomness of the IPID

So how random is the IPID? Does it even represent a 16-bit challenge? According to the 2013 paper, some operating systems pick the IPID from a global counter. This means an attacker can learn the currently used IPID and predict the one used for the next response with pretty good accuracy.

Other operating systems use an IPID counter that increments per destination, which means we can’t remotely observe it. It turns out however that through clever use of multiple fragments, an attacker can still “capture” one of these counters. See the original paper for details.

Is that it?

Definitely not. Getting a certificate issued falsely using this technique requires several additional elements. First, we must be able to force many questions. Secondly, we must make sure that the original authoritative server fragments the answer just right. There are ways to do both, but they are not easy.

I await the presentation at the ACM conference in October eagerly – but I’m pretty sure it will build on the technique outlined above.

Countermeasures

In the meantime, DNSSEC does actually protect against this vulnerability, but it requires that your domain is signed and that your CA validates DNSSEC. This may not yet be the case.
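
As a quick sanity check for the “your domain is signed” part, here is a small sketch using dnspython (version 2.x assumed); a DS record in the parent zone is what signals a signed delegation:

import dns.resolver

def has_signed_delegation(name: str) -> bool:
    """True if the parent zone publishes a DS record for this name."""
    try:
        dns.resolver.resolve(name, "DS")
        return True
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

print(has_signed_delegation("powerdns.com"))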

PowerDNS Authoritative Server 4.2.0-alpha1: Lua records, ixfrdist, swagger

We’re proud to release the first alpha version of the PowerDNS Authoritative Server 4.2 series. While some users have already deployed this version straight from our package builders or master repositories, this is still a very fresh release.
4.2 represents almost a year of development over 4.1 and contains some major new features and improvements, while deprecating some functionality you may have been relying on (autoserial, for example).

Lua records

An important new feature is the support for Lua Records, which make the following possible, from any backend (even BIND!):

@ IN LUA A "ifportup(443, {'52.48.64.3', '45.55.10.200'})"

This will poll the named IP addresses (in the background) and only serve up hosts that are available. Far more powerful constructs are possible, for example to pick servers from regional pools close to the user, except if all servers in that pool are down. It is also possible to do traffic engineering based on subnets or AS numbers. A simple example:
@    IN   LUA A ( "ifportup(443, {'52.48.64.3', '45.55.10.200'}, "
                  "{selector='closest'})" )
For more about this feature, please head to the documentation.

Deprecations

4.2 will see the removal of the poorly documented ‘autoserial’ feature. This decision was not taken lightly, but as noted, removing autoserial allows us to fix other bugs: it was holding us back. We realise it is no fun when a feature disappears, but since Authoritative Server 4.1 is still around, you can still use that if you require ‘autoserial’.
Following RFC 6986 and anticipating the publication of Algorithm Implementation Requirements and Usage Guidance for DNSSEC, support for both ECC-GOST signing and GOST DS digests has been removed.

ixfrdist

A new tool ixfrdist transfers zones from an authoritative server and re-serves these zones over AXFR and IXFR. It checks the SOA serial for all configured domains and downloads new versions to disk. This makes it possible for hundreds of PowerDNS Recursors (or authoritative servers) to slave an (RPZ) zone from a single server, without overwhelming providers like our friends over at Spamhaus/Deteque and Farsight.
Inspired by our Open-Xchange colleagues, our API is now described by a Swagger spec!

Log-log histograms

Over at PowerDNS, we love statistics. Making sense of DNS performance is not that easy however – most queries get answered very quickly, but it is the outliers that determine how users “experience the internet”. It turns out that log-log histograms make it possible to fully capture the quality of a DNS service. As explained in this blog post, PowerDNS now comes with tooling to make such histograms.

Note that this tooling is not specific to PowerDNS Authoritative or even PowerDNS: it will analyse any PCAP file with DNS in there.

Improvements and fixes

The changelog lists many more improvements and bug fixes.

Domain security outside of DNS: Getting hacked administratively


This is a brief blog post on news that many people have sent to us, namely that a suspected Iranian group is “hijacking DNS”. I was about to be interviewed on this subject but sadly that fell through. I did however already prepare notes, so please find some possibly useful things on the subject here.

Briefly: the weakest part of your DNS security currently likely isn’t actually DNS. It is the login (and password reset mailbox) where you manage your domains & nameserver settings. 

In general, if an attacker wants to take over a service you provide (a website, email or whatever), this requires them to change or redirect traffic between users and the targeted service.

There are four “gates” that determine how information flows from/to a named service:

  1. The nameserver configuration for a domain name (“the names of the nameservers”)
  2. What those configured nameservers respond with (the actual DNS records, such as the IP addresses of the service)
  3. Which cables those IP addresses are routed to: the Border Gateway Protocol (BGP)
  4. The actual servers and the software they run

Many attacks have historically focused on item 4, hacking either servers or software. For decades this was the easiest way. Recently however, the most used pieces of software have become more secure, and operators also update their software far more faithfully. Often this is done on their behalf by cloud providers.

Meanwhile, we are seeing a lot more attacks that involve BGP hijacks to route the (correct) IP addresses to incorrect locations.

The currently discussed attack involves items one and two. Why attack a server if you can reroute all traffic with a simple login? Once an attacker is logged in to a domain management solution they can change whatever they want.

So, how are these systems attacked? In the simplest case, an important domain is hosted at a registrar and protected only by a weak or leaked password. We may wonder how this is possible but it happens a lot. The “most important domain” for many companies was frequently registered by the founder ages ago. And unlike all new domains, it still languishes at a relatively unknown provider where it was registered back in the 1990s. This for example is the case for our own ‘powerdns.com’ domain.

And this original founder may not have been a security professional and picked ‘password123’ as his password. Or perhaps he did pick a good password, but it has now leaked in one of the massive breaches over the past decades.

Secondly, almost all DNS control panels come with a password reset solution that typically sends a reset mail to a preconfigured email address. This email address again might not be a well secured Gmail account but some ancient Yahoo mailbox that hasn’t been touched in years – again likely with a 1990s-era password.

If that doesn’t work, an attacker has further options. If gaining access to the control panel of ‘importantdomain.com’ fails on the first try, and the password recovery email address is ‘john@companyfounder.com’, we can repeat the whole process over at ‘companyfounder.com’!

Perhaps someone over time improved the password for ‘importantdomain.com’ but not for the control panel of ‘companyfounder.com’. And if we control the ‘companyfounder.com’ domain, we can hijack the account recovery email, and thence take over ‘importantdomain.com’.

And even if that fails, we can repeat the whole process, not for the accounts of ‘importantdomain.com’, but for the domain that contains the names of the nameservers themselves. Once an attacker changes those, they can substitute nameservers that give answers that implement the hijack.

The options to attack a domain administratively go on and on and on. It is therefore indeed very plausible that attackers have been able to acquire control of large numbers of domains.

So, what should operators do? The standing recommendation is of course to enable two-factor authentication for all control panels and to make sure that any remaining account recovery mailboxes are very well secured. Despite our very low opinions of Google’s stance on privacy, currently almost nothing beats a Gmail account that is itself two-factor secured.

In addition, some domains can be “registry locked”, which is also highly recommended for high-profile domains.

But the most important recommendation is to audit each and every domain name of the company to see if these security measures have actually been taken. Because through the hops as described above, ‘importantdomain.com’ may in fact be hijacked via the nameserver of the domain of the mailbox of the company founder.

As a final note, it is often claimed that DNSSEC and TLS will protect against these attacks. While adding cryptography does raise the bar, and sometimes significantly, the control panels we have discussed so far include options to disable DNSSEC, while a DNS hijack enables an attacker to get fresh TLS certificates under their control within seconds. Using DNSSEC and TLS judiciously requires attackers to work harder, but it is no guarantee.

I hope this has been helpful!

The big DNS Privacy Debate at FOSDEM


This weekend at the excellent FOSDEM gathering there were no fewer than three presentations on DNS over HTTPS. Daniel Stenberg presented a keynote session “DNS over HTTPS – the good, the bad and the ugly” (video), Vittorio Bertola discussed “The DoH Dilemma”, while Daniel, Stéphane Bortzmeyer and I formed a DNS Privacy Panel expertly moderated by Jan-Piet Mens. I want to thank Daniel, Jan-Piet, Rudolf van der Berg, Stéphane & Vittorio for proofreading & improving this post, but I should add this does not imply an endorsement from anyone!

In what follows, I will attempt to give a neutral description of what I think we learned, and where we now are on DoH, with a focus on the European perspective. If you find a noticeable bias, please let me know urgently and I’ll address it. But to be clear, I’m no fan of centralizing DNS on a small number of cloud providers.

After the neutral description you will find some strong opinions on whether “DNS over Cloud” is a good thing or not.

(Slide from the keynote: Daniel Stenberg’s interpretation of what worries people about DoH)

Words & definitions

During the FOSDEM presentations, various visions on the desirability of DNS over HTTPS were discussed. We were sadly rather hampered by messy definitions: there are two things that sound the same but are different in practice. Firstly, there is “DNS over HTTPS” (DoH), which is a transport protocol that lets you securely send DNS queries over HTTPS.

Secondly, Google, Firefox and Cloudflare are working on using DoH to move DNS queries from the network service provider straight onto the cloud. In other words, where previously your service provider could see (and answer) your DNS queries, in this proposed future you would send your DNS requests to a “free-as-in-beer” cloud provider.

As Daniel pointed out well during his keynote, both of these things have been called DoH, which is highly confusing. “The Resistance” as Daniel labels it complains about “DoH” when in fact they are mostly complaining about centralizing DNS on cloud providers. We should not blame the protocol for what operators might do with it.

It may be that the greatest benefit we get out of the hours of FOSDEM DoH presentations is that we now know we should separate our concerns with DoH (the protocol) from our concerns about the application of this protocol to deliver what I propose we henceforth call DNS over Cloud (DoC).

DNS over HTTPS (the protocol, DoH)

The DoH protocol is designed to use the HTTP and TLS infrastructure to deliver encrypted and authenticated DNS answers that (crucially) are hard to block by network operators. An earlier protocol called DNS over TLS was already available but since it runs on port 853 and “does not look like HTTPS”, network operators that dislike DoT can easily block it. Most corporate networks will in fact do this by default.

DoH shares the benefits and downsides of HTTPS. It can send out more trackable data than regular DNS, simply because HTTP supports things like headers & cookies. TLS session resumption functions as another tracking mechanism. On the plus side, anything that can cache or redistribute HTTPS can now also be used to improve or proxy DNS. Also, DNS over HTTPS makes it possible to push DNS answers even before they are asked, which could increase page load performance.

It may be seen as good or bad that HTTPS can be made undetectable and unblockable, depending on who you are and what you worry about. If Google were to colocate a DNS over HTTPS service on the IP address also used for ‘Google.com’, countries and network operators would face a Solomon’s choice if they wanted to block DoH: give up Google searches or keep DoH alive.

Update: Google turns out to do exactly this; you can get DNS answers over an https://google.com/ request.

Network operators that feel they should be in control of their network will not like this standoff, while users that think their network operators should have no power over them will rejoice. For the second group, at FOSDEM we discussed the proverbial Turkish dissident that would benefit from unblockable DNS.

Finally, because DoH uses authenticated HTTPS (just like when visiting any website), we know we are talking to the nameserver we want to talk to. It protects against rogue nameservers, possibly injected by hijacking the DHCP request, or simply by spoofing IP packets.
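
To make the protocol itself concrete, here is a hedged sketch of a raw RFC 8484 lookup in Python (using the requests library and dnspython 2.x; the DoH endpoint below is a placeholder, not a recommendation). The query goes out as DNS wire format in an HTTPS POST, and the answer comes back the same way:

import requests
import dns.message

DOH_URL = "https://doh.example.net/dns-query"   # placeholder endpoint

query = dns.message.make_query("www.powerdns.com", "A")
resp = requests.post(
    DOH_URL,
    data=query.to_wire(),
    headers={"Content-Type": "application/dns-message",
             "Accept": "application/dns-message"},
    timeout=5,
)
resp.raise_for_status()
answer = dns.message.from_wire(resp.content)
for rrset in answer.answer:
    print(rrset)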

DNS over Cloud (DoC)

As it stands, network operators (ISPs, service providers, your WiFi providing coffee shop) can see your DNS traffic. In addition they could (and actually often do) manipulate or block certain queries or responses. This is an intrinsic property of providing DNS service – everyone that provides DNS service to you can do these things, cloud based or not.

One concrete difference between typical network DNS and DNS over Cloud is that network DNS tends to be unencrypted while DoC can encrypt the transport component. And encryption is good.

Much like using a VPN to access the internet only moves your traffic from one place to another, choosing a different DNS service does not magically make your DNS more secure. It does change who you want to trust though. And if you are lucky your trusted provider is more secure in ways that are relevant to you.

Currently, that trust is not very intentional. Internet users often have little choice in what ISP to use. In many cases they may not even know. While local regulations (like GDPR, NIS, ePrivacy and EU telecommunications directives) may limit what a provider is allowed to do, users may not be sure whether their actual network operator adheres to this legislation.

DNS over Cloud proponents advocate that the user did however consciously choose a browser, and that the browser is therefore in a good position to suggest or even pick a DNS provider for its users. Users sometimes can’t pick a browser either, but they may have the freedom to select a phone, and different brands of phones include different browsers. Cheaper phones, however, all ship with the same browser.

During our DNS Privacy Panel it was also established that we estimate that most users do not care very much about their DNS privacy, and are in any case not well informed about the tradeoffs. The choice of DNS provider therefore needs to be made for them, either by their phone, their operating system or their browser.

A brief interlude on DNS encryption

Everyone agreed more encryption is good. This can happen between client equipment and the nameserver, or between the resolving nameserver and authoritative nameservers. Warren Kumari from Google tested the waters for our thoughts on opportunistic DNS over TLS (DoT) between phone/browser and DNS service, and this went down well, except that it was noted it is subject to downgrade attacks. I noted that PowerDNS has been encouraging its customers to enable DoT already because Android Pie will attempt to use your DoT service if it is there. There was also a brief discussion on the efforts to encrypt traffic between resolver and authoritative servers, something that is also good.

What does DoH protect against, given that we already use HTTPS for protection?

At the very end of Daniel’s keynote a question was asked what the point is even of protecting DNS queries and responses. The DNS response leads to the setup of a TLS connection and this TLS connection is itself already encrypted and private. We don’t need DNS for that. In addition, a TLS connection setup will typically include the name of the site being visited in plaintext, even with TLS 1.3 (the Server Name Indication or SNI field). Finally, the IP address we eventually end up connecting to may give a very good indication who this connection is going to. So it is generally possible to tell where a TLS connection is going – even without looking at DNS. Stéphane’s RFC 7626 discusses many of these tradeoffs.

As of February 2019, there is little privacy differential when using DNS over HTTPS since the name still travels in plaintext. It may however be more expensive for a snooping network provider to extract the SNI from packets. Also, work is ongoing to use encrypted DNS to encrypt the SNI field too, in which case DNS over HTTPS would actually give us more additional privacy.

What DoH does however deliver today is protection against DNS-based censorship.

Censorship & things that break

The PowerDNS DoC service quickly gained thousands of users, many of whom are in Indonesia. PowerDNS learned that Indonesian ISPs perform a lot of blocking and DoH servers are a great way around such blocking. It may be that doh.powerdns.org is small enough to fly under the radar of the Indonesian censors.

Separately there was a brief discussion on how DoC can break things like VPNs and split horizon. We did not explore this much further except that it was noted it actually breaks things in production. An open question is if the encryption is worth the amount of breakage observed, and if we could maybe work around it.

Differences between Cloud and Network DNS Providers

The highly regulated nature of service providers, at least here in Europe, is a double-edged sword. It restricts what ISPs can do with your data but it also means they respond to court orders that block content and may implement blocking of child pornography even without such orders. Internet users may not be happy with such blocking, either because they want easy access to Torrents, or simply because they object to the very principle of communications being blocked.

In addition, while (European) service providers are under a legal obligation not to monetize or otherwise sell your traffic (without very explicit permission), that does not mean nobody gets to see it. Specifically, all service providers here will respond to government (bulk) interception orders, and provide police & spies with a full copy of all your traffic, including the unencrypted DNS parts.

Cloud providers meanwhile are very adept at navigating the GDPR waters and are able to simultaneously promise you they won’t sell your data but also power most of their bottom line selling advertising based on what you do online. In addition, they are relatively out of reach of government interception or blocking orders, which take many months to travel to a foreign jurisdiction, and frequently never arrive.

Differing views on the panel

We lucked out with three speakers with three informed but still different opinions on the subject.

I (Bert) make my living selling software and services to telecommunication service providers. I know many European ISPs intimately and I do not believe they are engaging in secret user profiling. We have enough trouble with GDPR as it is to get any kind of DNS debugging data out of our customers. So my belief is that while service providers may not be “a force for good”, I do predict they’d have a very hard time breaking regulations to secretly run a surveillance economy. But, these people pay my company good money so I am biased to like them. I do not believe it is a good idea however to send a record of every website I visit to cloud providers like Google or Cloudflare.

Stéphane meanwhile is highly knowledgeable on how governments actually regulate the Internet. He even wrote a book on it that is subtitled “The Internet – a political space”. In his opinion, GDPR and other regulations may be great, but enforcement is scarce as data protection agencies do not understand DNS and do not prioritize it. This leaves room for even European service providers to sell and monetize DNS data. In addition, Stéphane is worried that when governments DO finally get interested in DNS, it is for censorship purposes.

Daniel offers a perspective inspired by his background in HTTPS – he sees the obvious benefits of not only encrypting DNS data but also authenticating its server. “You know who you are talking to”. He furthermore observes correctly that users spend time on different networks, and that we can’t possibly expect them to study the privacy practices and reputation of every school or coffee shop where they use Wifi. If users picked a suitable DoH provider that worked over all networks, they’d receive a constant level of trust – no matter what network they are on.

Daniel has separately argued that he regards an explicit promise from a cloud provider not to sell your traffic as a stronger guarantee than passively trusting that a provider will stick to the applicable laws. Finally, Daniel notes correctly that GDPR does not protect you if you are connected to a rogue nameserver (so not the one you were expecting to use). It may not be the service provider that spies on you but someone else on the path TO that provider. DoH protects against that scenario.

Who gets to pick who we should trust?

If a browser decides to use DoC for its lookups, which provider should it offer? Early in the discussion it was noted that there should be a transparent process for deciding who could be offered as a provider; it was also noted that for Firefox this process has so far been far from transparent, or even operational. A member of the audience spotted an interesting analogy with the CA/Browser Forum, which has been used to determine which certificate authorities are to be trusted. Daniel noted that this is however also similar to search engine selection in browsers “and everyone picks the default, and that is the one that pays most”.

Stéphane opined that there should be many DoC providers to choose from, but since picking one is hard, the browser should present a list with a random one at the top. This allows choice but also prevents needless concentration if a user picks the default.

Why are cloud companies so anxious to host our DNS?

Warren Kumari from Google gave a very clear response – a better and faster internet leads to more internet use and more searches and therefore more advertisements and thus more money for Google. Such honesty is rare & it is appreciated. Warren also reconfirmed (as happened at FOSDEM 2018) that 8.8.8.8 sticks to its privacy policy: it is not being mined. As an aside, when 8.8.8.8 was launched, the state of ISP DNS was indeed dire. DNS in many service providers did not have a good home – fitting awkwardly between network and application departments. The creation of 8.8.8.8 contributed to the vastly improved DNS we see today, and at the time it was necessary.

But why 1.1.1.1 or 9.9.9.9? Christian Elmerot from the Cloudflare (1.1.1.1) resolver team offered the explanation that people on 1.1.1.1 will get (slightly) faster answers from 1.1.1.1 for Cloudflare domains than when using other resolvers, and this makes their services more attractive.

It may however be that public DoC providers are not entirely disinterested in getting a copy of the world’s browsing behaviour.

Is it any faster?

DoH (or more precisely, DoC provided by Cloudflare) is actually 7 milliseconds slower on average than the system resolver, according to measurements performed at Mozilla back when Daniel still worked there. He does note however that the worst case performance of the Cloudflare DoC is much better than the worst case system resolver performance.

Should you run your own resolver?

It may be slower, but Stéphane noted that having your own resolver on your own machine is actually also not good for your privacy, since authoritative servers will now see your personal IP address, instead of the service provider’s IP address. However, it does offer full control – at a possible performance and privacy penalty. Stéphane notes that a mixed-mode local resolver, one that uses a DoH provider for cache misses, may be an optimum. Some further thoughts on the benefits of a local resolver can be found in this post “Benefits of DNS service locality” by Paul Vixie.

What about EDNS Client Subnet?

There was a brief and somewhat angry discussion between me and Daniel that somehow got cut from the end of the video recording. This discussion was about EDNS Client Subnet, and how it impacts your privacy when used by a service provider.

Some large scale internet service providers include part of a customer’s IP address when sending queries to (for example) Akamai or Level3. This is currently necessary because these large scale CDNs perform load balancing via DNS and they need to see 24 or sometimes even 25 bits of the IPv4 address to determine the right server for a user. This is sometimes reported as a privacy problem, and in the general case it could be. However, when used between a hosting provider like Akamai and a service provider, in reality there is no loss of privacy – the customer is attempting to connect to an Akamai service, and Akamai will always see the subscriber IP address in that case.

Noteworthy is that DoC-providers that do not implement EDNS Client Subnet (ECS) may disadvantage competing cloud operators since they will send internet users to sub-optimal content distribution nodes. It may not be wise to rely on one CDN to connect to a competing CDN.
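
For the technically curious, this is roughly what an ECS-carrying query looks like when built by hand with dnspython (2.x assumed); the /24 prefix, the query name and the resolver address below are placeholders, not specific recommendations:

import dns.edns
import dns.message
import dns.query

# Announce only a /24, not the subscriber's full address.
ecs = dns.edns.ECSOption("192.0.2.0", 24)
query = dns.message.make_query("www.example.com", "A", use_edns=0, options=[ecs])
response = dns.query.udp(query, "192.0.2.53", timeout=3)  # placeholder resolver address
for rrset in response.answer:
    print(rrset)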

The balance

This is where the attempt at impartiality in this post ends.

First, we should separate a few things: DoH, DoC and “DoC-by-default”. It seems clear that the first two of these are not problematic. It is good that we have a secure DNS transport mechanism and some DoC providers may truly be a step up in privacy and security for users in some countries or places.

Our discussion should be about what we think of “DoC-by-default”, that is, any attempt by browser vendors to default people into moving their DNS to Cloudflare or themselves. My concern also extends to a weaker form where you get DoC-by-nudge if you press a little ‘Got it’ button when prompted if you want to benefit from the ‘Google Secure Lookup Service’.

Who should we believe, the highly regulated (European) service provider that says it is not allowed to spy on its users using DNS, and also says they aren’t doing it?

Or should we believe the cloud provider that claims those service providers are spying on their users and then asks us for our DNS traffic while promising not to sell it, although they will log each and every query?

Is it credible for DNS over Cloud providers to spend huge efforts on pushing their DNS services but also to claim they won’t do anything impactful with the results? Google has advanced that faster DNS means more revenues for them, which is likely true, but DNS over HTTPS will first slow things down!

Cloudflare meanwhile opines that if you use their DoC-service, this makes accessing Cloudflare domains a tiny bit faster compared to visiting competitor’s domain names. While this may be true, first the effect is tiny and second, it is not that great a sell for an actual Internet user.

It is currently true though that your coffee shop WiFi may be spying on you, or may enable an attacker to do so. However, as noted above, the names of sites you visit are still sent out unencrypted by TLS connections these days, so DoC does not even deliver on saving you from such spying right now.

Finally, much is made of the users in repressive regimes which might benefit greatly from unblockable DNS, and this may indeed be so. But as noble as it is to help the Turkish dissident to communicate with the world, it seems odd that to help her we need to send Cloudflare or Google a record of every website visited by 500 million Europeans!

Other arguments for DoC-by-default may be more appealing – if you are a cloud provider. Despite all promises to not sell the DNS data, not log it very long etc, the fact remains that a DoC operator gets sent a copy of every server and site name a user visits. Somehow someday that data is going to be monetized, and this will happen in ways users will not be consulted about.

This leaves us with other explanations for the DoH push, and none of these are very good. For one, it is plain and simple an attempt to fully control the Internet experience. As an example, there have been ISPs that have pondered adding ad-filtering as a network service. This does not sit well with advertising companies like Google, so they’d love to be sure such blocking is not possible. DoC-by-default gives that to them.

We’ve previously established that users do not have strong or informed opinions on the source of their DNS, so whatever happens will be decided by browser vendors, on behalf of internet users.

Given what we now know about the relative risks and benefits of DoC, it seems utterly unwarranted to decide that users should give their DNS to Google or Cloudflare because there is no credible claim it will actually improve their lives.

Thanks for making it to the end of this long post!

  • Some more thoughts on DoH & specifically the Firefox plans can be found here.
  • Open-Xchange and several other groups are actively informing governments and data protection authorities about what is going on & why we feel DoC-by-default is a bad thing. Please contact vittorio.bertola@open-xchange.com if you are interested in joining in!

 

How PowerDNS is Open Source & a successful business, or, why are we talking about 5G?


What does PowerDNS actually do?

This is a good question, one we can ask about any company. How do they stay alive, what services do they deliver, who do they sell them to?

For Open Source companies, the question is doubly interesting: if your software is so great, and you give it away for free (as in freedom), how do you survive?

In this post I want to explain how PowerDNS (and our parent Open-Xchange) have squared this circle. In many large countries, PowerDNS & Open-Xchange are now the DNS supplier to the largest telecommunications companies.

Below you will also read why we are all of a sudden talking about end-to-end monitoring, “the 5G transition”, DNS over HTTPS and (Network Function) Virtualization (NFV).

Products
Everything starts with products of course, and PowerDNS has four main ones. The Authoritative Server hosts domain names, and it dominates the mid-size market of hosters running up to 10 million domains. While there are other very good open source authoritative nameservers, PowerDNS has an edge because of its wide support for databases, its DNS-aware checking API and lately the new LUA records which deliver DNS based traffic-engineering, failover and load-balancing.

The PowerDNS Recursor meanwhile has picked an interesting niche among resolvers & caches, where again the open source landscape delivers outstandingly good software. The Recursor supports the big important features of course, like DNSSEC and shortly QName minimization, but our focus has been on providing servers that deliver great performance & rock-solid stability for high-capacity operators, while retaining the flexibility to do malware filtering, parental control and security analysis. Of specific note is our support for interoperating with CDNs like Akamai that require EDNS Client Subnet, while retaining top performance.

Our third product, dnsdist, for now appears to be unique – a scriptable, high-performance, DoS-aware load balancer & distributor of DNS queries. It protects installations from denial of service attacks, of which even small ones can burn up a lot of CPU. Dnsdist also delivers such modern encrypted variants of DNS as DNS over HTTPS and DNS over TLS. It has a built-in cache that delivers stellar performance even on top of slow backends. Dnsdist is highly flexible and can redirect queries based on almost every aspect of a question. It frequently replaces dedicated load-balancer hardware. Although dnsdist is only a few years old, we were very pleased to learn it was part of the recent NATO Locked Shields cybersecurity exercise in Estonia.

These first three products are built in close cooperation with our lovely community. A community is far more than people supplying patches. It also consists of users vocally telling us what they need or pointing out that what we do is exactly what they don’t need. It consists of the heroes that test pre-releases and let us know if the quality or the features are where they should be. We are also super happy with users that point out where documentation is missing or wrong. Conversely, we truly enjoy helping our users improve their lives with open source, where we cooperate daily with other open source projects.

Finally there is the part that is not open source, the PowerDNS Platform that delivers the first three products in an integrated, automated, monitored and graphed solution, with a central graphical & scriptable control plane. In addition, with OX Protect, this platform provides for malware filtering & parental control.

What we actually sell
Who would buy a nameserver when there are so many good ones available to download for free? Asking the question almost answers it: operators that do not wish to deploy and assemble the raw goods they can find on the internet. While it is entirely possible to have teams and infrastructure in place to do just that, many modern telecommunications operators have decided to only deploy fully supported units of functionality.

While it is entirely possible to assemble similar functionality to our Platform with open source components, this is a lot of work and operators would have to learn how to scale, monitor and control such a system. There is value in getting this as a preassembled whole – even as we retain our open interfaces for integration into existing monitoring and graphing systems. But beyond that: assembling platforms by hand is a risky business.

This is a variant of the old story that no serious company would run software without a support contract in place. While this was not quite true, what we are seeing today is a step even beyond that. A support contract is a suitable solution if the operator decides to take full ownership of architecting, deploying, testing and running a project. The support is important for the rare cases where things do not go as planned – it is in fact a warranty.

Although we have a number of excellent customers where we provide such support as a service, in almost all cases our engagement these days goes far beyond answering email messages.

Delivering functionality
A large scale enterprise, like a telecommunication service provider, is a complex organization. For every project there are many stakeholders – there are product departments that want specific functionalities and performance for the subscribers. There are legal and compliance departments that make sure vendors have the right certifications and can be held liable for intellectual property violations. Service Level Agreements need to be spelled out in great detail, including penalty clauses. Whenever consumer communications are touched, GDPR compliance is of utmost importance.

Then there are network and infrastructure teams that each have their own requirements for hardware, virtualization specifications and capacity. On top of this, there are always existing software installations with sometimes custom features that need to be retained and migrated.

Of supreme importance is high-level sign-off. Senior management needs to be reassured that this is a vendor worth betting on. Or as a big PowerDNS customer once phrased it “you need to hire more golf players to grow”. We took this message on board. This is also why you will be seeing PowerDNS opine on 5G deployments, on Network Function Virtualization and End-to-End performance monitoring and reporting.

To round this off, a project of any serious size will be run through a procurement department, often at group level, sometimes even in a different company. Navigating an RFP is a skill in itself – especially when third party integrators or vendors are fronting the project.

In short, to deliver a working solution requires coordination among all these departments and the creation of an architecture, a training plan, a support structure, a hardware/software layout, a migration procedure, and all of this needs to be ‘sold’ through the procurement department.

So if a modern telecommunications company wants to deploy a new nameserver constellation, it will require not just the software but all of the above.

Deploying functionality
After a project has been specced up properly and the papers are signed, next up is the actual deployment and migration. When we launched PowerDNS in the late 1990s, it was clearly up to the operator to perform deployment and migrations. This made sense on one level: testing & deploying software (or hardware) is the best way to make sure operators fully understand what they bought and that they can support it themselves.

Conversely however, a vendor deploys and migrates its products all the time. Vendors therefore have developed tooling and procedures to make this happen swiftly. We can’t expect a service provider that does a hardware refresh once every 4 years to have performed many migrations itself with the existing staff – it simply does not happen that often.

These days, most customers ask us to be very or even completely hands-on during testing, rollout and migration. We do however vastly prefer to perform such operations in close cooperation with the intended operators – because it remains true that “doing” is the ultimate form of training.

Collaborative operations
Traditionally, vendors grudgingly provide support in case of proven malfunctions. It is now so hard to open tickets with major network vendors that at least one company we know of sells “opening network vendor tickets” as-a-service – allowing operators to focus on solving problems. This is not how we want our customers to work with us however!

To our large-scale operators, we provide collaborative operations services. This means there is no need to ‘escalate’ something for it to get attention. Whenever there is a need for a configuration change, or there is a worry because a graph is going the wrong way, we are there to provide guidance, scripts or hands-on help.

What we have managed to do is retain our open source collaborative nature, but deliver it also to the largest of operators, wrapped in solid service level agreements.

Summing it up
“The secret to PowerDNS’s success” is that we are able to take excellent open source software, and deliver it to large scale telecommunications service providers, while continuing to be an open and accessible vendor. And it turns out that everything we provide on top of the raw open source software is worth good money to our customers.

As of 2019, PowerDNS is growing rapidly. And as the rollout of DNS over TLS/HTTPS, the 5G transition and (Network Function) Virtualization at service providers continue, it appears we will be an ever larger part of the telecommunication landscape.

If you or your company are interested in working with us for your next DNS project, please do not hesitate to contact us! For more about PowerDNS, please head to https://powerdns.com or to https://open-xchange.com/.

Centralised DoH is bad for privacy, in 2019 and beyond


Recently, Mozilla announced it would be moving Firefox DNS lookups to Cloudflare by default, for its American audience. There will be a notification about this for existing users, at which point they could choose to go back to provider DNS. But crucially, there will be no opt-in: it is Cloudflare by default, using a technology called DoH.

This reignited some controversy, mostly in Europe, where meetings and panels in Amsterdam, The Hague, Paris and Belfast went over the pros and cons of this move, because it might well come to our shores as well.

During these discussions, I noticed that we haven’t been very analytical about what moving and encrypting DNS does for privacy. Many people appear to conflate the concepts of privacy and encryption, which are in fact very different things.

In this post I argue that in September 2019, centralised DoH “by default” is a net-negative for privacy for everyone and that even in later years it will not improve privacy outside of the most privacy hostile environments – where no one should rely on partial measures like DoH to stay secure.

Recapping what DoH does

DNS is currently typically provided by the operator of a network, which could be your Internet Service Provider, your phone company, your employer or your proverbially evil coffeeshop WiFi.

DNS provided this way is never encrypted. Anyone observing your network traffic can see which DNS lookups are made. A more capable person could also inject fake answers, potentially rerouting your traffic.

DNS over HTTPS meanwhile encrypts DNS queries going over the network, which means that no one between you and the DoH server can see your DNS queries or modify the DNS responses.

Crucially, in both plain DNS and DoH, the operator of the DNS server can see, sell, block and modify your DNS data. It is only the people in between that get locked out.

DNS & Metadata Privacy

DNS privacy matters. Or more in general, knowing what sites you visit matters: your traffic metadata. A complete listing of sites (and servers) contacted will reveal where you work, live, study, what your hobbies are, what equipment/devices you own, what sports teams you follow, which health care providers you frequent, what brand of car you (want to) own & likely your sexual preferences.

Many governments will also be very interested in who communicates with political parties or organizations they don’t like.

Restricting and choosing who can see the meta-data of what sites you visit is therefore very worthwhile.

Metadata leaks

DNS is one of four ways in which such meta-data gets transmitted in plaintext. For starters, browsers do not exclusively perform HTTPS requests. Many visits still start with a plaintext HTTP request that then redirects to HTTPS.

Secondly, TLS (which underlies HTTPS) very often has to transmit, in plaintext, the name of the site (or server) the user intends to connect to. This is true even in TLS 1.3. There is an IETF draft standard for encrypting this plaintext Server Name Indication, but it is not widely adopted, and needs serious work before it can be standardised.

It is frequently and mistakenly thought that TLS 1.3 has plugged this leak; it hasn’t. To verify, try: sudo tshark -i eth0 -T fields -e ssl.handshake.extensions_server_name -Y ssl.handshake.extensions_server_name -n

Thirdly, to ensure that the certificate used for a TLS connection is valid, many browsers and TLS stacks will perform an OCSP lookup to the Certificate Authority provider. This lookup itself is also plaintext. Note that with some care, OCSP lookups can be prevented.

Finally, research has uncovered that over 95% of websites can uniquely be identified purely by the set of IP addresses they are hosted on, and these IP addresses also can’t be encrypted.

I should also note that unless special measures are taken, a whole horde of dedicated web tracking companies (like Facebook and Google) will record and monetize most of your moves online anyhow, no matter how well encrypted your connection.

Privacy before and after DoH

From the above, we see that DNS over HTTPS plugs only one of four (or five) avenues leaking sites visited.

But if we sum it up, pre-DoH, the following parties have access to the names of most of the sites you visit:

  1. Your own network provider
  2. Your own government, police, intelligence services (through court orders)
  3. Anyone capable of snooping your local network
  4. Certificate authority providers (through OCSP)
  5. Large scale tracking & advertising companies (Google, Facebook)

DNS over HTTPS in browsers is currently exclusively offered by/through American companies. So after switching to DoH, we have to add the following to our list:

  1. Cloudflare / your DoH provider
  2. The US Government, NSA, FBI etc

Because DoH does not encrypt anything that is not also present in plain text, there is nothing to remove from the list.

Based on this, we can conclude that as it stands, using DoH to a browser-provisioned cloud provider effectively worsens your privacy position.

Note: DNS over HTTPS is the protocol, and it could be used to enhance privacy. Using DoH to move DNS to the cloud is a specific way of using DoH that is damaging to privacy in 2019.

DNS over HTTPS offers additional tracking capabilities

DNS over HTTPS opens up DNS to all the tracking possibilities present in HTTPS and TLS. As it stands, DNS over UDP almost always gets some free privacy by mixing all devices on a network together – an outside snooper sees a stream of queries coming from a household, a coffeeshop or even an entire office building, with no way to tie a query to any specific device or user. Such mixing of queries provides an imperfect but useful modicum of privacy.

DNS over HTTPS however neatly separates out each device (and even each individual application on that device) to a separate query stream. This alone is worrying, as we now have individual users’ queries, but the TLS that underlies HTTPS also typically uses TLS Resumption which offers even further tracking capabilities.

In short, setting up an encrypted connection eats up precious CPU cycles both on client and server. It is therefore possible to reuse a previously established encrypted state for subsequent connections, which saves a lot of time and processor energy.

It does however make it possible to track an application from IP address to IP address because this TLS Resumption session ID is effectively a cookie that uniquely tracks users across network and IP address changes.

But what about the privacy agreement?

DoH providers typically publish privacy policies in which they pledge to provide you excellent DNS service without them benefiting in any way from your data, except possibly through very abstract research, or nebulous performance benefits that might attract customers for other products.

History has shown that the overwhelming majority of providers of free services that carry interesting user data have eventually failed at keeping this promise – either by being compromised or by accidentally using the data anyhow. Apologies ensue, but trust never returns.

In addition to this hypothetical future misbehaviour, no privacy agreement stands up to a court order to hand over data in bulk. It so happens that the US legal & intelligence climate frequently does in fact use subpoenas and national security letters to hoover up user data. It should also be noted that specifically US law affords far less privacy protection to “non-US persons” than the already meager protection provided to American citizens.

Also, US law extends to all servers and services operated by US companies, so “hosting data in Switzerland” does not provide protection if the operator is American.

So relying on a privacy agreement as some kind of axiomatic guarantee of privacy is not grounded in history nor in legal fact.

DNS for Security

DNS itself is, oddly enough, not much of a security function in a browser. We derive secrecy and integrity from TLS, which in itself does not care about DNS. As an extreme example, a DNS provider could simply hand out 198.51.100.1 as answer to any browser query, receive TLS connections on that address & connect to the right server from there based on the Server Name Indication, and things would just work.

This would not allow any snooping (because TLS is end-to-end, and will check the certificate provided by the server hosting that name), but it does show that DNS integrity is irrelevant for browser security, as long as TLS is used faithfully (and we have no alternative to it anyhow).

DNS for adblocking, censorship, CDN distribution

Although DNS can not change TLS protected data, it can surely prevent access to such data. Countries frequently use DNS as a censorship choke point because it is easy and cheap. Russia, Turkey and Indonesia use DNS extensively to block access to sites their governments do not like.

Phones and increasingly browsers do not make it easy to block advertisements. One simple way of doing so anyhow is through DNS. Sabotaging the lookups for popular ad-servers is a very effective way of blocking advertising content.

Similarly, using lists of known malware associated domain names, it is very possible to cheaply block devices from accessing botnet infrastructure.

Finally, DNS can be used to optimize connectivity to streaming video caches, based on the IP address of the client computer. Several very large scale CDNs and service providers rely on this technique to route users to the right server.

One significant change with DoH is that the choice what to censor (or block) moves from the network operator to the browser vendor (who picks the DoH provider). If you are a privacy activist this is great, as long as you trust your browser vendor (and its government) more than your own country.

If you want to block ads, malware or if you need to route users to the best server, this will only be possible if the selected DoH vendor provides this service. This may not always be the case, especially if your browser or DoH vendor is also in the advertising business, or in fact competes with other CDNs.

Service provider originated DoH

I (and many others) argue that encrypted DNS is good and that we should be doing more of it. This often gets rejected out of hand because there is no encrypted way to provision a nameserver.

When we connect to a network, in almost all cases our devices get configured automatically with the right network settings & nameservers to use. Crucially, this autoconfiguration (be it DHCP or PPP) is not itself super-encrypted. So although our WiFi or 4G may be encrypted, the nameserver address is provided in plaintext over that connection.

This would allow a clever attacker to provision a snooping DoH server, defeating the point of DoH.

Because of this reason, browser vendors argue that they must ignore this autoconfiguration and hardcode a DoH server to use, over at a vendor they have selected on behalf of the user.

However, we should realize that the worst thing a network provider can do is inject a nameserver to learn what they could already learn from the three other ways in which a browser leaks what it connects to!

DoH for oppressive regimes

It is frequently brought up that DoH is not built for privileged westeners living in countries with (perhaps deteriorating) rule of law. DoH is instead offered as a way for people living in oppressive regimes to evade censorship and scrutiny, which surely is a laudable goal.

It is often said that a little knowledge is a dangerous thing. I have no experience as a political freedom fighter, but I do have family members who have had to flee their profoundly undemocratic country because they are members of a persecuted minority.

Some years ago I contacted them because a flaw had been found in TLS encryption that might be dangerous for them, and to my surprise no one there cared. They had been assuming their Internet traffic was being spied upon anyhow, and it turns out they were right.

Later they told me “we all use VPNs” and I was impressed by how privacy-conscious they were. But no, they told me, everyone does that, because without a VPN the internet there is too slow; they suspected the spying machinery was generally overloaded. The VPN was for speed. It was not assumed to deliver privacy, on which point they were also proven right (most VPN providers are pretty shady).

I mention these two stories to show that our assumptions on oppressive regimes may be wildly off, and not represent the reality on the ground in China, Russia, Iran, Indonesia and Turkey. It is a lot of fun being an armchair imaginary political activist, but things are remarkably different if you actually live there.

Of course, more encryption is good if it makes the life of oppressive regimes harder. It is definitely a case of “we must do something, and this is something”. It is slightly (but only slightly) harder to extract the TLS Server Name Indication than it is to parse plain DNS.

But the dynamics of what will happen when people in those countries start relying on DoH for their safety are very hard to fathom. We hear that some governments have already moved beyond DNS based blocking, something we also saw in Russia during “the Telegram wars”.

In this context, it is instrumental to see DoH as a “very partial VPN” that only encrypts DNS packets, but leaves all other packets unmodified. And in fact, the various DoH apps for phones are implemented as VPN providers. If judged as a VPN, it does look like a terrible one full of weaknesses.

Given this, recommending DoH because it will help dissidents in dangerous countries may be a very “techbro” thing to do – assuming your invention must be helpful without fully understanding the situation. Because for all we know the false sense of security is actually more harmful.

We may wonder why proponents aren’t instead recommending a full VPN as a solution instead of pushing an incomplete solution. Perhaps the creep factor of routing all your traffic through a cloud provider is too much?

DoH as an incremental step

DoH proponents often agree that DoH itself does not fix all metadata privacy leaks, but insist it is a good step. The stated goal is to be able to eventually use any network, no matter how untrusted, and browse the web in complete privacy.

Oddly enough, DoH proponents say it is fine to already move everyone’s DNS to a central place in another country, even though it currently does not provide any benefit, except sending your DNS to an additional party under the control of a foreign snooping-happy government.

To achieve the goal of perfect privacy on untrusted networks (without running a VPN) will require us to:

  1. Completely shut down plaintext HTTP
  2. Use encrypted DNS
  3. Deploy functional and downgrade-proof encrypted SNI.
  4. Disable OCSP or make OCSP stapling mandatory, or replace it with an alternate mechanism.
  5. Host everything (every last widget) on large content distribution networks that are able to provide generic IP addresses, that have no discoverable link to the sites they are hosting.

If and only if all these steps are completed, shutting down entire internet industries in step 5, does DoH stand a chance to deliver actual privacy benefits.

Summarising

Centralised DoH is currently a privacy net negative since anyone that could see your metadata can still see your metadata when DNS is moved to a third party. Additionally, that third party then gets a complete log per device of all DNS queries, in a way that can even be tracked across IP addresses.

Even if further privacy leaks are plugged, DoH to a third party remains at best a partial solution, one that should not be relied upon as a serious security layer, since it will be hard to plug everything, especially if non-CDN content providers survive.

Encrypting DNS is good, but if this could be done without involving additional parties, that would be better.
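
As an illustration of "encrypt, but keep it local", here is a minimal sketch using the dnspython library (an assumption, as is the premise that your local or ISP resolver actually offers DNS-over-TLS on port 853, which many still do not):

```python
# Send an encrypted DNS query to the network-provided resolver instead of a third party.
# Assumptions: dnspython >= 2.0 installed, and the first system resolver speaks
# DNS-over-TLS on port 853; the query name is illustrative.
import dns.message
import dns.query
import dns.resolver

local_resolver = dns.resolver.Resolver().nameservers[0]   # first resolver from the system config

query = dns.message.make_query("www.example.com", "A")
# depending on the resolver's certificate you may need to pass server_hostname=
response = dns.query.tls(query, local_resolver, timeout=5)
print(response.answer)
```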

And for actual privacy on untrusted networks, nothing beats a VPN, except possibly not using hostile networks.


DoH: (Anti-)Competitive and Network Neutrality aspects

Much has already been written on how moving to centralised DNS is bad for our privacy in 2019, and on that basis alone centralizing our DNS on a few large cloud providers seems like a bad idea.

In this post, I want to look at the business and commercial consequences of moving DNS from the Internet Service Provider to a centralised place in the cloud, paying special attention to network neutrality, (anti-)competitive & regulatory aspects.

I hope that afterwards, it will be clear that when service providers argue against DoH, this does not have to mean they were spying on their users and hope to continue doing so – there are other major problems as well.

The lay of the land

As of 2019, the internet roughly looks like this:

This is a sampling of the big guns of content distribution. Most of these are reached directly from the ISP, with some content providers hosting their servers within the network service providers. The biggest Content Distribution Networks (CDNs) shift so much data that it even makes sense to have regional caches spread out throughout an ISP's service area.

In this layout, the ISP is completely in charge of distributing traffic. If it does a bad job, it will make its customers unhappy. If an ISP decides to prioritize one content provider over another, this is called a network neutrality violation, and various countries and regions (including the EU) have regulated the networking industry to outlaw this practice. Despite this fact, ISPs can sometimes wield significant power and for this reason they are under constant regulatory scrutiny. 

Note that some countries have an underdeveloped ISP market, with large fractions of the population having no choice of broadband service provider. Regulation is then of the utmost importance to keep everyone honest, but in some of these countries the regulator has been captured by industry and is no longer very effective. This mostly goes for the US. 

Technical details

Gaining access to content is a two-step process. Users, Apps and browsers almost exclusively connect to domain names (like ‘apple.com’) to retrieve content or perform actions. Such domain names cannot be accessed directly on the internet because devices and servers talk to each other using IP addresses. DNS is used to find an IP address associated with a domain name, and then a connection can be made. Currently this mostly looks like this:

First (1) a device (computer, phone, tablet, tv, set-top box, streaming device) requests the IP address for ‘server1.apple.com’ from the ISP DNS server. This server either has the answer already (likely), or (2) it will talk to the CDN DNS server, which then (3) responds with the best IP address for the request, which is then (4) relayed to the original client. In the final step (5), the client device sets up a connection to that IP address. 
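
A rough sketch of this flow, assuming the dnspython library is installed (the hostname is illustrative): steps 1 through 4 are the lookup against the network-provided resolver, step 5 is the actual connection to the answer.

```python
# Steps 1-4: ask the network-provided resolver for an IP address, then
# step 5: connect to that address. Assumptions: dnspython >= 2.0 installed.
import socket
import time

import dns.resolver   # pip install dnspython

resolver = dns.resolver.Resolver()                    # resolvers from the system config,
print("asking resolver(s):", resolver.nameservers)    # typically the ISP DNS servers

t0 = time.monotonic()
answer = resolver.resolve("www.apple.com", "A")       # steps 1-4: waiting for an answer
ip = answer[0].address
print(f"got {ip} after {(time.monotonic() - t0) * 1000:.1f} ms of waiting")

# step 5: only now can the device open a connection to the content server
with socket.create_connection((ip, 443), timeout=5) as conn:
    print("connected to", conn.getpeername())
```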

Of note is that steps 1 through 4 are essentially spent “waiting”. If this process is slow, ISP subscribers experience bad performance and the internet feels sluggish. 

Also noteworthy is that in step ‘2’, the customer’s network number (AS) is shared with the CDN. This can allow the CDN to pick the “best IP address” based on where the user is connected to the network, so content can be served to them from a well-placed cache. 
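
One common mechanism for passing this "where is the user" hint along is the EDNS Client Subnet option. The sketch below assumes dnspython; the prefix comes from the documentation range, and 8.8.8.8 merely stands in for "the CDN DNS server" of step 2 so the sketch has something that will answer. In the real flow it is the ISP resolver that attaches the option when querying the CDN's authoritative servers.

```python
# Attach an EDNS Client Subnet option to a DNS query, the way a resolver can
# hint to a CDN where the end user is connected. Assumptions: dnspython installed;
# 198.51.100.0/24 is a documentation prefix, 8.8.8.8 is only a stand-in target.
import dns.edns
import dns.message
import dns.query

ecs = dns.edns.ECSOption("198.51.100.0", 24)           # truncated client network, not a full address
query = dns.message.make_query("www.apple.com", "A", use_edns=0, options=[ecs])

response = dns.query.udp(query, "8.8.8.8", timeout=5)
print(response.answer)
```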

In this (the existing) configuration, ISPs and CDNs have very well aligned incentives – providing end-users with rapid and snappy access to content.

The brave new world of centralised DNS over HTTPS (DoH)

Centralised DoH is where browsers, operating systems, phones, tablets or computers no longer send their DNS lookups to the network-provided (ISP) DNS server, but transmit the query to a server hosted by a third party (in this case, the first party is the customer, the second party is the ISP). 

The narrative behind centralised DoH is that regular DNS is unencrypted. In addition, Internet service providers are presumed to be profiling their customers and selling their browsing behaviour, and DoH is claimed to stop this (although it doesn’t).  DoH operators vow (with differing specificity) not to sell customer data. They will however keep 24 hour logs of all queries for analysis, for some reason.

So far three companies have been entertaining the idea of centralised DoH: Google, Mozilla (Firefox) and Cloudflare. Google has recently decided their browsers and phones will not use centralised DoH for now, but they are however doing it for their Google Home Wifi products.

Cloudflare is pushing heavily for the world to centralise DNS on Cloudflare. While their CEO tweets from time to time that he’d be happiest if other people also offered DoH, they are expending significant lobbying efforts in convincing (some) browser vendors, governments and regulators that it is a good idea to move DNS from regulated network providers to Cloudflare.

Specifically, in the US, these efforts have been successful, with Mozilla deciding all Firefox DNS traffic should be sent to Cloudflare by default. Firefox users there receive a notification about the move, but do not have to opt-in. If they want to go back to their network provided DNS, they have to click a scary button called “Disable Protection”:

Flow of control with Centralised DoH

Let’s say a Firefox user in the US wants to visit some Akamai hosted content. With centralised DoH, the DNS lookup bypasses the local ISP DNS and instead goes to a Cloudflare server. This server may have to in turn ask the Akamai nameserver for the IP address, and once this is returned to the user, the actual connection to Akamai can be established, providing access to the content.
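
For comparison with the earlier sketch, the same kind of lookup over centralised DoH could look roughly like this, assuming the ‘requests’ library and Cloudflare's public JSON endpoint at https://cloudflare-dns.com/dns-query. The query now travels over HTTPS to Cloudflare and bypasses the ISP resolver entirely.

```python
# A DNS lookup via Cloudflare's DoH JSON interface. Assumptions: the 'requests'
# library is installed; the query name is illustrative.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "www.akamai.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()

for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```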

We have to keep in mind that if a DNS lookup is slow, the entire internet feels sluggish. Slow DNS = Slow internet. In this new scenario, Cloudflare, an Akamai competitor, is responsible for making Akamai service snappy. In addition, for this to work, connectivity from the ISP to Cloudflare needs to be perfect, and the same goes for the connection between Akamai and Cloudflare – companies who previously did not exchange a lot of data, nor had much of an interest in doing so. 

In addition, where previously CDN operators could provide optimized DNS answers, because they could see where the query was coming from, Cloudflare has vowed not to provide such details to CDNs, ostensibly for privacy reasons. A CDN nameserver will henceforth only see that a query came from “Cloudflare”, and no longer from which ISP. This leads to sub-optimal routing, which I have personally experienced as “dog slow internet” when trying to access Akamai-hosted content through Cloudflare DNS.
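
This difference is easy to observe for yourself. Below is a small comparison sketch (dnspython assumed, and www.akamai.com used purely as an illustrative Akamai-hosted name): ask the network-provided resolver and 1.1.1.1 for the same name and compare the addresses handed out. Differing answers hint at how the CDN's "best IP for this user" logic degrades when it can no longer see which ISP the query came from.

```python
# Compare the A records handed out via the system (ISP) resolver and via 1.1.1.1.
# Assumptions: dnspython installed; the hostname is illustrative.
import dns.resolver

def lookup(name, nameservers=None):
    resolver = dns.resolver.Resolver()
    if nameservers:
        resolver.nameservers = nameservers
    return sorted(rr.address for rr in resolver.resolve(name, "A"))

name = "www.akamai.com"
print("via the ISP resolver:", lookup(name))
print("via 1.1.1.1         :", lookup(name, ["1.1.1.1"]))
```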

Cloudflare, and connectivity to Cloudflare, now determine how quickly sites load to such an extent that we may as well change our initial ‘Internet lay of the land’ diagram to this:

“Cloudflare-net”

Every website visit, every lookup of every domain name now passes through Cloudflare. If Cloudflare has a bad day, the internet has a bad day. If Cloudflare and the ISP have a mutual network issue, instead of this only impacting Cloudflare, it now impacts all sites a subscriber would like to visit. 

In addition, because of the flow of packets, not only does the ISP need to have top-notch perfect connectivity to Cloudflare, from now on, so must EVERY content provider in the world – the moment there is any congestion on the link, lookups slow down, and with that access to all content from that CDN.

Of special note is that regular ISPs are highly regulated precisely because they are in such a crucial position. Meanwhile, in its new position, Cloudflare has become critical internet infrastructure, but has somehow completely evaded regulation.

Why this is problematic

Within Cloudflare, there is no department called “Keeping Competitors’ Services Snappy”. In fact, Cloudflare lists many of the content providers above (and their suppliers) as outright competitors in their S-1 filing with the SEC:

Whenever ISPs have complained about Cloudflare inserting itself in the lookup chain, this has been framed as providers whining about no longer being able to violate their customers’ privacy. But for example in Europe where ISPs are not in the business of selling their user data, this rings hollow.

The real problem is that an unregulated entity is attempting to take over highly regulated services while gaining significant market power over both ISPs and content providers. 

The nature of ISPs is comparable to that of utilities and it is therefore proper to regulate them as such. It is hugely problematic if some of their indeed considerable market power is then usurped by a new third party that has managed to completely escape regulation.

Why are Cloudflare and others pushing for centralised DoH?

This is indeed somewhat of a mystery. Like many websites that claim to care about our privacy before stuffing our browsers with cookies and trackers, Cloudflare (and Google and Mozilla) tell us they are in it to improve our privacy. Only one of these three is actually a non-profit though. It is pretty hard to see Google or Cloudflare as publicly traded charities heavily invested in improving our privacy. Mozilla is a very credible privacy advocate (even if I disagree with how they want to improve my privacy).

When questioned, Cloudflare states they are doing it because 1) it does not cost them real money and 2) users of the Cloudflare DoH service get slightly faster access to Cloudflare-hosted content.

The first bit could technically be true, although providing high speed encrypted DNS service does cost tons of CPU cycles. It appears however that Cloudflare is spending serious time lobbying governments in Europe and the US to get them behind centralised DoH – and unless there is a new pro-bono trend in lobbying I am not aware of, such efforts cost real money.

The second part is also interesting and somewhat revealing. If the impetus to centralise DoH on Cloudflare is indeed to speed up Cloudflare services relative to competitors, that is a clear network neutrality violation. It has also been claimed that this effect is in fact tiny, but if so, there is no good faith explanation left for why the company is attempting to centralise the internet on itself.

A now deleted Twitter conversation outlining how centralised DoH by Cloudflare specifically benefits Cloudflare customers

In the absence of good explanations the mind wanders to bad explanations. A crucial fact is that some CDNs that compete with Cloudflare face immediate challenges if DNS moves away from the ISP – CDNs will lose sight of where DNS queries are actually coming from, leading streaming video to (initially) be served from potentially sub-optimal locations. 

What could be done

From a European perspective, it is quite clear that any centralised DoH provider that manages to become the new default for lookups is, in fact, also a telecommunications service provider. With this comes all the fun of the NIS directive and the full force of the EU telecommunications framework directives. Governments here would do well to recognize this fact and regulate accordingly.

Meanwhile, Mozilla has negotiated a privacy contract with Cloudflare for the DoH services, and we can find the promises in that contract here. There is no trace of network neutrality in there, nor is there a commitment from Cloudflare to actively work on establishing top-notch service to relevant content delivery networks. Life would be a lot better if Mozilla required such commitments from Cloudflare. 

If regulated as such, centralised DNS over HTTPS could be made more palatable – but it might also make running a DoH service for free unattractive enough that operators will no longer bother. 

Summarising

Centralised DNS over HTTPS is pushed to keep ISPs’ presumed prying eyes away from our DNS traffic, and to grant such access to other parties like Cloudflare that then promise not to do anything bad with our data. There are good reasons to assume centralised DoH is bad for privacy.

In addition, by moving crucial telecommunication network functionality from the regulated ISP to unregulated cloud providers, there is significant risk of network neutrality violations. This is because the centralised DNS over HTTPS provider is now in charge of providing snappy service, including to its documented competitors. All this without regulation.

Governments should recognize centralised DoH operators that take over DNS lookups by default for what they are: providers that need to be regulated because of their systemic position. And finally, it would behoove Mozilla (who are strident fighters for a free and open internet) to make sure their contract with Cloudflare includes provisions that make sure all CDNs are equally well served by their chosen DoH providers.

Goodbye DNS, Goodbye PowerDNS!

After over 20 years of DNS and PowerDNS, I am moving on. Separate from this page, I am releasing a series of three huge posts on the history of PowerDNS, so I won’t dwell too much on that here.

This is not an easy story to write. I don’t like to grandstand, but when the founder of a project decides to leave after two decades, people do expect some form of an explanation.

It is also customary to describe such an exit in upbeat terms, sometimes to the point that you wonder, if things were so great, why this person is leaving.

But the reality is, I got bored and wanted to do new things. PowerDNS and the wonderful people who I met along the way have taught me so much – software development, operations, marketing, sales, business development, community building, writing internet standards & much more. It has been a wonderful ride.

But now it appears DNS and I are somewhat at the end of our relationship (even though I will remain a minor PowerDNS shareholder). Formally I leave on December 31st.

Helping build PowerDNS to what it is today – a flourishing department of Open-Xchange, able to fund itself by delivering its software to paying users, while maintaining good relations with the open source community, has been an incredible honour. 

As I leave the company, management and software development have long been in the hands of people I am proud to call my successors. They are doing a better job than I ever did – the only claim I have on the current success is that I helped recruit this next generation. I don’t think there is much more to aspire to when you create a company than leaving it behind in good shape.

(please do read on till the end of this post for the Oscar-speech round of thanks!)

Some observations

A few years ago, I became somewhat upset with DNS. This is not the main reason for quitting the profession, but now that I have your attention for one final time I do want to take one last stand on two important issues.

In 2018 I did a talk over at the IETF on the ever increasing size of the combined set of DNS specifications – I had looked through the upcoming work from the various standards groups. I plotted the amount of text involved, and also extended this to the historical beginnings of DNS. And it turned out that DNS was growing at one page every two days – without getting any better. I titled this talk “The DNS Camel”, and I wondered if just one more standard might break the back of the protocol.

Many listeners were sympathetic to this story, but also, nothing happened. The protocol just continued to grow. There was the legitimate question of whether I could please do more than complain. My main worry was that DNS would become even more inaccessible than it was already. I launched the ‘Hello DNS’ project to create a unified point to start learning about this protocol. I think that helped.

But I still fervently believe DNS is getting way, way too big. Not only does this make our software ever more complicated, it is also ever harder for new people to enter the field. You just don’t get all this *stuff* without half a decade of experience. This will lead to dangerous bugs but it also means we’ll miss out on younger talent that has not yet had the chance to incorporate the wisdom we’ve been imparting via the many RFCs we write each year.

And I think I am not alone in believing this – as I type this I am surrounded by no less than 4 (tiny) camels that people sent me as gifts (thanks!). 

DNS and the Cloud

Later, I saw that there was a push for “the cloud” to take over yet another part of our Internet. Encrypted DNS is great, we should all do far more of that. But I was (and am) tremendously unhappy that more and more of DNS is now set to move under the control of (among others) Google and Cloudflare – both of whom protest that they have nothing but the best intentions. But still I see yet more of the Internet getting centralised, and I worry where that will go.

I also worry that people somehow are not worrying about this – somehow we’ve made peace with the fact that companies far away get very detailed records on everything we do online, and that we just have to live with that.

Together with Open-Xchange we spent two years spreading the word on centralised DNS over HTTPS, and I do hope we have made people think about the wisdom of moving DNS to centralised third parties.

A round of thanks and appreciations

To end on a happier note – I want to thank the PowerDNS people for the tremendous job they are doing. Already more than a year ago I started removing myself from more and more discussions, and the way you are running the business fills me with pride.

I want to thank the many people who believed in PowerDNS, who believed in me and worked with our technology, sometimes long before it was ready for prime time. You truly helped shape the product. I am very grateful for the people that decided to work with and for us, even when we did not look like much of a normal company. And of course, so much of PowerDNS actually came from the open source community, including key and core components. I can’t thank the contributors enough. 

One area of special pride is how we enabled a number of PowerDNS contributors & consultants to grow their own business or to enhance their own career. It is wonderful to see how we’ve been able to help each other get ahead in life, while doing useful things.

I also want to thank Open-Xchange (the PowerDNS parent company) for taking such good care of the company. As noted in the PowerDNS history posts, OX took in PowerDNS at a time when business was good, but the future was highly uncertain. Rafael and crew believed in the story and acquired the company. 

Open-Xchange provided a powerful sales organization, but also a rock solid project department that helped actually close deals and deliver working solutions at complicated customers.

It is very rare for acquisitions to be truly successful, but PowerDNS and Open-Xchange really are better together. Using the skills from both companies, PowerDNS expanded into the PowerDNS Platform that delivers the solutions that large scale internet operators need and can use.

I wish everyone the best of luck, and I sincerely hope PowerDNS continues to be a place where people love to work and that it continues to be a force that helps improve the open internet!

Signing off – 

Bert Hubert – PowerDNS co-founder (a title no one will ever take away from me!)
