20 Jun 2009
Who Pwns The Internet? (Take 2)
Filed under: DNSSEC, Security — Ben @ 20:08
Another interesting way to pwn the Internet is to control the routing of packets to critical nameservers. In practice, Internet routing is done by ASes (Autonomous Systems). If an AS wants to pwn a nameserver on a network it controls, it is a trivial matter: it just redirects the packets to its own nameserver. I’d draw you a picture, but I’m sure Matasano Chargen will do it prettier.
So. I thought it would be instructive to determine which ASes had control over which domains. More fun with dot.
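For the curious, here's a sketch of roughly how such a graph might be built: map each nameserver address to the AS that announces it, then emit dot. The prefix table, AS numbers and addresses below are invented for illustration; a real run would use actual BGP data (e.g. a RouteViews dump).

# Sketch: map a TLD's nameserver addresses to the ASes that announce
# them, and emit a dot graph of "who can redirect traffic to whom".
# All names and numbers here are made up for illustration.

tld_nameserver_ips = {
    "uk": ["195.66.240.130", "156.154.100.3"],
    "fr": ["192.93.0.1", "194.0.9.1"],
}

def as_for_ip(ip):
    """Toy longest-prefix match; a real version would walk a BGP RIB."""
    as_for_prefix = {"195.66.": "AS5459", "156.154.": "AS11840",
                     "192.93.": "AS2200", "194.0.": "AS25152"}
    for prefix, asn in as_for_prefix.items():
        if ip.startswith(prefix):
            return asn
    return "AS-unknown"

print("digraph pwnage {")
for tld, ips in tld_nameserver_ips.items():
    for ip in ips:
        print('  "%s" -> "%s";' % (as_for_ip(ip), tld))
print("}")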
The picture is no longer quite so rosy for the UK, but still, not bad, all things considered.
UK's AS dependencies
But France. I don’t know what to say about France. France is surreal. I’ve linked through to a much bigger version because, well, you’ve got to lose yourself in the spiderwebs. The SVG is here, though.
Small version of France's AS dependencies
As for Fiji, I’d love to show you Fiji, but the way I’m doing it doesn’t work for Fiji right now. And hence, obviously, not for the whole world, either. Coming soon, I hope.
14 Jun 2009
Who Pwns The Internet?
Filed under: DNSSEC, Security — Ben @ 16:22
Update: Ben Hyde suggested I should use the (undocumented) “concentrate” option to dot, which certainly tidies up the graphs. So I did.
A remark on the IETF DNS Working Group’s mailing list got me thinking.
Suppose I were the owner of nordu.net (to pick an example at random), then I could take control of sunet.se, for about 25% of Internet users, since one of their four nameservers is server.nordu.net. Similarly, I could then take control of ripe.net for 25% of those 25% (via sunic.sunet.se). One in seven of those guys could fall victim to my ownership of nic.fr via ns-sec.ripe.net, and from there I have complete control of fr (that is, France) - ok, by now, for only a bit under 1% of the Internet, but even so, that’s kinda worrying, don’t you think? And obviously if I own sunet.se then it would be more like 3.5%…
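For concreteness, here's that arithmetic spelled out, assuming resolvers pick uniformly among a zone's nameservers (a simplification), and with the nameserver counts implied by the fractions above:

# The arithmetic in the paragraph above. Each factor is the chance a
# resolver picks the nameserver I control out of those serving the zone.

p_sunet = 1 / 4        # server.nordu.net is 1 of sunet.se's 4 nameservers
p_ripe  = 1 / 4        # sunic.sunet.se is 1 of ripe.net's 4
p_fr    = 1 / 7        # ns-sec.ripe.net is 1 of nic.fr's 7

print("owning nordu.net pwns fr for %.2f%% of users" %
      (100 * p_sunet * p_ripe * p_fr))   # a bit under 1%
print("owning sunet.se pwns fr for %.2f%% of users" %
      (100 * p_ripe * p_fr))             # about 3.5%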
On the other hand, uk does not suffer from this problem: it depends only on nic.uk. Which seems like a much better idea. Anyway, I got to wondering just how bad this problem actually is, which led to me having more fun with dot. So, for a taster, here’s France’s dependencies…
France's dependencies
And here’s the UK’s
UK's dependencies
And here’s Fiji (I include this for Jasvir, who is getting married there soon, and ought to know the terrible risk he’s taking)
Fiji's dependencies
And all the top level domains put together
All TLDs' dependencies
So that one is pretty but a bit hard to digest. Obviously the main news is that there are a lot of domains which could interfere with one or more TLDs!
Another way to think about this is to wonder who could pwn the most TLDs? Well, the answer (after the root, of course) is that nstld.com, gtld-servers.net, com and net come in equal first, with 228 TLDs pwnable. Next up is Afilias which, through a variety of domains, including org and info, is able to control 187 TLDs. After that comes se (Sweden) with 158, and then nordu.net, sunet.se, chalmers.se, kth.se, uninett.no, uu.se, edu, no, norid.no, lth.se and uit.no, all able to have a go at 157 TLDs.
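The counting itself is simple enough; here's a sketch, with a tiny made-up sample of the graph standing in for the real data:

# Build the "X can pwn Y" graph (an edge from a domain to each domain
# that has a nameserver inside it) and count, for each domain, how many
# TLDs are reachable. The edges are a toy sample, not the real 2009 data.

from collections import defaultdict

edges = defaultdict(set)          # controller -> directly pwnable domains
edges["nordu.net"].add("sunet.se")
edges["sunet.se"].add("ripe.net")
edges["ripe.net"].add("fr")
edges["nic.uk"].add("uk")

TLDS = {"fr", "uk"}

def reachable(start):
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

for domain in list(edges):        # snapshot: defaultdict grows on lookup
    n = len(reachable(domain) & TLDS)
    print("%s can have a go at %d TLD(s)" % (domain, n))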
Food for thought. Especially if you’re thinking about DNSSEC.
12 Jun 2009
Ignorance Transfer Network
Filed under: Programming, Security — Ben @ 12:41
SSDSIG recognises that some commonly used languages (e.g. C, php etc.) allow, or even encourage, programming practices that introduce security vulnerabilities. Accepting that in time market forces may encourage the adoption of safer alternatives some members feel that the process needs to be accelerated. The reasons for the continued use of ‘unsafe’ languages and the near-term feasibility of alternatives for commercial systems of modest criticality are complex and ill-understood. This also applies to the slow uptake of more formal methods. Further data on this is required.
This is a gem from “Secure Software Development - a White Paper: Software Security Failures: who should correct them and how” by Bill Whyte and John Harrison, from the Cyber Security Ignorance (Knowledge, shurely? Ed) Transfer Network, presumably at the taxpayer’s expense. I hear through the grapevine that they’re planning to spend more of our money to set up a “Secure Software Development Panel” to deliberate on the deep thinking exemplified above. Awesome.
So, what’s wrong with that statement? Firstly, I think we’ve got past the idea that there’s something extra special about buffer overflows as a security issue. Yes, there are many languages that prevent them completely (e.g. PHP, amusingly), but they don’t magically produce secure programs either. Indeed, pretty much all languages used for web development are “safe” in this respect, and yet the web is a cesspit of security problems, so how did that help?
Secondly, the claim that the “reasons are … complex and poorly understood” is a great one to make if you want to spend your life wasting your time on government money, but, well, not exactly true. C is widely used because it is fast, portable, can do anything and has a vast amount of software already written in it that is otherwise difficult to get at. Which is, of course, why PHP is widely used: because it’s one way for the less capable programmer to get at all that C out there. As for “near-term feasibility of alternatives”, well, name an alternative and I’m pretty sure anyone knowledgeable in the field could give you a thorough rundown on its near-term feasibility in an hour or so.
Thirdly, talking about “unsafe” languages implies that there might be “safe” ones. Which is nonsense.
Fourthly, formal methods. Really? The reason there’s slow uptake is because they don’t work. Get with the program, guys!
30 May 2009
Wave Trust Patterns
Filed under: Crypto, Open Source, Open Standards, Privacy, Security — Ben @ 6:04
Ben Adida says nice things about Google Wave. But I have to differ with
… follows the same trust patterns as email …
Wave most definitely does not follow the same trust patterns as email; that is something we have explicitly tried to improve upon. In particular, the crypto we use in the federation protocol ensures that the origin of all content is known and that the relaying server did not cheat by omitting or re-ordering messages.
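To be clear, what follows is not the actual federation crypto (see the spec for that), just a toy illustration of the general shape of the guarantee: if each message is signed over its own content plus the hash of its predecessor, a relay cannot omit or re-order messages without the chain failing to verify. An HMAC stands in for a real public-key signature:

import hashlib, hmac

SECRET = b"author-signing-key"     # stand-in for a real signing key

def sign(prev_hash, content):
    msg = prev_hash + content
    sig = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return hashlib.sha256(msg).digest(), sig

def verify_chain(messages):
    prev = b"\0" * 32
    for content, sig in messages:
        expect = hmac.new(SECRET, prev + content, hashlib.sha256).digest()
        if not hmac.compare_digest(sig, expect):
            return False
        prev = hashlib.sha256(prev + content).digest()
    return True

chain = []
prev = b"\0" * 32
for text in [b"hello", b"world"]:
    prev, sig = sign(prev, text)
    chain.append((text, sig))

print(verify_chain(chain))                      # True
print(verify_chain(list(reversed(chain))))      # False: re-ordering detected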
I should note, before anyone gets excited about privacy, that the protocol is a server-to-server protocol and so does not identify you any more than your email address does. You have to trust your server not to lie to you, though - and that is similar to email. I run my own mail server. Just saying.
I should also note that, as always, this is my personal blog, not Google’s.
29 May 2009
Google Wave Federation
Filed under: Crypto, Security — Ben @ 0:26
Today Google announced Google Wave. I’m not going to talk about Wave itself, just search for it and get a ton of articles. Suffice it to say that it is awesome.
What I want to mention is the Wave Federation Protocol, and in particular, General Verifiable Federation, which is the part my talented colleague Lea Kissner and I worked on. I know I’m a crypto geek, but I think this protocol is pretty interesting, with applications wider than just Google Wave, since it creates a platform for building federated messaging systems in which you do not trust intermediaries.
Lea and I welcome feedback on the protocol, which we are sure is full of mistakes right now, as we were in a bit of a rush to hit today’s deadline…
(And for those friends who are probably wondering now if this is why I went to Australia earlier this year, the answer is, unsurprisingly: yes).
20 May 2009
ECMAScript 5
Filed under: Open Standards, Programming, Security — Ben @ 4:35
When I started working on Caja I had not really plumbed the depths of Javascript (or, as it is more correctly called, ECMAScript 3) and I was very surprised to learn how powerful it actually is. I was also pretty startled by some of the nasty gotchas lurking for the unwary (or even wary) programmer (had I known, perhaps I would never have tried to get Caja off the ground!).
For some time now, the ECMAScript committee has been working on a new version of Javascript which fixes many of these problems without breaking all the existing Javascript that is out there. This seems to me a remarkable achievement; Mark Miller, Mike Samuel (both members of the Caja team) and Waldemar Horwat gave a very interesting talk about these gotchas and how the ES5 spec manages to wriggle around them. I recommend it highly. Slides are available for those who don’t want to sit through the presentation, though I would say it is worth the effort.
14 May 2009
So You Think Linux is Secure
Filed under: Security — Ben @ 10:56
“This action also made our offensive cybercapabilities ineffective against them, given the cyberweapons were designed to be used against Linux, UNIX and Windows,” he said, citing three popular computer operating systems.
If you ignore the cyberannoying cybertrend towards cyberusing “cyber” as a cyberprefix for everything, then you’ll notice that our man in DC is lumping Linux and Windows in the same attackable boat.
I guess we should also ignore the fact that he’s commenting on Kylin, which is derived from FreeBSD, which is, pretty much, UNIX - though I am told it doesn’t licence the UNIX trademark, unlike, say, MacOS.
4 May 2009
Why Privacy Will Always Lose
Filed under: Identity Management, Privacy — Ben @ 17:05
In social networks, that is.
I hear a lot about how various social networks have privacy that sucks, and how, if only they got their user interaction act together, users would do so much better at choosing options that protect their privacy. This seems obviously untrue to me, and here’s why…
Imagine that I have two otherwise identical social networking sites, one with great privacy protection (GPPbook) and one that has privacy controls that suck (PCTSbook). What will my experience be on these two sites?
When I sign up on GPPbook, having jumped through whatever privacy-protecting hoops there are for account setup, what’s the next thing I want to do? Find my friends, of course. So, how do I do that? Well, I search for them, using, say, their name or their email address. But wait - GPPbook won’t let me see the names or email addresses of people who haven’t confirmed they are my friends. So, I’m screwed.
OK, so clearly that isn’t going to work, let’s relax the rules a little and use the not-quite-so-great site, NQSGPPbook, which will show names. After all, they’re rarely unique, so that seems pretty safe, right? And anyway, even if they are unique, what have I revealed? That someone signed up for the site at some point in the past - but nothing more. Cool, so now I can find my friends, great, so I look up my friend John Smith and I find ten thousand of them. No problem, just check the photos, where he lives, his birthday, his friends and so forth, and I can tell which one is my John Smith. But … oh dear, no friend lists, no photos, no date of birth - this is the privacy preserving site, remember? So, once more I’m screwed.
So how am I going to link to my friends? Pretty clearly the only privacy preserving way to do this is to contact them via some channel of communication I have already established with them, say email or instant messaging, and do the introduction over that. Similarly with any friends of friends. And so on.
Obviously the experience on PCTSbook is quite different. I look up John Smith, home in on the ones that live in the right place, are the right age, have the right friends and look right in their photos and I click “add friend” and I’m done.
So, clearly, privacy is a source of friction in social networking, slowing down the spread of GPPbook and NQSGPPbook in comparison to PCTSbook. And as we know, paralleling Dawkins on evolution, what spreads fastest is what we find all around us. So what we find around us is social networks that are bad at protecting privacy.
This yields a testable hypothesis, like all good science, and here it is: the popularity of a social networking site will be in inverse proportion to the goodness of its privacy controls. I haven’t checked, but I’ll bet it turns out to be true.
And since I’ve mentioned evolution, here’s another thing that I’ve been thinking about in this context: evolution does not yield optimal solutions. As we know, evolution doesn’t even drive towards locally optimal solutions, it drives towards evolutionary stable strategies instead. And this is the underlying reason that we end up with systems that everyone hates - because they are determined by evolution, not optimality.
So, is there any hope? I was chatting with my friends Adriana and Alec, co-conspirators in The Mine! Project, about this theory, and they claimed their baby was immune to this issue, since it includes no mechanism for finding your friends. I disagree; this means it is as bad as it is possible for it to be in terms of “introduction friction”. But thinking further - the reason there is friction in introductions is because the mechanisms are still very clunky. I have to use cut'n'paste and navigate to web pages that turn up in my email (and hope I’m not being phished) and so forth to complete the introduction. But if the electronic channels of communication were as smooth and natural as, say, talking, then it would be a different story. All of a sudden using existing communications channels would not be a source of friction - instead not using them would be.
So, if you want to save the world, then what you need to do is improve how we use the ‘net to communicate. Make it as easy and natural (and private) as talking.
27 Apr 2009
Three Books
Filed under: Books — Ben @ 10:00
I do most of my reading when travelling (have to find some way to fill in the email gap), and the last three books I’ve read have been notably great. So, in no particular order…
Musicophilia: Tales of Music and the Brain, by the slightly insane Oliver Sacks. I generally enjoy Sacks’ books but they always feel a bit light on science. This book is different - full of fascinating anecdotes backed up by actual research. Most astonishing is the way that music can have a radical effect on people suffering from very debilitating conditions, such as Parkinson’s and Alzheimer’s. Great read.
Old Man’s War by John Scalzi. The cover compares Scalzi to Robert Heinlein. This strikes me as entirely unfair; Heinlein’s books are entirely populated by Heinlein talking to himself (if the character is male) and brainy bimbos that are hopelessly in love with him. Scalzi manages a much grittier and highly engaging version of Starship Troopers (which is admittedly a classic, even if Heinlein did write it).
Finally, Crooked Little Vein by Warren Ellis. A high speed romp through the perverts of the modern age by the world’s unluckiest private investigator in search of the lost, secret, alternative constitution of the United States of America, under the control of a monkey-crap injecting Most Powerful Man In The World. Really. Apparently it was supposed to shock me (said the back cover) but I was mostly laughing.
7 Apr 2009
Trust Me, I’m Signed!
Filed under: Rants, Security — Ben @ 15:30
The W3C recently announced their spec for signing widgets. Signing things is a good idea, if you’d like to be assured that they come from where you think they come from, or you want to detect tampering. But I would have hoped we were way past statements like this
Widget authors and distributors can digitally sign widgets as a trust and quality assurance mechanism.
If trust and quality were assured by signatures then our lives would be so much easier - but sadly it is not so. Indeed, it is so much not so that CAs, in an amazing piece of marketing, have managed to persuade us that, since they work so poorly for trust, what we should do is pay them even more money to get more robust signatures (a.k.a. EV certificates)!
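For the avoidance of doubt, here's what a signature actually buys you, in toy form, using Python's cryptography package (pip install cryptography); the payload is deliberately nasty:

# A signature check only tells you who signed and that the bits are
# unmodified; it says nothing about whether the payload is any good.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

key = Ed25519PrivateKey.generate()
payload = b"rm -rf / --no-preserve-root"      # perfectly signable malware
signature = key.sign(payload)

try:
    key.public_key().verify(signature, payload)
    print("valid signature - but that proves provenance, not quality")
except InvalidSignature:
    print("tampered")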
Anyway, I was sufficiently irritated by this stupidity that I felt it necessary to remark on it. Which prompted this absolutely brilliant response from my friend Peter Gutmann
From the report:
Of signed detected files, severity of the threats tended to be high or severe, with low and moderate threats comprising a much smaller number of files:
Severe 50819
High 73677
Moderate 42308
Low 1099
So there you go, signing definitely does provide a “trust and quality assurance mechanism”. If it’s a CA-certified signed rootkit or worm, you know you’ve been infected by the good stuff.
“The report”, by the way, is a large-scale study by Microsoft which makes for some interesting reading. In particular, they also acknowledge that even the promise that signatures would at least let you track down the evil bastard who wrote the code has proven empty
Though also intended to identify the signing parties, Microsoft has been unable to identify any authors of signed malware in cooperation with CAs because the malware authors exploit gaps in issuing practices and obtain certificates with fraudulent identities.
CodeCon Is Back!
Filed under: General, Open Source, Programming — Ben @ 10:43
Unfortunately, I can’t be there, but the lineup looks great. The bio-hacking track looks particularly fun.
Not long to go now, less than two weeks. Sign up!
28 Mar 2009
More Banking Stupidity: Phished by Visa
Filed under: General, Rants, Security — Ben @ 14:21
Not content with destroying the world’s economies, the banking industry is also bent on ruining us individually, it seems. Take a look at Verified By Visa. Allegedly this protects cardholders - by training them to expect a process in which there’s absolutely no way to know whether you are being phished or not. Even more astonishing is that this is seen as a benefit!
Frame inline displays the VbV authentication page in the merchant’s main window with the merchant’s header. Therefore, VbV is seen as a natural part of the purchase process. It is recommended that the top frame include the merchant’s standard branding in a short and concise manner and keep the cardholder within the same look and feel of the checkout process.
Or, in other words
Please ensure that there is absolutely no way for your customer to know whether we are showing the form or you are. In fact, please train your customer to give their “Verified by Visa” password to anyone who asks for it.
Craziness. But it gets better - obviously not everyone is pre-enrolled in this stupid scheme, so they also allow for enrolment using the same inline flow. Now the phishers have the opportunity to also get information that will allow them to identify themselves to the bank as you. Yes, Visa have provided a very nicely tailored and packaged identity theft scheme. But, best of all, rather like Chip and PIN, they push all blame for their failures on to the customer
Verified by Visa helps protect you from fraudulent claims from cardholders – that they didn’t take part in, or authorise, a payment. Once you are up and running with Verified by Visa, you are no longer liable for chargebacks of this nature.
In other words, if the phisher uses your Verified by Visa password, then it’s going to be your fault - obviously the only way they could know it is if you told them! If you claim it was not you, then you are guilty of fraud; it says so, right there.
Mining Is Easy
Filed under: Privacy, Security — Ben @ 13:33
I’ve written before about the risks involved in exposing the social graph. Now there’s a nice video showing just how easy it is to mine that graph, and other data we give away so freely, using Maltego2. Scary stuff.
10 Mar 2009
Capabilities for Python
Filed under: Capabilities, Security — Ben @ 16:13
Guido van Rossum has never been a big fan of this idea, and he recently unloaded a pile of reasoning as to why. Much of this really boils down to the unsuitability of existing Python implementations as a platform for a capability version of the language, though clearly there are language features that must go, too. There’s more on this point from tav, but perhaps his idea of translating Capability Python into Cajita is a more fruitful course…
Anyway, what intrigued me more than the specifics was this statement from Guido
The only differences are at the library level: you cannot write to the filesystem, you cannot create sockets or pipes, you cannot create threads or processes, and certain built-in modules that would support backdoors have been disabled (in a few cases, only the insecure APIs of a module have been disabled, retaining some useful APIs that are deemed safe). All these are eminently reasonable constraints given the goal of App Engine. And yet almost every one of these restrictions has caused severe pain for some of our users.
Securing App Engine has required a significant use of internal resources, and yet the result is still quite limiting. Now consider that App Engine’s security model is much simpler than that preferred by capability enthusiasts: it’s an all-or-nothing model that pretty much only protects Google from being attacked by rogue developers (though it also helps to prevent developers from attacking each other). Extrapolating, I expect that a serious capability-based Python would require much more effort to secure, and yet would place many more constraints on developers. It would have to have a very attractive “killer feature” to make developers want to use it…
There are two important mistakes in this.
Firstly, capability enthusiasts don’t prefer a security model in the sense that Guido is suggesting; we prefer a way of enforcing a security model. App Engine does this enforcement through layers of sandboxing whereas capability languages do it by not providing the untrusted code with the undesirable capabilities. Of course, a side effect of this approach is that capabilities allow far more subtle security models (e.g. “you can only write this part of the file system” or “you can only write files a user has specifically designated” or “you can create sockets, but only for these destinations”) without much extra work and so capability enthusiasts have a tendency to talk about and think in terms of those subtler models. However, Guido’s all-or-nothing model can be implemented easily with capabilities - we don’t have to be subtle if he doesn’t want us to be!
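By way of illustration (and this is only a sketch - a real capability Python would have to close off many more escape routes than this), here's the “you can only write this part of the file system” idea: hand the untrusted code an object that is the permission, instead of letting it reach for the ambient open():

import os

class SubtreeWriter:
    """A capability to write files under one directory and nothing else."""
    def __init__(self, root):
        self._root = os.path.realpath(root)

    def open_for_write(self, relpath):
        full = os.path.realpath(os.path.join(self._root, relpath))
        if not full.startswith(self._root + os.sep):
            raise PermissionError("outside the granted subtree")
        return open(full, "w")

def untrusted_plugin(writer):
    with writer.open_for_write("notes.txt") as f:    # allowed
        f.write("hello\n")
    try:
        writer.open_for_write("../../etc/passwd")    # not allowed
    except PermissionError as e:
        print("blocked:", e)

os.makedirs("/tmp/sandbox", exist_ok=True)
untrusted_plugin(SubtreeWriter("/tmp/sandbox"))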
This fallacy causes the second error - because the security model does not have to be subtler, there’s no particular reason to imagine it should take any longer to implement. Nor need it place many extra constraints on developers (I will concede that it must place some constraints because not all of Python is capability-safe). Developers are really only constrained by capability languages in the intended sense: they can’t do the things we don’t want them to do. If the security models are the same, the constraints will be the same, regardless of whether you use sandboxes or capabilities.
Incidentally, I tried to sell the idea of capabilities to the App Engine team several years ago. Given how far we’ve come with Caja in a year, working on a language that is definitely less suited to capabilities than Python is, I would be very surprised if we could not have done the same for Python by now.
9 Mar 2009
The Telegraph Show How Not To Do It
Filed under: Security — Ben @ 3:54
I’m a bit stunned that an organisation the size of The Telegraph would store user passwords in plaintext, but, well … they do.
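For anyone tempted to follow their example, here's roughly what they should do instead, sketched with nothing but the Python standard library (the iteration count and salt size are illustrative):

# Store only a salted, slow hash, so a database leak doesn't hand out
# everyone's password in the clear.

import hashlib, hmac, os

def hash_password(password, salt=None, iterations=100_000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def check_password(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, n, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, n, digest))  # True
print(check_password("guess", salt, n, digest))                         # False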
7 Mar 2009
DNSSEC: Update
Filed under: DNSSEC — Ben @ 18:26
I’ve had feedback since I wrote about DNSSEC that my makefile didn’t work on many platforms. Why Linux and FreeBSD have to use different versions of make I have no idea, but at least it is possible to write makefiles that work on either, if you’re careful. So, I’ve updated the tarball with a version that should work most places. Give it a try.
For the geeky, here’s a diff:
diff -r 94acb807ca7c -r d4a50f0d790c Makefile
--- a/Makefile	Sat Mar 07 16:41:39 2009 +0000
+++ b/Makefile	Sat Mar 07 16:49:37 2009 +0000
@@ -1,4 +1,6 @@
 all: run
+
+.PHONY: named.root anchors.xml isc-dlv.conf
 
 push: dnssec.tgz
 	scp dnssec.tgz www.links.org:files
@@ -6,7 +8,7 @@
 run: named.root rndc.key itar-trusted-keys.conf force-dnssec.conf isc-dlv.conf
 	named -c named.conf -d 10 -g
 
-named.root!
+named.root:
 	rm -f named.root
 	wget ftp://ftp.rs.internic.net/domain/named.root
 
@@ -17,7 +19,7 @@
 	./anchors2keys < anchors.xml > /tmp/itar-trusted-keys
 	mv /tmp/itar-trusted-keys itar-trusted-keys.conf
 
-anchors.xml! iana-pgp-keys
+anchors.xml: iana-pgp-keys
	# appears to break without -v!
 	rsync -v rsync.iana.org::itar/anchors.xml rsync.iana.org::itar/anchors.xml.sig .
 	gpg --no-default-keyring --keyring ./iana-pgp-keys --verify anchors.xml.sig anchors.xml
@@ -46,7 +48,7 @@
 	gpg --export 1BC91E6C | gpg --no-default-keyring --keyring ./isc-pgp-keys --import
 	rm isc-key.tmp* 363
 
-isc-dlv.conf! isc-pgp-keys
+isc-dlv.conf: isc-pgp-keys
 	rm -f dlv.isc.org.named.conf*
 	wget http://ftp.isc.org/www/dlv/dlv.isc.org.named.conf http://ftp.isc.org/www/dlv/dlv.isc.org.named.conf.asc
 	gpg --no-default-keyring --keyring ./isc-pgp-keys --verify dlv.isc.org.named.conf.asc dlv.isc.org.named.conf
3 Mar 2009
Native Client
Filed under: Security — Ben @ 13:36
I mentioned Native Client in passing a while back but I didn’t explain what it is…
Native Client is a way to sandbox code without resort to hardware assistance. In short, what it does is statically verify that the code obeys certain rules, and as a result, that the code can only use the interfaces to the rest of the system that the sandbox intends it to use. In other words, it’s a bit like Caja only for native code instead of Javascript. There’s also a version of gcc that produces code that will pass the static validation - which means that pretty much any C (or C++ or Fortran) program can be ported to Native Client with little difficulty.
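Here's a toy of the idea, if it helps - real NaCl statically verifies x86 machine code, but the shape is the same: check everything against a whitelist first, then run it unsupervised:

# Toy static verifier: reject the whole program up front if any
# instruction is outside the whitelist, then execute with no runtime traps.

ALLOWED = {"push", "add", "print"}          # no "syscall" in the sandbox

def verify(program):
    for op, *_ in program:
        if op not in ALLOWED:
            raise ValueError("verification failed: %r not allowed" % op)

def run(program):
    verify(program)                          # static check first...
    stack = []
    for op, *args in program:                # ...then execute unsupervised
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "print":
            print(stack.pop())

run([("push", 2), ("push", 3), ("add",), ("print",)])   # prints 5
try:
    run([("syscall", "unlink /etc/passwd")])             # rejected
except ValueError as e:
    print(e)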
The Native Client team think the point of Native Client is to allow web apps to have access to high speed code without compromising the security of the user. This is certainly a use, but I find the idea of using it to enforce security in other areas quite interesting, too. For example, with Native Client you could make Mark Seaborn’s Plash both portable and more useful - which Mark has been working on. Of course, before this can be relied on we need to know that NaCl is secure, so it is interesting that the team are offering cash for bugs. You could get paid for playing with NaCl!
24 Feb 2009
Doing DNSSEC Right
Filed under: DNSSEC, Security — Ben @ 16:36
Since posting about DNSSEC, I’ve had lots of great feedback. So, in no particular order…
Various people have pointed out that DLV is not as bad as I suggested
* DLV is only activated for queries that cannot be proved secure in the cache
* DLV employs aggressive negative caching - it works out whether existing cached NSEC (and NSEC3?) records would prove nonexistence of a record before bothering to query it
* DLV is not used for domains that have trusted keys
Although the second measure is, as I remember it, strictly speaking against the rules (one is not supposed to calculate negative responses from the cache), clearly it can be stipulated that a DLV server must behave when serving NSEC records. Anyway, the net result is that the overhead of DLV is actually quite reasonable. I still say it should be run by every DLVed domain for every other, though. In any case, I am going to switch it on in my own caching resolver.
One thing I wanted to achieve is that a DNSSEC-ignorant resolver downstream of my caching resolver would only get validated results. I tried to do this with the dnssec-must-be-secure configuration option - but this is wrong. That option requires everything to be signed, whereas in DNSSEC it is perfectly OK for a zone to be unsigned so long as its parent delegates to it with no keys (bear in mind that with DNSSEC the nonexistence of the keys is provable, and so this is secure). In fact, BIND 9.3 behaves as I want it to with just DNSSEC enabled. In BIND 9.4 onwards I will have to switch it on with the dnssec-validation option (gee, thanks, ISC, for making a backward incompatible change!).
Jelte Jansen operates a domain with various broken entries - this is very handy for testing and I now include its key in my configuration. Note that if you want to see a record that fails validation, then you need to set the CD bit (with dig, +cd or +cd +dnssec if you want to see the DNSSEC records).
Paul Hoffman wonders why I would prefer a signature (for anchors2keys) to download over HTTPS. The reason is that HTTPS download doesn’t really prove the file hasn’t been interfered with - the server will serve anything that happens to be in the filesystem over HTTPS, of course. A signature would be done with a key that I would hope is very strictly supervised, and so is far more trustworthy.
Incidentally, for DNSSEC newbies, one of the interesting features of DNSSEC is that it can be done entirely with offline keys. Proving negatives (i.e. the nonexistence of names) with such a constraint is an interesting problem - and one that I spent three years working on, leading in the end to RFC 5155.
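For the curious, here's the NSEC idea in miniature - not real DNSSEC, just the shape of it: sign, offline, one statement per gap between adjacent names, and a denial is then just the signed interval covering the queried name (NSEC3, per RFC 5155, does the same over hashed names). The sign() function is a stand-in for a real signature made with the zone's private key:

import bisect

zone = sorted(["alpha.example", "beta.example", "delta.example"])

def sign(interval):
    return "RRSIG(%s -> %s)" % interval      # stand-in for a real signature

# Offline: one signed record per gap between adjacent names.
nsec_records = {}
for i, name in enumerate(zone):
    nxt = zone[(i + 1) % len(zone)]          # last name wraps to the first
    nsec_records[name] = ((name, nxt), sign((name, nxt)))

def prove_nonexistence(qname):
    """Return the signed interval covering qname - no online key needed."""
    i = bisect.bisect_right(zone, qname) - 1
    return nsec_records[zone[i]]

print(prove_nonexistence("charlie.example"))
# (('beta.example', 'delta.example'), 'RRSIG(beta.example -> delta.example)')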
I’m sure everyone is tired of reading my config and makefile, so there’s a tarball here.
Finally, thanks very much to all the experts for the excellent feedback.
22 Feb 2009
DNSSEC With DLV
Filed under: DNSSEC, Security — Ben @ 18:38
Tony asks “what about DLV?”.
DLV is Domain Lookaside Validation. The idea is that if your resolver can’t find a trust anchor for foo.bar.example.com, then it can go and look in a lookaside zone, hosted at, say, dlv.isc.org, for trust anchors. So, it would first look for com.dlv.isc.org and then example.com.dlv.isc.org and so forth.
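In other words (a sketch of the name construction, not ISC's actual resolver logic):

def dlv_names(domain, lookaside="dlv.isc.org"):
    labels = domain.split(".")
    # com, example.com, bar.example.com, foo.bar.example.com
    ancestors = [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]
    return ["%s.%s" % (name, lookaside) for name in ancestors]

for name in dlv_names("foo.bar.example.com"):
    print(name)
# com.dlv.isc.org
# example.com.dlv.isc.org
# bar.example.com.dlv.isc.org
# foo.bar.example.com.dlv.isc.org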
So, what do I think of this? It’s another way to solve the problem of having the root not signed.
How does it compare to IANA’s ITAR?
1. It’s much less efficient - all those extra lookups for every query.
2. It covers more than just TLDs - ITAR could, too, but it doesn’t, for whatever reason.
3. There doesn’t seem to be a way to force it, like there is for ITAR. That is, I would like to configure my caching server to force DNSSEC for every domain that exists in DLV, but I don’t believe I can. This makes DLV practically useless, since now only clients that check the AD bit will be aware of the failure.
Also, I think it would be organisationally better if all the participating domains would run DLV for each other, rather than have any single party running it.
Anyway, I modified my setup to also use DLV. Here’s the new Makefile
all: run

run: named.root rndc.key itar-trusted-keys.conf force-dnssec.conf isc-dlv.conf
	named -c named.conf -d 10 -g

named.root!
	rm -f named.root
	wget ftp://ftp.rs.internic.net/domain/named.root

rndc.key:
	rndc-confgen -a -c rndc.key

itar-trusted-keys.conf: anchors2keys anchors.xml
	./anchors2keys < anchors.xml > /tmp/itar-trusted-keys
	mv /tmp/itar-trusted-keys itar-trusted-keys.conf

anchors.xml! iana-pgp-keys
	# appears to break without -v!
	rsync -v rsync.iana.org::itar/anchors.xml rsync.iana.org::itar/anchors.xml.sig .
	gpg --no-default-keyring --keyring ./iana-pgp-keys --verify anchors.xml.sig anchors.xml

anchors2keys:
	wget --no-check-certificate https://itar.iana.org/_misc/anchors2keys
	chmod +x anchors2keys

iana-pgp-keys:
	html2text -nobs http://www.icann.org/en/general/pgp-keys.htm > iana-pgp-keys.tmp
	# IANA's PGP keys suck. Clean them up...
	awk '/^>/ { print substr($$0,2,100); next; } /^Version:/ { print; print ""; next; } { print }' < iana-pgp-keys.tmp > iana-pgp-keys.tmp2
	gpg --import iana-pgp-keys.tmp2
	gpg --export 81D464F4 | gpg --no-default-keyring --keyring ./iana-pgp-keys --import
	rm iana-pgp-keys.tmp*

force-dnssec.conf: itar-trusted-keys.conf
	awk '/^"/ { gsub(/"/, "", $$1); print "dnssec-must-be-secure \"" $$1 "\" true;"; }' < itar-trusted-keys.conf | sort -u > force-dnssec.conf

isc-pgp-keys:
	rm -f 363
	wget --no-check-certificate https://www.isc.org/node/363
	html2text < 363 > isc-key.tmp
	awk '/^Version:/ { print; print ""; next; } { print }' < isc-key.tmp > isc-key.tmp2
	gpg --import isc-key.tmp2
	gpg --export 1BC91E6C | gpg --no-default-keyring --keyring ./isc-pgp-keys --import
	rm isc-key.tmp* 363

isc-dlv.conf: isc-pgp-keys
	rm -f dlv.isc.org.named.conf
	wget http://ftp.isc.org/www/dlv/dlv.isc.org.named.conf http://ftp.isc.org/www/dlv/dlv.isc.org.named.conf.asc
	gpg --no-default-keyring --keyring ./isc-pgp-keys --verify dlv.isc.org.named.conf.asc dlv.isc.org.named.conf
	mv dlv.isc.org.named.conf isc-dlv.conf

test:
	dig -p5453 +dnssec www.dnssec.se @localhost
and here’s named.conf
options {
	listen-on port 5453 { 127.0.0.1; };
	pid-file "named.pid";
	dnssec-enable true;
	dnssec-lookaside . trust-anchor dlv.isc.org.;
	include "force-dnssec.conf";
};

// obtain this file from ftp://ftp.rs.internic.net/domain/named.root
zone "." { type hint; file "named.root"; };

// include the rndc key
include "rndc.key";

controls {
	inet 127.0.0.1 port 1953
		allow { 127.0.0.1; }
		keys { "rndc-key"; };
};

// include ITAR trust anchors
include "itar-trusted-keys.conf";

// include ISC DLV trust anchor
include "isc-dlv.conf";
Enjoy.
Incidentally, I have enabled “forced ITAR” on my main resolver, so we’ll see how that goes. I haven’t added DLV because, like I say, failure would not be noticed, so what’s the point of all the overhead?
What Is DNSSEC Good For?
Filed under: Crypto, DNSSEC, Security — Ben @ 18:24
A lot of solutions to all our problems begin with “first find a public key for the server”, for example, signing XRD files. But where can we get a public key for a server? Currently the only even slightly sane way is by using an X.509 certificate for the server. However, there are some problems with this approach
1. If you are going to trust the key, then the certificate must come from a trusted CA, and hence costs money.
2. Because the certificate is a standard X.509 certificate, it can be used (with the corresponding private key, of course) to validate an HTTPS server - but you may not want to trust the server with that power.
3. The more we (ab)use X.509 certificates for this purpose, the more services anyone with a certificate can masquerade as (for the certified domain, of course).
One obvious way to fix these is to add extensions to the certificates that prevent their use for inappropriate services. Of course, then we would have to get the CAs to support these extensions and figure out how to validate certificate requests that used them.
But I have to wonder why we’re involving CAs in this process at all? All the CA does is to establish that the person requesting the certificate is the owner of the corresponding domain. But why do we need that service? Why could the owner of the domain not simply include the certificate in the DNS - after all, only the owner of the domain can do that, so what further proof is required?
Obviously the answer is: DNS is not secure! This would allow anyone to easily spoof certificates for any domain. Well, yes - that’s why you need DNSSEC. Forgetting the details of DNSSEC, the interesting feature is that the owner of a domain also owns a private key that can sign entries in that domain (and no-one else does, if the owner is diligent). So, the domain owner can include any data they want in their zone and the consumer of the data can be sure, using DNSSEC, that the data is valid.
So, when the question “what is the public key for service X on server Y?” arises, the answer should be “look it up in the DNS with DNSSEC enabled”. The answer is every bit as secure as current CA-based certificates, and, what’s more, once the domain owner has set up his domain, there is no further cost to him - any new keys he needs he can just add to his zone and he’s done.
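In code, the consumer's side might look something like this sketch, using dnspython (pip install dnspython), with a TXT record standing in for a published service key, and assuming a validating resolver you trust the path to:

import dns.flags
import dns.resolver

def fetch_validated(name, rdtype="TXT"):
    resolver = dns.resolver.Resolver()        # assumed to be validating
    resolver.use_edns(0, dns.flags.DO, 4096)  # ask for DNSSEC records
    answer = resolver.resolve(name, rdtype)
    # AD (authenticated data) means the resolver verified the DNSSEC chain
    if not (answer.response.flags & dns.flags.AD):
        raise RuntimeError("answer was not DNSSEC-validated")
    return [rr.to_text() for rr in answer]

print(fetch_validated("example.com"))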
Does DNSSEC have any other uses? OK, it would be nice to know that the A record you just got back corresponds to the server you were looking for, but if you trust a connection just on the basis that you used the right address, you are dead meat - you’ll need some key checking on top of it (for example, by using TLS) to avoid attacks by evil proxies (such as rogue wifi hotspots) or routing attacks and so forth. For me, the real value in DNSSEC is cryptographic key distribution.