Monday, November 15, 2010

Are cyberattacks an act of war?

An interesting question is what constitutes "cyberwar," that is, war in cyberspace. It wasn't well defined a few years ago, and it's still not well defined today.

I was reading Cyberdeterrence and Cyberwar by Martin C. Libicki of RAND today. It's a long document and I've only made it through the first dozen or so pages at this point, but he makes a really interesting point (page xvii) about the role of cyberdeterrence in preventing cyberwar:
Might retaliation send the wrong message? Most of the critical U.S. infrastructure is private. An explicit deterrence policy may frame cyberattacks as acts of war, which would indemnify infrastructure owners from third-party liability, thereby reducing their incentive to invest in cybersecurity.
In other words, if cyberattacks can be considered acts of war, would that trigger the act-of-war exclusion common in many insurance policies, allowing parties to escape liability for damage caused by a cyberattack by framing it as an act of (cyber)war?

Your money gets stolen from a bank: sorry, act of cyberwar.

House burned down by a virus attacking your smart meter: sorry, act of cyberwar.

However, the courts have decided 9/11 and similar acts of terrorism are not acts of war:
the courts have consistently held that a “war” within the meaning of an “act of war” exclusion can only exist as between two sovereign or quasi-sovereign governmental entities.
So, despite rhetoric about "war" in various forms, courts have set a pretty high bar for use of the term. Until at least one country explicitly declares cyberwar on another, I doubt we'll see the term hold up in court.

Friday, November 12, 2010

Why security is not THE goal: TSA, built to fail.

There is an old joke in computer security:
Q: How do I make my computer secure?
A: Lock it in a safe and put it at the bottom of the ocean.
The joke being that you've made the computer really safe, but also completely unusable.

But this joke has a good lesson, and that is: Security is never THE goal.

It is a goal, one of many. But it's never the only thing you're trying to accomplish. There are always other goals, e.g. making something useful, performant, appealing to the senses, or simply available to enjoy.

Security is always a trade-off with these other goals. As Bruce Schneier puts it very well in his talk (worth watching), the question is not "will it make us safer?" but "was it worth the trade-off?"

Following that logic, I've always thought that making security the goal of a single organization, or of a group within an organization ("the security team"), was a bad idea because it lets everyone else off the hook for security: they can assume "the security team has it" and ignore it.

But I've realized lately there is an even stronger reason why it is a bad idea: any organization or team whose sole focus is security will, by the nature of doing its job, continuously increase security without regard for anything else.

And, to my point, one of these other things is protecting our civil liberties.

Like many others, I'm very unhappy with the TSA's new body scanners. Their policies to this point have been silly and annoying, but this is now crossing a line, in the opinion of many, myself included, from annoying to violating. (The no-fly list has arguably done so as well for years now.)

How did we get into this mess?

Thinking about this, I believe it's because we've created in the TSA an organization whose sole goal is security. Heck, it's one third of their name and it's baked into their mission:
The Transportation Security Administration protects the Nation’s transportation systems to ensure freedom of movement for people and commerce.
First and foremost is "protect the transportation systems." They started in the right direction with "ensure freedom of movement," but that stops far short of ensuring civil liberties, dignity, and privacy.

So fundamentally, "we've" created an organization that only cares about security. Back to Bruce's point: they only ask whether something will make things more secure, not whether it's a good trade-off against privacy, civil liberties, economics, or even whether it's just silly. If it increases security, they've done their job.

Add to that the lack of any real oversight (Congress is not going to risk looking weak on security), and we've created a department whose mission is to protect the transportation system to a greater and greater degree, with no constraints.

And it will keep protecting that system more and more until we're all flying locked in safes, or worse.

Thursday, October 21, 2010

On the Government's request for more wiretapping regulation

The news that federal law enforcement wants new regulations to make wiretapping easier is about a month old now, and there are a number of pieces explaining why it's a bad idea, e.g. EFF, Steve Bellovin.

With my engineer's hat on, having designed secure systems, I can say this regulation would be a nightmare. Adding the ability to wiretap a secure system introduces so much complexity that it would have a huge negative impact on the security of the system.

Every secure system can be used for good and bad. Yes, there will always be bad guys using them, but there will always be many more good guys using them for good purposes, and if we weaken the systems we hurt the good guys more than the bad. Usually much more, because good guys obey the law, while bad guys don't.

Installing a backdoor in a system reminds me of the online poker scandal a couple of years ago, where a backdoor built into the online poker software for development purposes was used for cheating. I grant that the technical details in that case are largely unknown and there likely could have been better security, but it's an example of how such backdoors get used: if there is something of value to be gained and a backdoor is there, bad guys will figure out how to use it.
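The details of the poker case aren't public, so the sketch below is purely hypothetical, but it illustrates the general failure mode: a card-dealing routine with a "development" hook that exposes its RNG seed. Anyone who sees the leaked seed can reconstruct the entire deck, and therefore everyone's cards.

```python
import random

def shuffle_deck(seed):
    """Shuffle a 52-card deck deterministically from a seed."""
    deck = [(rank, suit) for suit in "SHDC" for rank in "23456789TJQKA"]
    rng = random.Random(seed)
    rng.shuffle(deck)
    return deck

def deal_hand(seed):
    """Deal two hole cards -- but a debugging backdoor leaks the seed."""
    deck = shuffle_deck(seed)
    return {"hole_cards": deck[:2], "debug_seed": seed}  # the leak

def cheat(leaked_seed):
    """An attacker with the leaked seed reconstructs the whole deck."""
    return shuffle_deck(leaked_seed)

hand = deal_hand(seed=12345)
predicted_deck = cheat(hand["debug_seed"])
# The attacker now knows every card, including the opponent's hand.
```

The hook may have been harmless during development, but once something of value (the deck) is reachable through it, it becomes a cheating tool.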

Monday, September 20, 2010

Interpol scammed by FaceBook: A lesson in identity providers

This is just embarrassing. 

[Interpol] Security chief Ronald K. Noble revealed that two fake accounts were created in his name and used to find the details of highly-dangerous criminals.


It is believed the cyber-criminals created Facebook profiles claiming to be Mr Noble. From there they gathered sensitive information about the suspects.

The lesson to be learned here is that FaceBook is not a trustworthy identity provider. Just because someone claims an identity on FaceBook doesn't mean a thing, as anyone can create an account and say they are whoever they want to be. And really there isn't a lot FaceBook can do about it; vetting 500 million people from around the world isn't practical.

Don't get me wrong, FaceBook is still useful for all sorts of stuff, but exchanging sensitive law enforcement information isn't one of them, and if you are responsible for taking care of that sort of information you should know better.

Tuesday, August 24, 2010

Congratulations to NCSA and ICSI on Bro award

Congratulations to Adam Slagell and the rest of my old team at NCSA on their award from the National Science Foundation to work with the International Computer Science Institute on improving the performance and overall quality of the Bro intrusion detection software. Bro is a very important and widely-used piece of network intrusion detection software, and I'm excited to see NCSA involved with furthering its success.

Software Assurance Maturity Model (SAMM)

I just finished a (quick) read of the Software Assurance Maturity Model (SAMM). SAMM is a product of OWASP and is a freely-available model for software security. The specific version I read was the latest at this time, which is 1.0.

Frankly, I was very impressed with the document. It is professionally laid out and obviously a lot of work has gone into not only the content but the presentation. One thing that struck me as a nice touch was the three "maps" (my term) through the document on page 5, which show you which sections to read or skim to accomplish specific goals. It's a 96-page document, but it is organized nicely so you can skim it quickly and find specific sections easily.

The document defines what seems to be the most thorough model I've come across. It starts with four business functions: Governance, Construction, Verification and Deployment. Under each of these it defines three security practices, and then three levels of maturity for each practice. It then goes into depth on what each level of each practice entails and what benefits result. While this seems like a lot, I think it's always good to know what you don't know, and a complete model helps do that. The different levels would be useful for laying out a roadmap for getting to whatever level is appropriate.
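The model's shape lends itself to exactly that roadmap exercise. A minimal sketch: the four business functions are SAMM's own, but the practice names below are placeholders (not SAMM's actual practices), and the gap-listing function is just an illustration of how assessed maturity levels against a target produce a to-do list.

```python
# Four business functions (from SAMM), each with three security
# practices. Practice names here are placeholders, not SAMM's own.
model = {
    "Governance":   ["G-practice 1", "G-practice 2", "G-practice 3"],
    "Construction": ["C-practice 1", "C-practice 2", "C-practice 3"],
    "Verification": ["V-practice 1", "V-practice 2", "V-practice 3"],
    "Deployment":   ["D-practice 1", "D-practice 2", "D-practice 3"],
}

def roadmap(assessed, target):
    """List every practice whose assessed maturity (0-3) is below target."""
    return sorted(p for p, level in assessed.items() if level < target)

# Example assessment: everything at level 0 except one practice.
assessed = {p: 0 for practices in model.values() for p in practices}
assessed["G-practice 1"] = 2
gaps = roadmap(assessed, target=2)  # the other 11 practices need work
```

Trivial as it is, this is the value of a complete model: the gaps fall out mechanically once you've honestly assessed where you stand.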

It concludes with a case study giving a narrative of a company (VirtualWare, which seems to be modeled on a real company with a similar name) adopting a software assurance program for a new web-based product. The company starts with no assurance program and the case study discusses a four phase implementation strategy.

Altogether, a good document. Worth a read for anyone interested in software assurance, and certainly something to consider when developing an assurance program.

Saturday, July 3, 2010

The National Strategy for Trusted Identities in Cyberspace

The White House announced a draft National Strategy for Trusted Identities in Cyberspace (NS-TIC). I've seen some strong negative takes on the government getting involved in this aspect of securing cyberspace (e.g. Lauren Weinstein), but I'm cautiously optimistic.

First, as a vision document, NS-TIC is pretty good. I had no big issues with it and thought it touched on all the key problems that we have today. One small complaint is that it doesn't mention higher education as a player in this space - alas, my pond is not as big as I might hope.

But NS-TIC is just a vision document, without specifics. It does lay out some action items: the first, and biggest in my mind, being Action 1: "Designate a Federal Agency to Lead the Public/Private Sector Efforts Associated with Achieving the Goals of the Strategy." Which agency is chosen will have a big impact on how this progresses.

My hope is that the Government will serve two roles: as a key service provider that acts as a catalyst to bring the current mishmash of identity standards into a coherent whole, and as a coordinator for those standards.

With regards to the first role, what I mean is that the Government should be a provider of services using identities, not a provider of identities themselves. There seem to be enough companies who want to play the identity provider role (for a small sample, just click the 'Sign In' button on the NS-TIC web page). And having a range of companies do it fosters some competition and privacy in its own right, as opposed to having a single source of identities, namely the Government. In an ideal world, some non-profits will emerge specifically to be identity providers, alongside for-profit companies, which will always have commercial tensions.

In terms of the latter role of coordination, I think the National Institute of Standards and Technology (NIST) would be the best choice of agency. NIST has done a good job of bringing us key cryptographic standards such as AES, and has already been serving, in a small way, to drive a community around identities by participating in activities such as the Symposium on Identity and Trust on the Internet. I note they were also selected to coordinate the new cybersecurity education initiative, so I'm hopeful about this.

The worst-case scenario is that the National Security Agency is selected as the agency. Sociologically, this would be a non-starter, as no one would trust the process or results. Objectively, being in charge of both securing cyberspace and spying in cyberspace is a conflict of interest for the NSA.

In the middle would be the Department of Homeland Security. At least the conflict of interest wouldn't be as overt a problem (I suspect many would argue this, and I have to agree it would still be a problem to some degree), but, frankly, DHS just hasn't demonstrated the ability to handle something of a technical nature such as this, and there would be huge skepticism in the community about their ability to lead. I think other agencies that might be selected (Commerce perhaps?) fall into this same boat in my mind.

So, I'm cautiously optimistic. I'm glad to see cyber identities getting this attention. If this vision gets implemented in the right way, with NIST leading, the government acting as a service provider rather than an identity provider, and the effort helping to coalesce all the competing standards, it will be a good thing.

Added 7/12, other comments on NS-TIC:

Friday, July 2, 2010

Paper on TeraGrid federated identity presented

My long-time colleague Jim Basney recently presented our paper (written with Terry Fleury) on "Federated Login to TeraGrid" at the 9th Symposium on Identity and Trust on the Internet.

This paper represents a culmination of years of work on providing interoperability between InCommon and high-performance computers and cyberinfrastructure such as the TeraGrid. It also led to the CILogon service, which will provide this interoperability for infrastructure beyond TeraGrid. This infrastructure will allow scientists and other users of high-performance computing to access that infrastructure using their existing campus logins instead of having to get new passwords (or other credentials). This is an important step towards truly integrated computing.

The paper itself (pdf). Jim's presentation (pdf). Jim's poster (pdf). (Many thanks to Jim for making the presentation.)

Updated 8/31/2010: I'm happy to say our paper received the Best Paper award in the Science Gateway Track of the program. I'm very honored.

Tuesday, June 22, 2010

White paper: Open Source Vulnerability Response

I've been working on a white paper on vulnerability response for open source projects. A few colleagues have been good enough to give me comments and I figure it's ready for public scrutiny. Any feedback welcome.

Intelligence Squared US podcast on CyberWar Threat

The Intelligence Squared US podcast has a good episode on the question "Has the cyber war threat been grossly exaggerated?"

The debaters were all top notch: Marc Rotenberg (Executive Director of the Electronic Privacy Information Center), Bruce Schneier (Chief Security Officer of British Telecom), Jonathan Zittrain (Harvard Professor of Law) and Mike McConnell (former U.S. Director of National Intelligence).

The podcast is well worth listening to for a good overview of the issues involved in this complicated subject.

A couple things jumped out at me...

Bruce made the good point that the language matters. By even calling it cyber "war" as opposed to cyber "crime," we've made it a military issue as opposed to a police issue. Cyber crime is clearly an issue today, without a doubt, but we still don't have an example, I would argue, of a true military use of cyber offense - as Bruce put it, examples to date are "kids playing politics."

I think everyone agreed some sort of Government help is needed to secure cyberspace, but it's clear there is disagreement regarding what part of the government and in what form. Marc echoed concerns I think many have (including myself) about the NSA being put in charge. All other arguments aside, it's a conflict of interest for an organization interested in spying, and hence wanting to leverage weaknesses in cyberspace, to be put in charge of strengthening cyberspace (for an example, see the Clipper Chip).

Jonathan Zittrain also made a good point that if we want to do something, it's better to do it now, with relatively calmer heads, than immediately after a cyber September 11th equivalent.

Again, well worth the hour listening to.

Friday, June 18, 2010

My take on Google's wireless capture oops...

Regarding Google's wireless capture oops - in a nutshell, Google admitted that while gathering information about the geolocations of WiFi access points (to allow for location detection by non-GPS devices), they accidentally captured small amounts of packet data from unencrypted wireless networks. Google's story is that an unintentional configuration of third-party software they were using caused it to capture more than they needed.

I believe Google that the over-collection was unintentional. I doubt anything interesting was even captured given the nature of the captured data and I can't imagine any reasonable motivation.

Unless someone can prove actual damages, which I highly doubt, I think this is being blown way out of proportion and I hope any lawsuit is thrown out.

So what should be taken away from this?

A couple of weeks ago I was talking to someone from one of the big accounting firms whose job it is to audit online advertising (among other things). What that means is that they audit companies who sell online advertisements, to accredit that when someone buys an ad with certain parameters, they get what they pay for.

It seems reasonable that collection of public data should have the same scrutiny. When a company collects data about the public, they should have a policy about what they collect, what they do with it, who gets access, and how long they store it - and they should have audits throughout the process to make sure they are actually doing what their policy says. If Google had done this, defining what data they were collecting and auditing the collected data early in the process, the audit should have caught the fact that they were over-collecting, and this would never have happened.
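Such an audit can even be partly mechanical. The sketch below is hypothetical (the field names are invented for illustration, not taken from Google's actual collection): a declared policy lists the fields the company says it collects, and an audit pass flags any record carrying fields beyond that policy, i.e. over-collection.

```python
# Hypothetical declared collection policy: the only fields the
# company says it records about each observed access point.
POLICY_FIELDS = {"bssid", "ssid", "channel", "signal_strength", "gps"}

def audit(records, allowed=POLICY_FIELDS):
    """Return (record_index, extra_fields) for every record that
    contains data beyond what the declared policy permits."""
    findings = []
    for i, rec in enumerate(records):
        extra = set(rec) - allowed       # fields the policy never mentions
        if extra:
            findings.append((i, sorted(extra)))
    return findings

# The second record carries a packet payload -- over-collection
# that an early audit would have flagged.
sample = [
    {"bssid": "00:11:22:33:44:55", "ssid": "cafe", "gps": (40.1, -88.2)},
    {"bssid": "66:77:88:99:aa:bb", "ssid": "home", "payload": b"GET /"},
]
findings = audit(sample)
```

An automated check like this run against the first day's data would have surfaced the extra field immediately, long before any complaint or lawsuit.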

Next time it might be data that actually matters.

Edited to add: Interesting post by Lauren Weinstein on the matter.

Tuesday, June 8, 2010

InCommon Certificate Service

The InCommon Federation recently announced their offering of a new X.509 certificate service.

Personally, I think this is a great step in the right direction for the U.S. higher education and research community, who need good identity services for collaboration. InCommon has served as a focus point for Shibboleth and SAML-based services, and I think they are a good organization to expand to take on other identity technologies such as X.509. My personal view is that we'll never have one identity technology and we'll always be dealing with a mix - SAML, OpenID, and X.509 being the obvious big three (with Kerberos and SSH keys having their niches, and of course the ubiquitous usernames and passwords).

A good question is "what does this mean for the community served by the International Grid Trust Federation (IGTF)?" The IGTF currently serves as an accreditation body for the use of X.509 in most "Grid" projects. If the InCommon certificate service takes off (which I'm guessing it will), the pressure to use its certificates for those Grid projects will be strong, and it could become the 800-pound gorilla in the room.

IGTF has done some good work defining standards for X.509 usage, and I think this is a good opportunity for them to collaborate with InCommon in bringing that work to this new service. As usual, the challenge will be getting busy people to pay attention and talk.

In the meantime, I expect translation services (which handle both technical and policy translation) to continue to play a significant role.