Bruce Schneier

Schneier on Security

A blog covering security and security technology.

Two Security Camera Studies

From San Francisco:

San Francisco's Community Safety Camera Program was launched in late 2005 with the dual goals of fighting crime and providing police investigators with a retroactive investigatory tool. The program placed more than 70 non-monitored cameras in mainly high-crime areas throughout the city. This report released today (January 9, 2009) consists of a multi-disciplinary collaboration examining the program's technical aspects, management and goals, and policy components, as well as a quasi-experimental statistical evaluation of crime reports in order to provide a comprehensive evaluation of the program's effectiveness. The results find that while the program did result in a 20% reduction in property crime within the view of the cameras, other forms of crime were not affected, including violent crime, one of the primary targets of the program.

From the UK:

The first study of its kind into the effectiveness of surveillance cameras revealed that almost every Scotland Yard murder inquiry uses their footage as evidence.

In 90 murder cases over a one year period, CCTV was used in 86 investigations, and senior officers said it helped to solve 65 cases by capturing the murder itself on film, or tracking the movements of the suspects before or after an attack.

In a third of the cases a good quality still image was taken from the footage from which witnesses identified the killer.

My own writing on security cameras is here. The question isn't whether they're useful or not, but whether their benefits are worth the costs.

Posted on January 13, 2009 at 6:58 AM


Shaping the Obama Administration's Counterterrorism Strategy

I'm at a two-day conference: Shaping the Obama Administration's Counterterrorism Strategy, sponsored by the Cato Institute in Washington, DC. It's sold out, but you can watch or listen to the event live on the Internet. I'll be on a panel tomorrow at 9:00 AM.

I've been told that there's a lively conversation about the conference on Twitter, but -- as I have previously said -- I don't Twitter.

Posted on January 12, 2009 at 12:44 PM


Bad Password Security at Twitter

Twitter fell to a dictionary attack because the site allowed unlimited failed login attempts:

Cracking the site was easy, because Twitter allowed an unlimited number of rapid-fire log-in attempts.

Coding Horror has more, but -- come on, people -- this is basic stuff.
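
The fix is equally basic: throttle or lock out repeated failures. Below is a minimal sketch of per-account throttling with exponential backoff -- the thresholds, storage, and function names are my own illustrative assumptions, not Twitter's actual code.

    import time
    from collections import defaultdict

    MAX_FAILURES = 5          # failures allowed before lockout kicks in
    BASE_DELAY_SECONDS = 30   # lockout doubles with each additional failure

    _failures = defaultdict(lambda: {"count": 0, "locked_until": 0.0})

    def check_password(username, password):
        """Stand-in for the real credential check."""
        return False  # always fails here, just for illustration

    def attempt_login(username, password):
        record = _failures[username]
        now = time.time()
        if now < record["locked_until"]:
            return "locked"              # refuse without even checking the password
        if check_password(username, password):
            record["count"] = 0          # reset the counter on success
            return "ok"
        record["count"] += 1
        if record["count"] >= MAX_FAILURES:
            # exponential backoff: each extra failure doubles the lockout window
            delay = BASE_DELAY_SECONDS * 2 ** (record["count"] - MAX_FAILURES)
            record["locked_until"] = now + delay
        return "failed"

With even a modest lockout like this, a dictionary attack that depends on thousands of rapid-fire guesses stops being practical.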

Posted on January 12, 2009 at 6:48 AM


DHS's Files on Travelers

This is interesting:

I had been curious about what's in my travel dossier, so I made a Freedom of Information Act (FOIA) request for a copy. I'm posting here a few sample pages of what officials sent me.

My biggest surprise was that the Internet Protocol (I.P.) address of the computer used to buy my tickets via a Web agency was noted. On the first document image posted here, I've circled in red the I.P. address of the computer used to buy my pair of airline tickets.

[...]

The rest of my file contained details about my ticketed itineraries, the amount I paid for tickets, and the airports I passed through overseas. My credit card number was not listed, nor were any hotels I've visited. In two cases, the basic identifying information about my traveling companion (whose ticket was part of the same purchase as mine) was included in the file. Perhaps that information was included by mistake.

Posted on January 12, 2009 at 5:15 AM


Movie-Plot Threat: Terrorists Using Insects

Fear sells books:

Terrorists could easily contrive an "insect-based" weapon to import an exotic disease, according to an entomologist who's promoting a book on the subject.

Posted on January 11, 2009 at 12:47 PM


Friday Squid Blogging: Squid Hats

Awesome.

Posted on January 9, 2009 at 4:50 PM


Friday Squid Blogging: Bizarre Squid Reproductive Habits

Lots of them:

Hoving investigated the reproductive techniques of no fewer than ten different squids and related cuttlefish -- from the twelve-metre long giant squid to a mini-squid of no more than twenty-five millimetres in length. Along the way he made a number of remarkable discoveries. Hoving: "Reproduction is no fun if you're a squid. With one species, the Taningia danae, I discovered that the males give the females cuts of at least 5 centimetres deep in their necks with their beaks or hooks -- they don't have suction pads. They then insert their packets of sperm, also called spermatophores, into the cuts."

Posted on January 9, 2009 at 4:06 PM


Impersonation

Impersonation isn't new. In 1556, a Frenchman was executed for impersonating Martin Guerre and this week hackers impersonated Barack Obama on Twitter. It's not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don't know, we interact with people through "narrow" communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.

Traditional impersonation involves people fooling people. It's still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.

These tricks work because we all regularly interact with people we don't know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That's just someone with a badge or a uniform. But badges and ID cards only help if you know how to verify one. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman's badge from a forged one?

Still, it's human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a Web site, we use the professionalism of the page to judge whether or not it's really legitimate -- never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.

Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company's tech support from a hacker trying to break into your network -- or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.

These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with -- and I'm not making this up -- a photograph of a face held in front of their own. Even the most bored policeman wouldn't fall for any of those tricks.

This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn't know any better.

Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as a faxed signature works, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.

This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn't always better. Banks make this trade-off when they don't bother authenticating signatures on checks under amounts like $25,000; it's cheaper to deal with fraud after the fact. Web sites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don't bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.
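
A back-of-the-envelope calculation shows why skipping verification on smaller checks can be the cheaper choice. The numbers below are entirely hypothetical, invented only to illustrate the shape of the trade-off, not taken from any bank:

    # All figures are hypothetical, chosen only to illustrate the trade-off.
    checks_per_year = 10_000_000
    review_cost_per_check = 0.50        # paying someone to verify each signature
    fraud_rate = 1 / 100_000            # fraction of checks that are forged
    average_fraud_loss = 2_000.00       # dollars lost per forged check

    cost_of_verifying_everything = checks_per_year * review_cost_per_check
    cost_of_absorbing_the_fraud = checks_per_year * fraud_rate * average_fraud_loss

    print(cost_of_verifying_everything)  # 5,000,000.0
    print(cost_of_absorbing_the_fraud)   # 200,000.0

Under those made-up numbers, verifying everything costs twenty-five times more than simply absorbing the fraud -- which is exactly the calculation the bank is making.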

Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than denying legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better not to launch the missiles.

Decentralized authentication systems work better than centralized ones. Open your wallet, and you'll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver's license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn't give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.

Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That's why all of a corporation's assets and information aren't available to anyone who can bluff his way into the corporate offices. That's why credit card companies have expert systems analyzing suspicious spending patterns. And it's why identity theft won't be solved by making personal information harder to steal.
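
As a toy illustration of that last layer, here is the kind of rule a card issuer's fraud engine might apply after authentication has already succeeded. The rules and thresholds are invented for this sketch; real systems are far more sophisticated:

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        amount: float
        country: str
        merchant_category: str

    def flag_suspicious(history, new):
        """Toy rules: flag spending that doesn't fit the cardholder's history."""
        usual_countries = {t.country for t in history}
        typical_max = max((t.amount for t in history), default=0.0)
        if new.country not in usual_countries and new.amount > 3 * typical_max:
            return True    # unusual place *and* unusually large purchase
        if new.merchant_category == "wire_transfer" and new.amount > 5_000:
            return True
        return False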

We can reduce the risk of impersonation, but it will always be with us; technology cannot "solve" it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security and when Barack Obama calls to congratulate you on your reelection, you won't believe it's him.

This essay originally appeared in The Wall Street Journal.

Posted on January 9, 2009 at 2:04 PM


Interview with Me

I was interviewed for CSO Magazine.

Posted on January 9, 2009 at 1:04 PM


Allocating Resources: Financial Fraud vs. Terrorism

Interesting trade-off:

The FBI has been forced to transfer agents from its counter-terrorism divisions to work on Bernard Madoff's alleged $50 billion fraud scheme as victims of the biggest scam in the world continue to emerge.

The Freakonomics blog discusses this:

This might lead you to ask an obvious counter-question: Has the anti-terror enforcement since 9/11 in the U.S. helped fuel the financial meltdown? That is, has the diversion of resources, personnel, and mindshare toward preventing future terrorist attacks -- including, you'd have to say, the wars in Afghanistan and Iraq -- contributed to a sloppy stewardship of the financial industry?

It quotes a New York Times article:

Federal officials are bringing far fewer prosecutions as a result of fraudulent stock schemes than they did eight years ago, according to new data, raising further questions about whether the Bush administration has been too lax in policing Wall Street.

Legal and financial experts say that a loosening of enforcement measures, cutbacks in staffing at the Securities and Exchange Commission, and a shift in resources toward terrorism at the F.B.I. have combined to make the federal government something of a paper tiger in investigating securities crimes.

We've seen this problem over and over again when it comes to counterterrorism: in an effort to defend against the rare threats, we make ourselves more vulnerable to the common threats.

Posted on January 9, 2009 at 6:54 AM


Biometrics

Biometrics may seem new, but they're the oldest form of identification. Tigers recognize each other's scent; penguins recognize calls. Humans recognize each other by sight from across the room, voices on the phone, signatures on contracts and photographs on driver's licenses. Fingerprints have been used to identify people at crime scenes for more than 100 years.

What is new about biometrics is that computers are now doing the recognizing: thumbprints, retinal scans, voiceprints, and typing patterns. There's a lot of technology involved here, in trying to limit both false positives (someone else being mistakenly recognized as you) and false negatives (you being mistakenly not recognized). Generally, a system can choose to have fewer of one or the other; fewer of both is very hard.
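
A small sketch makes the trade-off concrete: the verifier compares a match score against a threshold, and moving that threshold trades one kind of error for the other. The scores and threshold values here are made up purely for illustration.

    def false_positive_rate(impostor_scores, threshold):
        """Fraction of impostors accepted (their score clears the threshold)."""
        return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

    def false_negative_rate(genuine_scores, threshold):
        """Fraction of genuine users rejected (their score falls below it)."""
        return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

    impostors = [0.10, 0.25, 0.40, 0.55, 0.62]   # hypothetical match scores
    genuines  = [0.55, 0.65, 0.81, 0.90, 0.95]

    for threshold in (0.5, 0.6, 0.7):
        print(threshold,
              false_positive_rate(impostors, threshold),
              false_negative_rate(genuines, threshold))
    # Raising the threshold cuts false positives but raises false negatives.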

Biometrics can vastly improve security, especially when paired with another form of authentication such as passwords. But it's important to understand their limitations as well as their strengths. On the strength side, biometrics are hard to forge. It's hard to affix a fake fingerprint to your finger or make your retina look like someone else's. Some people can mimic voices, and make-up artists can change people's faces, but these are specialized skills.

On the other hand, biometrics are easy to steal. You leave your fingerprints everywhere you touch, your retinal scan everywhere you look. Regularly, hackers have copied the prints of officials from objects they've touched, and posted them on the Internet. We haven't yet had an example of a large biometric database being hacked into, but the possibility is there. Biometrics are unique identifiers, but they're not secrets.

And a stolen biometric can fool some systems. It can be as easy as cutting out a signature, pasting it onto a contract, and then faxing the page to someone. The person on the other end doesn't know that the signature isn't valid because he didn't see it affixed to the page. Remote logins by fingerprint fail in the same way. If there's no way to verify that the print came from an actual reader rather than from a stored computer file, the system is much less secure.

A more secure approach is to use a fingerprint to unlock your mobile phone or computer. Because there is a trusted path from the fingerprint reader to the stored fingerprint it's compared against, an attacker can't inject a previously stored print as easily as he can cut and paste a signature. A photo on an ID card works the same way: the verifier can compare the face in front of him with the face on the card.
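
One way to picture the trusted path is that the reader signs each capture over a fresh challenge from the verifier, so a previously recorded print can't simply be replayed. This is only a sketch under that assumption -- the names and structure are invented, and real biometric matching is fuzzier than an equality check:

    import hashlib, hmac, os

    SENSOR_KEY = os.urandom(32)   # key shared by the tamper-resistant reader and verifier

    def sensor_capture(template, nonce):
        """What the reader does at capture time: sign the reading over the challenge."""
        tag = hmac.new(SENSOR_KEY, nonce + template, hashlib.sha256).digest()
        return template, tag

    def verifier_accepts(template, tag, nonce, enrolled):
        expected = hmac.new(SENSOR_KEY, nonce + template, hashlib.sha256).digest()
        fresh = hmac.compare_digest(tag, expected)
        return fresh and template == enrolled   # real matching is fuzzier than ==

    enrolled = b"alice-template"

    nonce = os.urandom(16)                       # fresh challenge for this attempt
    template, tag = sensor_capture(enrolled, nonce)
    print(verifier_accepts(template, tag, nonce, enrolled))      # True: live capture

    new_nonce = os.urandom(16)                   # next attempt gets a new challenge
    print(verifier_accepts(enrolled, tag, new_nonce, enrolled))  # False: replayed tag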

Fingerprints on ID cards are more problematic, because the attacker can try to fool the fingerprint reader. Researchers have made false fingers out of rubber or glycerin. Manufacturers have responded by building readers that also detect pores or a pulse.

The lesson is that biometrics work best if the system can verify that the biometric came from the person at the time of verification. The biometric identification system at the gates of the CIA headquarters works because there's a guard with a large gun making sure no one is trying to fool the system.

Of course, not all systems need that level of security. At Counterpane, the security company I founded, we installed hand geometry readers at the access doors to the operations center. Hand geometry is a hard biometric to copy, and the system was closed and didn't allow electronic forgeries. It worked very well.

One more problem with biometrics: they don't fail well. Passwords can be changed, but if someone copies your thumbprint, you're out of luck: you can't update your thumb. Passwords can be backed up, but if you alter your thumbprint in an accident, you're stuck. The failures don't have to be this spectacular: a voice print reader might not recognize someone with a sore throat, or a fingerprint reader might fail outside in freezing weather. Biometric systems need to be analyzed in light of these possibilities.

Biometrics are easy, convenient, and when used properly, very secure; they're just not a panacea. Understanding how they work and fail is critical to understanding when they improve security and when they don't.

This essay originally appeared in the Guardian, and is an update of an essay I wrote in 1998.

Posted on January 8, 2009 at 12:53 PM


Reporting Unruly Football Fans via Text Message

This system is available in most NFL stadiums:

Fans still are urged to complain to an usher or call a security hotline in the stadium to report unruly behavior. But text-messaging lines -- typically advertised on stadium scoreboards and on signs where fans gather -- are aimed at allowing tipsters to surreptitiously alert security personnel via cellphone without getting involved with rowdies or missing part of a game.

As of this week, 29 of the NFL's 32 teams had installed a text-message line or telephone hotline. Three clubs have neither: the New Orleans Saints, St. Louis Rams and Tennessee Titans. Ahlerich says he will "strongly urge" all clubs to have text lines in place for the 2009 season. A text line will be available at the Super Bowl for the first time when this season's championship game is played at Tampa's Raymond James Stadium on Feb. 1.

"If there's someone around you that's just really ruining your day, now you don't have to sit there in silence," says Jeffrey Miller, the NFL's director of strategic security. "You can do this. It's very easy. It's quick. And you get an immediate response."

The article talks a lot about false alarms and prank calls, but -- in general -- this seems like a good use of technology.

Posted on January 8, 2009 at 6:44 AM

