On Brinksmanship

January 18, 2013 5 comments

Reality is what refuses to go away when you stop believing in it.

The reality – the ground truth – is that Aaron Swartz is dead.

Now what.

Brinksmanship is a terrible game that all too many systems evolve toward.  The suicide of Aaron Swartz is an awful outcome, an unfair outcome, a radically out of proportion outcome.   As in all negotiations to the brink, it represents a scenario in which all parties lose.

Aaron Swartz lost.  He paid with his life.  This is no victory for Carmen Ortiz, or Steve Heymann, or JSTOR, MIT, the United States Government, or society in general.  In brinksmanship, everybody loses.

Suicide is a horrendous act and an even worse threat.  But let us not pretend that a set of charges covering the majority of Aaron’s productive years is not also fundamentally noxious, with ultimately a deeply similar outcome.  Carmen Ortiz (and, presumably, Steve Heymann) are almost certainly telling the truth when they say they had no intention of demanding thirty years of imprisonment from Aaron.  This did not stop them from, in fact, demanding thirty years of imprisonment from Aaron.

Brinksmanship.  It’s just negotiation.  Nothing personal.

Let’s return to ground truth.  MIT was a mostly open network, and the content “stolen” by Aaron was itself mostly open.  You can make whatever legalistic argument you like; the reality is there simply wasn’t much offense taken to Aaron’s actions.  He wasn’t stealing credit card numbers, he wasn’t reading personal or professional emails, he wasn’t extracting design documents or military secrets.  These were academic papers he was ‘liberating’.

What he was, was easy to find.

I have been saying, for some time now, that we have three problems in computer security.  First, we can’t authenticate.  Second, we can’t write secure code.  Third, we can’t bust the bad guys.  What we’ve experienced here, is a failure of the third category.  Computer crime exists.  Somebody caused a huge amount of damage – and made a lot of money – with a Java exploit, and is going to get away with it.  That’s hard to accept.  Some of our rage from this ground truth is sublimated by blaming Oracle.  But some of it turns into pressure on prosecutors, to find somebody, anybody, who can be made an example of.

There are two arguments to be made now.  Perhaps prosecution by example is immoral – people should only be punished for their own crimes – in which case these crimes just weren’t offensive enough for the resources proposed (prison isn’t free for society).  Or perhaps prosecution by example is how the system works, don’t be naïve – well then.

Aaron Swartz’s antics were absolutely annoying to somebody at MIT and somebody at JSTOR.  (Apparently someone at PACER as well.)  That’s not good, but that’s not enough.  Nobody whom we actually have significant consensus for prosecuting models himself after Aaron Swartz and thinks “Man, if they go after him, they might go after me”.

The hard truth is that this should have gone away, quietly, ages ago.  Aaron should have received a restraining order to avoid MIT, or perhaps some sort of fine.  Instead, we have a death.  There will be consequences to that – should or should not doesn’t exist here, it is simply a statement of fact.  Reality is what refuses to go away, and this is the path by which brinksmanship is disincentivized.

My take on the situation is that we need a higher class of computer crime prosecution.  We, the computer community in general, must engage at a higher level – both in terms of legislation that captures our mores (and funds actual investigations – those things ain’t free!), and operational support that can provide a critical check on who is or isn’t punished for their deeds.  Aaron’s Law is an excellent start, and I support it strongly, but it excludes faux law rather than including reasoned policy.  We can do more.  I will do more.

The status quo is not sustainable, and has cost us a good friend.  It’s so out of control, so desperate to find somebody – anybody! – to take the fall for unpunished computer crime, that it’s almost entirely become about the raw mechanics of being able to locate and arrest the individual instead of about their actual actions.

Aaron Swartz should be alive today.  Carmen Ortiz and Steve Heymann should have been prosecuting somebody else.  They certainly should not have been applying a 60x multiple between the amount of time they wanted and the degree of threat they were issuing.  The system, in all of its brinksmanship, has failed.  It falls on us, all of us, to fix it.

Categories: Security

Actionable Intelligence: The Mouse That Squeaked

December 14, 2012 3 comments

[Obligatory disclosures — I’ve consulted for Microsoft, and had been doing some research on mouse events myself.]

So one of the more important aspects of security reporting is what I’ve been calling Actionable Intelligence. Specifically, when discussing a bug — and there are many, far more than are ever discovered, let alone disclosed — we have to ask:

What can an attacker do today, that he couldn’t do yesterday, for what class attacker, to what class victim?

Spider.io, a fraud analytics company, recently disclosed that under Internet Explorer, attackers can capture mouse movement events from outside an open window. What is the Actionable Intelligence here? It’s moderately tempting to reply: We have a profound new source of modern art.

[Image: a mouse-movement trace, rendered as accidental art]
(Credit: Anatoly Zenkov’s IOGraph tool)

I exaggerate, but not much. The truth is that there are simply not many situations where mouse movements are security sensitive. Keyboard events, of course, would be a different story — but mouse? As more than a few people have noted, they’d be more than happy to publish their full movement history for the past few years.

It is interesting to discuss the case of the “virtual keyboard”. There has been a movement (thankfully rare) to force credential input via mouse instead of keyboard, to stymie keyloggers. This presupposes a class of attacker that has access to keyboard events, but not mouse movements or screen content. No such class actually exists; the technique was never protecting much of anything in the first place. It’s just pain-in-the-butt-as-a-feature.  More precisely, it’s another example of Rick Wash’s profoundly interesting turn of phrase, Folk Security. Put simply, there is a belief that if something is hard for a legitimate user, it’s even harder for the hacker. Karmic security is (unfortunately) not a property of the universe.

(What about the attacker with an inline keylogger? Not only does he have physical access, he’s not actually constrained to just emulating a keyboard. He’s on the USB bus, he has many more interesting devices to spoof.)

That’s not to say Spider.io has not found a bug. Mouse events should only come from the web frame over which script has dominion, in much the same way CNN should not be receiving Image Load events from a tab open to Yahoo. But the story of the last decade is that bugs are not actually rare, and that from time to time issues will be found in everything. We don’t need to have an outright panic when a small leak is found. The truth is, every remote code execution vulnerability can also capture full screen mouse motion. Every universal cross site scripting attack (in which CNN can inject code into a frame owned by Yahoo) can do the same, though perhaps only against other browser windows.

I would like to live in a world where this sort of very limited overextension of the web security model warrants a strong reaction. It is in fact nice that we do live in a world where browsers effectively expose the most nuanced and well-developed (if by fire) security model in all of software. Where else is the proper scope of mouse events even a comprehensible discussion?

(Note that it’s a meaningless concept to say that mouse events within the frame shouldn’t be capturable. Being able to “hover” on items is a core user interface element, particularly for the highly dynamic UI’s that Canvas and WebGL enable. The depth of damage one would have to inflict on the browser usability model, to ‘secure’ activity in what’s actually the legitimate realm of a page, would be profound. When suggesting defenses, one must consider whether the changes required to make them reparable under actual assault ruin the thing being defended in the first place. We can’t go off destroying villages in order to save them.)

So, in summary: Sure, there’s a bug here with these mouse events. I expect it will be fixed, like tens of thousands of others. But it’s not particularly significant.  What can an attacker do today, that he couldn’t do yesterday?  Not much, to not many.  Spider.io’s up to interesting stuff, but not really this.

Categories: Security

DakaRand 1.0: Revisiting Clock Drift For Entropy Generation

August 15, 2012 21 comments

“The generation of random numbers is too important to be left to chance.”
Robert R. Coveyou

“One out of 200 RSA keys in the field was badly generated as a result of standard dogma.  There’s a chance this might fail less.”
–Me

[Note:  There are times I write things with CIOs in mind.  This is not one of those times.]

So, I’ve been playing with userspace random number generation, as per Matt Blaze and D.P. Mitchell’s TrueRand from 1996.  (Important:  Matt Blaze has essentially disowned this approach, and seems to be honestly horrified that I’m revisiting it.)  The basic concept is that any system with two clocks has a hardware random number generator, since clocks jitter relative to one another based on physical properties, particularly when one is operating on a slow scale (like, say, a human hitting a keyboard) while another is operating on a fast scale (like a CPU counter cycling at nanosecond speeds).  Different tolerances on clocks mean more opportunities for unmodelable noise to enter the system.  And since the core lie of your computer is that it’s just one computer, as opposed to a small network of independent nodes running on their own time, there should be no shortage of bits to mine.

At least, that’s the theory.
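The theory in miniature: the sketch below, in Python for brevity, treats time.perf_counter_ns as the fast clock and the scheduler’s sleep as the slow one, and mines bits from the jitter between them.  It’s an illustration of the concept under those assumptions, not DakaRand’s actual code (which is C, with nine distinct generators).

    # Sketch only: harvest bits from jitter between the OS sleep timer
    # (slow clock) and the CPU's high-resolution counter (fast clock).
    import hashlib
    import time

    def harvest_jitter_bits(n_bits=256):
        bits = []
        prev = None
        while len(bits) < n_bits:           # loops until enough transitions arrive
            t0 = time.perf_counter_ns()
            time.sleep(0.001)               # nominally 1ms; never exactly 1ms
            delta = time.perf_counter_ns() - t0
            parity = bin(delta).count("1") & 1   # parity of the jittery sample
            # Von Neumann debias: only a 01 or 10 transition yields a bit
            if prev is not None and parity != prev:
                bits.append(parity)
            prev = parity
        return hashlib.sha256(bytes(bits)).digest()

    print(harvest_jitter_bits().hex())

Each pass costs about a millisecond, so 256 debiased bits arrive on the order of a second; whether those bits are actually unpredictable to an attacker is precisely the question the challenge below poses.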

As announced at Defcon 20 / Black Hat, here’s DakaRand 1.0.  Let me be the first to say, I don’t know that this works.  Let me also be the first to say, I don’t know that it doesn’t.  DakaRand is a collection of modes that tries to convert the difference between clocks into enough entropy that, whether or not it survives academic attack, I (as an actual guy who breaks stuff) would certainly be forced to go attack something else.

A proper post on DakaRand is reserved, I think, for when we have some idea that it actually works.  Details can be seen in the slides for the aforementioned talk; what I’d like to focus on now is recommendations for trying to break this code.  The short version:

1) Download DakaRand, untar, and run “sh build.sh”.
2) Run dakarand -v -d out.bin -m [0-8]
3) Predict out.bin, bit for bit, in less than 2^128 work effort, on practically any platform you desire with almost any level of active manipulation you wish to insert.

The slightly longer version:

  1. DakaRand essentially tries to force the attacker into having no better attack than brute force, and then tries to make that work effort at least 2^128.  As such, the code is split into generators that acquire bits, and then a masking sequence of SHA-256, Scrypt, and AES-256-CTR that expands those bits into however much is requested (a sketch follows this list).  (In the wake of Argyros and Kiayias’s excellent and underreported “I Forgot Your Password:  Randomness Attacks Against PHP Applications”, I think it’s time to deprecate all RNGs with invertible output.  At the point you’re asking whether an RNG should be predictable based on its history, you’ve already lost.)  The upshot of this is that the actual target for a break is not the direct output of DakaRand, but the input to the masking sequence.  Your goal is to show that you can predict this particular stream, with perfect accuracy, at less than 2^128 work effort.  Unless you think you can glean interesting information from the masking sequence (in which case, you have more interesting things to attack than my RNG), you’re stuck trying to design a model of the underlying clock jitter.
  2. There are nine generators in this initial release of DakaRand.  Seriously, they can’t all work.
  3. You control the platform.  Seriously — embedded, desktop, server, VM, whatever — it’s fair game.  About the only constraint I’ll add is that the device has to be powerful enough to run Linux.  Microcontrollers are about the only things in the world that do play the nanosecond accuracy game, so I’m much less confident against those.  But against anything ARM or larger, real-time operation is simply not a thing you get for free, and even when you pay dearly for it you’re still operating within tolerances far larger than DakaRand needs to mine a bit.  (Systems that are basically cycle-for-cycle emulators don’t count.  Put Bochs and your favorite ICE away.  Nice try though!)
  4. You seriously control the platform.  I’ve got no problem with you remotely spiking the CPU to 100%, sending arbitrary network traffic at whatever times you like, and so on.  The one constraint is that you can’t already have root — so, no physical access, and no injecting yourself into my gathering process.  It’s something of a special case if you’ve got non-root local code execution.  I’d be interested in such a break, but multitenancy is a lie and there are just so many interprocess leaks (like this if-it’s-so-obvious-why-didn’t-you-do-it example of cross-VM communication).
  5. Virtual machines get special rules:  You’re allowed to suspend/restore right up to the execution of DakaRand.  That is the point of atomicity.
  6. The code’s a bit hinky, what with globals and a horde of dependencies.  If you’d like to test on a platform that you just can’t get DakaRand to build on, that makes things more interesting, not less.  Email me.
  7. All data generated is mixed into the hash, but bits are “counted” only when Von Neumann debiasing works (see the sketch after this list).  Basically, generators return integers between 0 and 2^32-1.  Every integer is mixed into the keying hash (thus, you having to predict out.bin bit for bit).  However, each integer is also measured for the number of 1s it contains.  An even number yields a 0; an odd number, a 1.  Bits are only counted when two sequential numbers yield either a 10 or a 01, and as long as there are fewer than 256 bits counted, the generator will continue to be called.  So your attack needs to model the absolute integers returned (which isn’t so bad), the number of generator calls it takes for a Von Neumann transition to occur, and whether the transition is a 01 or a 10 (since I put that value into the hash too).
  8. I’ve got a default “gap” between generator probes of just 1000us — a millisecond.  This is probably not enough for all platforms — my assumption is that, if anything has to change, it’s that this gap has to become somewhat dynamic.
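As referenced in items 1 and 7, here’s a sketch of the debias-and-mask pipeline, in Python rather than DakaRand’s C.  The function name and parameter choices (the scrypt cost, the salt, the zero nonce) are mine for illustration, and the AES-CTR step assumes the third-party cryptography package.

    # Sketch: mix every generator sample into a keying hash, count bits via
    # Von Neumann transitions, then mask and expand with scrypt + AES-256-CTR.
    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def mask_and_expand(samples, n_bytes):
        h = hashlib.sha256()
        counted, prev = 0, None
        for s in samples:                       # s: 32-bit ints from a generator
            h.update(s.to_bytes(4, "little"))   # all data is mixed in
            parity = bin(s).count("1") & 1      # even number of 1s -> 0; odd -> 1
            if prev is not None and parity != prev:
                counted += 1                    # a 01 or 10 transition "counts"
            prev = parity
            if counted >= 256:
                break
        # Stretch the keying hash with scrypt, then expand via AES-256-CTR.
        # A fixed zero nonce is acceptable here: the key is used exactly once.
        key = hashlib.scrypt(h.digest(), salt=b"sketch", n=2**14, r=8, p=1, dklen=32)
        enc = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).encryptor()
        return enc.update(b"\x00" * n_bytes)

Note the two distinct roles: the hash sees everything (which is why out.bin must be predicted bit for bit), while the Von Neumann counter merely decides when enough has been seen.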

Have fun!  Remember, “it might fail somehow somewhere” just got trumped by “it actually did fail all over the place”, so how about we investigate a thing or two that we’re not so sure in advance will actually work?

(Side note:  A couple of other projects in this space:  Twuewand, from Ryan Finnie, has the chutzpah to be pure Perl.  And of course, Timer Entropyd, from Folkert Van Heusden.  Also, my recommendation to kernel developers is to do what I’m hearing they’re up to anyway, which is to monitor all the interrupts that hit the system on a nanosecond timescale.  Yep, that’s probably more than enough.)

Categories: Security

Black Ops 2012

August 6, 2012 3 comments

Here are my slides from Black Hat and Defcon for 2012.  Pile of heresies — should make for interesting discussion.  Here’s what we’ve got:

1) Generic timing attack defense through network interface jitter
2) Revisiting Random Number Generation through clock drift
3) Suppressing injection attacks by altering variable scope and per-character taint
4) Deployable mechanisms for detecting censorship, content alteration, and certificate replacement
5) Stateless TCP w/ payload retrieval

I hate saying “code to be released shortly”, but I want to post the slides and the code’s pretty hairy.  Email me if you want to test anything, particularly if you’d like to try to break this stuff or wrap it up for release.  I’ll also be at Toorcamp, if you want to chat there.

Categories: Security

RDP and the Critical Server Attack Surface

March 18, 2012 18 comments

MS12-020, a use-after-free discovered by Luigi Auriemma, is roiling the Information Security community something fierce. That’s somewhat to be expected — this is a genuinely nasty bug. But if there’s one thing that’s not acceptable, it’s the victim shaming.

As people who know me know well, nothing really gets my hackles up like blaming the victims of the cybersecurity crisis. “Who could possibly be so stupid as to put RDP on the open Internet?”, they ask. Well, here’s some actual data:

On 16-Mar-2012, I initiated a scan across approximately 8.3% of the Internet (300M IPs were probed; the scan is ongoing). 415K of ~300M IP addresses showed evidence of speaking the RDP protocol. (About twice as many had listeners on 3389/tcp — always be sure to speak a bit of the protocol before citing connectivity!)
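For the curious, here’s a minimal sketch of what “speaking a bit of the protocol” means: send an X.224 Connection Request carrying an RDP Negotiation Request, and look for a Connection Confirm rather than merely noting that 3389/tcp accepts connections.  The byte layout follows MS-RDPBCGR, but treat this as an illustration of the technique, not the scanner that produced these numbers.

    import socket

    RDP_PROBE = (
        b"\x03\x00\x00\x13"                  # TPKT header, total length 19
        b"\x0e\xe0\x00\x00\x00\x00\x00"      # X.224 Connection Request TPDU
        b"\x01\x00\x08\x00\x03\x00\x00\x00"  # RDP Negotiation Request (TLS|CredSSP)
    )

    def speaks_rdp(host, port=3389, timeout=3.0):
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(RDP_PROBE)
                resp = s.recv(19)
        except OSError:
            return False
        # TPKT version 3, followed by an X.224 Connection Confirm (code 0xd0)
        return len(resp) >= 6 and resp[0] == 3 and resp[5] & 0xf0 == 0xd0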

Extrapolating from this sample, we can estimate that there are approximately five million RDP endpoints on the Internet today.
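The arithmetic, made explicit (the ~3.6B figure for the routable IPv4 pool is my assumption for illustration):

    probed, total_ipv4 = 300e6, 3.6e9    # ~8.3% coverage
    rdp_seen = 415e3
    print(f"{rdp_seen / (probed / total_ipv4):,.0f}")  # ≈ 4,980,000 endpoints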

Now, some subset of these endpoints are patched, and some (very small) subset of these endpoints aren’t actually the Microsoft Terminal Services code at all. But it’s pretty clear that, yes, RDP is actually an enormously deployed service, across most networks in the world (21767 of 57344 /16s, at 8.3% coverage).

There’s something larger going on, and it’s the relevance of a bug on what might be called the Critical Server Attack Surface. Not all bugs are equally dangerous, because not all code is equally deployed. Some flaws are simply more accessible than others, and RDP — as the primary mechanism by which Windows systems are remotely administered — is a lot more accessible than a lot of people realized.

I think, if I had to enumerate the CSAS for the global Internet, it would look something like (in no particular order — thanks Chris Eng, The Grugq!):

  • HTTP (Apache, Apache2, IIS, maybe nginx)
  • Web Languages (ASPX, ASP, PHP, maybe some parts of Perl/Python. Maybe rails.)
  • TCP/IP (Windows 2000/2003/2008 Server, Linux, FreeBSD, IOS)
  • XML (libxml, MSXML3/4/6)
  • SSL (OpenSSL, CryptoAPI, maybe NSS)
  • SSH (OpenSSH, maybe dropbear)
  • telnet (bsd telnet, linux telnet) (not enough endpoints)
  • RDP (Terminal Services)
  • DNS (BIND9, maybe BIND8, MSDNS, Unbound, NSD)
  • SNMP
  • SMTP (Sendmail, Postfix, Exchange, Qmail, whatever GMail/Hotmail/Yahoo run)

I haven’t exactly figured out where the line is — certainly we might want to consider web frameworks like Drupal and WordPress, FTP daemons, printers, VoIP endpoints, and SMB daemons, not to mention the bouillabaisse that is the pitiful state of VPNs all the way into 2012 — but the combination of “unfirewalled from the Internet” and “more than 1M endpoints” is a decent rule of thumb. Where people are perhaps blind is in the dramatic underestimation of just how many Microsoft shops there really are out there.

We actually had the opposite situation a little while ago, with this widely discussed bug in telnet. Telnet was the old way Unix servers were maintained on the Internet. SSH rather completely supplanted it, however — the actual data revealed only 180K servers left, with but 20K even potentially vulnerable.

RDP’s just on a different scale.

I’ve got more to say about this, but for now it’s important to get these numbers out there. There’s a very good chance that your network is exposing some RDP surface. If you have any sort of crisis response policy, and you aren’t completely sure you’re safe from the RDP vulnerability, I advise you to invoke it as soon as possible.

(Disclosure: I do some work with Microsoft from time to time. This is obviously being written in my personal context.)

Categories: Security

Open For Review: Web Sites That Accept Security Research

February 26, 2012 11 comments

So one of the core aspects of my mostly-kidding-but-no-really White Hat Hacker Flowchart is that, if the target is a web page, and it’s not running on your server, you kind of need permission to actively probe for vulnerabilities.

Luckily, there are actually a decent number of sites that provide this permission.

Paypal
Facebook
37 Signals
Salesforce
Microsoft
Google
Twitter
Mozilla
UPDATE 1:
eBay
Adobe
UPDATE 2, courtesy of Neal Poole:
Reddit (this is particularly awesome)
GitHub
UPDATE 3:
Constant Contact

One could make the argument that you can detect who in the marketplace has a crack security team by who’s willing and able to commit the resources for an open vulnerability review policy.

Some smaller sites have also jumped on board (mostly absorbing and reiterating Salesforce’s policy — cool!):

Zeggio
Simplify, LLC
Team Unify
Skoodat
Relaso
Modus CSR
CloudNetz
UPDATE 2:
EMPTrust
Apriva

There are some interesting implications to all of this, but for now let’s just get the list out there. Feel free to post more in the comments!

Categories: Security

White Hat Hacker Flowchart

February 20, 2012 4 comments

For your consideration. Rather obviously not to be taken as legal advice. Seems to be roughly what’s evolved over the last decade.

Categories: lulz, Security