Source: https://rdist.root.org/2011/01/17/stuxnet-is-embarrassing-not-amazing/

January 17, 2011

Stuxnet is embarrassing, not amazing

As the New York Times posts yet another breathless story about Stuxnet, I’m surprised that no one has pointed out its obvious deficiencies. Everyone seems to be hyperventilating about its purported target (control systems, ostensibly for nuclear material production) and not the actual malware itself.

There’s a good reason for this. Rather than being proud of its stealth and targeting, the authors should be embarrassed at their amateur approach to hiding the payload. I really hope it wasn’t written by the USA because I’d like to think our elite cyberweapon developers at least know what Bulgarian teenagers did back in the early ’90s.

First, there appears to be no special obfuscation. Sure, there are your standard routines for hiding from AV tools, XOR masking, and installing a rootkit. But Stuxnet does no better at this than any other malware discovered last year. It does not use virtual machine-based obfuscation, novel techniques for anti-debugging, or anything else to make it different from the hundreds of malware samples found every day.

Second, the Stuxnet developers seem to be unaware of more advanced techniques for hiding their target. They use simple “if/then” range checks to identify Step 7 systems and their peripheral controllers. If this were some high-level government operation, I would hope they would know to use techniques like hash-and-decrypt or homomorphic encryption to hide the controller configuration the code is targeting, as well as its exact behavior once it infects those systems.

Core Labs published a piracy protection scheme including “secure triggers”: code that can be executed only when a particular configuration is present in the environment. One such approach is to encrypt your payload with a key that can only be derived on systems that have a particular configuration. Typically, you’d concatenate all the desired input parameters and hash them to derive the key for encrypting your payload. Then, you’d do the same thing on every system the code runs on. If any one of the parameters is off, even by one, the resulting key is useless and the code cannot be decrypted and executed.
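To make this concrete, here is a minimal sketch of such a trigger (hypothetical parameter names, with the real cipher replaced by a SHA-256 XOR keystream; nothing here is taken from the Stuxnet binary):

```python
# A minimal hash-and-decrypt "secure trigger" sketch (stdlib only).
import hashlib

MAGIC = b"PAYLOAD!"  # known-plaintext marker used to detect a correct key

def derive_key(params: dict) -> bytes:
    # Concatenate every environment parameter in a fixed order and hash.
    # If any one value differs on the host, the derived key is useless.
    material = b"|".join(
        k.encode() + b"=" + str(v).encode() for k, v in sorted(params.items())
    )
    return hashlib.sha256(material).digest()

def keystream(key: bytes, length: int) -> bytes:
    # Simple SHA-256 counter stream; a stand-in for a real cipher.
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt_payload(params: dict, payload: bytes) -> bytes:
    pt = MAGIC + payload
    return bytes(a ^ b for a, b in zip(pt, keystream(derive_key(params), len(pt))))

def try_decrypt(params: dict, blob: bytes):
    pt = bytes(a ^ b for a, b in zip(blob, keystream(derive_key(params), len(blob))))
    return pt[len(MAGIC):] if pt.startswith(MAGIC) else None  # else stay dormant

# Build time, with the target configuration known to the authors:
target = {"plc_type": "6ES7-315-2", "sdb_id": 0x0100CB2C, "num_drives": 33}
blob = encrypt_payload(target, b"...attack logic...")

# Run time on an infected host: only the exact configuration unlocks it.
assert try_decrypt(target, blob) == b"...attack logic..."
assert try_decrypt(dict(target, num_drives=32), blob) is None
```

An analyst pulling this apart sees the derivation logic and the encrypted blob, but learns nothing about which configuration unlocks it.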

This is secure except against a chosen-plaintext attack. In such an attack, the analyst repeatedly runs the payload decrypter on every possible combination of inputs, halting once the right configuration triggers the payload. However, if enough inputs are combined and their ranges are not too limited, such a brute-force attack becomes infeasible. If that were the case here, malware analysts could only say “here’s a worm that propagates to various systems, and we have not yet found out how to unlock its payload.”
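And the analyst’s side of it, a toy version of that brute force (the secret is shrunk to two bytes so the loop actually terminates; the arithmetic at the end shows why eight full-range bytes is hopeless):

```python
# Toy brute force against a secure trigger (stdlib only; all values made up).
import hashlib
from itertools import product

def derive_key(config: bytes) -> bytes:
    return hashlib.sha256(config).digest()

# The analyst knows a digest of the key (or a known-plaintext check) and
# must guess the configuration that produces it.
SECRET_CONFIG = b"\x03\x1f"                      # 2 unknown bytes: 2^16 space
KEY_CHECK = hashlib.sha256(derive_key(SECRET_CONFIG)).digest()

for tried, cand in enumerate(product(range(256), repeat=2), start=1):
    if hashlib.sha256(derive_key(bytes(cand))).digest() == KEY_CHECK:
        print(f"unlocked after {tried} guesses")  # feasible at 2^16
        break

# The same loop over 8 full-range bytes is 2^64 derivations; at a generous
# 10^9 guesses/second that is roughly 585 years, so the payload stays sealed.
print(2**64 / 1e9 / (3600 * 24 * 365), "years")
```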

Stuxnet doesn’t use any of these advanced features. Either the authors did not care if their payload was discovered by the general public, they weren’t aware of these techniques, or they had other limitations, such as time. The longer they remained undetected, the more systems could be attacked and the longer Stuxnet could continue evolving as a deployment platform for follow-on worms. So disregard for detection seems unlikely.

We’re left with the authors being run-of-the-mill or in a hurry. If the former, then it was likely this code was produced by a “Team B”. Such a group would be second-tier in their country, perhaps a military agency as opposed to NSA (or the equivalent in other countries). It could be a contractor or loosely-organized group of hackers.

However, I think the final explanation is most likely. Whoever developed the code was probably in a hurry and decided using more advanced hiding techniques wasn’t worth the development/testing cost. For future efforts, I’d like to suggest the authors invest in a few copies of Christian Collberg’s book. It’s excellent and could have bought them a few more months of obscurity.

84 Comments

  1. I will suggest a third alternative. The authors weighed the risk of not being successful vs. the risk of someone analyzing the worm. The latter was inevitable but the former would have been disastrous. Those protections would have only slowed down malware analysts. If this were normal malware, that would be the goal: exist for as long as possible without being detected. ‘Normal’ malware has a high tolerance for failure. In this case the goal appears to be ‘break some sensitive equipment before a particular deadline hits’, with a razor-thin margin for error. But your points are not lost, good post.

    Comment by None — January 17, 2011 @ 9:19 am

    • Yeah, that’s just another form of the “too little time” theory. The NYT claims they had a full duplicate test environment, so with enough time, they could reduce the likelihood of failure a lot.

      However, you completely miss my whole point if you think this would only slow down malware analysts. The way predicate encryption (secure triggers) work is that unless you can exactly replicate the target environment, at least in terms of the parameters that are inputs to the hash function, you can’t even decrypt the payload.

      The only ways to even get to the payload are:

      1. Isolate the payload decrypter and brute force all inputs until you get a valid key
      2. Set up a series of Step 7 configurations, hoping to accidentally trigger the payload (variant of #1)
      3. Invert SHA-256

      The #2 and #3 approaches are well outside the range of any malware analyst. Approach #1 (chosen plaintext attack) is the best approach. But if enough variables are present, the search space is too big and brute-force will fail.

      For example, if you can have 8 bytes of input parameters that all span the full range, then you have 2^64 possible inputs to the hash function. Increase that to 10 or more bytes and your malware analyst is stuck. Only if you had access to the factory and found the payload installed there could you begin analyzing it.

      Comment by Nate Lawson — January 17, 2011 @ 9:56 am

      • And clearly, it is impossible for the Iranians to fully replicate the target environment so they can’t get at the bytes. ;)

        Now let’s be serious: what exactly would’ve been gained by adding complexity to the infection step? Stuxnet’s PLC payload wasn’t properly analyzed even without obfuscation, and in the greater scheme of things, the fact that somebody could extract the payload was hardly important.

        Comment by Halvar Flake — January 17, 2011 @ 12:58 pm

      • I agree with ‘none’, and disagree that this is just a variant of ‘too little time’. It is more a specific third option of ‘keep it simple, stupid’ – if, indeed, the goal of the malware was to infect and change parameters in a specific Iranian plant, then there are already many things that must go right for it to work. Adding additional complexity to this, simply to allow the malware to exist longer, when that very complexity may prevent it from doing its job, goes against the design brief. For a ‘secure trigger’ system as you describe to be useful, you would need to have a large range of input configurations.

        You discuss 8 bytes, for 2^64 values, but how many systems out there would have a full range of 2^64 different configurations? If they don’t, if they have something more like 2^32 options all equally likely, then the malware examiner can brute force the payload.

        Even if 2^64 (or more) equally likely options exist, the malware programmer is left with the risk that if you do not have exact intel on the configuration of the target system – or if they decide to change one of the values – your code is useless. The right tool for the job is sometimes not the most elegant one, but the one that is guaranteed to work.

        Finally, this provides a perfect way for people to protect themselves from the worm as soon as it is discovered – just change one single bit in your configuration (if there are truly 2^64 different values, I am quite certain some of them are redundant), and voilà!, worm protection enabled. Now, if a variant pops up with the exact configuration you have changed to, you then know (i) it is exactly you they are after, and (ii) you have a leak.

        None of these points change even if the malware programmers had an infinite amount of time. Now, I am not saying that Stuxnet is a paragon of expertise, and I would certainly agree that some of the comments I have seen on it that say things like “it could only have come from a government agency” are completely over-stating the skill and cost required to put this together. However, I do not see how, presuming the target was Iran, this could be seen as anything other than a complete success. I am sure if this is the case, whomsoever was responsible is enjoying the plaudits from management.

        Comment by Andrew Jamieson — January 17, 2011 @ 2:55 pm

      • halvar: yes, the Iranians or anyone with the targeted factory configuration can get at the payload. However, Symantec wouldn’t be able to. If it matters enough to avoid publicity, you’d do it the way I suggest.

        andrew:

        re: limited range of inputs — they are multiplicative. So hashing number of PLCs, IDs for each, type of each, etc. quickly adds up. Even with 1 bit per PLC (say type A or B), you only need 64 PLCs to reach an input difficult to brute-force.

        re: this would hard-code the target configuration — that’s already how Stuxnet works. It looks for a bitstring at a fixed offset, checks versions, etc. All I’m describing is how to do this without giving away what you’re looking for. This is a solved problem in terms of encrypted database search and software protection.

        Comment by Nate Lawson — January 17, 2011 @ 3:40 pm

      • @Nate
        (1) Multiplicative inputs – Sure, that works well, if you can know the target system that well. However, from what I understand of the function of Stuxnet, the writers of the malware either did not know the system _that_ well, or they deliberately shot wider to ensure that they hit their target. This also feeds into,
        (2) In regards to Stuxnet looking for a fixed target, I must confess I have not gone through the code myself, but from the Symantec report we have:

        “First, the PLC type is checked using the s7ag_read_szl API. It must be a PLC of type 6ES7-315-2.
        The SDB blocks are checked to determine whether the PLC should be infected and if so, with which sequence (A or B).” (http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf, p35)

        So, you have an eight-character version string – which certainly could have been used to encrypt the payload. The SDB block must also be equal to 0100CB2Ch. Yes, that would have made the system more difficult to reverse engineer, etc., and if that is the argument, I agree wholeheartedly. However, I am not an expert on PLC type version numbering, and I don’t know how common these values are, so I certainly cannot state if this would _prevent_ reverse engineering (which you state above is ‘your whole point’), or what the actual size of the key domain would be for someone trying to reverse it.

        The next step then checks the number of values, which must be in excess of 33 to execute the payload. This cannot be used as the key, as any attempt to obfuscate it would have to reveal the trigger value anyway.

        I still think that KISS is the most likely reason for the lack of obfuscation. Assuming that they had a target in mind, I think it quite reasonable that they built to infect that target, and did not care what happened after the fact. It seems to have worked (from what we know through the press), why care for analysis after the fact?

        Comment by Andrew Jamieson — January 17, 2011 @ 6:05 pm

      • >For a ‘secure trigger’ system as you describe to be useful,
        >you would need to have a large range of input configurations.

        No you don’t. Even the DOS-era Cheeba virus with its basic filename triggers (see my comment at the end of this thread) was hard enough to analyse, anything more sophisticated would be well-nigh impossible in practice even if you can claim it’s easy in theory.

        Comment by Dave — January 20, 2011 @ 2:28 am

  2. Hey Nate,

    Do you think an encrypted payload should be HMAC’d to determine if you got the right key, or are a few known-plaintext bits enough? I seem to recall PGP using 16 bits of known plaintext; I’m just wondering if HMAC would be the proper solution (using a hammer as a hammer, and a screwdriver as a screwdriver, and all that). Presumably one wouldn’t want to use the same key for HMAC and decryption – that’s also a no-no – but it should be straightforward to derive both independently (via HMAC) from the hashed parameters that would have been used for decryption in the simpler, one-key scheme.

    Comment by Travis — January 17, 2011 @ 10:26 am

    • Sure, you could just hash the expected key and only use it if you get a match. You would want some kind of checksum to prevent crashes due to jumping into improperly-decrypted code.

      Comment by Nate Lawson — January 17, 2011 @ 11:25 am

  3. Oh yeah, it’s worth mentioning that I think PGP’s 16 bits of known plaintext in symmetric mode was chosen because they wanted to make a good guess as to whether you typed in the right password or not, but not reduce the keyspace overmuch to brute-force. The math is kind of interesting; if your base rate of mistyping the passphrase is, say, 50%, then you get a false positive – an improper recognition – only 2^-15 of the time. But it reduces the brute force keyspace only by 16 bits; you still have to do a plaintext-recognition pass 1/65536 of the time, which can be a lot on, say, a 256-bit keyspace.

    But an HMAC would be even better; it’s definitely more expensive for an analyst than checking a known-plaintext cookie, but a legitimate recipient has to only do it once. It may be more expensive than a plaintext-recognition pass, and even if it weren’t, you could truncate it to 16 bits and have the same effect as known-plaintext cookies. More generally, doing encrypt-then-HMAC also neatly avoids the PKCS #5 padding oracle attack (I hope those links came out right).
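    Something like this, as a rough sketch (stdlib hmac/hashlib; the SHA-256 keystream stands in for a real cipher, and the config string is made up):

```python
# Sketch: encrypt-then-HMAC with independently derived keys (stdlib only).
import hashlib
import hmac

def derive_keys(config_hash: bytes):
    # Both keys come from the hashed environment parameters, but via
    # different HMAC labels, so neither is reused for the other purpose.
    enc_key = hmac.new(config_hash, b"encrypt", hashlib.sha256).digest()
    mac_key = hmac.new(config_hash, b"mac", hashlib.sha256).digest()
    return enc_key, mac_key

def _stream(enc_key: bytes, length: int) -> bytes:
    out = hashlib.sha256(enc_key + b"block0").digest()
    while len(out) < length:
        out += hashlib.sha256(enc_key + out[-32:]).digest()
    return out[:length]

def seal(config_hash: bytes, payload: bytes) -> bytes:
    enc_key, mac_key = derive_keys(config_hash)
    ct = bytes(p ^ s for p, s in zip(payload, _stream(enc_key, len(payload))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()  # truncatable, per above
    return ct + tag

def open_sealed(config_hash: bytes, blob: bytes):
    enc_key, mac_key = derive_keys(config_hash)
    ct, tag = blob[:-32], blob[-32:]
    # Verify before decrypting: a wrong key fails here, so we never jump
    # into improperly-decrypted code, and the tag check reveals nothing.
    if not hmac.compare_digest(hmac.new(mac_key, ct, hashlib.sha256).digest(), tag):
        return None
    return bytes(c ^ s for c, s in zip(ct, _stream(enc_key, len(ct))))

cfg = hashlib.sha256(b"plc_type=6ES7-315-2|sdb=0100CB2C").digest()
blob = seal(cfg, b"payload bytes")
assert open_sealed(cfg, blob) == b"payload bytes"
assert open_sealed(hashlib.sha256(b"wrong config").digest(), blob) is None
```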

    Comment by Travis — January 17, 2011 @ 11:01 am

  4. “A man of great wisdom often appears slow-witted.” Have you heard this quote, Nate?

    If the developer of Stuxnet had built it in a way so smart and sophisticated as to be stealthy and obfuscated, there wouldn’t be much guesswork left – it would leave a trail to its origin, since only a few big agencies/organizations/groups/individuals can do such work. It might have been a tactic for the author to write Stuxnet this way, which in turn hides the origin. Maybe that was unintentional. But things aren’t always as obvious as they look.

    Comment by Devy — January 17, 2011 @ 11:21 am

    • Yeah, this was clever, intentional misdirection. Doing this right would narrow the possible candidates down to anyone who bought Surreptitious Software, has seen a Moti Yung paper in the past 15 years, or reads this blog.

      Comment by Nate Lawson — January 17, 2011 @ 11:28 am

      • Not just that – it’d also narrow it down to organisations that could justify the additional development cost. Correct me if I’m wrong, but I don’t think common commercial malware uses this kind of advanced technique, which means that Stuxnet would’ve stood out even more than it already did.

        Comment by makomk — January 19, 2011 @ 5:21 am

  5. i’ve been saying much the same thing about stuxnet, but with regards to some of its other technical aspects. i think a lot of people are giving the code way more credit than it’s worth and i think a lot of people are severely limiting their analysis by assuming it must be a “very advanced” group that produced this. the fact that a very moderate group could have produced this should be very, very scary to anyone paying attention …

    Comment by munin — January 17, 2011 @ 1:25 pm

  6. As far as I can tell, the malware did what it was designed to do, and there’s no conclusive evidence who is behind it (only speculation).

    Isn’t that a full success, from the creator’s point of view? Working code beats pretty concepts any time.

    Comment by Moritz — January 17, 2011 @ 1:39 pm

    • If you’re a professional attacker, generating lots of public discussion about your techniques is usually a bad thing. Especially if there’s another way of doing the exact same thing without generating publicity.

      Comment by Nate Lawson — January 17, 2011 @ 3:41 pm

      • As you correctly pointed out, “your techniques” refers to stuff that’s old and well-known. Probability of unambiguously inferring Stuxnet authors from techniques used = 0.

        Comment by Michael — January 17, 2011 @ 10:09 pm

      • I’m not entirely sure that that’s true if the point you’re trying to make is partly political. Look at the NYT (and elsewhere) coverage: once discovered, it can be beneficial from a political perspective to have lots of newspapers talking up what you’ve done.

        Comment by Ian Betteridge — January 18, 2011 @ 6:33 am

      • Yes, and therefore the worm is not the work of professional attackers. That is my exact point.

        Comment by Nate Lawson — January 19, 2011 @ 8:10 am

  7. Consider there may never even have been a virus attack and that this whole thing is designed to cover the tracks of someone INSIDE of the facilities tampering with the equipment.
    This whole thing is quite possibly psyops, reported by that reliable journal of establishment view of reality, the New York Times.

    http://www.roytov.com/articles/awb.htm

    Comment by american — January 17, 2011 @ 1:40 pm

    • Yeah, right. Millions of dollars worth of zero-day exploits disclosed, and the world’s first glimpse at a SCADA rootkit, all as misdirection to cover up … something much more mundane, a physical insider attack. I should dig through my spam folder and find some good deals on third-world olanzapine and risperidone for you.

      Comment by Take Your Neuroleptics — January 18, 2011 @ 5:43 pm

  8. The authors of Stuxnet had a specific target in mind. All it needed to do was evade detection until it had done the damage. Why would they care if someone figures out what it does after it has already destroyed their target? If it happens to be discovered before its job is finished, you do not need to figure out the payload to get rid of it.

    Malware designed to enter and spread on small private networks for a one-time purpose is not the same as massive internet worms that need to hide the workings of their command and control structure.

    Comment by Anonymous — January 17, 2011 @ 1:41 pm

    • Except Stuxnet works a lot like traditional malware in terms of C&C, regular updates, etc. Why spend any effort on releasing updates if you’ve accomplished your goal?

      Comment by Nate Lawson — January 17, 2011 @ 3:43 pm

      • why indeed.

        the 2 obvious answers are 1) they haven’t actually achieved their goal, or 2) they have serious problems gathering the requisite intelligence to know that they have achieved their goal (which in turn would call into question how they could have targeted their payload so precisely in the first place).

        Comment by kurt wismer — January 18, 2011 @ 10:21 am

  9. Or maybe they wanted to send a message: “you’re going down pal, be very afraid”.

    Or “Process control engineers had better not bring their laptops to help out Iranian infrastructure”.

    Wikipedia quotes some Iranian official saying they’re not applying the patches from Siemens because they believe that they may simply be updates. The target now no longer trusts his own systems and vendors. The attacker may be intentionally playing on his paranoia.

    I’m not so sure that it would have been so easy to obscure the payload. There simply may not be that much entropy in the model and serial numbers of the targeted units. After all, there are a finite number of Siemens deployments in the world and it wouldn’t take long at all to brute force them. People had a pretty good idea that Iran was the target just from the pattern of initial infection rates.

    To make it overly-targeted would mean that Iran could simply change one parameter and rid themselves of it. Besides, why couldn’t Iran just go ahead and publish it when they caught it on their own systems? (Or anyone else who happened to be in there too :-)

    Comment by Marsh Ray — January 17, 2011 @ 2:49 pm

    • The target would find out eventually when investigating all the broken centrifuges. The malware analysts finding it only sped up that process. So it couldn’t be hidden from the actual target.

      From what I’ve read, there is plenty of entropy. You can always obscure things a bit by adding invariants to the mix. The analyst will have trouble finding out which inputs matter and which don’t.

      It’s already specific in its targeting. I’m just describing a more secure way of accomplishing the exact same thing.

      Comment by Nate Lawson — January 17, 2011 @ 3:50 pm

  10. The biggest flaw in that NYT article was that it’s a fluff piece that does nothing but try to make statements of “fact” based on hearsay “evidence” from sources with no direct knowledge. I’ve still not seen a shred of evidence anywhere that gives credence to the “government(s) dunnit” theory. The US/Israeli government types doing the nudge-nudge, wink-wink, puffed-out-chest thing aren’t actually in the know, AFAICT. While it may yet turn out to be a government effort, I don’t think one can rule out a pissed-off contractor getting revenge for not being paid by the Iranians, or some such more mundane thing. Nor can one rule out that the Iranians aren’t even the target.

    Comment by Leopold — January 17, 2011 @ 3:19 pm

  11. Sometimes in politics – the art of foreseeing war and winning it at all costs – it is prudent to flank the enemy, and let them see you do it.

    Comment by Jack — January 17, 2011 @ 3:37 pm

    • But to also let them see *how* you did it?

      (I do like the quote though :))

      Comment by Bean Taxi — January 18, 2011 @ 1:54 pm

  12. “First, there appears to be no special obfuscation. Sure, there are your standard routines for hiding from AV tools, XOR masking, and installing a rootkit. But Stuxnet does no better at this than any other malware discovered last year. It does not use virtual machine-based obfuscation, novel techniques for anti-debugging, or anything else to make it different from the hundreds of malware samples found every day.”

    But these things are more likely to be detected by heuristics (within AV labs, if not in their end products) and cause deeper manual analysis to occur sooner. Once people are looking, any of those technologies is only going to slow a skilled analyst down by a day or two, so the best strategy would seem to be to avoid analysis for as long as possible – which means looking as normal as possible.

    Comment by James — January 17, 2011 @ 4:37 pm

    • Right. Some AV companies dug around and found samples of early versions of Stuxnet in their archives going back to 2009 that nobody really spent time analyzing. If Stuxnet had used completely novel obfuscation and anti-debugging techniques, it would have been noticed a lot sooner. Some of the most clever malware samples I’ve seen are those that ‘hide in plain sight’ – the obfuscation is at a level way higher than machine or assembly level. Implementing a new machine/assembly-level obfuscation method would make a malware sample stick out like a sore thumb against a background of mediocre malware.

      Stuxnet was able to spread for a year or more before being detected for what it was. The one thing the authors messed up was making it spread too wide.

      If anyone should be embarrassed about Stuxnet, it’s the AV industry and not the Stuxnet authors.

      Comment by Jon — January 17, 2011 @ 7:50 pm

    • Since Stuxnet does “encryption” of various files, I don’t think AV heuristics such as entropy analysis would have triggered any sooner if they used better obfuscation. Also, I’m talking about obfuscation of the PLC payload, not the Windows malware/exploits. Are you sure those 2009 samples had the PLC payload?

      I agree the AV industry is often behind the curve. But that’s a bit off-topic.

      Comment by Nate Lawson — January 18, 2011 @ 9:23 am

      • the damage that is now being attributed to stuxnet happened (if i’m not mistaken) in or by november ’09. if the PLC payload wasn’t in the ’09 version then attributing that damage to stuxnet makes no sense.

        i could be mistaken though. at one point i was under the impression that the damage actually occurred in november 2010, long after stuxnet reached the peak of its notoriety (which also makes little sense unless it’s just a convenient scapegoat).

        Comment by kurt wismer — January 18, 2011 @ 10:28 am

  13. If we are to believe what the NYTimes reports, Stuxnet could have been written years ago, which would align with the non-state-of-art techniques. I suspect it was written hastily as an alternative to military strikes. They may not have had the testing/development time available.

    Then again, this was a government(gubment) job…….

    Comment by Matt — January 17, 2011 @ 8:45 pm

    • A previous commenter said Stuxnet samples were later found dating back to 2009. If true, I think it is likely these did not have the PLC payload and were just the Windows malware component.

      Comment by Nate Lawson — January 18, 2011 @ 9:24 am

      • I think you need to distinguish between Stuxnet the final product and the engine that it’s built around. If they licensed a COTS malware engine from the Russians and then bolted on the C&C and payload themselves (which would be the most straightforward explanation for the apparent two-teams design), then what’s been around for a while is the malware engine, but not necessarily Stuxnet as a whole. Bits and pieces of malware engines are re-packaged and re-sold all the time; could the core of Stuxnet be just another example of this?

        Comment by Dave — January 20, 2011 @ 2:38 am

  14. How about those 0-days that Stuxnet uses?
    Don’t they give information about the attackers?
    I don’t believe some dude in a basement or an unhappy contractor could have come up with that.
    They wanted to infect their target, deliver the payload and that’s it. Why is it so hard to believe the US and Israelis did it? Don’t they have good motives?

    Comment by Jerome — January 17, 2011 @ 9:34 pm

  15. right, stuxnet was likely a quick whip-and-slap type deal, just one blow of a barrage of slap jabs, with more being quickly whipped up now.

    Comment by billyd — January 17, 2011 @ 9:52 pm

  16. I will not consider Stuxnet a success, because Iran’s nuclear program is still going on.

    Comment by programmer75 — January 17, 2011 @ 11:52 pm

    • That’s a bit like saying the Battle of Britain wasn’t a success, because it did not end WWII on its own.

      Comment by Ian Betteridge — January 18, 2011 @ 6:35 am

  17. Nice theory. There’s one problem. As you already said, in order to successfully use payload encryption, you need a sufficiently large set of parameters that are present on the targeted systems. If too few parameters are used, the choice of parameters alone could reveal what the hackers are after. But if too many parameters are used, the probability that a targeted system deviates in a single parameter increases. So the most important prerequisite to using this technique is to know the targeted system well enough. Now apparently they targeted Step 7 systems in Iran, but no Step 7 systems in any other environment. I guess ‘Iran’ would be the hard part to catch in parameters. Maybe they just didn’t know the targeted configurations well enough to use this technique.

    Comment by Peter — January 18, 2011 @ 1:14 am

    • Stuxnet is already highly targeted. It checks various PLC parameters, such as controller type, and only installs its patches if a certain number of each type of controller is found.

      Hash-and-decrypt does not require matching only one configuration. You can build an array of keys and choose one based on the inputs as well as hashing the inputs.

      For example, let’s say you want to only decrypt the payload for controllers with IDs 130-179 in the lowest byte of the ID field. You build an array of 256 keys. 206 of them are random garbage. 50 of them are the correct key, encrypted with a key derived as SHA-256(130), SHA-256(131), etc.

      Without brute-forcing this field and trying each resulting key, you can’t decrypt the payload. Now of course, this example has extremely low entropy and can be searched exhaustively. But multiply this by N different parameters and repeat this process N times and you can build something that can’t be brute-forced.
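      A toy sketch of that table, with XOR masking of the key standing in for the encryption (stdlib only; a minimal illustration, not anything from an actual binary):

```python
# 256-entry key table: only permitted ID bytes unmask the real payload key.
import hashlib
import os

def mask(id_byte: int) -> bytes:
    # Per-ID masking key, derived as SHA-256 of the single input byte.
    return hashlib.sha256(bytes([id_byte])).digest()

def xor32(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

PAYLOAD_KEY = os.urandom(32)
ALLOWED = range(130, 180)  # the 50 permitted low bytes of the controller ID

# Build time: 50 real entries, 206 garbage entries, indistinguishable at rest.
table = [xor32(PAYLOAD_KEY, mask(i)) if i in ALLOWED else os.urandom(32)
         for i in range(256)]

def recover_key(observed_id: int) -> bytes:
    # Run time: unmask the entry for the ID byte actually observed. On a
    # non-target system this yields garbage, and nothing in the table says
    # which entries were real. One byte alone is trivially brute-forced
    # (256 tries); the point is to chain N such stages so the product of
    # the per-stage spaces is too large to search.
    return xor32(table[observed_id & 0xFF], mask(observed_id & 0xFF))

assert recover_key(150) == PAYLOAD_KEY
assert recover_key(42) != PAYLOAD_KEY
```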

      Comment by Nate Lawson — January 18, 2011 @ 9:30 am

  18. Wouldn’t heavy obfuscation work against Stuxnet?

    What if some guy discovered Stuxnet in a nuclear facility and was unsure what it was. He decides to take a quick peek and the code appears to be normal. Nothing out of the ordinary unless he actually started to reverse engineer it. Given that it was digitally signed it had to be legit, right?

    Now, what if the code was obfuscated. Really heavy stuff, like you say, VM-based obfuscation. Why would they have programs with such heavy obfuscation, far, far beyond the capabilities of an in-house developer? Who made it, why is it signed and what does it actually do?

    Comment by AnotherIdiot — January 18, 2011 @ 1:18 am

    • Yes, because discovering ordinary Windows malware in your air-gapped industrial computer is so common it will be ignored.

      Comment by Nate Lawson — January 18, 2011 @ 9:32 am

  19. Well folks, there are other issues than just taking down some specific piece of equipment. Perhaps the creators wish to make it known that they can bring down these systems with pretty basic means. The message being: I’m not considering you to be much of a threat, see, I can hamstring your program with my basic toolset… That would tend to bring me back to the bargaining table.
    As a general rule you do not want to use your wartime capabilities till the war breaks out. This is how we won (if there is such a thing as winning a war) the cold war. We got the Soviets to spend a huge sum of money countering a fake Star Wars program. I suspect the Iranians are now scrambling for folks that are experts in security.
    This brings up another possibility: now that “we/they” know the Iranians are looking for cyber security folks, wouldn’t that be a great way to get one of your covert ops guys in deep deep deep?
    Things are almost never what they seem. I know CNN certainly didn’t go to the same wars I’ve been to.

    Comment by William roosa — January 18, 2011 @ 3:58 am

    • I wonder if that would be a wise strategy. It would cause them to better secure their systems and in a real war situation they would be harder to hit.

      Comment by Peter — January 18, 2011 @ 6:14 am

  20. nate – you are, in fact, not the only person who talks about serious deficiencies in the malware (as the comments here no doubt reveal). i myself spent some time last fall pointing out stuxnet’s failings. one of my conclusions was that it was made by people who don’t have experience in the malware field. they lack the tactical sophistication that professional malware gangs have had for several years, which makes them (despite the sophistication of their PLC code) relative beginners in the world of launching cyber-attacks.

    Comment by kurt wismer — January 18, 2011 @ 4:57 am

    • kurt, I’d appreciate any links to other articles that point out its deficiencies. Thanks.

      Comment by Nate Lawson — January 18, 2011 @ 9:33 am

      • http://anti-virus-rants.blogspot.com/search/label/stuxnet has all 5 of my posts that touch on stuxnet (includes some criticism of the speculation that was rampant at the time). the failings i was concerned with were more tactical than technical, however.

        for example, one of the major discussions here seems to focus on the lack of advanced obfuscation, but i would suggest that staying hidden from AV was more important than being hard to analyze. AV vendors can add detection for a thing without fully dissecting it (in fact they do this most of the time). once it was found, being hard to analyze would have made little difference in achieving their goal because preventative measures could have been deployed before manual analysis even started.

        Comment by kurt wismer — January 18, 2011 @ 10:43 am

  21. Could it be they wanted to make it look simple?

    Not knowing the full details on who wrote this, isn’t that exactly what everyone is now thinking – that the NSA would never write something so bad… hmmm…

    Comment by Chris Adams — January 18, 2011 @ 5:36 am

  22. The comments about not sticking out using novel obfuscation techniques sound plausible. Additionally, I think it is plausible that the attackers wanted them to question the security of the specific hardware. We can’t always depend on Saudi Arabia to block the export of controllers destined to Iran. Perhaps part of the intent was not just physical damage, but also a loss of trust in the hardware. If the Iranians can no longer trust the controllers and begin designing a new system that does not rely on them, that could be the real win in terms of buying us time.

    Comment by Adam Smith — January 18, 2011 @ 7:36 am

  23. I’ve got a followup on the “too little time” theory. My guess is the Iran nukes program was WAY closer to success than the government let the public know (because it might have created panic and chaos in the streets, especially in Israel). In fact they might have intercepted some secret Iranian transmission that said they will have a nuke by next week and immediately plan to hit Israel with it. So action to IMMEDIATELY get a targeted virus to Iran’s nuke program was needed.

    Comment by Videogamer555 — January 18, 2011 @ 7:58 am

    • Really? I don’t think either Israel or the US expects Iran to carry out a nuclear attack on Israel out of the blue. They may be religiously fanatic, but they are not stupid in Iran.

      Comment by Peter — January 18, 2011 @ 8:09 am

      • I respectfully disagree with you Peter.

        A. Both “religiously fanatic” and crazy actually do have a correlation to stupid.

        B. Iran’s government is at least as stupid as GW Bush’s government, which was stuffed full of graduates of an (at the time) unaccredited religious university, selected only by passing a loyalty and stupidity test.

        Peace,

        Comment by BillyBob — January 18, 2011 @ 9:24 am

      • I don’t think “respectfully” is accurate. Any more political discussion will be deleted without explanation. This is a technical blog. Thanks.

        Comment by Nate Lawson — January 18, 2011 @ 11:27 am

  24. My friend Steph points out to me “according to science fiction, any new weapon never gets used by only one side. And sometimes the side it’s used against now has some working model (used against them) to design something unexpected.”

    Forget the origin of the observation (me, my buddy Steph, SF) and, in contrast to a similar point above: why show all your cards? Hell, they used four, FOUR, zero-day attacks [http://en.wikipedia.org/wiki/Stuxnet#Windows_infection]. This makes it abundantly clear they *intentionally* used what state-of-the-art cards were required and no more.

    Also another post above mentions this being caught in a net in 2009 so ipso facto it is not state of the art. Throw in development time etc and you may have 2008 tools at play.

    I am also fond of the “get them to the bargaining table” theory.

    Who knows how many other more and less sophisticated attacks are in play in Iran now. Certainly the attacker would unleash the simpler ones first, or at least design them totally differently.

    Terrific article Nate. Thank you.

    Comment by BillyBob — January 18, 2011 @ 8:43 am

  25. The most interesting theory I’ve heard (from Thomas Ptacek) is that Stuxnet may just be a smokescreen. Let’s say that a software attack has set back Iran’s nuclear program in some way. The PLC abuser code could have been introduced to the target systems in many ways — at the Siemens factory in Germany, during transit, by a code update applied by contractors, or by moles in Iran. None of these involve the worm.

    Now to protect the actual source, you release the worm with the same PLC payload some time later. You know the damage has been done, but you want to draw attention away from your real method of delivery and confuse the target with mistrust. This magnifies the results because now they are going to be wasting lots of time reviewing all USB drives and looking for how the worm got onto their systems. You shut down their ease of working. This doubles the effect at no additional cost.

    It’s a fun theory but there’s no evidence for it. Of course, there’s not much evidence for alternative theories either.

    Comment by Nate Lawson — January 18, 2011 @ 9:40 am

  26. It would seem that there is a real risk that using advanced hashing techniques might accidentally cause the virus to fail due to an incidental configuration change on the target systems. If you want it to be widely permissive on a wide range of machines that run Step 7, then it’s hard to use machine hashes. It’s not entirely impossible, but it does get a lot trickier.

    Comment by Dave — January 18, 2011 @ 10:10 am

  27. You miss a couple points.

    1) It worked. Perhaps it even reached the limit of what it could do, so more effort would have been wasted effort.

    2) Maybe they wanted it to be reversed. Maybe they needed the attack to be discovered in the wild by researchers in order to provide cover for an operative who directly delivered the attack. Can anyone really assume that a USB-drive-based attack would reach all the needed targets?

    Comment by Seth — January 18, 2011 @ 10:53 am

    • No, there is no solid evidence that “it” (the worm) worked. You have a series of disjoint reports that “something happened” and “oh, here’s a worm”.

      Your second point may be true and goes against the narrative that this was some slick government job. Taking a 2009-era malware and slapping the PLC payload into it may indeed be cover for an inside job.

      Comment by Nate Lawson — January 18, 2011 @ 11:32 am

      • Actually I think the “how it worked” part is clear by now. From the investigation of the virus’s inner workings, it has been determined that if it got onto an Iranian centrifuge controller it would INDEED force a malfunction (even the nature of the malfunction has been determined: random intervals of high-speed spinning) yet cause the controller’s logging software to report correct operation.

        Comment by Videogamer555 — January 18, 2011 @ 5:51 pm

      • Just a few keywords for the interested.

        Zippe centrifuge, rotor, variable frequency drive, resonance, bellows, supercritical.

        Stuxnet slowed a very fast-spinning, very frail long tube to nearly stopping. This torqued it and caused oscillations that destroyed the thing. The whole trick of running enrichment centrifuges is avoiding oscillations of the meters-long plumbing you’re spinning.

        Comment by Major Variola (ret) — January 18, 2011 @ 7:17 pm

      • Yes, there is evidence that the PLC payload could have caused problems. And there’s a worm that’s been around in varying forms on the Internet since 2009. What I’m saying is there is no evidence that the Internet worm with PLC payload made it onto Iranian centrifuge controllers.

        Assuming the reported centrifuge problems were caused by a PLC programming issue, there’s still no evidence that the code wasn’t introduced at a Siemens factory, during transit to Iran, by technicians at the factory, by foreign contractors later on, software updates, etc.

        A worm is an extremely unreliable delivery mechanism, even with the USB flash drive vector. It’s unlikely a skilled saboteur would place their hopes in it.

        Comment by Nate Lawson — January 18, 2011 @ 9:16 pm

  28. Stuxnet was novel because it ‘looked’ for its target system. And the PLC aspect is new.

    And it actually shattered about a thousand .ir rotors.

    That bought its authors some time.

    You should feel lucky that most malware pros, like most criminals, only want money. Those with ideological motives are much more dangerous. Much more.

    Comment by Major Variola (ret) — January 18, 2011 @ 7:15 pm

    • There is no proof that the Internet worm + PLC payload caused the setbacks in the news. You’re jumping to conclusions that aren’t warranted by the evidence.

      Also, motive is not what makes you dangerous, skill is. Motive + no skill = underpants bomber. Fear.

      Comment by Nate Lawson — January 18, 2011 @ 9:19 pm

      • This comment makes me realize you’re just being silly.

        1) There’s a tool out there to destroy centrifuges in Iran.
        2) Suddenly there’s a lot of broken centrifuges in Iran.
        … absolutely _must_ be a coincidence?

        I’m going back to reading decent blogs.

        Comment by Lyla — January 19, 2011 @ 12:52 am

  29. If it wasn’t a virus on there, then WHY did the Iranians have to disinfect their computers before continuing operations? I specifically heard that they ran AVs on their PLCs after hearing that they were infected. I also know it did not destroy them as an above user claimed. I specifically heard on the news it merely “damaged” them. Also there is no reason that an equipment manufacturer would sabotage their OWN equipment before selling it to Iran. They aren’t based in the US or Israel so they would not even have a national-security interest in mind.

    Most likely a secret agent (sleeper cell style) from the US or Israel impersonated an Iranian citizen, went through tech training there and spent years living in Iran, all with knowledge of his secret operation. He then (with enough Iranian nuclear tech training at a local university) applied to be hired at the plant. Then he probably got a secret package from his home country in his mail with the infected flash drive when it was time to strike. Now hired at the plant, he was able to (when nobody happened to be watching him) casually insert his thumb drive into one of the control computers, and a second later he removed it. Then he just kept on walking to where he was supposed to go according to his job. But in the second the USB flash drive was in the computer, it had copied the virus over and autorun it. Mission accomplished.

    Comment by Videogamer555 — January 18, 2011 @ 11:26 pm

  30. “Stuxnet has gained a lot more media interest than any work I’ve done in the field of system security. Since Stuxnet does not use a particular obfuscation method that I like, and because I am jealous that I had nothing to do with its original design and implementation, Stuxnet is an inferior system and I must rant on my blog about it.”

    Comment by John Byrd — January 19, 2011 @ 8:32 am

    • Nice! But next time please include some response to the article itself in addition to parody.

      Comment by Nate Lawson — January 19, 2011 @ 2:48 pm

  31. I think it was intended to be found. And it didn’t have to be too spiffy since it was not Internet-delivered. Top-level plants of any kind never directly connect their critical internal systems to the public Internet; that would be foolish. And should be.

    Comment by Oris — January 19, 2011 @ 5:39 pm

    • if it was meant to be found, why did it have so much stealth designed into it?

      also, the internet is irrelevant in this case. stuxnet doesn’t spread over the internet, it spreads on USB flash drives.

      Comment by kurt wismer — January 19, 2011 @ 9:51 pm

  32. It was done by SERCO.

    Also, everyone seems to miss the trigger flag date: 9th May 1979 was the signing day when the US & USSR signed the SALT 2 treaty, thereby limiting nuclear weapons.

    Anon.

    Comment by Anon — January 19, 2011 @ 9:18 pm

  33. Interesting that you should mention secure triggers; this technique has been used by a number of pieces of malware all the way back to DOS viruses (using filenames as keys to trigger different payloads), which made them a serious pain to analyse. So the technique is certainly known in the malware community.

    Comment by Dave — January 20, 2011 @ 2:21 am

    • Yes, this is exactly my point with “Bulgarian teenagers”.

      Comment by Nate Lawson — January 21, 2011 @ 8:42 am

  34. Perhaps the lack of sophistication was intentional, in order to make this look like the work of the usual hacker suspects, and not a government agency. Plausible deniability is the norm for the letter agencies.

    Comment by Mike — January 20, 2011 @ 8:58 am

    • unfortunately the malware component of stuxnet lacks sophistication even by the standard of “the usual hacker suspects”.

      the usual hacker suspects these days are professionals for whom online attacks are a business. their techniques and tactics have evolved and become quite polished.

      stuxnet lacks that polish. presence or lack thereof of advanced obfuscation techniques aside, malware that broadcasts itself is not something professionals use for stealthy targeted attacks.

      Comment by kurt wismer — January 20, 2011 @ 9:07 am

      • Of course not all hackers are Linux-using uber geeks. Some prefer to use normal Windows-based hacking software, like Sub7, NetBus, Back Orifice, Bifrost, etc. These NON-professional tactics still CAN be very effective. And maybe THAT type of ORDINARY hacker is what this virus was trying to simulate.

        Comment by anon — January 20, 2011 @ 1:34 pm

      • “Some prefer to use normal Windows-based hacking software, like Sub7, NetBus, Back Orifice, Bifrost, etc. These NON-professional tactics still CAN be very effective. And maybe THAT type of ORDINARY hacker is what this virus was trying to simulate.”

        i concede – if the makers of stuxnet were trying to make their attack look like it came from the 90’s, they succeeded.

        Comment by kurt wismer — January 20, 2011 @ 1:44 pm

  35. Thanks Nate for the reference and for making an interesting point. The security community sometimes amazes us, repeating old processes from the ’90s and failing to learn the right lessons. And being unaware of attacker techniques is one of the many pains that IT suffers.

    Maybe this is one of the subjects where crackers, AV vendors and a few others have made progress; little was published; and even less was digested by the anti-botnet community.

    We came up with a few more flexible techniques to trigger attacks (e.g., our PacSec ’06 talk). Say, given a string of bits (or the concatenation of strings), the code gets decrypted only if certain bits within the string hold a specific value. Hence, the bot may concatenate many parameters describing the infected machine (so that the parameter space is non-bruteforceable) and then deploy a trigger that will only look for a few of these parameters to hold a pre-fixed value (e.g., a value characteristic of certain SCADA networks).
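    Roughly, as a toy sketch (byte rather than bit granularity for brevity, and every string and offset here is invented):

```python
# Masked-trigger sketch: key only on selected positions of a long
# environment fingerprint; the rest are don't-cares (stdlib only).
import hashlib

POSITIONS = (0, 1, 2, 3, 11, 12, 13, 14, 20, 21)  # offsets that must match

def trigger_key(env: bytes) -> bytes:
    # Hash only the selected positions. In practice you would select
    # enough positions that the space of their values is non-bruteforceable.
    return hashlib.sha256(bytes(env[i] for i in POSITIONS)).digest()

# Build time: fingerprint of the full target environment string...
target_env = b"6ES7-315-2|0100CB2C|33drives|fw2.6|langFA|siteN|..."
EXPECTED = trigger_key(target_env)

def fires(env: bytes) -> bool:
    return trigger_key(env) == EXPECTED

# A host whose don't-care fields differ (firmware, language, site) still fires,
assert fires(b"6ES7-315-2|0100CB2C|33drives|fw9.9|langEN|siteX|...")
# but changing any selected field (here the SDB value) keeps the code sealed.
assert not fires(b"6ES7-315-2|FFFFCB2C|33drives|fw2.6|langFA|siteN|...")
```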

    Cheers

    Comment by Ariel — January 20, 2011 @ 9:57 am

    • Thanks Ariel. We really like your work.

      Comment by Nate Lawson — January 21, 2011 @ 8:43 am

  36. Thanks to all who commented on the technical aspects of this post. I have deleted all the political posturing and locked it now since the productive discussion seems to have ended.

    Comment by Nate Lawson — January 21, 2011 @ 8:47 am
