Friday, 20 November 2015 10:43

It’s a failure of imagination that will always get you


Back in 2007 or so I was asked to write a white paper on why Intel was able to pass AMD as quickly as it did. This was back in the AM2+ days, when Intel was dropping Conroe on the world. Many people were surprised that Intel made the shift so quickly, considering how badly AMD had beaten the P4. It was incorrectly assumed that AMD had reached a peak that Intel could not touch. Because of this, AMD did not press its advantage. Instead it moved in a very different direction and purchased ATi for far more money than it should have. That one move started the long decline of AMD as we knew it. It was a massive strategic error, and it all came down to one thing: a failure of management and stockholders to imagine that Intel could so easily blow past AMD's performance lead. This type of failure can have catastrophic consequences in the business world, and in security.

Let’s take a look at one way this happens in the security field. In 2013 a security researcher decided to see what fun he could have with a group of Bluetooth-enabled toilets in a fancy hotel. He found that the manufacturer had programmed in a default connection code of 00000, so he could connect to any toilet in range. With some simple coding he was able to force multiple toilets to do all kinds of fun things. When asked why this would be open on this type of device, the manufacturer stated that they did not think anyone would do something like that.

The same statement was made when it was discovered that a whole generation of wirelessly controlled pacemakers had no security on them. The connection was open to ANYONE. Not really what you want in a device that controls your heart, now is it? There are more examples of this type of failure, but you would honestly get bored with me listing them.

My last example is inspired by a conversation I recently had. In it there was an assumption (a correct one, too) that if an application is local, does not talk to anything, and does not listen on any ports, then it is secure on its own. This is barring a compromise of the machine or a direct compromise of the file system. It is important to note that the application is an encrypted data repository and that keys and access are controlled locally. Looking at this system, it was assumed that there is no remote way to compromise the application without compromising the system it is on. However, the conversation missed the fact that the application performs an update check at launch. This single call home is an exploitable vector for remote attack.

So we now have a vector for attack, and we have to check whether there are ways to mitigate it. Can you turn off this function? Can you control who it talks to? These are the types of questions you have to ask to ensure it does not become a problem. If you can turn off the call home, or limit its communication with the outside world to a specific IP (or range) and port, then the threat is minimized. This is the type of check that should be done for every piece of software that contains critical data or is installed on a system with access to critical data.
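The "limit it to a specific IP (or range) and port" idea can be sketched as a simple deny-by-default egress allowlist. This is only an illustration of the policy, not any vendor's real update mechanism; the network range and port below are placeholders (a documentation address range standing in for the update servers):

```python
# Sketch of an egress allowlist: before letting a local tool phone home,
# check the destination against an explicit list of permitted networks
# and ports. Everything not explicitly allowed is denied.
from ipaddress import ip_address, ip_network

# Hypothetical policy: the updater may only reach this range on this port.
ALLOWED_NETWORK = ip_network("203.0.113.0/24")  # placeholder update-server range
ALLOWED_PORT = 443

def egress_allowed(dest_ip: str, dest_port: int) -> bool:
    """Return True only if the outbound connection matches the policy."""
    try:
        addr = ip_address(dest_ip)
    except ValueError:
        return False  # malformed address: deny by default
    return addr in ALLOWED_NETWORK and dest_port == ALLOWED_PORT

print(egress_allowed("203.0.113.10", 443))  # inside the allowed range and port
print(egress_allowed("198.51.100.5", 443))  # outside the range: denied
print(egress_allowed("203.0.113.10", 80))   # wrong port: denied
```

In practice this check would live in a host firewall or egress proxy rather than in the application itself, but the logic is the same: everything is denied unless it matches a destination you have deliberately approved.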

Of course, in the conversation the application would not have been a priority target. The information it contained had little value overall, and the effort needed to really compromise it would not be worth the payoff. Still, it was an interesting academic example of how easy it can be to overlook a seemingly harmless part of an application that could end up as a vector for attack.

Sean Kalinich
