Thursday, 03 August 2023 18:45

The Odd Duality of AI and its Unexpected Negative Impact on Cybersecurity

Written by

Reading time is around minutes.

As we head into Hacker Summer Camp in Las Vegas, the emails are already flowing freely into my inbox. Some of them are the regular players that I see every year and others are new. Still more are people that I hear from each year but with new faces to talk to. This is part of what I love about going out to Black Hat, talking new people, talking to well known people in the industry and then getting an understanding of what everyone thinks is the “big thing” for cybersecurity. This year, by far, it is AI and automation.

If you were to read some of the emails I have you would think that AI is going to end all of the worlds security challenges. I have seen everything from musings on how AI based EDR will fix the skill set gap, to leveraging AI as a force multiplier. If I took just these emails as my source of truth, I would be inclined to believe them. Now, because I would never just rely on the marketing message (marketing speak is why DecryptedTech was started in the first place), I also know that many in the security industry are well aware of the limitations and dangers of AI.

This “Schrodinger’s AI” is one where AI is both a savior and a failure depending on your observation of it in action. You could argue that AI also falls under Heisenberg’s Uncertainty Principal when it comes to marketing and the push for it. The technology that I have seen and had access to seems to know where it is going, but either does not know or cannot know the speed it is going to get there. Additionally, they seem to know the benefit without knowing (willfully or other) the risks and dangers. Psychologists would call this Optimism Bias where the observation is that the benefit (good things) more than outweigh the potential for bad things to happen. I like to call this last one a failure of imagination.

Some will say, “AI is coming embrace it now or get left behind”. Personally, I am not fond of this type of push. It enforces a defeatist attitude in the part of the people being asked to “embrace” the new thing. It also steamrollers over all concerns, legitimate and other, without addressing them. We still have issues like indirect prompt injection attacks for many AI models to deal with before AI takes over as a true mainstream product for cybersecurity. So, what do we do about AI? Well, let’s break down a couple of big arguments for and against its injection into the market today.

One of the biggest items in the “for” column is its speed. AI models are fast, they have a ton of computing power behind them and can spit out results at lightning speed. Even some of the original AI cybersecurity tools like Cylance Protect were just blindingly fast. The mathematical model behind Cylance Protect was constantly learning about Malware which allowed it to quickly identify markers in static binary files (at rest on disk) as well as markers during the pre-execution and execution phases to catch and stop the launch of a potentially malicious binary. It was fantastic, revolutionary, and a huge pain in the ass to configure. In fact, it was such a pain in the ass that most people paid for services to assist in getting it tuned to the best effect for their environment. I worked as a part of that team for a while and while I can 100% attest to the efficacy that Cylance Protect provided, it also needed a lot of attention to get started and to maintain. Cylance Protect was indeed fast, but it had the double-edged sword of a steep learning curve for organizations.

Which brings us to the next “for”. It is a force multiplier and can bridge the skills gap. IO have bundled these two together as I often see them in close proximity to each other. AI is being marketed as a way to extend the capabilities of smaller security teams. With hooks into systems via API it can be “everywhere at once” and keep an eye on your environment. AI, (mathematical models or LLMs) due to their speed and training are presented as better than their human equivalent. It is like finding that proverbial uniron employee that can do everything. AI can be trained to code, review code, look for vulnerabilities, monitor for potential attacks, review infrastructure, all while washing and folding your laundry… ok not that last bit. In a down economy while attacks are on the rise, this seems like water to a person dying of thirst.

So, what is the downside here? Seems like this one is the one thing to rule them all. I would agree with you except for one small problem. Attackers have AI too and any AI is still a software tool that is vulnerable to attack. While AI can help to ease the stress of monitoring an environment it still cannot replace human intelligence. I, and others, have witnessed an AI call something a false positive only to find out it was actually the opening stages of an attack. The one part of the “skills gap” coverage that AI marketing missed is, what do you do to detect and counter AI failures? If you have gone all in on AI and you have nothing but blinking light watchers, you are very at risk of an attack that gets around your new toy. The history of the threat landscape if littered with cases of APT groups compromising automated checks and tools as part of their attack. I have seen automated active threat hunting miss a pen tester in an environment and they (usually) only focus on known vulnerabilities. How will these automations and AI do against a determined and sophisticated APT group leveraging a 0-day?

My least favorite of the marketing “for” arguments is “having AI and automation is on the roadmap for Government Cybersecurity Strategy”. This argument relies on the claim that Human error and speed is a thing and AI can fix that… The pause there was for me to take a few deep breaths. I will say this so that everyone can hear it; ALL security tools are vulnerable to attack, error, and speed issues, even AI ones. The mistaken belief that AI can magically fix human error is so absurd it is laughable. Yes, human error and speed issues exist, but remember that the AI and automation tools are being built… gasp… by Humans with those same tendencies towards error and slowness. This becomes a regressive argument; if humans are prone to error, then anything they build is prone to those same errors. Because the errors and bias are part of the programming and learning of the model, anything which is built from that model is also prone to have those same flaws in it, especially when you are talking about intelligence and thought process ad infinitum.

Before you break out your pitch forks and torches in preparation to storm the castle, understand that I am no against AI. I think that AI and all machine learning shows great promise when applied in the right manner. I am opposed to the lack of consideration and concern for the hazards that go with AI. I am also very concerned with the trend of companies thinking they can be rid of staff because they have the latest “AI” tool. To me both of these show a complete optimism bias and only set people up for failure. The massive logical gymnastics needed to replace seasoned staff with a new and untested tool defies imagination. Well almost. The prevailing thought is very financially motivated but ignores the fact that cybersecurity should not be looked at as an expense, but as a part of general revenue generation and protection. Just as you hire the right staff to bring the money in, you need to hire the right staff to keep it. On top of that you can leverage good cybersecurity, data governance, and hygiene as part of your sales pitch.

This round of the AI revolution is already starting to wane, after the big rush to embrace ChatGPT, we are starting to see some people get over the uniqueness and fun, the luster is no longer there. People are still interested, but not in the way the creators probably hoped. After all most of the people that I know use ChatGPT and other LLMs and AI image creation tools to screw with them or come up with goofy images like Geroge Washington as a punk rocker. Threat actors looked at what it could do along with the limitations and filters then just built their own. AI is coming, that is true. Whether this is going to play out to be a good thing or bad depends on what we choose to do with it. So far the metrics are not looking good as the skill gap in staff increases while AI tools are still vulnerable to outside manipulation and attack.

Read 1296 times

Leave a comment

Make sure you enter all the required information, indicated by an asterisk (*). HTML code is not allowed.