Displaying items by tag: AI

Black Hat 2023 – Las Vegas. Sitting in one of my favorite bars in the Mandalay Bay Shoppes, 1923 Prohibition Bar, I had an opportunity to sit down and talk with Stuart McClure. For those who do not know, I worked at Cylance while Stuart was CEO there and left shortly after BlackBerry purchased the company. We spent a few moments talking about the Cylance days and how the concept of Cylance impacted the cybersecurity industry. It was one of those times when the right concept was introduced at the right inflection point and combined with the right team to get it into the hands of the public. From there we moved on to Stuart’s latest endeavor, Qwiet AI.

Published in Security Talk

As we head into Hacker Summer Camp in Las Vegas, the emails are already flowing freely into my inbox. Some of them are the regular players that I see every year and others are new. Still more are people that I hear from each year but with new faces to talk to. This is part of what I love about going out to Black Hat: talking to new people, talking to well-known people in the industry, and then getting an understanding of what everyone thinks is the “big thing” for cybersecurity. This year, by far, it is AI and automation.

Published in Security Talk

The arguments for and against AI as a threat all seem to be centered on the point of AGI (Artificial General Intelligence). This is the point where the reasoning skills of AI are on par with the average human brain. Reaching it would mark an evolution in AI. The people saying AI is a threat are trying to slow down progress toward this, while those arguing it is harmless all say we are nowhere near that stage. I have argued that this point is irrelevant in terms of assessing the dangers of a blind rush to build and shove AI into everything.

Published in Editorials

Wait, another danger of AI article? Yes, another one. Since far too many people and companies are ok with ignoring the dangers simply for the sake of the next big shiny thing, we thought we would at least do our part to raise awareness of them. I might also say “I told you so” when things do start to go sideways… ok, I would not be that much of a jackass, but I do think that pointing out issues with new technology while others seem ok with glossing over them is a good idea.

Published in News

Geoffrey Hinton, a former engineering fellow and vice president focusing on AI at Google, has made comments following his retirement from the company earlier this month (May 2023). Although his retirement was about more than his change of mind on AI (he was also 75), he has said that his concern has only grown seeing the state of AI and how hard organizations are pushing for it.

Published in Editorials

With some of the news around AI, I feel like I should just create a “what could go wrong” series of articles. After all, as we see the term “AI” pushed around as the savior for all the things, we should be aware of the fact that things could go horribly wrong with any of these systems. So, it is with that in mind that we bring you news that Microsoft is now offering an AI content moderation system called Azure AI Content Safety. I mean, a system that was taught what harmful content is, used to control speech on online platforms… what could possibly go wrong?

Published in News

After learning that there were malicious ads containing links to fake ChatGPT apps (for Windows), OpenAI launched a legitimate app for iOS. The app brings the very popular LLM to Apple users at a time when some are becoming more hesitant about its use. It has not been that long since Samsung accidentally leaked confidential information via the platform. This prompted both Microsoft (a heavy investor) and OpenAI itself to start work on private environments where data put into the model is not used to train it.

Published in News

There is a quote from the movie “The Matrix” that has always stuck with me. It comes from a scene where Morpheus (Laurence Fishburne) is explaining to Neo (Keanu Reeves) the state of the real world and the history that allowed it to get there. The line is “We marveled at our own magnificence as we gave birth to AI.” There is another important line from the HBO series “From the Earth to the Moon.” When Frank Borman (David Andrews) is asked what caused the Apollo 1 fire, he replies, “A failure of imagination.” These two lines compete for how I view the state of AI development. As we marvel at our own magnificence, we should not stop thinking about the potential risks involved as we push to advance AI. Yet that seems to be what is happening.

Published in Editorials

If I were to build a list of companies that I would not want building an AI project, Meta, the parent company of Facebook, would probably sit at the top of the list. Yet here we are with a company known for manipulating users and user data, and with a proven habit of abusing the information it has. Meta is building an AI tool it is calling ImageBind that looks to expand on how AI currently understands an environment. Most current AI image generators are (in very simple terms) text-to-image generators. They take input in the form of words and create an image from learned input (again, in very simple terms).

Published in Editorials
Thursday, 04 May 2023 12:26

Who do you trust with AI? Well… No One

The other day, while wading through the sludge that is the internet, I stumbled across a poll on Twitter asking the binary question “Who do you trust more with AI: Bill Gates or Elon Musk?” This led to a fun few hours diving deeper into that particular rabbit hole. I stumbled across articles where Bill Gates talks about AI via interviews, as well as some interviews where Elon disparages Gates’ grasp on AI. Like I said, fun.

Published in Editorials