Displaying items by tag: AI

Wait, another "dangers of AI" article? Yes, another one. Since far too many people and companies are okay with ignoring the dangers for the sake of the next big shiny thing, we thought we would at least do our part to raise awareness of them. I might also get to say "I told you so" when things do start to go sideways… okay, I would not be that much of a jackass, but I do think pointing out issues with new technology while others seem content to gloss them over is a good idea.

Published in News

Geoffrey Hinton, a former engineering fellow at Google and a vice president focusing on AI, made comments following his retirement from Google earlier this month (May 2023). Although his retirement was about more than his change of mind on AI (he is also 75), he has said that his concern has only grown as he watches the state of AI and how hard organizations are pushing for it.

Published in Editorials

With some of the news around AI, I feel like I should just create a "what could go wrong" series of articles. After all, as we see the term "AI" pushed around as the savior for all the things, we should be aware that things could go horribly wrong with any of these systems. So, it is with that in mind that we bring you news that Microsoft is now offering an AI content moderation system called Azure AI Content Safety. I mean, a system that was taught what harmful content is so it can control speech on online platforms… what could possibly go wrong?

Published in News

After learning that there were malicious ads containing links to ChatGPT apps (for Windows), OpenAI launched a legitimate app for iOS. The app brings the very popular LLM to Apple users at a time when some are becoming more hesitant about its use. It has not been long since Samsung accidentally leaked confidential information via the platform, which prompted both Microsoft (a heavy investor) and OpenAI itself to start work on private environments where data put into the model is not used to train it.

Published in News

There is a quote from the movie "The Matrix" that has always stuck with me. It comes from the scene where Morpheus (Laurence Fishburne) is explaining to Neo (Keanu Reeves) the state of the real world and the history that led to it. The line is "We marveled at our own magnificence as we gave birth to AI." There is another important line from the HBO series "From the Earth to the Moon." It comes when Frank Borman (David Andrews) is asked what caused the Apollo 1 fire and replies, "A failure of imagination." These two lines compete for how I view the state of AI development. As we marvel at our own magnificence, we should not stop thinking about the potential risks involved as we push to advance AI. Yet that seems to be what is happening.

Published in Editorials

If I were to build a list of companies that I would not want building an AI project, Meta, the parent company of Facebook, would probably sit at the top of that list. Yet here we are, with a company known for manipulating users and user data, and with a proven habit of abusing the information it has. Meta is building an AI tool it calls ImageBind that looks to expand on how AI currently understands an environment. Most current AI image generators are (in very simple terms) text-to-image generators. They take input in the form of words and create an image from learned input (again, in very simple terms).

Published in Editorials
Thursday, 04 May 2023 12:26

Who do you trust with AI? Well… No One

The other day, while wading through the sludge that is the internet, I stumbled across a poll on Twitter asking the binary question "Who do you trust more with AI: Bill Gates or Elon Musk?" This led to a fun few hours diving deeper into that particular rabbit hole. I stumbled across articles where Bill Gates talks about AI in interviews, as well as some interviews where Elon Musk disparages Gates' grasp of AI. Like I said, fun.

Published in Editorials

If you are a fan of science fiction movies, then you have probably seen multiple films where an AI (Artificial Intelligence) has gone mad and decided that humankind needed to be eradicated. Everything from the Terminator series through to The Matrix warns us of the dangers of creating something smarter and more powerful than ourselves. Of course, these are works of fiction, but they do represent an understanding of humankind's hubris when it comes to creating artificial intelligence.

Published in Editorials

Hate is a powerful thing, and when it spills out it can be violent, rude, and much worse. Because of the power of hate found in images, media, memes, etc., many have wondered why there are not more efforts to prevent the posting or sharing of such content. After all, why would a platform (social media or otherwise) want to allow hate speech or hateful images on its pages? Facebook took this thought process and turned it into a policy designed to help stop hate speech from showing up. Now the system has inadvertently started censoring the wrong people.

Published in News
Tuesday, 28 January 2014 13:33

Google buys up AI company DeepMind...

British start-up DeepMind has become part of Google, with the search giant paying about 400 million US dollars for it. That makes DeepMind, in financial terms, Google's largest acquisition carried out in Europe so far.

Published in News