From The Blog

Wednesday, 24 May 2023 10:25

Microsoft Announces AI-Run Moderation System to Prevent “Harmful” Content

With some of the news around AI, I feel like I should just create a “what could go wrong” series of articles. After all, as we see the term “AI” pushed around as the savior for all things, we should be aware that things could go horribly wrong with any of these systems. So it is with that in mind that we bring you news that Microsoft is now offering an AI content moderation system called Azure AI Content Safety. I mean, a system that was taught what counts as harmful content, controlling speech on online platforms… what could possibly go wrong?
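For a sense of how a system like this makes a call, here is a minimal, purely illustrative sketch of a severity-threshold decision, loosely modeled on the category-and-severity scheme Microsoft documents for Azure AI Content Safety (categories such as Hate, SelfHarm, Sexual, and Violence, each given a severity score). The scores below are hypothetical inputs; the real service produces them from its own models.

```python
# Toy sketch of a severity-threshold moderation decision. The category
# names mirror what Azure AI Content Safety documents; the scores and the
# threshold here are hypothetical inputs, not output from the real service.

CATEGORIES = ("Hate", "SelfHarm", "Sexual", "Violence")

def moderate(scores: dict, threshold: int = 4):
    """Return ("block", flagged) if any category meets the threshold, else ("allow", [])."""
    flagged = [c for c in CATEGORIES if scores.get(c, 0) >= threshold]
    return ("block", flagged) if flagged else ("allow", flagged)

# Hypothetical severity scores for two pieces of text:
print(moderate({"Hate": 0, "Violence": 2}))  # low severity -> allowed
print(moderate({"Hate": 6}))                 # high severity -> blocked
```

The point of the sketch: the entire decision hinges on scores produced by a trained model, so whatever bias went into that model flows straight through to the allow/block verdict.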

With Microsoft’s very large investment in OpenAI, it was only a matter of time before they started rolling out a new wave of AI-driven products. The Microsoft Build conference was as good a place as any, so that is where they chose to marvel in their own magnificence as they went all in on AI. These announcements also come at a time when more and more people are aware of bias in existing content moderation on social media platforms. “The algorithm” is what most will call it. These algorithms are actually first iterations of what is now being labeled AI. Google, Twitter (even under Elon, although worse under previous leadership), and Facebook (Meta) all have some form of automated content moderation. These systems have been shown to have some rather disturbing biases when it comes to content, and most AI chatbots have shown a similar bias.

As we have said before, bias in AI comes from either conscious or unconscious bias being injected into the model during training. Each person brings their own understanding of the world to this training, and the more focused they are in their thinking, the more likely that bias is to surface in their work (this is basic psychology). These models learn only what they are taught, and the people who contribute to the training data can change the patterns these models pick up. This is not new information, and there are a number of reports from a variety of sources that show this.
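To make that concrete, here is a toy sketch, not any vendor's actual pipeline, of how a labeling skew in training data surfaces directly in a model's output. The dataset and the word-count "model" are both hypothetical: annotators in this made-up set disproportionately labeled posts containing the harmless word "protest" as harmful, and the model dutifully learns that skew.

```python
# Toy word-count classifier trained on a hypothetical, skewed label set.
# It illustrates the point in the text: the model learns whatever patterns
# the labelers put in, bias included.
from collections import Counter

def train(examples):
    """Count how often each word appears under 'harmful' vs 'ok' labels."""
    counts = {"harmful": Counter(), "ok": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label text by whichever class its words were seen with more often."""
    words = text.lower().split()
    harmful = sum(counts["harmful"][w] for w in words)
    ok = sum(counts["ok"][w] for w in words)
    return "harmful" if harmful > ok else "ok"

# Hypothetical, skewed training set: "protest" only ever appears as harmful.
data = [
    ("join the protest downtown", "harmful"),
    ("the protest was peaceful", "harmful"),
    ("lovely weather downtown", "ok"),
    ("the concert was peaceful", "ok"),
]
model = train(data)
print(classify(model, "a peaceful protest"))  # "harmful" -- the learned skew
```

Nothing about the word "protest" is harmful; the verdict comes entirely from the labels the model was fed.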

If we narrow the focus to Microsoft, they alone have a history of missteps with AI. Just look at the Bing chatbot and how easily it was manipulated in previous iterations. Even as recently as February of 2023, the chatbot was not what you would call a good source of information. Microsoft has also had some internal shake-ups with the removal of the Ethics and Society team from the AI group. We have potential training bias, a poor track record with the Bing chatbot, and now no one to provide an ethical reference for the development of AI-related services. I mean… what could go wrong?

AI is the new shiny thing, and everyone is rushing to get a piece of the pie before it is all gone. Microsoft is no different; you can see this in their investment in OpenAI. The problem is that AI is not really ready to take over everything from people. While it is cool technology, it still struggles with some very basic concepts and, despite claims otherwise, has real trouble understanding context, sarcasm, satire, and humor. All of these require reasoning that is just not there in current AI models, which is why they can be so easily manipulated by user input. Letting something with no true reasoning ability, and no understanding of humor or satire, control what is “harmful” based on likely biased training is not something I want to imagine, yet here we are.
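A trivial example of the context problem: a context-blind filter (the word list below is hypothetical) hands a genuine threat and an obvious joke the exact same verdict, because it has no model of intent.

```python
# Context-blind keyword filter with a hypothetical blocklist. It cannot
# tell a threat from a joke about pizza, because matching words is not
# the same as understanding them.

BANNED = {"destroy", "kill"}  # hypothetical blocklist

def keyword_flag(text: str) -> bool:
    """Flag text if any word, stripped of punctuation, is on the blocklist."""
    return any(word.strip(".,!?") in BANNED for word in text.lower().split())

print(keyword_flag("I will destroy you"))                             # True
print(keyword_flag("this pizza is so good it will destroy my diet"))  # True, same verdict
```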
