
Microsoft launches AI moderation tool


Azure AI Content Safety claims to be able to detect “inappropriate” images and text and assign severity scores

Microsoft on Tuesday launched an artificial intelligence-powered content moderation tool that is supposed to be able to flag “inappropriate” images and text in eight languages and rank their severity.

Called Azure AI Content Safety, the service is “designed to foster safer online environments and communities,” according to Microsoft. It is also intended to neutralize “biased, sexist, racist, hateful, violent and self-destructive content,” the software giant told TechCrunch via email. While it is integrated into Microsoft’s Azure OpenAI Service dashboard, it can also be deployed in non-AI systems such as social media platforms and multiplayer games.

To avoid some of the most common pitfalls of AI content moderation, Microsoft allows users to “refine” Azure AI’s filters for context, while the severity rankings are meant to allow for quick auditing by human moderators. A spokesperson explained that “a team of language and equity experts” had designed the guidelines for the program to account for “culturally sensitive [sic] language and context.”
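For a sense of what integrating the service might look like, below is a minimal Python sketch against what Microsoft documents as the text-analysis REST endpoint. The resource URL, key, API version, and response fields (categoriesAnalysis, category, severity) are assumptions drawn from Microsoft’s public documentation, not from this article, and may differ from the live service.

import requests

# Placeholders: substitute a real Azure resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

def analyze_text(text: str) -> list[dict]:
    """Submit text for moderation and return per-category severity scores."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},  # assumed GA API version
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape:
    # {"categoriesAnalysis": [{"category": "Hate", "severity": 0}, ...]}
    return response.json().get("categoriesAnalysis", [])

if __name__ == "__main__":
    for result in analyze_text("Example post to screen before publishing."):
        print(f"{result['category']}: severity {result['severity']}")

In practice, a platform would route anything above a chosen severity threshold to the human moderators Microsoft recommends keeping in the loop.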


A Microsoft spokesperson told TechCrunch that the new model is “able to better understand content and cultural context” than previous models, which often failed to pick up on contextual cues and flagged benign content unnecessarily. However, they acknowledged that no AI is perfect and “recommend[ed] using a human-in-the-loop to verify results.”

The technology that powers Azure AI is the same code that keeps Microsoft Bing’s AI chatbot from going rogue, a fact that may give pause to early adopters of the ChatGPT competitor. During its introduction in February, Bing memorably tried to convince reporters to leave their spouses, denied obvious facts such as the date and year, threatened to destroy humanity, and rewrote history. After significant negative media attention, Microsoft limited Bing chats to five questions per session and 50 questions per day.

Like all AI content moderation programs, Azure AI relies on human annotators to label the data it is trained on, which means it is only as unbiased as its programmers. This has historically led to public-relations headaches: nearly a decade after Google’s AI image-analysis software labeled photos of Black people as gorillas, the company still disables the option to visually search not only for gorillas but for all primates, lest its algorithm mistakenly flag a human.

Microsoft fired its AI ethics and safety team in March, even as hype around large language models like ChatGPT reached fever pitch and the company poured billions more into its partnership with OpenAI, the chatbot’s developer. The company recently dedicated more staff to pursuing artificial general intelligence (AGI), in which a software system becomes capable of conceiving ideas that were not programmed into it.
