Microsoft’s artificial intelligence capabilities have sparked controversy, as allegations surface that the company’s AI image generator produces offensive and inappropriate content.
According to a CNN report, a Microsoft employee has filed a complaint with the US Federal Trade Commission, warning that the company’s AI system, Copilot Designer, has “systemic issues” allowing it to regularly create sexualized and humiliating images of women.
The whistleblower, software engineering manager Shane Jones, accused Microsoft of falsely promoting the product as safe for young users, despite its propensity to generate such harmful images. This revelation comes as the technology industry grapples with the challenges of responsible development and deployment of AI image generation tools.
Earlier, Google faced similar backlash over its Gemini AI’s treatment of images based on race, prompting the company to pause the generation of human images while working on improvements. As artificial intelligence increasingly permeates consumer products, ensuring these systems meet ethical standards remains a major concern.
Read also: Kerala School Introduces Iris, India’s First AI Teacher