Microsoft dropped some potential deals over AI ethical concerns

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon.

According to a director at Microsoft Research Labs, the company has dropped some potential deals with customers over ethical concerns that its AI technology may be misused.

Eric Horvitz made the revelation while speaking at the Carnegie Mellon University – K&L Gates Conference on Ethics and AI in Pittsburgh. He says the group at Microsoft that looks into possible misuse on a case-by-case basis is the Aether Committee ("Aether" stands for AI and Ethics in Engineering and Research).

“Significant sales have been cut off,” Horvitz said. “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type.’”

Horvitz, of course, did not reveal the specific companies with which Microsoft decided not to pursue deals. However, it is pleasing to hear the company putting ethics above money when it comes to artificial intelligence. Any abuses would be widely covered and hamper the technology's potential.

Amid the fallout of the Facebook and Cambridge Analytica scandal, in which harvested data was used to target voters during the 2016 U.S. presidential campaign, people are naturally more wary of anything involving mass data analysis.

Manipulating votes is one of the key concerns Horvitz raises for the abuse of AI, along with human rights violations, increasing the risk of physical harm, or preventing access to critical services and resources.

Conversely, we've already seen how AI itself can be manipulated, and from Microsoft itself, no less. The company's now infamous 'Tay' chatbot was taught by people online to spew racist comments. "It's a great example of things going awry," Horvitz acknowledged.

Rather than replacing humans, Horvitz wants AI to complement them, often serving as a backstop for human decisions. However, it could still be invoked for tasks where a human would not be as effective.

For example, Horvitz highlights a program from Microsoft AI that helps caregivers identify patients most at risk of being readmitted to a hospital within 30 days. Scholars who assessed the program determined that it could reduce rehospitalisations by 18 percent while cutting a hospital's costs by nearly 4 percent.

The comments made by Horvitz once again highlight the need for AI companies to ensure their approach is responsible and ethical. The opportunities are endless if AI is developed properly, but it could just as easily lead to disaster if not.

Update: A previous headline ‘Microsoft has dropped some deals over AI ethical concerns’ was misconstrued as meaning the company dropped some existing deals. It has been updated to reflect Microsoft decided against some possible future partnerships over ethical concerns.

What are your thoughts on Microsoft’s approach to AI ethics? Let us know in the comments.

