US Department of Defense adopts ethical principles for AI use

James has a passion for how technologies influence business and has several Mobile World Congress events under his belt. James has interviewed a variety of leading figures in his career, from former Mafia boss Michael Franzese to Steve Wozniak and Jean-Michel Jarre. James can be found tweeting at @James_T_Bourne.

The US Department of Defense (DoD) has formally adopted a set of ethical principles for the military use of artificial intelligence (AI).

In October 2019, the Defense Innovation Board provided recommendations on the use of the technology to Secretary of Defense Dr. Mark T. Esper. These recommendations followed 15 months of consultation with leading AI specialists across government, academia, the commercial sector, and the general public.

The move aligns with the DoD’s AI strategy objective of having the country’s military lead in AI ethics and the lawful use of AI systems. The principles will build on the US military’s existing ethics framework, which is based on the US Constitution, Title 10 of the US Code, the Law of War, existing international treaties, and longstanding norms and values. While that framework provides a technology-neutral and enduring foundation for ethical behaviour, the use of AI raises new ethical uncertainties and risks. The new principles are intended to address these emerging challenges.

The European Union has likewise made ethics and transparency central to its newly launched strategies for AI and the “data economy”. In its statement, the European Commission (EC) described its vision as a “European society powered by digital solutions that put people first, open up new opportunities for businesses, and boost the development of trustworthy technology to foster an open and democratic society and a vibrant and sustainable economy”. According to the EC, the focus will be on three key digital objectives: “technology that works for people, a fair and competitive economy, and an open, democratic and sustainable society”.

A UK government report issued earlier this month found the government was ‘failing on openness’ with regard to its AI usage, although it did not propose a dedicated regulator as the answer. The report added that fears over ‘black box AI’, whereby data produces results through unexplainable methods, were largely misplaced. It advocated applying the Nolan Principles when introducing AI into the UK public sector, arguing they did not need reformulating.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.
