A university in China has recruited 27 boys and four girls to become the world’s youngest AI weapons scientists.
All of the students are under 18 and were picked from a list of 5,000 candidates by the Beijing Institute of Technology (BIT).
Beyond academic prowess, the BIT sought other qualities in the candidates.
“We are looking for qualities such as creative thinking, willingness to fight, a persistence when facing challenges,” a BIT professor told the South China Morning Post.
The recruitment of students at such a young age marks an escalation in the race to weaponise AI, a race led primarily by the US and China.
Students on the ‘Experimental Program for Intelligent Weapons Systems’ course will be mentored by two senior weapons scientists.
Following their first semester, the students will be asked to choose a speciality field in order to be assigned to a relevant defence laboratory for hands-on experience.
The course is four years long and students will be expected to progress onto a PhD at the university to lead China’s AI weapons initiatives.
Last year, Chinese President Xi Jinping emphasised that his country would place a much greater focus on military AI research.
AI News reported back in July that China is planning for a new era of sea power with unmanned AI-powered submarines. The country hopes to have them operational by the early 2020s to patrol areas home to disputed military bases.
“The AI has no soul. It is perfect for this kind of job,” said Lin Yang, Chief Scientist on the project. “[An AI sub] can be instructed to take down a nuclear-powered submarine or other high-value targets. It can even perform a kamikaze strike.”
Of particular concern is that China’s subs are being designed not to seek input during the course of a mission. The international norm being promoted by AI researchers is that any weaponised AI system ultimately requires human input to make decisions.
If China is prepared to fully automate their submarines, it’s likely they’re willing to do so for other weapons systems.
There’s the infamous story of Soviet officer Stanislav Petrov, who decided not to report an incoming strike after a computer glitch made it appear that five Minuteman intercontinental ballistic missiles had been launched by the US towards the Soviet Union.
Human instinct averted a nuclear disaster that day.
“We are wiser than the computers,” Petrov said in a 2010 interview with the German magazine Der Spiegel. “We created them.”
Had it been an AI instead of Petrov making the decision in 1983, the outcome would likely have been very different. China’s apparent willingness to fully automate weapons should be a concern to us all.