As weapons of mass destruction and the threat of nuclear war parade the headlines, an alternative claim has emerged that bears an even deeper threat to the human race – artificially intelligent weaponry. Expanding on comments Vladimir Putin made to an audience of schoolchildren on the subject – ‘whoever becomes the leader in this area will rule the world’, tech tycoon Elon Musk has tweeted that AI threats need to be taken more seriously, stating ‘WWIII may be initiated not by the country leaders, but one of the AI’s, if it decides that a pre-emptive strike is most probable path to victory’. Of course, robots turning against humans and taking over Earth is nothing new in the world of sci-fi, but with technology developing at such a rapid pace, these works of fiction may soon become reality.

In response to Musk’s call to take potential AI threats more seriously, much of the debate has centered on Russia’s involvement with the technology. On one hand, the country is flying under the radar with its AI advancements, with Silicon Valley and Western universities stealing the limelight. On the other, it could be argued that Russia stands little chance of winning an AI arms race, given its sheer lack of reported breakthroughs and its absence from any ranking of AI research or investment. But if Putin is talking publicly about the power of AI, it almost certainly means he is investing in it. Russia aside, though, what are the detrimental consequences for the rest of the world if any country wins this race?

In 2015, it was reported that an AI system had taught itself to play chess at master level in less than 72 hours. A game that can take years to master was suddenly conquered within hours, rendering competition from the human mind all but obsolete. This small-scale example hints at the frightening reality of bigger things to come. Apply the concept to the framework of a world war and we face destructive machinery and weapons capable of outpacing any form of human thought or decision-making. Essentially, the result would be self-sufficient, AI-programmed machinery fighting other AI-programmed machinery, and from then on where the power lies is alarmingly uncertain. AI researchers and notable scientists have consistently condemned the concept of AI weaponry. Stephen Hawking has notably said that ‘once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate’. Apply this theory to weapons of mass destruction and you have a disaster waiting to happen. If a computer can teach itself to play chess at a near-unbeatable level, where is the reassurance that AI weaponry can be controlled once it surpasses a particular point? Would a computer used for the purpose of world war suddenly realise that it also has the ability to take over the human race, whether by working with other AI machines or entirely on its own?

If AI threats are to be taken seriously, it seems sensible not to underestimate Russia in the AI race, just as the world tends to underestimate it in other areas of technological advancement. Russian start-up companies are generally known only to experts, and while its lesser-known tech companies have invested extensive resources in research, their achievements are often overshadowed by larger rivals, which has even led some to seek opportunities outside of Russia. Funnily enough, this is not the first time the country has made headlines on the subject. In July this year, it was reported that Russian arms manufacturer Kalashnikov would be launching a range of autonomous combat drones capable of identifying targets and making important decisions without human interference. Although combat drones have been around for over a decade now, they have never before been independently functional; these new machines are expected to run on a self-taught basis, whereby the longer they are in operation, the smarter they become. There have also been reports of Russia utilising AI for other military purposes, from drones to assisting pilots in flying fighter planes. These reports, however, are all state-sponsored, so it is unclear how accurate they are, or whether they have been released simply to stir a reaction.

Interestingly, Putin has claimed that if Russia were to become the first country to master AI, it would share its findings with the rest of the world, in much the same way as its nuclear technology was shared. But if Russia does not win the AI race, what would the potential consequences be for the planet if another country came out on top? That largely depends, of course, on which country succeeds. Would it be worse if a country claimed to have mastered AI and refused to share it with the rest of the world? Would it be more beneficial for the knowledge to be shared? And would that sharing be selective, ultimately sparking even more conflict? At this point the answers are largely unknown, as is the behaviour of such intelligence once it eventually, and perhaps inevitably, takes form.