60 countries sign the “call for the responsible military use of Artificial Intelligence”
Many countries, led by the US, are working on what is being framed as the "responsible" use of Artificial Intelligence for military purposes. In fact, 60 countries have signed a non-binding preliminary agreement that lays the groundwork for what they hope will eventually become a legally binding treaty. The "Call to Action to Support Responsible Military Use of AI" is accompanied by a second document launched by the United States: the "Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy". What topics does it cover exactly?
The agreement came out of the international summit on the responsible military use of AI held in The Hague, in the Netherlands. It is the first conference of its kind, and the US is notably keen, for reasons it has not spelled out, on reaching an agreement on this highly controversial aspect of Artificial Intelligence. Are there hidden interests?
AI hardware and algorithms will be key in the wars of the future
At that summit, US Under Secretary of State for Arms Control and International Security Bonnie Jenkins said:
"We invite all states to join us in implementing international standards regarding the military development and use of AI and autonomous weapons."
That "join us" makes it clear who wants to lead the regulation, and the same intent is evident in the declaration's preamble:
The military use of AI can and should be ethical, responsible, and enhance international security. The use of AI in armed conflict must comply with applicable international humanitarian law, including its fundamental principles. Military use of AI capabilities must be accountable, including through such use during military operations within a responsible human chain of command and control. A principled approach to the military use of AI must include careful consideration of risks and benefits, and must also minimize unintended bias and accidents. States must take appropriate measures to ensure the responsible development, deployment, and use of their military Artificial Intelligence capabilities, including those that enable autonomous systems.
The text is very ambiguous and leaves a lot open to interpretation, so in recent days the US has gone further and presented what it considers to be the 12 best practices listed in the political declaration mentioned above.
An entire military arsenal will have Artificial Intelligence before long
It is not only future military hardware and the new algorithms that will bring it to life that are covered; the discussion also takes in current nuclear weapons, existing military designs, and other scenarios that were on the table. Today's weapons are already relatively intelligent, but that is not the main concern being addressed: the real worry is the autonomous weapons that could decide future wars.
For this reason, among the 12 practices mentioned there are points dealing with personnel training, audit methodologies, and even the military capabilities each type of AI may be given. Above all, it is repeatedly stated that a human being must remain responsible within the chain of command, even for autonomous systems and especially where nuclear weapons are concerned.
But it goes even further:
"States should design and implement military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to shut down or disable deployed systems that demonstrate unintended behavior."
As we can see, everything seems to have been thought of, yet this issue of AI for military use still lacks concreteness and depth: the systems affected must be specified, Artificial Intelligence itself must be defined in this field, and countless other particulars remain open. To get there, more than the 60 countries that have already met will have to join in and agree on all the limits, so that war does not turn into something like the battlefield of Skynet, which no longer seems so far-fetched given what has been said.