Feature
Rise of killer robots seems inevitable at EU conference
Belgian police officers apprehend a protester outside the European Defence Agency's annual conference. (Photo: Vredesactie)
By Peter Teffer
Either Europe's military-industrial complex is incredibly shy - or it thinks that the debate about whether Europe should use lethal autonomous weapons was over before it began.
The European Defence Agency held its annual conference in Brussels on Thursday (29 November), titled 'From unmanned to autonomous systems: trends, challenges and opportunities'.
At the last panel, one of the speakers said that it was inevitable that Europe would develop such military systems, because its adversaries would.
EUobserver subsequently asked if any of the other panel members, or anyone in the audience, believed that it was not inevitable that Europe would use fully autonomous lethal weapons.
No one among the roughly 100 participants - which included EU military brass, companies, civil servants working in defence ministries and EU institutions - raised their hand.
The only person to respond was Milan Rollo, the speaker who had made the remark about inevitability, and he did so to clarify what he had meant.
Rollo is a robotics researcher at the Czech Technical University, as well as chief technology officer at AgentFly Technologies, a company offering services related to drones.
"I wouldn't say it's inevitable to deploy them. It is inevitable to possess it," he said.
He drew an analogy with nuclear weapons, saying Europe needed to possess such weapons in order to deter the enemy.
"Even just having these capabilities should frighten the opponent from any adversarial actions," he noted.
"They are already on the battlefield, we cannot avoid it," added general Riho Terras, chief of defence of Estonia.
Civil society
Given the guest list, it was perhaps not surprising that no one took the position that so-called killer robots can still be stopped.
Ahead of the conference, peace activists had criticised the European Defence Agency (EDA) for not inviting anyone from civil society - something that was demonstrated by an access to documents request filed by Belgian activist Bram Vranken.
Attempts were made to suggest otherwise.
In her opening speech, Federica Mogherini, the EU's foreign policy supremo and the defence agency's head, specifically mentioned civil society.
"If you look around today, you will see people in uniform, civilians, civil servants from the European institutions, from member states, as well as representatives of the European defence industry and of the civil society," she said.
This website could not identify any civil society representatives, and all audience questions came from military personnel (almost exclusively men), industry representatives, civil servants, and press.
Several journalists were allowed into the conference, which was not broadcast and of which, the EDA said, no recording existed.
The moderator of one panel introduced speaker Frans Bekkers, director of the security programme at the Hague Centre for Strategic Studies, as a representative of civil society.
But Bekkers himself thought that was something of a stretch, saying he represented only think tanks.
The panels did not include any speaker seriously opposed to lethal autonomous weapons, although some suggested that the debate should focus on non-lethal military applications of artificial intelligence.
Demonstration
At the beginning of the conference, protesters had gathered outside the conference building in Brussels.
Activist Vranken's organisation Vredesactie said in a press statement that activists had been roughly removed by police - and published photos of the incident.
Mogherini referred to the protests in her speech.
"I really hope that the security concerns that the Belgian police might have do not impede anyone to express their opinions and ask to have a dialogue that I am sure is and can always be constructive with everybody in civil society," said Mogherini.
But the incident hung in the air at what was meant to be a meeting to get citizens on board with new developments in the field.
"We need to get the buy-in of citizens," said Jorge Domecq, chief executive of the European Defence Agency.
"We cannot provide security with tools that are rejected by the population where armed forces are deployed, or by our own countries' [citizens]," he said.
"It is very important that we show to our citizens what these systems are going to provide," he added.
For his part, researcher Frans Bekkers stressed that the use of artificial intelligence (AI) in the military is not only about killer robots.
AI can be non-lethal
Autonomous cargo drones could drop off supplies to the frontline, self-driving machines could remove land mines, and AI can be used to develop prediction models.
"I don't want to belittle the killer robot [debate] because it is very important. But at the same time it shouldn't hijack the whole discussion … on autonomous systems," said Bekkers.
Despite the absence of peace activists, their concerns were mentioned by speakers.
Mogherini said that the EU was trying to shape the debate on what rules should be introduced in the field of autonomous weapons.
"All weapon systems should comply with international law, and humans must always remain in control of the use of lethal force," said Mogherini.
But while several speakers acknowledged the dangers of an out-of-control rise of the machines, there were few ideas on how to prevent it.
International diplomacy and rule-setting were mentioned, as well as certification of, and standards for, weapons systems.
Global arms race
The general tone was one of urgency, and the notion that Europe needed to catch up, or at least not fall behind other powers such as China and the US.
Some said, as a pro-automation argument, that humans were responsible for the greatest atrocities of the 20th century.
But this overlooked the fact that automated systems are programmed by those same flawed humans.
As with drones and other military innovations, the argument was made that automated systems could reduce the loss of civilian life in warfare, by reportedly being more accurate than humans.
Privately, one participant told this website that in democracies, no politician in their right mind would order a lethal autonomous weapon system which lacked accountability for its actions.
But he also said he thought dictatorships would have fewer qualms about that.