
On the warpath: AI’s role in the defence industry

BBC, By Christine Ro, Technology reporter, 23 Aug 23

Alexander Kmentt doesn’t pull his punches: “Humanity is about to cross a threshold of absolutely critical importance,” he warns.

The disarmament director of the Austrian Foreign Ministry is talking about autonomous weapons systems (AWS). The technology is developing much faster than the regulations governing it, he says. “This window [to regulate] is closing fast.”

A dizzying array of AI-assisted tools is under development or already in use in the defence sector.

Companies have made different claims about the level of autonomy that is currently possible.

A German arms manufacturer has said that, for a vehicle it produces which can locate and destroy targets on its own, there is no limitation on the level of autonomy. In other words, it is up to the client whether to allow the machine to fire without human input.

An Israeli weapons system previously appeared to identify people as threats based on the presence of a firearm – though such systems can, like humans, make errors in threat detection.

And the Australian company Athena AI has presented a system that can detect people wearing military clothes and carrying weapons, then put them on a map.

“Populating a map for situational awareness is Athena’s current primary use case,” says Stephen Bornstein, the chief executive of Athena AI. …

Many current applications of AI in the military are more mundane.

They include military logistics, and data collection and processing in intelligence, surveillance and reconnaissance (ISR) systems. …

However, it’s in the realm of weapons deployment that people really tend to worry about militarised AI.

The capacity for fully autonomous weapons is there, cautions Catherine Connolly, the automated decision research manager for the campaign network Stop Killer Robots.

“All it requires is a software change to allow a system to engage the target without a human actually having to make that decision,” according to Ms Connolly, who has a PhD in international law and security studies. “So the technology is closer, I think, than people realise.”
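Her point can be made concrete with a toy sketch. The Python snippet below is purely illustrative and is not based on any real weapons system: every name, flag and threshold in it is hypothetical. It shows how a single configuration value could be the only thing separating a human-in-the-loop design from one that engages a target on sensor input alone.

```python
# Hypothetical sketch of Connolly's "software change" argument.
# Nothing here reflects any real system; all names are invented.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person with firearm", as classified by a sensor model
    confidence: float  # classifier confidence, 0.0 to 1.0

# The hypothetical flag: flipping this one value is the "software change"
# that removes the human from the decision.
REQUIRE_HUMAN_CONFIRMATION = True

def human_operator_confirms(det: Detection) -> bool:
    """Stand-in for a human review step (a console prompt in this sketch)."""
    answer = input(f"Engage {det.label} (confidence {det.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def decide_engagement(det: Detection, threshold: float = 0.9) -> bool:
    """Return True if this hypothetical system would apply force."""
    if det.confidence < threshold:
        return False  # below the automated threshold, no action either way
    if REQUIRE_HUMAN_CONFIRMATION:
        return human_operator_confirms(det)  # human-in-the-loop
    return True  # fully autonomous: sensor input alone triggers engagement

if __name__ == "__main__":
    print(decide_engagement(Detection("person with firearm", 0.95)))
```

In this sketch the human review is a blocking prompt; real systems would of course be vastly more complex, but the structural point stands: the gate between human control and full autonomy can be a single line of configuration.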

One argument advanced by proponents of AI-enabled weapons systems is that they would be more precise. But Rose McDermott, a political scientist at Brown University in the US, is sceptical that AI would stamp out human errors.

“In my view the algorithms should have brakes built in that force human oversight and evaluation – which is not to say that humans don’t make mistakes. They absolutely do. But they make different kinds of mistakes than machines do.”

It can’t be left to companies to regulate themselves, says Ms Connolly. …

So that the speed and processing power of AI don’t trample over human decision making, Ms Connolly says the Stop Killer Robots campaign is looking for an international legal treaty that “ensures meaningful human control over systems that detect and apply force to a target based on sensor inputs rather than an immediate human command”.

She says regulations are urgent not only for conflict situations, but also for everyday security.

“We so often see that the technologies of war come home – they become used domestically by police forces. And so this isn’t only just a question of the use of autonomous weapon systems in armed conflict. These systems could also then become used by police forces, by border security forces.” … More: https://www.bbc.com/news/business-66459920

August 27, 2023 | technology
