In the future, if armies deploy autonomous robot soldiers and they fire on the wrong targets, who will we hold responsible - the general who deployed them or their designer? Singapore law professor Simon Chesterman asks in We, the Robots? Photo: Getty Images/Tetra Images RF

Review | When AI goes wrong, who’s to blame, Singapore law professor asks; do we legally treat algorithms and machines as we once did mercenaries and miscreant animals?

  • Simon Chesterman, a law professor in Singapore, asks some sobering questions about legal responsibility for the decisions of AI machines and algorithms
  • Like mercenary troops, algorithms that decide on your guilt or innocence, or your right to entitlements, lack moral intuition, he notes. So are we still in control?

We, the Robots? Regulating Artificial Intelligence and the Limits of the Law by Simon Chesterman, pub. Cambridge University Press

Isaac Asimov’s 1950s I, Robot stories were mostly to do with the problems of programming and regulating artificial intelligence (AI), and the practical, moral and legal consequences of hardwiring three governing laws into mechanical servants. He was less interested in the dystopian futures run by self-aware machines seen later in The Terminator or The Matrix but already common in science fiction even in his day.
Simon Chesterman’s non-fiction We, the Robots? is likewise less concerned with a hypothetical, still higher-tech future than with the issues already arising from the introduction of machine autonomy and the use of algorithms in decision-making.

Chesterman, dean and professor of law at the National University of Singapore, brings a sober but readable approach to a subject otherwise much given to speculation and fearmongering. He enlivens his work with stories from the real world: accidents involving self-driving cars; stock market collapses caused by automated trading; biases in the opaque proprietary software used to assess the likelihood an individual will default on a loan or repeat a criminal offence.

Passengers get off an autonomous bus developed by Chinese tech giant Baidu in Chongqing, where China’s first autonomous bus line started commercial operations in April 2021. Photo: Chen Shichuan/VCG via Getty Images

Existing laws and regulations, whose design was predicated on the direct involvement of humans, are already struggling to cope with problems arising merely from the speed of transactions made possible by ever-faster processors and computer-to-computer communications. Examples include the “flash crash” of 2018, in which US stock markets took a tumble driven by overenthusiastic algorithmic trading systems doing thousands of deals with each other in a matter of seconds.

When an autonomous vehicle hits a pedestrian, who is to be held responsible? Is it the human supervisor who should have overridden the system in time? The system’s designer? The vehicle’s owner?

At what point does it become possible to view the system itself as some sort of responsible legal entity? For now, “punishing” an AI simply means treating the error as data with which to improve the system.

What if autonomous weapons systems already in development violate international humanitarian law, by failing to distinguish between targets that are incapacitated or surrendering and those that remain a threat? What military commander will bear responsibility for a system that selects targets and acts by itself?

Even here, historical debate may provide some answers, as questions of control and responsibility have been asked for centuries about equally autonomous mercenaries, such as those hired for use in African civil wars or the Swiss private regiment that has protected successive popes since the early 1500s.

A trader works on the floor of the New York Stock Exchange. In 2018, algorithmic stock market traders triggered a “flash crash” of US stock prices. Photo: Scott Heins/Getty Images

As Chesterman points out, reliance on mercenaries came to be seen as “not only inefficient but suspect: a country whose men did not fight for it lacked patriots; those individuals who fought for reasons other than love of country lacked morals.”

What love of country do fighting machines have? What moral intuition can be found in systems now beginning to make decisions on who should receive government benefits? What legal fairness is there in algorithms whose secretive nature means their reasoning cannot be analysed and challenged?

“Some functions,” Chesterman suggests, “are ‘inherently governmental’ and cannot be conferred to contractors, machines, or anyone else.”

In discussing attempts to hold AI to account, he again shows that not everything is new about the legal problems that face us, drawing parallels with medieval trials of animals that had caused harm to human beings. He discusses whether existing laws might be adapted to hold programmers or remote operators responsible for negative outcomes from the use of AI, but wonders whether such systems will eventually become so autonomous that responsibility must be reconsidered. Perhaps AI will eventually have to be involved in regulating AI.

He does conclude by looking to the future, particularly the application of AI to legal matters, although AI judges are already being tested in mainland China. What chance of the sort of transparency required for fairness there?

Simon Chesterman.
The cover of Chesterman’s book.

“The emergence of fast, autonomous, and opaque AI systems forces us to question the assumption of our own centrality,” says Chesterman. He concludes that it is not yet time for us to relinquish our overall control. But that such a serious commentator even entertains the question may alarm some readers more than any hysteria about a potential AI takeover.
