An Allied correspondent stands in a sea of rubble before the shell of a building that once was a movie theatre in Hiroshima in Japan on September 8, 1945, a month after the first atomic bomb ever used in warfare was dropped by the US to hasten Japan’s surrender. Most of those with severe radiation symptoms died within three to six weeks. Others who lived beyond that developed health problems related to burns and radiation-induced cancers and other illnesses. Photo: AP
Opinion
Peter T. C. Chang

Oppenheimer’s story is a cautionary tale in the age of AI

  • Just as the advent of the atomic bomb marked humanity’s capability for self-inflicted devastation, the era of AI signals the potential for artificial intelligence to assume control over our destiny
At the heart of the Oppenheimer tragedy lies the transformation of scientific knowledge into weaponry, bestowing upon humanity the chilling capacity for self-inflicted obliteration. Unless humankind rises above its mutual hostilities, there is a risk that even our power to self-terminate might fall under the control of a malevolent artificial intelligence.
The US’ dropping of atomic bombs on the Japanese cities of Hiroshima and Nagasaki continues to be profoundly controversial due to the staggering loss of civilian life and the catastrophic devastation inflicted. When US president Harry S. Truman congratulated J. Robert Oppenheimer on the success of the Manhattan Project, the latter reportedly said, “I feel I have blood on my hands.”

Morally conflicted, the “father of the atomic bomb” subsequently emerged as a fervent proponent of nuclear arms control. But Oppenheimer’s advocacy cast doubt on his allegiance, leading to accusations of harbouring communist sympathies and resulting in the revocation of his security clearance.

The Oppenheimer backstory unveils the intricate relationship between humankind and science, where the former wields science as a tool that, unfortunately, is susceptible to misuse for ignoble ends. In the Manhattan Project, there were two fateful features of this delicate relationship.

Firstly, while humans have historically misappropriated science on various occasions, the atomic bomb presents a unique case involving rivalry between nation-states to produce a specific weapon. Oppenheimer was in a race against Nazi Germany to develop the first atomic bomb.

Secondly, while scientific misapplication has been detrimental to humankind, the atomic bomb stands as an ominous culmination of such misuse, embodying a potentially apocalyptic weapon that has left humanity with an unsettling legacy – an enduring sense of vulnerability beneath the shadow of total annihilation.

Riddled with moral conflict, Oppenheimer sought solace in the belief that the harrowing aftermath of the bombings in Japan would serve as a deterrent against the future use of similarly devastating weapons. Indeed, since Hiroshima and Nagasaki, no nuclear bomb has been used in warfare.


‘Oppenheimer’ sparks debate in Japan ahead of 78-year anniversary of Hiroshima atomic bombing

But the mutually assured destruction (MAD) principle failed to curtail proliferation; instead, an increasing number of countries are acquiring nuclear weapons to bolster their deterrence capabilities. Consequently, countries still have large nuclear stockpiles, capable of wreaking devastation on the planet multiple times over.
Furthermore, the MAD doctrine does not eliminate the risk of rogue actors and inadvertent escalations that might culminate in the use of nuclear weaponry. For example, in the context of the war in Ukraine, Russia has openly invoked the nuclear option, underscoring the persistent danger posed by such scenarios.

Compounding the nuclear risk is the rising spectre of AI-related threats. Geoffrey Hinton, often dubbed “the godfather of AI”, warned that generative AI’s self-learning capacity is surpassing expectations and could become an “existential threat” to human civilisation.

For some, AI represents this century’s equivalent of the Oppenheimer moment – signifying the emergence of yet another potentially world-ending technology.

Similar to the Manhattan Project, the peril associated with AI is aggravated by ongoing great power rivalry. The US and China are locked in an intense tech war. Just last week, US President Joe Biden issued an executive order that restricts American hi-tech investment, specifically aimed at curbing China’s advancement in AI and other critical domains.
One consequence of the increasingly tense US-China rivalry is the heightened scrutiny faced by American scientists of Chinese ethnicity. This resurgence of suspicion rekindles the Cold War shadow of McCarthyism that similarly haunted Oppenheimer. In fact, this undercurrent of suspicion towards Chinese scientists predates the present tensions.


Two decades ago, Taiwanese-American scientist Wen Ho Lee at the Los Alamos National Laboratory – where the Manhattan Project was once housed – was accused of spying. As part of a plea bargain, US prosecutors reduced the original 59 counts against him to a single count of mishandling classified information. The case was criticised as an example of government overreach, encroachment on civil liberties and racial profiling.

Of grave concern, the US-China competition for AI dominance has now extended into the military domain. The automation of some aspects of nuclear weapons systems increases the risk of accidental conflict and introduces the danger of one day relinquishing human oversight over AI-driven nuclear systems.

A military aide carries the “nuclear football”, which contains launch codes for nuclear weapons, as he follows US President Joe Biden onto Marine One on the South Lawn of the White House in Washington on October 7, 2022. Photo: Bloomberg

At the core of the MAD doctrine lies a fundamental assumption: that human beings will act in their own self-preservation, avoiding actions that could result in their own obliteration. But a pivotal question emerges: can the same principle be attributed to artificial intelligence? Will AI demonstrate a commitment to safeguarding humanity?

The reality is that just as the advent of the A-bomb marked humanity’s capability for self-inflicted devastation, the era of AI signals the potential for artificial intelligence to assume control over our destiny, including the very authority to determine our own termination.


Throughout history, science has faithfully served as humanity’s handmaid. However, with the emergence of AI reshaping this dynamic, science may no longer be as subservient to the dictates of its human creator.

Beyond the personal, Oppenheimer’s story is also a tragedy for humanity. Fuelled by enmity between nation-states, the weaponisation of science has placed humanity in danger of nuclear annihilation.

Today, great power rivalry for AI supremacy is exacerbating that risk. Unless the US and China overcome their mutual animosity, graver scenarios loom. Oppenheimer’s chilling quote from the Bhagavad Gita – “Now I am become death, the destroyer of worlds” – could come back to haunt us, this time uttered by a malevolent AI.

Peter T.C. Chang is deputy director of the Institute of China Studies, University of Malaya, Kuala Lumpur, Malaysia
