The warning on 30 May 2023 by industry experts that Artificial Intelligence (AI) might lead to human extinction was a sobering moment. Given this, I wondered again what ethical leadership challenges – relating to the protection of human rights and the restoration of the environment – this new technology poses.
I started by asking ChatGPT, a “Large Language Model” (LLM) AI system:
“1. Ensuring data privacy: ….
2. Promoting responsible use of AI: Leaders should ensure that AI systems are being used in a responsible way, free from bias and discrimination.
3. Developing AI systems ethically: Leaders should ensure that AI systems are developed with ethical considerations in mind, including avoiding the use of algorithms that have the potential to cause harm.
4. Addressing job displacement: ….
5. Ensuring accountability: Leaders should ensure that AI systems are held accountable for their actions, and that any mistakes made by AI systems are addressed.
6. Encouraging responsible innovation: Leaders should encourage the responsible use of AI and the development of ethical AI systems.”
It’s okay as far as it goes, and a bit more candid about the risks associated with AI than, for example, Microsoft’s happy-clappy advertising for “Copilot”, the AI it is embedding into its Office suite. That advertising focuses on the undeniable promise of productivity that AI brings, for some professionals at least.
But the ethical choice-making that ChatGPT asserts is so important in AI depends on morality, and it is not at all clear on what moral basis AI algorithms are being constructed.
In its April 2023 edition, the Economist noted that some AI systems “produced
strange results. Bing Chat suggested to a journalist that he should leave his wife. ChatGPT
has been accused of defamation by a law professor. LLMs produce answers that have the
patina of truth, but often contain factual errors or outright fabrications.” I found this myself when I asked ChatGPT about me: some biographical details were correct, such as the fact that I have written two books, but it could not get anywhere close to their correct titles and so just made stuff up. I think that may be the sort of thing that Microsoft, euphemistically, calls “usefully wrong.”
Still, these are trivial enough errors: they are not going to cause an existential crisis for
humanity. But, as leading experts have already warned, AI itself might yet. In April 2023 the Economist reported that, “The degree of existential risk posed by AI has been hotly debated. Experts are divided. In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be “extremely bad (eg, human extinction)”. But 25% said the risk was 0%; the median researcher put the risk at 5%. … researchers worry that future AIs may have goals that do not align with those of their human creators.”
A 5% risk is not a trivial one. This sort of risk was a matter that Isaac Asimov famously pondered when he developed his laws of robotics in the 1940s. He formulated three laws, the first of which was that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But Asimov realised, as any viewer of the film I, Robot will remember, that something was missing. So he formulated his “Zeroth Law”: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
There is an argument that you cannot, and some would say should not, build morality into
machines. For example, Asimov’s first law would incapacitate some of the lethal hardware
so beloved of armchair militarists. But it seems inconceivable that any AI should be permitted to operate without some robust moral system to constrain its most dangerous excesses.
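To make the idea of rules that constrain an AI’s most dangerous excesses a little more concrete, here is a deliberately naive sketch, in Python, of Asimov’s hierarchy expressed as ordered vetoes on a proposed action. It is a thought-experiment only: the action attributes are invented for illustration, and nothing here resembles how real AI systems are actually governed.

```python
# A toy thought-experiment: Asimov's laws imagined as an ordered list of vetoes
# that a proposed action must clear before it is carried out. The action fields
# ("harms_humanity", "harms_human") are invented purely for illustration.

def violates_zeroth_law(action):
    """Would the action harm humanity, or allow humanity to come to harm?"""
    return action.get("harms_humanity", False)

def violates_first_law(action):
    """Would the action injure a human being, or allow one to come to harm?"""
    return action.get("harms_human", False)

# Precedence matters: the Zeroth Law is checked first and outranks the First.
LAWS = [violates_zeroth_law, violates_first_law]

def permitted(action):
    """An action is allowed only if no law in the hierarchy vetoes it."""
    return not any(law(action) for law in LAWS)

print(permitted({"description": "suggest a recipe"}))                    # True
print(permitted({"description": "fire a weapon", "harms_human": True}))  # False
```

The only point the toy makes is the one Asimov himself made: precedence matters, and the Zeroth Law has to sit above the First.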
There may be better moral systems to guide AI than Asimov’s laws. But if AI is trying to
break up marriages on a whim or defaming a law professor, or anyone else for that matter,
it appears that it does not yet have any moral guidance at all.
So, here’s the rub. If programmed from the outset with some key moral principles, computers will not forget them as they write increasingly advanced programs for future AI generations. However, it seems that many of the human beings initiating these AI processes have sometimes eschewed moral principles in the rush to technological advance.
This should not, perhaps, be surprising. In recent years we have seen a number of controversies relating to the use of information technology. In the UK, for example, a group of wealthy ideologues convinced a plurality of British voters, in part through the manipulation of information systems, to vote for Brexit, unconcerned by the damage it would do to the economy, to Irish peace, and to the fragile bonds that hold their own country together. Similar information manipulation was at play in the election of Donald Trump in 2016. Yet more seriously still, the manipulation of information systems was also a major factor in instigating the genocide against the Rohingya people in Myanmar that same year.
When confronted with the issues arising from these events, some of the leading industry
figures involved have proven themselves moral vacuums. And these are the people who will
be leading much of the industrial development of AI. Will they be as concerned as Asimov
was about any potential threats to humanity arising from their work?
In spite of the information industry’s warning about the risk of human extinction, I would not want to bet my life on it. The leaders of so many other industries are already overseeing an environmental collapse with no discernible concern for a future that will threaten the lives and livelihoods of their children and grandchildren. The 30 May 2023 warning of the perils of AI aside, tech leaders have so far proven themselves no more concerned than those leaders with the consequences of the moral choices that they are making for their businesses. For some, the scientific innovation associated with AI will be just too fascinating to eschew. Others will not be concerned with the future if they can make lots of money now.
The Economist reports that the EU is considering robust regulation on the development of
AI, and the Biden administration has started a consultation on the same thing. These are
positive moves, but no one should rest easy yet. Unsurprisingly, for a government (and
opposition) that lacks the moral courage to tell the truth about the realities of Brexit, the UK
has until now been proposing a “light touch” approach to AI regulation. This is in the hope of attracting some unregulated tech businesses to compensate somewhat for the industries that their Brexit has already devastated.
In the face of such a pusillanimous abrogation of responsibilities, ethical leaders in business
and the citizenry alike need to respond: to make different professional choices that ensure
that the preservation of life and the restoration of the environment are at the heart of their
organisational strategies, and, through protest and political engagement, to demand that
politicians do the right thing, not the easy one.
Protest is, and always has been, leadership. But, given the crises facing humanity currently, it
has never been so urgent. And, given the rapidity of AI’s development, the moment at which
it can be constrained by law, regulation and morality may be receding as quickly as the opportunity to stave off ecological collapse.