Artificial Intelligence and Ethical Leadership

The warning on 30 May 2023 by industry experts that Artificial Intelligence (AI) might lead to human extinction was a sobering moment. It prompted me to wonder again what ethical leadership challenges – relating to the protection of human rights and the restoration of the environment – this new technology poses.

I started by putting the question to ChatGPT, a “Large Language Model” (LLM) AI system. It replied:

“1. Ensuring data privacy: …; 2. Promoting responsible use of AI: Leaders should ensure that AI systems are being used in a responsible way, free from bias and discrimination. 3. Developing AI systems ethically: Leaders should ensure that AI systems are developed with ethical considerations in mind, including avoiding the use of algorithms that have the potential to cause harm. 4. Addressing job displacement: ….; 5. Ensuring accountability: Leaders should ensure that AI systems are held accountable for their actions, and that any mistakes made by AI systems are addressed. 6. Encouraging responsible innovation: Leaders should encourage the responsible use of AI and the development of ethical AI systems.”

It’s okay as far as it goes, and a bit more candid about the risks associated with AI than, for example, Microsoft’s happy-clappy advertising for “Copilot”, the AI that it is embedding into its Office suite. That advertising focuses on the undeniable promise of productivity that AI brings, for some professionals at least.

But the ethical choice-making that ChatGPT asserts is so important in AI depends on morality, and it is not at all clear upon what moral basis AI algorithms are being constructed.

In its April 2023 edition, the Economist noted that some AI systems “produced strange results. Bing Chat suggested to a journalist that he should leave his wife. ChatGPT has been accused of defamation by a law professor. LLMs produce answers that have the patina of truth, but often contain factual errors or outright fabrications.” I found the same when I asked ChatGPT about myself: some biographical details were correct, such as that I have written two books, but it could not find anything close to their correct titles and so just made stuff up. I think that may be the sort of thing that Microsoft, euphemistically, calls “usefully wrong.”

These are trivial enough errors: they are not going to cause an existential crisis for humanity. But, as leading experts have already warned, AI itself might yet. In April 2023 the Economist reported that “The degree of existential risk posed by AI has been hotly debated. Experts are divided. In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be ‘extremely bad (eg, human extinction)’. But 25% said the risk was 0%; the median researcher put the risk at 5%. … researchers worry that future AIs may have goals that do not align with those of their human creators.”

A 5% risk is not a trivial one. This sort of risk was a matter that Isaac Asimov famously pondered when he developed his laws of robotics in the 1940s. He formulated three laws, including his first: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But Asimov realised, as any viewer of the movie I, Robot will remember, that something was missing. So, he formulated his “Zeroth Law”: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

There is an argument that you cannot, and some would say should not, build morality into machines. Asimov’s first law, for example, would incapacitate some of the lethal hardware so beloved of armchair militarists. But it seems incontestable that no AI should be permitted to operate without some robust moral system to constrain its most dangerous excesses.

There may be better moral systems to guide AI than Asimov’s laws. But if AI is trying to break up marriages on a whim, or is defaming a law professor, or anyone else for that matter, it appears that it does not yet have any moral guidance at all.

So, here’s the rub. If programmed from the outset with some key moral principles, computers will not forget them as they write increasingly advanced programs for future generations of AI. However, many of the human beings initiating these AI processes seem, at times, to have eschewed moral principles in the rush to technological advance.

This should not, perhaps, be surprising. In recent years we have seen a number of controversies relating to the use of information technology. In the UK, for example, a group of wealthy ideologues convinced a plurality of British voters, in part through the manipulation of information systems, to vote for Brexit, unconcerned by the damage it would do to the economy, to Irish peace, and to the fragile bonds that hold their own country together. Similar information manipulation was at play in the election of Donald Trump in 2016. More seriously still, the manipulation of information systems was also a major factor in instigating the genocide against the Rohingya people in Myanmar that same year.

When confronted with the issues arising from these events, some of the leading industry
figures involved have proven themselves moral vacuums. And these are the people who will
be leading much of the industrial development of AI. Will they be as concerned as Asimov
was about any potential threats to humanity arising from their work?

In spite of the information industry’s warning about the risk of human extinction, I would not want to bet my life on it. The leaders of so many other industries are already overseeing an environmental collapse with no discernible concern for a future that will threaten the lives and livelihoods of their children and grandchildren. The 30 May 2023 warning of the perils of AI aside, tech leaders have so far proven themselves no more concerned with the consequences of the moral choices that they are making for their businesses. For some, the scientific innovation associated with AI will be just too fascinating to eschew. Others will not be concerned with the future if they can make lots of money now.

The Economist reports that the EU is considering robust regulation of the development of AI, and the Biden administration has started a consultation on the same subject. These are positive moves, but no one should rest easy yet. Unsurprisingly for a government (and opposition) that lacks the moral courage to tell the truth about the realities of Brexit, the UK has until now proposed a “light touch” approach to AI regulation, in the hope of attracting some unregulated tech businesses to compensate somewhat for the industries that Brexit has already devastated.

In the face of such a pusillanimous abrogation of responsibilities, ethical leaders in business
and the citizenry alike need to respond: to make different professional choices that ensure
that the preservation of life and the restoration of the environment are at the heart of their
organisational strategies, and, through protest and political engagement, to demand that
politicians do the right thing, not the easy one.

Protest is, and always has been, leadership. But, given the crises currently facing humanity, it has never been so urgent. And, given the rapidity of AI’s development, the moment at which it can be constrained by law, regulation and morality may be receding as quickly as the opportunity to stave off ecological collapse.

“A (hu)man must have a code”: ethical leadership and saving the world

The recent People Management article, “Codes of ethics: does every company need one?” raised a number of interesting questions.

The article revealed that only 54% of FTSE 250 companies have published codes of ethics, according to research by the Institute of Business Ethics. Of these, only 57% are considered “good”.

As Ms McConville, my English teacher at school in Newry, used to regularly ask in her efforts to coax more lucid writing from even her most inarticulate pupils, “What does ‘good’ even mean?”

Milton Friedman would have said that “good” meant making a profit for shareholders within the law. This is a moral perspective that is still widely prevalent in government and business. I have met more than one business executive who admires such guidance as an amoral underpinning for their strategic approach. But such amorality is wholly inadequate for dealing with the existential challenges facing humanity in the 21st Century. Each of those challenges – from climate change to contemporary slavery – is already a product of thousands of business and political leaders thinking that such things are somebody else’s problem.

The People Management article quotes Ian Peters, director of the Institute of Business Ethics, with another perspective on “good”. He says, “A code of ethics should be the cornerstone for any organisation, ensuring it’s doing the right thing for the right reasons.”

I strongly agree with this organisational focus on ethics, though it raises the question, “What is ‘right’?” It is striking, too, that others quoted in the article instead emphasise only personal conduct in the workplace and whistle-blowing duties and protections.

These are, of course, important issues. No one should have to endure fear and bullying in any workplace. But in my view ethics is something yet more fundamental. It is, at heart, a strategic question and, consequently, a leadership one.

In my book, Ethical Leadership: moral decision making under pressure, I define ethical leadership as the effort “to optimize life-affirming choices that seek to protect human rights and advance ecological restoration irrespective of how inhospitable the political, social or professional environment.”

Sometimes this requires dissent or “whistle-blowing”: protest is often, after all, just another name for leadership.

But ethical leadership is also about strategic choice-making. For example, a business executive who decides to source from a textile, electronics or fisheries supply chain in Asia or Africa that they know to be highly destructive of the environment and rife with exploitative labour practices will often be behaving completely legally. They may also be acting in the spirit of a code of conduct that emphasises legal compliance. But there is, nevertheless, the sulphurous whiff of the banality of evil in such choices.

A recent leading article in the Economist reported that researchers estimate a 5% risk that the current development of Artificial Intelligence systems may result in something “extremely bad (eg, human extinction).” So I, for one, am concerned about whether the executives leading the development of this technology are thinking about ethical standards beyond mere compliance with the law, particularly given that so much of the law needed to constrain dangerous AI development does not yet exist.

Perhaps they are actively thinking about these risks. But as at least some of them also seem untroubled by the manipulation of information systems that was a major factor in instigating the genocide against the Rohingya people in Myanmar in 2016, I would not want to bet my life on it.

But, like the rest of us, I may be forced to. The current precariousness of continued human existence on this planet is the result of so many political and business leaders focusing on the short-term question of immediate profit rather than the long-term questions of sustainability or, for that matter, human survival.

For humanity to have a chance, business executives and politicians must now focus on promoting choices that protect human rights and restore the environment, not just those that comply with the law and deliver short-term financial gains.

So, all businesses, indeed all leaders, need ethical codes of conduct that will compel them to make life-affirming choices the core of their business and economic strategies.