
Could we Create a Collective Conscience?

In practice this means: could we build an AI system that tells us the right, moral decision to make in any circumstance?

Let me begin by stating that I believe the answer to this is an emphatic YES. We are definitely capable of doing so. However, building such a system will take considerable resources as well as the cooperation of a large number of people.

The question is itself both an individual and a collective question:

(a) Could it provide the basis for each one of us to make decisions in the best interests of everyone, as well as ourselves?

(b) Could it provide a means of guiding those of us in control of significant amounts of collective resources (most obviously our political and business leaders, and the billionaires who have benefitted most from our present economic system) to act for the common good?

First though, what is wrong with relying on people’s individual judgment?

The heart of the problem is that our brains have evolved for survival, not to serve any higher goal. They have evolved to compute and filter often starkly contradictory stimuli and make sense of it all, with the absolute priority of preserving the vehicle that carries them. This means our behaviour is generally driven by our need to protect a single organism and/or its perceived supporting network, which in practice usually means a family or community. Morality is then reduced to a means of evaluating the merits of different courses of action, and of using that calculation to justify them.

This level of morality is understandable in young children but (unless they are greatly abused) is generally overridden by the time they reach their teens, when they realise that some things are just plain right or wrong. Even so, at an individual level people will most often choose a course of action they perceive as good for them, rather than good for all. Examples abound, from deciding whether to buy an electric car, to walking to work, to paying for a child to be privately educated.

Our political systems have also evolved to perpetuate themselves; that is why it takes a revolution to overturn them. We know from many examples that governments take decisions to suit their own purposes, rather than the good of all. Again, examples abound: the British government's delayed decision (based on flawed models) to mandate a coronavirus lockdown, taken because it feared the political fallout of dying people overwhelming the health service; the Chinese government's decision to lie about the source and extent of the virus and subsequently to engage in a continuing disinformation campaign; or, equally infamous, the invasion of Iraq, justified by weapons of mass destruction that both the American and British governments knew did not exist.

Not only can we not rely on our leaders to act in our best interests, we should probably conclude they are institutionally and constitutionally unable to do so. This conclusion is not original: it was reached in ancient Greece by Plato, who decided that only philosophers (like him) should be allowed to govern.

We have recently confirmed that the same applies in business. Research conducted in the UK and elsewhere clearly shows that how we behave in business differs from how we behave in our private lives, and that the core behaviour of businesses is often contrary to what we consider right or moral, even when they claim to be environmentally and socially responsible. The resulting dissonance is a major cause of stress for employees.

Since we cannot rely on humans or collections of humans to make decisions in our best interests, it is surely better to build an AI system that can do so. AIs are not bound by evolutionary history. They also make decisions dispassionately. Provided we can ensure they enshrine our values then, in principle, they should be able to make decisions that maximally benefit all. This question of enshrined values is, of course, the heart of the issue.

So, what is required?

Research, mostly conducted by universities in the USA, shows that children learn their morality by rote. In other words, they learn right and wrong from the behaviour of their parents, teachers and peers, as opposed to their words and declarations, or to any methodical study of moral or social principles.

Of course, such teaching can go wrong. Children quickly learn that what people say ("It's wrong to hit people") may differ from what they do ("Daddy, why did you hit that man?"). Or, worse, hit Mummy. In this example, both the morality and the conclusions to be drawn are quite clear.

Machine learning systems work in the same way: they are trained on multiple instances of related examples, from which algorithms derive the probability that a pattern will repeat. Computer systems are also capable of storing and processing vast amounts of data. So if we could create a database capable of being extrapolated to ALL decisions we are ever likely to face then, in theory at least, we could teach a machine learning engine to make any decision on our behalf.
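To make the parallel concrete, here is a minimal sketch of the idea, assuming nothing more than a standard text classifier; the scenarios, labels and library choice (scikit-learn) are mine, purely for illustration:

```python
# A minimal sketch: learning moral judgments from labelled examples.
# The scenarios and labels below are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    "returned a lost wallet to its owner",
    "shared food with a hungry stranger",
    "took credit for a colleague's work",
    "lied to cover up a mistake",
]
labels = ["good", "good", "bad", "bad"]  # judgments gathered from people

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, labels)

# The trained model now estimates a probability for an unseen decision,
# just as a child generalises from the examples it has witnessed.
new_case = ["kept money found in the street"]
print(dict(zip(model.classes_, model.predict_proba(new_case)[0])))
```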

Creating such a database is a big task. However, anyone who has ever implemented a large-scale system knows they work best when they evolve over time.

So, the question is: could we begin to build a database of moral decisions or moral opinions?

The answer is again YES: by creating a framework, or context, for making moral judgments, we could build, one question at a time, a database that could inform our AI of the best course of action to take in any particular type of circumstance.

We could do this by trusting to democratic principles: if enough people express an opinion on a subject then, by removing those opinions that lie outside (say) one standard deviation of the mean, we should be able to find a result that the vast majority would agree is a “good” or “the best” outcome. Those opinions could be collected by a new social network, for example, where the answers to regular questions provide our database. Those answers could also be assessed to understand the moral psychology or principles of each respondent, which would further inform the database.
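As a rough illustration of that aggregation step, here is a sketch assuming a hypothetical numeric opinion scale (-5 for clearly wrong, +5 for clearly right); the figures are invented:

```python
# A sketch of the aggregation idea: discard opinions more than one
# standard deviation from the mean, then report the consensus of the
# rest. The -5..+5 opinion scale is an assumption for illustration.
import statistics

opinions = [4, 5, 3, 4, -5, 4, 5, 2, 4, -4]  # hypothetical survey answers

mean = statistics.mean(opinions)
sd = statistics.stdev(opinions)
kept = [o for o in opinions if abs(o - mean) <= sd]

print(f"raw mean: {mean:.2f}, consensus of {len(kept)} retained "
      f"opinions: {statistics.mean(kept):.2f}")
```

Run on these figures, the two outlying negative opinions are dropped and the consensus rises from 2.2 to roughly 3.9, closer to what the vast majority actually expressed.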

Would the differences of opinion on almost any subject be so vast as to make the creation of a moral database impossible?

Well, again research (this time conducted by Oxford University) shows that there is a surprisingly large degree of agreement between judgments of good and bad across all cultures. There are taxonomy and language challenges – different words are used to describe the same principles – but most people and most societies agree on seven principles as being desirable. These are: help your family, help your group, return favours, be brave, defer to superiors, divide resources fairly, and respect others’ property. I would personally add ‘respect for our planet’, but otherwise these seem comprehensive and inarguable.

These principles could be used to inform any machine learning system. In practice, this would work by adjusting any algorithms, or their results, to ensure these principles are adhered to. In my terminology, the “context” they provide is what converts a machine learning system into an AI system. The theory works in principle: a similar approach is already being used to adjust recruitment selections to remove bias inherent in data, most notably where predominantly white, male interviewers would otherwise select mostly white, male candidates. Assuming this is proven, then over time we could build a database of shared moral opinions that an AI system could utilise.
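As a sketch of how that adjustment might look in code, one simple scheme (the actions, scores and violation flags are invented, and this is one possibility rather than a finished design) is to filter out any candidate action that breaches a principle before ranking the rest:

```python
# A sketch of the "context" idea: post-adjust a model's raw scores so
# that actions violating any of the seven principles are ruled out.
# Principle names follow the list above; scores and violation flags
# are invented for illustration.
PRINCIPLES = [
    "help your family", "help your group", "return favours", "be brave",
    "defer to superiors", "divide resources fairly",
    "respect others' property",
]

candidates = [
    {"action": "redistribute surplus stock", "score": 0.72, "violates": []},
    {"action": "undercut rival by using their data", "score": 0.91,
     "violates": ["respect others' property"]},
]

def adjust(candidates):
    # Keep only actions consistent with every principle, then rank by
    # the model's original score.
    allowed = [c for c in candidates if not c["violates"]]
    return sorted(allowed, key=lambda c: c["score"], reverse=True)

for c in adjust(candidates):
    print(c["action"], c["score"])
```

Note that the higher-scoring action loses: the principles act as a constraint on the raw optimisation, not merely a tie-breaker.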

The next question to consider is: what problems or concerns will arise?

There are two very obvious concerns:

(a) Is there a danger of the AI taking control?

This should be controllable by ensuring the means of decision-making is always divorced from any means of execution. That is, the computer systems that collect and process our moral evaluations and judgments are always kept apart from any computer systems that process specific recommendations or take specific actions.
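A minimal sketch of this separation, with all class and function names hypothetical: the evaluation system can only ever return a recommendation object, and the entirely separate execution system refuses to act without explicit human approval:

```python
# A sketch of the separation principle: the moral-evaluation system
# holds no reference to, and shares no interface with, anything that
# can act. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    action: str
    rationale: str

class MoralEvaluator:
    """Collects and processes judgments; can recommend, never execute."""
    def recommend(self, situation: str) -> Recommendation:
        return Recommendation(action="consult affected parties first",
                              rationale=f"majority opinion for: {situation}")

class Executor:
    """A separate system; acts only on explicit human approval."""
    def execute(self, rec: Recommendation, human_approved: bool) -> None:
        if not human_approved:
            raise PermissionError("execution requires human sign-off")
        print(f"acting on: {rec.action}")

rec = MoralEvaluator().recommend("allocate emergency relief funds")
Executor().execute(rec, human_approved=True)
```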

(b) Could it be manipulated by individuals who control it?

The simple answer is that powerful institutions already manipulate information. Provided the basis of this system is, and remains, democratic and multi-cultural, it should limit their existing ability to do so, as well as constrain their ability to do so in future. It is likely, as well as being clear from recent work we have undertaken, that only ethical organisations produce ethical systems and products. Thus a prerequisite for any organisation entrusted to run such a system is that its management are continually surveyed and encouraged to be moral, guided by those same seven principles, and that their actions remain completely independent from any governmental interference.

So, if we built over time an AI system that could protect us all from ourselves and especially from our leaders, surely this is a good thing?

Its decisions would be directed by a database of moral opinions that most of us consider right, rather than by any individual, arbitrary government or set of programmers. Those decisions could evolve over time, as more opinions are gathered and more experiences are encountered. Because everyone who participates contributes directly, there should be a much greater sense of engagement and involvement with such a system than with any existing systems, AI-based or otherwise.

The system should also generate benefits along the way. For example, we are working to provide a better means of distributing loans, based on assessing the moral principles of applicants, whether they are individuals or companies. Loans would thus be granted more easily to those who are honest and genuinely seek to do good, and who are therefore more likely to be successful and to repay those loans. At present, loans are largely given only to those who have a track record of repaying loans, with the exception of examples such as micro-loans, which rely on the same principle as ours: identify the people most likely to do whatever it takes to remain honest.

The problem with our approach is not technical but regulatory: the British FCA, for example, correctly insists that such assessments are transparent, but equates transparency with “linear and easy to follow”, even though linear assessments are by definition far less accurate and fair than our AI system would be. This prejudice will eventually be overcome, of course.
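For contrast, here is the kind of assessment the regulator currently favours: a sketch of a purely linear scorecard (the factors and weights are invented), where every factor’s contribution to the decision can be read off directly:

```python
# What "linear and easy to follow" means in practice: a scorecard whose
# decision is a plain weighted sum, so every factor's contribution is
# directly inspectable. Factors and weights are invented for illustration.
WEIGHTS = {"repayment_history": 0.5, "income_stability": 0.3,
           "stated_purpose_verified": 0.2}

def scorecard(applicant: dict) -> float:
    # Each factor is a 0-1 value; the score is their weighted sum.
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

applicant = {"repayment_history": 0.9, "income_stability": 0.6,
             "stated_purpose_verified": 1.0}
print(f"score: {scorecard(applicant):.2f}")  # transparent by construction
```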

In the long term, the emergence of embedded nanobiotechnology devices could be problematic on both counts. But we could, and should, use our Ethical AI system to help us decide how best to deploy and control such devices. Since they will by definition have the ability to go beyond informing us to actively guiding or instructing us, for the sake of retaining free will we must also review our system before their release, taking particular care to avoid any ‘tyranny of the majority’.

The level of AI involvement could perhaps be tweaked; the sweet spot may lie somewhere between making decisions on our behalf and presenting compelling, evidence-based recommendations that we can then choose to follow, or not. It’s the most exciting part of my endeavours. I’m curious to gather and understand your thoughts – in this context, how do you see the future of human endeavour? Do you fear the advances in AI, or think they’ll benefit us?

Leave your comments or email your views at sjh@ethicalai.co.uk. And if you want to find out more about my work in ethical AI, head to my website: www.ethicalai.co.uk.

Next time: Which of our mental faculties could be improved by a ‘merging’ with AI systems?

Stephen Hill is Chairman of Ethical AI Ltd, a British-based advisory company. He is also CEO of AICHOO, a machine-learning software platform, and an advisor to OnePlanet, a strategic sustainability-planning technology. He holds an honours degree in Philosophy and is a Fellow of the RSA.


