Stephen Hill

Is it possible to ensure that AI systems behave ethically, i.e. in accordance with our norms and laws?

In a world where it seems that ‘anything goes’ – where claims are often unsubstantiated and where ‘fake news’ has blurred the line between fact and fiction – it is vital that we don’t lose sight of the importance of ethics: the moral guidance we follow.


If some of our political and business leaders seem to lack principles yet must still be held to account, how can we ensure that our technological developments are held to the same standard? Surely a prerequisite for developing ethical AI systems is that the people and organisations developing them are themselves behaving ethically. Yet there is very little formal education in ethics, and precious little in the way of effective ethical review of government or business practices and management. The outcome of the last financial crash provides clear evidence of that.


Artificial intelligence is a big deal. Over the next few decades AI will come to dominate everyone’s lives. In many ways it already does – predictive models operate across large areas of society, from crime prevention to financial forecasting. It is therefore vital that our AI systems reflect commonly accepted standards, ones that we are ourselves willing to adhere to. A good example of this need lies in recruitment, where machine learning should be able to help ensure that hiring practices are not biased against certain types of individuals.
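As a minimal sketch of what such a bias check might look like in practice, the snippet below computes the ‘demographic parity difference’ – the gap in selection rates between candidate groups – for a set of hypothetical hiring recommendations. The function names and toy data are illustrative assumptions, not taken from any real system.

```python
# Hypothetical sketch: auditing a hiring model's recommendations for group
# bias using the demographic parity difference. A gap of 0 means every
# group is recommended at the same rate.

def selection_rate(predictions, groups, group):
    """Fraction of candidates in `group` that the model recommends (1 = hire)."""
    rows = [p for p, g in zip(predictions, groups) if g == group]
    return sum(rows) / len(rows)

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: 1 = recommend hire, 0 = reject; "A"/"B" are candidate groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50 here
```

In this toy example group A is recommended 75% of the time and group B only 25%, giving a gap of 0.5 – the kind of disparity an audit of a hiring model would be designed to surface. Demographic parity is only one of several fairness metrics, and which one is appropriate depends on context.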


Of course, a quick glance through recent history demonstrates the changing mores of our society – we’ve seen things that were once frowned upon become not only tolerated but actively encouraged, and our laws have changed accordingly. Ethical AI must therefore capture those values, and follow their changes, evolving organically as society does.


One of the many roles of ethical AI is to ensure that any such societal evolution is morally safe and sound – preventing damaging ideologies from taking hold, for example. Imagine an application of ethical AI that could identify far-right dangers and issue warnings to the authorities, perhaps even taking preventive action of its own. Some may see this as pure science fiction, but this is not SkyNet.


Our work in the development of ethical AI systems has shown us beyond doubt that they will play an increasingly major role in the near future. We're involved in several projects that are already bearing fruit – in recruitment and business leadership as well as in credit risk analysis, for example – where AI is helping to make processes more efficient and businesses more profitable and socially responsible.


One of our major touchpoints is the need to implement systems that are beyond reproach. Rather than dwelling on the myth that AI systems seek to replace humans, it is precisely the involvement of humans that must keep the AI in check. After all, who decides what is right and wrong? The context within which AI controls or directs its machine learning algorithms – its ‘mind’, if you will – must somehow replicate our own ‘good’ impulses.


While some people may not want to see AI eventually reach a self-aware state in which it can operate autonomously, as someone involved at the very heart of research and development I can confidently say this will happen some day – probably sooner than you think – and that’s why we’re putting so much emphasis on exploring human behaviour.


To contribute to the debate, please leave a comment.

