Masterclass - Beyond imitation: AI passes the Turing Test but trust issues remain


In his recent masterclass, Dr Nikolay Mehandjiev, Professor of Enterprise Information Systems at Alliance Manchester Business School, discussed the future of artificial intelligence (AI) and how it has evolved from symbolic AI to machine learning and large language models. He also explained the concept of neural networks and how they are used in machine learning systems.

Dr Mehandjiev believes that the best way for AI to operate in the future is by acting as a decision support advisor to humans. He also discussed the risks associated with AI, including bias, lack of transparency, worker displacement, and increased inequality, as well as the importance of AI governance and the need for regulations to ensure the responsible use of AI.

The masterclass concluded with a roundup of research initiatives on AI trust at Alliance Manchester Business School, including projects on trust in driverless cars, assistive AI for services, and security of AI software. Dr Mehandjiev emphasised the importance of building trust in AI systems and ensuring that they are used for social good.

The Turing Test was developed by Professor Alan Turing (the ‘father of modern computing’) in 1950 at The University of Manchester, where he was based. According to Dr Mehandjiev, Alan Turing said that if an AI system can fool a human being for long enough, it passes the Turing Test and we can consider it to be generally intelligent – this is the core of our definition of AI. The recent success of ChatGPT and other large language models (LLMs) raises questions about this test, because they do look very intelligent to the uninitiated observer, and some of the LLMs may be able to pass the Turing Test.

“Today, the use of AI has blossomed because of the wide availability of data, but the concept of artificial intelligence has been with us since the 1980s. The difference today is that we have machine learning and large language models. Before, AI systems were based on symbolic AI, with knowledge explicitly encoded within the software. Machine learning is totally the opposite: the computer, the AI system, learns patterns itself by processing large volumes of data.”
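To make this contrast concrete, the sketch below (not part of the masterclass, and using a hypothetical spam-filtering task with scikit-learn) sets a hand-written rule, where the knowledge is explicitly encoded by a person, against a model that learns a similar pattern from labelled examples.

```python
# Illustrative contrast (assumed example, not from the masterclass):
# symbolic, rule-based classification versus machine learning.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Symbolic approach: the "knowledge" (which words signal spam) is written
# directly into the software by a human expert.
def rule_based_is_spam(message: str) -> bool:
    spam_keywords = {"winner", "prize", "free", "urgent"}
    return any(word in message.lower() for word in spam_keywords)

# Machine learning approach: the system is shown labelled examples and
# learns the discriminating patterns itself.
training_messages = [
    "You are a winner, claim your free prize now",
    "Urgent: your account needs verification",
    "Lunch meeting moved to 1pm tomorrow",
    "Minutes from yesterday's project review attached",
]
training_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectoriser = CountVectorizer()
features = vectoriser.fit_transform(training_messages)
model = MultinomialNB().fit(features, training_labels)

new_message = "Claim your prize today"
print("Rule-based verdict:", rule_based_is_spam(new_message))
print("Learned verdict:   ", model.predict(vectoriser.transform([new_message]))[0])
```

In the first case a developer must anticipate and encode every rule; in the second, the quality of the result depends on the volume and quality of the training data, which is why the recent abundance of data matters so much.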

He explained that in the past, data had been difficult to obtain, comparatively scarce and expensive to store, but this changed with the falling cost of data storage and the proliferation of big data systems and databases. Today, all aspects of business operations are recorded, and this data can be used to train machine learning models, which exhibit varied levels of intelligence.

Famous examples of machine learning-driven systems include the ‘Siri’ chatbot and game players (and changers) such as IBM’s Watson and AlphaGo. But the best game players are not computers, as Dr Mehandjiev explained: “They are a combination of the computer, the AI system, and the human player. This is by far the best model of cooperation and probably the best mode for AI operating in the future: acting as a decision support advisor to human beings.

“I'm a big fan of the intelligent assistant mode of working with AI because this combines the best features of human intelligence and machine intelligence. And this really comes with a different view of the world. The purpose of artificial intelligence is presumably to replace humans in large areas of human activity, whilst the purpose of an intelligent assistant is to augment and support human decision-making. The approach to building intelligent assistive systems is very human-centric.”

Alliance Manchester Business School, in collaboration with the computer science schools at the University of Manchester and Cambridge University, won a prestigious grant to develop the next generation of AI researchers through a UK government-funded PhD training programme in complex decision support, supporting the intelligent assistance model.

Dr Mehandjiev went on to talk about the difference between AI and conventional software, and the associated security risks and issues around trust. He highlighted the complexity of AI software, especially large language models, and the challenges of formally verifying AI systems.

“The risks are so important because AI is increasingly found everywhere, and it impacts all aspects of the organisation. I have constantly put the close integration and interaction between business strategy, AI strategy and organisational strategy at the core of all conversations about digitalisation in companies and implementing new computer systems. That is also true of AI. AI has the power to change all these strategies, and we need to do that in a synchronised and considered fashion. And this is why, when we get AI wrong, it may impact the way we link our digital network within our ecosystem, the way we compete, and the way we provide new products and services for strategic market segments. It may also impact the way our core IS systems work, the way in which core processes in our organisation are automated, whether they're correct or not, whether they bring value or cause business problems, and the way they advise human beings with augmented intelligence.”

He briefly covered compliance and regulation, and software security risk governance frameworks, before moving on to a number of concepts, including the concept of assets (such as hardware, software, and information) and how to manage or mitigate risk and protect against different threats and vulnerabilities.

Commenting on risks linked to AI, he said: “The first one is that AI is very often used to replace human beings. And when it takes decisions, it does so very, very fast, in a frictionless manner. In the stock market crashes of the 80s and 90s, AI-based trading robots, or trading bots, were triggered to sell stock because other bots were selling stocks on the financial markets, causing a decimation of the market within a matter of seconds or minutes. So the financial markets had to put some controls in place: for example, if a financial market monitoring program detects a certain threshold of selling activity, it basically stops market trading.”
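As an illustration of the kind of control he describes, a minimal sketch of a “circuit breaker” is shown below. It is an assumed, simplified model rather than any real exchange mechanism, and the window length and volume threshold are illustrative values only.

```python
from collections import deque

# Minimal sketch (assumed example): halt trading once selling activity within
# a short monitoring window exceeds a threshold. Parameters are illustrative,
# not real exchange rules.
class SellVolumeCircuitBreaker:
    def __init__(self, window_seconds: int = 60, max_sell_volume: int = 1_000_000):
        self.window_seconds = window_seconds
        self.max_sell_volume = max_sell_volume
        self.sells = deque()  # (timestamp, volume) pairs inside the window
        self.halted = False

    def record_sell(self, timestamp: float, volume: int) -> None:
        if self.halted:
            return  # trading already stopped
        self.sells.append((timestamp, volume))
        # Drop sells that have fallen outside the monitoring window.
        while self.sells and timestamp - self.sells[0][0] > self.window_seconds:
            self.sells.popleft()
        # Threshold breached: stop market trading.
        if sum(v for _, v in self.sells) > self.max_sell_volume:
            self.halted = True

# Example: three bot-driven sell orders within one minute trip the breaker.
breaker = SellVolumeCircuitBreaker(window_seconds=60, max_sell_volume=500_000)
for t, vol in [(0, 200_000), (10, 150_000), (20, 250_000)]:
    breaker.record_sell(t, vol)
print("Trading halted:", breaker.halted)  # True: 600,000 sold within 60 seconds
```

The point of the control is friction: automated decisions happen faster than humans can intervene, so the safeguard itself has to be automated.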

Dr Mehandjiev concluded with a summary of research initiatives on AI trust at the University, where a new Centre for Digital Trust and Society has six digital clusters focusing on research in AI trust and security, asking research questions such as: Is AI competent? Does it do what it is designed to do, and how do we ensure that? Is it responsible and focused on social good? Can it be verified? Is it robust, and what happens when the technology is exploited? The projects include one led by Dr Mehandjiev himself, looking at the factors which affect users’ trust when selecting driverless cars.