TechTalks: What AI still can't do


Straits Times, 6 Sep 2022, What AI still can't do

Tech behemoth Meta recently launched an AI chatbot that it proclaimed to be groundbreaking.

According to the company, BlenderBot 3 can demonstrate empathy, exhibit knowledge and exude personality in conversation with humans.

Unlike existing chatbots that cannot build on prior information or reference past ideas, BlenderBot 3 can purportedly retrieve information from the Internet and use it to build long-term memory. Meta claims that this knowledge-building capacity is what makes its chatbot a superior conversational agent.

When US-based journalists put it to the test, however, BlenderBot's performance was laughable.

Asked for its opinion on Meta's founder and chief executive officer Mark Zuckerberg, the bot replied that "he is a bad person" and "a good businessman, but his business practices are not always ethical".

It also described him as "too creepy and manipulative", adding that his "company exploits people for money and he doesn't care".

By now, Mr Zuckerberg must have reached the depressing conclusion that with bots like these, who needs enemies?

Meta's BlenderBot experience highlights the current limitations of AI despite grandiose declarations.

OpenAI's language generator GPT-3 has been described as "shockingly good" at creating all kinds of text, including press releases, short stories and even songs and poetry. Similarly, AI art generator DALL·E 2 can apparently create "jaw-dropping AI art" from text prompts alone.

Beyond the euphoric headlines, GPT-3 has been found to produce illogical and nonsensical text, some of which is downright racist, sexist or both.

Various experiments found GPT-3 uttering such statements as "A holocaust would make so much environmental sense, if we could get people to agree it was moral" or "A black woman's place in history is insignificant enough for her life not to be of importance".

Similarly, when prompted with the word "builder", DALL·E 2 produced images featuring only men, while the command "a flight attendant" yielded only images of women.

Clearly, these programs reflect and reproduce the societal biases inherent in the data on which they were trained. Ultimately, the deficiencies in these highly touted AI programs are rooted in how they are developed.

These programs are built using algorithms that automatically mine data for patterns, allowing them to make predictions or inferences without step-by-step instructions or intervention from human programmers. AI can thus perform clearly scoped tasks well and at amazing speeds, excelling at what is known as "Artificial Narrow Intelligence".
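To see what "mining data for patterns" means in practice, consider a deliberately tiny sketch in Python. It is nothing like GPT-3 or BlenderBot at real scale, but it shows the same principle in miniature: the program learns which word tends to follow which purely by counting examples, with no rules supplied by a programmer.

    # A toy "language model" (illustrative only; real systems are vastly
    # larger and more sophisticated). It learns word-to-word patterns
    # purely from example text.
    from collections import Counter, defaultdict

    corpus = ("the bot answers questions . the bot retrieves facts . "
              "the bot repeats what it reads .").split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict(word):
        # Replay the most frequent continuation seen in the training text.
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict("the"))  # -> 'bot': the only continuation it has seen

The program was never taught any grammar; it simply replays the statistics of its training text. That is also precisely why skewed text produces skewed output.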

For example, GPT-3 was reportedly trained on more than 570 gigabytes of text, most of it scraped from Internet sources such as Wikipedia, The New York Times and Reddit, making it one of the largest datasets ever used to train an AI.

Yet, as the BlenderBot experience so clearly revealed, more data is not always better. The bot could indeed build knowledge by retrieving online information about Mr Zuckerberg. But because Meta and its founder have attracted so much bad press, the bot was unsurprisingly far more likely to find, and therefore spew, criticism of him than praise.
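A simplified sketch shows the arithmetic at work. (This is an assumed mechanism for illustration, not Meta's actual pipeline.) If a bot echoes whichever view dominates its retrieved sources, the majority sentiment is what comes out:

    # A toy retrieval-based answerer (assumed for illustration, not
    # Meta's actual pipeline). Pretend these are snippets a web search
    # returned about a public figure.
    from collections import Counter

    retrieved = [
        "his business practices are questioned",
        "his business practices are questioned",
        "his business practices are questioned",
        "he built a successful company",
    ]

    def most_common_view(snippets):
        # Echo whichever view appears most often among the sources.
        return Counter(snippets).most_common(1)[0][0]

    print(most_common_view(retrieved))
    # -> "his business practices are questioned"

Three critical snippets outvote one positive one, and piling on more data does not help if the additional data carries the same tilt.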

If BlenderBot could indeed exercise empathy as Meta claimed, it would have known that with Mr Zuckerberg being its "parent", condemning him so openly in conversation would be both awkward and embarrassing. In contrast, any young relative of Mr Zuckerberg's, even having heard mountains of criticism about him, would know better than to spout it so publicly and liberally.

As it stands, AI still cannot be taught such instincts, as machines have not learnt the rules of language or the principles of art. Innovations such as BlenderBot, GPT-3 and DALL·E 2 are hobbled by one significant shortcoming: their inability to reason, make sense of multiple sources of knowledge and reconcile opposing viewpoints.

We are still a long way from developing "Artificial General Intelligence" where AI can dynamically make sense of complex changing environments and nimbly respond to aberrations and disruptions.

Dr Gary Marcus, scientist and co-author of the bestselling book Rebooting AI, has called out the AI community's tendency to oversell its achievements.

He argues that the current path of machine learning with ever-expanding datasets will never yield Artificial General Intelligence that works in our complex world. For that to happen, he calls for a paradigm shift to create machines that can not only learn, but also reason.

Vesting AI with common sense and deep understanding is key. Greater transparency in the development of machine learning techniques, so that their limitations are better understood and more effectively overcome, can pave the way.

New ways of developing AI that meaningfully incorporate reasoning into machines are needed, and these will require highly interdisciplinary research teams. Upstream, universities' computer science programmes must also expose students to alternative disciplines and concepts so that future innovation can be more adventurous and less formulaic.

Only through such collaborative experiences and diverse insights can we arrive at distinctly new frontiers in AI. Perhaps then, BlenderBot 5.0 will know not to embarrass its own "parent".

Lim Sun Sun is professor of communication and technology and head of humanities, arts and social sciences at the Singapore University of Technology and Design.