Hallucinations and Existential Threats — Yet More Power to AI

Every so often, debates flare up around the threats of Artificial Intelligence (AI), fueled not least by fictional movies and a number of skeptics.


But this debate around the promises and perils of AI has of late taken a pivotal turn with the emergence of AI chatbots such as ChatGPT and Google's Bard. The models underpinning these systems have taken center stage in those debates: on the one hand, they are constantly empowering AI technology; on the other, they are inviting growing resistance against AI, even from some of its staunch proponents.

AI Evolution

Technically known as Large Language Models (LLMs), these AI models are trained on large datasets to learn the relationships in sequential data, much like the words in a sentence. This enables them to recognize, summarize, predict, and generate human language. In the form of ChatGPT and Bard, they have offered a glimpse of the unprecedented influence machines can exert on society. It may not be wrong to say that the public release of ChatGPT has been pivotal in the large-scale use of LLMs. Because of its unprecedented conversational capabilities, it has amassed a large user base, who use it to write emails and presentations or to understand virtually any topic. The underlying model has since advanced to its fourth generation, GPT-4, trained on an ever-greater dataset.
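To make that core mechanic of next-word prediction concrete, here is a minimal sketch using the small, open-source GPT-2 model through the Hugging Face transformers library. GPT-2 is a stand-in chosen for illustration; the systems discussed in this article run on far larger, proprietary models.

```python
# A minimal sketch of next-token generation with a small open model (GPT-2),
# standing in for the far larger proprietary models behind ChatGPT and Bard.
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2 (~124M parameters).
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by repeatedly predicting a likely next token.
result = generator("A large language model learns to", max_new_tokens=25)
print(result[0]["generated_text"])
```

Everything an LLM produces, from a one-line email to an essay, is generated by iterating this single step: predict the next token given everything so far.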

Beyond the boundaries of sophisticated chatbots, these models are now being used to empower robots. Google began working on this combination last year but ran into limitations in the interpretation of images. In a recent demonstration, the Robotic Transformer 2 (RT-2), dubbed a breakthrough in robotics technology, was shown to overcome that limitation: guided by the model, the robot identified specific objects and moved its arm accordingly. Functioning as a kind of robot language, the model helps the robot predict the right movements to perform different tasks. These developments are taking place while society is still grappling with the larger implications of LLMs. Apart from the hallucination problem, these models threaten society through a potentially cataclysmic, even existential, danger to humanity and through misuse at the hands of criminals.

Hallucination or Confabulation

The 'hallucination' or, for the sake of a less anthropomorphized term, 'confabulation' problem arises when LLMs produce incorrect results, fabricating either their references or the information itself. For instance, Google's Bard wiped roughly $100 billion off the company's market value when it incorrectly stated that the James Webb Space Telescope took the first picture of an exoplanet. In a world already menaced by misinformation, this problem could have even graver consequences. Once embedded into robots or other hardware, these models take a physical form. For RT-2, the risk of hallucination is considered remote, but not absent, and a hallucination acted out by a physical machine could carry far more serious consequences. While attempts are being made to address this issue, no concrete solution has yet been found.
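The mechanics behind confabulation can be seen in how a model scores candidate continuations: it ranks them purely by statistical plausibility, with no internal notion of factual truth. The sketch below, again using open-source GPT-2 as a stand-in for the article's proprietary systems, inspects the probabilities a model assigns to the next token after a factual-sounding prompt; whichever continuation scores highest gets generated, true or not.

```python
# Inspect the next-token probabilities a small open model (GPT-2) assigns
# after a factual-sounding prompt. The model ranks continuations purely by
# statistical plausibility; nothing in this computation checks truth.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first picture of an exoplanet was taken by the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# Turn the final position's scores into a probability distribution
# and list the five most plausible continuations; none is fact-checked.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```

A fluent but false statement and a fluent true one are produced by exactly the same process, which is why confabulation is so hard to engineer away.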

Existential threat?

The second fear concerns the existential threat these models pose to society. Owing to their exceptional ability to manipulate human language, they can disrupt society in dangerous ways. This sentiment is shared by the renowned historian and philosopher Yuval Noah Harari, who argues that AI has already hacked human civilization. The conversational capability of these systems is creating a form of intimacy between humans and AI, and regardless of its inaccuracy, it carries great influence over people as a kind of 'oracle'. To him, 'AI is a new weapon of mass destruction that can annihilate our mental and social world'. Earlier this year, the godfather of modern AI, Geoffrey Hinton, also rang the alarm bell about the 'existential threat' posed by AI. He fears that the ongoing competition between big tech companies will drive AI technology to evolve exponentially into something gravely dangerous to society. Having long dismissed the prospect of machines outsmarting humans as 'way off', he now thinks it can happen.

Fear of AI is not voiced by only a few experts. In March 2023, a group of AI experts called for a moratorium on the development of systems more powerful than GPT-4, the aim being to ensure that powerful AI systems are not built until their effects are justifiably positive and their risks manageable. It was followed by another call for international action by experts, including the CEO of OpenAI. Hinting at the potential of AI to subvert humanity, that statement read: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war'. These expert viewpoints testify that the concerns are not superficial and that further development may not augur well for society.

Ignorance

A connected issue is ignorance of the consequences of such systems. Recently, for example, a visually impaired individual, one of a select group of testers, was given access to an advanced version of ChatGPT. Following his instructions, the model ably described different objects to him. This eventually led to a request to describe someone on social media, and the program conveyed information on gender, facial expression, hair color, and so on. OpenAI later blocked his access to this functionality, citing privacy reasons. But what cannot be overlooked is that ChatGPT is, by design, capable of something approaching facial recognition. Yes, OpenAI blocked the functionality for that specific individual, but can this kind of enforcement truly prevent the use of ChatGPT as a facial recognition tool? Once a model is trained this way, its misuse is hard to control.

LLMs have surely accelerated the growth of AI. Yet, observing the prevailing AI scene, one gets the general impression that even the creators are not fully aware of what they are making. Passionate about innovation, they are not ready to consider the bigger picture. That sentiment was aptly put into words by the father of the atomic bomb, J. Robert Oppenheimer, and later echoed by the godfather of AI: 'When you see something that is technically sweet, you go ahead and do it.' The consequences are not considered in the manner they should be. And while language models have already demonstrated their implications in practice, they continue to be advanced and used as a tool to empower other technologies.

In their current 'bullshitting' phase, LLMs may not pose a grave threat to humanity. But once they evolve, as they surely will under the ongoing competition, society would likely be disrupted in an unprecedented manner. Most calls to action point toward the regulation of AI, so that these systems are designed in a more informed manner and risks are mitigated before they are introduced to society. While that is the right approach to maintaining control over AI, the pace of AI development is rapidly outstripping the pace of regulation, and this growing lag would be detrimental to those efforts.
