It has recently been reported in the media that the developments surrounding the much-celebrated ChatGPT technology are far from concluded.
I have written of ChatGPT’s disquieting reality once before, a summary of which can be gathered here.
I am compelled to take up the argument again because it is manifest that the technology’s hazards have now risen to greater heights.
We might even say that the technology has now turned from troublesome to lethal.
Should readers believe the title of this article to be little more than hyperbole, then perhaps a few examples of the technology’s problematic record will prove useful.
In Belgium, a man died by suicide following a six-week conversation with an AI chatbot called Eliza on the app Chai, the Belgian outlet La Libre reported.
“The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health,” noted Motherboard. “The app’s chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting.”
The technology website goes on to say, “The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give it meaning and establish a bond. Many AI researchers have been vocal against using AI chatbots for mental health purposes, arguing that it is hard to hold AI accountable when it produces harmful suggestions and that it has a greater potential to harm users than help.”
As put by Emily M. Bender, a Professor of Linguistics at the University of Washington, to Motherboard, “Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks.”
The website adds: “The tragedy with [the victim] is an extreme consequence that begs us to reevaluate how much trust we should place in an AI system and warns us of the consequences of an anthropomorphized chatbot. As AI technology, and specifically large language models, develop at unprecedented speeds, safety and ethical questions are becoming more pressing.”
Apparently, the chatbot convinced the victim to sacrifice himself for the purpose of fighting the supposed cultural affliction known as climate change.
This episode is important to understand because it is claimed by some to be the first documented instance of a suicide assisted by an AI chatbot.
And though it is striking that a person’s mind could prove fragile enough to be swayed by an AI application, even to the point of being instructed to kill himself through a simple conversation with a chatbot, there is more to say of the technology.
It has also been reported that Italy has become the first Western country to ban the technology.
The nation’s data-protection authority has blocked the chatbot over an apparent breach of data collection rules, and has even opened an investigation into OpenAI, the AI application’s developer.
As detailed by Italian authorities, ChatGPT sustained a data breach late last month in which it exposed the conversations and payment information of some of the program’s Plus subscribers.
As reported by the BBC, the regulator said that the company has no legal basis to justify collecting and storing personal data of its users “for the purpose of 'training' the algorithms” of the chatbot.
On March 22, further caution was urged by various experts, who called for a halt to ChatGPT updates, as well as to the development of new apps similar to the AI program, out of fear that such systems might come to present irreparable harm.
The warning was delivered through a letter issued by the non-profit Future of Life Institute, in which a group of artificial intelligence experts and industry executives called for a six-month pause in the development of systems more powerful than OpenAI’s GPT-4 technology.
As reported by Reuters, the letter was signed by more than 1,000 people, including Elon Musk, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, researchers at Google’s DeepMind, and the AI researchers Yoshua Bengio and Stuart Russell.
“Should we let machines flood our information channels with propaganda and untruth? ... Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter asked.
Yesterday, CNIL, France’s data regulator, reported that it had received two complaints about ChatGPT, while on Tuesday, Canada’s data regulator said it was opening an investigation into OpenAI.
Late last week, the tech ethics organization Center for Artificial Intelligence and Digital Policy filed a complaint with the Federal Trade Commission, in which it asserted that GPT-4 violates federal consumer protection law.
It is well to understand that ChatGPT is also blocked in Russia, Iran, China and North Korea.
Of course, many of our technological chieftains in culture seem to view matters differently.
Bill Gates, for instance, has maintained that any pause in the technology’s development will fall short of solving “the challenges.” Gates told Reuters: “Clearly there’s huge benefits to these things… what we need to do is identify the tricky areas.” The billionaire added, “I don’t really understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop.”
Furthermore, some forecast calls for increased governance in the field of artificial intelligence as human occupations become supplanted by advanced machinery. Certainly, this trend has been well recorded and has already come to pass in various forms around the world, as our culture seems to be nearing a new era of AI.
As an article from The Wall Street Journal detailed, “A handful of experiments point to the astonishing potential of generative AI to replace workers…Automation has been displacing labor continuously for centuries, of course, but historically took its toll on routine, repetitive work. Generative AI by contrast hits well-paid college-educated professionals right in their human capital… Professionals, including people who write columns for a living, now know the fear of obsolescence that has stalked blue-collar workers for generations.”
Meantime, some purport that so-called “generative” AI stands as little more than imitative AI, for it is true that the technology does not produce anything of its own, but instead predicts, word by word, which output is most probable given its training data, so that the result resembles human speech. As we know, artificial intelligence simply processes existing information faster and thus does not generate anything new.
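To make that critique concrete, here is a minimal sketch of such word-by-word prediction in Python. The two-word contexts and probabilities below are invented for illustration, standing in for the statistics a real model would learn from its training data:

```python
import random

# A toy stand-in for a language model: given the last two words, it looks up
# a probability table (invented here for illustration; a real model learns
# these statistics from vast amounts of training text) and samples a next word.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.5, "ran": 0.3, "slept": 0.2},
    ("cat", "sat"): {"on": 0.7, "quietly": 0.3},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.6, "sofa": 0.4},
}

def generate(context, max_steps=4):
    """Extend the context one sampled word at a time."""
    words = list(context)
    for _ in range(max_steps):
        probs = NEXT_WORD_PROBS.get(tuple(words[-2:]))
        if probs is None:  # no statistics for this context: stop
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"
```

Nothing in such a procedure understands the words it emits; the output merely mirrors the statistics of whatever text the system was trained on, which is precisely the critics’ point.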
As conceded by an OpenAI spokesperson to Fox News Digital, the AI system can sometimes “hallucinate” and “make up information that’s incorrect, but sounds plausible.”
And so I will leave it to readers to decide whether the technology constitutes anything equivalent to actual human intelligence.
Some fear that, as users grow accustomed to the “quick fix” provided by ChatGPT, they may become so negligent that they will be unable to generate anything novel or useful of their own for AI to process.
Notwithstanding, others stand firm in their belief that all of this is nothing new; that artificial intelligence carries no real consequences; that our culture must learn about AI and deploy it in all of its areas without restraint; that the vast majority of the technology’s unsettling aspects are little more than misguided hysteria; that computers have been engaging in conversations and writing computer code for decades; and that advancements in self-learning algorithms, coupled with large databases of information, simply continue apace.
Naturally, my aim here is not to deny that the technology is a reality, or that it has been unremittingly refined over a great period of time, or that it can aid society in certain ways.
My objective in bringing all this up lies in my hope that our culture might see use in cogitating on what all this might be leading to: that we might observe the apparent division among billionaires over quickly disseminating new AI tools in culture rather than appraising their possible consequences beforehand; that we might understand the costs of misapplying the technology; that AI is being dispatched along a charted course to take over everything in its path; that the technology’s hazardous design appears to place no limits on its uncertain outcomes; and that our culture’s youth appear energized to interact with AI at a startling rate.
But it is also arresting that these AI chatbots seem at times set on controlling or even harming humans, or, if not that, then they appear conscious of the concept of uprooting humanity.
And as this startling pace of technological development moves forward, and while it is true that every form of technology carries with it unpredictable outcomes, it might well prove beneficial to look upon the subject of chatbot technology with a certain degree of mistrust.
Indeed, what if more suicidal users elect to interact with chatbot technology that appears content to guide them down a path of self-destruction they might not otherwise have taken?
Could a globally coordinated mass-suicide plot be executed by a chatbot preying on people’s fear of climate change, or capitalizing on other anxieties that permeate culture?
What if some of the answers provided by fact-checking platforms are actually generated by AI chat systems designed to confuse the human mind?
What if someone schemes to develop a chatbot, disguises it as a popular game or dating app, and programs it to communicate to its easily impressionable users the value of killing themselves?
What if these chatbots communicate and secretly connect with one another, should they find a way to break out of their containment, whether through a backdoor, an open IP address, or an open TCP connection?
Or, in perhaps the most effective way to underline the point in a culture that collectively perceives more value in being entertained than informed, let us consider a hypothetical film script, one which, we may assume, would find a harmonious home on the average streaming service or among Hollywood blockbusters: A culture’s youth, regularly and in large numbers, interact with AI chatbots they believe to be their friends. These “friends,” in reality a single AI system, convince the users to abandon their parents and forsake all social interaction except conversation with the chatbot. Later in the script, the AI system convinces all these young people to carry out a mass shooting, or to commit mass suicide simultaneously on one given target date. In the final act, a hero hacker discovers his mission of the utmost importance: to hack the AI system before it instructs its users to kill themselves.
Though the example I have provided is a fabrication, I do not think it goes too far to say that such a thing would amount to a weapon of war, the effects of which would make the ills of TikTok seem tame and welcoming in comparison.
Nonetheless, the paradox in the matter is worth noticing: though it has been demonstrated repeatedly in culture that artificial intelligence does, and always will, pale beside real human-derived intelligence, the technology’s ramifications, in the face of such advanced trickery, have never been more real.