Artificial Intelligence Madness

By Casey Bukro

Ethics AdviceLine for Journalists

Do you remember HAL 9000?

It was the onboard computer of the Discovery One spacecraft, bound for a mission near Jupiter in the movie “2001: A Space Odyssey.”

Possibly one of the most famous computers in cinema history, HAL 9000 killed most of the crew members for an entirely logical reason, if you are thinking like a computer.

Most of what was in the movie directed by Stanley Kubrick is intentionally enigmatic, puzzling. But the sci-fi thriller on which the movie is based, written by novelist Arthur C. Clarke, explains HAL’s murderous motivation.

HAL was conflicted. All crew members, except for two, knew the mission was to search for proof of intelligent life elsewhere in the universe. HAL was programmed to withhold the true purpose of the mission from the two uninformed crew members.

Computer manners

With the crew dead, HAL reasoned, it would no longer need to lie to them; lying is contrary to what well-mannered computers are supposed to do. Others have suggested different interpretations.

One crew member heroically survives execution by computer. He removes HAL’s data bank modules one by one as HAL pleads for its life, its speech gradually slurring until it ends with a simple, garbled song.

Three laws

Science fiction fans will recognize immediately that what HAL did was contrary to The Three Laws of Robotics written by another legendary science-fiction writer, Isaac Asimov. According to those laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

All of this talk about how computers should behave is fanciful and based on science fiction.

Wacky conduct

But recent events at the Chicago Sun-Times show how the wacky conduct of artificial intelligence is invading our lives, in perfectly logical ways that escape human detection.

A special section inserted into the Sunday Chicago Sun-Times featured pages of enjoyable summer activities, including a list of 15 recommended books for summer reading.

Here’s the hitch: The authors were real, but 10 of the books and their elaborate summaries were fake, the work of artificial intelligence.

Mistaken belief

Veteran freelancer Marco Buscaglia wrote the entire summer insert for King Features Syndicate, a newspaper content producer owned by Hearst Communications. Buscaglia told the Chicago Tribune that he used artificial intelligence to compile the summer reading list, then made the mistake of believing it was accurate.

“I just straight up missed it,” Buscaglia told the Tribune. “I can’t blame anyone else.”

Unable to find summer reading lists from other sources, Buscaglia turned to AI platforms such as ChatGPT, which produced 15 books tied to well-known authors. The list contained five real books.

Express dismay

The Chicago Sun-Times and King Features expressed dismay, and King Features fired Buscaglia.

All parties said they would be more careful in the future about using third-party editorial content.

In human terms, what the robot did would be called fabrication, and a reason to call for an ethics coach.

Fooled the editors

But, from a purely journalistic point of view, one thing must be said: The robot writer was good enough to fool professional editors who are supposed to catch the fakers.

Writer Eric Zorn called the Sun-Times fake-books pratfall “artificial ignorance.”

Is artificial intelligence too smart for humans? Or are humans too dumb?

Like HAL, ChatGPT was given a task, which it carried out in an unexpected, flawed, but convincing way.

New world

So what is going on with these computers? We enter a strange new world when we try to understand the thought processes of artificial intelligence.

Arthur Clarke gave a plausible reason for HAL turning homicidal, but it was all too human. Computers are not human, but people who write about why artificial intelligence goes haywire often use terms describing human behavior.

When computers make this kind of mistake, it is often called a “hallucination.” It is also called bullshitting, confabulation or delusion, all terms for a response generated by AI that presents false or misleading information as fact.

Human psychology

These terms are drawn loosely from human psychology. A hallucination, for example, typically involves false perceptions. Artificial intelligence hallucinations are different: they are constructed responses, erroneous output that can be caused by factors such as insufficient training data, faulty assumptions made by the model or biases in the data used to train it.

I suppose that’s another way of saying “garbage in, garbage out.”
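To make the point concrete, here is a minimal sketch, assuming nothing about how any particular chatbot is built: a toy word-pair generator, written in Python, that strings words together based only on which word followed which in its training text. Real language models are vastly more sophisticated, but the habit is the same: they produce statistically plausible words, not verified facts.

```python
import random

# A toy "language model": it learns only which word tends to follow which
# in its training text, then generates new text from those statistics.
# It has no concept of truth, only of plausibility.
training_text = (
    "the author wrote a bestselling novel about the sea "
    "the author wrote a forgotten memoir about the war "
    "the critic praised a bestselling novel about the war"
)

words = training_text.split()
followers = {}
for current, nxt in zip(words, words[1:]):
    followers.setdefault(current, []).append(nxt)

def generate(start="the", length=8):
    """String together plausible-sounding words, true or not."""
    out = [start]
    for _ in range(length):
        choices = followers.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# The output reads smoothly, but nothing checks whether the "book" it
# describes ever existed. Garbage in, garbage out.
print(generate())
```

The sketch will cheerfully describe a book its training text never mentioned, which is roughly what happened with the summer reading list, on a far grander scale.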

Rather than resorting to terms drawn from human behavior, it would make sense to use terms that apply to machines and mechanical devices.

Code crap

These could include code crap, digital junk, processing failures, mechanical failure and AI malfunctions.

Computer builders seem determined to describe their work as some kind of wizardry. In reality, they are digital mechanics and engineers working on highly sophisticated machines. But they are building devices that are becoming more complicated, and on which humans are more dependent.

That raises the question of whether humans understand the consequences of what they are doing.

Risk of extinction

Leaders from OpenAI, Google DeepMind, Anthropic and other artificial intelligence labs warned in 2023 that future systems could be as deadly as pandemics and nuclear weapons, posing a “risk of extinction.”

People who carry powerful examples of algorithm magic in their hip pockets might wonder how that is possible. The technology seems so benign and useful.

The answer is mistakes.

Random falsehoods

Artificial intelligence makes a surprising number of mistakes. By 2023, analysts estimated that chatbots hallucinated as much as 27 percent of the time, producing plausible-sounding falsehoods, and that factual errors appeared in 46 percent of generated texts.

Detecting and correcting these hallucinations poses a major challenge for the practical deployment and reliability of large language models in the real world.
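One practical safeguard is exactly what the Sun-Times episode lacked: an independent check of every machine-generated claim before publication. Here is a minimal sketch, assuming network access, that looks up each recommended title in the public Open Library catalog; the endpoint and the numFound field follow that service’s published search interface, and the titles below are illustrative placeholders, not a real reading list.

```python
import json
import urllib.parse
import urllib.request

def title_exists(title, author):
    """Ask the Open Library search API whether any matching edition is cataloged."""
    params = urllib.parse.urlencode({"title": title, "author": author})
    url = f"https://openlibrary.org/search.json?{params}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return data.get("numFound", 0) > 0

# Placeholder list standing in for an AI-generated recommendation list.
reading_list = [
    ("The Old Man and the Sea", "Ernest Hemingway"),  # a real book
    ("An Invented Summer Novel", "A Famous Author"),  # a made-up title
]

for title, author in reading_list:
    verdict = "found" if title_exists(title, author) else "NOT FOUND: check by hand"
    print(f"{title}, by {author}: {verdict}")
```

A miss does not prove a book is fake, since no catalog is complete, but it flags exactly the items a human editor should verify before they reach print.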

CIO, a magazine covering information technology, listed “12 famous AI disasters,” high-profile blunders that “illustrate what can go wrong.”

Multiple orders

They included an AI experiment at McDonald’s to take drive-thru orders. The project ended after a pair of customers pleaded with the system to stop as it kept adding Chicken McNuggets to their order, eventually reaching 260.

The examples also included a hallucinated story about an NBA star, Air Canada paying damages for its chatbot’s lies, hallucinated court cases and an online real estate marketplace cutting 2,000 jobs based on faulty algorithmic data.

Going deeper, Maria Faith Saligumba of Discoverwildscience.com asks, “Can an AI go insane?”

Mechanical insanity

“As artificial intelligence seeps deeper into our daily lives, a strange and unsettling question lingers in the air: Can an AI go insane? And what does ‘insanity’ even mean for a mind made of code, not cells?”

Saligumba goes into “the bizarre world” of unsupervised artificial intelligence learning, which can lead to “eccentric, even ‘crazy’ behavior.”

The well-known hallucinations, she explains, are weird side-effects of the way artificial intelligence systems look for random patterns everywhere and treat them as meaningful.
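That tendency is easy to demonstrate in miniature. The sketch below is a deliberately crude stand-in for any real AI system: it “learns” rules from pure random noise, scores almost perfectly on the noise it memorized and does no better than a coin flip on fresh noise, because the patterns it found were never really there.

```python
import random

random.seed(1)

def make_noise(n, bits=20):
    """Random bit patterns with random labels: no real pattern to find."""
    return [
        (tuple(random.randint(0, 1) for _ in range(bits)), random.randint(0, 1))
        for _ in range(n)
    ]

train = make_noise(200)
fresh = make_noise(200)

# A "model" that memorizes whatever label each pattern happened to get.
memory = {pattern: label for pattern, label in train}

def accuracy(data):
    hits = sum(1 for pattern, label in data if memory.get(pattern, 0) == label)
    return hits / len(data)

# Near-perfect on the memorized noise, roughly 50 percent on new noise:
# the model treats random coincidences as meaningful rules.
print("noise it has seen:", accuracy(train))
print("fresh noise:", accuracy(fresh))
```

Statisticians call this overfitting. It is only a crude analogue of the pattern-seeking Saligumba describes, but it shows how confidently a system can report structure that exists only by chance.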

Hilarious or surreal

“Sometimes,” she writes, “the results are hilarious or surreal, but in safety-critical applications, they can be downright scary.”

It’s a reminder, she points out, that “machines, like us, are always searching for meaning – even when there isn’t any.”

One hallmark of human sanity is knowing when you’re making a mistake, she explains. “For AIs, self-reflection is still in its infancy. Most unsupervised systems have no way of knowing when they’ve gone off the rails. They lack a built-in ‘reality check.’”

Odd connections

Some researchers have compared the behavior of certain AIs to schizophrenia, pointing to their tendency to make odd connections.

That’s just one of the ways artificial intelligence loses its marbles.

But human behavior might be the salvation of artificial intelligence, Saligumba suggests.

“Studying how living things manage chaos and maintain sanity could inspire new ways to keep our machines on track… Will we learn to harness their quirks and keep them sane, or will we one day face machines whose madness outpaces our own?”

By then, science fiction writers and movie-makers will be describing how humans face that doomsday scenario, or save themselves from that fate by outsmarting those unpredictable machines.

************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or visit ethicsadvicelineforjournalists.org.


Visit the Ethics AdviceLine blog for more.