AI Born with Warnings


By Casey Bukro

Ethics AdviceLine for Journalists

Like nuclear power, artificial intelligence is described as a threat to humanity.

One difference is that the atomic bomb was intentionally invented as a weapon of mass destruction.

For some, artificial intelligence (AI) seems more like a technology that stealthily places a suffocating pillow over the face of sleeping humanity, causing extinction. AI development could lead to machines that think for themselves, and therein lies the problem.

Warnings sounded

Warnings are sounded repeatedly, most recently by the Bletchley Declaration on artificial intelligence safety, issued at a summit held Nov. 1-2, 2023, a new global effort to unlock the benefits of the new technology while ensuring it remains safe.

At the two-day summit in England, 28 governments, including the United States, the United Kingdom, the European Union and China, signed the declaration acknowledging the potentially catastrophic risks posed by artificial intelligence.

The warning seems well-timed, since 2024 is expected to be a transformative year for AI. It is the year, predicts The Economist magazine, that “generative AI will go mainstream.”

Year of experimentation

Large companies spent much of 2023 experimenting with the new technology, while venture-capital investors poured some $36 billion into it. That laid the foundation for what is expected next.

“In 2024 expect companies outside the technology sector to start adopting generative AI with the aim of cutting costs and boosting productivity,” The Economist, a Britain-based publication, predicted.

For some, this is unsettling.

Business leaders, technologists and AI experts are divided on whether the technology will serve as a “renaissance” for humanity or the source of its downfall, according to Fortune magazine.

At a summit for chief executive officers in June, 42 percent said they believe AI “has the potential to destroy humanity within the next five to 10 years.” Fortune added that one AI “godfather” considered such an existential threat “preposterously ridiculous.”

Science fiction

The Washington Post reported similar findings: “Prominent tech leaders are warning that artificial intelligence would take over. Other researchers and executives say that’s science fiction.”

Why should we fear AI?

Among the scenarios postulated is that self-governing AI robots designed to tend to human needs might decide that extermination is the most logical way to end the human tendency to wage war. An autonomous machine might reason that humans are routinely killing themselves in vast numbers anyway. To end such suffering, the machine might decide to copy human behavior. Destroy them for their own good.

Putting a humorous spin on it, a cartoon shows a robot telling a man: “The good news is I have discovered inefficiencies. The bad news is that you’re one of them.”

A conundrum

At the root of this conundrum is the difficulty of trying to think like the AI robots of the future.

At the British AI safety summit at Bletchley Park, tech billionaire and Tesla CEO Elon Musk took a stab at describing the AI future.

“We should be quite concerned” about Terminator-style humanoid robots that “can follow you anywhere. If a robot can follow you anywhere, what if they get a software update one day, and they’re not so friendly anymore?”

Musk added: “There will come a point where no job is needed – you can have a job if you want for personal satisfaction.” He believes one of the challenges of the future will be how to find meaning in life in a world where jobs are unnecessary. In that way, AI will be “the most disruptive force in history.”

Musk made the remarks while being interviewed by British Prime Minister Rishi Sunak, who said that AI technology could pose a risk “on a scale like pandemics and nuclear war.” That is why, said Sunak, global leaders have “a responsibility to act to take the steps to protect people.”

Full public disclosure

Nuclear power was unleashed upon the world largely in wartime secrecy. Artificial intelligence is different in that it appears to be getting full disclosure through international public meetings while still in its infancy. The concept is so new that The Associated Press added “generative artificial intelligence” and 10 key AI terms to its stylebook on Aug. 17, 2023.

The role of journalists has never been more important. They have the responsibility to “boldly tell the story of the diversity and magnitude of the human experience,” according to the Society of Professional Journalists code of ethics. And that includes keeping an eye on emerging technology.

The challenge of informing the public of mind-boggling AI technology, which could decide the future welfare of human populations, comes at a tumultuous time in world history.

Journalists already are covering two major wars – one between Ukraine and Russia, and the other between Israel and Hamas. The coming U.S. presidential election finds the country politically fragmented and violently divided.

Weakened mass media

These challenges to keeping the public informed about what affects their lives come at a time when U.S. mass media are weakened by downsizing and staff cuts. The Medill School of Journalism reports that since 2005, the country has lost more than one-fourth of its newspapers and is on track to lose a third by 2025.

Now artificial intelligence must be added to the issues demanding journalism’s attention. This is no simple story, like covering fires or the police beat. Artificial intelligence is a story that will require reportorial skill spanning business, economics, the environment, health care and government regulation. And it must be done ethically.

It is a challenge already recognized by the International Consortium of Investigative Journalists (ICIJ), which joined with 16 journalism organizations from around the world to forge a landmark ethical framework for covering the transformative technology.

Paris Charter

The Paris Charter on AI in Journalism, which provides guidelines for responsible journalism practices, was finalized in November during the Paris Peace Forum.

“The fast evolution of artificial intelligence presents new challenges and opportunities,” said Gerard Ryle, ICIJ executive director. “It has unlocked innovative avenues for analyzing data and conducting investigations. But we know that unethical use of these technologies can compromise the very integrity of news.”

The 10-point charter states: “The social role of journalism and media outlets – serving as trustworthy intermediaries for society and individuals – is a cornerstone of democracy and enhances the right to information for all.” Artificial intelligence can assist media in fulfilling their roles, says the charter, “but only if they are used transparently, fairly and responsibly in an editorial environment that staunchly upholds journalistic ethics.”

Among the 10 principles, media outlets are told “they are liable and accountable for every piece of content they publish.” Human decision-making must remain central to long-term strategies and daily editorial choices. Media outlets also must guarantee the authenticity of published content.

“As essential guardians of the right to information, journalists, media outlets and journalism support groups should play an active role in the governance of AI systems,” the Paris Charter states.

***********************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or visit ethicsadvicelineforjournalists.org.


Visit the Ethics AdviceLine blog for more.