Ah Channel Magazine


A Warning about Potential Damage with A.I.

New York Times

Anybody wondering about the dangers of artificial intelligence should seriously consider what the New York Times just did. The Times hired Zach Seward to help it establish principles for "how The Times will and won't use generative A.I." Seward is best known for getting the website Quartz off the ground and building it into a multi-million-dollar platform for international news.

Why it matters

You may not think much of it, but in the world of journalism The New York Times is a behemoth. With a reported 10 million subscribers, more than one hundred Pulitzer Prizes and revenue in the hundreds of millions of dollars, The Times has a lot on the line. It can't afford to get generative artificial intelligence wrong, and there's a lot to get wrong. Now, a year after the release of ChatGPT, we have a much better idea of the repercussions of generative A.I. The downside is potentially devastating to anyone who makes a living off the spoken and written word, and The Times' decision to create the position of Director of A.I. Initiatives is testament to that.

Quality, Authenticity at Stake

The biggest threat is a race to the bottom. Generative A.I. is built on large language models, or LLMs, which are trained on huge datasets created by scraping the Internet for digital content. That in turn allows it to come up with text, images and now even realistic video. This raises the risk that the quality of content will suffer. Think of it like ants invading your picnic: one or two are not a problem, but enough of them will carry off your meal and give it to the rest of the colony. What this means is that publishers using Gen A.I. face the prospect of turning out content that becomes inauthentic and bland because it's based on much the same data as everyone else's.

An image generated with artificial intelligence by the Airt app

Besides the danger to authentic, quality content, Gen A.I. poses the risk of damaging reputations while also drowning us in factually incorrect content. For one, the algorithms used in the process provide responses that can't be explained by their programming, a "black box," if you will. Then there's the issue of the systems churning out inaccurate material. "Hallucinations," as they are called, are well documented across generative A.I. systems. The result can be tremendous damage to the reputations of people and businesses who rely too heavily on A.I. Once readers, clients and customers realize that your content is just like everyone else's, that it may be flat-out wrong, and that the process for creating it can't be explained, you'll lose them.

Tremendous Benefits

Don't mistake this position as being against generative artificial intelligence. The reality is that it's very attractive for its ability to increase output while also saving time and other resources. As the Boston Consulting Group and Harvard Business School found, specific tasks can be completed more quickly and with higher quality with the assistance of A.I. Therein lies the conundrum, one which Zach Seward is undoubtedly tasked with wading into in his new role as editorial director of Artificial Intelligence Initiatives for the New York Times.
