Let’s Not Learn the Wrong Lessons from What Happened at CNET

One article on CNET, the American website that covers and reviews technology and consumer electronics, was titled “What is a credit card?”. Another was titled “How to close a bank account?”.

While these explainers were published under the unassuming byline “CNET Money Staff”, the articles were not written by staff members, or indeed by any human at all.

The tech website Futurism revealed last month that clicking on “CNET Money Staff” surfaced a drop-down description that read: “This article was generated using automation technology and thoroughly edited and fact-checked by an editor on our editorial staff.”

Futurism added that, at the time, CNET had made no official announcement disclosing its use of AI, despite more than 70 such articles appearing to be AI-generated.

The use of AI in journalism is not new. The news agency Associated Press started using AI to generate short articles about business earnings reports as early as 2014, eventually expanding into sports reporting as well. Bloomberg News uses a system called Cyborg, which can sift the most pertinent facts and figures out of a financial report the moment it is released; the New York Times reported in 2019 that roughly a third of the content published by Bloomberg was produced with some form of automated technology.

Such technology is particularly crucial for companies like Bloomberg and Reuters that are in the business of financial journalism, where outlets race to publish, ahead of their competitors, information that informs the trading decisions of many readers, all without compromising the accuracy of their journalism.

Technology has also been used in newsrooms for purposes other than generating stories. The Financial Times, for instance, developed a bot that tracks the number of women quoted in its news stories, in response to findings that women featured far less often than men in articles, a disparity identified as a potential reason for lagging female readership.
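One could imagine such a tracker working something like the following minimal sketch. This is purely an illustration, not the FT’s actual system: the quote-attribution pattern and the tiny name lists are assumptions, and a production tool would need far more robust source identification.

```python
import re

# Tiny illustrative name lists; a real system would rely on a much
# larger dataset or a dedicated name-gender inference service.
FEMALE_NAMES = {"mary", "priya", "sarah", "aisha"}
MALE_NAMES = {"john", "david", "wei", "ahmed"}

# Naive pattern for quote attributions such as: ..., said Mary Smith.
ATTRIBUTION = re.compile(r'\bsaid\s+([A-Z][a-z]+)\s+[A-Z][a-z]+')

def count_quoted_sources(article_text: str) -> dict:
    """Count quoted sources by the inferred gender of their first name."""
    counts = {"female": 0, "male": 0, "unknown": 0}
    for match in ATTRIBUTION.finditer(article_text):
        first_name = match.group(1).lower()
        if first_name in FEMALE_NAMES:
            counts["female"] += 1
        elif first_name in MALE_NAMES:
            counts["male"] += 1
        else:
            counts["unknown"] += 1
    return counts

print(count_quoted_sources('"Rates will rise," said Mary Smith. '
                           '"Unlikely," said John Doe.'))
# {'female': 1, 'male': 1, 'unknown': 0}
```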

However, news articles by AP are not exactly “written” by AI. Instead, reporters and editors craft several versions of a story upfront, covering the various possible outcomes of an event; the AI software then creates an article by inserting data, once it becomes available, into the pre-written story templates. CNET, on the other hand, was utilising AI not to assemble routine stories about a company’s earnings but to write whole explainers breaking down complex financial ideas for consumers, with editors merely revising articles mostly generated by AI.
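To make the distinction concrete, here is a rough sketch of that template-driven approach. The field names, templates and the beat-or-miss logic are hypothetical illustrations; AP’s actual system is far more sophisticated.

```python
# Reporters pre-write templates for the possible outcomes of an
# earnings report; software fills one in once the data arrives.
TEMPLATES = {
    "beat": "{company} reported earnings of {eps} per share, "
            "beating analyst expectations of {expected_eps}.",
    "miss": "{company} reported earnings of {eps} per share, "
            "falling short of analyst expectations of {expected_eps}.",
}

def generate_story(company: str, eps: float, expected_eps: float) -> str:
    """Pick the matching pre-written template and fill in the figures."""
    outcome = "beat" if eps >= expected_eps else "miss"
    return TEMPLATES[outcome].format(
        company=company,
        eps=f"${eps:.2f}",
        expected_eps=f"${expected_eps:.2f}",
    )

print(generate_story("Acme Corp", 1.42, 1.30))
# Acme Corp reported earnings of $1.42 per share, beating analyst
# expectations of $1.30.
```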

Futurism’s story about the use of AI to write stories at CNET came just weeks after ChatGPT had taken the world by storm, with teachers, regulators and technology executives grappling with the far-reaching consequences that generative AI chatbots will have on the future of work and education.

What followed the story was outrage from critics worried that the use of AI would degrade the quality of journalism while potentially eliminating work, especially for entry-level writers; a post from a CNET editor confirming the site’s “experiment” with AI; and much scrutiny of the nearly 75 pieces generated by AI.

With heightened scrutiny came a flurry of correction notices, as Futurism called out CNET for a number of “dumb errors” in its AI-generated pieces. The stories, for instance, suggested that one would earn $10,300 a year (instead of $300) after depositing a mere $10,000 into a savings account paying 3 percent interest compounded annually; asserted that the interest on certificates of deposit (CDs) compounds only once, when in reality it can compound monthly or even daily depending on the specific product; and misrepresented the way interest payments are made on a car loan. Beyond factual errors, the stories were also said to be rife with plagiarism, at times plagiarising from CNET itself or its sister websites. All this was despite supposedly thorough fact-checking and editing by editors.
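The first error is easy to check with the standard compound interest formula, A = P(1 + r)^t. A quick calculation on the article’s own numbers shows the AI-generated story conflated the end-of-year balance with the interest earned:

```python
# Compound interest on the article's own numbers: $10,000 principal,
# 3% annual rate, compounded annually, held for one year.
principal = 10_000
rate = 0.03

balance_after_one_year = principal * (1 + rate) ** 1
interest_earned = balance_after_one_year - principal

print(f"Balance after one year: ${balance_after_one_year:,.2f}")  # $10,300.00
print(f"Interest earned:        ${interest_earned:,.2f}")         # $300.00
# The account holds $10,300 after a year, but the depositor earns
# only $300, the figure the AI-generated story got wrong.
```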

On the back of these revelations came much negative news coverage. A column in the LA Times called the AI chatbot “a plagiarist — and an idiot”, adding: “This level of misbehaviour would get a human student expelled or a journalist fired.” A Washington Post headline called CNET’s efforts “a journalistic disaster”. Futurism, which led the coverage of this incident, called the chatbot “a moron”.

However, such alarmist coverage of the incident risks burying the more nuanced conversation we need to have about the use of generative AI in financial journalism and finance content creation.

Firstly, we need to understand what generative AI can and cannot do. 

Generative AI cannot convince a whistle-blower to leak documents, build relationships with sources, land scoops or go out into the field and ask questions. Even when it puts together a story based on a prompt, what an AI chatbot does is “assemble” the story by churning through the vast repositories of information available to it. But the fact that it does not “understand” questions or prompts the way humans do does not mean that generative AI software has hit a ceiling in its capabilities. As the AI program is fed more prompts and processes more material, the quality of its responses is only likely to improve.

But secondly, and very crucially, it would be a mistake to attribute all the missteps at CNET solely, or even largely, to deficiencies in the technology. Subsequent hard-hitting reporting by the publication The Verge revealed that even within CNET, few people outside a small team knew much about the use of AI. Staff who were aware of it also told The Verge that workflows were unclear and that they themselves were often unable to ascertain which stories were written by colleagues and which were not.

The use of AI to generate content also needs to be understood as part of efforts by CNET’s parent company, Red Ventures, to generate more cash by churning out more search-engine-optimised content with advertising nestled within the articles. When more people read those articles and click on the advertising links, Red Ventures earns more revenue. Using AI rather than humans to write such articles is thus an effort to produce more content, and more profit, at lower cost.

The use of AI technology by particular companies or organisations for predatory purposes, or without much thought given to how to incorporate it in a transparent, responsible manner, is not an indictment of the technology and its potential.

AI has enormous promise in the fintech and financial journalism space. Fintech startups can potentially use generative AI to ideate and even draft marketing material or content for their websites. AI can also help journalists compile information, or plough through large troves of data and flag areas worth looking into. It can take on more mundane tasks like writing up press releases and more straightforward stories, freeing journalists to focus on enterprise reporting and investigative work.

What would be dangerous is, say, the use of AI giving one news organisation significant advantages over its competitors in speed or revenue, thereby kick-starting a race to the bottom in which other organisations feel compelled to adopt the technology just as recklessly, or risk being left behind, without being able to consider the ethical and practical questions its use raises.

It is therefore crucial that we think through the norms that should surround the use of AI and generative AI in fintech and business journalism. Prominent disclosure of AI-generated content, rather than obscuring its use, is one important norm. The procedures by which AI-generated content is edited by humans are another area where standards and governance policies will matter. In financial journalism especially, consumers and businesses make enormously significant decisions based on the available information about topics such as interest rates and loans, so accuracy is vital.

Speaking to The Washington Post, Hany Farid, a professor at the University of California, Berkeley, and an expert in deepfake technologies, wondered if “the seemingly authoritative AI voice led to the editors lowering their guard,” making them “less careful than they may have been with a human journalist’s writing”. In another possible explanation, Futurism drew parallels to self-driving cars, suggesting that editors went on “auto-pilot” much the way drivers of autonomous vehicles tend to do when they no longer need to actively work the controls. Regardless of the reason, it is vital that better practices are developed to ensure misinformation does not slip through the cracks.

Years after the downfall of Theranos and Elizabeth Holmes, The New York Times reported that there continued to be hesitation to invest in diagnostic companies and that female entrepreneurs were frequently compared to Ms. Holmes, even when it was unwarranted. Similarly, while investors and executives should be careful about the use of AI in journalism, it is crucial that they do not overreact and immediately dismiss any potential use because of what happened at CNET, but instead remain open to the possibilities AI may offer, especially in the future. 
