Media companies face backlash for undisclosed use of artificial intelligence in creating content
Sports Illustrated, a once-powerful publication, is the latest media company to come under fire for its use of artificial intelligence (AI) in generating articles. The magazine recently cut ties with a company that produced content for its website under the bylines of non-existent authors. While Sports Illustrated denies claims that the articles themselves were written by an AI tool, the incident raises questions about transparency and ethics as the journalism industry navigates the era of AI.
AI Experiments Gone Awry in Journalism
Earlier this year, both the Gannett newspaper chain and the CNET technology website faced similar issues with AI-generated content. As companies test the capabilities of AI, concerns about potential job losses among human workers arise. However, the application of AI in journalism is particularly challenging due to the industry’s reliance on truth and transparency. While media companies are free to experiment with AI, the problem lies in attempting to conceal its use, especially when done poorly.
Sports Illustrated’s Reputation at Stake
Sports Illustrated, now operated as a website and monthly publication by the Arena Group, was once a weekly magazine renowned for its exceptional writing. The magazine's reputation was called into question, however, after it emerged that some product review articles were attributed to authors who could not be identified. The website Futurism discovered that the portrait of one listed author, Drew Ortiz, was available for sale on a site that sells AI-generated headshots. After Futurism's inquiry, Sports Illustrated removed all authors with AI-generated portraits from its website without explanation.
Conflicting Accounts and Denials
Futurism quoted an unnamed source within Sports Illustrated who confirmed that artificial intelligence was used in creating some content, contradicting the magazine's denial. The magazine attributed the articles in question to a third-party company, AdVon Commerce, which claimed that the content was written and edited by humans. Sports Illustrated nonetheless condemned the writers' use of pen names and said it was investigating the matter internally. The Sports Illustrated Union demanded transparency from Arena Group management and adherence to basic journalistic standards.
Lessons from Similar Cases
Gannett, another media company, faced a similar situation when it paused an AI experiment that generated articles on high school sports events. The articles were bylined "LedeAI." The lack of explicit disclosure about AI's role in creating these articles led to negative publicity. Similarly, CNET used AI to create news articles about financial services topics, attributed to "CNET Money Staff." Only after the experiment was exposed did CNET discuss it with readers. Other media companies, like BuzzFeed, have been more transparent about their use of AI, crediting content to both AI and human writers.
The Associated Press and Responsible AI Usage
The Associated Press (AP) has been using technology to assist in articles about financial earnings reports since 2014 and, more recently, in some sports stories. At the end of each story, the AP includes a note explaining the role of technology in its production. This commitment to transparency helps maintain trust between the news organization and its readers.
Conclusion: The use of artificial intelligence in journalism is a topic that raises important ethical questions. Media companies must navigate the balance between technological advancements and the principles of truth and transparency. While experimentation with AI is not inherently problematic, concealing its use can damage a publication’s reputation. The Sports Illustrated incident serves as a reminder that honesty and transparency must be at the forefront of journalism as it enters the age of artificial intelligence.