An incident at Sports Illustrated reignites the debate about AI. (Image: Firn – adobe.stock.com)
Artificial intelligence has long since found its way into many people's everyday lives, and there is a heated debate about whether AI-generated content needs to be labeled so that fake images, such as the one of the Pope in a white down jacket, are not taken at face value.
The US magazine Sports Illustrated has now fueled the discussion.
Texts from the AI under a false name
What happened: Futurism discovered that the renowned sports publication Sports Illustrated was publishing articles created by an artificial intelligence under a non-existent author name.
Drew Ortiz, a supposed writer on the Sports Illustrated website, even had his own fake resume.
AI author Drew supposedly grew up in a farmhouse. (Image: Futurism)
This is how the matter came to light: while researching, Futurism discovered that the face of the alleged Drew Ortiz can be purchased on a website that sells AI-generated faces.
Drew’s likeness is available for purchase by anyone. (Image: Futurism)
Just a mistake?
The author of the Futurism article rules that out. She spoke to one of the people responsible for the AI-generated content and learned that there were even more fake authors serving as fronts for the machine. In other words, there was a system behind it.
When the whole thing came to light, the magazine's employees were horrified and demanded that the publication uphold its journalistic integrity.
Arena Group, which owns Sports Illustrated and the website, licensed the AI content through a third party and published it under its name.
Incidentally, Google recently presented the New York Times with an AI that could write news articles for the newspaper.
This is the current status
All articles written by the AI, along with the profiles of the supposed employees, have now been deleted from the publication's website.
Arena Group disputed the accuracy of Futurism's report, but said it had launched an internal investigation.
The third-party provider from which the texts were obtained asserts that all of its content was written by humans (a claim one may well question after reading an article like this). Pseudonyms were allegedly used only in "individual articles" to protect the authors' privacy.
A discussion has now broken out about to what extent, if at all, AI should be used in writing articles, since critics argue it undermines journalism and could cheaply replace editors, authors and journalists.
Author’s opinion
The problem is not the use of AI itself, but the fact that it was kept secret. Making people believe they are reading an article written by a real person when that is not the case is simply a lie.
Artificial intelligence will be used more and more, but such cases prevent people from gaining trust in the technology.
Added to this is the fact that AI makes it easy to spread false information, as the example of the Pope picture shows: many people thought it was real.
AI cannot replace real journalism, but when used correctly, it can help the people behind the texts, simplify our work and support us – as long as we don’t do anything stupid with it.
A renowned US magazine used AI for its articles and published them without labeling them as such, which has caused an uproar among journalists. How do you feel about this? Can AI be used in written texts as long as it is labeled? Or should only humans be allowed to work on articles? Let us know in the comments.