Friday 7th April: ChatGPT is making up fake Guardian articles. Here's how it's responding
Good morning from Chris. For those of you in the UK, hope you're enjoying your Bank Holiday!
User needs are at the heart of building a true bond with your audience - one that is based on trust and value. The new User Needs Model shows that it pays off to make fewer ‘Update me’ articles and seek a better balance of coverage across all needs, preferences and pain points of your website visitors.
In this whitepaper, we provide tools, techniques and examples, showing you how the model can work for you. Download it here.
Today's Media Roundup is brought to you by Smartocto.
The crunch is upon us. Tools for mocking up fake news articles have existed for years, but generative AI like ChatGPT has supercharged them. Even if you believe the threat is overblown, it would be the height of hubris for newspapers not to consider how to respond. So it's great to see this really well-argued piece from the Guardian's head of editorial innovation Chris Moran.
Moran explains that the Guardian is taking a deliberately cautious approach to AI: "Instead, we’ve created a working group and small engineering team to focus on learning about the technology, considering the public policy and IP questions around it, listening to academics and practitioners, talking to other organisations, consulting and training our staff, and exploring safely and responsibly how the technology performs when applied to journalistic use."
That's a robust response to a threat — and also potentially provides the Guardian with a point of differentiation from other news outlets that are rushing into the space. There are trust issues to consider here, and at the moment newspapers can ill afford to score any more own goals in that respect.
And speaking of AI, here's a practical guide from Full Fact as to how to go about spotting images ginned up by tools like Midjourney. It's a great start and ideally will increase the overall media literacy of the public, who are about to face a new wave of absolute bullshit whenever they go online. But as I argued last week, explaining the how isn't going to be enough — we need reporters who are explaining why these images are being made.
For once, Betteridge's Law might have been broken. This extremely, extremely depressing piece contends that the media's obligation to protect and inform the public has run up against the need for ratings and revenue — and that the latter is winning. We spoke at length last year about whether news outlets would change their behaviour when it comes to covering Trump. I suppose we're about to find out.
Apologies again for foisting a Musk header image on you in the morning, but this time it's good news... sort of. Germany has clearly delineated rules against hate speech on social platforms, and Twitter appears to have breached them repeatedly since Musk's takeover. As with the many other fines it's facing, this might be where Elon finally learns that there are consequences to his actions.
More from Media Voices
In February we heard from Andrew Ramsammy, Chief Operating Officer of Word in Black. The publication was founded in the aftermath of the murder of George Floyd, and brings together 10 of the nation’s leading Black publishers in a news collaborative. He discusses how the collaborative came together, how they’ve tripled revenue since launching, and other areas of opportunity for publishers to come together.