Watch Now: The Power of Google Discover for Publishers
This week on the 'SEO and Beers with Barry and Steve' podcast we are joined by Nicola Agius, Director of SEO and Discover at Reach plc.
Steve: Today we’re really excited because we have a very special guest on our show: Nicola Agius, Director of SEO and Discover at Reach plc. Welcome, Nicola.
Nicola: Thank you—wow, how did you learn how to pronounce my surname? No one can normally pronounce that.
Steve: I actually had to look it up. I’ve known you for a while but never really had to pronounce it. You’re also the only SEO expert I know who’s Director of SEO and specifically Discover.
Nicola: Yeah. It’s interesting—I’ve been at Reach for about two years now. When I first started, I was just SEO Director. Then a few weeks in, maybe a month, I was talking to my boss and it was like, “Oh, you’re Director of SEO and Discover.” And I was like, “Oh, okay.” We have quite a big focus on Discover. For a lot of publishers, Discover is the big traffic driver. What’s the statistic now—Google sends twice as much traffic via Discover as it does via Search—is that correct?
Steve: Yeah, it’s been quoted recently. It’s a huge driver, particularly to news publishers.
Nicola: I remember at a meetup a couple of years ago, a lot of publishers were saying, “We treat it as bonus traffic.” And I remember thinking, “Really? Just bonus?” I wonder now, two years on, when there’s been such a shift between search and discover, if people still treat it as bonus.
Barry: They’re not, but they should. But we can dig into that.
Steve: We’re doing a slightly different format today. We’ll cover three or four stories SEOs have been talking about recently, then do a deep dive into Discover—especially because we’ve had the specific Discover core update rolling out.
Responding to traffic decline narratives
Steve: One story we want to respond to is a piece published by the Reuters Institute for the Study of Journalism at the start of the year, looking at technology trends. They worked with Chartbeat and produced a chart showing an apparent decrease in overall traffic for both Discover and Organic Search.
That chart has been quoted a lot at conferences as “the truth,” giving the impression there’s no future in search. Then Press Gazette followed up asking publishing leaders about it, and some feedback sounded like they might consider not investing in SEO this year. So we want to respond to that. Barry, do you want to kick off?
Barry: I have very strong opinions. The Chartbeat data is deeply flawed and doesn’t reflect the reality of traffic trends we see in the UK and Europe. It may be heavily US-focused and skewed by a few very large publishers.
Those decreases may not necessarily be “because of AI,” but also because of things like site reputation abuse, algorithm updates, and so on. In the Press Gazette piece, there were counterpoints from publishers saying they’re not seeing those kinds of declines across the board—some decreases, but fairly stable.
Certain tactics that work well in Discover and News have continued to work. AI isn’t coming to steal all our traffic, especially for news. There was also a competing study by Graphite using Similarweb data showing a decrease in Google traffic to the wider web, but the decrease was about 2.5%.
People quoting the Chartbeat piece as gospel are tapping into a panic mindset that doesn’t reflect reality, and it pushes publishers to pivot away from optimising for search. That’s catastrophic.
Google is still the biggest driver of traffic to publishing sites. AI tools haven’t made a significant dent in Google’s dominance as a traffic source. If a publisher invests less in Google or abandons it, they’re committing suicide—competitors will eat their slice of the pie.
Yes, publishers should diversify—multi-channel, multi-format—but not at the expense of the biggest channel. Look at your own data: where is traffic actually coming from, who are you competing with, and how do you stay effective?
Steve: Nicola—what’s your take at Reach? As SEOs, we’re used to messy waves and spikes, especially around core updates, but the narrative from that chart is “everything’s in decline.”
Nicola: I hate SEO forecasting because it’s so out of our hands. A good place to start is looking at what the tech giants are doing. I’ve seen big SEO jobs going up—Anthropic advertising for an SEO lead, for example. The biggest companies are investing in SEO.
Also, Chartbeat has accuracy issues splitting between search and discover. A lot of research comes out with doom-and-gloom stories. I’d take a step back and look at the bigger picture.
Steve: It’s annoying when those charts get presented early at conferences and people don’t dig into details.
Nicola: If my competitors want to stop investing in SEO… maybe they should. Take your eye off the ball. I don’t mind!
Google Publisher Controls for AI Overviews
Steve: Next story: the Competition and Markets Authority in the UK “coming for” Google. Context: in October 2025, the CMA designated Google as having strategic market status in search services, allowing it to impose targeted rules. Part of that is pushing Google to introduce publisher controls over the visibility of content in AI Overviews—more choice and transparency. There are also elements around fair rankings, choice screens (making it easier to switch the default search engine on Android), and data portability.
Google responded quickly saying it will introduce a tool to allow publishers to opt out of AI Overviews. Details are limited. At an event with FT Strategies in partnership with Google, a Google speaker suggested it would be more than a simple on/off switch. They also said “we hear you” on more data in Search Console… and that was it.
Nicola—what’s your view on publishers getting more control over how their content is used in AI Overviews?
Nicola: If you opt out, what are the consequences? Less visibility in Discover and Search? And the way people search is changing—especially young people. They trust AI-generated answers. So maybe it’s not that we want less visibility; maybe we want to be compensated. If you’re going to “rent” our content, pay rent.
Barry: Two aspects. First: Google already considered granular publisher controls for AI and decided not to implement them—there was a leaked slide a while back showing options, and they picked one of the worst for publisher control. CMA forced their hand.
Second: it skirts the real issue. AI systems were trained on the greatest scale of copyright infringement in the history of mankind, and websites aren’t compensated. Citations in AI answers send negligible traffic compared to classic search.
Also: shortly after this, Microsoft launched an AI citations report in Bing Webmaster Tools that shows publishers when they get cited in AI-generated answers and for which prompts. So the tech exists. Google can do it—they choose not to. Like how Search Console still doesn’t provide certain filters publishers have asked for for years.
Rand Fishkin on AI tracking unreliability
Steve: Third story: an article Rand Fishkin published in January about AI tracking reliability. SparkToro recruited around 600 volunteers, each running the same set of prompts—several thousand prompt runs in total—to test how stable AI results are. The lists returned changed a lot: a very low chance of getting the same list twice, with ordering and length varying between runs—so lots of randomness. The takeaway: be careful treating AI tracking tools the way you’d treat search rankings in SEMrush or Google Search Console. Consider other metrics, like an overall “visibility” percentage, rather than fixed ranks.
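(An aside for anyone building their own reports: the “visibility percentage” idea Steve describes—counting how often a brand appears at all across many repeated prompt runs, rather than tracking an unstable fixed rank—can be sketched in a few lines of Python. The prompt results below are fabricated illustrative data, not figures from the SparkToro study.)

```python
from collections import Counter

def visibility_share(runs):
    """Percentage of prompt runs in which each brand appears at least once.

    runs: list of lists; each inner list is the brands cited in one AI answer.
    Order and list length vary run to run, so a fixed "rank" is unstable;
    appearance frequency across many runs is a steadier signal.
    """
    counts = Counter(brand for run in runs for brand in set(run))
    total = len(runs)
    return {brand: 100 * n / total for brand, n in counts.items()}

# Illustrative (made-up) results from repeating the same prompt four times:
runs = [
    ["BBC", "Guardian", "Reach"],
    ["Guardian", "BBC"],
    ["BBC", "Telegraph"],
    ["Guardian", "Reach", "BBC"],
]
shares = visibility_share(runs)
# BBC appears in 4/4 runs -> 100.0; Guardian in 3/4 -> 75.0
```

Even with wildly unstable orderings, a share metric like this converges as the number of runs grows, which is the point of preferring it over a single-snapshot rank.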
Nicola, are you using any tools at Reach, and what’s your take?
Nicola: Similarweb has some tracking, but we’re not going in hard. What’s the opportunity for publishers? Getting cited in ChatGPT or Gemini is unlikely to drive traffic. That doesn’t mean there’s no opportunity, but publishers need to think outside the box. I’ve been to AEO/AI talks that were more aimed at brands—they’re obsessed with AI visibility. The speakers always say the key is getting visibility on publishers. That’s where the opportunity is, and it’s a conversation we’re starting to have… but I can’t say too much.
Steve: So this is almost like branded content, perhaps.
Nicola: Yeah.
Barry: Citations are the new links. But AI tracking is a whole other shit show. These trackers burn energy generating prompts just to create dashboards for execs. The answers are unpredictable and inconsistent. Publishers should stay away from optimising “to get cited” because it doesn’t send traffic. Be the highest quality, most authoritative publisher you can be and you’ll get cited anyway.
Barry: Also: bad data is worse than no data. AI trackers give you bad data—illusion of certainty, not actionable. It’s noise.
Steve: But the dashboards look so pretty!
Barry: They look fantastic. Utterly meaningless, but pretty graphs.
Nicola: Toward the end of last year, a big AI tracking tool held a glitzy London event… it was a shit show. The “AEO tips” were cringe and inaccurate. I went to see if we should invest—after that, definitely not.
Deep dive — Google Discover
Steve: Nicola, let’s deep dive into Google Discover. How do you tackle it at Reach? And for those listening—how do you start investing in Discover? Can a small startup site get into Discover quickly?
Nicola: I almost smirked—normally I’d say no. But I’ve been seeing spam sites get in: private blog networks buying expired local business domains, deleting content, and posting the same recycled articles about speed limits, pensions, etc. I caught up with Google last week about this Discover core update. It’s designed specifically to try and fix the spam issue. The Discover feed is supposed to show content based on your search history, location, interests—Google has a lot of info—and it’s personalized. Everyone’s feed is different.
Nicola: We use tools to monitor what’s broadly in the feed: MyFeed has Discover monitoring. We’ve started using John Shehata’s NewsDash / Discover Pulse. There’s also Discover Snoop, which I haven’t used but publishers rave about.
Steve: What intelligence can you get from Search Console? Is it limited?
Nicola: Search Console is best for what’s working for you. But newsrooms always want competitor context—what others are seeing. Third-party tools help you see the wider market. Sometimes a topic drops. You need to know: did we do something, or did Google demote that category across Discover? If Google demotes film content, for example, there’s not much you can do except adjust your Discover-focused output and watch for it to return.
Steve: Discover optimisation is frustrating because it changes all the time. It’s spiky, unlike building organic search foundations. Some publishers put lots of eggs in the Discover basket.
Nicola: We do regular reporting. For us, it’s a strategy because it’s such a huge traffic driver. The key is monitoring patterns: topics, buzzwords, categories. For example, food content: we saw a drop, but market-wide there wasn’t a drop—the type of food content Google favored changed (more supermarket/product news vs “secret ingredient” style).
Steve: How do editorial teams respond? In the old model you could target a topic for months with breaking and evergreen support, but in Discover it feels almost daily.
Nicola: It’s volatile. This time last year, national news publishers dominated Discover. After the June core update, there was a big shift: regionals and niche sites got more visibility, so nationals got relatively less. No one would have predicted that. Then around November, X/Twitter became very visible in Discover; by December/January in the UK it was the most visible domain overtaking YouTube—then in February Google pressed another button and it dropped massively. That’s why forecasting is hard.
Steve: So: keep looking at the data. With the recent core update, have you seen changes? It’s been a couple of weeks since rollout began.
Nicola: There’s a lot of volatility. Google said it’s rolling out to US sites first. I asked Google last week if they were sure, because we’re seeing volatility. They said rollout is phased (not 0 to 100). They also confirmed the UK and India are next—so that’s probably what we’re seeing.
Barry: I’ve already been approached by an Indian publisher suffering from this update, plus the December core update.
Steve: Looking ahead, as Google rolls Gemini onto more surfaces, do you think Discover becomes more personalised?
Nicola: Preferred sources. Google wants the feed as personalised as possible to drive engagement. Encouraging people to make your title a preferred source is important—we explore ways to do that.
Barry: Uptake is tiny. Publishers integrate it and get a fraction-of-a-percent use. It feels more like placation than meaningful optimisation.
Nicola: When you do set preferred sources, it can increase visibility, but the UI is too complicated. It should be streamlined.
Steve: For publishers—when should you act during a core update?
Nicola: We give guidance during rollouts, but for conclusions you wait until it’s finished and then about two weeks after for things to settle. We’ve seen dips swing back. If you act too soon, you make decisions on incomplete data.
Barry: Nicola, Reach has 70 sites—so you can spread risk and tactics. If you have one site and you build your editorial strategy around Discover, the risk is too high. Discover encourages bad habits: clickbait, thin content, rehashing the same story 17 times. If you lose 70–90% of Discover traffic, meaningful recovery can take 18–24 months. For one site, that’s existential. So treat Discover as bonus traffic in the sense that you must be able to survive without it—even if you optimize for it.
Steve: It’s another example of publishers being led by an algorithm. We saw this with Facebook in 2017, then the shift back to search, then AI overviews, and now Discover. You can end up with whole teams chasing whatever pattern is currently rewarded, like “shops closing on your high street,” until you stop and think: what are we doing?
Nicola: Bonus doesn’t mean you shouldn’t work for it. Bonus is nice to have.
Barry: Agreed—optimise to an extent, but don’t base the entire editorial strategy on it. The slippery slope is real.
Nicola: Also worth saying: Discover is an extension of Search—get Search right if you want to get Discover right.
Barry: I would’ve agreed up until February 5th when Google announced a Discover-only core update. That suggests Discover is becoming its own system. Same foundations (crawl/index/quality), but separate “levers” on top—like what Google calls “twiddlers”—and now a set for Discover.
Steve: Did you say “twiddler”?
Barry: The technical term is “twiddlers.” I didn’t make it up.
Wrap-up
Steve: Let’s wrap up. Nicola, it’s been really useful going deep on Discover and newsroom dynamics. We should get you back in a few months and see how things settle. Last thing—Barry, you have a paywall masterclass coming up.
Barry: Yes—on March 4th I’m doing a masterclass with Jesse and Shelby from What The F*ck Is SEO and Harry Clarkson-Bennet from The Telegraph on paywalls: technical setup, commercial strategy, editorial strategy, plus Q&A.
Steve: I’ve also got an SEO and AI visibility masterclass day event via Journalism.co.uk. We’ll cover a lot of what we discussed today.
Nicola: No announcements—but if you see Reach sites in your Discover feed, definitely click on them. It’s good for the environment.
Barry: Absolutely—I’ll definitely click on the weekly website.
Steve: Okay everyone—good to see you again. Take care. Bye.
Nicola: Take care. Ciao.
Barry: See you at the next one.

