- TikTok is pulling back on testing an AI feature that added text summaries to videos.
- The tool was prone to hallucinations, comparing Charli D'Amelio to blueberries and a dog training video to origami.
- Going forward, the feature will focus on identifying products in videos, a TikTok spokesperson said.
Did TikTok just have its "glue pizza" moment?
The company is pulling back on a new artificial intelligence feature it was testing that went haywire, adding wildly inaccurate AI-generated text summaries to videos from users like Charli D'Amelio, Shakira, and Saturday Night Live.
These "AI overviews" were designed to provide additional context for a video, recommend products similar to what's on-screen, and generally explain what was happening. While the tool did a good job summarizing some posts, for others, it hallucinated more than Bryan Johnson did during his livestream mushroom trip.
Here are a few chaotic examples of AI overviews this reporter saw on the app in the past week:
- The AI feature described a video of Charli D'Amelio, sitting alone in front of a white wall and talking directly to the camera, as a "collection of various blueberries with different toppings."
- A post from a dog trainer explaining why dogs kick their feet after going to the bathroom was described as "a captivating display of intricate origami art, meticulously folded from a single sheet."
- A video from Shakira promoting a new song release, per the AI overview, was "a repetitive sequence of several distinct blue shapes appearing and moving across the screen."
- A viral post about feeling heartbroken from a user named Victoria was described as a "mesmerizing close-up of a tiny hand repeatedly tracing intricate patterns on a smooth surface."
- A video of Olivia Rodrigo promoting her upcoming appearance on SNL was called a "person's face being gradually replaced by a random, nonsensical string of letters and numbers."
One Reddit user wrote that it was as if the feature would look at a video and then "independently open a different tab and use a random text generator to create a caption."
Now, following user feedback, TikTok is pulling back on the test, a company spokesperson confirmed to Business Insider. The AI overview feature has been updated to focus on identifying products in a video rather than describing a video's full contents, they said.
The tool, which had been in testing for a few months, was available to a limited set of users in the US and a few other markets, the spokesperson said, describing it as an experiment.
TikTok declined to share which models it used to power its AI overviews, but an in-app description of the feature says it relied on either TikTok's own AI tech or third-party products.
Seeing hallucinatory AI summaries show up in my feed this week felt like a throwback to when ChatGPT and other early AI products regularly made stuff up.
When Google released its version of AI overviews in 2024, the feature confidently proclaimed that a dog had played in the NHL, for example. It told my coworker Katie Notopoulos to add glue to her pizza to keep the cheese from sliding off.
While AI tools are getting better at answering some questions — a recent analysis by the AI firm Oumi found Google's AI overviews were accurate about 90% of the time — it felt oddly comforting to see that a technology that doomsdayers predict will wipe out white-collar jobs and take over many facets of our lives can still fail in silly and surprising ways.