LA Times’ AI “Insights”: When Artificial Intelligence Gets the Last Word (and It’s Not Always Pretty)

Hey, Chad here. You know I’m not shy about calling out tech hype, and when it comes to AI worming its way into journalism, I’ve got plenty to say—and most of it’s not exactly glowing. But let’s break down what’s actually happening at the LA Times, why it has media insiders rolling their eyes, and whether this AI “Insights” experiment is the future of news or just another headline-grabbing gimmick.

What’s the Deal With LA Times’ AI “Insights”?
If you’ve read an LA Times opinion piece lately, you may have noticed something new tacked on at the end: a dropdown tab labeled “Insights.” Click it, and you’ll get a quick-hit summary of the article, a supposed political alignment label (think “center left” or “center right”), and—here’s the kicker—a set of AI-generated bullet points presenting “different views on the topic.” This is the LA Times’ attempt to inject a “both sides” flavor into every opinion column, powered by Perplexity AI and Particle (5) (4) (1).
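To make that concrete, here's a minimal sketch of what the data behind such a panel might look like. The field names below (summary, alignment, counterpoints) are my own guesses for illustration, not the LA Times' or Perplexity's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Counterpoint:
    """One AI-generated 'different view,' optionally with a source link."""
    text: str
    source_url: str | None = None  # links show up only sometimes

@dataclass
class InsightsPanel:
    """Hypothetical shape of the 'Insights' dropdown on an opinion piece."""
    summary: str                  # quick-hit recap of the article
    alignment: str                # e.g. "center left" or "center right"
    counterpoints: list[Counterpoint] = field(default_factory=list)
```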
The feature rolled out on March 3, 2025, and so far it's only being used on opinion pieces, not straight news. The idea? Give readers a taste of the counterarguments without sending them on a wild goose chase across the internet (5).
Why Is the LA Times Doing This?
Let’s give a little context. Since Patrick Soon-Shiong bought the LA Times in 2018, he’s been on a mission to “de-echo chamber” the paper. He’s said publicly that he wants “voices from all sides” and has pushed for clearer separation between news and opinion. Last year, the Times even refused to endorse a presidential candidate, leading to some high-profile resignations on the editorial board (5) (4).
The “Insights” tool is the latest twist in this push for impartiality. The stated goal is to offer readers a more balanced view and encourage them to think critically by surfacing alternative perspectives right alongside the columnist’s take (4).
How Does “Insights” Actually Work?
Here's the workflow (see the sketch after this list):
- AI scans the opinion piece and identifies its main arguments.
- It then generates a summary, slaps on a political alignment label, and digs up counterpoints from across the web.
- These counterpoints are presented as bullet points, sometimes with links to sources so you can (in theory) dig deeper (5).
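If you wanted to mock up that pipeline yourself, it might look something like this. To be clear, this is a sketch under my own assumptions, not the Times' actual implementation: every helper here (summarize, classify_alignment, extract_main_arguments, find_counterpoint) is a stand-in for whatever LLM or search calls Perplexity and Particle actually make.

```python
# Stand-ins for LLM / web-search calls. A real system would hit a model
# or search API here; these just return canned values for illustration.
def summarize(text: str) -> str:
    return "One-paragraph recap of the piece."

def classify_alignment(text: str) -> str:
    return "center left"  # the feature labels pieces on a left/right axis

def extract_main_arguments(text: str) -> list[str]:
    return ["main claim 1", "main claim 2"]

def find_counterpoint(claim: str) -> dict:
    # A real version would search the web and attach source links
    return {"text": f"A different view on: {claim}", "source_url": None}

def generate_insights(article_text: str) -> dict:
    """Hypothetical end-to-end pipeline: summary -> bias label -> counterpoints."""
    return {
        "summary": summarize(article_text),
        "alignment": classify_alignment(article_text),
        "counterpoints": [
            find_counterpoint(claim)
            for claim in extract_main_arguments(article_text)
        ],
    }

if __name__ == "__main__":
    print(generate_insights("Full text of an opinion column..."))
```

Notice what's missing even from this toy version: nothing checks whether a "counterpoint" is accurate or argued in good faith. That gap is exactly where the real feature gets into trouble, as we'll see below.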
It’s supposed to make you a more informed reader. But does it?
The Good, the Bad, and the “Mealy-Mouthed”
Let’s be honest: there are some upsides here.
What Works:
- Linked Citations: The AI-generated bullet points sometimes include links, so you can check the sources yourself—if you’re the kind of reader who actually clicks through (5).
- Quick Summaries: If you’re in a rush, the summary and bias label give you a snapshot of the piece’s angle (4).
What Doesn’t:
- False Equivalence: Not every issue is a “both sides” debate. Sometimes, giving equal weight to fringe or debunked ideas just muddies the waters (4).
- Tone-Deaf Counterpoints: There have already been facepalm moments, like when the AI reframed the Ku Klux Klan as a “white Protestant culture reacting to societal changes,” completely glossing over its violent racist history. That’s not just bad journalism; it’s dangerous (4).
- “Mealy-Mouthed” Language: The AI’s counterpoints tend to be vague and noncommittal. For example, in a piece about ICE detainments, the AI offered generic statements about “addressing a declared ‘invasion’ at the southern border” without challenging or contextualizing the claim.