According to PYMNTS.com, Google has agreed to a proposed $68 million settlement, filed on Friday, January 23, in a class action lawsuit; the deal still requires a judge’s approval. The lawsuit alleged that when Google Assistant misheard ordinary speech as its “Hey Google” or “Okay Google” hot words—a problem called “false accepts”—it illegally recorded and disseminated users’ private conversations for targeted ads. Google denied wrongdoing but settled to avoid litigation. The news follows a January 2 report that Apple agreed to pay $95 million to settle a similar privacy lawsuit over its Siri assistant. A PYMNTS Intelligence report found that over the preceding 15 months, consumer trust in voice assistants declined sharply, with the share of consumers confident in them dropping from 73% to 60%.
The Settlement Game
Here’s the thing: these settlements aren’t really admissions of guilt. Both Google and Apple denied any wrongdoing. But they are calculated business decisions. Litigation is expensive, messy, and generates terrible PR. So, for Google, writing a $68 million check—and for Apple, a $95 million one—is basically the cost of making a very noisy problem go away. It’s a rounding error in their quarterly earnings, but the allegations strike at the heart of the modern tech bargain: we get convenience, they get data. When that data collection happens because a device mistakenly thinks you talked to it? That’s a nightmare scenario for user trust, and it’s exactly what these lawsuits hinge on.
The Real Problem Is Trust
And that brings us to the real casualty here: trust. The PYMNTS report nails it. Confidence that these assistants will ever be as smart and reliable as humans dropped 13 percentage points in just over a year, and skepticism is way up. Why? Because the tech has largely plateaued. We were promised conversational AI, but we still get “false accepts” and clumsy interactions. The magic wore off, and now we’re left contemplating the creepy downside—our devices listening in, even by accident. That erosion of trust isn’t just an abstract sentiment number; it translates directly into “declining usage across age groups.” If people don’t trust it, they’ll stop using it. It’s that simple.
A Pattern of Convenience Over Privacy?
So, what does this pattern tell us? First, that “false accepts” are a systemic, industry-wide issue, not a one-company bug. Second, that the business model of using data to refine services and target ads is fundamentally at odds with accidental recording. Apple’s settlement, for instance, requires the company to confirm it has deleted old Siri recordings and to explain its opt-in policies better. But those fixes are reactive. These companies built incredibly sensitive microphones into our homes and pockets with a trigger that’s, let’s be honest, still kinda flaky. They prioritized always-on convenience and then seemed surprised when it led to privacy blowback. Now they’re paying for it, literally. The sketch below shows why that trigger is so hard to get right.
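To make “false accepts” concrete, here is a minimal, hypothetical Python sketch of the tradeoff every wake-word detector faces. This is not Google’s or Apple’s actual code; the function name, the thresholds, and the confidence score are all made up for illustration. The structural point is real, though: wherever the confidence threshold sits, lowering it produces more accidental recordings, and raising it produces more ignored commands.

```python
# Hypothetical sketch only -- not Google's or Apple's actual pipeline.
# A wake-word detector assigns a confidence score to each audio snippet;
# a single threshold then decides whether the device starts recording.

def should_wake(confidence: float, threshold: float) -> bool:
    """Return True if the detector is confident enough to start recording.

    Low threshold  -> responsive assistant, but more "false accepts"
                      (recording speech never meant for the device).
    High threshold -> better privacy, but more "false rejects"
                      (ignoring genuine "Hey Google" / "Hey Siri" commands).
    """
    return confidence >= threshold

# The same borderline sound (say, TV dialogue that vaguely resembles the
# hotword) under two different policies:
borderline = 0.62  # made-up confidence score for illustration

print(should_wake(borderline, threshold=0.50))  # True  -> a false accept
print(should_wake(borderline, threshold=0.85))  # False -> the device stays quiet
```

Real systems layer mitigations on top of this, such as secondary on-device verification or letting users review and delete recordings, but the core tension in the sketch never goes away: any threshold low enough to feel magical will sometimes record conversations it shouldn’t.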
What Happens Next?
Will these settlements fix anything? In the short term, maybe a few more guardrails and clearer settings. But the underlying tension remains. The next frontier is generative AI being baked into these assistants, which will require even more data to function well. How do you train a super-smart, conversational Siri or Google Assistant without processing user speech? It’s a tough circle to square. For now, the message from these multimillion-dollar settlements is clear: if you’re going to put a listener in every room, you’d better make damn sure it only listens when it’s supposed to. And frankly, they still haven’t fully solved that basic problem.
