According to Forbes, it’s been 1,096 days since OpenAI launched ChatGPT, which hit one million users in five days and now sees over 800 million weekly users. In that time, AI model “IQ” scores on benchmarks like the Mensa Norway test have jumped from 85 for GPT-3.5 to 126 for models like xAI’s Grok 4 Expert Mode. The cost of GPT-3.5-level performance has collapsed 280-fold, from $20 to $0.07 per million tokens, while training costs are exploding, with billion-dollar runs already underway and $100 billion training clusters expected by 2027. Gartner predicts that 15% of work decisions will be made autonomously by AI agents by 2028, and the agent market is forecast to grow from $7.8 billion in 2025 to $52.6 billion by 2030.
Benchmarks are a trap
Here’s the thing about those impressive IQ scores: they’re almost useless for real business. A model that scores 126 can still hallucinate legal cases and miss ethical blunders a kid would spot. The gap between acing a pattern-recognition test and having reliable, real-world judgment is still a chasm. The trajectory is undeniable—AI is getting scarily good at specific cognitive tasks—but that trajectory isn’t destiny. For leaders, the question has shifted from “Can it think?” to “Can we deploy it without it blowing up?” That’s a governance nightmare, not a tech demo.
The Jevons Paradox of AI
So costs are plummeting, right? That must mean we’ll all save money. Not exactly. This is where the Jevons Paradox kicks in. As AI inference gets cheaper, we don’t just do the old things for less. We invent entirely new, previously unthinkable uses for it, which drives total consumption through the roof. We’re already seeing it in compute: training costs are scaling exponentially, with Epoch AI projecting a $200 billion supercomputer by 2030 that would need as much power as nine nuclear plants. The same paradox applies to jobs. The narrative isn’t simple replacement; it’s messy, painful reconfiguration. Planning for a smooth transition is a fantasy.
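To make the paradox concrete, here’s a back-of-the-envelope sketch in Python. The per-token prices are the ones cited above; the usage figures are purely hypothetical assumptions, chosen only to show how total spend can climb even as unit costs collapse.

```python
# Toy illustration of the Jevons Paradox applied to AI inference.
# Prices are from the article; the usage numbers are hypothetical.

old_price_per_m_tokens = 20.00   # GPT-3.5-era cost per million tokens ($)
new_price_per_m_tokens = 0.07    # today's cost for the same capability ($)

old_usage_m_tokens = 10          # hypothetical: 10M tokens/month of "old" use cases
usage_growth = 1_000             # hypothetical: cheap inference unlocks 1,000x more use

old_spend = old_price_per_m_tokens * old_usage_m_tokens
new_spend = new_price_per_m_tokens * old_usage_m_tokens * usage_growth

print(f"Per-token cost fell {old_price_per_m_tokens / new_price_per_m_tokens:.0f}x")
print(f"Monthly spend went from ${old_spend:,.0f} to ${new_spend:,.0f}")
# Per-token cost fell 286x
# Monthly spend went from $200 to $700
```

The exact numbers don’t matter; the point is that when cheap inference unlocks demand that grows faster than the price falls, the total bill goes up, not down.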
The agent governance crisis
This is the big shift: from AI as an assistant you query to AI as an operator that acts. Agents will book, approve, manage, and decide. And when an autonomous agent makes a costly error—and it will—who’s on the hook? This isn’t a future compliance checkbox; it’s an existential brand question happening now. Trust isn’t built by claiming your AI is trustworthy in a press release. It’s built—or destroyed in minutes—by how your AI behaves when it’s autonomously emailing a key client or denying a claim. The brands that survive will be those that engineered accountability in, not bolted it on as an afterthought.
The only moats that matter
When everyone can access the same powerful base models from OpenAI, Google, or Anthropic, your tech stack is not a competitive advantage. It’s a commodity. Forbes argues sustainable advantage shifts to four areas: data, brand, people, and distribution. But it’s *how* you build them that counts. A data flywheel that just entrenches bias is a liability. A brand promise of trust that shatters under pressure is worthless. And as McKinsey’s 2025 State of AI report notes, the people who matter are those who redesign workflows and exercise judgment, not those doing automatable tasks. They’re the irreplaceable layer directing the AI orchestra. In a world of cheap intelligence, the premium on human wisdom and operational courage just went way, way up. The next 1,000 days will separate the companies that understand that from the ones that become someone else’s data.
