
Today's issue is going to be a bit different.
Rather than discussing AI releases or industry news, I wanted to zoom out and look at AI from a multi-decade, 40,000-foot view.
The idea for this issue is something I think about often, and it was sparked by a quote from Sam Altman in this article from Forbes.
In it, Sam says his succession plan is to "hand the company off to an AI model."
Odds are we're years away from artificial intelligence being advanced enough to literally run one of the world's largest companies.
At the same time, it's not that hard to imagine such a thing.
And here's why.
Regardless of which side of the aisle you fall on, trust in many institutions - both in the US and worldwide - is at or near all-time lows.
According to a survey from Gallup, in 2025, Americans' trust in the media reached a new low of just 28%.
According to Pew Research, only 17% of Americans trust the federal government to "do what's right."
And it's not just the United States.
In Canada, a record-high 67% of Canadians believe the government "misleads the public."
In Australia, a survey from Bond University shows just 16% of Australians "trust politicians."
With an affordability crisis sweeping the English-speaking world, and political polarization at the highest levels most anyone has seen during their lifetime, it's understandable why many people believe the system is broken.
While I won't pretend to have the answers, I'm a firm believer that AI could solve many of today's problems.
Especially as it relates to operating large monolithic governments/companies.
And the reason for that is very simple:
If properly configured - which I will not pretend is an easy feat - having an AI run a country or corporation would eliminate most of the problems that plague today's institutions.
Specifically, it could root out:
Fraud and waste
Gender / sexual orientation-based discrimination
Race-based discrimination
Insider trading
Shady backroom deals
Misallocation of funds
Pork barrel spending
Office politics
Nepotism
Opaque promotion criteria
Tax evasion
Unfair and predatory lending practices
Unfair credit score enforcement
And a host of others
Let's take the example of a CEO so we can avoid making this issue political.
Imagine an AI CEO configured to maximize both profits and employee well-being.
In a company like that, you wouldn't have to worry about:
Jenny getting a promotion instead of someone else for no other reason than the fact she's sleeping with the boss
The VP's nephew getting hired instead of a more qualified candidate
Random salespeople abusing their corporate expense account
High-performing team members constantly having to pick up the slack for low performers
Lawsuits from rejected job applicants
And so on
Instead, every single decision made from the top down would be optimized for two things and two things only: profits and employee well-being.
Yes, this is a very hypothetical situation.
For this to work, people would need to have faith that the underlying model isn't optimized to benefit the few while exploiting the many.
And selling the masses on something like this would be a gargantuan task.
But if we look at how frustrated people are, and how much worse things could get, I don't think the idea of having AI leaders is that far-fetched.
In fact, I would bet money we see the first AI CEO before 2030.
I'm not saying it will be at a hundred billion or trillion-dollar company like Netflix.
Maybe we see it somewhere at the bottom of the Fortune 500.
With that said, all we need is one successful test run to get things off the ground.
The perfect example of this involves the four-day work week.
In 2025, a pilot program in Britain found that 100% of companies experimenting with a four-day work week reported dramatically improved employee satisfaction.
That led all 17 participating companies to adopt either a four-day work week or a nine-day fortnight (meaning one work day off every two weeks).
Admittedly, this is just one example.
But it proves an important point.
For dramatic change to take place, somebody somewhere has to be willing to go first and implement the change in their own organization.
Meaning, a real-life CEO is going to have to step aside and let an AI CEO legitimately run their company.
It's a highly futuristic idea.
But if you know anything about tech nerds in Silicon Valley, you know hundreds (if not thousands) of them would love nothing more than to be the first to hand over the reins to an AI and have the transition succeed.
It's the type of tech lore that would cement someone's legacy forever.
And because of that, I think we'll see AI leadership emerge in business before politics.
From there, the Overton window will shift to such a large degree that 'early adopter' leaders worldwide will begin implementing the technology in their own organizations.
In fact, in Albania, they've already begun testing a cabinet-level position that's run by an AI designed to eliminate corruption.
While this isn't a full-on president or prime minister role, it shows forward-thinking countries are heading in this direction.
And as soon as one of them pulls it off successfully, and the idea gains public support, we could see a tidal wave of AI leadership.
Of course, all of this assumes "the people in power" would step aside willingly.*
*Which has a near-zero chance of happening.
However, if the public's frustration with leadership reaches a high enough fever pitch - and we get something close to a revolution - it would not surprise me if people turn to a more neutral, futuristic type of government designed to optimize for the masses instead of the few.
Only time will tell.
Catch you next time,
AI Society Team

