Tech journalist Karen Hao has a succinct way of describing OpenAI, the creator of ChatGPT, that she’s been reporting on since 2018.
Hao believes OpenAI is an empire – the leader of a “modern-day colonial world order” – where the richest AI companies extract unfathomable amounts of resources and exploit workers in the pursuit of superintelligence.
But as the Hong Kong-based reporter explains, it didn’t start out like this.
In her new book, Empire of AI: The Dreams and Nightmares of Sam Altman’s OpenAI, Hao follows the rise of OpenAI. She tracks how the company transformed from a non-profit that positioned itself as an idealistic underdog into the world’s largest AI company worth US$300-billion.
Alongside fly-on-the-wall observations of OpenAI’s work culture, built from hundreds of interviews with employees, e-mails and Slack conversations, Hao pulls back the curtain on early investor Elon Musk’s departure and the ouster and reinstatement of charismatic chief executive officer Sam Altman. She also reports from Colombia and Kenya, where she interviews low-wage contract workers who were tasked with categorizing the severity of graphic content used to train ChatGPT.
Hao spoke to The Globe about how she is wary of OpenAI (and the industry’s other AI superpowers), but still hopeful that it’s not too late for a different way forward.
How did you first start reporting on AI?
I was trained as an engineer and worked in Silicon Valley. In the brief time I was there, I became very disenchanted with what I saw. I realized quickly that the profit incentives that drove Silicon Valley were not compatible with building technology in the public interest.
There was a culture of being detached from reality. People were always looking at how amazing the future was going to be, and literally you walk out the door and the present is really grim. How can you actually build technologies that better society when you don’t even have a pulse on society?
I switched into journalism in 2016 when the U.S., and I think the world at large, was just starting to reckon with this idea that Silicon Valley wasn’t necessarily what it said it was and maybe the platforms that it had built were actually interfering with democracy. I ended up specializing in AI in 2018 and realized that it was just a microcosm of all of the issues that I wanted to explore.
OpenAI started with the premise of creating AI for the betterment of society, but you argue that mission slipped as it started focusing on scaling products and profits. Did that transformation surprise you?
I don’t think I was necessarily surprised, but I was definitely disappointed. I had a lot of hope that maybe OpenAI had actually figured out a way to balance the societal mission and the need for profit. An observer of the industry had said it was a beacon of hope. But then I realized, oh wait, no, this is basically just like any other Silicon Valley company.
Did you see a big evolution in Sam Altman himself?
I think the biggest evolution, and he says this himself, is that [with his first startup Loopt] he didn’t necessarily have a greater purpose. He just thought it was a cool idea. He wanted to place himself in the centre of the hot activity in the valley, and being the founder was the best way to do so.
Later, he started having more grandiose ambitions. He wanted to go into technologies that would shape the trajectory of humanity. He hit upon this idea that the people who say and do those things end up not just amassing a lot more influence and wealth, but they also get written into the textbooks of history.
The other thing about Altman is it’s really hard to pin down whether he believes these things. Regardless of who I talked to and how closely or how long they worked with him, no one could really articulate what Sam ultimately believes and what actually drives him, whether it really is these big ambitious missions that he paints, or whether it’s to be in the centre of power.
That’s probably true for a lot of successful tech CEOs.
Silicon Valley has long prioritized storytelling as a skill set. If you can tell the best story, that’s what’s going to shoot you to the top. Sam Altman is a once-in-a-generation talent when it comes to storytelling. That’s why he’s so good at fundraising, why he’s so good at persuading talent to join his next venture. But there’s this idea that’s become rooted in Silicon Valley that storytelling is also a game.
And there’s a lot that’s left out of the story. You write about meeting workers in South America and Africa who sifted through traumatizing content that was used to train ChatGPT and rate its severity.
When I went to Kenya to meet the workers who had been contracted by OpenAI, their lives were controlled by the work. They were dealing with work that was inherently toxic and psychologically damaging.
There were multiple times when I was reporting this story, and even relistening to the interviews, that I would just burst into tears. It’s crazy that we have developed a system in which this is allowed to happen, and to fly under the radar.
You argue the industry needs more regulation, and Canada just got its first ever minister of AI. What kind of regulation do you think is needed?
There should be environmental regulations on where data centres get deployed – and how much energy and water they’re allowed to use. Transparency laws, so that people know when data centres are coming to their communities. In the U.S., companies are using shell entities to hide the fact that they’re building a data centre somewhere.
Now that the book’s out, how are you feeling?
It is daunting to have something go out in the world that is going to ruffle the feathers of extremely powerful people. When I finished writing the book, it felt like I had excavated my soul on the pages. This is all I have to give, and maybe that’s still not going to be enough. That’s a really overwhelming thought.
But I’m cautiously optimistic. I wrote this book to be a platform for other people to build on. We are at a moment where we have no time to lose in implementing sound policies to make sure that ultimately we reap the benefits of AI, without it coming at the cost of democracy. I hope it’ll help a lot of people find ways to assert their voice on how they actually want AI development to go in the future.
This interview has been condensed and edited.