Hello, and welcome to Decoder! I’m Jon Fortt — CNBC journalist, cohost of Closing Bell: Overtime, and creator of the Fortt Knox streaming series on LinkedIn. This is the last episode I’ll be guest-hosting for Nilay while he’s out on parental leave. We have an exciting crew who will take over for me after that, so stay tuned.
Today, I’m talking with Richard Robinson, who is the cofounder and CEO of Robin AI. Richard has a fascinating resume: he was a corporate lawyer for high-profile firms in London before founding Robin in 2019 to bring AI tools to the legal profession, using a mix of human lawyers and automated software expertise. That means Robin predates the big generative AI boom that kicked off when ChatGPT launched in 2022.
As you’ll hear Richard say, the tools his company was building early on were based on fairly traditional AI technology — what we would have just called “machine learning” a few years ago. But as more powerful models and the chatbot explosion have transformed industries of all types, Robin AI is expanding its ambitions. It’s moving beyond just using AI to parse legal contracts into what Richard is envisioning as an entire AI-powered legal services business.
AI can be unreliable, though, and when you’re working in law, unreliable doesn’t really cut it. It’s impossible to keep count of how many headlines we’ve already seen about lawyers using ChatGPT when they shouldn’t, citing nonexistent cases and law in their filings. Those attorneys have faced not only scathing rebukes from judges but, in some cases, fines and sanctions as well.
Naturally, I had to ask Richard about hallucinations, how he thinks the industry could move forward here, and how he’s working to make sure Robin’s AI products don’t land any law firms in hot water.
But Richard’s background also includes professional debate: he was the head debate coach at Eton College. Much of his expertise here, right down to how he structures his answers to some of my questions, can be traced back to just how experienced he is with the art of argumentation.
So, I really wanted to spend time talking through Richard’s history with debate, how it ties into both the AI and legal industries, and how these new technologies are making us reevaluate the difference between facts and truth in unprecedented ways.
Okay: Robin AI CEO Richard Robinson. Here we go.
This interview has been lightly edited for length and clarity.
Richard Robinson, founder and CEO of Robin AI. Great to have you here on Decoder.
Thanks for having me. I really appreciate it. It’s great to be here. I’m a big listener of the show.
We’ve spoken before. I’m going to be all over the place here, but I want to start off with Robin AI. We’re talking about AI in a lot of different ways nowadays. I started off my Decoder run with former Google employee Cassie Kozyrkov, talking to her about decision science.
But this is a specific application of artificial intelligence in an industry where there’s a lot of thinking going on, and there ought to be — the legal industry. Tell me, what is Robin AI? What’s the latest?
Well, we’re building an AI lawyer, and we’re starting by helping solve problems for businesses. Our goal is essentially to help businesses grow, because one of the biggest impediments to business growth isn’t revenue and it isn’t managing your costs — it’s legal complexity. Legal problems can actually slow down businesses. So, we exist to solve those problems.
We’ve built a system that helps a business understand all of the laws and regulations that apply to them, and also all the commitments that they’ve made, their rights, their obligations, and their policies. We use AI to make it easy to understand that information and easy to use that information and ask questions about that information to solve legal problems. We call it legal intelligence. We’re taking the latest AI technologies to law school, and we’re giving them to the world’s biggest businesses to help them grow.
A year and a half ago, I talked to you, and your description was a lot heavier on contracts. But you said, “We’re heading in a direction where we’re going to be handling more than that.” It sounds like you’re more firmly in that direction now.
Yeah, that’s correct. We’ve always been limited by the technology that’s available. Before ChatGPT, we had very traditional AI models. Today we have, as you know, much more performant models, and that’s just allowed us to expand our ambition. You’re completely right, it’s not just about contracts anymore. It’s about policies, it’s about regulations, it’s about the different laws that apply to a business. We want to help them understand their entire legal landscape.
Give me a scenario here, a case study, on the sorts of things your customers are able to sort through using your technology. Recently, you amped up Robin’s presence on AWS Marketplace. So, there are a lot more types of companies that are going to be able to plug Robin AI’s technology into all kinds of software and data that they have available.
So, case study, what’s the technology doing now? How is that kind of hyperscaler cloud platform potentially going to open up the possibilities for you?
We help solve concrete legal problems. A good example is that every day, people at our customers’ organizations want to know whether they’re doing something that’s compliant with their company policies. Those policies are uploaded to our platform, and anybody can just ask a question that historically would’ve gone to the legal or compliance teams. They can say, “I’ve been offered tickets to the Rangers game. Am I allowed to go under the company policy?” And we can use AI to intelligently answer that question.
Every day, businesses are signing contracts. That’s how they record pretty much all of their commercial transactions. Now, they can use AI to look back at their previous contracts, and it can help them answer questions about the new contract they’re being asked to sign. So, if you’re doing a deal with the Rangers and you worked with the Mets in the past, you might want to know what you negotiated that time. How did we get through this impasse last time? You can use the Robin platform to answer those questions.
I’ve got to go back to that Rangers game situation.
Please tell me you’re going to be able to do away with that annoying corporate training about whether you can have the tickets or not. If that could be just a conversation with an AI instead of having to watch those videos, oh my goodness, all the money.
[Laughs] I’m trying my best. You’re hitting the nail on the head though. A lot of this stuff has caused a lot of pain for a lot of businesses, either through compliance and ethics training or long, sometimes dull courses. We can make that so much more interesting, so much more interactive, so much more real-time with AI technologies like Robin. We’re really working on it, and we’re helping solve a vast range of legal use cases that once needed people to handle them.
Are you taking away the work of the junior lawyers? I’m throwing up a little bit of a straw man there, but how is it changing the work of the entry-level law student or intern who would’ve been doing the tedious stuff that AI can perhaps now do? Is there higher level work, or are they just getting used less? What are you seeing your customers do?
If a business had legal problems in the past, they would either send them to a law firm or they would try and handle them internally with their own legal team. With AI, they can handle more work internally, so they don’t have to send as much to their law firms as they used to. They now have this leverage to tackle what used to be quite difficult pieces of work. So, there’s actually more work they can do themselves now instead of having to send it outside. Then, there are some buckets of work where you don’t need people at all. You can just rely on systems like Robin to answer those compliance questions.
You’re right, the work is shifting, no doubt about it. For the most part, AI can’t replicate a whole job yet. It’s part of a job, if that makes sense. So, we’re not seeing anybody cut headcount from using our technologies, but we do think they have a much more efficient way to scale, and they’re reducing dependence on their law firms over time because they can do more in-house.
But how is it changing the work of the people who are still doing the thinking?
I think that AI goes first, basically, and that’s a big transformation. You see this in the coding space. I think coding got ahead of the legal space in adoption, but we are fast catching up. If you talk to a lot of engineers who are using these coding platforms, they’ll tell you that they want the AI to write all of the code first, but they’re not necessarily going to hit enter and use that code in production. They’re going to check, they’re going to review, they’re going to question it, interrogate it, and redirect the model where they want it to go because these models still make mistakes.
Their hands are still on the steering wheel. It’s just that they’re doing it slightly differently. They have AI go first, and then people are being used to check. We make it easy for people to check our work with pretty much everything we do. We include pinpoint citations and references, and we explain where we got our answers from. So, the role of the junior or senior lawyer is now to say, “Use Robin first.” Then, their job is to make sure that it went correctly, that it’s been used in the right way.
How are you avoiding the hallucination issue? We’ve seen these mentions in the news of lawyers submitting briefs to a judge that include stuff that is completely made up. We hear about the ones that get caught. I imagine we don’t hear about the ones that don’t get caught.
I know those are different kinds of AI uses than what you’re doing with Robin AI, but there’s still got to be this concern in a fact-based, argument-based industry about hallucination.
Yeah, there is. It’s the number one question our customers ask. I do think it’s a big part of why you need specialist models for the legal domain. It’s a specialist subject area and a specialist domain. You need to have applications like Robin and people who are not just taking ChatGPT or Anthropic and doing nothing with it. You need to really optimize its capabilities for the domain.
To answer your question directly, we include citations with very clear links to everything the model does. So, every time we give an answer, you can quickly validate the underlying source material. That’s the first thing. The second thing is that we are working very hard to only rely on external, valid, authoritative data sources. We connect the model to specific sources of information that are legally verified, so that we know we’re referencing things you can rely on.
The third is that we’re educating our customers and reminding them that they’re still lawyers. I used to write cases for courts all the time — that was my job before I started Robin — and I knew that it was my responsibility to make sure every source I referenced was 100 percent correct. It doesn’t matter which tool you use to get there. It’s on you as a legal professional to validate your sources before you send them to a judge or even before you send them to your client. Some of this is about personal responsibility because AI is a tool. You can misuse it no matter what safeguards we put in place. We have to teach people to not rely exclusively on these things because they can lie confidently. You’re going to want to check for yourself.
Right now, all kinds of relationships and arrangements are getting renegotiated globally. Deals that made sense a couple of years ago perhaps don’t anymore because of expected tariffs or frayed relationships. I imagine certain companies are having to look back at the fine print and ask, “What exactly are our rights here? What’s our wiggle room? What can we do?”
Is that a major AI use case? How are you seeing language getting combed through, comparing how it was phrased 20 years ago to how it needs to be phrased now?
That’s exactly right. Any type of change in the world triggers people to want to look back at what they’ve signed up for. And you’re right, the most topical is the tariff reform, which is affecting every global business. People want to look back at their agreements. They want to know, “Can I get out of this deal? Is there a way I can exit this transaction?” They entered into it with an assumption about what it was going to cost, and those assumptions have changed. That’s very similar to what we saw during covid when people wanted to know if they could get out of these agreements given there’s an unexpected, huge pandemic happening. We’re seeing the same thing now, but this time we have AI to help us.
So, people are looking back at historic agreements. I think they’re realizing that they don’t always know where all their contracts even are. They don’t always know what’s inside them. They don’t know who’s responsible for them. So, there is work to do to make AI more effective, but we are absolutely seeing global business customers trying to understand what the regulatory landscape means for them. That’s going to happen every time there’s regulatory change. Every time there are new laws passed, it causes businesses and even governments to look back and think about what they signed up for.
I’ll give you another quick example. When Trump introduced his executive order relating to DEI at universities, a lot of universities in the United States needed to look back and ask, “What have we agreed to? What’s in some of our grant proposals? What’s in some of our legal documents? What’s in some of our employment contracts? Who are we engaging as consultants? Is that in danger given these executive orders?” We saw that as a big use case, too. So, permanent change is a reality for business, and AI is going to help us to navigate that.
What does the AWS Marketplace do for you?
I think it gives customers confidence that they can trust us. When businesses started to adopt the cloud, the biggest reason that adoption took time was concerns about security. Keeping its data secure is probably the single most important thing for a business. It’s a never event. You can’t ever let your data be insecure.
But businesses aren’t going to be able to build everything themselves if they want the benefit of AI. They are going to have to partner with experts and with startups like Robin AI. But they need confidence that when they do that, their most sensitive documents are going to be secure and protected. So, the AWS Marketplace, first and foremost, gives us a way to give our customers confidence that what we’ve done is robust and that our application is secure because AWS security vets all the applications that are hosted on the marketplace. It gives customers trust.
So, it’s like Costco, right? I’m not a business vendor or a software company like you are, but this sounds to me like shopping at Costco. There are certain guarantees. I know its reputation because I’m a member, right? It curates what it carries on the shelves and stands behind them.
So, if I have a problem, I can just take my receipt to the front desk and say, “Hey, I bought this here.” You’re saying it’s the same thing with these AI-driven capabilities in a cloud marketplace.
That’s right. You get to leverage the brand and the reputation of AWS, which is the biggest cloud provider in the world. The other thing you get, which you mentioned, is a seat at the table for the biggest grocery store in the world. It has lots of customers. A lot of businesses make commitments to spend with AWS, and they will choose vendors who are hosted on the AWS Marketplace first. So, it gives us a position in the shop window to help us advertise to customers. That’s really what the marketplace gives to Robin AI.
I want to take a step back and get a little philosophical. We got a little in the weeds with the enterprise stuff, but part of what’s happening here with AI — and in a way with legal — is we’re having to think differently about how we navigate the world.
It seems to me that the two steps at the core of this are how do we figure out what’s true, and how do we figure out what’s fair? You are a practitioner of debate — we’ll get to that in a bit, too. I’m not a professional debater, though I have been known to play one on TV. But figuring out what’s true is step one, right?
I think it is. It’s increasingly difficult because there are so many competing facts and so many communities where people will selectively choose their facts. But you’re right, you need to establish the reality and the core facts before you can really start making decisions and debating what you should be doing and what should happen next.
I do think AI helps with all of these things, but it can also make it more difficult. These technologies can be used for good and bad. It’s not obvious to me that we’re going to get closer to establishing the truth now that we have AI.
I think you’re touching on something interesting right off the bat, the difference between facts and truth.
Yes, that’s right. It’s very difficult to really get to the truth. Facts can be selectively chosen. I’ve seen spreadsheets and graphs that technically are factual, but they don’t really tell the truth. So, there’s a big gap there.
How does that play into the way we as a society should think about what AI does? AI systems are going out and training on data points that might be facts, but the way those facts, details, or data points get arranged ends up determining whether they’re telling us something true.
I think that’s right. I think that as a society, we need to use technology to enhance our collective goals. We shouldn’t just let technology run wild. That’s not to say that we should regulate these things; I’m generally quite against that. I think we should let innovation happen to the greatest extent reasonably possible, but as consumers, we have a say in how these systems work, how they’re designed, and how they’re deployed.
As it relates to the search for truth, the people who own and use these systems have grappled with these questions in the past. If you Google certain questions, like the racial disparity in IQ in the United States, you’re going to get a fairly curated answer. I think that in itself is a very dangerous, polarizing topic. We need to ask ourselves the same questions that we asked with the last generation of technologies, because that’s what it is.
AI is just a new way of delivering a lot of that information. It’s a more effective way in some ways. It’s going to do it in a more convincing and powerful way. So, it’s even more important that we ask ourselves, “How do we want information to be presented? How do we want to steer these systems so that they deliver truth and avoid bias?”
It’s a big reason why Elon Musk with Grok has taken such a different approach than Google took with Gemini. If you remember, the Gemini model famously generated images of Black Nazis, and it refused to answer certain questions. It allegedly had some political bias. I think that was because Google was struggling to answer and resolve some of these difficult questions about how you make the models deliver truth, not just facts. It maybe hadn’t spent enough time parsing through how it wanted to do that.
I mean, Grok seems to be having its own issues.
It’s like people, right? Somebody who swings one way has trouble with certain things, and somebody who swings another way has trouble with other things. There’s the matter of facts, and then there’s what people are inclined to believe.
I’m getting closer to the debate issue here, but sometimes you have facts that you string together in a certain way, and it’s not exactly true but people really want to believe it, right? They embrace it. Then, sometimes you have truths that people completely want to dismiss. The quality of the information, the truth, or the confusion doesn’t necessarily correlate with how likely your audience is to say, “Yeah, Richard’s right.”
How do we deal with that at a time when these models are designed to be convincing regardless of whether they’re stringing together the facts to create truth or whether they’re stringing together the facts to create something else?
I think that you observe confirmation bias throughout society with or without AI. People are searching for facts that confirm their prior beliefs. There’s something comforting to people about being told and validated that they were right. Regardless of the technology you use, the desire to feel like they’re correct is just a baseline for all human beings.
So, if you want to shape how people think or convince them of something that you know to be true, you have to start from the position that they’re not going to want to hear it if it’s incongruent with their prior beliefs. I think AI can make these things better, and it can make these things worse, right? AI is going to make it much easier for people who are looking for facts that back them up and validate what they already believe. It’s going to give you the world’s most efficient mechanism for delivering information of the type that you choose.
I don’t think all is lost because I also think that we have a new tool in our armory for people who are trying to provide truth, help change somebody’s perspective, or show them a new way. We have a new tool in our armory to do that, right? We have this incredible OpenAI research assistant called deep research that we never had before, which means we can start to deliver more compelling facts. We can get a better sense of what types of facts or examples are going to convince people. We can build better ads. We can make more convincing statements. We can road test buzzwords. We can be more creative because we have AI. Fundamentally, we’ve got a sparring partner that helps us to craft our message.
So, AI is basically going to make these things better and worse all at the same time. My hope is that the right side wins, that people in search of truth can be more compelling now that they’ve got a host of new tools available to them, but only if they learn how to use them. It’s not guaranteed that people will learn these new systems, but people like me and you can go out there and proselytize for the benefits and capabilities of these things.
But it feels like we’re at a magic show, right? The reason why many illusions work is because the audience gets primed to think one thing, and then a different thing happens. We’re being conditioned, and AI can be used to convince people of truth by understanding what they already believe and building a pathway. It can also be used to lead people astray by understanding what they already believe and adding breadcrumbs to make them believe whatever conspiracy theory may or may not be true.
How is it swinging right now? How does a product like the one Robin AI is putting out lead all of this in a better direction?
I think a lot of this comes down to validation. [OpenAI CEO] Sam Altman said something that I thought was really insightful. He said that the algorithms that power most of our social media platforms — X, Facebook, Instagram — are the first example of what AI practitioners call “misaligned AI at scale.” These are systems where the AI models are not actually helping achieve goals that are good for humanity.
The algorithms in these systems were there before ChatGPT, but they are using machine learning to work out what kind of content to surface. It turns out people are entertained by really outrageous, really extreme content. It just keeps their attention. I don’t think anybody would say that’s good for people and makes them better. It’s not nourishing. There are no nutrients in a lot of the content being served to us on these social media platforms, whether it’s politics, people squabbling, or culture wars. These systems have been giving us information that’s designed to get our attention, and that’s just not good for us. It’s not nutritious.
On the whole, we’re not doing very well in the battle to search for truth because the models haven’t actually been optimized to do that. They’ve been optimized to get our attention. I think you need platforms that find ways to combat that. So, to the question of how AI applications help combat this, I think it is by creating tools that help people validate the truth of something.
The most interesting example of this, at least in the popular social paradigm, is Community Notes, because they are a way for someone to say, “This isn’t true, this is false, or you’re not getting the whole picture here.” And it’s not edited by a shadowy editorial board. It’s generally crowdsourced. Wikipedia is another good example. These are systems where you’re basically using the wisdom of the crowds to validate or invalidate information.
In our context, we use citations. We’re saying don’t trust the model, test it. It’s going to give you an answer, but it’s also going to give you an easy way to check for yourself if we’re right or wrong. For me, this is the most interesting part of AI applications. It’s all well and good having capabilities, but as long as we know that they can be used for bad ends or can be inaccurate, we’re going to have to build countermeasures that make it easy for society to get what we want from them. I think Community Notes and citations are all children in the same family of trying to understand how these models truly work and are affecting us.
You’re leading me right to where I was hoping to go. Another child in that family is debate. Because to me, debate is gamified truth search, right? When you search for truth, you create these warring tribes and they assemble facts and fight each other. It’s like, “No, here’s my set of facts and here’s my argument that I’m making based on that.” Then it’s, “Okay, well, here’s mine. Here’s why yours are wrong.” “You forgot about this.”
This happens out in the public square, and then people can see and decide who wins, which is fun. But the payoff is that we’re smarter at the end. We should be, right?
We get to sift through and pick apart these things, hopefully correctly if the teams have done their work. Do we need a new model of debate in the AI era? Should these models be debating each other? Should there be debates within them? Do they get scored in a way that helps us understand either the quality of the facts, the quality of the logic in which those facts have been strung together to come to a conclusion, or the quality of the analysis that was developed from that conclusion?
Is part of what we are trying to claw toward right now a way to gamify a search for truth and vetted analysis in this sea of data?
I think that’s what we should be doing. I’m not confident we are seeing that yet. Going back to what we said earlier, what we’ve observed over the last five or six years is people becoming … There’s less debate actually. People are in their communities, real or digital, and are getting their own facts. They’re actually not engaging with the other side. They’re not seeing the other side’s point of view. They’re getting the information that’s served to them. So, it’s almost the opposite of debate.
We need these systems to do a really robust job of surfacing all of the information that’s relevant and characterizing both sides, like you said. I think that’s really possible. For instance, I watched some of the presidential debates and the New York mayoral debate recently, which was really interesting. We now have AI systems that could give you a live fact check or a live alternative perspective during the debate. Wouldn’t that be great for society? Wouldn’t it be good if we could use AI to have more robust conversations in, like you say, the gamified search for truth? I think it can be done in a way that’s entertaining, engaging, and that ultimately drives more engagement than what we’ve had.
Let’s talk about how you got into debate. You grew up in an immigrant household where there were arguments all the time, and my sense is that debate paved your way into law. Tell me about the debate environment you grew up in and what that did for you intellectually.
My family was arguing all the time. We would gather round, watch the news together, and argue about every story. It really helped me to develop a level of independent thinking because there was no credit for just agreeing with someone else. You really had to have your own perspective. More than anything else, it encouraged me to think about what I was saying because you could get torn apart if you hadn’t really thought through what you had to say. And it made me value debate as a way to change minds as well, to help you find the right answer, to come to a conversation wanting to know the truth and not just wanting to win the argument.
For me, those are all skills that you observe in the law. Law is ambiguous. I think people think of the legal industry as being black and white, but the truth is almost all of the law is heavily debated. That’s basically what the Supreme Court is for. It’s to resolve ambiguity and debate. If there was no debate, we wouldn’t need all these judges and court systems. For me, it’s really shaped a lot of the way I think in a lot of my life. It’s why I think how AI is being used in social media is such an important issue for society because I can see very easily how it’s going to shape the way people think, the way people argue or don’t argue. And I can see the implications of that.
You coached an England debate team seven or eight years ago. How do you do that? How do you coach a team to debate more effectively, particularly at the individual level when you see the strengths and weaknesses of a person? And are there ways that you translate that into how you direct a team to build software?
I see the similarities between coaching the England team and running my business all the time. It still surprises me, to be honest. I think that when you’re coaching debate, the number one thing you’re trying to do is help people learn how to think because in the end, they’re going to have to be the ones who stand up and give a five or seven-minute speech in front of a room full of people with not a lot of time to prepare. When you do that, you’re going to have to think on your feet. You’re going to have to find a way to come up with arguments that you think are going to convince the people in the room.
For me, it was all about teaching them that there are two sides to every story, and that beneath all of the information and facts, there’s normally some valuable principle at stake in every clash or issue that’s important. You want to try and tap into that emotion and conflict when you’re debating. You want to find a way to understand both sides because then you’ll be able to position your side best. You’ll know the strengths and weaknesses of what you want to say.
Finally, it was all about coaching individuals. Each person had a different challenge or different strengths, different things they needed to work on. Some people would speak too quickly. Some people were not confident speaking in big crowds. Some people were not good when they had too much time to think. You have to find a way to coach each individual to manage their weaknesses. And you have to bring the team together so that they’re more than the sum of their parts.
I see this challenge all the time when we’re building software, right? Number one, we’re dealing with systems that require different expertise. No one is good at everything that we do. We’ve got legal experts, researchers, engineers, and they all need to work together using their strengths and managing their weaknesses so that they’re more than the sum of their parts. So, that’s been a huge lesson that I apply today to help build Robin AI.
I would say as well, if we’re focusing on individuals, that at any given time, you really need to find a way to put people in the position where they can be in their flow state and do their best work, especially in a startup. It’s really hard being in a startup where you don’t have all the resources and you’re going up against people with way more resources than you. You basically need everybody at the top of their game. That means you’re going to have to coach individuals, not just collectively. That was a big lesson I took from working on debate.
Are people the wild card? When I see the procedural dramas or movies with lawyers and their closing arguments, very often understanding your own strengths as a communicator and your own impact in a room — understanding people’s mindsets, their body language — can be very important.
I’m not sure that we’re close to a time when AI is going to help us get that much better at dealing with people, at least at this stage. Maybe at dealing with facts, with huge, unstructured data sets, or with analyzing tons of video or images to identify faces. But I’m not sure we’re anywhere near it knowing how to respond, what to say, how to adjust our tone to reassure or convince someone. Are we?
No, I think you’re right. That in-the-moment, interpersonal communication is, at least today, something very human. You only get better at these things through practice. And they’re so real-time — knowing how to respond, knowing how to react, knowing how to adjust your tone, knowing how to read the room and maybe change course. I don’t see how, at least today, AI is helping with that.
I think you can maybe think about that as in-game. Before and after the game, AI can be really powerful. People in my company will often use AI in advance of a one-to-one or in advance of a meeting where they know they want to bring something up, and they want some coaching on how they can land the point as well as possible. Maybe they’re concerned about something but they feel like they don’t know enough about the point, and they don’t want to come to the meeting ignorant. They’ll do their research in advance.
So, I think AI is helping before the fact. Then after the fact, we’re seeing people basically look at the game tape. All the meetings at Robin are recorded; we use AI systems to record them and produce transcripts, action items, and summaries. People are asking themselves, “How could I have run that meeting better? I feel like the conflict I had with this person didn’t go the way I wanted. What could I have done differently?” So, I think AI is helping there.
I’d say, as a final point, we have seen systems — and not much is written about these systems — that are extremely convincing one-on-one. There was a company called Character.AI, which was acquired by Google. What it did was build AI avatars that people could interact with, and it would sometimes license those avatars to different companies. We saw a huge surge in AI girlfriends. We saw a huge surge in AI for therapy. We’re seeing people have private, intimate conversations with AI. What Character.AI was really good at was learning from those interactions what would convince you. “What is it I need to say to you to make you change your mind or to make you do something I want?” And I think that’s a growing area of AI research that could easily go badly if it’s not managed.
I don’t know if you know the answer to this, but are AI boyfriends a thing?
[Laughs] I don’t know the answer.
I haven’t heard anything about AI boyfriends.
I’ve never heard anybody say, “AI boyfriends.”
I’ve never heard anything, and it makes me wonder why is it always an AI girlfriend?
I don’t know. I’ve never heard that phrase, you’re right.
Right? I’m a little disturbed that I never asked this question before. I was always like, “Oh yeah, there’s people out there getting AI girlfriends and there’s the movie Her.” There’s no movie called Him.
Do they just not want to talk to us? Do they just not need that kind of validation? There’s something there, Richard.
There absolutely is. It’s a reminder that these systems reflect their creators to some extent. Like you said, it’s why there’s a movie Her. It’s why a lot of AI voices are female. It’s partly because they were made by men. I don’t say that to criticize them, but it’s a reflection of some of the bias involved in building these systems, as well as lots of other complex social problems.
They explain why we have prominent AI girlfriends, but I haven’t heard about many AI boyfriends, at least not yet. Although, there was a wife in a New York Times story, I think, who developed a relationship with ChatGPT. So, I think similar things do happen.
Let me try to bring this all together with you. What problems are we creating — that you can see already, perhaps — with the solutions that we’re bringing to bear? We’ve got this capability to analyze unstructured data, to come up with some answers more quickly, to give humans higher order work to do. I think we’ve talked about how there’s this whole human interaction realm that isn’t getting addressed as deeply by AI systems right now.
My observation as the father of a couple… is it Gen Z now if you’re under 20? They’re not getting as much of that high-quality, high-volume human interaction in their formative years as some previous generations did because there are so many different screens that have the opportunity to intercept that interaction. And they’re hungry for it.
But I wonder, if they were models getting trained, whether they’d be getting less data in the very area where humans need to be even sharper, because the AI systems aren’t going to help us there. Are we perhaps creating a new class of problems or overlooking some areas even as these brilliant systems are coming online?
We’re definitely creating new problems. This is true of all technology that’s significant. It’s going to solve a lot of problems, but it’s going to create new ones.
I’d point to three things with AI. Number one, we are creating more text, and a lot of it is not that useful. So, we’re generating a lot more content, for better or for worse. You’re seeing more blogs because it’s easy to write a blog now. You’re seeing more articles, more LinkedIn status updates, and more content online. Whether that’s good or bad, we are generating more things for people to read. What may happen is that people just read less because it’s harder to sift through the noise to find the signal, or they may rely more on the systems of information they’re used to to get that confirmation bias. So, I think that’s one area AI has not solved, at least today. Generating incremental text has gotten dramatically cheaper and easier than it ever was.
The second thing I’ve observed is that people are losing writing skills because you don’t have to write anymore, really. You don’t even need to prompt ChatGPT in proper English. Your prompts can be quite badly constructed and it kind of works out what you’re trying to say. What I observe is that people’s ability to sit down and write something coherent, something that takes you on a journey, is actually getting worse because of their dependence on these external systems. I think that’s very, very bad because to me, writing is deeply linked to thinking. In some ways, if you can’t write a cogent, sequential explanation of your thoughts, that tells me that your thinking might be quite muddled.
Jeff Bezos had a similar principle. He banned slide decks and insisted on a six-page memo because you can hide things in a slide deck, but you have to know what you’re talking about in a six-page memo. I think that’s a gap that’s emerging because you can depend on AI systems to write, and it can excuse people from thinking.
The final thing I would point to is that we are creating this crisis of validation. When you see something extraordinary online, I, by default, don’t necessarily believe it. Whatever it is, I just assume it might be fake. I’m not going to believe it until I’ve seen more corroboration and more validation. By default, I assume things aren’t true, and that’s pretty bad actually. It used to be that if I saw something, I would assume it’s true, and it’s kind of flipped the other way over the last five years.
So, I think AI has definitely created that new problem. But like we talked about earlier, I think there are ways you can use technology to help combat that and to fight back. I’m just not seeing too many of those capabilities at scale in the world yet.
You’re a news podcaster’s dream interview. I want to know if this is conscious or trained. You tend to answer with three points that are highly organized. You’ll give the headline and then you’ll give the facts, and then you’ll analyze the facts with “point one,” “point two,” and “finally.” It’s very well-structured and you’re not too wordy or lengthy in it. Is that the debater in you?
[Laughs] Yes. I can’t take any credit for that one.
Do you have to think about it anymore or do the answers just come through that way for you?
I do have to think about it, but if you do it enough, it does become second nature. I would say that whenever I’m speaking to someone like you in these types of settings, I think a lot more. The pressure’s on and you get very nervous, but it does help you. It goes back to what I was saying about writing: it’s a way of thinking. You’ve got to have structured thoughts and take all the ideas in your mind and hopefully communicate them in an organized way so it’s easy for the audience to learn. That’s a big part of what debating teaches.
You’re a master at it. I almost didn’t pick up on it. You don’t want them to feel like you’re writing them a book report in every answer, and you’re very good at answering naturally at the same time. I was like, “Man, this is well organized.” He always knows what his final point is. I love that. I’m kind of like a drunken master in my speech.
Yes. I know exactly what you mean.
There’s not a lot of obvious form there, so I appreciate it when I see it. Richard Robinson, founder and CEO of Robin AI, using AI to really ramp up productivity in the legal industry and hopefully get us to more facts and fairness. We’ll see if we reach a new era of gamified debate, which you know well. I appreciate you joining me for this episode of Decoder.
Thank you very, very much for having me.
Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!