#228 – Eileen Yam on how we’re completely out of touch with what the public thinks about AI

If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree.

In three major reports released over the last year, the Pew Research Center surveyed over 5,000 US adults and 1,000 AI experts. They found that the general public holds many beliefs about AI that are virtually nonexistent in Silicon Valley, and that the tech industry’s pitch about the likely benefits of their work has thus far failed to convince many people at all. AI is, in fact, a rare topic that mostly unites Americans — regardless of politics, race, age, or gender.

Today’s guest, Eileen Yam, director of science and society research at Pew, walks us through some of the eye-watering gaps in perception:

  • Jobs: 73% of AI experts see a positive impact on how people do their jobs. Only 23% of the public agrees.
  • Productivity: 74% of experts say AI is very likely to make humans more productive. Just 17% of the public agrees.
  • Personal benefit: 76% of experts expect AI to benefit them personally. Only 24% of the public expects the same (while 43% expect it to harm them).
  • Happiness: 22% of experts think AI is very likely to make humans happier, which is already surprisingly low — but a mere 6% of the public expects the same.

For the experts building these systems, the vision is one of human empowerment and efficiency. But outside the Silicon Valley bubble, the mood is more one of anxiety — not only about Terminator scenarios, but about AI denying their children “curiosity, problem-solving skills, critical thinking skills and creativity,” while they themselves are replaced and devalued:

  • 53% of Americans say AI will worsen people’s ability to think creatively.
  • 50% believe it will hurt our ability to form meaningful relationships.
  • 38% think it will worsen our ability to solve problems.

Open-ended responses to the surveys reveal a poignant fear: that by offloading cognitive work to algorithms, we are changing childhood so profoundly that we no longer know what kind of adults will result. As one teacher quoted in the study noted, we risk raising a generation that relies on AI so much it never “grows its own curiosity, problem-solving skills, critical thinking skills and creativity.”

If the people building the future are this out of sync with the people living in it, the impending “techlash” might be more severe than industry anticipates.

In this episode, Eileen and host Rob Wiblin break down the data on where these groups disagree, where they actually align (nobody trusts the government or companies to regulate this), and why the “digital natives” might actually be the most worried of all.

This episode was recorded on September 25, 2025.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

The interview in a nutshell

Eileen Yam, director of science and society research at Pew Research Center, lays out the findings from several large-scale Pew surveys of US adults and AI experts. The data reveals a public that is increasingly apprehensive about AI, driven by fears that it will degrade core human abilities and that it’s fundamentally uncontrollable. These anxieties create a “night and day” gap between the public’s perception and the profound optimism of AI insiders.

The public is wary, and concern is growing

The public shows a “dual sentiment” of being willing to use AI for daily chores, but having “persistent overarching concerns” at a societal level:

  • Concern is rising: 50% of US adults are more concerned than excited about the increased use of AI, a significant jump from 37% in 2021.
  • A desire for control: 61% of Americans want more control over how AI is used in their lives.
  • It’s not seen as “hype”: 36% of the public believe media coverage is about right, and an equal 36% think AI is being made a smaller deal than it should be. Only 21% (the smallest share) feel it’s being exaggerated.

Top fears: Erosion of humanity and loss of control

When asked to name the biggest risks from AI, the public’s top concerns are not about sci-fi catastrophes but about personal and social degradation.

  • Erosion of human abilities: The number one risk (cited by 27% of those concerned) is an “erosion of human abilities and social connections.”
    • 53% believe AI will worsen people’s ability to think creatively.
    • 50% believe it will worsen the ability to form meaningful relationships.
    • Eileen highlights a teacher’s concern that children will miss “the struggle of what real learning is” by relying on AI.
  • Loss of control: The next major concern is a fundamental lack of control.
    • This includes fears that “society will be too slow to regulate” AI.
    • It also manifests as an inability to tell real from fake: 70% say it’s important to be able to detect AI-generated content, but half don’t feel at all confident they can.
    • This anxiety is reinforced by preexisting cultural narratives from films like Terminator and 2001: A Space Odyssey, which primed the public to see AI as a “sinister supercomputer.”

A massive “night and day” gap exists between the public and AI experts

The most striking finding is the profound disconnect between AI insiders and the general public. Experts are “much more bullish” and optimistic across the board:

  • Productivity: 74% of experts see a positive impact, versus 17% of the public.
  • Jobs: 73% of experts see a positive impact, versus 23% of the public.
  • The economy: 69% of experts see a positive impact, versus 21% of the public.
  • Personal benefit: 76% of experts expect to benefit, versus 24% of the public.
  • Medical care: 84% of experts see a positive impact, versus 44% of the public.

This gap extends to job loss: 56% of the public is highly concerned about AI leading to job loss, versus only 25% of experts. Experts are also far more willing to trust AI with important decisions (51%) than the public (13%).

Key findings on demographics and AI applications

  • Where AI is (and isn’t) welcome:
    • OK: The public is receptive to AI in “big data” realms like forecasting weather (74% support) and searching for financial crimes (70%).
    • Not OK: The public rejects AI in “deeply personal” or “sacred” realms — 66% say AI should play “no role” in judging if two people could fall in love, and 73% say the same for advising on religious faith.
  • Workplace anxiety hits educated workers:
    • Unlike past automation, concern is high among professionals.
    • Workers who use AI are more worried about fewer job opportunities (42%) than those who don’t (30%).
  • Demographic splits:
    • Gender: Women are consistently “more wary than men.” This gap is much more pronounced among experts: 63% of men in the AI sector see a positive future vs 36% of women.
    • Age: Younger adults (under 30), despite using AI more, are more concerned about AI eroding human skills (like creativity) than seniors (65+).
    • Politics: For now, AI is “a bit above the fray” and not a polarised partisan issue.
  • Agreement on regulation: Both the public and experts agree on one thing: they have little confidence in either industry or government to regulate AI effectively.

Highlights

AI experts are out of touch

Rob Wiblin: So I really wanted to do this interview in part because I think you’re exactly right that me and almost everyone who I talk to about AI, we’re all insiders to one extent or another. And I think it’s remarkable the degree to which we’re just completely out of touch with how the general public in the US, and probably the general public around the world, feels about AI.

I think that’s not necessarily a sign that we’re more ignorant or that anything bad is going on here. It’s just when you are working in an area, when you’re thinking about it all day, you can just end up with very different attitudes than someone who’s thinking about it less or just has a very different kind of life to you.

And I think that creates a bunch of potential for conflict that people may not anticipate coming. The big worries that people have — about disempowerment or degradation of people’s capabilities and degradation of interpersonal relationships — if someone asked me, “What are you concerned about with AI?” I kind of share those concerns, but that’s not what I would put at the top. And I think it’s not what very many people in the AI industry would put at the top. Because for most people who are building these systems, they think of it as empowering people. And that is their vision, that’s their hope.

But the public feels very differently, or perceives the impact that these technologies are going to have very differently. And if this does become a top-level political issue because it’s affecting people so much more, or there are important events — things that really go down that involve AI that force people to talk about this and grapple with it — I think the conversation could go in directions that leaders of AI companies may not expect.

Eileen Yam: Yeah, the question of whether there is a looming techlash against AI is on the minds of a lot of people in these elite discussions. You can see it when you ask the general public about risks, and about where they see maybe an opening for AI in their lives versus not.

A notable share of these open-ended responses invoked two movies, Terminator and 2001: A Space Odyssey. … So one thing that I did note is that so many people, even if they may not be particularly informed about the ins and outs of AI and training data and bias, have been primed culturally in a way that for other tech innovations… I don’t remember any feature film about a sinister smartphone, or…

Rob Wiblin: Or social network.

Eileen Yam: Right. And there’s something about this supercomputer that I think is different about this tech. So to me, I think that’s one big context when you think about techlash: how much of it is actually culturally primed, dating back to before we even had these chatbots today, or concerns about deepfakes today?

Rob Wiblin: Yeah. I guess the thing that’s unique about AI is that it has the potential to act more autonomously, to go out and pursue goals without necessarily having human oversight at all points. So I guess it’s very natural that that’s the thing that maybe stands out to people as most unique and most disconcerting about it.

Eileen Yam: Yeah, certainly unlike other innovations of the digital era that you might think about, like the internet or social media or mobile tech.

A major fear for the public is erosion of human abilities

Eileen Yam: Yeah. So there is this question that you’re referring to about what people, in their own words, think are the main risks of AI to society. In a separate question, we also asked to what extent AI will affect people’s ability to form meaningful relationships and to think creatively: will it improve those abilities or make them worse?

So certainly for creative thinking and forming meaningful relationships, of those who took a side, far and away, people felt like AI is going to worsen these abilities. And in the open-ended, freeform responses that people offered when it came to this kind of erosion of human abilities, I really loved this quote from a teacher; it gives a lot of food for thought about how we think about how children are growing up today, developmental stages, how they think.

Rob Wiblin: This is the quote that made it click for me as well.

Eileen Yam: Yeah. This person wrote:

As a school teacher, I understand how important it is for children to develop and grow their own curiosity, problem-solving skills, critical thinking skills and creativity, to name just a few human traits that I believe AI is slowly taking over from us. Since children are digital natives, the adults who understand a world without AI need to still pass the torch to children for developing these human qualities with our own human brains, instead of relying on the difficulty to be passed on to AI so that humans don’t have to feel the struggle of what real learning is.

And I, as a parent, almost feel a little chastened there. Just, oh my gosh, what kind of example am I setting? But no, it’s almost beautiful in its poignancy, how this person just really homes in on the different world that kids are growing up in today. So yeah, with that erosion, she’s hinting at the essence of what makes us human: this kind of curiosity, problem-solving skills, thinking skills, overcoming the hurdles in your head and relying on yourself to clear them. It’s just very evocative.

Rob Wiblin: Absolutely. I spend hours every day now interacting with AI when I’m at work. For almost everything, the first thought that I have when I’m given a task or preparing for an episode is, “How can I collaborate with AI to make this easier and faster and higher quality?”

I guess not everyone has that experience or has a positive experience with it, but at least for me, I’ve never felt more engaged, I’ve never felt more empowered, I’ve never felt more mentally active — because the AI allows you to speed up the routine stuff so you can get to the harder stuff. So the sense in which people are going to be disempowered, and not going to have such good connections, didn’t resonate with me.

But then when I’m imagining my own child growing up in this world where they’ve never actually had to do the thing themselves all the way, all the time they’ve just been able to delegate it to an AI… And by the time my kid is in school, this model is going to be even more powerful. It’s going to be maybe unclear what value they can add. Especially as a six-year-old or a seven-year-old, you can always just get the LLMs to do the thing for you better than you can do it. You’re going to get better, but the AIs are also getting better as well. So it’s always going to be ahead of you.

And I guess we don’t know how it’s going to pan out, but children growing up in this era might well end up feeling a lot of connection with the AIs that they’re talking to, with the chatbots. Maybe not. I guess some people have that experience now, that they feel a sort of personal connection — they feel almost like the AIs are their friends — and other people don’t. But if that’s the world you grow up in your entire childhood, is that going to affect your ability to have positive relationships with other people? Especially when the AIs, they kind of give you whatever you want. They’re not like other human beings with the rough edges and the difficulty, the things that you have to navigate.

Eileen Yam: Yeah, absolutely.

Americans don't want AI in their personal lives

Rob Wiblin: The things that people most objected to AI being involved with were:

  • Advising people about faith in God: 73% said that it should play no role there.
  • 66% said it should play no role in judging whether two people could fall in love. That one jumped off the page for me.
  • 60% said it should play no role in making decisions about how to govern the country. I was a bit unsure whether to interpret that as they’re concerned about AI even assisting government bureaucrats or politicians, or whether what they want is that the AI not make the final decision, which I think is a lot more understandable.
  • 47% said no role in selecting who should serve on a jury. I guess a sensitive issue.
  • And then 36% said it should play no role in providing mental health support — which, frankly, I think is a big mistake, personally.

But yeah, I guess it’s all sort of the personal stuff, that’s a common theme with several of these. And I guess also like exercise of power through government. …

Eileen Yam: Yeah, I think that when I look at these findings, I bucket them off into kind of big data in the medical/science/finance realm: there’s more receptiveness there. But on the more personal end of the spectrum, I’d say — matchmaking, religion — that’s where people feel like, not so much. And this is where the shares saying they’re not sure are still notable, but not as high as we’ve seen for other items. There’s still maybe 13% to 19% saying not sure, but these opinions are a little bit more concretised, at least as far as people actually weighing in and taking a side.

And for providing mental health support to people, you noted you were surprised by how low it was that 36% said no role at all. It’s interesting because literally just the other day someone had the converse reaction of, “Forty-five percent see at least some role in mental health therapy?!” Granted, this was someone in the public health realm who feels like, I don’t know about therapy by algorithm — but I think that just speaks to there are different points, different perceptions, and it’s super nuanced.

But I think that in broad strokes, the idea is that deeply personal, very individual aspects — like matchmaking, like religion — are kind of a no-fly zone. But the more big data, out-there realms — developing medicine, forecasting weather, identifying financial crimes — that’s OK.

Does the public always feel this way about new things?

Rob Wiblin: So it’s fair to say that the US public, at least many people, are apprehensive about AI. If I imagine someone in the industry who was a bit sceptical about how much one can make from this, I can imagine them saying people are always sceptical, always nervous about the new thing: “Anything that’s been around since I was a child, that’s OK,” and the downsides are just acceptable and part of the air that we breathe. But the new stuff, that’s always what people get worried about.

Were people, for example, similarly concerned at this stage in the rollout of smartphones or the internet in general? Pew’s been going a while, so you have polling about that stuff, right?

Eileen Yam: So we have a 1999 poll, and at that point internet use was about 40%, give or take a few percentage points: people were saying “Yes, I use the internet.” So much lower than today.

One of the questions we asked at that time was how they felt about “having access” to all that information — this deluge of information right there in one place — and 62% said they liked it.

There wasn’t a corresponding sort of “hand wringing,” for lack of a better word, about the implications, the way that we’re seeing today about AI. And one big difference between those previous digital revolutions, whether you’re talking about internet, smartphones, or AI, is just people wilfully use the internet; people willfully use their smartphone. And there wasn’t this corresponding sense of, again, coming back to control, of “I don’t even know when I’m even engaging with AI or something made by AI” the way you did with the internet. So there was much more of a sense of there’s a benefit here to the internet.

To the extent that concerns were raised then, it was interesting. One was about the speed of my connection, which today, in a broadband world, I kind of chuckle at. But then there were some who mentioned data privacy concerns. I don’t want to claim that there were no concerns about the internet, but it’s a bit of an apples-and-oranges comparison, because the internet wasn’t the kind of tech that felt ambient the way AI does.

Rob Wiblin: Yeah, yeah. I guess if you didn’t like smartphones, you could just not get an iPhone. But all of the media coverage about AI does carry with it this tone that AI is coming at you whether you like it or not. And I think that is kind of true in a way that probably wasn’t true for smartphones. I mean, I guess it’s hard to go without a smartphone now, but you can still do it. If you don’t like it, just have a dumbphone, if that’s really the decision that you want to make. But if AI replaces you in your job, you can’t really opt out of that.

Eileen Yam: Yeah, that’s exactly right. And that’s very different from, you know, let’s go back even further: calculators. People might have said the same thing, that it’s making you lazy, you’re not doing things in your head that now you can use a calculator. There wasn’t an existential conversation of, “This is changing humanity or the nature of society; it’s infiltrating every dimension of society.” It’s a calculator. It’s affecting your ability to do math in your head.

But I think there are some who would say: why not eventually regard AI as a tool, like the calculator? We now normalise calculator use, even in very advanced math classes in secondary schools. So there is a camp that says to keep your eye on the prize of AI being a tool. And maybe the trepidation about its effect on human competencies will shift as people start to think of it more as a tool, as opposed to the colleague who supplanted the human being who used to be your colleague. I think that’s the other thing.

Rob Wiblin: Yes, I think inasmuch as AI does end up just being a tool, and it is just kind of an assistant that helps you accomplish more, if it continues to be the way it is for me now, I think that a lot of these concerns will recede. The case where I think people will remain worried is if the capabilities just keep growing, and eventually it becomes unclear what role the human is playing in the work or the decision at all. And that is what the pioneers of the field expect, sooner rather than later. Maybe it will take a lot longer. But I think people are aware of this possibility that they could be replaced in many parts of their life — and understandably, it makes them nervous about their future position.

The public doesn't think AI is overhyped

Eileen Yam: So 21% [of the public] said AI is being made a bigger deal of than it really is, 36% said it’s being described about right, and then another 36% said it’s being made a smaller deal than it really is. So to your overhyped point, the smallest share actually said, yeah, it’s being overhyped and made a bigger deal.

But the other thing we asked about is whether people thought it was important that people learn what AI is — again, defined broadly, so that means different things to different people. But do you need to be at least aware? Some modicum of literacy about AI, is that important? And nearly three-quarters said, yeah, that’s actually really important. So there is this sense of: it is where the world is going; it’s important that I become at least conversant or literate in what this is.

And when you see 21%, the smallest share, saying it’s being overhyped, I think that somewhat speaks to that: there is an acknowledgment that, yeah, there is something real to this. This is not just some fly-by-night thing that’s going to go away. I was asking that question particularly because, going in, I might have thought: I feel like every day I’m reading 17 headlines about AI. I feel a little saturated.

Rob Wiblin: Isn’t this a bit much? Some days I do feel a little bored.

Eileen Yam: Yeah. Every angle on AI. But no, for the general public, I think the fact that equal shares — 36% say it’s described about right, and another 36% feel like it’s actually being made a smaller deal than it should be — that’s pretty notable.

For ChatGPT in particular, our latest data show that 34% of US adults have used ChatGPT. That is double the share of 2023. But 34% is still not even a majority, right? … So I did want to temper the overarching framing that this is something so many people are using, because the public is not knowingly using it all the time — not the way 80% of AI experts said the general public is interacting with AI every day or almost all the time. The general public is nowhere near that.

Rob Wiblin: What was the stat there? I think experts thought that 70% of people were using ChatGPT or similar products regularly?

Eileen Yam: Yeah. So the general public, they’re not even always aware of when they might be interacting with AI. Experts have a much loftier perspective on how much the general public is actually interacting with AI. So nearly 80% said that they think people in the US interact with AI almost constantly or several times a day, and that’s compared to 27% of US adults who would say the same. So there’s this really divergent perspective on just how much people are exposed to these technologies.

Where people are positive about AI

Rob Wiblin: We’ve dwelt a lot on the negatives, but there are plenty of positive things that people had to say as well. Where were people most enthusiastic about the impact of AI?

Eileen Yam: Yeah, it is not all gloom and doom. When you ask the 25% of people who said they saw high benefits why, the main reasons they cited were about efficiency gains — freeing up time: 41% of those who saw high benefits mentioned efficiency gains freeing up time.

One quote really resonated with me. This person said:

AI takes mundane tasks that often waste talent and effort and allows us to automate them. AI also allows us to access information in a more streamlined way and allows us to save something that we can never get back: time!

So that permeates throughout. And in our other survey questions, we’ve seen that too: about three-quarters are willing to let AI help with day-to-day tasks. There is an openness to how this can help make my life a little easier.

The other most commonly mentioned theme was about expanding human technological abilities. One thing that you hear a lot about is the healthcare realm. … So in healthcare, and similarly in education, you do see people touting the prospect of AI levelling the playing field — whether it’s rural areas where you don’t have, let’s say, psychiatrists; or in education, for a person who is learning English as a second or third language: could it level the playing field in terms of translation, for example? Or could it be a 24-hour tutor for people?

So there is this idea that certain realms of human technological ability and advancement might be where AI can shine and actually really contribute and have a positive impact on society.

Rob Wiblin: Yeah, some people got quite poetic about it, actually. One of the quotes that stood out to me was:

AI has the potential to make society more efficient than ever. AI sort of transcends time and space in that it can be used to study the past, inform the present, and shape the future. It can be used by individuals, by corporations, by governments.

I think they’re right.

Eileen Yam: Yeah. They are not doomers, those people. They see some silver linings.

Rob Wiblin: Yeah. So the areas where people were happy [to use AI]:

  • 70% of people were happy for AI to be used for searching for financial crimes.
  • 70% also happy for it to be used for searching for fraud in government benefits.
  • 61% happy for it to be used in identifying suspects in a crime. Again, there are sci-fi stories that cover this sort of thing. It reminds you of Minority Report a little bit, but people don’t have that fear that AI might miscategorise suspects of a crime.

Eileen Yam: Yeah. And in identifying suspects in a crime, I think where your head goes probably — as an insider, again — is algorithmic bias and understanding of training data. It’s a little bit of a Rorschach test in terms of how you’re reading that question. Because my head doesn’t necessarily go to the algorithmic bias; I think that in the grand scheme you can imagine other kinds of data that maybe don’t have to do with physical characteristics that might feed into, “Let’s home in on where this suspect might be or where they may have left a digital trail,” however that may be. So I didn’t necessarily zoom in on the facial recognition point that I think you’re homing in on.

Rob Wiblin: Yeah. Well, because I think that’s one of the few applications that has in some places just been banned outright, because it is something that more insiders are worried about. I think in some places in the EU, you can’t use facial recognition for policing purposes. People have been trying to ban it in particular cities and so on. But it’s not a super salient concern among the general public.

Biggest gaps between experts and the general public, and where they agree

Eileen Yam: Overall, experts are much more optimistic about AI’s impact on many aspects of society:

  • When it comes to jobs, 73% of experts see a positive impact on how people do their jobs compared to 23% of the general public.
  • 69% of experts see a positive impact on the economy versus 21% of the general public.
  • Then for productivity, 74% of experts say that AI will make people more productive compared to 17% of the general public.

There’s a big divergence across those three dimensions of jobs, economy, productivity. And in all three cases, the experts are much more bullish.

Rob Wiblin: Yeah, those are eye-watering gaps: 74% of experts think it’s extremely or very likely that AI will make humans more productive versus just 17% of the general public. I’m inclined to agree with the experts on this one, but it’s such a different picture that people have in their heads. I’m not sure what to make of it.

Eileen Yam: I think part of this is that experts perceive AI use among the general public as much more prevalent than the general public itself does: people don’t perceive their own interaction with AI as being nearly so frequent as experts assume.

So to the extent that you are an expert who believes most people are interacting with AI pretty much all the time or several times a day, you assume they have a lot more data points to inform their opinion about how it’s affecting their lives. And with that big disconnect over whether it’s making people more productive, there might be some element of experts just reflecting on AI in their own lives: “It’s making my life a whole lot more productive as someone who’s steeped in this world, and it’s what I’m drinking and eating and sleeping all the time.”

Rob Wiblin: Yeah. One that jumped out to me was that 51% of experts think that they’ll one day trust AI with important decisions versus 13% of the public. Presumably more than 13% of the public is comfortable with AI being involved in some way in advising decisions, but experts are so much more comfortable than the public with the idea of full delegation to AI. Feeling comfortable with that is actually quite a niche position in the broader world.

Eileen Yam: Yeah, that’s right. And this question of full delegation versus assisting or serving as a tool, that’s the crux of the conversation in a lot of circles. I think even among the general public, people who might use an LLM to maybe clean up this sentence I’m struggling writing: it’s not so much that they entirely are offloading a writing task, but there’s some element of “just assist me” or “be a tool for me.”

Rob Wiblin: Here’s another big gap: 76% of experts think AI will benefit them personally versus 24% of the general public. Maybe it makes sense because the people who work in the AI industry expect to personally profit in their career; this is their industry, this is their sector. But it’s just a big gap in people’s comfort level about how this is going to affect them over coming years: people who are in the industry, who are already benefiting really from the explosion of this technology, are going to be literally receiving personal benefit all the time in the form of sometimes incredibly high salaries.

I think that could really drive them being very out of touch. If you compare their experience with the experience of some future office worker who perceives, correctly or incorrectly, that they’ve lost their job to AI, imagine how different their sentiments are going to be when they’re speaking to their member of Congress or otherwise considering whether to vote. I think it does create the potential for quite an elite-versus-non-elite gap, or populist-versus-non-populist gap here.

Eileen Yam: Yeah, that’s right. And I think that an undercurrent to a lot of these conversations is just about equity, disparities in access, disparities in AI literacy. The fact that there’s such a gap in experts’ perception and the general public’s perception, that’s precisely part of the reason why we wanted to do this: let’s illuminate where this elite discussion is way down the road, far further downstream than where the public conversation and consciousness is.

The AI industry seems on a collision course with the public

Eileen Yam: So we don’t have a crystal ball. We don’t know how the public will react. We also don’t know what are the news cycles going to bring, what is going to feel like this is on my doorstep now. There is a lot of room for sentiments to evolve. … I think it just remains to be seen what externalities keep shaping people’s views, because it’s just changing so quickly. …

There is certainly that overarching concern about control. I do feel like that’s an area where the needle isn’t moving in a better direction over time, at least. So that’s something that is just enduring, and that I think all parties keep an eye on — because it’s sort of adjacent to other areas of concern, like misinformation or deepfakes, or feeling like you’re getting duped by content that you don’t realise is AI. As far as buckets of concerns go, that control piece is definitely something that people hold dear, and it’s just lingering and enduring.

Rob Wiblin: How do people feel about regulation of AI? Did they expect it to be regulated too much or too little?

Eileen Yam: That’s one area where the experts and the public do agree that they don’t have much confidence in either government or industry to effectively regulate AI, put the guardrails in place that they need to. So I think that there’s this sentiment that, “Yeah, regulation needs to be there, but it’s not government or industry that I have a whole lot of confidence in stepping up to that.”

Rob Wiblin: Yeah, it was interesting. I think 58% of the public said that they were more concerned government won’t go far enough in regulating AI, versus 21% who were concerned about it going too far. But it’s also true that when you ask experts and the public, “Do you have confidence in the companies to govern themselves?” they’re like, no: I think 60% or something said no. Then you ask them, “What about government? What about the alternative?” and they’re like, “No, we also don’t like that.”

Eileen Yam: Right, right.

Rob Wiblin: We can appeal to the angels perhaps to come down and govern AI for us.

Eileen Yam: Yeah, exactly. And I think there was one quote from one of the experts in our sample that was basically something to the effect of: “Listen to these congressional hearings: these legislators don’t know anything about this technology. How on Earth could we expect the government to regulate effectively?”

I think that is something that is shared, and that is one of the areas where both the insiders and the general public do converge: that governance, regulation, I’m not really seeing industry or government stepping up to do that effectively.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].
