Transcript
Cold open [00:00:00]
Eileen Yam: Is there a looming techlash against AI? A notable share of these open-ended responses invoked two movies: Terminator and 2001: A Space Odyssey.
Rob Wiblin: Sixty-six percent say that it should play no role in judging whether two people could fall in love. That one jumped off the page for me.
Eileen Yam: Certainly for creative thinking and forming meaningful relationships, of those who took a side, far and away, people felt like AI is going to worsen these abilities.
Eileen Yam: Every day I’m reading 17 headlines about AI. I feel a little saturated.
Rob Wiblin: Isn’t this a bit much? Some days I do feel a little bored of it.
Eileen Yam: But no, for the general public, equal shares: 36% say it’s described about right, and then 36% feel like it’s actually being made a smaller deal than it should be. That’s pretty notable.
Rob Wiblin: I really wanted to do this interview in part because I think it’s remarkable the degree to which we’re just completely out of touch.
Eileen Yam: Seventy-four percent of experts say that AI will make people more productive compared to 17% of the general public.
Rob Wiblin: It’s such a different picture that people have in their heads.
Eileen Yam: It warms my heart when I see people like you say, “Oh my gosh, I’d never thought that the general public was over here and experts over there” — because I feel like that’s precisely the point.
Rob Wiblin: When you ask them, “Do you have confidence in the companies to govern themselves?” they’re like, “No.” But then you ask them, “Well, what about government? What about the alternative?” and they’re like, “No, we also don’t like that.”
Eileen Yam: Right, right. Who’s supposed to do it then?
Rob Wiblin: We’ll have to appeal to the angels perhaps, to come down and govern AI for us.
Eileen Yam: That’s one area where the experts and the public do agree.
Eileen Yam: I really loved this quote from a teacher —
Rob Wiblin: Yeah, this is the quote that made it click for me as well.
Eileen Yam: Yeah, this person wrote…
Who’s Eileen Yam? [00:01:30]
Rob Wiblin: Today I’m speaking with Eileen Yam, director of science and society research at the Pew Research Center here in Washington, DC.
Pew Research Center has been doing public opinion polling on artificial intelligence for many years. But in the last year they’ve had three reports that really caught my eye, where they surveyed over 5,000 US adults on their opinions on all different aspects of the artificial intelligence issue, and also spoke with 1,000 experts who were part of the AI ecosystem to gauge their views.
And all of this polling turned up some very strong feelings on many different fronts, and concerns that were quite different from what I expected them to be. It also turned up some massive differences of opinion between the US general public and AI experts in particular — which all made me think that we might be in for a slightly bumpy ride as artificial intelligence begins to affect people’s day-to-day lives more and more. That’s going to be the topic of today’s conversation.
Thanks so much for coming on the show, Eileen.
Eileen Yam: Great, thanks for having me.
Is it premature to care what the public says about AI? [00:02:26]
Rob Wiblin: Why should we care about public opinion polling about AI at this kind of early stage? I think a critic might say it’s too soon to really know what the public is going to think. They haven’t thought about it that much. For many people, it hasn’t affected them, or they haven’t been using AI extensively as yet, so we really just can’t say what people believe.
Eileen Yam: I’ll answer that in two parts. So the first question is why should we care about public opinion? This is a new innovation that has, depending on how you look at it, either left the station or is about to leave the station. And a lot of this very rich dialogue, discussion, and debate about AI and all of its implications is really happening in elite circles. So we’re talking about insiders — and I would count you among them; you are superlatively conversant in AI — industry leaders, policymakers.
But what we really want to understand with public opinion research is just the general public: people who aren’t spending hours and hours just mulling over every aspect of AI. Don’t forget that in this country about half the population doesn’t have a college degree, for example — so when you hear about AI on college campuses, for half the adult population in this country, that’s not an experience they’re having, right?
So the why we care about public opinion: it’s about their approval of this tech, their trust in this tech. That has a ripple effect for what policymakers are prioritising when they’re thinking about regulation: “How much is this actually bubbling up to the surface as a concern among my constituents?” for example. And from the industry perspective, I think you’d want to know how the public is receiving this product of mine — if nothing else, out of sustainability concerns, concerns about any kind of backlash or “techlash.”
So for all those reasons, this public opinion approach of just bringing the voice of regular people into a conversation that to date has still been pretty elite, that’s the why we should care.
Then the second part of the why is to be able to track change over time. So if you look at innovations like the internet: in the ’90s, 40% of American adults were using the internet, which seems unfathomable now. But now we can say, because we’re tracking it, that it’s up in the high 90%s. So tracking change over time, when something new is coming down the pike, is another reason to track public opinion.
As for your second question — why should you trust polling on something that is not so salient? When we ask US adults how much they’ve heard or read about AI, 95% now say they’ve heard at least a little. So there is some notion of, “Yes, I’ve heard of this thing.”
And when you’re designing survey research, to your point, it’s not the kind of topic that is very front of mind and omnipresent for the general public. So you really have to design your survey questions really carefully. For example, we often give an option of “Not sure.” So if we’re asking about an opinion about AI, about implications in different areas of society, we’ll offer that option up of “Not sure.” And we do find that for some questions, upwards of high teens, low 20%s will say they’re not sure.
The other thing is for surveying the general public, we really take care to ask about high-level attitudes. So nothing terribly technical. We wouldn’t use the term “algorithmic bias.” Or even something that for you, you reflexively understand if I were to say, “AI recognises patterns in data” — but there’s so many things in that phrase. I think of my retired auntie who hasn’t used a computer in a work setting in at least a decade: “What’s data? What do you mean by patterns?”
So when we’re crafting these questions, we really try to keep things at a broad attitudinal level. And that’s the kind of thing that we’ll talk about: I’m not going to use lingo or jargon in a survey question that would be tough or inaccessible to a lot of survey respondents.
The top few feelings the US public has about AI [00:06:34]
Rob Wiblin: So we’re going to go through a lot of different results in this conversation. I think we’re going to be talking about three different reports that Pew has published in the last year. We’re going to link to them, obviously, and people should go check them out. They’re very readable. Pew does a really good job of communicating its results and making the graphs very easy to understand and drawing out the main conclusions. So if you’re interested in this conversation, do go read the original reports.
But in a nutshell, how does the US public feel about artificial intelligence, would you say?
Eileen Yam: I’d say it’s a very nuanced story. I was reading our findings just a couple of days ago, prepping, and I was reminded of this school bus driver I had in elementary school who used to say, “Get on the bus or be under it.” Arguably not exactly age-appropriate messaging for children. But what I have found in looking at this data, and what really resonates with me, is this dual sentiment of, “I’m willing to have AI play a role in helping me out with chores, with day-to-day tasks.” There’s also a sentiment of there’s the risk of losing out on opportunity if we’re too reluctant to use AI. So this feeling of, back to the bus analogy: I may have a little trepidation about where it’s going, but I kind of feel like I want to get on it for certain areas of my life.
But then there are some sort of persistent overarching concerns at a societal level, at the level of how AI might affect some core human abilities. And that’s where there is this duality of recognising, “I’m kind of open to AI making my life easier — this is a train leaving the station, or it’s left the station — but at the same time, I’m a little wary.”
Rob Wiblin: How are those two things balanced against one another?
Eileen Yam: So what we learned in our most recent poll is that 50% are more concerned than excited about the increased use of AI in society, and that’s up from 37% in 2021. So there’s been an uptick in concern. And the other big finding: every time I’ve looked at coverage of this poll, reporters jump on the fact that 61% want more control over how AI is used in their lives.
Rob Wiblin: And what is the biggest concern that people raise when you ask them, “What are you concerned about?”
Eileen Yam: So we really wanted to tap into that in survey respondents’ own words. So when we asked, “Do you have concern that AI poses a high risk to society?” about six in 10 said yes, that was the case.
The biggest concern boiled down to an erosion of human abilities and social connections. I have some great quotes from people’s responses. I think that this one really encapsulates it for me:
We are sacrificing creative freedom and challenging ourselves mentally for the sake of convenience. We’ll only decrease our ability to think critically and use actual sources with increased use of AI.
So that’s kind of at the existential level, but then another respondent took it even further and said:
It’s my belief that we have evolutionary needs — such as the need for a deep social connection, contribution to community, and purpose — that are not met because they are all too easily but inadequately replaced by technology, like physical distance, social media, jobs, et cetera.
So 27% of the respondents who said they saw high risk from AI really homed in on this erosion of human competencies and connections. That was one of the top ones.
The other big one was about control. Like I mentioned, in our survey, 50% were more concerned than excited. And one respondent who gave an example of wanting more control was really talking about it at a societal level:
Society will be too slow to regulate and control AI. The technology will advance rapidly and outpace our ability to anticipate outcomes (both positive and negative). It will therefore be extremely difficult to implement and deploy risk management strategies, plans, policies and legislation to mitigate the upheaval that AI has the real potential to unleash on every member of our society.
So that person, who seemed pretty conversant, was really coming at control from a policy level, a regulatory level. Those are the kinds of things that come to mind when people think about what the risks are here.
Rob Wiblin: So I really wanted to do this interview in part because I think you’re exactly right that me and almost everyone who I talk to about AI, we’re all insiders to one extent or another. And I think it’s remarkable the degree to which we’re just completely out of touch with how the general public in the US, and probably the general public around the world, feels about AI.
I think that’s not necessarily a sign that we’re more ignorant or that anything bad is going on here. It’s just when you are working in an area, when you’re thinking about it all day, you can just end up with very different attitudes than someone who’s thinking about it less or just has a very different kind of life to you.
And I think that creates a bunch of potential for conflict that people may not anticipate coming. The big worries that people have — about disempowerment or degradation of people’s capabilities and degradation of interpersonal relationships — if someone asked me, “What are you concerned about with AI?” I kind of share those concerns, but that’s not what I would put at the top. And I think it’s not what very many people in the AI industry would put at the top. Because for most people who are building these systems, they think of it as empowering people — and that is their vision, that’s their hope.
But the public feels very differently, or perceives the impact that these technologies are going to have very differently. And if this does become a top-level political issue because it’s affecting people so much more, or there are important events — things that really go down that involve AI that force people to talk about this and grapple with it — I think the conversation could go in directions that leaders of AI companies may not expect.
Eileen Yam: Yeah, the question of whether there is a looming techlash against AI, I think that’s on the minds of a lot of people in these elite discussions. And you see it when you ask the general public about risks, and about where they see maybe an opening for AI in their lives versus not.
A notable share of these open-ended responses invoked two movies, Terminator and 2001: A Space Odyssey. And I don’t know if you’re familiar with both, but it was all in the context of this cultural touchpoint that predates all of the conversations we’re having today. 2001 goes back to 1968. So there’s this cultural priming of almost a kind of sinister supercomputer in 2001. Is it OK to have spoilers about movies that are 50 years old?
But bottom line: I haven’t even seen that movie, but I know the takeaway that there’s a supercomputer gone wrong. Same with Terminator — I did see that — this AI cyborg assassin. That again is decades before all the conversations we’re having today.
So one thing that I did note is that so many people, even if they may not be particularly informed about the ins and outs of AI and training data and bias, have been primed culturally in a way that for other tech innovations… I don’t remember any feature film about a sinister smartphone, or…
Rob Wiblin: Or social network.
Eileen Yam: Right. And there’s something about this supercomputer that I think is different about this tech. So to me, I think that’s one big context when you think about techlash: how much of it is actually culturally primed, dating back to before we even had these chatbots today, or concerns about deepfakes today? So that’s one thing.
The other thing about this particular tech is how people are engaging with it unbeknownst to themselves. So it’s just sort of ambient in a way. You’ve got to opt in to having a phone; you’ve got to opt in to using a browser. You’re knowingly using your phone in a way that, with AI, isn’t necessarily the case.
As far as whether we are heading for kind of a backlash to tech: I don’t have a crystal ball. I can’t say. But what you do see signals of here is just this sense of, “I want more control at a personal level.” But then also, “I don’t really feel confident that industry or government will exert sufficient control and guardrails on this tech.”
So no, I can’t predict if this is going to translate into a techlash or if it’s just going to be a nothingburger — that this just is a wave that we ride and it all turns out to be no big deal to anybody. But I do feel like there is this almost priming people had, and I was really struck by how many people invoked Terminator and 2001. There’s something about that control piece, or just AI getting outside the domain of behaving in the way humans intended it to, that kind of bothers people.
Rob Wiblin: Yeah. I guess the thing that’s unique about AI is that it has the potential to act more autonomously, to go out and pursue goals without a human necessarily having oversight of it at all points. So I guess it’s very natural that that’s the thing that maybe stands out to people as most unique and most disconcerting about it.
Eileen Yam: Yeah, certainly unlike other innovations of the digital era that you might think about, like the internet or social media or mobile tech.
The public and AI insiders disagree enormously on some things [00:16:25]
Rob Wiblin: Here’s a crazy number that was in one of these surveys: 6% of the general public say it’s highly likely that AI will make humans happier, and 50% say that it’s unlikely that AI will end up making human beings happier. We’ll talk about some of the survey results from AI insiders, from people in the AI ecosystem, but they’re like night and day. Experts do have a whole lot of worries about AI, but by and large, they expect it to improve the human experience, to make life better, to make people more productive at work. And there are big gaps on these between people involved in AI and the general public.
Eileen Yam: Yeah. Where it’s most striking is in two areas. So 73% of experts see a positive impact on how people do their jobs, versus 23% of adults in the general population. And the other area where there’s a big disconnect between expert opinion and the general public: 69% of experts see a positive impact on the economy versus 21% of the public. So we’re talking about a really big gap there.
And I think a lot of that, again, comes back to people who are head down thinking about all the permutations and the pros and the cons and the potential negative consequences and all of the benefits. Regardless of the underlying reasons why the experts are so much more bullish, the fact that they are so much more bullish is a really interesting finding right there. Just what is different about the experts and what they’re seeing, compared to people who maybe think about it, maybe know when they’re interacting with it, maybe know what “training data” even means, maybe not? But that’s pretty huge.
Rob Wiblin: Yeah, there’s different angles we can take on this question of the nature of people’s anxieties. I think there’s questions about risk, there’s questions about jobs, there’s questions about people’s quality of life in general.
What fraction of people say that they think AI is risky or very risky, or think that it’s more likely to harm than benefit them?
Eileen Yam: So we asked this question of both the general population of US adults as well as AI experts. Among US adults, 43% say it’ll harm them compared to 24% who say benefit them; and then for AI experts, it’s the converse: 76% say it’ll benefit them and 15% say harm them.
Here I think that there are a couple ways to think about this. One is that for the general population, fully a third said they’re not sure. So again, back to the question about asking the general public about something that may not be that salient: I think in this context, the “Not sure” is kind of like an undecided voter. As time goes on, as maybe everything evolves, different news cycles, that might change. So that 33% is actually pretty substantial.
For the AI experts, again, they are steeped in this stuff. And I think it’s also worth noting they are studying this, working in this industry. So they’re employed in this industry in a sense, so it’s really —
Rob Wiblin: If they thought it might harm people, maybe they wouldn’t go into it.
Eileen Yam: Right. Maybe they wouldn’t still be there. But I think the takeaway for me that I really picked up on with this question in particular is just that third of the general population that just says they’re not sure about, personally, is it going to benefit or harm me? I think that’s something that is very important to keep in mind: that it’s not front of mind. So really, if on a dime in a survey, you’re asked, “Is it going to benefit or harm you?” and you haven’t really given a whole lot of thought, that’s where you get a third saying they’re not sure.
Fear #1: Erosion of human abilities and connections [00:20:03]
Rob Wiblin: So after the question about whether you think AI is risky or very risky, you gave people the opportunity to just put in free-text responses and explain why they either think it’s risky or why they think it’s not risky. And then you try to categorise those free-text responses. I think the free text stuff is really useful because it allows people to, in their own words, in their own language, highlight things without you leading them, necessarily.
And among the 57% who rated the risks as high or very high:
- 27% cited erosion of human abilities and connections; that was the biggest cluster.
- Then there were 18% who were worried about inaccurate information.
- 17% were concerned about loss of control of AI.
- 11% were concerned about misuse or people using it for bad purposes.
- And 9% mentioned job loss.
And there was a spread of other smaller issues. It’s such an interesting combination of issues. I think the stuff that experts tend to talk about the most, it’s misuse, loss of control, maybe job loss, certainly inaccurate information — in the sense of are the models good, or at least are the LLMs giving people the right answers?
So this erosion of human abilities and connections was the one that maybe surprised me the most on that list. Is it possible to give people more of a texture of exactly what about human connections or human abilities people expect to erode?
Eileen Yam: Yeah. So there is this question that you’re referring to about what people, in their own words, think are the main risks of AI to society. We also asked, in a separate question, to what extent AI will affect people’s ability to form meaningful relationships and think creatively: will it improve those abilities or make them worse?
So certainly for creative thinking and forming meaningful relationships, of those who took a side, far and away, people felt like AI is going to worsen these abilities. And in the open-ended, freeform responses that people offered when it came to this kind of erosion of human abilities, I really loved this quote from a teacher; it gives a lot of food for thought about how we think about how children are growing up today, developmental stages, how they think.
Rob Wiblin: This is the quote that made it click for me as well.
Eileen Yam: Yeah. This person wrote:
As a school teacher, I understand how important it is for children to develop and grow their own curiosity, problem-solving skills, critical thinking skills and creativity, to name just a few human traits that I believe AI is slowly taking over from us. Since children are digital natives, the adults who understand a world without AI need to still pass the torch to children for developing these human qualities with our own human brains, instead of relying on the difficulty to be passed on to AI so that humans don’t have to feel the struggle of what real learning is.
And I, as a parent, almost feel a little chastened there. Just, oh my gosh, what kind of example am I setting? But no, it’s almost beautiful in the poignancy of how this person really homes in on the different world that kids are growing up in today. So yeah, with that erosion, she’s almost hinting at the essence of what makes us human: this kind of curiosity, problem-solving skills, thinking skills, overcoming the hurdles in your head and relying on yourself to clear them. It’s just very evocative.
Rob Wiblin: Absolutely. I spend hours every day now interacting with AI when I’m at work. For almost everything, the first thought that I have when I’m given a task or preparing for an episode is, “How can I collaborate with AI to make this easier and faster and higher quality?”
I guess not everyone has that experience or has a positive experience with it. But at least for me, I’ve never felt more engaged, I’ve never felt more empowered, I’ve never felt more mentally active — because the AI allows you to speed up the routine stuff so you can get to the harder stuff. So the sense in which people are going to be disempowered, and not going to have such good connections, didn’t resonate with me.
But then when I’m imagining my own child growing up in this world where they’ve never actually had to do the thing themselves all the way, where all the time they’ve just been able to delegate it to an AI… And by the time my kid is in school, these models are going to be even more powerful. It’s going to be maybe unclear what value they can add. Especially as a six-year-old or a seven-year-old, you can always just get the LLMs to do the thing for you better than you can do it. You’re going to get better, but the AIs are getting better as well. So they’re always going to be ahead of you.
And I guess we don’t know how it’s going to pan out, but children growing up in this era might well end up feeling a lot of connection with the AIs that they’re talking to, with the chatbots. Maybe not. I guess some people have that experience now, that they feel a sort of personal connection — they feel almost like the AIs are their friends — and other people don’t. But if that’s the world you grow up in your entire childhood, is that going to affect your ability to have positive relationships with other people? Especially when the AIs, they kind of give you whatever you want. They’re not like other human beings with the rough edges and the difficulty, the things that you have to navigate.
Eileen Yam: Yeah, absolutely. And with this question we asked about how AI is affecting people’s ability to form meaningful relationships: we’re typically thinking about meaningful relationships with other human beings, and the extent to which people perceive AI as somehow supplanting that or eroding those qualities.
We asked another question about whether people saw a role for AI in matchmaking and predicting who might fall in love with each other. There’s something about relationships that, in our survey work — in this one — seems a little sacred: this is just not the domain of computers.
I think what you picked up on as far as learning, and what this teacher picked up on, is a little different: when you’re getting into this realm of problem solving, seeking out and being able to find information, overcoming that writer’s block on your own — that’s where it gives people pause.
Rob Wiblin: Yeah, so:
- 53% of people say AI will worsen people’s ability to think creatively, versus 16% who say it will be positive. I would say it’s positive for me now, maybe in the future it could become negative, but that’s striking to me.
- 50% say AI will worsen the ability to form meaningful relationships versus 5% who say it will be positive.
- 40% say AI will worsen people’s ability to make difficult decisions versus 19% positive — so it got a little bit more mixed there, but still leaning negative.
- And 38% say it will worsen people’s abilities to solve problems versus 29% positive.
I think this is just so different than the picture that people who are really hopeful and enthusiastic about launching these products have. They think it’s going to help people with all of these things.
Eileen Yam: Well, I think what this data speaks to is that both things can be true, right? You can believe that AI can help solve problems, but not necessarily feel that’s a good thing to be offloading wholesale to a computer, to an algorithm.
And I think that’s the crux of a lot of conversations I’m in, particularly in maybe the public health realm. There is this tendency to almost make this a binary conversation of should AI do this or not, as opposed to AI as a tool, as an assistant to the humans that actually are the executive decision makers. And I think there’s a little bit of that tension.
The other thing I want to note about the data on whether people think that AI is eroding these different human qualities — think creatively, form meaningful relationships, for example — is there is a sizable share that actually says neither better nor worse. What I find striking is 25%, when it comes to forming meaningful relationships, actually say it’s not going to make us better or worse, and then another 20% say they’re not sure.
It’s really interesting to me that they’re not sure, or think it could go either way: meaningful relationships with or without AI, maybe it doesn’t make a difference. So there is nuance in those findings. However, those who expressly have an opinion one way or the other definitely lean towards the negative: that it’s just going to make us worse.
Rob Wiblin: I guess the fact that there are so many people who are unsure or on the fence suggests that we could see [big swings]. Like if there are big events, something captures the public imagination and gives them a sense of, yes, AI is degrading young people’s ability to have relationships, then we could see big swings. Or conversely, if people don’t observe it, maybe they’ll just get more comfortable and become happy with it over time.
Eileen Yam: Right. It comes back to salience and what’s the mental image you have. When it comes down to these human qualities, I think for a sizable share, the jury’s still out, right? I think that if you have an opinion, yeah, you’re steering negative — but for a lot of these data points, there’s a sizable share that could go one way or the other.
Fear #2: Loss of control of AI [00:28:50]
Rob Wiblin: The third largest worry on the list was loss of control of AI, which sounds like the kind of thing that we talk about a lot on this show. What do people mean by that? It could mean so many different things, right?
Eileen Yam: Yeah. So there was one quote that was tagged as a Terminator scenario by one of my colleagues. This person wrote in response to that question about what they see as the main risks to society:
Scientists don’t even understand the full spectrum of AI, as it’s already surpassed human intellect. That is scary and dangerous.
So there is a little bit of catastrophism in there, of just, “This is the machines coming for us.” In a way, that’s control. At the end of the day, if this is a machine whose intelligence exceeds yours and can just run amok, that’s actually pretty terrifying for some people. So there’s that.
But I think the other thing when it comes to control, there was one question we asked that was really interesting, and it was basically, “How important do you think it is for you personally to be able to detect when content is created by humans versus AI?” About seven in 10 said they think that’s really important. But then when we asked, “How confident do you feel that you could actually spot AI-generated content versus human?” about half said, “Not at all confident.”
So that’s another part of control too: whether that discomfort boils down to “I don’t like feeling duped,” or just “This is something I need to be able to do in this world where AI could be permeating a lot of areas that I’d be engaging with.” It’s just a little disconcerting to live in a world where there’s a lot of exposure to AI that you may not even know about — among it, content created by AI that you’re not aware of. I want to feel confident that I can spot what’s made by a human and what’s made by a computer, but a lot of people don’t feel that.
Rob Wiblin: Yeah, misinformation was one where the general public and experts really saw eye to eye. I think 60% to 70% of both of those groups thought that AI-generated misinformation was a significant concern. That’s actually one where I’m slightly inclined to think that people might be exaggerating, or that it’s a manageable issue. But I guess it’s one that people have talked about a lot, and I think it definitely is a legitimate concern. And people across the board are a bit worried because they correctly think they can’t identify when things are made by AI.
Eileen Yam: Right, exactly. And there’s been so many news cycles about celebrities being duped or being impersonated. And of the people who said they see a high risk of AI to society, the negative impact on accuracy of info was mentioned by 18% of those people.
And I have some great quotes from some respondents on that as well. One was, to your point:
AI can very easily be used to fake people’s likeness and voice. This is absolutely dangerous in the hands of criminals or other dishonest people. Identities can be stolen; innocent people could be framed for doing or saying things they didn’t do/say.
And then relatedly, another one said:
Right now, generative AI just has no relationship with the truth whatsoever. Properly using it takes a ton of effort to verify that the information it provides is correct, and which I think really mitigates its purported time savings. You can make total garbage really fast and that’s about it. It’s great for scams and low-effort work, poor if you actually want to do things right.
Fighting words.
Rob Wiblin: [laughs] So what do you really think?
Eileen Yam: Yeah, yeah. But I’m trying to remember the latest deepfake that was in headlines at the time of this survey. I believe it might have been a Secretary of State Marco Rubio voice impersonation. And yeah, I think that there is some sense of AI being used for nefarious purposes, and that just makes people deeply uncomfortable. And that’s also dovetailing with not feeling a sense of control. So I think they’re kind of two sides of the same coin: inaccurate information and lack of control dovetail with each other.
Americans don’t want AI in their personal lives [00:33:13]
Rob Wiblin: So you also asked about a bunch of specific applications of AI: did people think that AI should play no role at all, some role, or a big role?
And the things that people most objected to AI being involved with were:
- Advising people about faith in God: 73% said that it should play no role there.
- 66% said it should play no role in judging whether two people could fall in love. That one jumped off the page for me.
- 60% said it should play no role in making decisions about how to govern the country. I was a bit unsure whether to interpret that as they’re concerned about AI even assisting government bureaucrats or politicians, or whether what they want is that the AI not make the final decision, which I think is a lot more understandable.
- 47% said no role in selecting who should serve on a jury. I guess a sensitive issue.
- And then 36% said it should play no role in providing mental health support — which, frankly, I think is a big mistake, personally.
But yeah, it’s all sort of the personal stuff — that’s a common theme with several of these. And I guess also the exercise of power through government.
Eileen Yam: I think that when we were developing this question, we were trying to capture a few different buckets along one dimension, from the personal realm to the less personal realm, be that medical science or finance. But the other dimension is how high stakes or consequential getting it wrong or right is.
For something like forecasting the weather, that’s the science realm; searching for financial crimes, the finance realm; developing new medicines, the healthcare/science realm. That’s where you see more receptiveness to a role. On the other end of the spectrum: matchmaking, faith in God — that affects me more personally than a weather forecast gone wrong. Unless I’m maybe a farmer, a bad forecast is just an unanticipated bad hair day. It doesn’t land the way that advising about faith in God does — that’s deeply personal, deeply individual.
So yeah, when I look at these findings, I bucket them off: in the big data realm — medical/science/finance — there’s more receptiveness. But on the more personal end of the spectrum — matchmaking, religion — that’s where people feel like, not so much. And this is where the shares saying they’re not sure are still notable, but not as high as we’ve seen for other items. There’s still maybe 13% to 19% saying not sure, but these opinions are a little bit more concretised, at least as far as people actually weighing in.
And for providing mental health support to people, you noted you were surprised by how low it was that 36% said no role at all. It’s interesting because literally just the other day someone had the converse reaction of, “Forty-five percent see at least some role in mental health therapy?!” Granted, this was someone in the public health realm who feels like, I don’t know about therapy by algorithm — but I think that just speaks to there are different points, different perceptions, and it’s super nuanced.
But I think that in broad strokes, the deeply personal, very individual aspects — like matchmaking, like religion — that’s kind of a no-fly zone. But then the more big data, out-there applications — developing medicines, forecasting weather, identifying financial crimes — that’s OK.
Rob Wiblin: So the public, in the scheme of things, was quite comfortable with AI being used to fight crime, I guess to surveil people to identify suspects of crimes. Which somewhat surprised me, because one of my anxieties is that AI might be used for mass surveillance, might be used by government to oppress people. I guess that was less salient to the public than this role in romantic matchmaking.
I’m kind of curious. Maybe it sounds bad, the idea of AI forecasting whether people will fall in love, but if someone designed a new dating app, and the dating app used some sort of algorithm to match people based on their answers to questions, trying to estimate their compatibility, do you think many people really would be like, “This is unacceptable. This product shouldn’t exist”?
Eileen Yam: Well, I think you actually present a bit of a test case for how experts interpret that question. Where your mind went is to dating apps and data and algorithms, which is not necessarily how a regular Joe might hear that question. They might be thinking more of AI as something that is not necessarily tethered to an app. And even this notion of data, and, “What does that even mean to have this basically robot or algorithm that is trying to predict what my personal preferences are?”
But I think that what’s interesting is — and I do regard you as an AI expert as we would have defined it here — your mind went to dating apps, data, and algorithms, in a way that not everyone’s necessarily would in hearing that question of could AI judge whether two people could fall in love? We never mentioned apps, for example. So if you’re someone who would never even consider a dating app in the first place, is that really where your mind’s going? That to me speaks to you perhaps reading the question differently than others might.
Rob Wiblin: Yeah. Not necessarily wrong, I guess — it’s just that people hear the question in particular ways, and different things jump to mind.
Eileen Yam: Yeah, exactly. And that’s the premise of our survey research: a lot of times we don’t spoon-feed people, “What we mean by this question is…” More often than not we really want to know, “What are you reading into that? Answer it accordingly, as far as how you’re coming at it and what comes to mind for you.” And that’s where I find your reaction very interesting. When I read your question on that in the runsheet, I thought it was really interesting that you read it that way, because I’m not sure everyone would.
Rob Wiblin: Another surprising result to me was that the thing that people were most comfortable with AI doing out of the list was forecasting the weather:
- 74% of people said that it could play a big role or some role in forecasting the weather.
- 12% of people said AI should play no role at all in forecasting the weather.
- And 14% were unsure.
I’m like, 12% of people think… How do they think we generate weather forecasts now? Do they turn on the Weather Channel — “Ugh, it’s disgusting!” I guess many people just won’t have thought about this. How much time do we spend thinking about weather forecasting?
There was a big difference by education on this: 88% of postgrads said yes, it’s absolutely fine for AI to be involved in forecasting the weather, versus 62% of people with a high school degree. Maybe postgrads and people with a high school degree, at least some of them, think of very different things when they think of AI forecasting the weather. Or possibly it’s just an issue where no one’s ever thought to ask them this question before: someone’s on the phone and they’re going through the survey.
Eileen Yam: Yeah, I would say a little bit of column A, a little bit of column B. But I think that the educational gradient does speak to something. People who are more conversant in AI in general, more exposed to it, interact with it more, arguably understand this idea of data and number crunching more: there is just a lot more salience to this idea of an algorithm predicting what the weather is going to be like.
So getting back to what is interesting for me: I looked at this finding and thought, wow, that’s a really solid majority that are OK with AI forecasting weather. It never occurred to me to think, “But 12% say no role at all” — because comparatively, across all the other items we look at for roles in society, that’s quite a bit lower. So I felt like it was interesting that that was your reaction, because I feel like, wow, that’s a whole lot of people who are OK with AI in weather forecasts.
Rob Wiblin: The Weather Channel is probably safe.
Eileen Yam: Yeah, not going anywhere.
AI at work and job loss [00:40:56]
Rob Wiblin: How do people feel about AI in the workplace and how it might affect their jobs?
Eileen Yam: That was a really interesting line of questioning that we asked both AI experts as well as the general public.
So over the years, when we have asked about concerns about or perceptions of AI’s effect in the workplace, a recurring concern is job loss, people being supplanted in certain industries. But what’s interesting is when you compare how the experts think about the impact of AI on jobs with how the general public does: 73% of experts actually see a positive impact on how people do their jobs, versus 23% of adults.
Then I noted this other set of data showing that the public and the experts are kind of aligned on which specific professions they see as being vulnerable to job loss: both groups mention things like cashiers and engineers. And in our past work, we found that people were more likely to feel that people in other professions would be affected by AI taking away jobs, as opposed to their own jobs. So job loss is definitely one of the headlining concerns that we find in the general public.
Rob Wiblin: Yeah, I guess one of the three reports was mostly focused on this question of jobs and the workplace. The numbers that stood out to me here were:
- 52% of workers are worried about future impacts of AI in the workplace.
- 32% think it will lead to fewer job opportunities for them in the long run.
- 56% of the public are highly concerned about AI leading to job loss, versus 25% of experts. So this is one of the things where the experts and public feel pretty differently.
- An interesting thing was that people who use AI in their job are more worried about its impact on their job: 42% of workers who use AI think it will lead to fewer job opportunities for them, versus 30% of those who don’t use it. I guess maybe that makes sense, because you’re imagining office workers who can see that maybe they are more replaceable than they used to be.
- And workers with more education who use AI are also more worried: 57%, versus 48% for people in the lower education category. I think that’s different from previous waves of automation — here it’s actually the people with more education, the professionals, who think that AI is coming for them.
Eileen Yam: Yeah, for sure. That is something that is just different from the Industrial Revolution, where you thought of the more “working with your hands” kinds of jobs as being most at risk. Whereas now it’s like, could this LLM actually do that analysis job better than — or if not better, more cheaply than — humans?
So I was looking again at our data on experts versus general public opinion on which jobs would be most at risk or where AI might lead to fewer jobs. It’s interesting: the public is more likely than experts to see job loss happening among factory workers, musicians, teachers, and — interestingly — medical doctors. And I mention medical doctors because, even with the overarching concerns about AI’s impact on society, the public is expressing a bit more receptiveness to AI actually having a beneficial impact in that medical realm.
But what’s interesting to me, to your point about certain professions being regarded as potentially at risk of AI supplanting workers: medical doctors wouldn’t have been on my bingo card as one of them as recently as five years ago. But the general public is up on that, and they feel like, yeah, medical doctors could actually be at risk here. So that’s super striking to me as well, and certainly different from the Industrial Revolution, which was never going to supplant doctors; there it was the factory workers who were more likely to be affected.
Does the public always feel this way about new things? [00:44:52]
Rob Wiblin: So it’s fair to say that the US public, at least many people, are apprehensive about AI. If I imagine someone in the industry who was a bit sceptical about how much one can make of this, I can imagine them saying people are always sceptical, always nervous about the new thing: “Anything that’s been around since I was a child, that’s OK” — the downsides are just acceptable and part of the air that we breathe. But the new stuff, that’s always what people get worried about.
Were people, for example, similarly concerned at this stage in the rollout of smartphones or the internet in general? Pew’s been going a while, so you have polling about that stuff, right?
Eileen Yam: Right, yeah. So I went back to look at how people were feeling about the internet in the late ’90s. We have a 1999 poll, and at that point internet use was about 40%, give or take: that was the share of people saying, “Yes, I use the internet.” So much lower than today.
One of the questions we asked at that time was how they felt about “having access” to all that information — this deluge of information right there in one place — and 62% said they liked it.
There wasn’t a corresponding sort of “hand wringing,” for lack of a better word, about the implications, the way that we’re seeing today about AI. And one big difference between those previous digital revolutions — the internet, smartphones — and AI is that people wilfully use the internet; people wilfully use their smartphone. There wasn’t this corresponding sense, again coming back to control, of “I don’t even know when I’m engaging with AI or something made by AI” — with the internet, you knew. So there was much more of a sense of there being a benefit here to the internet.
To the extent that concerns were raised then, it was interesting. One was about the speed of my connection — which today, in a broadband world, I kind of chuckle at. But then there were some who mentioned data privacy concerns. I don’t want to claim that there were no concerns about the internet, but it’s a bit of an apples-and-oranges comparison, because it wasn’t the kind of tech that felt ambient the way AI does.
Rob Wiblin: Yeah, yeah. I guess if you didn’t like smartphones, you could just not get an iPhone. But all of the media coverage about AI does carry with it this tone that AI is coming at you whether you like it or not. And I think that is kind of true, in a way that probably wasn’t true for smartphones. I mean, it’s hard to go without a smartphone now, but you can still do it: if you don’t like it, just have a dumbphone, if that’s really the decision that you want to make. But if AI replaces you in your job, you can’t really opt out of that.
Eileen Yam: Yeah, that’s exactly right. And that’s very different from, you know, let’s go back even further: calculators. People might have said the same thing, that it’s making you lazy, you’re not doing things in your head that now you can use a calculator. There wasn’t an existential conversation of, “This is changing humanity or the nature of society; it’s infiltrating every dimension of society.” It’s a calculator. It’s affecting your ability to do math in your head.
But I think there are some who would say: why not eventually regard AI as a tool, like the calculator? We now normalise calculator use, even in very advanced math classes in secondary schools. So there is a camp that says to keep your eye on the prize of AI being a tool. And maybe the trepidation about its effect on human competencies will shift as people start to think of it more as a tool, as opposed to the colleague who supplanted the human being who used to be your colleague. I think that’s the other thing.
I love that question, though, about how this compares to other innovations in the past. That analogy to a calculator is not really on my radar when I hear it; I usually think of smartphones, the internet. But yeah, probably there were people, maybe my grandparents, who felt like, “Why are you offloading these tasks to a calculator? I grew up using an abacus!” So I feel like perhaps there is some element of that in every innovation: maybe there are some slower adopters or naysayers, and maybe ultimately it’s regarded as a tool in the toolkit and normalised in a way that it’s hard to imagine now.
It is quite different, though, from other digital innovations that we lived through in the past couple decades.
Rob Wiblin: Yeah, I think I’ve sometimes seen cited quotes from ancient Greek times, when people would complain that writing was causing people to have worse memories. They didn’t have to remember the entire Iliad.
Eileen Yam: “You’ve gone soft.”
Rob Wiblin: Yes! I think inasmuch as AI does end up just being a tool, and it is just kind of an assistant that helps you accomplish more, if it continues to be the way it is for me now, I think that a lot of these concerns will recede. The case where I think people will remain worried is if the capabilities just keep growing, and eventually it becomes unclear what role the human is playing in the work or the decision at all. And that is what the pioneers of the field expect, sooner rather than later. Maybe it will take a lot longer. But I think people are aware of this possibility that they could be replaced in many parts of their life — and understandably, it makes them nervous about their future position.
I did a little bit of digging into what public opinion was about genetic engineering and atomic energy in their early phases. On genetic engineering, both of human beings and of crops, I think the public has always been kind of apprehensive: they felt nervous about it to start with, or negative about it to start with. And even as it has become quite common, people have stayed nervous about it. That’s an interesting result.
On atomic energy, back in the ’50s and ’60s, I think Americans felt good about nuclear energy: they were more excited than concerned. And it was only, I think, around Three Mile Island that public opinion really began to turn, and people began to be significantly more concerned than excited about it.
I guess the picture is people are sometimes nervous about new technologies, and opinion gets better; sometimes they feel good about technology, and things get worse. It absolutely is influenced by how events go and whether the thing that captures the public imagination, the thing that gets media coverage, is a positive story or a negative story.
Eileen Yam: Yeah. And the question is, is there going to be a Three Mile Island episode for AI, or is it going to be more like landing on the moon, and this is amazing? And I think you’re exactly right: that sentiment can shift on a dime depending on these externalities that really we can’t anticipate.
The public doesn’t think AI is overhyped [00:51:49]
Rob Wiblin: So there’s been this debate among more AI insiders of is AI being hyped up too much? Is its impact likely to be exaggerated? Or is the public maybe a bit asleep or not appreciating just how big the changes are that might be coming down the pipeline?
You asked about this, and the result was actually the opposite of what I would have expected: 36% of people thought that the impact of AI was being understated in coverage, versus 21% who said it was being exaggerated or overstated. There’s so much discussion about AI being overhyped in the media that I would have thought the public would say people in the industry love to talk up their own work and make it out to be a massive deal when it might not be. But no, it’s the other way around.
Eileen Yam: Yeah. And I think the full breakdown of that question maybe lends a little texture to that finding about how it’s kind of split.
So 21% said AI is being made a bigger deal of than it really is, 36% said it’s being described about right, and then another 36% said it’s being made a smaller deal than it really is. So to your overhyped point, the smallest share actually said, yeah, it’s being overhyped and made a bigger deal.
But the other thing we asked about is whether people thought it was important that people learn what AI is — again, defined broadly, so that means different things to different people. But do you need to be at least aware? Some modicum of literacy about AI, is that important? And nearly three-quarters said, yeah, that’s actually really important. So there is this sense of it is where the world is going. It’s important that I become at least conversant or literate in what this is.
And when you see 21% — the smallest share — saying it’s being overhyped, I think that somewhat speaks to an acknowledgment that there is something real to this. This is not something fly-by-night that’s just going to go away. I was particularly interested in that question because, going in, I might have thought: every day I’m reading 17 headlines about AI. I feel a little saturated.
Rob Wiblin: Isn’t this a bit much? Some days I do feel a little bored.
Eileen Yam: Yeah. Every angle on AI. But no, for the general public, the fact that there are equal shares — 36% say it’s described about right, and then 36% feel like it’s actually being made a smaller deal than it should be — that’s pretty notable.
Rob Wiblin: A lot of people are unsure. A lot of people are on the fence. I guess among people who have formed an opinion, they lean more pessimistic than optimistic about AI by 2:1 or 3:1, depending on exactly how you phrase the question.
I imagine most people now have used ChatGPT at least once. Maybe they use it occasionally, if not that frequently. Is it unusual to have this level of negativity about a technology that now quite a substantial fraction of the public has had some direct personal use of?
Eileen Yam: I think that I might phrase the question a bit differently. So there are two different prongs to this question of how much the public is using AI: there’s how much they perceive themselves to be using it and how much empirically they actually are using it. And we know from our previous research that there’s a lot of interaction with AI that is completely unbeknownst to the user. It’s ambient. They’re not affirmatively saying, I am going to chatgpt.com to use this. So I think that’s important to keep in mind.
In 2022, we asked Americans whether they were aware of AI being used in each of six different pretty commonplace applications — things like fitness trackers and spam detection, if you’re on top of this stuff — and only about three in 10 could name all six. So awareness is not always in step with people’s actual use of and interaction with AI.
For ChatGPT in particular, our latest data show that 34% of US adults have used ChatGPT. That is double the share of 2023. But 34% is still not even a majority, right?
Rob Wiblin: Can that really be right? I understand people use AI a lot less than me; I'm out of touch. But is it really the case that two-thirds of people haven't used ChatGPT or some other similar chatbot even once?
Eileen Yam: I don't have the exact question wording at my fingertips, but 34% affirmatively said, "I have used it." So that does mean that…?
Rob Wiblin: Yeah, that’s what they’re saying anyway. Maybe they’ve forgotten sometime that they tried it three years ago.
Eileen Yam: Well, ChatGPT is something you have to opt in to and actually seek out, so it's not really something you'd forget. It's not like, "Have you come across AI while using a GPS?" — where you may not actually be saying, "I am selecting AI as an option."
So yeah, 34% said they've used it. I did want to temper the overarching framing of it as something so many people are using — because it's not that people know they're using it all the time, the way nearly 80% of AI experts said the general public is interacting with AI every day or almost constantly. The general public's own perception is nowhere near that.
Rob Wiblin: What was the stat there? I think experts thought that 70% of people were using ChatGPT or similar products regularly?
Eileen Yam: Yeah. So the general public, they’re not even always aware of when they might be interacting with AI. Experts have a much loftier perspective on how much the general public is actually interacting with AI. So nearly 80% said that they think people in the US interact with AI almost constantly or several times a day, and that’s compared to 27% of US adults who would say the same.
So there’s this really divergent perspective on just how much people are exposed to these technologies. And I think that to your point about, is this something that’s so widely used yet people are so sceptical of, I don’t know how much people are answering these questions with this understanding that it is “so widely used.”
The AI industry seems on a collision course with the public [00:58:16]
Rob Wiblin: So I wanted to connect all of that with the vision that the frontier AI companies have for the world they're potentially going to create for us in coming years — basically the next handful of years, in their own self-conception.
I think many people at the frontier AI companies think that we’re heading towards a world of recursively self-improving AI within the next five years — so very big increases in capabilities potentially resulting from that. They talk openly, or at least until recently they talked very openly, about basically displacing almost all office workers, being able to do everything that office workers could do: dropping in an AI agent from the cloud and basically rendering most people who work at a computer obsolete.
I think in their personal lives, many of them delegate decisions to AIs that assist them in order to get more done, make more decisions more quickly. And I think they envisage a future in which humans are going to be leaning on AIs and deferring to them — sometimes just completely delegating decision making in their lives to artificial intelligence advisors who just have better judgement than those human beings, in the imagination of the people making these products.
I guess there’s more disagreement about the question of personal relationships. Some of the AI companies have shied away a bit from the idea that people would be forming personal bonds with these AI agents. But there are other companies that are specialising in that — character.ai as an example, I guess xAI — for better or worse, they’ve been trying to make these AI companions that people might have strong feelings towards.
And I think all of the companies hope to have a large impact in education, to have AI integrated into schools, to have AI making a big impact in healthcare — hopefully improving people’s health, hopefully preventing misdiagnosis, getting people the right treatment sooner.
But if this vision begins to play out — all of those things — it is going to push so many buttons for the public. They're scared about loss of capabilities; they're scared about deferring to AI; they want to be the one who makes the final decision. We'll maybe talk a bit more later about people's views on whether AI would help in education, but we were talking about the effect it might have on children if they're growing up with AI ambiently — people are worried about that, understandably I think. So they're nervous about AI in education, they have mixed feelings about AI in healthcare, and they're absolutely worried about losing their jobs — office workers and educated people in particular.
So we don’t have a crystal ball; we can’t exactly predict the future. But if this vision begins to take shape, it feels like there’s a lot of fertile ground here for people to get quite upset and have a lot of anxiety about the direction things are going. And maybe they’ll get more comfortable: sometimes things start happening, people see that it’s going OK, and they relax.
But I think it’s also possible that people will see these things happening and they’ll feel like they’re losing control. They’ll feel like the world is going in a direction that they don’t love, and they might well want to stop it.
Eileen Yam: So yes, we don’t have a crystal ball. We don’t know how the public will react. We also don’t know what the news cycles are going to bring, or what is going to make it feel like, “This is on my doorstep now.” There is a lot of room for sentiments to evolve.
I didn’t necessarily walk away from this research feeling like it’s all doom and gloom, the way that someone who’s really an insider — thinking hard about this, maybe on the evangelist end of the spectrum — might feel: “Oh my gosh, there’s such a disconnect!” I think there’s still room for wins and feelings of, “Wow, in medicine or in diagnosis and treatment, there might be some real potential for AI to make a difference.”
The extent to which the public is aware of developments down the pike that might actually give AI a better look — I think that remains to be seen. And that’s why we want to keep tracking this: there are these sometimes watershed moments in history that just signal to the public, “Maybe I have to give that a second look in this domain or that domain or the other.”
So we see different degrees of comfort with AI playing roles in different areas of society — there is some sense of, “Yeah, I’m comfortable with that. No, I’m not.” But you can also imagine something like fraud detection — “Oh wow, someone’s using my card, and AI picked up on it” — leading you to start seeing it in a different light.
So I think it just remains to be seen what externalities keep shaping people’s views, because it’s all changing so quickly. I really couldn’t anticipate whether a backlash is coming. If everything played out the way you just described, how is the public going to react?
There is certainly that overarching concern about control. That’s one needle that isn’t moving in a better direction over time, at least. So it’s enduring, and it’s something I think all parties should keep an eye on — because it’s adjacent to other areas of concern, like misinformation, or deepfakes, or feeling like you’re getting duped by content that you don’t realise is AI. As far as buckets of concerns go, that control piece is definitely one that people hold dear, and it’s just lingering and enduring.
Rob Wiblin: How do people feel about regulation of AI? Did they expect it to be regulated too much or too little?
Eileen Yam: That’s one area where the experts and the public do agree: they don’t have much confidence in either government or industry to effectively regulate AI, to put the guardrails in place that are needed. So there’s this sentiment of, “Yeah, regulation needs to be there, but it’s not government or industry that I have a whole lot of confidence in stepping up to that.”
Rob Wiblin: Yeah, it was interesting. I think 58% of the public said they were more concerned that government won’t go far enough in regulating AI, versus 21% who were concerned about it going too far. But it’s also true that when you ask experts and the public, “Do you have confidence in the companies to govern themselves?” they’re like, no — I think around 60% said no. Then you ask them, “What about government? What about the alternative?” and they’re like, “No, we also don’t like that.”
Eileen Yam: Right, right.
Rob Wiblin: We can appeal to the angels perhaps to come down and govern AI for us.
Eileen Yam: Yeah, exactly. And there was one quote from one of the experts in our sample, something to the effect of: “Listen to these congressional hearings: these legislators don’t know anything about this technology. How on Earth could we expect the government to regulate effectively?”
I think that sentiment is shared, and it’s one of the areas where both the insiders and the general public converge: on governance and regulation, neither industry nor government is seen as stepping up to do it effectively.
Is the survey methodology good? [01:05:26]
Rob Wiblin: Let’s talk a little bit about the methodology of the surveys and how reliable we should expect them to be. So there were three different waves of the surveys of the US general public, and it was 5,000 adults in each case, right?
Eileen Yam: About that. That’s right.
Rob Wiblin: So it’s a big sample. This is a big effort. I imagine this is quite expensive to do.
Eileen Yam: Yeah, it is.
Rob Wiblin: How do you feel about the representativeness of the sample? Obviously pollsters have a big difficulty in that some sorts of people are more likely to respond to polls and more willing to participate than others, and that’s something they have to try to offset in the weighting. Do you feel good about how well you can do that?
Eileen Yam: Yeah. So just to back up: for the surveys that you’re referring to of the general US adult population, those are about 5,000 adults. And the way they are selected is that we take a random sample of US residential addresses, and that gives nearly all US adults a chance of selection. So to the extent that we’re comfortable saying this is broadly representative of the entire adult population, that’s precisely because it’s a probability-based or randomised sample of adults in the country.
As far as reliability: one thing you look for when assessing the quality of a poll is whether the pollster has skin in the game. We don’t, and we’re not sponsored by someone who does — whether that’s industry or someone who’s more of an advocate or activist. So as far as the impartiality of the sampling goes, I feel like this is pretty gold standard, and that’s why I have confidence in reporting out the estimates that we do.
I think with this topic in particular, we have to strike a balance between being accessible enough for the general public to understand something as potentially highly technical as AI, and being specific enough that we’re actually getting a story — something that lets you tell some narrative about how people are feeling.
So one thing I’ll note, because I’ve been asked this a few times: “If it’s so technical and so tough to understand, how are you even defining it?” Before launching into questions about AI, our stock definition says: “Artificial intelligence is designed to learn tasks that humans typically do — for instance, recognising speech or pictures.”
So it’s a very high-level, succinct definition. It doesn’t get into specifics about generative AI versus predictive AI. I don’t know if you’ve ever taken a survey and seen this seven-line definition before you even get to the survey questions? That can be pretty cognitively onerous. So we want to be succinct, and specific enough that we think we’re tapping into the concept that we want to. So as far as how we approach the question, that’s how we define it.
And the other thing is, again, I want to make sure it’s accessible in the sense that it gives people an out to say “I don’t know” or “Not sure” for an item. If you don’t provide that option, where do the people go who are thinking, “I’ve never even heard of this thing — how do I even have an opinion?” So you give them the “Not sure” option and you phrase these questions in a way that’s broadly accessible.
Rob Wiblin: Yeah. There’s a reason I wanted to do this interview with you. There’s been a bunch of other polling — by newer organisations in many cases — but I think very often they have some sort of agenda. The result might be completely reasonable, but it’s a little hard to be sure, because they’re more interested in getting one answer than another.
Whereas I think Pew has no particular agenda here. And you have enormous depth of experience spending decades doing this kind of research and figuring out how to avoid the pitfalls that you might have with doing this sort of difficult survey work.
Eileen Yam: Yeah. So I’m trained as a health scientist, and I really try to approach these questions with impartiality. Sometimes we do polling on things that are incredibly polarising, whether it’s climate change or COVID vaccines, and we really approach those questions trying to be humble about what we don’t understand.
And it warms my heart when I see people like you saying, “Oh my gosh, I never thought that the general public was over here and experts are over there” — because I feel like that’s precisely the point: to do this without an agenda, really impartially, largely self-funded — our parent organisation, The Pew Charitable Trusts, is the primary funder of this work.
That’s why I approach this work with a good amount of confidence that we’re making a good-faith effort to tell the story of what’s going on at the national level among US adults.
Rob Wiblin: And how did you do the survey of 1,000 AI experts? How would you get 1,000 AI experts on the phone or willing to fill it out? They’re busy people, often.
Eileen Yam: Yeah, that’s a great question. There’s no such thing as a master list of people called “AI experts” — what survey researchers would call a “sampling frame” — that you can randomly sample from. No. So we had to decide how we were going to identify people we consider an AI expert: in other words, someone who really has expertise in a technical skill like machine learning; or in AI ethics and policy, which is more in the social science/ethical realm; or someone with a business interest in AI.
So we collated authors and presenters from about 20 different AI-focused conferences. Those are usually listed on the conference websites; you can basically scrape them. This is a big manual endeavour — which is kind of ironic for something on AI, where you feel like, is there someday going to be an easier way to do this?
We invited 1,000 of them — they had to be US-based — to participate in an online survey. There were about 21 conferences, and in this case, unlike with a probability-based general population survey, you can’t claim they’re representative of the whole panorama or universe of AI experts. But we got 1,000 people — and I’m unaware of a precedent for tapping into the opinions of 1,000 people who pretty much are specialists in AI in some way and have very informed opinions on these questions.
Rob Wiblin: One of your colleagues told me that the expert group maybe trended a little bit young, or towards more junior people, which is perhaps understandable: if you’re the CEO of a company, you don’t have time to fill out the survey. But if you’re a PhD student or a junior person, maybe you do.
Eileen Yam: Because a lot of graduate students attend these AI-focused conferences as authors and presenters, we did have a sample of experts that skews a bit younger: nearly eight in 10 were under the age of 45, and 40% were currently students — likely graduate students. So yes, because of the way we went about sampling…
And the other thing to keep in mind is this is not necessarily a field where there is a large share of, I don’t know, septuagenarians working in AI.
Rob Wiblin: There’s a few, but yeah, not so many.
Eileen Yam: Probably not as many as there might be 32-year-olds. So that’s the other thing where it kind of squares with what I might expect as far as skewing young like that and skewing more highly educated.
Where people are positive about AI: saving time, policing, and science [01:12:51]
Rob Wiblin: Let’s talk a bunch about the positive feelings that people have. We’ve dwelt a lot on the negatives, and I imagine people being a bit frustrated by that, but there are plenty of positive things that people had to say as well. Where were people most enthusiastic about the impact of AI?
Eileen Yam: Yeah, it is not all gloom and doom. Among the 25% of people who said they saw high benefits, the main reason cited was efficiency gains — freeing up time: 41% of those who saw high benefits mentioned that.
A quote that really just resonated with me was, this person said:
AI takes mundane tasks that often waste talent and effort and allows us to automate them. AI also allows us to access information in a more streamlined way and allows us to save something that we can never get back: time!
So that permeates throughout. And in our other survey questions, we’ve seen that too: about three-quarters are willing to let AI help with day-to-day tasks. There is an openness to how this can help make my life a little easier.
The other most commonly mentioned theme was about expanding human technological abilities. I think that one thing that you hear a lot about is in the healthcare realm. There was one respondent who said:
Use of AI could significantly speed diagnosis of medical issues. Now we rely on any given doctor’s ability to know about certain conditions. [This is] a real issue particularly in rural areas.
So in healthcare, and similarly in education, you do see people touting the prospect of AI levelling the playing field — whether in rural areas where you don’t have, say, psychiatrists; or in education, where for a person learning English as a second or third language, it could level the playing field in terms of translation, for example. Or it could be a 24-hour tutor for people.
So there’s this idea that certain realms of human and technological ability might be where AI can shine and actually really contribute and have a positive impact on society.
Rob Wiblin: Yeah, some people got quite poetic about it, actually. One of the quotes that stood out to me was:
AI has the potential to make society more efficient than ever. AI sort of transcends time and space in that it can be used to study the past, inform the present, and shape the future. It can be used by individuals, by corporations, by governments.
I think they’re right.
Eileen Yam: Yeah. They are not doomers, those people. They see some silver linings.
Rob Wiblin: Yeah. So:
- 74% of people said they’d be willing to let AI assist at least a little with day-to-day tasks and activities.
- 40% of workers who’ve used chatbots find them extremely or very helpful for speeding up work. That’s quite a lot of usefulness. And I think this was asked a while back — the models keep getting better, and they’re a lot better than they were a year ago — yet at least a substantial fraction were already finding them very helpful.
- And 29% of people said that they found it helpful for improving work quality.
It was interesting that more people found it useful for doing more and going faster than for improving quality, which I guess is a different dimension. It’s possible that as they get more reliable, more insightful, perhaps that will change, and the quality side will come along as well.
Eileen Yam: Right, exactly. And I think an overarching question also remains: it’s fine and well to acknowledge that AI might do this better and better, maybe even on par with humans — but are you OK with that? That’s the other question. It’s the ethical and values question: something can be very good at a task, but maybe you don’t want an algorithm doing it.
Rob Wiblin: Yeah. So the areas where people were happy [to use AI]:
- 70% of people were happy for AI to be used for searching for financial crimes.
- 70% also happy for it to be used for searching for fraud in government benefits.
- 61% happy for it to be used in identifying suspects in a crime. Again, there are sci-fi stories that cover this sort of thing — it reminds you of Minority Report a little bit — but people don’t have that fear that AI might miscategorise suspects of a crime.
Eileen Yam: Yeah. With identifying suspects in a crime, I think where your head goes — as an insider, again — is probably algorithmic bias and an understanding of training data. It’s a little bit of a Rorschach test in terms of how you’re reading that question. Because my head doesn’t necessarily go to algorithmic bias: in the grand scheme, you can imagine other kinds of data — maybe nothing to do with physical characteristics — feeding into, “Let’s home in on where this suspect might be, or where they may have left a digital trail.” So I didn’t necessarily zoom in on the facial recognition point that I think you’re homing in on.
Rob Wiblin: Yeah. Well, because I think that’s one of the few applications that has in some places just been banned outright, because it is something that more insiders are worried about. I think in some places in the EU, you can’t use facial recognition for policing purposes. People have been trying to ban it in particular cities and so on. But it’s not a super salient concern among the general public.
Eileen Yam: Yeah, you mean in the context of law enforcement? Exactly. I think that’s a bit of an insider conversation in general: the risk of bias in identification. So concern about law enforcement facial recognition being biased isn’t necessarily coming through in this particular question item — that’s where a lot of the arguably somewhat elite, insider conversation is happening.
Biggest gaps between experts and the general public, and where they agree [01:18:44]
Rob Wiblin: Let’s talk about the difference and the gap between the opinions of experts and the opinions of the general public. What were some of the biggest and most consequential differences of opinion that they had?
Eileen Yam: Overall, experts are much more optimistic about AI’s impact on many aspects of society:
- When it comes to jobs, 73% of experts see a positive impact on how people do their jobs compared to 23% of the general public.
- 69% of experts see a positive impact on the economy versus 21% of the general public.
- Then for productivity, 74% of experts say that AI will make people more productive compared to 17% of the general public.
There’s a big divergence across those three dimensions of jobs, economy, productivity. And in all three cases, the experts are much more bullish.
Rob Wiblin: Yeah, those are eye-watering gaps: 74% of experts think it’s extremely or very likely that AI will make humans more productive versus just 17% of the general public. I’m inclined to agree with the experts on this one, but it’s such a different picture that people have in their heads. I’m not sure what to make of it.
Eileen Yam: I think part of this is that experts perceive AI use among the general public to be much more prevalent than the general public perceives its own interaction with AI to be.
So if you’re an expert who believes most people are interacting with AI pretty much all the time, or several times a day, you assume they have a lot more data points to inform their opinion about how it’s affecting their lives. And with that big disconnect on productivity, there might be some element of experts just reflecting on AI in their own lives: “It’s making my life a whole lot more productive, as someone who’s steeped in this world and drinking and eating and sleeping it all the time.”
Rob Wiblin: Yeah. One that jumped out at me was that 51% of experts think they’ll one day trust AI with important decisions, versus 13% of the public. Presumably more than 13% of the public is comfortable with AI being involved in some way in advising decisions, but experts are so much more comfortable with the idea of full delegation to AI. Feeling comfortable with that is actually quite a niche feeling in the broader world.
Eileen Yam: Yeah, that’s right. And this question of full delegation versus assisting or serving as a tool — that’s the crux of the conversation in a lot of circles. Even among the general public, people might use an LLM to clean up a sentence they’re struggling to write: it’s not that they’re entirely offloading the writing task, but there’s some element of “just assist me” or “be a tool for me.”
Rob Wiblin: Here’s another big gap: 76% of experts think AI will benefit them personally, versus 24% of the general public. Maybe that makes sense, because people who work in the AI industry expect to personally profit in their careers; this is their industry, their sector. But it’s just a big gap in people’s comfort level about how this is going to affect them over coming years: people in the industry, who are already benefiting from the explosion of this technology, are going to be literally receiving personal benefit all the time — sometimes in the form of incredibly high salaries.
I think that could really drive them to be very out of touch. If you compare their experience with that of some future office worker who perceives, correctly or incorrectly, that they’ve lost their job to AI, imagine how different their sentiments are going to be when they’re speaking to their member of Congress or considering how to vote. I think it does create the potential for quite an elite-versus-non-elite, populist-flavoured gap here.
Eileen Yam: Yeah, that’s right. And I think that an undercurrent to a lot of these conversations is just about equity, disparities in access, disparities in AI literacy. The fact that there’s such a gap in experts’ perception and the general public’s perception, that’s precisely part of the reason why we wanted to do this: let’s illuminate where this elite discussion is way down the road, far further downstream than where the public conversation and consciousness is.
And these questions about the impact on jobs: when there’s something salient to respond to — “this is affecting my livelihood” — perhaps that’s when you might see the needle moving. And there is some room for these views to evolve over time — we do give people the option to say, “I just don’t really know yet,” or, “It could go either way” — but at this moment, the perception gap, and frankly the optimism of the experts compared to the general public, is really striking.
Rob Wiblin: You also asked about the probability of AI resulting in major harm. Here the gap wasn’t quite as large, but it was still meaningful:
- 20% of experts think AI is likely to cause major harm to humans versus 35% of the public.
- And 46% of experts say that major harm is unlikely versus 18% of the public. It’s kind of reversed.
Although a lot of people were uncertain about this one — a lot of “Not sure”s.
Eileen Yam: Yeah. And there, it’s interesting: in the open-ended responses, when people were asked what they saw as the risk, one thing that came up was core human abilities being weakened. So I could see that being part of this idea of major harm to humans.
Rob Wiblin: Yeah, I bet that something that’s going on here is that when experts hear “major harm,” they think of catastrophic loss of control, the kind of thing that I would think about, like AI gone rogue or catastrophic misuse. The public probably just considers degradation of the human experience — like loss of personal connection, like loss of job — they would think of that as major harm. And that’s probably the thing that’s most salient.
Eileen Yam: Yeah, I think that those are the main themes that came out of those open-ended responses: erosion of human abilities. Misinformation is another one. This lack of control theme also comes up quite a bit. I think perhaps you’re onto something about how if you, as an expert, think major harm is this stuff over here, the general public is not thinking of it quite like that. So that could certainly be underpinning some of this.
Rob Wiblin: Yeah. So two big areas where I’m personally enthusiastic to see AI applied — and maybe a little nervous that it might be regulated too far, even out of existence — are education and healthcare. Where did the experts and the general public stand on AI in those areas?
Eileen Yam: Yeah, you’re right. When I think about which industries or sectors AI really has the potential to disrupt, I think of healthcare, I think of education, I think of work. And for healthcare and education, we did ask both the experts and the general public how they feel about the impact on those two realms.
On medical care, 44% of the public see a positive impact. That’s an openness to AI that’s a little more elevated among the general public when it comes to medical care. However, it’s still 40 points lower than what you see among experts, 84% of whom say there’s going to be a positive impact on medical care.
For education, there’s also this gap: 61% of experts say they’ll see a positive impact on education; 24% of the general public would say the same. So even for medical care, where the public is a little bit more open, it’s still way, way behind where the experts are on their optimism there.
Rob Wiblin: Yeah. And earlier we talked about the opinions on mental health. So 36% of people said that AI should play no role at all in treatment of mental health, but 46% of people said that they support it at least playing some role in mental health. So it’s one where people are kind of on the fence. They can go both ways.
Eileen Yam: Yeah, that is interesting. And several people have homed in on that finding about mental health therapy in particular — especially in this world where conversations and headlines are increasingly about whether an AI companion can alleviate loneliness, or whether there’s a place for AI in providing therapy. So yeah, to the extent that we see an openness or a window to receiving AI in the healthcare and mental health realms, that’s where there’s a little more receptiveness in the general public.
Rob Wiblin: Yeah, I think you said that — I can’t remember what the questions were here — but that the public is happier with AI augmenting and assisting doctors and less comfortable with it replacing doctors. Is that right?
Eileen Yam: What I meant to say is that’s where the conversation is. That’s a little bit of a tough question to ask the general public — is this a tool assisting, or is it…? It depends on where they’re at with AI in general, and on their understanding of it. But no, what I meant to convey is that there is a conversation about AI as an assistant or a tool — and maybe that’s where we should land, as opposed to wholesale supplanting humans or offloading certain tasks or capabilities to AI.
Rob Wiblin: What are some questions on which the public and experts tended to agree? I think we talked about one of them, which is that both groups think that the government probably won’t regulate AI enough. And I guess both of them are somewhat concerned about misinformation, inaccurate information coming from AI. Are there other notable ones?
Eileen Yam: In terms of arenas or societal realms: politics and journalism. That’s where they both share the view that AI doesn’t really have a place to make a positive impact. So there’s the concern about not having enough regulation by either government or industry, this sense of wanting more control over how AI is used in their lives, and then lastly this wariness about AI in politics and journalism. That’s where they somewhat aligned.
Demographic groups agree to a surprising degree [01:28:58]
Rob Wiblin: Let’s talk about differences of opinion in AI broken down by different demographic groups. How differently did men and women feel about AI?
Eileen Yam: Yeah, across all of our studies on AI, we see a similar gender gap where overall women are more wary than men. I’ll give you two examples:
- Among the general public, 31% of men, compared to 18% of women, expect to personally benefit from AI.
- And then among AI experts, 63% of men see a positive future impact of AI compared to 36% among women.
So those are certainly notable differences.
And I think that one thing that was a parallel finding in some ways, or at least a potentially contextual finding, is that in the same survey of experts — I’ll read the question verbatim because I want to make sure I accurately portray it — we asked if they think that “the people who design AI programs take the experiences and views of the following groups into account.”
- When we asked experts about whether men’s views and experiences were taken into account adequately, 75% said yes — very or somewhat well.
- And when we asked the same about women, that drops to 44% say women’s perspectives and views are taken into account.
So right there, even among experts, there is this sense that men are simply more represented in this realm. And that dovetails with sentiments in the general public about gender representation in STEM: men are overrepresented in a lot of the disciplines of AI and STEM writ large. So in some ways, the gender divides you see among the experts essentially reflect what we also see in the general public.
Rob Wiblin: Yeah, it reminded me, I think I’ve seen breakdowns of differences of opinion by gender on a bunch of different kinds of current policy questions. Maybe the area with the biggest current gender gap is nuclear power, of all things: men for some reason are just much more enthusiastic about scaling up atomic energy than women are. I wonder if there’s something about the sci-fi aspect of nuclear power, and the sci-fi aspect of AI, such that for whatever reason men just feel more comfortable with, or more excited about, these slightly edgy new things.
Eileen Yam: I don’t know. I feel like there’s something about men and science and overrepresentation — or rather, the underrepresentation of people who are not men — that’s also feeding into this conversation about the optics. People who are, in sheer numbers, more steeped in this world are more receptive, more enthusiastic, or at least less concerned than people who are less engaged in it.
And it bears noting that this gender gap is actually more pronounced among the expert community than it is among the general public. So this gender gap, where women are less enthusiastic, we see that that gap is even bigger among the experts compared to the general public.
Rob Wiblin: Yeah, it’s moderate among the general public and then really quite large among experts, basically.
Eileen Yam: Yes, exactly.
Rob Wiblin: How about differences of opinion by age? I think young people are more likely to have used AI. People in the 20-to-29 group are significantly more likely to be regularly using AI than people who are over 65. But does that make them more or less positive about it?
Eileen Yam: The conventional wisdom might suggest that Grandma and Grandpa are the Luddites; they’re the ones who are going to be just resistant and not going to get it.
So when it comes to perceptions of how AI might be affecting human qualities, yes, younger adults are more likely to have heard about AI, interact with it — but they’re more concerned about this erosion of human skills: 61% of adults under 30 say that AI is going to worsen creative thinking, and that compares to 42% among those 65+. So that’s nearly a 20-point gap between the youngest and the oldest as far as effect on creative thinking, and the younger ones are more gloomy about it.
Similarly with AI’s effect on forming meaningful relationships: 58% of the adults under 30 say it’s going to worsen that skill, compared to 40% among people 65+. Again, yes, the younger people are more exposed, hearing about it more, interacting more — but they’re also more concerned about the effect on human skills.
Rob Wiblin: Yeah, there’s a sense in which that makes sense, that perhaps if you just have never used AI, you wouldn’t think about how might it affect your relationships or how might it affect your creativity, or how might it affect your decision-making capabilities? It’s less obvious to you. But if I was running an AI company, I would be a little bit nervous about this, because the groups that are using it the most are developing more concerns. It’s not as if they were concerned before they tried it, and now that they’ve tried it, now they feel good. Exposure potentially is creating greater anxiety.
It is also true, though, that younger people — I guess having been exposed more — were more likely to have positive views as well; they were less likely to be unsure or on the fence. I think 14% of young adults were more excited than concerned — a surprisingly low number in a way — but only 4% of people over 65 were. And I think part of what’s going on there is that people over 65 are just less likely to have decided; they’re more likely to be neither excited nor concerned.
Eileen Yam: That’s exactly right. There are larger shares of older people who will say, “I’m not sure.”
Rob Wiblin: Many things are super polarised along political lines in the United States these days. Is AI one of them?
Eileen Yam: AI, for now, seems to be a bit above the fray. Both Republicans (and Republican leaners) and Democrats (and Democratic leaners) are sceptical of government or industry regulating AI effectively. The one partisan difference is in the degree of confidence: 70% of Republicans have not much or no confidence in the government to regulate AI effectively, and that drops to 54% of Democrats.
Rob Wiblin: But otherwise, Republican/Democrat, there weren’t big differences in use, and there weren’t big differences of opinion across many other issues. To me that’s a hopeful sign. People haven’t yet got fixed opinions; they’re not being super ideological about it. It’s just about your individual impressions, your individual taste, your personal experiences. That suggests there’s maybe quite a lot of pragmatism.
Eileen Yam: At least for right now. It’s important to note that AI is, in the grand scheme, new to the policy realm and policy debate. So yes, for now, that’s where we’re at: there’s some degree of shared sentiment about the need for regulation, but scepticism about both government and corporations doing it effectively. It’s not super polarised. It hasn’t become a political football the way you see in other areas, no.
Rob Wiblin: Were there any other interesting differences of opinion, differences of feeling across demographic groups? Like racial groups, education level?
Eileen Yam: For education level, one thing I noted was that the education gap in concern about AI actually doesn’t hit you over the head: 43% of postgraduates said, “I’m more concerned than excited,” compared to 54% of those with a high school degree or less. So yes, there’s a difference, but again, it doesn’t hit you over the head.
Rob Wiblin: And were there significant racial differences?
Eileen Yam: The one notable difference we see across racial and ethnic groups is that Asian Americans — who in our sample are exclusively English speaking — are overall more receptive, more enthusiastic, and more open to AI having a positive impact on society and on their lives, compared to the other racial and ethnic groups. That holds both among experts and in the general public. Among the other racial and ethnic groups, there really wasn’t a clear difference or storyline.
Eileen’s favourite bits of the survey and what Pew will ask next [01:37:29]
Rob Wiblin: Well, we’ve reached the end of the questions that I had. Were there any striking or interesting or fascinating or confusing results that stood out in the survey that you’d like to mention before we finish up?
Eileen Yam: Yeah, I had a favourite question series in this survey: we presented US adults in the general population with seven different scenarios where they learned after the fact that a certain task or piece of content was created by AI rather than a human.
So to give you an example, we had asked, “If you heard a song that you liked and then afterwards learned it had been created by AI, how would it change your opinion?” We gave the option of saying they’d have a more negative opinion, a more positive opinion, or it wouldn’t change. So for a song that you liked, when they learned it was created by AI, 58% said it wouldn’t change their opinion, 38% said they’d walk away with a more negative opinion, and just 3% said they would actually like it more.
So that middle, neutral option… Maybe I run in a crowd that’s a little rarefied about “music is squarely a human capability,” but a lot of the general public didn’t seem particularly bothered by that news.
And where they were most negative was a scenario where they liked a candidate’s speech, then learned after the fact that AI was involved in writing it. There, 71% said it would change their view for the negative, 3% said positive, and 25% said it wouldn’t change their view either way. Again, we’re coming back to a finding from our past work: when it comes to politics, the public generally feels that’s just a no-go area — AI doesn’t have a place there. And a candidate’s speech, I’d consider that in that realm.
So yeah, that was a fun set of findings. On the song, I wondered whether my own reaction is just a product of spending a bunch of my childhood toiling away learning an instrument. I feel like, oh my goodness, could a computer just do this as well as I could? So yeah, that popped out at me.
Rob Wiblin: Yeah. I’m sure a lot of staffers are using ChatGPT to help them write politicians’ speeches, but I guess they’ve just got to do a good job of covering their tracks. So they’ve got to find any instances of the word “delve” in there and then take them out.
Eileen Yam: And musicians, maybe they have licence. Just go ahead. Have at it.
Rob Wiblin: Yeah, have at it. I think I’m going to really look forward to future reports on this. I’m going to be on Twitter being like, “New Pew Research on AI just dropped!” for the next couple of years.
Eileen Yam: Does Twitter still exist?
Rob Wiblin: I’m just a conscientious objector. I’m going to call it Twitter forever.
Eileen Yam: Can’t let go.
Rob Wiblin: Yeah. What are the questions that you’re interested to ask in future surveys? What sort of uncertainties do you have that you think you might be able to resolve in future reports?
Eileen Yam: Well, one thing — and this is kind of a trademark of the polling we do — is that we want to continue to track the things we’ve started tracking on adoption, usage, and attitudes, and trend those over time. So that’s something we’re already doing and will continue to do.
Where I see a lot of room for public sentiment to evolve very quickly — probably not entirely linearly — is where AI is most likely to be a disruptor at this moment: work, healthcare, and education, those three domains. And when I say “work,” I think of people currently in the workplace. Take healthcare: doctors, but also the pipeline — people training to be doctors. Are they choosing different professions now? What’s happening in these different realms for the pipeline of workers?
The third area where we’re really interested in keeping our finger on the pulse is where the public is at when it comes to policy, regulation, and guardrails. As the conversation becomes more salient to them — the pros and cons, the risks and benefits — where does public sentiment move on things like guardrails, for example?
And then the last is this question around ethics and values. It comes down to a question for me of, even as AI gets better at doing things, and maybe even better than humans at doing things, does the public actually think it’s OK for AI to partially or completely supplant humans in that area? That’s something that we started to delve into a little bit in this latest study of, are there certain areas of society where right now the public says, “Absolutely not”? Or, “Yeah, actually, I’m OK with that”? And how does that move over time?
Rob Wiblin: I guess because I’m so focused on trying to forecast which direction public opinion will go, something I’d really love to see more of, if we possibly could, is tracking the same people over time. As they get more exposure to AI — in their personal life or in the workplace, or as they go from using it occasionally to weekly to daily — are they trending towards becoming more comfortable, or are they becoming more anxious and concerned? That would be a good leading indicator of where the broader public might go as other people begin to use it more as well.
Eileen Yam: Right. And a related question: as you understand it more, does it become clearer to you where the pitfalls might be? It might work in the opposite direction — “OK, I’m learning more and more, and I’m kind of freaking out the more I learn.” It could go in either direction, and I think it’s a really interesting thing to keep an eye on.
Rob Wiblin: Yeah. Well, thanks so much for coming on the show. Your knowledge of these reports is impressive. I don’t know how many numbers we’ve used in this, but this is definitely the most numbers I think that we’ve used in any episode of the show.
Eileen Yam: I appreciate you. This was like a cram session for an exam the next day. So I’m glad — it was fun.
Rob Wiblin: My guest today has been Eileen Yam. Thanks so much for coming on The 80,000 Hours Podcast, Eileen.
Eileen Yam: Thank you.