
We tend to think of deciding whether to commit to a partner, or where to go out for dinner, as uniquely and innately human problems. The message of the book is simply: they are not. In fact they correspond – really precisely in some cases – to some of the fundamental problems of computer science.

Brian Christian

Ever felt that you were so busy you spent all your time paralysed trying to figure out where to start, and couldn’t get much done? Computer scientists have a term for this – thrashing – and it’s a common reason our computers freeze up. The solution, for people as well as laptops, is to ‘work dumber’: pick something at random and finish it, without wasting time thinking about the bigger picture.

Ever wonder why people reply more if you ask them for a meeting at 2pm on Tuesday than if you offer to talk at whatever happens to be the most convenient time in the next month? The former requires a two-second check of the calendar; the latter implicitly asks them to solve a vexing optimisation problem.

What about estimating the probability of something you can’t model, and which has never happened before? Math has got your back: the likelihood is no higher than 1 in N+1, where N is the number of times it hasn’t happened. So if 5 people have tried a new drug and survived, the chance of the next one dying is at most 1 in 6.
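
This rule of thumb is in the spirit of Laplace’s rule of succession, which the book discusses: after s successes in n trials, estimate the probability of success next time as (s+1)/(n+2). Here is a minimal sketch of the drug example under that assumption (the function name is just for illustration):

```python
def laplace_estimate(successes: int, trials: int) -> float:
    """Laplace's rule of succession: estimated probability that the next trial succeeds."""
    return (successes + 1) / (trials + 2)

# 5 people have tried the drug and all survived:
p_survive = laplace_estimate(successes=5, trials=5)  # 6/7 ≈ 0.857
p_die = 1 - p_survive                                # 1/7 ≈ 0.143, within the 'at most 1 in 6' bound above
print(f"P(next survives) ≈ {p_survive:.3f}, P(next dies) ≈ {p_die:.3f}")
```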

Bestselling author Brian Christian studied computer science, and in the book Algorithms to Live By he’s out to find the lessons it can offer for a better life. In addition to the above he looks into when to quit your job, when to marry, the best way to sell your house, how long to spend on a difficult decision, and how much randomness to inject into your life.

In each case computer science gives us a theoretically optimal solution. In this episode we think hard about whether its models match our reality.

One genre of problems Brian explores in his book is ‘optimal stopping problems’, the canonical example of which is ‘the secretary problem’. Imagine you’re hiring a secretary: you receive n applicants, they show up in a random order, and you interview them one after another. After each interview you either have to hire that person on the spot and dismiss everybody else, or send them away and lose the option to hire them in future.

It turns out most of life can be viewed this way – a series of unique opportunities you pass by that will never be available in exactly the same way again.

So how do you attempt to hire the very best candidate in the pool? There’s a risk that you stop before you see the best, and a risk that you set your standards too high and let the best candidate pass you by.

Mathematicians of the mid-twentieth century produced the elegant solution: spend exactly one over e, or approximately 37% of your search, just establishing a baseline without hiring anyone, no matter how promising they seem. Then immediately hire the next person who’s better than anyone you’ve seen so far.

It turns out that your odds of success under this strategy are also about 37%. And both the optimal cutoff and the odds of success are essentially unchanged regardless of the size of the pool. So as n goes to infinity you still want to follow this 37% rule, and you still have a 37% chance of success. Even if you interview a million people.

But if you have the option to go back, say by apologising to the first applicant and begging them to come work with you, and you have a 50% chance of your apology being accepted, then the optimal explore percentage rises all the way to 61%.
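
To make the 37% rule concrete, here’s a minimal simulation sketch (ours, not from the book): look at the first 1/e of applicants without hiring, then take the first one who beats that baseline, and check how often that lands the single best candidate.

```python
import math
import random

def run_trial(n: int, look_frac: float = 1 / math.e) -> bool:
    """One round of the secretary problem: True if the rule hires the single best applicant."""
    applicants = list(range(n))        # higher number = better applicant; the best is n - 1
    random.shuffle(applicants)         # applicants arrive in random order
    cutoff = int(n * look_frac)        # 'look' phase: observe without hiring
    baseline = max(applicants[:cutoff]) if cutoff > 0 else -1
    for score in applicants[cutoff:]:
        if score > baseline:           # first applicant to beat the baseline gets hired
            return score == n - 1
    return applicants[-1] == n - 1     # nobody beat the baseline: stuck with the last applicant

def success_rate(n: int = 100, trials: int = 20_000) -> float:
    return sum(run_trial(n) for _ in range(trials)) / trials

print(f"Success rate with the 37% rule: {success_rate():.2f}")  # ≈ 0.37
```

Nudging the cutoff fraction above or below 1/e pushes the simulated success rate below roughly 37%, which is the sense in which the rule is optimal.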

Today’s episode focuses on Brian’s book-length exploration of how insights from computer algorithms can and can’t be applied to our everyday lives. We cover:

  • Is it really important that people know these different models and try to apply them?
  • What’s it like being a human confederate in the Turing test competition? What can you do to seem incredibly human?
  • Is trying to detect fake social media accounts a losing battle?
  • The canonical explore/exploit problem in computer science: the multi-armed bandit
  • How can we characterise a computational model of what people are actually doing, and is there a rigorous way to analyse just how good their instincts actually are?
  • What’s the value of cardinal information above and beyond ordinal information?
  • What’s the optimal way to buy or sell a house?
  • Why is information economics so important?
  • The martyrdom of being a music critic
  • ‘Simulated annealing’, and the best practices in optimisation
  • What kinds of decisions should people randomise more in life?
  • Is the world more static than it used to be?
  • How much time should we spend on prioritisation? When does the best solution require less precision?
  • How do you predict the duration of something when you don’t even know the scale of how long it’s going to last?
  • How many heists should you go on if you have a certain fixed probability of getting arrested and having all of your assets seized?
  • Are pro and con lists valuable?
  • Computational kindness, and the best way to schedule meetings
  • How should we approach a world of immense political polarisation?
  • How would this conversation have changed if there wasn’t an audience?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Highlights

There is a set of problems that all of us face in everyday life, whether it’s finding a place to live or deciding whether to commit to a partner or deciding where to go out for dinner or how to rearrange your messy office or how to schedule your time. These often emerge as a function of limited time and limited information. We tend to think of them as kind of uniquely and innately human problems. The message of the book is simply, they are not. In fact they correspond, really precisely in some cases, to some of the fundamental problems of computer science. So, I think this gives us an opportunity—having made that identification of the underlying computational structure of human life—to really learn something by studying the nature of those problems and their optimal solutions. I think that gives us payoffs at, I would say, maybe three different scales. At one level, computer science can in some cases give you just very explicit advice: do this, and it will succeed this amount of the time. In other cases, a parallel may hold more loosely but it still gives you an understanding of the structure of the problem, the structure of what optimal solutions look like, and a vocabulary for understanding the parameters of that space.

I think most broadly, it’s a way to think about the nature of human rationality itself: that the problems the world poses to us are computational in nature, and this makes computers not only our tools but, in some sense, our comrades. We are confronting a lot of the same issues. And computer science paints, I think, a very different picture of what rational decision making looks like than you might find in, say, behavioral economics. Because one of the first things that any computer scientist takes into account is computational complexity. Once you incorporate the cost of thought itself, I think you end up with a picture of rational decision making, particularly in some of the hardest classes of problems, that looks a lot more familiar and a lot more human.

So, I think it’s a more approachable and a more recognizable version—or vision, I should say—of what human rationality should be.

Imagine building a parking garage: if you start the parking garage at the best spots, and you slowly spiral out to worse and worse spots, that’s a computationally kind architecture. Because the optimal stopping rule is dead simple: if you see an open spot, it’s by definition the best spot that you’ve encountered so far. Take it, you’re done. If you build the parking garage in the opposite direction, where you enter at, let’s say, the back, and you’re slowly spiraling towards the place where you want to go, then you find yourself in this dilemma where you have to kind of crunch the numbers and figure it out. It’s just a small example, but it shows that the problems we face are not all intrinsically posed to us by nature, by the world. Many of them, an increasing number of them, are designed by somebody else.

If you have a series of tasks, each of which is going to take a certain amount of time, you only have one machine to do them on, and you want to optimize for what’s called the makespan, which is the total amount of time it will take you to do everything, well, it just so happens that the order doesn’t matter at all.

You simply have a certain finite amount of work, a certain amount of time. And so, if you find yourself in a position where you’re optimizing for the makespan, so your goal is to reduce the total amount of time you spend working, and you can’t delegate, it’s just you who’s going to do the work, and it’s all more or less equally important, then the worst thing you can do is spend any time thinking about the prioritization. You should just begin randomly.

It’s a total waste of your energy. There’s an anecdote in the book that we tell about the Linux operating system. Every operating system has what’s called a scheduler, which performs exactly this function for the CPU: how many microseconds to be working on this particular thread, when to switch, what to switch to, how to stack-rank the different priorities that the system has, and how much time to give each of them. In a sense, you can think of this meta process of doing the sorting and the prioritization as directly competing against doing the work. And so, this is one of these cases where it turns out that the best solution might be to be more imprecise. We follow the evolution of the Linux kernel through the 2000s. I want to say it was 2003, they replaced the scheduler with one that was less accurate about prioritizing the different tasks on the system, but more than made up for it by just spending all of that time doing more stuff. I found that a very consoling message.
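
As a toy illustration of the single-machine point above (our sketch, not code from the book): when the objective is makespan and there’s only one machine, every ordering of the tasks finishes at exactly the same time, so any effort spent sequencing them is pure overhead.

```python
import random

tasks = [3, 1, 4, 1, 5, 9, 2, 6]     # hypothetical task durations, in hours

def makespan(order):
    """Run the tasks back-to-back on a single machine and return when the last one finishes."""
    clock = 0
    for duration in order:
        clock += duration            # one machine: each task starts when the previous one ends
    return clock

for _ in range(3):
    random.shuffle(tasks)            # pick an arbitrary order
    print(tasks, "-> makespan:", makespan(tasks), "hours")
# Every ordering finishes at hour 31: under this objective, prioritising buys you nothing.
```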


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].
