In some real sense, computers are like brains. They take information in, process it, and try to make sense of it. A key difference is that, with computers, we can explicitly lay out all of the rules for processing that information; with brains, the rules are already there, and we can only try to figure out what they are. The central thesis of Brian Christian and Tom Griffiths’ book Algorithms to Live By: The Computer Science of Human Decisions is that, by looking at how computers can be programmed to solve problems, and at which kinds of problems are easy and which are hard, we can learn something about how brains do the same.
Christian and Griffiths go systematically through a series of problem types that are central to computer science and applied math and describe how insights into those problems illuminate how brains handle information. One of their first examples concerns decision making. Say you have to make a choice from a pool of options: whom to marry, which house to buy, which secretary to hire. The basic conundrum is this: you want enough data to make an intelligent choice, to know your pick really is a good one compared with the alternatives, but the more information you gather and the longer you wait, the more likely it is that the best option has already come and gone. So you need to wait long enough to judge the quality of the pool and of each candidate relative to it, yet not so long that you miss the best one. Applied math has solved versions of this problem, a class known as “optimal stopping” problems. When candidates appear one at a time, in random order, and each accept-or-reject decision is final, the optimal stopping point turns out to be 37%. That is, you should use the first 37% of your options purely to build your knowledge of the pool, choosing none of them, and then commit to the very first candidate after that who is better than everyone in the first 37%. This maximizes your chance of ending up with the very best candidate. The rule offers no guarantee (in fact, it succeeds only about 37% of the time), but no other strategy gives you a better chance of getting the best.
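To make the 37% rule concrete, here is a minimal simulation sketch in Python. The function names and the convention of ranking candidates from worst to best are my own illustrative assumptions, not anything from the book: the code looks at the first 37% of a shuffled pool without choosing anyone, then commits to the first candidate who beats everyone seen so far, and estimates how often that lands the single best candidate.

```python
import random

def secretary_trial(n, cutoff_fraction=0.37):
    """Run one round of the 37% rule on a randomly ordered pool of n candidates.

    Candidates are ranked 0..n-1, with n-1 being the single best.
    Returns True if the rule ends up selecting the best candidate.
    """
    candidates = list(range(n))
    random.shuffle(candidates)

    cutoff = int(n * cutoff_fraction)
    best_seen = max(candidates[:cutoff], default=-1)  # look phase: observe, choose no one

    for rank in candidates[cutoff:]:                  # leap phase: take the first improvement
        if rank > best_seen:
            return rank == n - 1
    return candidates[-1] == n - 1                    # never leapt: stuck with the last candidate

def success_rate(n=100, trials=100_000):
    """Estimate how often the 37% rule lands the very best candidate."""
    wins = sum(secretary_trial(n) for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    print(f"Chance of picking the best of 100: {success_rate():.3f}")  # comes out near 0.37
```

Run repeatedly, the estimate hovers around 0.37, the same number as the cutoff itself; both are roughly 1/e, which is part of the charm of the result.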
This is just one example that Christian and Griffiths use to draw analogies between computer science and human thinking. They delve into a variety of topics:
- Exploring versus Exploiting. Related to optimal stopping, this is the problem of relying on something you already know well versus trying out something new, such as a restaurant; a simple strategy for this trade-off is sketched after this list.
- Sorting. If you have a large amount of information, what is the best way to sort through it all?
- Caching. If you have more information than you can keep at your fingertips, how do you decide what to keep close at hand so that the information you need now is there when you need it?
- Scheduling. If you have a full to-do list, how do you decide the best order to work through it? Do you want to keep the list as short as possible? Do you want to minimize how long others have to wait for you?
- Bayes’s Rule. How do you use what you know now to make estimates about what will happen next?
- Overfitting. What are the dangers of overthinking a problem?
- Relaxation. Given a problem too hard to solve exactly, how do you even begin? Can loosening the problem’s constraints get you to a good-enough answer?
- Randomness. When a problem involves so much data that you can’t look at all of it, how can sampling at random tell you what it says? Think of polling.
- Networking. In a large, interconnected world, how do you share information with everyone else?
- Game Theory. How do we make choices when the outcome depends not just on what we choose but on what other people choose too?
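As a concrete illustration of the first item, exploring versus exploiting, here is a minimal epsilon-greedy sketch in Python. Epsilon-greedy is one of the simplest strategies for this trade-off, chosen here for brevity rather than because it is the approach the book emphasizes, and the restaurant framing and names are illustrative assumptions of mine: most nights you return to your current favorite, but some fixed fraction of the time you try somewhere at random.

```python
import random

def choose_restaurant(average_ratings, epsilon=0.1):
    """Exploit the current favorite most of the time; explore at random with probability epsilon."""
    if random.random() < epsilon:
        return random.randrange(len(average_ratings))        # explore: try any restaurant
    return max(range(len(average_ratings)),
               key=lambda i: average_ratings[i])              # exploit: best average so far

def record_visit(average_ratings, visit_counts, choice, rating):
    """Fold tonight's rating into the running average for the chosen restaurant."""
    visit_counts[choice] += 1
    average_ratings[choice] += (rating - average_ratings[choice]) / visit_counts[choice]
```

Dialing epsilon up means more nights spent experimenting; dialing it down means more nights at the known favorite. That single knob is the explore/exploit tension in miniature.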
All of these topics have direct relevance to how we program computers to work for us on hard problems that machines handle better than we do, but they also give insight into how we can organize our own thinking and data processing. With the internet, 24-hour cable news, and an ever-growing media presence, the amount of information we are bombarded with keeps increasing. Our lives get busier as we juggle work, our child’s soccer schedule, the maintenance we have to do on our house, and our social lives. Much of what we do is process information and try to make some sense of it. Computer algorithms rarely provide silver bullets; indeed, some problems cannot be solved exactly in any reasonable amount of time. But they do give us some insight into how to think about certain types of problems.
Algorithms to Live By provides an approachable introduction to some of the central problems of computer science. And if the problems Christian and Griffiths describe offer some insight into how our own brains work, the connection runs both ways: by linking computers to us, they also make the problems of computer science more relatable. In other words, the book offers an accessible pathway into computer science and how some of its biggest problems are solved. Given the ubiquity of computers in our lives, it certainly doesn’t hurt to know more about how those machines work.