Berkeley Talks: How do we make better decisions?
December 13, 2024
Follow Berkeley Talks, a Berkeley News podcast that features lectures and conversations at UC Berkeley. See all Berkeley Talks.
In Berkeley Talks episode 215, a cross-disciplinary panel of UC Berkeley professors, with expertise ranging from political science to philosophy, discusses how decision-making looks from their respective fields, and how we can use these approaches to make better, more informed choices.
Panelists include:
- Wes Holliday, professor of philosophy. Holliday studies group decision-making, including the best methods of voting, especially in the democratic context.
- Marika Landau-Wells, assistant professor of political science. Landau-Wells studies the effect that threat perception has on national security decision-making, and how some decisions we make to protect ourselves can endanger many others.
- Saul Perlmutter, Franklin W. and Karen Weber Dabby Professor of Physics and 2011 Nobel laureate. Perlmutter co-teaches a Big Ideas course, called Sense and Sensibility and Science, designed to equip students with basic tools to be better thinkers by exploring key aspects of scientific thinking.
- Linda Wilbrecht, professor of neuroscience and psychology. An adolescent scientist, Wilbrecht studies how adolescent learning and decision-making change from ages 8 to 18, and how they compare to those of adults and children.
- Jennifer Johnson-Hanks, executive dean of the College of Letters and Science (moderator).
The campus event was held on Oct. 9 as part of the College of Letters and Science’s Salon Series, which brings together faculty and students from a swath of disciplines to interrogate and explore universal questions or ideas from disparate perspectives.
(Music: “Silver Lanyard” by Blue Dot Sessions)
Anne Brice (intro): This is Berkeley Talks, a UC Berkeley News podcast from the Office of Communications and Public Affairs that features lectures and conversations at UC Berkeley. You can follow Berkeley Talks wherever you listen to your podcasts. New episodes come out every other Friday.
Also, we have another show, Berkeley Voices. This season on the podcast, we’re exploring the theme of transformation. In eight episodes, we’re looking at how transformation — of ideas, of research, of perspective — shows up in the work that happens every day at Berkeley. You can find all of our podcast episodes on UC Berkeley News at news.berkeley.edu/podcasts.
(Music fades out)
Jenna Johnson-Hanks: Thank you all so much for being here. We’re just thrilled to have you. I’m Jenna Johnson-Hanks. I’m the executive dean of the College of Letters and Science, and on behalf of the college, I really want to warmly welcome you and thank you for attending tonight.
The specific topic of tonight’s conversation is decision-making. And sometimes it feels like decision-making is basically all we do. It’s all of life. There are little decisions, and big ones, and there are ones that we think hard about, and wonder this or that, and ponder. And there are decisions that we make as if by rote, ones that we make collectively, whether more by consensus or by vote. So we’re going to talk today about a whole range of kinds of decisions, and we’re going to think about them across a very wide range of disciplinary perspectives. Again, one of the great strengths of the college is this great breadth.
There are many different kinds of distinctions we could make, but I want you to especially listen for and think about the distinction between how we study decision-making as an empirical object, where we observe how different people or different groups might make decisions in different empirical contexts, and the other perspective, a normative one: how we should make decisions. What is an honorable decision, or a just one, or an empirically founded one? And you're going to hear both of those, individually and then in conversation, today. So I'm going to introduce the panel, and then they're each going to speak briefly from their own perspective, and then we're going to have time for conversation back and forth, including bringing you into the conversation.
So our first speaker tonight is Linda Wilbrecht, a professor of psychology and neuroscience, in our brand new Department of Neuroscience. Dr. Wilbrecht works on adolescent brain development, which I rely upon very often when I’m trying to understand our students. Her research interests are wide-ranging, including experience-based plasticity, and the development of the circuits necessary for learning and decision-making. Her degrees are from the University of Minnesota, University of Oxford and Rockefeller University.
Marika Landau-Wells is assistant professor of political science. She holds degrees from Harvard, the London School of Economics and MIT. Her research concerns how cognitive processes affect political decision-making, political preferences and political behavior, and how … Someone saying yes. Yes, we're looking forward to this conversation. And the neural and psychological underpinnings, particularly the neural reactions to threat and danger, and how those influence people's preferences around policy.
Wes Holliday is professor of philosophy, and chair of the Group in Logic and the Methodology of Science. A degree from that other school down the street. And I know you have some allies here, right? He specializes in logic and social choice theory, and has published in a wide range of journals spanning logic, philosophy, mathematics, economics and political science. When you are a philosopher, your work is relevant to a very wide range of disciplinary approaches. He's also the co-creator of the Stable Voting website and the Preferential Voting Tools library. Again, we're looking forward to that conversation.
And our last speaker tonight is Saul Perlmutter, the Franklin W. and Karen Weber Dabby Professor of Physics, and a 2011 Nobel laureate, who shared the Nobel Prize in physics for the discovery of the accelerating expansion of the universe. He's a leader of the international Supernova Cosmology Project, and the … Executive director? Faculty director of the Berkeley Center for Cosmological Physics. Astonishing work. And he's here today because he is also very well known for the work he's done over the past decade in bringing together faculty and students from across a very wide range of disciplines to work on the project of Sense and Sensibility and Science. Like, how do we use tools and approaches from across the entirety of the College of Letters and Science to make better decisions? And to teach students how to do that. He holds an undergraduate degree from Harvard, and a Ph.D. from Berkeley. So with that, I will hand the floor to Linda.
Linda Wilbrecht: Thank you so much, Jenna. And hello, everyone. Thank you for being here. I am, as Jenna said, an adolescent scientist. I'm interested in adolescence, which we define as from age 10 to 25. And I have my 10-year-old here, so all my theory will be tested by my own children. And I work in animals and in people: We study mice in the laboratory, and we also study 8- to 18-year-olds from the local community.
And a little less than 10 years ago, I set out on a project with my colleagues, Anne Collins in psychology and Ron Dahl in public health, to try to understand how adolescent learning and decision-making change from 8 to 18. And we have some very controlled laboratory tasks, and some very beautiful computational models, and we thought it would be fairly simple to understand how adolescents, making decisions and learning from recent feedback while playing little games on a laptop, integrate positive information and negative information. And we thought that we would be able to chart maybe an increasing sensitivity to positive information, maybe a sensation-seeking peak in mid-adolescence, which is a well-described phenomenon. We thought we might see changes in the integration of negative information, maybe more and more of that negative information informing the decision-making as you got older.
And the reason we do experiments is that what we believe from our armchair, of course, isn't always correct. And with this idea that we were going to study adolescent decision-making and find something simple, maybe in hindsight we were set up to be surprised. What we found was that there was no single answer in our laboratory experiments: The adolescents were integrating both negative and positive information to make their decisions, with some very stark differences, but it changed not only with age, but with the task that we were giving them. The same person doing three tasks on the same day integrated the information very differently, depending upon the task that they were performing.
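(For readers who want to see what "integrating positive and negative information" looks like in a computational model: work in this literature often fits reinforcement-learning models with separate learning rates for positive and negative feedback. The sketch below is a minimal illustration under assumed values; the task, model form and parameters are invented for this example, not taken from the lab's actual code.)

```python
import math
import random

# Illustrative two-armed bandit: option 0 pays off 70% of the time,
# option 1 pays off 30% of the time (a "70% reward" uncertainty condition).
REWARD_PROBS = [0.7, 0.3]

# Separate learning rates for positive and negative feedback. How these two
# weights differ across ages and tasks is the kind of question described
# above; the values here are arbitrary.
ALPHA_POS = 0.4   # weight on better-than-expected outcomes
ALPHA_NEG = 0.1   # weight on worse-than-expected outcomes
BETA = 5.0        # softmax inverse temperature (choice determinism)

def softmax_choice(values):
    """Choose an option with probability increasing in its learned value."""
    weights = [math.exp(BETA * v) for v in values]
    r = random.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(values) - 1

values = [0.5, 0.5]  # initial value estimates for the two options
for _ in range(200):
    choice = softmax_choice(values)
    reward = 1.0 if random.random() < REWARD_PROBS[choice] else 0.0
    prediction_error = reward - values[choice]
    # Asymmetric update: positive and negative surprises get different weights.
    alpha = ALPHA_POS if prediction_error > 0 else ALPHA_NEG
    values[choice] += alpha * prediction_error

print("Learned values:", values)
```

Fitting the two learning rates to a participant's trial-by-trial choices is what lets researchers ask how the balance of positive and negative feedback shifts with age and with task context.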
And what we realized is that context, and the state that someone is in, may be just as important as the age that they are. We really had to go back to the drawing board and systematically think about context. But I think what will be relevant to tonight's panel is that uncertainty became really important. The state of uncertainty around the task, whether we were giving rewards 100% of the time for a correct answer, or 70% of the time for a correct answer, was really important.
And there was a second surprise with these adolescent experiments: When there was a high level of uncertainty, the adolescents actually outperformed adults. And outperformed children. And we think that there might be a really good explanation for this, because we also work in animal models, and we think about wild animals and their natural environment. And there's this narrative I won't repeat, so as not to reinforce it. If I meet anyone on a plane or a train and tell them I work on adolescence, they start telling me all about adolescents, and we won't mention what they say. But when you think about wild animals leaving the nest, our falcons leaving the Campanile, they have a lot of important decisions to make, right? And they have a lot of important learning to do. And so, the brain in adolescence might be supercharged for learning, especially in uncertain environments.
And when we went back and took a look at the literature, we weren't the only ones finding that adolescents showed a prowess in some tasks relative to adults and children. There were other experiments, and one interesting commonality across experiments from other laboratories was that uncertainty and volatility in the environment were the contexts where we saw adolescents really shine. And so I think it's interesting to think that, in these times of uncertainty and volatility, we might be looking to our adolescents and asking them for good solutions.
And I'll just touch on one other thing before I hand the mic over, which is this question of how do we make better decisions, thinking about framing, or priming, or context. There's some really new research, not coming from my laboratory, but I'll just highlight it because it's so gorgeous, coming from the University of Washington in Seattle: research that the laboratory of Larry Zweifel just posted as a preprint. It shows that when a rodent makes a decision in a context where it gets a cue of safety, it has a release of dopamine into part of the amygdala, a part of the brain important for threat learning, but also for cues related to addiction. And if they have the dopamine release in the amygdala, they learn about threats in a very specific way. The learning is very constrained to that specific element, and they're open to positive experiences in other conditions.
But if they don't have dopamine in the amygdala at that time, dopamine that can come from positive safety signals, then they learn about threat in a more generalized way. The threat smears and spreads to many more contexts. And so I really like this mechanistic understanding, this mechanistic description. I think it can help us design better experiments, and design better contexts that could facilitate learning. It could help in clinical contexts, too, where people are trying to recover from trauma. It could also help in understanding what happens in elections, and what kind of integration of positive and negative outcomes will result when someone's just heard a political speech that maybe starts or ends with a lot of optimism, or ends with mentions of bad outcomes.
All right, and with that, I will hand it over.
Marika Landau-Wells: Well, thank you all for being here and thank you to Jenna and everyone at L&S for the invitation to come be on this panel tonight, it’s very cool.
So I study decision-making mostly in the context of national security. What drew me to those kinds of problems are really the stakes. The decisions I study are war and peace, life and death, billions that are spent on particular defensive technologies: whether to do that, whether not to do that. And I'm writing a book on that at the moment. The way that I think about it is that I start, actually … Your lead-in was perfect, Linda. I start from the idea that threat perception has a pretty big effect on national security decision-making. And in the book, which is in two parts, I first look at how threat perception works in the brain, based on not just my work, but everything that we know, largely from neuroscience and behavioral biology.
And then sort of carry that forward, into an understanding of the decision-making processes that go on when people are confronted with big, scary, new threats. And not just to themselves, but to the countries they happen to be to some extent responsible for. And so, in the second part of the book, I look at a few cases. One of them is looking at the responses to communism in the early Cold War, so just after World War II. Then looking at the responses to terrorism in the George W. Bush administration, right after 9/11. Looking at the sort of more slow-moving threat of climate change, and how people and leaders of states have chosen to respond to that. And also sort of briefly at COVID.
And across all of these different cases, at different times, involving different people, there are some pretty consistent themes. One of them is that we can armchair quarterback the kind of decision-making that went on (which national security strategies were going to keep the nation safe, whether or not invading another country was a good idea); we can sit back and judge those decisions.
At the time, though, the choices were not particularly obvious or clear. And there was a lot of debate, not just about whether or not to make a particular choice, but in fact about the nature of the danger in the first place. And so, one of the theoretical points that I make in the book, but also prove out from a brain-level perspective, is that danger isn't just one thing. Your brain doesn't just sort of go off and say, "Danger." There are different kinds of harms to us in our environments, which are quite varied, and our brains can sense different types of problems, and we're good at avoiding different types of problems.
And what I find is that, depending on the kind of problem they thought they faced, policymakers wanted to make different types of decisions in terms of safety choices, in terms of investments to make in the name of national security. So to give you an example, in the context of the early Cold War, people weren’t really sure what kind of problem communism was. This was actually up for a lot of debate. Some people looked at it as an ideology, and it had certain principles, and those principles were antithetical to certain democratic rights and freedoms. And they thought, “Okay, well we need to combat communism, so we need to protect our rights and freedoms.” There were other folks who weren’t super concerned about that, but they said, “Look, communism is backed by a country that’s going to have nuclear weapons pretty soon, and lots of them. And actually, the main problem we face here is that they’re going to come over and carpet bomb us out of existence.”
Now, if you think that the main threat you face is being carpet bombed out of existence, versus a threat to your rights and freedoms, you're going to adopt very different strategies to handle those two things. And you add into the mix a third kind of harm that humans are really good at avoiding, and that's contamination. You have people, probably best epitomized by J. Edgar Hoover, who weren't that worried about rights and freedoms. Weren't that worried, actually, about the existential problem of getting carpet bombed. Hoover was worried about "a virus in the bloodstream of Lady Liberty," as he put it. He was worried about communism as a domestic threat. And again, those folks who saw things the way that he did proposed totally different national security strategies, in many ways. And so when we look back on the decisions that people made, and the policies that were ultimately in place, it helps to consider that there was actually a pretty serious contest of ideas about what choices needed to be made, because they didn't agree on what problem they were trying to solve.
And I see this repeated; we can see it today with climate change, and a few years ago with COVID. The same holds for dealing with terrorism. I take the approach that I do, really focusing on how the brain processes danger, because the alternative, which is popular in my field, is a more rational choice approach: that decisions should be made based on costs and benefits. And I can say, as somebody who worked for a while in corporate finance before academia, that even financial decisions aren't based only on costs and benefits. A lot of other things come into play, especially when the stakes are high. So to think that decisions of life and death and war and peace are made based on a simple calculus is, I think, not very helpful, and not going to explain very many decisions.
And so I happen to take my approach down to the brain. The motivation for all of this, for me anyway, is that, as I said, it's really easy to armchair quarterback and say, "Such and such a decision was a mistake." But, and this is another theme in the book, the decisions people make to make themselves and everyone around them safer have the sometimes intended, sometimes unintended consequence of making a lot of other people less safe. This is the problem of security choices: The choices that we make for our benefit are not necessarily to the benefit of everyone. If you think that that's problematic, and you think that maybe there's a way to step back and reconsider some of those decisions, especially when they're not very likely to pan out, I think you have to appreciate the cognitive processes that are involved.
So with my undergrads, I always use the example of missile defense. Ballistic missile defense has been a dream for a long time. It’s a psychological palliative, it solves a certain kind of problem, but it doesn’t work. And physicists have told us it’s not going to work particularly well, for a very long time, but it doesn’t stop us from wanting it, from spending money on it. So you could say, “That’s probably a bad decision, not a great decision,” but it still happens. And we might want to figure out how to get people moving away from it. But I don’t think you can do that just by saying, “This is about the numbers.” You have to figure out why people want it, why it always sells. And security does sell, safety does sell, and not always to our benefit. So that’s why I’m kind of focused on it.
Wes Holliday: OK, thanks. Thanks, everybody for being here, and for L&S for putting this on. So I’m coming from philosophy and logic, so I have a kind of more abstract take on this topic, although I really appreciate the empirical side as well.
So I want to start this off with something very exciting, which is a quote from a logic textbook from 1662. So this is translated from French, this is something called the Port Royal Logic. It’s about decision-making. Great quote. “To judge what one ought to do, to obtain a good or avoid an evil, one must not only consider the good and the evil in itself, but also the probability that it will or will not happen. And view geometrically the proportion that all these things have together.”
That's quite modern. So indeed, that's 1662, and that is now the classical theory of rational decision-making in philosophy and in economics. It gets formalized mathematically in the theory of what's called expected utility maximization. It's just the idea that a rational decisionmaker can be modeled as if they choose between actions as follows. They write down all the possible consequences that could result from each action. And they consider not only how good or bad each consequence is, which you could try to quantify with a number called the utility of that consequence, but also weight that by the probability of the consequence obtaining, conditional on the action. So suppose we're making a big decision. Something I've been thinking about these days is AI, and AI safety versus AI acceleration. Should we put the pedal to the metal on AI, or should we pause and take our time, and invest in safety? I want to consider the possible outcomes. I'm very uncertain, right? Will AI progress kind of asymptote? Will we have an intelligence explosion? If we invest too much in capabilities, will we trigger human extinction?
And in each of these cases, I want to consider not just how likely it is, but also how good or bad it would be. So you may think that the human extinction possibility is very low probability, but it's of great consequence; maybe the utility is really, really low if we have human extinction. So this idea of weighting the possible consequences by their probability and their utility is the cornerstone of expected utility theory. Now, that's just about individual decision-making. And there's a lot to say about that. Descriptively, we know from behavioral economics that people cannot be modeled as perfect expected utility maximizers. And there's also … Maybe that's not a surprise.
Audience: Yeah, that’s …
Wes Holliday: There’s also a normative question about whether the rational agent ought to behave like that. And there’s even debate and philosophy about whether that’s the right model for what people ought to do.
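(To make the expected-utility calculus above concrete, here is a minimal sketch. All actions, probabilities and utilities below are invented for illustration; they are not forecasts from the talk or from any actual analysis.)

```python
# Minimal expected-utility sketch. Each action maps to a list of
# (probability, utility) pairs over its possible consequences.
# Every number here is made up for illustration.
actions = {
    "accelerate AI": [
        (0.60, 50),     # capabilities asymptote, moderate benefits
        (0.35, 200),    # intelligence explosion goes well
        (0.05, -10000), # catastrophic outcome
    ],
    "pause and invest in safety": [
        (0.70, 40),     # slower progress, modest benefits
        (0.28, 150),    # explosion goes well, slightly delayed
        (0.02, -10000), # catastrophe, at lower probability
    ],
}

def expected_utility(consequences):
    # Weight each consequence's utility by its probability and sum.
    return sum(p * u for p, u in consequences)

for action, consequences in actions.items():
    print(f"{action}: EU = {expected_utility(consequences):.1f}")
```

Run with these made-up numbers, the low-probability, hugely negative consequence dominates the sums (-400 versus -130), which is exactly the point about weighting extinction risk by both probability and utility.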
But what I'm most interested in is group decision-making, where things get even more complicated, because of course the different agents have conflicting preferences. We don't just have a single utility function; my utility function might disagree with yours. Now, one straightforward approach to group decision-making is: We should just take the action on behalf of the group that maximizes the sum of all our utilities, for whatever consequences would come about.
And there are a couple of difficulties with that. One is, it's very hard to elicit from people these numerical values that I could then add up in the kind of calculus we already heard about. Not only is it difficult to elicit these numbers, but some people have even argued that it's impossible to put your utility values on the same scale as my utility values in such a way that it makes sense to add and subtract them. And there are ethical concerns about whether, even if we could somehow get that information from everybody, it would be right to just add and subtract our utilities.
So there’s a different approach to decision-making, which I think we’re all familiar with, and in some ways requires less information from individuals. And that’s voting, OK? So less information in the sense that, when you collect ballots from people, you’re not collecting their full numerical utilities on all kinds of outcomes. Now voting, of course, can happen after a period of deliberation and debate. So it’s not like we just want to go straight to the voting, we want some prior process.
But what I'm especially interested in is, what are the right methods of voting, or better methods of voting, especially in the democratic context? As we all know, the very familiar type of voting that we have in the United States and in many places is what's called plurality voting, where each individual just gets to indicate one option that they want to vote for, and the option with the most votes wins. We're all familiar with this from political elections.
And voting theorists generally think this is not a great way of doing voting. It's very convenient and simple, but there are big costs. The particular problems with plurality voting are vote splitting and spoiler effects, which you're probably all familiar with. Remember the 2000 election in Florida? That's a famous case, a very consequential case, where arguably there was a spoiler effect. What happened there is that voters were only allowed to vote for one candidate, and the major candidates were Gore, Bush and Nader. As you know, the vote between Gore and Bush was extremely close. And the number of people who voted for Nader in that election, and of course Nader had no real chance of winning Florida, was much larger than the margin between Gore and Bush.
And it's plausible that many of those Nader voters would have preferred Gore to Bush. But of course, they couldn't indicate that on their ballot. They could only indicate who was their favorite. So it's plausible that actually a majority of people would have preferred Gore to Bush, and a majority of people would have preferred Gore to Nader. And yet, we had no way of letting people express that on their ballot. This is not at all to make a partisan comment about how that election should have gone; it's just that, structurally, the election system did not allow the voters to really express their will. So how can we do better? Well, one reform that you may be familiar with is the use of ranked ballots, where you collect more information from voters. Not all the way to collecting numerical utilities from the voters, which you then add up; that would be one extreme of the spectrum of how much information to collect.
A more intermediate point is to allow voters to at least express a ranking of the candidates. You don't have to rank all of the candidates, but at least give some more information than just, "Who is my favorite?" And if we had allowed that in Florida, then a lot of people who voted for Nader could have said, "Nader is my first choice, followed by Gore, followed by Bush." And then, if you have a reasonable way of tallying up these ranked ballots, once it's clear that Nader is not the winner, you can see that a lot of people who voted for Nader first preferred Gore to Bush. OK? So that's the kind of reform that I think we should roll out in political elections: the use of ranked ballots.
There's now also a more subtle question of what method you should use to compute the winner on the basis of the ranked ballots, and I'm happy to talk more about that in Q&A, if people are interested. But the basic idea comes from a French philosopher and mathematician named Condorcet in the 1700s. And he had a very simple idea, which is: Look at how each pair of candidates does against the other. So ask yourself, in the Florida case, do more people prefer Gore to Bush, or Bush to Gore? Let's suppose that if we had collected ranked ballots, more people would have ranked Gore over Bush than vice versa. Then we say, "Gore beats Bush, head-to-head." And then we would have asked the same question about Gore versus Nader, and maybe we would have found that more people ranked Gore over Nader than vice versa. And then Gore would have won that head-to-head match as well.
And if there's a candidate who beats every other one, which we now call the Condorcet winner, that's the candidate who should be elected. So this is a normative claim, not about how we do make political decisions now, but how I think we ought to. And I think there's reason to believe this would really help with problems like polarization, and with the fact that third-party candidates are now deterred from joining elections because they're worried about spoiling them. This would mitigate the spoiler effect, and therefore include more voices in the political process. That's my little pitch.
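(Condorcet's pairwise method is simple enough to sketch in a few lines. The ballot counts below are hypothetical, loosely evoking the Florida example; they are not real election data.)

```python
# Hypothetical ranked ballots (most preferred first) with vote counts.
ballots = [
    (["Bush", "Gore", "Nader"], 48),
    (["Gore", "Bush", "Nader"], 25),
    (["Gore", "Nader", "Bush"], 22),
    (["Nader", "Gore", "Bush"], 5),
]
candidates = ["Bush", "Gore", "Nader"]

def prefers(ranking, a, b):
    """True if this ballot ranks candidate a above candidate b."""
    return ranking.index(a) < ranking.index(b)

def condorcet_winner(ballots, candidates):
    for a in candidates:
        # a is the Condorcet winner if it beats every rival head-to-head.
        if all(
            sum(n for ranking, n in ballots if prefers(ranking, a, b))
            > sum(n for ranking, n in ballots if prefers(ranking, b, a))
            for b in candidates
            if b != a
        ):
            return a
    return None  # a Condorcet winner is not guaranteed to exist

print(condorcet_winner(ballots, candidates))  # -> "Gore"
```

Note the contrast with plurality: on these made-up ballots Bush has the most first-place votes (48 to Gore's 47), yet Gore beats both Bush (52 to 48) and Nader (95 to 5) head-to-head, which is the spoiler effect in miniature.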
Saul Perlmutter: So I’ll finish up with another angle on the same topic, although I must say that each of the different topics that were discussed before, show up in what I’ll be describing, I think.
So this is an educational approach. What would it be that you'd like to teach at a university that would possibly be helpful when people are going to be making decisions in the world, either individually or in groups? Or as a society, as we're describing. I personally came to this not from cosmology; there were very few decisions in cosmology that actually brought me to this one. But about a dozen years ago, I remember watching our society making what seemed to be just practical decisions, like, "What's the right level for the debt ceiling of our country?" This doesn't sound like a religious issue; it sounds like just a practical question of what turns out to be the best way to do this. And yet, that's not how the discussions you saw around you were playing out.
And I was noticing at that time, I would go to the lunchroom, the cafeteria up at Lawrence Berkeley Laboratory, where many of the scientists have their research groups from the university, and people would be talking about all sorts of modern-day problems over lunch. And I realized that the kinds of discussions you were hearing over the lunch table there just didn't look anything like the kinds of discussions you would hear out in the world around us, and certainly not in the political debates. They were using what seemed like a different vocabulary of problem solving and decision-making than you would see elsewhere in the world. And I was trying to think, "Well, where is it that that vocabulary was being taught?" It was not taught in any science course I knew of, and these are all scientists; no math, physics, biology or chemistry course was teaching these concepts.
It was mostly being taught through, essentially, an apprenticeship, as these researchers went through their research Ph.D.s, and into postdocs, and even as young faculty. And I was trying to think, "Is it possible that we could just extract and articulate what these ideas were?" And it would be interesting to see, could you teach them much younger, and teach them intentionally, not just through osmosis?
And so I started, and I realized that it wasn't good enough for me to do this just as a physicist, because many of the things that you need in decision-making really come from our colleagues across the campus, who have many other areas of expertise. So I found a professor of social psychology, Rob MacCoun, who was at that time in the public policy school and the law school, actually. And I found a philosophy professor, John Campbell, and they got interested in this idea. And we put a sign up saying, "Are you embarrassed watching our society make decisions? Come help invent a course, come help save the world."
And about 30 graduate students, postdocs and undergraduates started showing up at the end of every Friday. We'd meet around, I think, four o'clock in the afternoon. And we would ask, "What would be a minimal set of ideas that would be useful for everybody to know, in order to be able to just think through problems together better?" Over the course of nine months, we'd go on past dinnertime on these Fridays, and eventually we came up with, I think it was 23 concepts that we thought would be kind of interesting to see: Could you teach them? And then we started asking, "Are there ways to teach these that might catch on, and that people would recognize when they saw it? Not only in their own lives, but when they read about something going on in the newspaper, and, for that matter, in politics?"
Some of the ideas were just the ways that scientists over the years have noticed that we tend to fool ourselves, and the ways that we tend to go wrong. So there are lots of tricks of the trade that people have developed to try to avoid fooling ourselves in those particular ways again.
These include the fact that it's very easy for us to see random noise, just coincidences, and think we see a pattern and understand how the world works. As scientists, over the years, people realized that they were constantly getting fooled by these things, and they had to become more aware of how often random numbers look like real effects. And that was something you could actually teach. This is sort of the underpinning of why we go to statistics, and why we need statistics: because we aren't very good at recognizing these things without those other tools.
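(One classic demonstration of how often random numbers look like real effects: pure chance routinely produces streaks long enough to feel like a pattern. The simulation below is an illustrative sketch, not material from the course.)

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

# Simulate many sequences of 100 fair coin flips and count how often a
# streak of 7 or more heads or tails appears purely by chance.
trials = 10_000
hits = 0
for _ in range(trials):
    flips = [random.choice("HT") for _ in range(100)]
    if longest_streak(flips) >= 7:
        hits += 1

print(f"P(streak of 7+ in 100 fair flips) ~ {hits / trials:.2f}")
# Typically around 0.5: a "hot streak" that long happens about half the time.
```

Intuition says a run of seven in a row must mean something; the simulation says fair coins do it about half the time in 100 flips, which is exactly the kind of calibration statistics provides.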
Some of the concepts have to do with probabilistic thinking in general: the fact that we tend to like to have our debates as if everything is absolutely true or absolutely not true. Whereas in fact, for very few propositions in the world do we know enough to know that they are absolutely true. At best, maybe they're 99.9999% likely to be true, and we bet our lives on it. We will get on that airplane, and assume those tons of metal are going to fly.
But most things are not that sure. You might have maybe 85% conviction that teachers are teaching to the test nowadays, and therefore standardized testing is a bad idea. But you wouldn't be shocked, maybe 15% odds, if that was not really what was going on. And it's good to be able to differentiate those different levels of certainty. It makes you, I think, a more nimble thinker if you can play with the probabilities, which I was thinking of in terms of the adolescents: that in some sense, scientists are being trained, as best they can, to stay as adolescent as they can for as long as possible. To be able to hold things in that sense of, "We're not really sure, but therefore we really need to know what the odds are, in these different respects."
So a lot of the course topics had to do with these forms of skepticism, ways that we fool ourselves. That's, in some sense, the brakes of science, to avoid falling into error and falling into traps. But you can't drive a car with just brakes, and another whole aspect of the culture of science is the accelerator pedal, which has to do with being able to take on big problems. And there were a number of different elements we realized we could teach having to do with that. One is just the fact that most of us have very little idea of how long it takes to solve an interesting problem. We tend to give up way too early on any problem that's really worth its salt. We ask the students, "What's the second-longest you ever spent trying to solve a problem or a puzzle?" And of course, we say second-longest because you don't want the one time somebody got obsessed.
And we find that people will say, "Well, I've spent hours on it, one time," or, "I spent maybe days." But of course, the kinds of problems that most scientists and researchers work on, these interesting problems, tend to take months if you're lucky, years typically, decades sometimes. And that's not terrible. You need a culture of conviction that you can solve problems, so that you can stay convinced long enough to actually stick with a problem and solve it. So that's one element.
And then there are these other elements of how you parse a problem that seems way too complex: There's no way, it seems, that we're going to be able to solve these vast problems of global warming, or pandemics, whatever it would be. But if you parse the problem, and figure out where the main levers driving things are, and which things you're paying a lot of attention to that aren't really that important, it makes a huge difference. So we taught some techniques for that, and some techniques for the fast estimation that's necessary in a modern world to tell when you're being fooled by numbers. A number of things, in short, that make you feel a little more powerful as a solver of problems. They can be the accelerator pedal to go along with all the skepticism, which is the brakes.
And then finally, we turn to a whole other aspect of this problem, which is that even if you've taught all these techniques of rationality in problem solving, it makes no difference when you get to group decision-making if you don't think through how you're going to actually weave all that rationality into what's driving a decision for a group: all the fears, and the goals, and the ambitions, and the values that are in play. And if you say, "Well, that's not my problem, we're just teaching the rationality," you're not really doing anybody any service. Since those aren't the things that got you into the room, the rationality is the part that gets left behind if you don't manage to come up with some way to weave them together in a consistent way.
So then we actually spend the last whole fraction of the course looking at different techniques that people have developed for weaving together the values, and the goals, and the fears, with the rationality, so that you don't lose any of those things. And ideally, to be able to adjudicate in different ways, and have people recognize: Which parts of this are really factual parts that you would have to have a certain kind of argumentation over, versus which parts are really values and priorities that you have to have a different kind of argument about? And so that becomes a big part of the course at the end, as well.
So we've now been teaching this with three faculty in the room at all times, from natural science, social science and the humanities, to model what it looks like to have different people deliberate together in a thoughtful way. And the students are from all those areas as well, and all classes in our university. And we found that the students really seem to get it. I mean, they seem to pick this up. We do lots of experiential game playing, and activities, and discussion, that gets them to recognize it in the world around them, their personal world and their larger world. And we hear from the students that they feel like it's been a really important part of their education.
In fact, somebody just stopped me on a walk yesterday. No, actually two days ago. And they just wanted to tell me that they took this course as a freshman, and it's shaped everything they've done in the years afterwards. Which is very nice to hear. And of course, it's a somewhat self-selected group that comes to take a course like this. So I don't know whether we will see this on a larger and larger scale, but we're trying now to spread this to a bigger and bigger part of this university, and to other universities.
And so now, Harvard is teaching a variant of the course, as is Irvine, and the University of Chicago just picked it up this last spring. And we're also working with the Nobel Prize Foundation to develop a high school version of the course that can be … Because every place in the world picks up their teaching material without going through their school boards first. So we're hoping that there may be a possibility that, if you wait 20 years from now, we'll be just that little smidgen better at being able to think through problems together. And obviously, that's the goal. But all of this, of course, is subject to all the revisions of everything that we learn along the way. We're constantly coming back to the class and saying, "Something we were teaching you recently, we now think may be wrong. And you shouldn't feel disturbed by this; that's part of this progress. We're constantly discovering new ways that we're getting things wrong, ways we're fooling ourselves. And we'll constantly do better and better at holding those at bay."
So that was the content. I will mention that I think the reason some of those pages are on your chairs is that the three faculty also ended up writing a book, which just came out this spring. So, if you're curious, you can find the book listed there.
Jenna Johnson-Hanks: Wonderful. Thank you so much. So we’ve got about 15 minutes for questions. Do people have questions? Oh, all right. I was expecting that I would need to fill a little space, but I’m not going to. Please, speak up. If you could just introduce yourself, and then ask your question.
Audience 1: Right now, I'm a student of Ron Howard; that's a familiar name at Stanford. But I also taught at Berkeley, briefly. My two favorite books on this topic right now are Thinking, Fast and Slow and The Power of Us. What do you think of those books, and what would you add to the list of reading that I should get out and do?
Saul Perlmutter: I’ll just start with a quick mention of the fact that Thinking, Fast and Slow was a whole section of what we ended up teaching, because we realized that along with all these other ways that we can go wrong, there are all the ways that our cognitive processes fall into certain very standard known traps, and that it was at least worth sensitizing people to that. But let me pass it to the others, who are more expert in these topics.
Linda Wilbrecht: I have a book recommendation that just came out a few weeks ago. It's called 10 to 25, by David Yeager. It's about adolescent thinking, and it's geared toward teachers, but also people in the business world, and anyone working with the next generation with a more supportive, mentoring attitude, rather than this adolescent-incompetence model.
Jenna Johnson-Hanks: Marika, any other books you want to recommend?
Marika Landau-Wells: I mean, all the books I read are just about such depressing things. Those will give you a positive take on humanity, so that's probably pretty good. But if you want something else, Richard Wrangham's most recent book is something I assign to my students in my psychology and conflict class, as a way of thinking about the long evolutionary trajectory of why it is that people think fighting is a good idea. And particularly, whom they fight with and why.
Wes Holliday: Yeah, maybe just one, on the more technical side. But if you want to see how economists and philosophers think about the normative theory of decision-making, there’s an economist named Gilboa who has a number of books on this topic, including a more accessible one. I’m just forgetting the name of it, but look at Gilboa, on Amazon.
Audience 2: Hi, my name is Sean, and I am the parent of a budding neuroscientist. I have a question about adolescent thinking. One of the things I took away from your comments was that adolescent brains can thrive in uncertainty, sometimes better than adults in some areas. What happens when parents, in particular, or society does all it can to remove uncertainty from adolescents' lives?
Linda Wilbrecht: I think a good model for thinking about the adolescent brain is that you need experience in order to set the wiring. There is a lot of plasticity, and there are a lot of new connections that are reaching out and trying different things. "Experience-expectant plasticity" is a term we use to think about that: The plasticity is happening in advance of the information coming in, so you have these synapses reaching out, and you probably need those experiences in order to confirm the synapses. And I think our general model as a parent is just safety-focused: that you're going to wait through this period.
Especially if you think about the incarceration of juveniles, or the lack of opportunities for internships, or the pandemic and being at home: That plasticity is there, and it needs to be fed by information and experience, including negative experience. I think it's really hard as a parent to allow some negative consequences to happen, and to allow learning from those negative consequences. And so, yeah, how do we … A good model, I think, is to scaffold development, and take that scaffolding away slowly, to make sure your kid survives to 18, but also to ensure that there is some sense of trying something and finding out what happens.
Jenna Johnson-Hanks: I think you’re in the …
Audience 3: OK. I feel like critical thinking is not always front and center anymore; it doesn't happen as often. And with the advent of AI, I think a lot of first-level critical thinking, that opportunity for people to practice it, is being taken away. So do you have any thoughts about that, and will that change things?
Jenna Johnson-Hanks: What was your name?
Audience 3: My name is Britt.
Jenna Johnson-Hanks: Wes, you want to take that one?
Wes Holliday: Yeah, I can take that, coming from philosophy. Because of course, we're super concerned about this. I mean, one of our main goals is to help students learn how to be analytical thinkers. So we don't want them to just outsource their thinking to the chatbot. We've been having a lot of internal discussions about how to make sure that doesn't happen, because I think it's really important that humans don't lose control of the whole decision-making process and just say, "Well, let the machine make the decision; it's kind of inscrutable to us how exactly it arrived at that decision." That would be a kind of loss of control for humanity that I think is really important to avoid. So I don't know that we have all the solutions yet, but it's something that at least the philosophy faculty has definitely been thinking a lot about.
Saul Perlmutter: I was going to mention that in trying to teach a course that's basically a critical thinking course, we were thinking that one approach might be to have the students not only gauge where the other humans are going wrong, but also gauge where ChatGPT is going wrong. That's a style of teaching that I think some people have been talking about, where, along with everything else, you actually ask the students to go get the answer from ChatGPT, and then figure out: Is it making all the same mistakes that the humans it's trained on might have made? Or is it making different mistakes? And so maybe that might be an approach to dealing with some of this pedagogically, and then hopefully people will do that in real life.
Audience 4: And that’s … We have one more.
Audience 5: Hi. Yeah, my hand went up really early. I'm an alum, a Russian major, a retired infectious disease physician. So my question to all of you is, what about the illusion of rationality in decision-making? I didn't quite hear that in everybody's comments.
Jenna Johnson-Hanks: Can you ask that question … Can you rephrase that just one more time, to make sure that we understand clearly what the question is?
Audience 5: Sure. There are a lot of reasons to make decisions. One of them is, "Oh, the world is round. Well, I could sail my boat around it. Well, maybe the world isn't round." It's about looking at how we believe the world to be, versus how it really is, and then what we are going to do regarding what it is. I won't get into politics, I promise, but as a Russian major, I went behind the Iron Curtain, into the Soviet Union, a long time ago. And it was very interesting. And it wasn't perfect, and I came back a much more patriotic American. But it was really interesting to see how we can look at the other side. So not only from a standpoint of what we think is right, Democrat, blah, blah, blah, but also, how are we deciding? I had many patients die of AIDS, and some of them just said, "I just can't put up with this. I'm just going to live my life." So kind of have a conversation about the do versus the talk, or the rationale versus "got to live my life." Did that help?
Jenna Johnson-Hanks: Wes, you …
Wes Holliday: I may have …
Jenna Johnson-Hanks: OK. Wes thinks he’s got it.
Wes Holliday: Go ahead.
Marika Landau-Wells: No, hmm.
Wes Holliday: Go for it. I just wanted to make one distinction, because philosophers love to make distinctions, which is between rationality as a kind of internal coherence and a different notion of accuracy. Your beliefs, like my belief about how likely it is to rain later this week, could be more or less accurate. So that's one question. (Mine are very inaccurate, actually.) But on the models of rational decision-making that come from economics and philosophy, a person could be rational, in the sense of being totally coherent in how they make decisions, even though their probabilistic estimates are very inaccurate. They're like bad meteorologists; they're bad at predicting the weather, right?
So I think what I was trying to pull out of your question was a concern about accuracy, about whether the world is round or flat. And I think that’s, in a way, a separable question from the internal logic and coherence of the decision-making.
Jenna Johnson-Hanks: Marika, I think you …
Marika Landau-Wells: Yeah, I mean, I think part of the problem is that for some of the things that we care about deeply, and decisions that we'd like to be able to make, the problem that Wes pinpoints is even harder, because accuracy is just not possible, especially when it comes to reasoning about, for example, other people. One of the things about humans is that we have to reason about each other, but we don't actually have to be right very often; we can actually be wrong in really useful ways. If I infer that everyone's intentions are slightly better than they actually are, I'll be more cooperative. I'm wrong, but it's not bad; it's probably a good thing.
But it’s just to say, that we have to construct the world and everyone in it that we need to interact with, and we’re going to be wrong a lot of the time. When we are wrong in a way that affects the decision that we’re going to make, and if that decision seems very consequential, we might think that to the extent we believe there’s information out there that we can go get to reduce the uncertainty that we have, we should find ways to encourage that kind of information-seeking. In government decision-making, you can get better intelligence to a point. But only to a point. And there is going to be uncertainty around a lot of the decisions that are still made. And it means that you have to be, in some sense, disciplined about the reality you construct for yourself. So people are generally also not great at reasoning too far away from themselves.
In a study that I'm about to publish, I ask half the people to think about threats that they understand. So climate change is one, and illegal immigration is another. And then I ask the other half, "Why do you think that people who are worried about those things are worried?" So the sample is split, and they're answering at exactly the same time: Some people are answering for themselves, and some people are answering for those others. And I measure the accuracy of those guesses. How good are people at thinking about the threats that others perceive?
And the trick is, you have to share a belief that that thing is dangerous. If you don’t believe that climate change is dangerous, you’re going to have a very hard time accurately imagining why someone else does. Same with illegal immigration. If you don’t share that underlying belief, you’re not very good at that imaginary jump. And that’s a really simple jump. It’s a jump in a survey, it’s a jump when the stakes are low, it’s a jump when someone’s not your enemy. If you add in all those layers, it makes it much harder to understand, accurately, the world from someone else’s perspective. So I regard that as a challenge more than a problem. I think it’s just a fundamental thing we have to grapple with.
Linda Wilbrecht: One thing to add to that, too, is that sometimes you can find an individual who has the information, yet it doesn't inform their behavior. Someone with frontotemporal dementia might know they're hot, but not be able to take off their coat, and they can overheat and become sick. Or they might reach into an oven, touch hot cookies that are baking, and burn themselves, even though they can explain perfectly well why you wouldn't do such an action.
And you mentioned some medical contexts. I'm very interested in addiction, where negative outcomes fail to inform decision-making. There are many different ways to process threat and negative outcomes, and if you try to dissect the circuitry, there are seven different sub-circuits where we're getting a handle on the different cell types. And potentially, in the future, we might be able to re-sensitize the circuits that are failing to register, and bring that negative information into the decision, to help people act differently and avoid costs.
Audience 6: I'm Laurie, and I'm an alum of the Berkeley campus. I have a question specifically for the national security person, and that is, what role do you think emotion plays in decision-making? And here I'm thinking about people looking back at their own national origin, or being persuaded by a particular leader who's got a lot of charisma. How much do you think emotion plays into that kind of decision-making?
Marika Landau-Wells: I mean, the short answer is, you can't make decisions without emotions. The idea that those things are separate, or that you have a rational side of you and an emotional side of you, has been put to bed with some pretty dramatic neurological evidence (not neuroimaging) of people who become emotionally impaired and then essentially cannot make decisions. So if you don't know how you feel about something, you're not going to be able to make a choice. That's the short answer.
The long answer to your question is, because you think and feel at the same time, if someone is trying to get you to think something or believe something, getting you to also feel something in response, or in tandem, has a pretty powerful effect. In fact, when you look at attempts at persuasion that forgo any sort of emotional context or content, you probably won't even remember those attempts. Somebody giving you a list of facts, unless one of those facts happens to scare the heck out of you, probably isn't going to even register. And so, folks who are trying to persuade, or trying to instill particular beliefs, whether they know it or not, are going to get leverage on you by instilling particular emotions. Sometimes those are positive, trying to build a positive sense of affinity, and sometimes they're negative, trying to scare you into doing something, or trying to make you feel disgust or anger or hate. Those are certainly levers that people can pull.
So I would say that for any decision you make, you can walk down the supermarket aisle and try to pick a new type of detergent, and you're going to find it very hard unless you find something to feel something about, even if it's just the price. Those are the trivial ways in which you need to feel something to make a choice. Once a choice or a belief becomes cemented in you, it's represented as also having an emotional association and a physiological response. As far as I've seen, those things are hard to shake. They're hard to alter. And so, one of the most problematic features of what you've described is how sticky those kinds of manipulated beliefs-plus-emotions really are. I've been asked a lot how you persuade someone to not think about something as a threat, and it depends on the kind of problem you think you face, but some of these beliefs are quite resistant to being moved once they've been instilled.
Jenna Johnson-Hanks: Thank you. We’ve got time for one more question.
Audience 7: Hello, guys. I'm Armand. I'm a J-School alum. Since we're talking about making better decisions, how better to discuss the election than right now? Well, it's too late to talk about making better decisions; we're three weeks out, and unless the Democrats change their candidate again, I think we're stuck with what we have right now. But my main thing is, if we look at the last eight years, the quality of candidates has gone down dramatically. And the way things are going, we're getting a lot more candidates that are, let's put it frankly, more style than substance. So how exactly can we, and this is for all four of you, how exactly can we as Americans, as electors, as voters, make the conscious decision to pick candidates who are, well, probably much better qualified to lead?
Marika Landau-Wells: Vote in your primaries. (Audience laughs) I could expand upon that. This touches on one of the debates in American politics. It’s not something I’m super close to, but it is something that folks here do research on. In fact, there was a book talk right before this at the Social Science Matrix, on our polarized politics, that probably touched on this question: Why do we get the candidates that we get? Well, the candidate you see on the ballot in the general election in November is somebody who passed through a process, and the processes by which you get on that ballot have a lot to do with who you were running against in the spring. Now, who turns out to vote for that larger slate of candidates? Well, not everybody. And who does well in those contests? Not everybody.
An increasing number of folks are studying that primary process for exactly the reason the question raises: If you’re concerned about the candidates who are on your ballot now, you have to look at the entire process that got them there. And there’s some evidence, I think, that that’s where special interests can play a role, because candidates need to raise money. They need to raise money when they’re not necessarily well known, and they’re competing within their own parties. So it’s not enough just to signal that you’re one party or another; you have to be different from everyone else in your party in some way that’s useful. So when you’re looking at how extreme some candidates get, and you look at why, that’s where what we sometimes call outbidding occurs. It occurs earlier in the process than you’d think, because these are people within the same party running against each other. And the folks who are voting for them are, to be honest, a little bit weird.
So think about the people who show up to vote in primaries. Maybe everyone here is a primary voter, but not everybody in the United States is. It’s a fraction of the people who vote in the general election. And so your candidates are being selected for you, in a very real sense, by a fraction of the population, on ballots that are maybe not optimally designed. So that’s the kind of technical answer.
Saul Perlmutter: Actually, maybe you want to answer, in terms of the …
Wes Holliday: I just want to say, part of the problem, I think, lies with the electoral system itself. This problem I mentioned about vote splitting has indeed been a problem in primaries in the past, where you have a bunch of moderate candidates who split the moderate vote, and then a more extreme candidate passes through the primary process. So yes, vote in your primaries. But I think it would really help if some of these electoral reform efforts were successful.
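Holliday’s vote-splitting point is easy to see with a toy tally. The following is a minimal sketch, not from the panel, with invented candidate names and vote shares, showing how a plurality primary can nominate the candidate a majority of voters like least:

```python
from collections import Counter

# Hypothetical primary: each ballot records the voter's single favorite.
# Three moderates together hold 60% of the electorate, but they divide it.
ballots = (
    ["Moderate A"] * 24
    + ["Moderate B"] * 20
    + ["Moderate C"] * 16
    + ["Extreme X"] * 40   # a minority bloc, but the largest single one
)

# Plurality rules: whoever gets the most first-place votes wins outright.
tallies = Counter(ballots)
winner, votes = tallies.most_common(1)[0]
print(f"Plurality winner: {winner} ({votes}/{len(ballots)} votes)")
# Prints: Plurality winner: Extreme X (40/100 votes)
```

Assuming the moderate voters all prefer any moderate to Extreme X, any one of the three moderates would beat X head-to-head, 60 to 40, yet X wins the plurality contest. Electoral reforms such as ranked-choice ballots target exactly this failure mode.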
Saul Perlmutter: And I was going to add just another angle on all this, which is, you could ask, which of our friends would actually put up with being a candidate nowadays? One of the things we’ve been teaching in the course is a different approach to decision-making in a society, one that is starting to be tried in a number of different countries now: the technique of civic assemblies. Deliberative polling is another term that’s used. We actually enacted one of these in our class to try it out. What’s striking about it is that they take a truly random sample of the population. For the United States, that might be 600 people. So you really have all the representative elements of the population in the room, and you start them deliberating.
So they’re not a self-selected group; this is a truly random sample. The organizers go out of their way to make sure that people can come, so they pay for child care, they pay for airplane tickets to fly them there. At best, they’re able to get a 95% response rate on some of these, because people really get the sense that they’re being asked individually to come. That’s unlike a typical poll, where you get, what, nowadays, 10% if you’re lucky? Even less? So now you have a really representative group of people in the room, and what they say actually means something; it tells you something about the whole population. But when they start to deliberate on some topic, of course, as you might expect, they don’t really know much about the topic, so there’s a danger of a lack of information.
So what they do is bring in a panel of experts for the group to question as they get stuck in their deliberations. The panel of experts is not allowed to lecture them, and it’s not allowed to try to convince them of anything. The experts may disagree with each other. In fact, they probably will, because they represent different expertise and different positions. And the deliberative group, then, has to adjudicate between them. What they find is that this process really generates some very good behavior: People act a little more like juries, where they actually do think through problems together. They don’t get stuck the way that their leaders currently get stuck.
And the reason I bring it up in response to how we get better leaders is because I went to an event not that long ago with the people who are organizing this for Ireland, for Denmark, for the EU and for Canada. A number of countries are starting to build this into the systems that work with their legislatures. And they all made this comment that, aside from anything else, the people who come in to participate are, as you would expect, pretty apathetic about most of the issues of the world. They’ve not been really involved in politics. They don’t read much of the news, generally. But the organizers say that almost always, by the time they come out, the group is charged up. They’re really excited by the process. They all say, “That was fabulous, everybody should do that.” And then a number of them end up running for office.
So it leaves you with a little bit of a sense of optimism. We just haven’t necessarily tried every route to making a democratic system work well. Along with electoral reform, there are other moves that could make a big difference, and other countries are beginning to try them.
Jenna Johnson-Hanks: Decision-making necessarily involves emotion, and a little bit of optimism and hope is going to give us some motivation to go forward. I’m so, so grateful to this amazing panel. Thank you so much.
(Applause)
(Music: “Silver Lanyard” by Blue Dot Sessions)
Anne Brice (outro): You’ve been listening to Berkeley Talks, a UC Berkeley News podcast from the Office of Communications and Public Affairs that features lectures and conversations at UC Berkeley. Follow us wherever you listen to your podcasts. You can find all of our podcast episodes, with transcripts and photos, on UC Berkeley News at news.berkeley.edu/podcasts.
(Music fades out)