Berkeley Talks transcript: We need a digital infrastructure that serves humanity, says techno-sociologist

Intro: This is Berkeley Talks, a Berkeley news podcast that features lectures and conversations that happen at UC Berkeley. Find more talks at news.berkeley.edu/podcasts, and you can subscribe on Apple Podcasts or wherever you listen.

Abigail De Kosnik: Thank you everybody for coming out tonight. My name is Abigail De Kosnik and I’m associate professor here, as Claudia mentioned, in the Berkeley Center for New Media and also in the Department of Theater, Dance and Performance Studies. It is my honor and pleasure to introduce our fantastic speaker tonight, Dr. Zeynep Tufekci. Before I do, I want to let you know that there is an official hashtag for this event. It is #onlinehateindex. So please use that to post about the event, and you can also use that hashtag to ask questions, which the panel may answer during the panel discussion that will follow Dr. Tufekci’s lecture. Once again, it’s #onlinehateindex.

And now I’ll introduce our phenomenal speaker. Dr. Zeynep Tufekci is an associate professor at the UNC School of Information and Library Science and a regular contributor to the New York Times op-ed section. She has an affiliate appointment with the UNC Department of Sociology and is a faculty associate at the Harvard Berkman Klein Center for Internet and Society. She was previously an Andrew Carnegie Fellow and a fellow at the Center for Information Technology Policy at Princeton University. Dr. Tufekci’s larger research interests revolve around the intersection of technology and society. Her academic work focuses on social movements and civics, privacy and surveillance, and social interaction. She is also increasingly known for her work on big data and algorithmic decision-making. Originally from Turkey, and formerly a computer programmer, Dr. Tufekci became interested in the social impacts of technology and began to focus on how digital and computational technology interact with social, political and cultural dynamics.

Dr. Tufekci’s book, Twitter and Tear Gas: The Ecstatic, Fragile Politics of Networked Protest in the 21st Century, published by Yale University Press in 2017, examines the dynamics, strengths and weaknesses of 21st century social movements. Twitter and Tear Gas was very instrumental and inspirational to my working group called the Color of New Media, which focuses on intersections of new media studies with issues of difference and equity and has just published an essay collection with the University of Michigan Press on Twitter and race, ethnicity, gender and sexuality and nation, called #Identity. I think and hope that #Identity is following in the large footsteps of Twitter and Tear Gas, and one of my greatest hopes is that readers one day consider the two books to be in conversation with one another. Please join me now in welcoming Dr. Zeynep Tufekci.

Zeynep Tufekci: That was a very kind introduction, Claudia, and thank you so much. I’ll try to encourage you not to expect too much. After that introduction, I feel like I can’t live up to it. But it’s my super pleasure to be here, partly because I love being here and partly because this kind of intersection, where we have the technologists and the computer science and the sociologists and the social scientists and the humanities together, I believe is exactly what we need to try to deal with all the challenges of this historic transition.

So I’m going to say just a couple of words about how I got here, and then I want to give some examples of the kind of challenges we face, and some of the research that people are doing. The reason I got into computers, computer programming, computer science, I kid you not, was because I wanted to avoid ethical issues. I had wanted to be a physicist. Like a lot of kids into science and math, physics was the first thing that captured my imagination. At some point this happens to lots of kids who are into math and science: you want to learn about physics and the secrets of the universe kind of thing, and you’re a kid, and then you learn about nuclear weapons, and then you go, “Ooh.”

Exactly, right? That sound, that’s exactly how my kid self felt about it. I thought, “This is such a big responsibility. I don’t know how to deal with something like that.” And I paid for college myself; I started working very young. I needed to because of family circumstances, so I needed to also be practical. I couldn’t just sort of wait till I got a degree to have a functioning job, so I thought, “You know what? Computers, this sounds safe. I’ll just do accounting software, write inventory stuff,” and so that’s how I picked it.

Now, last month I had the honor of visiting CERN, not for the physics, obviously, but for the 30th anniversary of the World Wide Web’s invention. It was wonderful, a genuine honor. I met and was on a panel with Sir Tim Berners-Lee and the other early founders of the Web, a lot of physicists of course. My physics side that never happened was thrilled to be getting these super tours of the collider and talking with the researchers there. And I felt like if everything had gone right in my alternative-history physics career, like I had the best possible career and managed to be an exceptionally good researcher, I would have ended up at CERN, where everything is happening, and I would have been the 653rd author on a paper, because that’s how they work. Which is fine. That is totally fine.

This is not the part that I’m interested in, because it’s an honor to be working with such a brilliant team of hundreds of people on these interesting questions. And the ethical dilemma at CERN at the moment is, when you have papers with thousands of authors, which you have to have because you’ve got all this big stuff, and only three people can get the Nobel Prize in Physics, how can you have three people get the prize when it’s a team of thousands? That’s their ethical issue. My people? Killer robots? Upheavals around the world, potentially inflaming the hate speech, the misinformation, all of it? I was kind of like, okay, I did not manage to avoid ethics here, did I, with my career? I ended up in the middle of it, which turned out to be more interesting and exciting in some ways.

But it sort of speaks to the moment, in that digital technologies and computer science and artificial intelligence and connectivity and all our phones in our hands and all of that, it is a historic transformation of the public sphere, of the labor markets, of the way we do a lot of things. And just like any big historic transformation, it happens under particular political, economic conditions. If we had this kind of transformation, say, 30 years ago, it would have been a different set of events. And it’s a major transformation. Just like the previous ones, say, the printing press and Industrial Revolution, it’s going along with a bunch of big changes. We have digital connectivity and artificial intelligence kind of happening in similar times and very much interacting. The printing press and Industrial Revolution, they also were very much in interaction and changed the course of the world.

Now, in hindsight, it looks pretty good mostly, but it didn’t just get there. We didn’t just have printing press, books, Industrial Revolution, and whoops, fast forward, European Union. We had global wars, upheavals, major things, and a significant effort to create the kind of institutions that gave us some stability, that gave us some ways to deal with some of the upsetting and dangerous sides of it. I mean, it’s still an open question. Did we dodge all the bullets? We have climate change that is a consequence that we’re dealing with. Nuclear weapons remain an existential threat, even though we don’t have the same sort of trigger-happy Cold War. There’s all these things that I believe are still an open question.

On the other hand, we have these wonderful things. We’re not serfs, we have medical science, we have all of these on-the-one-hand, on-the-other-hand kinds of things. Existential threats, great stuff. So it’s kind of like that. It doesn’t just institutionalize and calm down by itself. I want to give a couple of examples of how things kind of work, that I think illustrate some of the dangers and also why this kind of intersectional research is really significant. Because what I ended up doing was that after I started as a programmer, and I did get a degree in computer programming, I just got fascinated by the idea that these technologies were going to transform the world, and switched to the sociology and social science side.

I’m the sort of multilingual research person, somewhat by accident. The practical considerations and my interests led me to one field, and then my fascination with the social impacts led me to the other side, and I just ended up having these dual competencies of sorts that talk to each other a lot. And those helped me try to figure out what we’re dealing with, because traditionally the academy is fairly prone to siloing itself. You got a department, you got a chair, you got a dean. So do companies. They got their engineers, programming stuff. So that’s what we need to break.

But my first example comes from my social movement research. That was after… Let’s see, that one was in 2016, 2015, it started early on, when the then-candidate Donald Trump was sort of rising in the Republican primary, but not really being taken seriously by the other candidates. On the Democratic side, there was this feeling that he was the weakest candidate, the easiest to beat. And on the Republican side, you were hearing a lot of the competitors wish that all the others but him would be knocked out, so that they could knock him off and become the candidate. So that was kind of the setting.

Now, I mentioned I’m from Turkey. I’m from Turkey, and I do research on social movements and things, and I felt like I recognized some of what was going on. This “I’ve seen this movie” feeling came over me with the candidacy, like when you’ve seen the movie and you want to tell the actors, “Don’t go into the basement.” I was like, “I should sort of try to look into this, because something serious is going on. This is a political realignment.” And this is another advantage I had, I think: I live in North Carolina, which is outside of a lot of the bubbles that sometimes can happen in the journalistic world. I started going to Donald Trump’s rallies. I started following his social media. I had a list of his ordinary followers that I would just get up in the morning and check, and I would do all sorts of things.

My research, again, is into social movements and the public sphere, at the intersection of digital technologies, and that’s what Twitter and Tear Gas deals with a lot. I look at things like how, say, the Egyptian dissidents were able to overcome certain kinds of censorship and gatekeeping to burst onto the scene. So I thought this movement that Donald Trump had somewhat stumbled into was a genuine political realignment, with certain roots in the country, that needed to be recognized and taken seriously.

I have a regular perch at the New York Times, so I thought, “All right. I’m going to write some of what I’m finding for the Times column.” It was a rally of his I had gone to, and I had noticed a bunch of things that were really interesting to me. The way the journalists were covering it at the time is they would be penned in. There was like a little pen, and they would all sit there and type. And I thought, “What on earth are they… They should just leave one person and not attend.” There would be 30 of them sitting there, because he’d forbidden them to go anywhere else and talk to the people, which made no sense. They complied with it. I thought, “This is silly.” I’m not a journalist, so I wasn’t going to comply with anything like that, so of course I’m sitting and I’m just hearing him talking.

And I had a different impression of what was going on than a lot of the journalistic accounts did. One of the things was that he was talking in these seemingly run-on sentences, but then he would hit a theme and follow it. He was very good at feeling the room and following it. I wanted to write about one of them. It was a kind of political message that didn’t fit either the Democratic or Republican messaging, and it was an example of the political realignment he had kind of stumbled into. I wanted to get the quote right. I’d already seen it. I’d taken notes, but I hadn’t just had my laptop open, typing away. That would have been kind of weird.

I also hadn’t written the exact thing, so I went on YouTube to watch the video, to get the quote right so that the fact checkers wouldn’t come back to me and say, “You misquoted it.” Very good. So I watched his rally, and I checked on a few of his other rallies to see if he used the same kind of sentence or if he’d just sort of come up with that, because it was a sentence about the military and the military spending, and this was a rally in Fayetteville, which has an Army base. So I was wondering if he just stumbled into it with a military audience or if he’d used that line before. I just watched a couple of his rallies.

At that point, my YouTube lost its mind. You know how on YouTube on the right-hand side there are these recommendations? All of a sudden YouTube started recommending, first, this mild kind of “Europe, great” stuff, and then white people stuff, sort of white supremacist stuff, and then just going down this hole into the Holocaust-never-happened stuff. Okay, step back. This is kind of a thing that occurs. I thought maybe there was some sort of correlation, something like that, among people who watch a lot of his rallies.

I stepped back from that, and I thought, “Okay, let’s try something else.” Start over, new account, open new incognito windows, and I started doing this for other politicians. Bernie Sanders was then a candidate. Hillary Clinton, then a candidate. It would be somewhat similar. It wasn’t the same way, because the way what we are calling alt-right, the reactionary ethnonationalist right, has occupied YouTube is a little different than the way the left has. But it would still give me somewhat edgier stuff. It was like, okay, maybe political stuff … It wasn’t giving me more of what I watched, it was giving me edgier and edgier stuff. I said, “Okay, YouTube, maybe this is how politics works, so let’s just sort of start and do non-political stuff.”

So I just did tons of experiments with YouTube. I would watch something on jogging, and very soon YouTube would be like, “Would you like to watch a…” Of course, this is auto-playing. “Would you like to watch a documentary on ultramarathons?” I just watched something about jogging a little bit? No, ultramarathons and extreme runners. I even learned about this stuff. Apparently there’s this super crazy run that’s like three days of running or something. I’m just trying to watch a few things about jogging.

I’d watch a video about vegetarianism, YouTube would be like, “How about one on being vegan?” So my joke about that is, you’re never hardcore enough for YouTube. If you’re in college, you have that friend. You listen to rock, they’re like, “I listen to heavy metal.” You’re like, “Okay, I listen to heavy metal,” and they’re like, “I listen to thrash metal.” And you listen to thrash metal, and they listen to death metal. You go there, they’re going to listen to atonal music. They’re going to outdo you. Whatever you do, they’re going to outdo you. And YouTube was kind of doing that. I was like, “What is going on here? What on earth is going on? Why is YouTube trying to pull people to edges like this?”

Now, this was the social science-y side, and I then learned that in 2015… YouTube has a superb AI department. They’re YouTube, right? They’re Google. Google has some of the world’s best AI scientists. And they had created a recommendation algorithm that was of course maximizing viewing time, because that’s how YouTube makes money. It shows you ads, and it just wants to keep you there, because then it can show you more ads. And this new recommender algorithm, which was based on machine learning, was getting superb reviews as having increased engagement, and isn’t YouTube doing great? I was like, “Wait. This is what that great looks like.” Because the people at YouTube and everybody else are looking at the watch time, but somebody needs to look at what is the thing that’s increasing the watch time. If there’s an algorithm that’s increasing the watch time, it’s really good to try to figure out what on earth it’s doing.
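To see what that objective looks like on its own, here is a minimal sketch of “maximize watch time” as a ranking rule. All the names and numbers are hypothetical, toy stand-ins, not YouTube’s actual system; the point is only what the objective does and does not ask.

```python
# A toy sketch of the objective described above: rank purely by predicted
# watch time. Names and numbers here are hypothetical; YouTube's real
# recommender is a large, non-public machine-learning system.

def recommend(candidates, k=2):
    """Return the k videos with the highest predicted watch time.

    Note what is absent: nothing in this objective asks whether a video
    is true, healthy, or age-appropriate. "Edgier keeps you watching"
    shows up only as a bigger number.
    """
    return sorted(candidates, key=lambda v: v["predicted_watch_seconds"],
                  reverse=True)[:k]

pool = [
    {"id": "jogging-tips", "predicted_watch_seconds": 180.0},
    {"id": "ultramarathon-doc", "predicted_watch_seconds": 260.0},
    {"id": "moon-landing-hoax", "predicted_watch_seconds": 420.0},
]

for video in recommend(pool):
    print(video["id"], video["predicted_watch_seconds"])
```

Under this objective, the edgier video wins simply because its predicted number is bigger; no one had to program "recommend conspiracies" anywhere.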

With what we’re doing right now, when people say AI, most of the time these days what we’re talking about is machine learning. Machine learning is a new set of technologies, well, it’s not a new technology. We had versions of it in the ’50s even; they were called perceptrons. Rather than instructing the computer what to do, like when I coded and would write very exact, detailed instructions, with machine learning you don’t train the computer by giving it instructions that are very detailed and tedious. You feed it lots of data. It churns through all that data, and then it creates these neural networks or models that classify and do things with that data.
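To make “feed it data rather than instructions” concrete, here is a toy perceptron, the 1950s model mentioned above, learning the logical AND function purely from labeled examples. This is a minimal sketch for illustration, not any production system.

```python
# Toy perceptron: instead of writing exact instructions for AND, we feed
# the model labeled examples and let the weights adjust themselves.

def train_perceptron(data, epochs=10, lr=0.1):
    w = [0.0, 0.0]   # weights, learned from the data
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # how wrong were we?
            w[0] += lr * err * x1        # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled data for logical AND: nobody programs the rule itself.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
for (x1, x2), _ in and_data:
    print(x1, x2, "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

The programmer never states the AND rule; it emerges from the examples, which is the whole shift she is describing.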

The other day, I was trying to explain to someone, and a metaphor that comes up is this. If you have a maze where you put the ball and it just goes ping, ping, ping, like a Plinko thing, and the ball comes out this way or that way, and there’s this big maze. That’s like a neural network. It just sort of has these many, many, many layers, and then you put all the inputs in. That maze has been created from eating lots of data. And then you put your new input in, and it goes ping, ping, ping, ping, ping, and it says this or that. It does its classification. Except rather than being like a small visual thing you can see, we’re talking about like a million by a million thing. We’re talking about these giant matrices. We’re talking about things that are very, very big, and it turns out these things can actually work fairly well for tons of things we’re doing.
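A minimal version of that maze in code: a forward pass where the input pings through layers that are literally just matrices. The weights here are random stand-ins (a trained network’s weights come from churning through lots of data), and the layers are tiny rather than a million by a million.

```python
# The "maze" made concrete: a neural network's forward pass is the input
# pinging through layers of matrix multiplications. Real models use
# matrices with millions of entries; this sketch uses 3 -> 4 -> 2.

import numpy as np

rng = np.random.default_rng(0)

# In a trained network these weights are "the maze carved by eating lots
# of data"; here they are random, just to show the mechanics.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

def forward(x):
    h = np.maximum(0, W1 @ x)    # layer 1: linear map plus ReLU "pings"
    logits = W2 @ h              # layer 2: a score for each output slot
    return np.argmax(logits)     # the slot the ball falls into

x = np.array([0.5, -1.2, 0.3])   # a new input dropped into the maze
print("classified as:", forward(x))
```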

But here’s the kicker. It works in a way that we do not understand, because we did not program it. We just fed it a lot of data of what worked, or we trained it on data we labeled, and then it creates these new ways of classifying things. It’s kind of hard to explain, but you can think about it as a metaphor from the brain. If you had a cross-section of my brain right now, that would be tragic for me, but it still wouldn’t give you any idea about the next word I was about to utter. You would just have a cross-section of my brain. So with machine learning, we have that cross-section, but we don’t really know how it’s operating. It does work. And the reason it didn’t work in the ’50s was we didn’t have a lot of data, but now, with all of us having phones and all of these sensors and all this data, there’s enough data, and it turns out this stuff really works well.

So what YouTube may well have been doing with this machine-learning algorithm, and since I talked about it and wrote about it, there have been a bunch of really good investigations that confirm big chunks of this hunch, is that it had caught on to a human vulnerability, which is that, especially for young people but not only for young people, things that promise to tell you something edgier, something that’s been kept from you, here’s-the-secret-to-the-conspiracy kinds of things, are attractive. And so are novel things. There’s a nice paper in Science, a couple of months old now, I think, on what kind of fake stuff goes viral on Twitter versus what kind of stuff doesn’t. And one of the things that seems to be a distinguishing characteristic is the fake stuff is more interesting, because it’s novel.

And as soon as I talked about this YouTube thing and wrote about it, I had so many parents contact me with stories. Like they would sit their kid in front of YouTube on the NASA channel. Here’s the Apollo mission and blah, blah, blah, which of course is exciting. But you know what’s more exciting? Going to your mom 30 minutes later and saying, “Mom, the moon landing never happened. It’s all a conspiracy.” Because all of a sudden YouTube recommends something like that, and the kid is like, “Wow, big secret.”

So YouTube has figured this out. It’s almost like the information equivalent of the human appetite for sugar, salt and fat, and that appetite is absolutely normal and understandable, because for most of human history, the defining characteristic of humanity was hunger. Next year, if it’s not already here, it’s going to happen any month now, will be the first year in human history that we have more kids whose issue is obesity rather than hunger. For most of human history, we’ve been hungry. We did not evolve around supermarkets; we evolved in the Pleistocene. But all of a sudden, with the new food environment, we’ve got processed food, we’ve got supermarkets, we’ve got all sorts of things, and we have to adjust to this new situation where there’s a surplus of food.

So with information, it of course makes sense to be interested in the conspiracy. It makes sense to be interested in the novel thing rather than the boring one that you already kind of know. It makes sense, especially for young people, to try to figure out how the world works in ways adults don’t tell them, because it’s true. Adults don’t tell them everything, and you have to figure stuff out. You can even think of it as a distorted version of what drives scientific curiosity, because you want to understand. So YouTube is a little bit like a cafeteria, where they make money by making sure you eat lots of food, and you wake up in the morning and you step into that cafeteria, and they’re like here’s your potato chips. And you’re like oh, okay. And then before you finish, the recommender algorithm brings the next thing in. It was like, I don’t know, here’s your cheesecake. And then before you finish your plate, here’s some ice cream.

And of course, if you keep doing that, you have to up it. I used to use this as a metaphor, but there is now recent research showing the amount of salt and fat in fast food has gone up, because we get used to it. It’s true for salt: if you eat a lot of salty food, you can just eat saltier and saltier. It’s the same reason that when you go to a Thai restaurant, they ask you, “Thai spicy or U.S. spicy?” Because if you grow up in Thailand, you start eating spicy food from a young age, and you can kind of up that. It’s kind of like that with information, too. It can’t just give you more of the same. It cannot pull you to the center. If it wants to keep you watching, it has to create this sort of novel, conspiratorial informational thing.

Now, the other side of that picture is that, in the U.S. at least, what we’re calling alt-right or reactionary right has figured out, YouTube is the key place where people go and watch. Kids, especially, that’s where they search. That’s where they listen to music. That’s where they do lots of things. So they’ve populated the place with an enormous amount of content, so that the recommender algorithm can pull from that. What has happened consequently is that YouTube has become this place in which you can start someplace and be made more and more extreme by socialization, because you’re not just being fed these videos. It’s the commenting space, it’s a place where you see lots of things. And that kind of socialization, it’s super important, because socialization is how we define our norms.

To explain that, I want to go from the YouTube example to a community example on Reddit that, as Claudia said, I talk about in my book. I watched this in real time, and it was an incredible thing to watch how a community forms. Right now, Reddit’s not like that. It’s changed a lot, but there was a time in which Reddit did not really interfere with a lot of the subreddits, which are like forums within Reddit, that would form there. And there were two terrible ones that I kind of watched. One of them was Creepshots, and one of them was Jailbait, and they were just as bad as those names evoke. They were these terrible places.

In Creepshots, they took non-consensual photos of women in compromising ways, like you’re going up the stairs and they try to catch under your skirt or something like that. And in Jailbait, it was young people, mostly minors, who were in public places sometimes, at the beach, at sports events and things like that, that were posted, very clearly sexualized. That’s what they were interested in. That’s the “jailbait,” because if the pictures were a little more explicit, that would have been a federal crime, but they were just collecting these pictures there.

So what happened is that obviously, child pornography is this terrible, significant crime. It is heavily prosecuted, and you can go to jail for a long time if you’re caught with it. And all of a sudden, you have on Reddit, a place where the president of the United States does these Ask Me Anything sessions, this giant thriving Jailbait community. And this was not a small little community. At some point, I think it was like 10 to 20% of Reddit’s traffic. It was this crazy number. They were right out there in the open, and you could use the same username to share photos of a 10-year-old from a gymnastics meet and then use that username to ask anything of the president of the United States. It was a legitimizing and normalizing space.

And these people had these… Sometimes people would come and try to intervene, like fellow Redditors, and they would have all these theories: “We’re not really child pornographers. The kids have clothes on them.” And some of them would say, “Well, they’re over 12, and it’s different.” I’m not kidding. This was kind of the conversation. Everybody was pressuring Reddit to do something, but they were like, “If it’s not illegal…” And they did, obviously, keep the public-facing part not illegal, because otherwise the feds would have been there quicker. But over time, they convinced each other that this was okay. They convinced each other. You saw how communities normalize deviance and extreme and terrible behavior by telling each other it was okay.

I watched it. They kept reassuring themselves. In fact, they had this couple of moderators, and the moderators were given these Reddit awards. You go to a conference, and Reddit gives you this bobbing head or something. It’s a little plastic statue, as far as I can tell. They were very proud of it. They were part of Reddit. And they were so normalized into this behavior. One of these days, a young high school student came and said, “I have nudes of my 14-year-old girlfriend.” And the stuff that used to happen in the back channels of this community, people sending each other messages, happened right in the open: tons of people were like, “Yeah, send me, send me, send me.” Now, that is obviously child pornography, and they had so created this closed space for themselves that they just openly asked for child pornography to be sent to them. And that finally brought the feds in, thankfully. And that finally got the legal thing going, and that community finally got shut down.

But while it was being shut down, the moderator was also doxxed and revealed. Adrian Chen found out who this moderator was, he was using a pseudonym, and published his name. So there was this national interest in this whole thing, and the guy actually went on national TV, on CNN’s Anderson Cooper, to defend the community. At this point, you’re kind of like, you know, this is how communities can operate. He just went, and he even brought his little bobbing head statue, as like, “Look, Reddit gave me a statue.” And you could sort of watch. He would say stuff like, “Well, the kids have clothes on,” or something like that. And Anderson Cooper’s jaw would drop, because I mean obviously, you cannot defend that community like that. It was so horrible to sexualize these little children in this terrible manner and just kind of grow and grow and grow your community.

And it was very clear that in private, these people found each other, and then once they found each other, they led each other to other places where more explicit and direct child pornography was being shared. It was creating a community normalizing sexual exploitation of children. And over time they got so used to it, the leaders thought they could go on national TV and defend this. During the interview, you kind of see the guy realizing he’s speaking as if he’s from a different planet, because the reactions, as he’s explaining, are just people’s eyes growing bigger and bigger.

And that is the same kind of dangerous mechanism. It’s one thing for a community of dissidents to come together, and then we support it. But if you have these socializing places where YouTube is kind of surfacing Holocaust-never-happened videos and stuff, and it just takes you down that rabbit hole, and then you find the comments, and then you get drawn into that community, you will find lots of people who normalize all of this. And they get used to it, and you find the politics shifting in that particular space to more extreme, hateful versions. And this is exactly what I believe is happening in lots of places online, in ways that are very much fueled partly by the affordances, the sort of things that are made possible by these online spaces, and also their business model, which is dependent on increasing engagement, which is Silicon Valley-speak for making you look at the screen longer. That’s engagement.

They finance it by selling ads, and when you ask them, they say this is what people want. Well, I’m like, this is what people want when? If I’m hungry at breakfast and there’s no other food around, or if I’m just starting to look for breakfast, and this cafeteria shows me the potato chips, I’m going to eat them. I promise you, 100%, I’m hungry, I will eat them right then. But that may not be what I want. If you ask me in a different moment, I might say, you know what, I might desire to have different choices. I might buy certain kinds of snacks. I might choose when and how I am going to indulge this and when and how I am not going to. I might want to have some control.

But what all these platforms are measuring, especially for young people, is right at that moment: did you click or not? That’s the kind of measure, did it increase engagement? And there’s no other choice. On YouTube, you can’t even block a channel. So if you’re a parent and you leave your kid unattended on YouTube, and they go down a rabbit hole and you want to block that channel, you don’t even have that capability. You can’t even just say click, this channel’s off. You can try to train the algorithm by giving it thumbs down. It doesn’t work. It’ll come back, because it knows, better than that thumbs down, that the kid will watch it.

How can a bunch of people design this? One, of course, this is how they make money. The second thing is, I’ll give a sad example, but it is very illustrative of some of the shortcomings of the Silicon Valley model. And since we did YouTube and some Reddit, let’s talk a little bit about Facebook. In, was it 2015? I believe so, because it was before the coup in Turkey. This is how we people from Turkey keep time: by which coup. We have lots of them, and my joke is, the Inuit allegedly have lots of words for snow; we have lots of words for coup, different kinds of them. Yeah, this must have been in 2015, because it was before the 2016 coup.

Okay, yeah, it was 2015 when, at the end of the year, Facebook decided to algorithmically pull pictures from the stuff you posted that year, this is Christmastime, pull a photograph, algorithmically chosen, put a party theme around it, hats and all, and say, “It’s been a great year!” For more than a billion people. And of course, it didn’t work. One of the saddest cases, which brought this to the attention of a lot of people, is that someone in the programming community, one of the creators of great CSS libraries in fact, had very tragically lost his daughter Rebecca that year to cancer. It was this tragedy. He’s in the tech community, lots of people know him. He’s one of the nicest people, very well-liked, a beloved person, and this tragedy.

But guess what that algorithm did? He had posted lots of pictures of her, and the algorithm pulled right… She had died, I think, in summer, July. Yeah, it was around that time. So the algorithm pulled a picture of Rebecca in a party theme, “It’s been a great year!” right around the first holiday without her. Very hard to take. There’s no easy time for this, but it was a very hard time. So he wrote this heartbreaking post calling it inadvertent algorithmic cruelty. That’s how nice he is. He called it inadvertent. We did this to a billion people, “It’s been a great year!”

On a lighter note, one of the examples was a woman whose ex-boyfriend’s house had burned down that year, so Facebook’s algorithm pulled that up and said, “It’s been a great year!” And she was like, “I didn’t really hate him. We had an amicable split-up. Thanks for the thought, but you didn’t really need to show my ex-boyfriend’s house burning down to cheer me up at the end of the year.” I mean, how does this happen? Well, you know how this happens? There are a couple of things going on.

One of them is, this is designed by a bunch of engineers. But it’s not just an engineering thing, because I could probably walk out onto the street and pull aside the third person I see and say, “Do you think it’s been a great year for a billion people?” And they’d say, “Probably not. A few bad things might have happened to a few people.” But at the time, the people running Facebook’s newsfeed, you know, they’re very young, in their 20s, and I forget that year’s newsfeed manager’s exact academic career, but it’s probably like Exeter-Harvard-Facebook, or Andover-Stanford. It’s one of those. There are three or four choices for high school, and like CMU, Stanford, Harvard, and then Facebook.

And that’s how a lot of people around his world are, too. You just go to a great high school, and then you go to the super elite university, and you get your first job at Facebook in your 20s, and your stock options have just vested. It’s been a great year for you and everybody you know. This is the kind of narrow thinking that gets people to not understand a lot of things that go on. And again, the whole thing is optimized for engagement, so they are kind of looking at it quantitatively, but not always understanding how that quantitative result is coming about. And they’re a very narrow slice of humanity, and I do not need to tell a Berkeley audience, a San Francisco audience, this, because as nice a city as this might be, it is a very narrow slice of humanity that populates most of the tech sector.

When they create some of these connections that allow misinformation and hate speech and all these other things to flourish, they don’t always understand what they’re unleashing. They’re making a lot of money. They are well-placed. They’re not very interdisciplinary. The engineering side has this heavy kind of control over things. A lot of the CEOs, a lot of the leading people, come from that side, or they come from the moneymaking side. So you don’t have the kind of research that Claudia was talking about, where you have the humanities people, where you have the social science people, where you have the engineers, where you have the primary research, where you have all of those things to try to make sense of this. How do we deal with this?

And what happens is they keep getting blindsided, even though people outside the company often try to say things in advance, say, “Don’t do this. Don’t do this.” But they get blindsided, and then react post hoc, and then they get blindsided again. They are in over their heads in ways that are not good for the world. But we also don’t really know a lot of things, like how does hate speech thrive under these conditions versus those? Fact checking usually backfires, but what would work? If you could have counter-narratives, what kind of counter-narratives would work? It took a long time, similarly, for example, to figure out an anti-smoking message that worked for young people. There was a court order that made the tobacco companies produce anti-smoking ads, and here’s what the tobacco companies produced.

“Children, kids, smoking is an adult activity. Do not smoke. Teenagers, do not smoke, because it’s an adult activity.” This is an anti-smoking ad, right? Telling teenagers it’s an adult activity, like what else? Just hand them the cigarette. What finally worked, what people found through research, was telling people about the manipulation and the lies of the cigarette companies: that the tobacco industry knew about the addictive nature and hid it and did all of this. That worked. Telling young people that it’s going to cause cancer and you’re going to suffer doesn’t work either. They’re teenagers. They’re immortal, of course. Threatening them with illness doesn’t work, nor do a lot of the other things we do, but telling them “you’re being manipulated” worked. So that’s one of the effective ways. But we found that through research. You’d think it might work to say it causes cancer. It doesn’t, for them. Or that it’s an adult activity; that certainly doesn’t work.

So what is the correct sort of countermeasure to thriving hate speech that is mobilizing a very fundamental human dynamic, which is in-group/out-group? We’ve had hate speech and ethnic tensions and that kind of stuff with us forever. Humans are a wide-ranging species. We have wonderful stuff, and we have terrible stuff at the same time. And right now, a lot of online spaces have information flowing through a stadium where people are divided into groups, and they’re shouting at each other. In-groups, very strong socialization effects.

For example, a lot of people thought there were filter bubbles. Academics did the research. There are no filter bubbles, really. People are not in filter bubbles. It was a good hypothesis, maybe people would be in filter bubbles, and you might think that. That’s not what’s happening. What happens is they encounter opposing information, but they encounter it in a team setting. They encounter it in the stadium with all their friends, and they push back on it. It’s an in-group/out-group thing, it’s not a filter bubble thing. You need research to figure this stuff out, because if it were a filter bubble thing, you could fix it by countering with the information that was being kept from them.

A friend of mine at Duke did that research, showed more Democratic stuff to Republicans and Republican stuff to Democrats in an experimental condition. It entrenched the polarization. It made things worse, because once again, it’s not the filter bubble, it’s the socialization within an in-group/out-group setting. So this is the kind of thing that you need the research for, to try to figure this out. The kind of research this consortium is doing: figuring out what counts as abusive language, how prevalent it is, can we detect it at scale? If we do an intervention, can we figure out whether that intervention decreased or increased it? Because a lot of things we think will work actually backfire.

These are the things we need to do. And to be honest, they’re too important to leave to Silicon Valley to decide. Have they shown themselves to be that competent at it? And even if they were super competent at it, who died and made Mark Zuckerberg king? We get to decide as a society where those lines are and what should be, and that also includes what kinds of speech get amplified, how, and under what conditions. It’s not the same as censorship to say you really shouldn’t be amplifying and recommending conspiracy videos and auto-playing them with so little control on YouTube. And Chromebooks come with YouTube. Chromebooks are 50% of the market in K-12 education in the United States. This is what we’re subjecting young people to, and it doesn’t really make sense.

I’m going to sort of wrap up, because we have a panel, we have questions, we have a hashtag, #onlinehateindex. But let me just say this. I give these talks, and then nobody talks to me, thinking I’m Miss Gloom and Doom. I’m not. Part of the reason is, you know all those upheavals I talked about in Europe and stuff like that, all those centuries? They were terrible. They were really terrible. But I lived in Belgium as a kid for a few years, because my family was assigned there, and within living memory, while I was living there, there were people alive who had lived through both world wars. In Belgium, you go around a city and there’ll be a plaque saying, “The Germans came and destroyed everything in 19…” Fill in the blanks. And then the French. It’s like this is what Belgium exists for. Armies run over it all the time.

So they have lived through so many instances where one army or another, depending on which part you were in, had run over and massacred people. There were people alive who remembered both world wars, and yet we had borderless travel. Nobody thought France and Germany were going to war within the next 15 years. Now, if you were a Belgian in 1950 and somebody came and said, “All right, here’s a bet: $1,000 that there’s another France-Germany war in the next 10 years,” you’d be like, “Sure.” That’s what had happened for hundreds of years, and right now nobody really thinks they’re going to war any time soon. And they haven’t for a long time. For all its weaknesses, and partly because people forgot what it did, the European Union came into being and built institutions that stabilized a place that we now associate with peace, but that in fact had some of the worst genocides in history within living memory.

There are people who are Holocaust survivors who are alive today. We went from there to the European Union and borderless travel within a few decades, because if you build the institutions, humans can adjust fairly quickly. So the same kind of country that produced one of the worst genocides in history can also become a place of peace. And for all the tensions around it, Germany took in the most refugees in Europe after the Syrian War, the most of all the countries, and it is hard; they took in more than a million people very quickly. So you can go from that to that within lifetimes if you build the right set of institutions.

I mean, I’m an optimist, really, because I don’t think there is anything intractable about this problem. We just have to say this is fixable, and we’re not going to do it this way. And this is a very young industry. It’s kind of like if cars were just being invented and somebody came to you and said, “You know what, in 100 years you’re going to be dealing with climate change, and you’re not really going to want to live in suburbs, and everyone is going to want walkable cities.” Then you might say, “Let’s do this differently. Let’s build public transportation, and let’s not have cars as individual ownership. It could be something you just take on the weekend for fun or something, to go out.” You’d do a lot of things differently. You might not separate the residential and business districts.

We’re like that with digital technology and AI. We’re in that very early stage where things aren’t settled, and there’s a chance to say, “This is really a terrible way to do this, and here’s a good way to do that.” And for that, you need to bring together people from across fields and disciplines and walks of life and, yes, countries. This cannot just be a U.S. and Europe thing, because this is sort of the digital infrastructure of the world. And we need to figure out a way to do this without creating more polarization, because we also need consensus-building mechanisms. We need things that bring people to the center. We need ways of figuring out how to have debates. And we need all sorts of things, great tools to build.

I think we can do it. Why can’t we do it? If we can detect the Higgs boson, and we can stop Germany and France from having yet another war, we can have a better digital infrastructure that serves our needs as humanity, rather than this sort of stumbling way that serves, yeah, the profit, the greed of a few people, while doing a great disservice to humanity. And that’s the conversation I think we should be having. Thank you for inviting me to be part of that.