[Music: “Silver Lanyard” by Blue Dot Sessions]

Intro: This is Berkeley Talks, a Berkeley News podcast from the Office of Communications and Public Affairs that features lectures and conversations at UC Berkeley. You can follow Berkeley Talks wherever you listen to your podcasts. New episodes come out every other Friday. Also, we have another podcast, Berkeley Voices, that shares stories of people at UC Berkeley and the work that they do on and off campus.

[Music fades out]

Jennifer Johnson-Hanks: Good evening, and welcome. My name is Jennifer Johnson-Hanks. I’m the executive dean of the College [of Letters and Science] and it’s my great pleasure and privilege to welcome you. The Salon series has as its central function sharing with you, friends of the college, some of the intellectual life that we have the privilege of leading all the time. That is, the College of Letters and Science is this amazing place of remarkable ideas, remarkable ideas across the widest possible range of scholarly disciplines: 37 departments, more than 40 centers and programs.

The College of Letters and Science is not only large and diverse, but it’s also of extraordinary quality. Our students are extraordinary and also our faculty are extraordinary. Just in our one college, we have more Nobel Prizes than most countries in the world. We have more Guggenheims and MacArthurs than not only the rest of the University put together, but most of the system put together.

That extraordinary intellectual vibrancy meets its highest accomplishment when we get to think collaboratively together across disciplines. Part of what is so important for us to understand at this moment is that the truth is very big, and the truth is very complicated. And therefore, none of us can take it on on our own. We take it on best in collaboration. That is something that happens in all of our programs in the College of Letters and Science, and we want to show you today what that looks like.

I’m going to now introduce the moderator who will introduce our speakers. Our moderator today is Professor Marion Fourcade. Marion is a colleague and a friend, an exceptional scholar at the intersection of economic and political sociology with comparative sociology. That is, she thinks hard about how social structures emerge and are transformed in relation to economic systems and political ones. She’s worked on the discipline of economics itself in an award-winning book.

She’s also worked on a whole variety of interesting applications, including how wine is priced and valued, how our digital lives are translated into our financial worth through credit scores, and a range of other topics. Marion is the External Scientific Member of the Max Planck Institute for the Study of Societies and is past president of the Society for the Advancement of Socio-Economics. She’s also just an extraordinary person to get to work with, and I am thrilled to hand the podium to her.

Marion Fourcade: Thank you so much, Jenna, for this very generous introduction. And I must say I feel very lucky to also be able to call you my friend. So the pairing of massive data sets with processes written in computer code to sort through, analyze, and create from these data sets has transformed all major social institutions essentially, right? It is changing today the way that we relate to the world. It is changing the way that we relate to each other. And most importantly, it is transforming the way that we work. And in particular, it is transforming the way that we produce science.

And so, today’s panel, which I’m very privileged to moderate, will describe the many different ways in which these revolutionary technologies are changing academia. And it is a great privilege to actually introduce and moderate a panel that brings together scholars from three of the five divisions of the College of Letters and Science: someone from the mathematical and physical sciences, someone from the social sciences, and someone from the arts and humanities.

So let me begin with our panelists. Joshua Bloom is a Miller Professor [of Astronomy] here at the University of California, Berkeley. He teaches radiative processes, high-energy astrophysics, and a graduate-level Python for Data Science course. He has published over 300 refereed articles on time-domain transient events, machine learning, and telescope automation. He co-founded the Berkeley Institute for Data Science. Professor Bloom has been awarded the Data-Driven Discovery Prize from the Gordon and Betty Moore Foundation and the Pierce Prize from the American Astronomical Society. He’s also a former Sloan Fellow, a Junior Fellow of the Harvard Society of Fellows, and a Hertz Foundation Fellow. He holds a Ph.D. from Caltech and degrees from Harvard College and Cambridge University. He was co-founder and CTO of Wise.io, an AI application startup that was acquired by GE Digital in 2016. And his book on gamma-ray bursts, a technical introduction for physical scientists, was published by Princeton University Press.

Next we have Keanan Joyner, who is an assistant professor of psychology here at Berkeley. He did his undergraduate work at the University of Memphis and his graduate training at Florida State University. Professor Joyner’s research seeks to provide a comprehensive account of the etiology of alcohol and other drug addictions in humans to bolster early identification and prevention efforts. He pursues this research program as Director of the Clinical Research on Externalizing and Addictions Mechanisms Lab, yes, I did it, or the CREAM Lab, which is a wonderful acronym. Professor Joyner uses a range of psychophysiological methods, ecological momentary assessment, behavioral genetics, and advanced quantitative approaches, which is the topic of today, to study the disturbances in cognitive affective processes that contribute to the emergence of substance use disorder.

And last but not least, we have Alexandra Saum-Pascual. She is a digital artist, poet, and associate professor of contemporary Spanish literature and new media at Berkeley. She’s the author of #Postweb! With an exclamation point. She wanted me to mention the exclamation point. That first book was published in 2018, and in 2025, I hear there will be another book called Earthy Algorithms. She has also published numerous articles, special issues, and book chapters on digital art and literature in the Spanish-speaking world. Her work has been featured in the Journal of Spanish Cultural Studies, Comparative Literature Studies, and Digital Humanities Quarterly, among many others. Her digital poetry has been exhibited in galleries and art festivals internationally and has also been studied in specialized monographs. Professor Saum-Pascual is a member of the executive committee of the Berkeley Center for New Media and the board of directors of the Electronic Literature Organization.

So without further delay, I will let our panelists take it away. We will begin with Josh, so the floor is yours and the podium if you want.

Joshua Bloom: Thank you. Well, thank you all for coming. Good evening. It’s a great honor for me to be here with our panelists, and thank you so much for the very kind introduction. I wanted to talk about machine learning and the sciences. And in particular, while I’m an astronomer, one of the things that I’ve been so excited about is to see just how machine learning and artificial intelligence have started to work their way into the everyday life of what we do when we try to do science.

Before I get there, I thought I might start off with just an introduction to how I got into machine learning in the context of my own work. And like a lot of things, it happened by happenstance, and it happened out of necessity.

About 15 years ago or so, I got associated with a new telescope facility that was going to be producing lots and lots of images every night. And the state-of-the-art at the time was that when we wanted to find new things in the sky, such as the things I’m interested in like supernovae and gamma ray bursts and other sorts of strange variable stars, we would just look at the data. And that’s what astronomers have been doing forever.

And so, as we were starting to think about what we needed to accomplish, the science we wanted to accomplish, the answer that I got from all my colleagues was, “Just hire more undergrads to look at the data and train them.” And I said, “Well, that’s not going to work. That’s not going to scale, because it’s not just our survey. There’s going to be more data coming after this one and more after that.”

And there was that recognition that the experts in what used to be these kinds of inference loops in science, looking at data, communing with the data, and making decisions, aren’t going to scale. So I started looking around for alternatives. And we stumbled, really stumbled, upon machine learning.

And I started to learn that there were a bunch of people at Berkeley working on various aspects of machine learning, and started meeting them and started talking to them. And one of the hardest things that we had was to get to the point where we were able to speak each other’s language and understand what the other person or people were bringing to the table.

Eventually, we wound up getting some support from the National Science Foundation for some of this multidisciplinary work. And to cut to the chase, we were able to build a machine learning algorithm that looked at images of the sky, new images, and found new things. We put it into production, and it has been running on that facility for over 10 years now. And this idea of looking for things in the sky using machine learning has now become really a cottage industry and has gotten way better than when we originally started.

After that, we started realizing, “Well, if we find something interesting in the sky, there’s this new thing that happened on the outskirts of a galaxy, is it interesting enough for us to spend more time on, to get more resources involved, to take more data with?” And there we realized, “Well, we could just have people look at all these new discoveries and make some decisions. But why not use machine learning there as well?” So we built classification engines that would allow us to make some probabilistic statements about the kinds of things we were seeing, to help us optimize the resources that we needed, the big telescopes, to follow up some of these objects.
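[To make the idea concrete, here is a minimal, hypothetical sketch of that kind of probabilistic classifier, assuming scikit-learn and invented light-curve features; the production pipeline Bloom describes is of course far more involved.]

```python
# Hypothetical sketch: score new transient detections probabilistically and
# spend scarce follow-up telescope time only on the most promising candidates.
# Feature names, labels, and thresholds are illustrative, not from the survey.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy training set: each row is a past detection described by a few
# light-curve features (e.g., rise time, peak brightness, color).
X_train = rng.normal(size=(500, 3))
y_train = rng.integers(0, 2, size=500)          # 1 = "supernova-like", 0 = other

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# New detections from tonight's images.
X_new = rng.normal(size=(10, 3))
p_supernova = clf.predict_proba(X_new)[:, 1]    # a probabilistic statement per object

# Rank candidates and follow up only the top few.
follow_up = np.argsort(p_supernova)[::-1][:3]
for i in follow_up:
    print(f"candidate {i}: P(supernova-like) = {p_supernova[i]:.2f}")
```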

And what I was really excited about is we had a discovery, now about a decade ago, where this survey was looking at lots and lots of places in the sky. And our machine learning algorithm identified and then classified this new thing that was happening in a nearby galaxy. And it turned out to be the brightest Type Ia supernova in three decades, which was fantastically interesting and important. In fact, it would have been found by amateur astronomers anyway because it got so bright. It’s in the Pinwheel Galaxy, for those aficionados out there. It got so bright, you could have even seen it with binoculars for a while.

But we found it so early, just 12 hours after the explosion, that we were able to use the Hubble Space Telescope and the Chandra X-ray Observatory to get some very detailed information about this event early on. And that allowed us to learn something new and novel about this object.

So we’ve had lots of those successes over the years, and what I’ve started getting excited about is moving beyond these sort of initial statements about new data and starting to ask questions about old data and saying, “Can we fit models to our old data, the totality of that data, in a way that we wouldn’t have been able to do because of the computational complexity of some of these models?”

And also, to cut to the chase a little bit, we’ve been able to apply this in a number of different places within the time domain and gotten to the point where we’re actually learning something new about the universe, not just finding new objects that we didn’t know we’d be able to find through this needle-in-the-haystack work. We’ve actually learned something fundamental about some of the actual equations that govern some of the events that we’ve been looking at.

So I’ve been excited to be able to bring this into my own work, and I’ve also been excited to start teaching our undergraduates and our graduate students some of the ways in which they can be thinking about the role of machine learning in their own work.

The way that I think astronomers, and maybe more broadly physical scientists, think about machine learning is a little bit like a new tool or even a new toolbox. And our approach to new tools is often, “Well, that’s exciting. I want to try to hit some nails with this new hammer I just found.” But one of the dangers, of course, is if we use these tools inappropriately or we don’t actually know what’s happening inside… This is one of the biggest criticisms: oftentimes machine learning models that have been built on lots of data produce answers and we don’t know where they come from. It’s hard to interrogate those answers and ask questions and maybe even learn something deep if what we’re looking at is the results from a black box.

So I think in our discussions on the panel, we’ll be able to unpack a little bit of the challenges, but I wanted to just start you off with this understanding that machine learning is really making its way into a lot of what we do. And it’s happening not just in my own work around transients; people are starting to use machine learning to understand the chemical compositions of planets outside of our own solar system, which is fantastically exciting. People are using it to try to extract more information from large cosmological surveys to understand the fundamental parameters of what drives our entire universe. And the important thing to realize is that we are now getting data that we probably would’ve gotten even if machine learning hadn’t been around, but we’re able to get more understanding and more information out of it if we’re careful about it.

I’ll end by just saying one of the things that I’m most excited about, I think, in the future of machine learning in the context of science isn’t just us being able to sort of get more info out of the data we already have, but to maybe even take data in a better way or even a more efficient way. And when I say that, what I mean is perhaps we can start designing the instruments that we use to look at the sky not with the experts who build those instruments working alone, but instead have those experts work alongside some AI guidance that can do very complex optimizations over a very, very large design space. Maybe we can start optimizing the way that we observe the sky based on things like what’s going to happen with the weather an hour from now, which surprisingly we don’t do much of.

So there’s a lot of low-hanging fruit, not just once we have data, but also upstream from that, before we even acquire data. So the idea that AI is going to infuse not just our analysis of data, but the entire workflow, is one of the most exciting things for me. I look forward to hearing about the rest of the panel and look forward to your questions later. Thank you.

Keanan Joyner: So I use various forms of machine learning, or collectively we call this AI, in my own work on drug addiction, and lately I do a lot of smartphone work. So we collect data about where people are and how they’re feeling and what they’re doing and just all sorts of things. And we try to predict things like, “Are you going to drunk drive tonight or drive while intoxicated otherwise?” which you obviously would want to prevent. And something that you might want to do is what we call a just-in-time intervention, which is that if we think, based on our model, that you might do this bad thing, we want to maybe encourage you not to do the bad thing, right?

And that relies on immense computing power. And it’s not particularly feasible to have a set of undergrad RAs watching data as it comes in and trying to tell you when to intervene and maybe try to prevent drunk driving or suggest that you use Uber or a cab or something like that. So that’s a unique computing problem in the service of something very applied. And AI is very exciting in its potential application to this stuff.
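[A hedged sketch of how such a just-in-time intervention loop might look in code: a pre-trained risk model scores incoming smartphone data, and a nudge is sent when predicted risk crosses a threshold. All field names, weights, and thresholds are invented for illustration, not from Joyner’s lab.]

```python
# Hypothetical just-in-time intervention loop: score each momentary assessment
# with a stand-in risk model and nudge the participant above a threshold.
from dataclasses import dataclass

@dataclass
class MomentaryAssessment:
    drinks_reported: int      # self-reported drinks so far tonight
    at_bar: bool              # derived from location data
    hour: int                 # local hour of day

def predicted_risk(obs: MomentaryAssessment) -> float:
    """Stand-in for a trained model's probability of driving while intoxicated."""
    score = 0.1
    score += 0.08 * obs.drinks_reported
    score += 0.2 if obs.at_bar else 0.0
    score += 0.1 if obs.hour >= 22 else 0.0
    return min(score, 1.0)

def maybe_intervene(obs: MomentaryAssessment, threshold: float = 0.5) -> None:
    risk = predicted_risk(obs)
    if risk >= threshold:
        print(f"risk={risk:.2f}: send nudge suggesting a rideshare or cab")
    else:
        print(f"risk={risk:.2f}: no intervention")

maybe_intervene(MomentaryAssessment(drinks_reported=5, at_bar=True, hour=23))
maybe_intervene(MomentaryAssessment(drinks_reported=0, at_bar=False, hour=14))
```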

The problem is that, I guess I will say, I have kind of a two-handed view of what AI is going to do to psychology. Because on the one hand, that’s a problem that can’t be solved using a lot of standard tools that we might use in psychology otherwise. I can’t do an interview in real time and try to prevent an outcome from happening or something like that. I need an algorithm.

On the other hand, there’s a whole lot of room for what we call bias to creep in. And I mean that actually in two ways. Our colleagues over in data science and engineering and all sorts of computer science-type fields mean bias from a statistical standpoint. But there’s also the social bias standpoint that comes into play when you talk about human behavior.

So when you start to think about, “What is an algorithm doing? What is AI actually kind of doing under the hood?” And I teach a lot of stats classes for undergrad and grad students in the psych department and we talk about this in classes. “What is your model really producing?” Well, AI, or machine learning broadly, is generating. You might’ve heard of generative AI for pictures or text, data, and things of that nature. It’s generating answers or output based on its training set. It learns from what you feed into it, in the sense of creating equations, really, that get encoded in the computer, and it generates a likely response from that.

So the problem is that when it comes to human-specific problems, we often want fair, equitable, unbiased answers, but the data that we feed into the training set often is not that. And so, we are asking AI to produce something that it was never trained on, and that can be very problematic. And so, we have to think very carefully about how we’re training our AI models and whether they’ll be useful or not. And I think there are so many awesome uses of AI, and I’m going to use it in my own work, and it’s going to definitely infuse psychological science and the social sciences more broadly.
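[A toy illustration, assumed for this transcript rather than taken from the panel, of the point Joyner is making: a model fit to data that carries a historical disparity will faithfully predict that disparity back, even for individuals with identical relevant characteristics.]

```python
# Toy illustration: a model trained on historically biased outcomes reproduces
# the bias. "group", "skill", and the 8-point gap are all invented numbers.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
skill = rng.normal(size=n)
# Historical outcome carries a built-in gap unrelated to skill.
outcome = 50 + 10 * skill - 8 * group + rng.normal(scale=2, size=n)

model = LinearRegression().fit(np.column_stack([skill, group]), outcome)

same_skill = 0.0
pred_a = model.predict([[same_skill, 0]])[0]
pred_b = model.predict([[same_skill, 1]])[0]
print(f"predicted outcome at equal skill: group A = {pred_a:.1f}, group B = {pred_b:.1f}")
# The gap baked into the historical outcomes reappears in the model's predictions.
```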

For Josh’s work, it’s wonderful that he’s dealing with physical things. They have good measurement properties. We know what a supernova is. And then, over in psychology, what is two units of depression? We all agree that depression exists, but what is it, and how much more is four than two? Those are a little bit fuzzier questions for us to measure because of the complexity of human behavior. And so, we have to feed in data to build algorithms, but what does our data set look like?

And to this point about the black box nature sometimes of these things, what our data set looks like changes over time. So in my own field, one thing that I’m quite fascinated about, and we’re just finishing up an article on this right now, is what does cannabis use mean in the context of society changing over time? Back in 1990, for one, the THC content was much lower, so smoking cannabis was not quite the same thing as it is today.

But it also meant something else socially. It was illegal and generally socially frowned upon. So anytime that I, as a psychologist, asked about your cannabis use, I not only was getting information about your cannabis use, but also a relative insensitivity to rules, laws, social norms, and things of that nature.

So what that was going to relate to within the psychological realm is kind of a different set of variables than it relates to today, because on my way home I’m going to probably pass about 13 different dispensaries, right? It is legal to just walk in and buy some if you’re over 18 or 21 in different states, right? And that means that that variable means something different. So it’s the same variable, the same input, the same data, but because it was collected 30 years ago versus today, the likely response that this system generates is going to mean something a little bit different.

And what does it mean when we’re trying to build predictive models and we’re trying to generate data from all of these models when the complexity of the human experience changes over time? So the data that it was trained on takes on new meanings as society continues to change. And I think there’s incredible promise, which is why I use it in my own work, to solve very specific problems and particularly computationally heavy problems, particularly problems that you have to have algorithms for. You can’t do it by hand or call someone in the next day after they’ve already driven drunk. You need algorithms for those things.

But we have to really be careful about what we’re curating going into our algorithms. And we see this play out in a big way with ChatGPT, right? It’s a wonderful, wonderful tool, but it’s a large language model. Given the data that it was trained on, all it’s trying to do is find the most likely prediction of the next word that comes after the last word based on the prompt you gave it. It’s not concerned with facts. It was trained on mostly facts, so it often does a very good job of producing facts. But that’s not what the model is doing. The model is simply trying to make a prediction, and it’s generating a likely thing based on a data set that you trained it on.
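[A minimal bigram sketch, vastly simpler than ChatGPT and offered only as an assumption for illustration, of what “predict the most likely next word from the training set” means. The model has no notion of truth; it only echoes the statistics of its corpus.]

```python
# Tiny bigram "language model": count which word follows which, then always
# emit the statistically most common continuation seen in training.
from collections import Counter, defaultdict

corpus = (
    "the supernova was bright . the supernova was early . "
    "the supernova was bright ."
).split()

next_words = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    next_words[w1][w2] += 1

def most_likely_next(word: str) -> str:
    return next_words[word].most_common(1)[0][0]

print(most_likely_next("supernova"))   # -> "was"
print(most_likely_next("was"))         # -> "bright" (2 of the 3 continuations seen)
```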

So we have to think very carefully when we start to take tools that were developed for very discrete and concrete outcomes, computer vision applications, applications in astrophysics, biology, chemistry; these are very concrete outcomes. And so, it works very well there. When you then take the leap and you want to apply it to the complexity of human data, it’s not the model’s fault. AI has done nothing wrong here. But we asked ChatGPT to tell us things that it was never trained or intended to do. And we’re asking it to do that, right?

And so, there’s now this divergence between what the model was trained on and what we’re trying to use it for. And that’s where I think that the social science division in particular is so critically important to this AI revolution. And we need to really think very carefully about what influence we have, what circles we run in, and who we talk to in collaboration, because there’s a translation problem. You have a fantastic tool, but a computer is a tool. Microsoft Excel is a tool. What you use that for, and the translation of how that affects humans and how we think about the complexity of the human experience, is a whole research area that we really have to develop and we really have to intentionally spend time on.

I’ve told my undergrad classes that I think there’s going to be a major five or 10 years from now, and there are going to be literature majors whose job is to figure out how to efficiently talk to ChatGPT and interface with AI systems, because there needs to be a human component of translation, because the algorithms can’t do it themselves. And if left to themselves, they will do grave damage to the complexity of real human life in terms of the outcomes that they’re predicting, because they weren’t trained to think about what data is going into them. They’re just trained on the data that we give them.

So those are my thoughts. I love it. There are also dangers and there are misapplications of it. And we have to separate all of those in our thinking about, “What is AI?” We can’t just sit in the earnings calls of whatever companies are in your portfolio, where they say AI 68 times and raise their stock value, right? What can AI do? What can’t it do? But what was it created to do and how does it work? Let’s think about that. And that’s where I think the social sciences come in: when we’re talking about applying it to humans and human data and experience, we have to think clear-headedly about those things. Thanks.

Alexandra Saum-Pascual: OK. I think the problem about being last is that I had all these things I wanted to talk about and I’m like, “Oh, no.” So let’s see if I can make sense of anything. OK. I wanted to talk about the necessity to always historicize technology, right? I’ve been thinking about what you do with it now, how you can apply it now, blah, blah, blah, etc., etc. And I think, yes, all those things are good, but technology has a history. All technological developments have a history. And you could also think that AI is just the latest manifestation of technological development from a particular perspective, a Western perspective, and actually one that is based here in California.

So when we think about something in neutral terms, like saying, “The models do this; you can’t ask them to do that,” you’re also inferring a certain neutrality of the technology, as if it is not the technology itself but what you do with it. And you could also just say that maybe there is something internal, ideologically internal, to the technology that makes it better at doing one thing or another. It’s the basic argument of, “Guns don’t kill people. It is the people.” Well, yes, you could use a gun to hammer a painting into the wall, but the gun was made to shoot and hunt, and it’s going to end up finding a way to do so.

So I just like to think of AI in terms of historical development, of how it builds on previous technologies, and try to trace that to the origins of technological development in the West. So if AI ends up running through internet infrastructure, and internet infrastructure runs through telephone infrastructure, and telephone infrastructure runs through telegraph infrastructure, and you think of the telegraph as a technology that was meant to join large spaces transcontinentally, and you think of how those cables were also traced over colonial routes and trade routes and slavery, you start seeing a different picture of how there are implications of colonial values and ideology in these technologies from the beginning. So then, neutrality is a little bit more complicated, I think. So it might not just be that we are feeding it bad data; it may be that the monster itself has a purpose. And it’s interesting to think about it in those terms, I think.

I guess I was going to talk about something like that. But what I really wanted to talk about, regardless of that, was how AI or machine learning has also been used for the arts and different artistic practices for a while. Actually, if we think about my particular field of digital writing, you would find that the first examples of digital literature, as in trying to make computers produce prose or poetry of any sense, date back to the 1950s, when the first computers actually were being put into operation.

There’s a very interesting sort of genealogy of electronic literature that puts Alan Turing at the heart of it, together with Christopher Strachey, and how they together built the love letter generator right out of the first, second computer after the Enigma situation. So you could see a long genealogy of people trying to play and create artistically with these technologies. So while I guess I was saying that there is an internal bias in technology, maybe the arts are a way to overcome that bias and try to force the machine to do different things, perhaps. I don’t know.

And then, what else was I going to say? I guess this is an approach to thinking about large problems and large situations that evolve through technologies that are so massively distributed in time and space that they’re hard to really apprehend. When we think of a large language model, these are not just powered by supercomputers that we don’t know how they’re built or how they’re being powered; they’re also using data sets that we don’t know where they come from. And these technologies may be having a particular environmental footprint because of the way that they materially function, but they may also be working to create a type of suggestive algorithm that is forcing people, or suggesting to people, to participate in certain activities that are also fueling and helping the oil industry, which in turn is polluting more.

And what I’m saying is it’s hard to trace the map. So because these things are so hard to trace and see from a scientific method, perhaps the arts and literature can help us, through analogy and allegory and those things that we’re so good at doing, and they’re able to evoke a different understanding of the impacts of these technologies where everything else fails. And I don’t know how long I’ve been talking, so I’m going to just leave it there.

Marion Fourcade: So thank you so much for these really wonderful and truly complementary presentations. I love that. So I’d like to begin perhaps with a question on really the practice of AI, the practice of machine learning, and ask each of you, what are the practical challenges that you face on an everyday basis to implement these methods in your work? And it’s a question that comes out of your presentations, really. You talked about designing new instruments, right? Keanan, you depend on this infrastructure, this phone infrastructure and so on. And then, of course, you mentioned also infrastructures. And so, I thought perhaps we could begin there as a conversation. And of course, you can react to each other’s presentations as we go along. Yeah.

Joshua Bloom: Anything?

Alexandra Saum-Pascual: Sure. I’ll go. I am very interested in exposing the tension or the contradiction in any kind of large digital technology that presents itself as a virtual essence, immaterial, abstract, ethereal. Think of something like the cloud as our master metaphor here. That’s something that presents itself as limitless, endless, and also kind of harmless, because nobody is afraid of a cloud, although fog here, I don’t know. They present themselves as something immaterial to hide precisely the fact that this immateriality is based on large physical infrastructures that have huge energy costs, that require not just pulling from a grid or burning fuel, but that also engage in real estate enterprises that create certain gentrification problems or displace certain populations, and things like that.

So I think something that I always try to do in my work is to emphasize the hard materiality of digital objects. So when I am creating any kind of digital work, or I have my students create anything, I really ask them to reflect on that, on how this object is entangled in a material web, and also in a web of life that is performing materially in the world. So if they want to reflect on the environmental impact of a model, they need to be also aware that the production of that model is also contributing to energy waste. And that is not saying, “Don’t do it.” It’s also just saying that we may need better digital materiality literacy or something. I always ask people to delete their messages in their inboxes. You know those messages? They’re somewhere. Every digital object has a material inscription somewhere in the world. Nothing is… Everything is occupying a time and a space in the world. So delete those messages, just delete those messages.

Same goes for engaging in practices like streaming, for example, right? When you recall a digital object to your computer, it requires a kind of performance throughout networks that wasn’t the same… It’s not the same to watch a movie on Netflix as it was to watch something on a DVD, right? Because you’re engaging different networks and you are recalling and spending new energy every time that you decide to stream a movie. Again, I’m always telling students, “Don’t stream.” And they’re like, “Yeah, sure.” Anyways, it’s complicated.

But what I’m saying is that in my practice I try to emphasize that relation, right? The inevitable tension between the material and physical realities of a digital object that is always two things at once, right? It’s a work of mathematical abstraction, symbolic language, that somehow through the magic of the object gets translated into physical instructions, energy pulses, that have a material life in the world.

Keanan Joyner: I think that I spend a lot of time thinking about the tension that I talked about, and essentially it comes down to data curation. So there’s this tension. It’s kind of a fundamental problem of machine learning: you need more and larger data to train on, and then you will do better with your algorithm. But more data isn’t always better, because think about if your goal, for example, is to not reproduce the gender pay gap in whatever your generative model produces. Well, if you just gather all of the historical data that you can find, the gender pay gap got worse and worse and worse the further you go back in collecting those data. So how are you supposed to generate something that doesn’t reinstantiate that? So more data isn’t always better.

But there are the very real mathematical constraints that more data is better, but you need more data of a certain quality. You need to think about data curation in training sets. How do I not have racially biased algorithms? Well, I would have to find data that itself is not racially biased to train my algorithms on. And so, I can’t just grab all of the judicial court case data relating to substance-related charges, because, for whatever reason, we decided that crack cocaine was a hundred times more dangerous than powder cocaine back in the ’80s. And Reagan said, “Well, we’re going to mostly lock up the Black community for their drug offenses, and everything else will be a misdemeanor for a drug that’s mostly used by white folks.”

And so, now you have all of this bred into your data sets, and we’re going to use those data sets because we need it. There’s the actual need to train your algorithms on something. But human data is never produced in a vacuum. And so, we have to think about somehow to have this seesaw, where we need more data, but it’s got to be data of a certain type that actually produces the thing that we want it to produce.

And that’s a tension that I don’t know how to solve. So I’ve been playing with all sorts of creative approaches. I’ve got some very smart graduate students, much smarter than me, who are trying to figure out some of the ways that we might do this with relation to addiction data, gender and racial equity, and things of that nature. Because these algorithms are out there. OpenAI is going to make enough money hand over fist that they’re going to keep pushing forward ChatGPT. That’s going to happen. And I think that if there aren’t solutions from social science-oriented folks that really get involved in AI ethics and thinking about how we manage some of this tension, it is just going to keep running more and more rampant. So I spend a lot of my time thinking nowadays about, “How do we curate data sets? What does it mean to have good data? What is good data?” More is always better, but only more of a certain type.
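[A hedged sketch of one very simple curation strategy along the lines Joyner alludes to: reweight training examples so that each group-by-outcome cell contributes equally, instead of letting historical imbalance dominate. Column names and counts are hypothetical, not from his lab’s data.]

```python
# Hypothetical reweighting example: give every (group, outcome) cell equal
# total weight so the historically over-represented cells don't dominate training.
import pandas as pd

df = pd.DataFrame({
    "group":   ["A"] * 80 + ["B"] * 20,
    "outcome": [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15,
})

# Weight each row inversely to the size of its (group, outcome) cell.
cell_counts = df.groupby(["group", "outcome"]).size()
df["weight"] = df.apply(lambda r: 1.0 / cell_counts[(r["group"], r["outcome"])], axis=1)

# Every cell now sums to the same total weight (1.0), whatever its raw count.
print(df.groupby(["group", "outcome"])["weight"].sum())
```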

Joshua Bloom: I’d say in the context of the physical sciences, we have a lot of pain points when it comes to AI and machine learning in our work. I mentioned in my introduction that there is a real challenge in getting our students up to speed, not just to know about the fundamentals of AI algorithms and approaches, but to be fluent enough in those to be able to translate the cutting-edge technologies that are flying through our inboxes every day, as new papers are being written, into something that they can use within their own work, and then produce better science because they’re using these algorithms. So that’s one of the pain points. And obviously, teaching is one of the answers. Collaboration across not just L&S, but across the university, is another answer that’s been extremely fruitful for us.

But there are other pain points which are maybe more along the lines of the psychology of how astronomers think about new tools. I’ve said I’m one of those astronomers who loves to use those new tools. By the way, there was this other astronomer you may have heard of named Galileo, who took this new technology that was meant to look at the horizon, and instead went like that and then did some stuff that’s kind of famous.

We have this history of hearing about these new tools, in this case a military application looking for ships coming over the horizon, and just repurposing them for our own ends. That repurposing isn’t as easy as going like this to that. Our data looks different than the kinds of data that are being looked at in industry, that are being worked on for new algorithms and new approaches within the computer science realm. And we’re asking different questions of our data. That’s one issue: how do we get people to start thinking about these new tools in their own language?

But the other one is when we are publishing papers and saying, “I found this great answer.” If it is a black box, there’s a lot of skepticism about that. Astronomers and physicists, yes, they’re couched in physical objects where there’s some belief in some sort of ground truth to what it is that we’re looking at. But we like to think about our data in the context of statistics. And if we are taking machine learning models and producing answers, and we don’t couch them in the same statistical formalism that is acceptable to our peers, it becomes the kind of thing where people say, “Well, I don’t believe it. I’m not going to use it.” And so, part of it is getting over the hump, but part of it also is helping our computational methods and stats friends to start thinking about working on and producing algorithms that can work alongside the kinds of data and questions that we have, rather than having to be coerced into them.

And the last pain point I was going to mention, which is something that rings true as Alex is talking, is that some of the most interesting and innovative work these days in machine learning, not so much in the context of physical science, but in the actual algorithms themselves, is not happening at the University. It’s not happening at our University or any university. And the reason for that is energy and cost of energy.

I heard a talk just a couple of days ago here from somebody at a large internet company who produced what is going to be, I think, one of the most important algorithms for us to be able to learn neural networks faster and better and cheaper than we ever would’ve been able to do. And they were able to do this because they learned an algorithm that learned on other algorithms. And it took them something like four months on a world-class supercomputer of not just GPUs, but these things called TPUs, probably $20 or $30 million just to run that one experiment. That is well beyond the scope of what any of us in this room have probably ever contemplated to do one experiment.

And it produced a great answer, and it is going to help us be more computationally efficient in a lot of our work, but some of these really big breakthroughs can only happen when there’s a large dollar associated with them. And so, we’re struggling, I think, as academics to think about our own work necessarily needing to be partly on the receiving end of what these large internet companies are doing. And they have a very specific set of motives for doing the work that they do.

Alexandra Saum-Pascual: Yes. Can I say something to that? No, this… Can you hear me? Yeah. This actually is something that worries me dramatically, because we used to think of the university as the cutting edge, vetting things through sort of a general belief that we were going to historicize, we were going to put ethical bounds, we were going to control the technology and have a certain… Obviously, we’ve always launched technology into the world before we were ready, and this is always the case. Who can predict the future? But we can look at the past sometimes, which we forget to do.

But now AI, any kind of machine learning, relies on private corporations that have very different motives to train and use and launch the machines. So I don’t think right now there’s any university in the world that has that kind of power. And that’s very problematic, because while I see all the evils of AI, and we’ve talked about it, the environmental footprint is disastrous. Also, it’s going to be more problematic if it’s in the hands of a corporation that has no reason to disclose the actual impact of these technologies. Who knows where we… We don’t know what we’re using, that is one.

Second, we don’t know how they’re training it, why they’re training it. And if they are, they’re going to make it in a way that is going to be optimizing their profits. And then, we’re going to try to make do with that technology that was developed with a different set of intents. So if we’re trying to apply this technology to sociology, to governmentality, to any kind of application, it is going to go through a lens of corporate greed first, and that is very problematic. So it’s kind of terrifying. Sorry, folks.

Marion Fourcade: Yes. And it is also the fact that a lot of the human data is collected by corporations. So it’s not simply that they have the money to do all of this, but it’s…

Alexandra Saum-Pascual: We willingly give it away.

Marion Fourcade: Exactly.

Alexandra Saum-Pascual: Right? OK.

Marion Fourcade: We absolutely do. Actually, I wanted to go back to this point of needing more data. Keanan, you said you want to curate data sets. We want more data of a certain type. But going back to Alex’s point that there is something in the machine itself: the fact that you need more data means that you have to produce a surveillance apparatus so that that data actually gets collected. So how do we think about those consequences? So it’s not simply about fixing the data set, it’s also about the whole ecology that is transforming our entire social world. So?

Keanan Joyner: Yeah. I think it’s tough. There’s an ever-present issue. I think that one answer to that, as an academic, as a clinical psychologist, sorry, there’s a fly, is to make things that matter to people, that are responsible uses of their data, and that give them control over their data. There’s one example I really like. There’s a personality psychologist at Northwestern who created a fairly simple application, a web application, that asked people to fill out different personality inventories. And he’s mainly a quantitative psychologist, so he’s mostly just developing new statistical models, etc. And his whole thing was that, “Well, I need a million people. That’s what I need actually to do what I want to do.” You’re not going to run that study at a university.

So he spent some time making a fairly simple output, that he would just tell you where you fell if you took his little questionnaire on these big five personality domains: neuroticism, agreeableness, openness to experience, etc. And that was enough, that people were like, “Yeah. It’s kind of like a little Buzzfeed quiz.” And what he was doing was trying to figure out how to handle missing data. So it was just a pure statistical kind of stats nerd kind of question, exactly after my own heart, right? But he created something that he gave them, and there was this relationship with the participant, is, “You get this little thing that’s a fun little read about yourself that we like to do, and in exchange I get your answers to these questionnaires.”

And I think that the concept there is something that’s really novel and important, is that there is a relationship between people and the use of their data. People are not just these things that we siphon data off of, but that we care about each individual subject and what is this doing for them as we try to develop these things.

So I think that you can do a lot as a researcher, a human psychology researcher, to do that. I don’t know what you’re giving back to stars, but to humans, maybe you can give them something of value with their data. Can you actually create algorithms and models that help them that they would like to have? We often like some of the effects of having Google Maps on our phone, to be able to turn a physical map into a digital one. There’s costs associated with that as well, but there’s also uses for it. And there’s going to be variations in how symbiotic the data gathering and application giving relationship is. And I think as academics in this space, we can think a lot about how we try to collect data, but also give people’s data back to them in a useful format when we’ve done something of value with it.

But to Alex’s point and some of the things that we’ve been talking about, are we producing the scale and importance of data in people’s lives that rival, we’ll call them the big internet companies and things of that nature? And what is the relative balance of those that we can have influence as academics in the academy, maybe thinking first about human welfare versus what happens outside of the academy? And how do you resolve that?

Joshua Bloom: I was just going to say that from the context of astronomers, as we think about our role more broadly in society, our job is to understand the universe. Our job is to understand the components of it, what its origins were, what its fate might be. And the output of our work, while yes, it’s academic, both in the capital A and little a sense, oftentimes shows up on the front page of the New York Times. And that is the give-back in many ways for the amazing thing that society has done for us, which is to employ us and allow us to build these great machines to get more data, to use our ingenuity to do interesting things.

Many of you have probably seen the first picture of the shadow of a black hole, and that was mind-blowing. Many of us sort of thought it was coming; we kind of guessed what it was going to look like. But when you first see it, and you see it on the front page of the New York Times, there’s something very visceral about that, and about the idea that human ingenuity was able to do such a difficult thing, such a fantastic feat. Machine learning was part of that. If you dig underneath the hood and you ask, “How did those images get made?” there are components of that that required very sophisticated computer vision techniques just to produce that image. That is not something we just took as a snapshot.

And when we start learning about the chemical composition of the atmospheres of planets outside of our solar system and start seeing evidence for non-equilibrium that potentially could have come from life, that’s going to have machine learning in it. That’s going to have human ingenuity, where we’ve done something better because we’ve applied these new tools.

What I also wanted to say is that there’s this famous quote in the context of data that was made by a data scientist named Jim Gray, whom some of you may have known. He was at Berkeley for a while before he was at Microsoft. And he said, “I love working with astronomers because their data is useless.” And think about some of these challenging biases that we hear about, where a bias in a data set can lead to an outcome that affects a person’s life and their life journey in negative or positive ways. When you look at astronomy data, we don’t have personal identifying information in there. We don’t have credit card information in our data. And if you make a mistake, you don’t start a war, you don’t get sued.

And so, what computer scientists have loved about the kinds of data that we produce in astronomy is that it is a wonderful and glorious sandbox for trying new things out, and trying them out not just on small amounts of data, but on as much data as you could fathom. We have a lot of data. And so, another thing, in some sense, that astronomers and physical scientists bring to the table is that our data is useless, but it’s really exciting.

Marion Fourcade: Alex, do you want to add something?

Alexandra Saum-Pascual: Yeah. I just want to add something else, which is that in this whole conversation, we’ve departed from a sort of… We’ve accepted that it is OK to translate real life into numbers, right? We’ve accepted this. We haven’t questioned it, the fact that people can be reduced to a set of numbers, that life can be extracted into quantifiable units. We can do that with images and then create models. And we are assuming that there’s a correlation between that representation and the reality of the world, the material of the world. And also, the fact that we’ve accepted that human life can be turned abstractly into numbers.

And this is something that is not necessarily a given, right? This is a historical given that has a very traceable history to the early modern period in Europe, where it was decided, where some people decided, that a certain type of life could be turned into nature, following a Cartesian dualistic division that also has a Judeo-Christian origin that allows us to separate our material beings from our spiritual beings, things like our mind from our brain, and so on and so forth.

And from that model, we spiral into separating humans from nature, some humans from nature, some humans are turned into nature, and we can exploit them by turning them into the geological semantics of labor force. And they become force and they become numbers.

And so, this is a historical practice that we’ve been sort of internalizing into our technologies. But that’s only one possibility when we are accepting this. And I wonder, I just wonder, this is not a given. Maybe life should not be abstracted in that way. Maybe we shouldn’t account for data that way. I don’t know. Just a question for you guys to think about.

Marion Fourcade: OK. This is actually the perfect question to open up this sort of tension between the wonder of seeing these wonderful images on the front page of the New York Times, and then the reality of credit scores and other types of numbers. So let me open this to the room and see. We have a question right there.

Audience 1: OK. So obviously, a great talk on AI and machine learning. And we see from society it has a positive impact, like the COVID vaccine, a big impact there, and the negative impact of algorithms on democracy and on truth. So we see the positives and negatives. I guess my question is, one, and this is to your point, Alex, how do we direct or influence artificial intelligence, machine learning, to be a force for good, number one, at each of your levels?

Second, will AI help us determine what is the truth, even if it’s uncomfortable, subject to the bias? And you mentioned the point that we need so much computing power, such big budgets, that I’m not sure our U.S. institutions can handle it, because most of it is in the private sector. We couldn’t control social media. We couldn’t control technology. There is very little regulation on technology. So there are pluses and negatives. It’s going to prolong life, but it can make life pretty miserable for a lot of people. So what do you recommend? What do we have to do in society? We know the problems. We want the solutions.

Keanan Joyner: I think if I had the solution, I might be making a little bit more money than I currently do. I’ll take a swing at it. I think that’s so well put. I think that’s exactly the tension and the problem here. I think that one thing that academia continues to have that does separate it, I think, from industry and the types of work and applications that we do, is we get this just absolutely wonderful job that’s afforded to us by the state, funding donors, etc., that allows us to try to do a thing that isn’t just about making money.

And so, I think that for me as a psychologist, what I can often spend my time thinking about is, “How do I reduce human suffering?” And so, even if I am not the one, let’s say, driving the boat, controlling social media, for example, if I can produce something of genuine value to reduce human suffering, that will get used because it has actual value; it is generated value. You created something using your human ingenuity that didn’t exist before, that has a measurable impact on people’s lives.

And those types of things tend to, in the aggregate, pop out and get used. There are tons of those that get quashed, right? There are always the continual conspiracy theories that we’ve solved cancer a bunch of times, but it got buried for the drug companies, etc., etc., right?

But overall, I think that the idea is that ideas are bigger than people. And so, when you use these tools in ways that do actually have measurable human impact that’s positive, I think that gets picked up. People use those things and there’s a motive to use those things.

You look at something like health insurance. We obviously know that with many health insurance companies, there are a lot of issues with profit and monetizing human health. Kaiser, though, in many different program plans, will pay you to go to the gym. And that was after a lot of data came out that it actually saves them money if they try to promote human wellness in this way, right?

And that’s just a very narrow example, because there are so many other examples that make that complicated. But that’s the idea, I think, of producing something of real, genuine value. Using tools that solve real-life problems can often let some of those ideas leak out and actually influence things for good. And I think that’s what we have to focus on as academics.

Alexandra Saum-Pascual: I have a slightly different answer. Can I answer that, or should we do a different question? I think the problem is the way in which society is being structured and the role of education within state governments, for example, right? I believe in public good and I believe in public governance. I’m European, so I also believe in a certain general wellness of society and people’s capability to support each other in different ways that go beyond capitalist means. Now, if we divest from education and we don’t fund public universities, and we… Actually, it is a problem that… I have little kids. It doesn’t start in university. It starts at kindergarten, private kindergarten, and so on and so forth. You know the story. You know children.

So anyways, if we start divesting and thinking that the university is not a place where we can switch and change and shape society, and we think that it is not worth it, and we think that private interests actually have a way of interacting with society that is of value, as we seem to believe, we are going to be abandoning people. So the issue that we have now is that more and more we rely on private corporations, which also rely on public infrastructure, to determine the way that we go about things, right? And those powers are turning away from being sort of distributed. Or I guess we could talk about early capitalism, how we thought that there was an invisible hand in the market because there were so many interests that they would regulate each other.

But now the thing is that those interests are sort of concentrating back into very specific sorts of lords. And we see a different map being drawn, one that resembles more a feudalist system, where only a few people own the land in which the rest of us get to play. Because if we privatize the internet and we think of it as a “private” commons, or we rely on certain satellite technology that also belongs to a private corporation that is not even run by a board of governors, but by a single person called Elon, then what are we doing? Because if we don’t like the situation, we can’t use the collective power of the people to vote somebody out, because those people are not voted in. They’re not regulated by public law.

And so, that’s where it’s at, right? I think there needs to be a general restructuring of society that puts back democratic structures, and places education at its rightful position of production of knowledge, funds it, and lets us make those tools. Because we’re good people, so we’re going to make good tools.

Joshua Bloom: I’ll just add something quickly, which is that from an educational perspective, when we put our teaching hats on, we have this opportunity to train and teach the next generation the ethical use of AI, for instance. And it’s not going to scale across the entire globe, but we are putting some of the brightest minds and the most eager minds out into the world after they leave here. And Berkeley has this particular way of viewing these complex questions and recognizing that they’re nuanced. But we can put our stamp on it.

I’ve been crowdsourcing guidelines for the ethical use of large language models like ChatGPT in the classroom. So I’ve asked other astronomers around the world to help me with this. We’ve been talking about and debating it within our faculty: “When is it acceptable to use ChatGPT on homework or for your essay?” And these are questions that are not going to get answered by industry. We are the ones that are going to lead the way, we carry the light, to decide what is the right thing, and to try to do the best we can. And it may not be perfect, and it is always subject to change, but we have this chance now with young people to put our mark on their future. And they themselves will also broaden it out to others. I just want to add that.

Audience 2: Thank you. My question doesn’t have to do with machine learning in general, but with ChatGPT in particular. As I’m sure you know, there’s anecdotal evidence, and at least one study, showing that its math skills tend to decline, indeed precipitously, over time. And anyone who’s fooled around with the program knows, too, that it has a tendency to hallucinate, to make stuff up. There’s a famous example of the lawyer who had ChatGPT write a brief, and it turned out that none of the cases cited in the brief actually existed. So I wonder if you’d just comment a little bit on these pretty significant limitations.

Keanan Joyner: Sure. Yeah. And I think that goes back to, “What is ChatGPT?” Right? It’s a generative transformer that transforms your input: you ask it a question or give it a prompt or something like that, and it goes to its set of statistical models that just predict what it should respond. And it’s saying, “What should I respond based on what I was trained on?” And it was trained on Wikipedia. OpenAI won’t tell us exactly what it was trained on, that’s the main problem, but it was trained on half the internet. And so, it does very well with coding in particular.

So one thing that… All these code challenges that get put out there, ChatGPT does excellently on those. It’s not an inherent property of the algorithm. It’s a property of how good its training data was. And when you have tons and tons of code on Stack Overflow and on any of these types of sites, it’s going to do really well with that. And so, when you ask it a question about coding, your human language input is very precise, and it can produce an answer with a high degree of accuracy, not because it knows the information, but because what’s in its training set is very specific and very clear. There’s no real ambiguity. So it doesn’t mess that up.

But then you start to ask it more vague questions. “Hey, can you explain to me the difference between this concept and this concept?” Well, it has some information that it was trained on from Wikipedia that correctly did that. But then, it also has probably Twitter data, where someone got in an argument with someone else and said incorrect things, right? We don’t know what it was trained on. I’m just hypothesizing here.

Now there’s uncertainty, because in its training set, it built an algorithm that had to do with Concept A and Concept B, and that algorithm was trained on information that was both accurate and inaccurate, because it was trained on so much data. And so, now when it gives you an output, depending on exactly how you worded your question, or which generation you’re using, because they’ve updated the training sets iteratively, 3.5 versus 4, you’re going to get a different answer to the same question. But that’s because ChatGPT doesn’t know anything. It is trying to predict the statistically most likely next word that should follow the last word, based on its models, its equations, and based on what you inputted to ask it. And that’s all ChatGPT is. It’s an amazing tool. It’s so cool. And also, it just is what it is. It is the product of its training sets.
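To make that “predict the next most likely word” idea concrete, here is a minimal sketch of a toy next-word predictor in Python. The tiny corpus, the bigram counting, and the function name are assumptions made for this illustration only; it is vastly simpler than anything behind ChatGPT, but the principle of picking the most probable continuation seen in training data is the same.

```python
from collections import Counter, defaultdict

# A toy "training set" standing in for the half-internet of real models.
corpus = ("the rooster sings when the sun rises and "
          "the sun rises when the rooster sings").split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in training."""
    counts = following.get(word)
    if not counts:
        return None  # the model has nothing to offer outside its training data
    return counts.most_common(1)[0][0]

print(predict_next("rooster"))  # 'sings' -- frequency, not understanding
print(predict_next("sun"))      # 'rises'
```

The toy model answers fluently wherever its training data is dense and unambiguous, and has nothing at all to say outside it, which is the panelists’ contrast between precise coding questions and vaguer conceptual ones.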

Joshua Bloom: I think one of the things that’s really exciting from an academic perspective is where those failure points are. What are the boundary conditions of those? And so, while we don’t have the wherewithal, we don’t have the training data, we do not have the compute power to make our own version of ChatGPT, there are colleagues of ours who are asking those questions: “Where are those boundaries? When does it hallucinate? When does it make up facts?”

And one of the things that I heard recently that’s fantastically interesting is asking this question, “Do large language models have a theory of mind?” And you can ask ChatGPT very simple questions that come from theory of mind work, and it gets them exactly right. But if you change it up a little bit, it gets it wrong. And so, I’m excited about the fact that my colleagues are excited to start asking those questions about, “Where are those failure points?”

The last thing I’ll say on that is that those are failure points as of today, what month are we in, October 2023. By November 2024, there’ll be one that doesn’t hallucinate nearly as much and can finally do math problems to a level that’s good enough. But more broadly, in the context of machine learning models, the challenge that we have as scientists is that they always produce answers. That’s one of the most dangerous things about them. It always gives you an answer. You throw in data, you ask it a question, it’s going to give you an answer.

So I think maybe one of the forefronts, certainly for us, but probably more within the ChatGPT world, will be to start building meta models that can answer the question, “How confident am I in what I just said to you?” That’s not far away. And a year from now, we may come back and you may say, “Oh, I’m really mad because it can’t do abstract algebra on a certain type of manifold.” Right. OK. Fine. But it’s way better at math. It’s going to get way better at math. Whenever there are these failures, if there’s a reason to fix them, there are now incentives to do so.
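One way to picture the kind of meta model the panel is gesturing at: the simplest possible confidence signal is how probable the model itself thought its own words were. The sketch below is an illustration under the assumption that per-token probabilities are available from whatever model produced the answer; it is not a feature of ChatGPT or any particular product.

```python
import math

def confidence_score(token_probs):
    """Geometric mean of per-token probabilities: a crude 0-to-1 confidence proxy.

    token_probs: probabilities the model assigned to each word it generated
    (assumed available for this illustration).
    """
    if not token_probs:
        return 0.0
    avg_logprob = sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_logprob)

# A fluently generated answer versus a hesitant one (made-up numbers).
print(round(confidence_score([0.9, 0.8, 0.95]), 2))  # ~0.88
print(round(confidence_score([0.4, 0.2, 0.35]), 2))  # ~0.30
```

The catch, and the reason a real meta model would need more than this, is that such a score measures fluency rather than truth: a hallucinated citation can be generated with very high token probabilities.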

Alexandra Saum-Pascual: OK. I don’t know if we still have time, but I will add something. No?

Jennifer Johnson-Hanks: Yes. Go ahead.

Alexandra Saum-Pascual: OK. Going back to ChatGPT as well and how my students love it. Something that I tell them all the time, an example, I think this is from Judy, is that ChatGPT will be very good at predicting two things that go together, like a rooster and the sun. But it won’t be able to know if it is the rooster’s singing that triggers the sun, or if it’s the sun that makes the rooster sing, right? So causality is something that is just not in there.
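The rooster-and-sun point can be made with a tiny simulation: two events that are both tied to the time of day co-occur in every observation, yet nothing in the resulting counts says which one causes the other. This is an illustrative sketch with made-up numbers, not an analysis of any real model.

```python
import random

random.seed(0)
observations = []
for day in range(1000):
    hour = random.uniform(0, 24)
    sun_up = hour > 6          # the sun is up after 6 a.m.
    rooster_crows = hour > 6   # the rooster also starts at 6 a.m.
    observations.append((sun_up, rooster_crows))

# The two events agree in every single observation...
agreement = sum(a == b for a, b in observations) / len(observations)
print(agreement)  # 1.0

# ...but the counts alone cannot tell us whether the crowing triggers the
# sunrise or the sunrise triggers the crowing. A next-word predictor trained
# on such co-occurrences inherits exactly this blindness to causal direction.
```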

So this needs to give us pause. But it is also very interesting to look at the output of this machine and do a cultural analysis on it. Look at it as any other cultural product and see how it responds to a particular moment in time, to a particular group of people, and see how that output reflects ideologies and values. You can always see. You can ask it anything and see if it’s biased in this way or that way, and that will be reflective of society at that time, of a particular type of society. Again, not everybody is on the internet. Not everybody speaks a language that is on the internet. So you could do that type of cultural analysis and learn something, just as you would from reading a poem or a novel and seeing 19th century desire or whatever. So that’s an interesting thing too, when looking at the problems and hallucinations, right? We could look at it as just a cultural object, a product of human culture.

Marion Fourcade: All right. Unfortunately, we are out of time. I saw a lot of excitement in the room and in the questions, but keep it going. This is why we’re here. A sense of wonder, a sense of the importance of critique. We’re right there on the panel tonight, and I just want to invite Jenna back. She will close this event.

Jennifer Johnson-Hanks: The closing is very simple. It is a very warm thank you to these exceptional panelists.

[Music: “Silver Lanyard” by Blue Dot Sessions]

Outro: You’ve been listening to Berkeley Talks, a Berkeley News podcast from the Office of Communications and Public Affairs that features lectures and conversations at UC Berkeley. Follow us wherever you listen to your podcasts. You can find all of our podcast episodes, with transcripts and photos, on Berkeley News at news.berkeley.edu/podcasts.