Berkeley Talks transcript: Mapping the brain to understand health, aging and disease

March 11, 2022
Listen to Berkeley Talks episode #136: ‘Mapping the brain to understand health, aging and disease.’
[Music: “Silver Lanyard” by Blue Dot Sessions]
Intro: This is Berkeley Talks, a Berkeley News podcast from the Office of Communications and Public Affairs that features lectures and conversations at UC Berkeley. You can subscribe on Spotify, Apple Podcasts, Acast or wherever you listen. New episodes come out every other Friday.
[Music fades]
Christian Gordon: (First part of intro not recorded) …celebrating the 100th anniversary of UC Berkeley Psychology. Thank you for giving up your time to join us for this part of the series. We will be running a continuing slate of events over the course of the year, and hopefully conditions allowing, have a culminating celebratory event at the newish home of Berkeley Psychology at Berkeley Way West. We are very honored tonight to have a presentation from professor Jack Gallant. And to help us set the stage, we are very fortunate to have a senior faculty member of UC Berkeley psychology joining us to introduce our speaker. A few housekeeping notes before we jump into it. We suggest speaker view and when our program starts, we will pin our speaker to the screen for ease of viewing. If questions occur to you throughout the presentation or when we get to the Q&A portion, please feel free to drop those in the chat.
Thank you to everyone who submitted questions in advance. Professor Gallant has reviewed them, and he will be addressing some of those at the end of the presentation. As we get ready to begin, I would be remiss if I did not thank and recognize one of our attendees. We are having one of the most successful years ever in fundraising and development in the social sciences in UC Berkeley psychology. And we owe a great deal to the leader of the department who started this effort when I joined six years ago, helped us to build this strong foundation and makes possible so much for our future.
Professor Ann Kring, we see you there. Thank you for joining us and thank you for all that you’ve done for UC Berkeley Psychology. With that note, I’m very honored to introduce professor Frederic Theunissen. My colleague Anya will be sharing a link to his lab at UC Berkeley, if you’d like to check that out. Professor Theunissen is part of our behavioral and systems neuroscience group. He also is connected with our Cognition and Developmental Psychology Group, and he runs the Auditory Science Lab at UC Berkeley. Professor Theunissen, thank you for being with us and the stage is yours to introduce professor Gallant.
Frederic Theunissen: Thank you. It is really with great pleasure that I introduce my colleague, collaborator and friend professor Jack Gallant. And when Jack sent me his 50-page CV, I realized that the most difficult aspect of introducing him was to keep this introduction short and to the point. So, I’ll try to do this. Professor Gallant is currently the Chancellor’s Professor of Psychology and Class of 1940 Endowed Chair at UC Berkeley. He’s also a professor in electrical engineering and computer science and he’s a member of many graduate programs on campus, including bioengineering, biophysics, neuroscience, vision science, as well as the co-director of our Brain Imaging Center. You can see that his participation in all of these different graduate programs reflects kind of the interdisciplinary approach that he has taken in his work.
Professor Gallant obtained his Ph.D. from Yale University in 1988, and did his postdoctoral work at the California Institute of Technology and at Washington University in St. Louis with Professor David Van Essen. He joined our faculty in 1983. Early in his scientific career, in what I will call kind of his analysis phase, professor Gallant performed seminal work to understand how the visual cortex analyzes images to extract the features that allow us to recognize objects. And he distinguished himself from other visual scientists at that time by studying this neural processing in what we call natural scenes. Natural scenes are the scenes that you and I and animals see all the time in our complex worlds. And this was novel because, at the time, people were studying the visual system with things like visual gratings, moving bars and random dot patterns. Things very far from the things that we need to process to extract meaning from the world. And his new approach kind of led to some seminal findings in visual processing that I won’t have time to go into in detail.
And this work was already extremely difficult and challenging, both in terms of the techniques and in trying to deal with the complexity of real world images. And when brain imaging techniques became available, in particular fMRI, functional magnetic resonance imaging, I was surprised that Jack decided to go from something that was already, in my eyes, extremely complex to something that was even more complex, which is to study how we extract meaning, or where meaning is represented, in the brain of humans.
And this seemed a completely impossible task and Jack managed to do it. And it’s just, it’s really quite phenomenal and amazing. And he did this not only by further advancing the methodology itself, so the brain imaging techniques themselves, but also, and probably more importantly, by further developing the analytical methods that allow us to analyze the data and to be able to interpret these signals. These very weak signals that we get from these non-invasive techniques, to be able to understand what the human brain is doing and how it extracts meaning from the world. And meaning sounds flaky, but it’s not flaky at all.
It’s the meaning of words that you hear, the meaning of words that you read, the meaning of objects that you see in movies. All these things that we all do naturally, but are extremely complicated, and the highest level of cognition, is what Jack studied, really figuring out how it happens in the brain. Not surprisingly, this work has been published in multiple articles in the best journals, Science and Nature. He has received many prizes and honors. I’m not going to read them to you, but the most notable are Time magazine’s “50 Best Inventions” in 2011 for his brain decoding algorithm. And more recently he has been elected as the chair of the IEEE Brain Technical Community.
And in his talk today, I think, I mentioned the first phase was this kind of analysis phase, how we decompose images. The second phase, just to kind of add words to it, was the synthesis phase, how we synthesize these things to extract meaning. And I think he’s going to tell us more about this next. It’s super exciting now that we understand somewhat what the brain does. Can we use this technology that he’s advanced and developed in medical applications? And I’m very much looking forward to this next chapter in your work, Jack, and welcome professor Gallant.
Jack Gallant: Well, thank you Frederic for that incredibly nice introduction. And thank you for everybody else who showed up last December or December 2020 for an aborted effort to give this talk and who showed up again, I appreciate your faith in me, and hopefully things will work better this time. This talk, as Frederic mentioned, is going to have a medical slant, and it’s really a kind of a forward-looking talk. And it’s really a talk that’s intended to scratch an itch of mine. We can do amazing things in human brain imaging, just amazing, remarkable things that no one would’ve ever thought we would be able to do 25 years ago when human brain imaging started.
And almost none of that, virtually 0%, has transitioned to the clinic yet. So, we have this remarkable technology that is available to us. It’s not used for diagnosis or prognosis, or treatment, or to evaluate treatments of mental disorders. And I personally find this very frustrating because I think we’re leaving a lot of potential help for individuals on the table. I think we could do a lot more than we’re doing. So, today, I’m going to talk to you kind of about where the field is and where it could be, and what the challenges are in moving from basic science, which is where I built my career, into applications. And I’m going to make myself small here so that you can see the slides.
Alright. So, if we think about any kind of mental disorder — dementia, a developmental disorder like autism, obsessive compulsive disorder or depression — most mental disorders that aren’t just, say, a stroke affecting the motor system, these are fundamentally disorders of thought. They disorganize or impair people’s ability to think in a neurotypical fashion. And they therefore impact the quality of life of individuals, especially if they have to navigate in a real world that’s full of people who think differently. So, this is not a profound insight or particularly new. Obviously for thousands of years, it’s been known that certain individuals acted and thought and behaved differently than other individuals.
And for the last, certainly 500 years, we’ve known that this probably had something to do with some sort of brain disorder. And because of the need to try to understand these diseases from the point of view of brain disorders, there’s been a lot of work to try to develop brain imaging methods that would allow us to discover how people are thinking, not just by looking at their behavior, but by actually measuring their brain. And, of course, some of these were kind of aborted efforts. You are all familiar with phrenology, a widely discredited and legitimately discredited method of brain imaging that proposed that one could study the location of different mental functions by analyzing the bumps on individuals’ heads.
Strangely, the phrenologists had a good idea that they had based on a lot of older literature, really just natural experiments where people had suffered brain injuries. The phrenologists had this idea that brain function is localized in some sense. What they didn’t have was any appropriate technology for actually being able to recover the locations of different brain functions and brain networks. And that’s really what we’ve been able to do with modern neuroimaging since about 1995.
Now, because we haven’t had any method for really easily addressing brain function in the clinic, we’ve relied on three other sort of pillars for diagnosis and monitoring treatment of brain disorders. The first is obviously clinical assessment of behavior. And obviously clinical assessment of behavior is fundamental. All of these brain disorders — schizophrenia, strokes, dyslexia — all these brain diseases and developmental disorders affect behavior. And if there’s absolutely no manifestation of a behavioral problem, and the patient themself is not complaining about any kind of impairment, then you would have no idea that anything was wrong, and so nothing would be done. No one would even consider doing anything. So, obviously clinical assessment of behavior is the primary thing that any kind of medical clinician or medical scientist will use to investigate these kinds of brain disorders and developmental disorders.
The second is brain anatomy and physiology. We have MRI scans, probably a lot of you in the audience have been in an MRI machine and had your brain scanned for one reason or another. These machines work very well. They’re designed for high throughput applications. So, you can go to the hospital and get an MRI, and the technician will be able to give you a pretty good image of your brain. And it will tell you if, for example you have, say, a tumor in your brain or a venous malformation, or something else that might be causing you problems. So, that’s useful. More recently in a growing field in medical diagnosis of brain disorders is molecular assays. And people have worked very hard on this because there’s some indication that certain kinds of brain disorders have either a molecular cause or a molecular correlate that goes along with it.
And the most notable of these is Alzheimer’s. Alzheimer’s, it seems, is at least partly a disease of protein folding in the brain. And those misfolded proteins can be assayed using appropriate methods. But the diagnostic power of the molecular assays so far isn’t very strong. It’s not as strong as one would like, but it gets better all the time. But you notice there’s a fourth pillar here, functional brain mapping. That’s a missing pillar. And you might say, “Well, why do we really need functional brain maps? After all, we have a clinical assessment of behavior.” If somebody is having delusions and they’re hearing voices, yeah, sure you could do a functional brain scan, but you can just ask them, “Are you hearing voices?” Or, they might just tell you, “I’m hearing voices that aren’t there. And I think they’re real. And it’s disconcerting to me.”
So, why do we really need brain mapping? There’s a simple reason. And I like to describe this as brain metamers. The brain is a big place. There are 80 billion neurons. All of these neurons are connected together in very, very dense networks. There are hundreds of different brain areas. Each area has a 50% probability of being connected to another brain area. Information percolates around in the brain in these very complicated dynamical patterns and then it emerges in behavior.
But behavior is a very low dimensional signal relative to brain activity. So, you can have many, many different brain states that all lead to the same observed behavior. And the problem is, if you only have behavior to try to categorize people, you’re going to put people with very different brain disorders into the same bin. And that is going to be a relatively weak way to diagnose people and a relatively weak way to develop treatments, because any diagnosis or treatment is going to be overly broad.
It would be much better to be able to target individual patterns of brain activity and brain function to individual behaviors, in an individual subject level. And that’s really lacking currently. So, that’s my motivation for all of this and why I’m interested in it.
Now, I just want to show you a little movie that kind of illustrates the power of modern fMRI. This is a brain. Obviously the brain is very inconveniently folded up inside your head, but we can computationally inflate it, to something like the size of a beach ball, and then we can flatten it out. And if we did this to your brain, we would end up with something about the size of a large pizza. Now on the unfolded brain here, we can draw the pattern of functional brain activity. Red here means more brain activity relative to the mean, and blue is less than the average brain activity.
Just to orient you on this particular brain map. The middle of the slide is the visual system. The far left and far right are prefrontal cortex. This strip here kind of at the top and the middle is the motor strip. And down here at the bottom, kind of in the middle, is the auditory system. Now, in this demonstration, we have somebody simply watching a movie. There’s music playing in this movie, but it’s not important for our discussion so I turned it off. You notice this movie just cycles through a bunch of short scenes. There may be some thematic relationship between them. It’s not really completely obvious. And the subject is asked to simply watch the movie, and while they do that, we record their brain activity. And the patterns of brain activity are shown on the bottom. And what I hope you will be able to see is that the patterns of brain activity are very, very complicated and dynamic.
They change very rapidly and they depend in some complicated, pretty inscrutable way on the scene that is being shown. Certain scenes like this evoke a lot of brain activity in certain regions of the brain. Other scenes that you saw earlier, evoked very, very little brain activity. So, fundamentally, the problem of brain mapping is a problem of correlating stuff that happened in the movie, the things that you saw, or the things that you thought about.
For example, maybe you just started a new relationship and this scene of kissing gives you fond memories and makes you feel good. So, you have brain activity that’s kind of indirectly related to what’s going on on the screen. We want to find relationships between the explicit and implicit sort of cues and signals in the stimulus and the task, and the brain activity that we measure. The problem is the brain activity, as I said before, is complicated. We’re measuring about 100,000 points on the surface of the cortex here, and the relationship between the brain and what’s going on is poorly understood at best.
Okay. So, rather than take you through all the different kinds of projects we have in the lab, I thought today, given that we’re interested in disorders of mental function, and those are fundamentally disorders of thought, I thought I would talk about one line of research that goes back about 10 or 12 years in my lab that tries to understand how thought is represented, basically. How is conceptual knowledge represented in the brain? How do you extract the meaning of the world and impute meaning to events in the world and use that conceptual knowledge in the service of plans and goals to facilitate behavior? I’m not the first person who has sought to look into this issue. There’s a long line of psychologists who have tried to understand how we learn and then deploy and use conceptual knowledge. And I would say from a neuroscience point of view, up to maybe 2010, we could fruitfully divide this problem into four kinds of large domains of sub-problems. Or, let’s say, four kinds of cognitive functions that we thought, based on prior psychology and neuroscience data, probably underlie our use of conceptual knowledge.
First, there’s something called an amodal conceptual network or an amodal semantic network. And this is a network of brain areas that represents knowledge, concrete knowledge about the world, about what’s happening now, about what might happen in the future, in a form that’s not tightly tied to the senses. There’s also modal conceptual networks, for example, in the visual system, which in humans and other primates is located at the back of the head. And the visual system has several dozen different brain areas that represent different visual concepts, like people or animals, or places, and so on. So there’s modal conceptual networks that are tied to the senses. There’s an amodal conceptual network that seems to be divorced from the senses. Then of course there is long-term memory access, because if you see an image of your wife, but you don’t remember it’s your wife, then there’s going to be problems.
You have to index, you have to constantly index incoming sensory information into your previous experience and long-term memory. So in order to label the world and understand what people are doing and what things are out there, you need to have access to long-term memory, and that’s a third component. And the final component is executive function. And executive function is important, really, because when you are behaving in the world, you’re always resource limited. This is something psychologists have known for hundreds of years. You don’t have enough resources to process all the possible information you could process. You have to focus on something. So you have systems of working memory and attention that are basically everywhere in the entire brain. And they’re constantly modulating the use of information in the service of immediate plans and goals so that you can get your work done, whatever you’re trying to do. So, these four kinds of domains have been studied using different varieties of methods. And I’m going to show you neuroimaging work on three of the four of these.
The first thing I’m going to tell you about briefly is the amodal semantic network. We can map this network. I won’t go into the details, but basically you can map this network using stories. If you have someone listen to stories, say from The Moth Radio Hour, which is just a standup storytelling festival, we can have people listen to these stories while they’re in the MRI machine. And then we can essentially correlate semantic concepts that appeared in the stories with different patterns of brain activity, using modern methods of data science and machine learning that are not at all important for today’s talk. And if you do this, you can create a functional map of the sort shown here. Now we have a left hemisphere on the left here and a right hemisphere on the right, and the lateral and medial views of the brain at the bottom.
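[For readers curious about the “modern methods of data science” glossed over here: published work from the Gallant lab describes voxelwise encoding models, typically ridge regression from stimulus features, such as semantic concepts in a story, to each voxel’s fMRI response. The sketch below is illustrative only; the array shapes, feature counts, and regularization value are assumptions, not the lab’s actual pipeline.]

```python
import numpy as np

# Illustrative (assumed) sizes: 300 time points, 50 semantic features,
# 1000 voxels. In real experiments the features would come from tagging
# story words with concepts and convolving with a hemodynamic response.
rng = np.random.default_rng(0)
n_time, n_features, n_voxels = 300, 50, 1000
X = rng.standard_normal((n_time, n_features))          # stimulus features
true_w = rng.standard_normal((n_features, n_voxels))   # synthetic ground truth
Y = X @ true_w + 0.5 * rng.standard_normal((n_time, n_voxels))  # "fMRI" data

def ridge_fit(X, Y, alpha=10.0):
    """Closed-form ridge regression: one weight vector per voxel."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ Y)

W = ridge_fit(X, Y)

# Each voxel's fitted weights say which semantic features drive it.
# Projecting the weight matrix to a few dimensions and coloring each
# voxel by its projection gives maps like the ones described in the talk.
pred = X @ W
r = np.array([np.corrcoef(Y[:, v], pred[:, v])[0, 1] for v in range(n_voxels)])
```

In practice the model is fit on held-out data and prediction accuracy per voxel (the correlations in `r`) determines which parts of cortex carry semantic information at all.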
And at each point in the brain, you notice we’ve assigned it a color. And the color that we’ve assigned at each point in the brain indicates the kind of semantic information that is represented by activity at that point in the brain. So, it turns out that all of the red spots that you see here are all locations where you process social information. Information about your mother or somebody’s father, or a divorce, or a death and so on. Green areas are locations where you represent visual information about the world. Texture and color, and size, and shape. Purple areas are areas where you represent mental concepts like justice and truth, and love, and beauty. Things that don’t have a physical instantiation in the world, but you can conceptualize them, and they often come up in stories. So there are actually 2000 different semantic concepts mapped on this map.
I’m just showing you a very low dimensional projection of these here, but what you can see right away is these maps are very complicated. And in fact, we have an online brain viewer that you can play with, it’s on my lab website. I should have mentioned my lab website, it’s a bit out of date, but this brain viewer is still up there. And we can click on a piece of the brain and we can find out what concepts are represented at that location, as shown down here on the right. So, you notice now I’m clicking on several locations in the brain that are red, and all of the concepts that are represented here are concepts related to families and family relationships, married, divorced, mother, father, husband, brother, and so on. And there are a lot of these red locations, as you might expect, given that social information is very, very important for human behavior. But there are other networks that we uncovered in this experiment that have to do with other kinds of quantities.
For example, there are a series of dark green patches that you will find if you use the brain viewer that all represent numbers — money, dates, times, weights, units of measure. All those quantity kinds of concepts are represented in a network of green spots that are all part of this numbers network. And there are many, many networks in the brain, and you could actually play with this Brain Viewer forever. It won’t run on your phone. So don’t try to do that, but it will run on any modern computer. Alright. So, after playing this game and mapping thousands of semantic concepts, we came up with two very general principles, and they’re shown in this map here. This is a map for the concept of dog, divorced from all other concepts. And red locations are locations in the brain where dogs are represented.
In other words, if a dog comes up in a story or you see a dog, these red areas will become activated. And blue is all the locations that are indifferent to dogs. And what you notice is that dogs are represented in many different locations in the brain. In fact, for every concept you can come up with — dogs, garages, cars, the president — there will be multiple locations in the brain that become activated when that concept comes up in this amodal semantic network. And each location, if it’s part of the amodal semantic network, represents a family of related concepts.
I should mention, going back, that the interesting thing about this from the point of view of you just being a conscious being is you are not at all aware of this. Nobody’s aware of this. If you think of a dog, it’s just a dog; it emerges into consciousness as some sort of unitary concept. You can focus down on individual pieces of the dog, but it’s easy to think of dogs and the family of dogs and all the related information about dogs as kind of a whole, but in the brain, it’s not represented that way at all. It’s represented as this fragmented kind of map of activity. And frankly, nobody knows how consciousness binds all of this together.
Alright. Now, I also mentioned to you that there are modal networks that are specialized for representing sensory information. And the difference between the amodal network and the modal network is shown here on this map. Blue regions here are the regions that represent amodal semantic information. In other words, it’s not tightly tied to the senses. And red regions are regions that represent visual semantic information. In other words, they will represent the semantic category, the concept, but it has to be presented visually. So, take a dog. These areas, if they were dog selective, would respond to a picture of a dog or a movie of a dog, but they wouldn’t respond to the word dog or to the sound of a dog. And you can see that the modal networks are separate from the amodal networks.
And in fact, think about normal daily experience. Well, I’m walking through the world, I see a dog, it goes into my visual system. Somehow that must get transmitted to my amodal semantic system, where I will be able to access more than merely visual information about the dog, which allows me to have this more integrated concept of a dog. And in an amazing piece of research by Alex Huth and Sara Popham in my lab, they discovered that there are a large set of parallel semantically selective channels that shunt information from the sensory systems, in this case vision, into this amodal sense-free system. So, this black line here on these maps is the boundary of occipital cortex. Occipital cortex is the visual lobe, and it’s at the back of the head. And at the anterior border of the visual lobe are a bunch of semantically selective little patches of cortex that all represent different concepts, dogs, and people, and planes and guns and everything.
And so, these patches will be activated if there’s a visual pattern and it’s the right semantic category. Immediately in front of this black line is the amodal semantic network. And this network represents information not only if it’s from a picture, but also if it was spoken, or you heard a story about it, or you heard a dog bark. And interestingly, the semantic category of the amodal network immediately anterior to the visual modal network matches precisely along this border. So it looks like, basically, when you have a higher order visual area that represents, say, a face, it feeds immediately into a very nearby amodal semantic area that represents kind of general knowledge about faces divorced from purely the visual experience of a face.
Alright. I mentioned executive function. These networks have to be dynamic because you don’t have enough brain to represent all things at all times. If you go home and your cat Fluffy is missing, your number one job is to find Fluffy, and your brain does what it can to become a giant Fluffy detector. Where is Fluffy? Where is Fluffy? I start looking around the world for my cat Fluffy. I start thinking, where does Fluffy hang out? Could she be up on top of the curtain where she climbs sometimes? Could she have gotten out of a window? I start running all these plans and schemes. I do my visual search procedure as efficiently as I can. I call out, my entire brain becomes optimized for this giant Fluffy detection task. And when that happens, basically all of the representations of information in your brain, in this amodal network, shift as much as they can to represent concepts related to the cat detection task, the current task. And we can see this by having people watch movies. And in one condition of the movie, we have them search for humans.
And every time they see a human, we have them report they saw a human. In another condition, we have them report when they see a vehicle. And in this particular color map, I’m only mapping two semantic concepts. Red is humans and green is vehicles. And you can see that when they’re searching for humans, the brain becomes largely oriented toward humans. And when they’re searching for vehicles, the brain becomes largely oriented toward vehicles. This is actually called a tuning shift in our literature.
There’s another name for this in another part of the neuroscience literature: mixed selectivity. But what ends up happening is essentially the way the brain represents information is modified according to the current task. We think this probably has something very deep to do with learning. After all, if you keep doing a task over and over, and over again, then what will happen is you’ll become more efficient at making these network changes. And they will move from being short term plastic changes to being long-term changes and long-term memory.
Now, I told you about the amodal system, the modal system. I told you a little bit about executive function. There’s one thing I haven’t told you about yet, which is long-term memory. Long-term memory and its intersection with the amodal conceptual network is interesting, because we have known for decades that there is one particular brain disorder that has an enormous effect on people’s ability to link current semantic concepts and experience with long-term memory. And that brain disorder is called ATL dementia, anterior temporal lobe dementia. The temporal lobes are the parts of the brain that kind of stick out down here below your nose. And here in these brain maps, we’re not looking at the side of the brain as we’ve been looking at, we’re looking at the bottom of the brain. I might be able to go back here a few slides.
The temporal lobe would be this part of the brain here. And so, I’m looking up at the bottom of the brain here. This is one temporal lobe, and this is another temporal lobe. And the anterior temporal lobes are this part here, I’m outlining with my mouse. They’re kind of in the middle of the brain. And you can see that the anterior temporal lobes here, this is a normal subject, the anterior temporal lobes are very dark. On this map we’re just plotting brain activity elicited in the MRI machine when we record. And what you can see is some parts of the brain, like the frontal lobes here show a lot of brain activity and some parts of the brain like the anterior temporal lobes show very little brain activity. This is not a property of the brain. This is a limitation of our measurement device.
MRI is not a perfect measurement device. It has a lot of problems of various sorts that all have to be compensated for in order to get good imaging. And one of the problems that MRI has is when you try to measure a piece of brain that is very near, say, an air sac, your signal disappears. So, it’s very hard to measure the brain directly over, say, the sinuses. And it’s very hard to measure the brain near the ear canals. And so, normal brain imaging does not measure the anterior temporal lobes. And as a consequence, there’s almost no brain imaging data on the anterior temporal lobes. However, we know from lesion studies, from people with anterior temporal lobe dementia, that they have very severe problems with long-term memory. People with anterior temporal lobe dementia will have problems processing language. They will have problems interpreting visual scenes.
They’ll be able to walk around the world and not run into walls, but they really won’t be able to interact with the world. They won’t be able to plan. They won’t be able to achieve goals. Sometimes they develop some obsessive disorders involving one particular object or one particular activity. It’s a very, very debilitating, progressive disease. And it clearly indicates that the anterior temporal lobes provide some sort of link between long-term memory and this amodal semantic network. The amodal conceptual semantic network is continuously receiving information from the senses, but the ATL is a hub that is responsible for linking that ongoing sensory information and amodal information into long-term memory. We can’t really see the ATLs in neuroimaging. And as a consequence, nobody has really done imaging in the ATL yet of much use.
I mean, there have been some studies, but they just haven't told us all that much, and almost all the information we have about memory, the ATL, and the link between long-term memory and the conceptual network comes from brain lesion patients. So one effort in my lab has been to try to fix this, and we run a full-service lab. We do a lot of MRI physics and MRI development. We write all of our own software and all of our own algorithms. We're kind of a soup-to-nuts lab. When we encountered this problem, we just decided to develop a new pulse sequence that would improve the signals we can acquire in the ATL. So one of my postdocs, Matteo Visconti, along with a former graduate student of mine, An Vu, who is now a professor at UCSF, spent a year or two on this.
They developed a new pulse sequence that hugely rescues the ATL signal, as you can see in the middle slide here. We're now beginning to use this pulse sequence to scan neurotypicals, to try to map the anterior temporal lobe in neurotypicals. And we have a collaboration with groups at UCSF that study anterior temporal lobe dementia, to apply this method to patients as well. All right. Now, everything I told you about up to now was about conceptual knowledge. I just want to let you know that we do a lot of fun stuff in the lab; it's not all about conceptual knowledge. In fact, this method can be used to study absolutely anything you can think about in the brain, including fairly complicated behaviors. So, in work that's been ongoing for about the past five years, initiated by a graduate student of mine, Tianjiao Zhang, who's now a postdoc of mine.
We have been studying navigation behavior. To do this, we built a large virtual world, a couple of miles on a side. It contains hundreds of different buildings and landmarks. People spend about 10 hours learning to drive around in this town, and they learn where all the landmarks are, where all the streets are. And then we have them do a taxi driver task in the MRI machine. Obviously, when you're doing this kind of task, driving around in the city, you have to manipulate the controls, the foot pedals, the steering wheel. You have to be careful with your speed.
You have to constantly monitor the other traffic, the other cars. There are pedestrians in this world; they sometimes wander out into the street, and you have to avoid hitting them. And you have to get to your destination. So there's a lot going on. There are multiple brain networks that are all involved in doing this navigation task, and we can map them all in this one, single experiment. In fact, this is a summary slide of 33 different kinds of navigation-related information that we can recover from this one experiment in the brain of an individual observer.
Okay. Now, in closing, I just want to return to the issue we discussed at the very beginning, which is: this looks great, at least in my view. When can we apply it to the clinic? As I said before, MRI is not used for clinical applications except for pre-surgical mapping. If you're going to do brain surgery on someone, essentially you want to avoid removing any part of the brain that the person will notice is missing. So they map motor areas, and they map areas involved in speech, vision and hearing, to make sure that they minimize any resection of those brain tissues. But other than that, MRI is not used in the clinic at all. One of the reasons is, this is a typical clinical MRI image that was just published a few years ago, in 2017. And if this is the kind of data we're collecting in the clinic, then it's understandable why people wouldn't think this is a particularly useful method for them.
There are two fundamental problems in the clinic. The first is that most neuroscience studies, and most clinical studies as well, that look at functional brain maps look at group maps; they don't look at individual brains. Imagine that I was interested in what presidents look like, and instead of looking at the individual presidents and the variety among them, I just averaged them all together. And yeah, okay, I get an average president, and it looks kind of presidential, but it obviously doesn't look like any individual president. And if I tried to guess whether someone was the president by looking at this image, that would probably not work very well. So, whenever we aggregate data across a group and throw away individual data, we're always going to have problems. One of the things we need to do to move this method into the clinic is to get away from the group-based studies that MRI has relied on for so long, and do everything on an individual-subject basis.
Now, everything in my lab we do on an individual-subject basis, so we can use our technology to get around this problem. This just shows you how bad the problem is. These are four brain maps: the amodal semantic network mapped with stories in four individuals. You can see that the anatomical structure of the brain differs across these four individuals, but the functional maps differ a lot, too. They're not completely different. You'll notice everyone has two red spots in the center or middle of the brain, and everyone has a blue, yellow, blue stripe in the prefrontal cortex here.
But they look a lot different. In fact, about two-thirds of the functional data reflects individual differences, and only about a third of it reflects the group model. If you try to do brain mapping on a patient who's coming into the hospital using only a group-based model, you're going to be missing two-thirds of the data in that person's brain. And that, obviously, is not something we want to tolerate if we're using this kind of method for diagnosis, prognosis and monitoring of treatment.
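The group-versus-individual point can be made concrete with a small synthetic sketch. The data and weights below are purely illustrative, not real fMRI: each simulated subject's map is a shared group component plus a larger individual component, and the group-average template then explains only the shared fraction of any one subject's variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 5000

# Each simulated subject = a shared group map plus a larger individual
# component (weights chosen so roughly one-third of the variance is shared).
group_map = rng.standard_normal(n_voxels)
subjects = np.stack([
    np.sqrt(1 / 3) * group_map + np.sqrt(2 / 3) * rng.standard_normal(n_voxels)
    for _ in range(n_subjects)
])

template = subjects.mean(axis=0)  # the "average president"

# How much of one subject's map does the group template explain?
s = subjects[0]
r = np.corrcoef(s, template)[0, 1]
print(f"variance explained by group template: {r ** 2:.2f}")  # roughly one-third
```

Under these assumptions, the remaining two-thirds of the subject's map is simply invisible to any group-based analysis.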
So, we would like to do individual brain mapping, both for medical diagnosis and monitoring of mental health, and also to build brain-machine interfaces for people who have communication disorders. To do that, we have to solve not only the individual-brain problem, which I've already told you about at length in this talk, but also a second fundamental problem with the clinic, which is time. If I want to make a really great brain map of one of my graduate students, say, I can ask them to go into the MRI machine repeatedly, for a total of maybe five or six hours over several weeks. I can get as much data as I want from them, and we can make as good a map as can possibly be made. In the clinic, everything is aliquoted in 20-minute increments.
You have to be able to solve the problem in 20 minutes, and if you can't do it in 20 minutes, it's probably not going to get done. So we need to create brain maps of roughly the quality you saw earlier, but in a 20-minute span. We've been working very hard on that, and the magic sauce there is machine learning. Essentially, we get brain maps from a large number of individuals, and we create a machine learning algorithm that understands the variability in brain maps across individuals. Then, when we get a new person, we can take a very small number of data points from that person, and the algorithm will essentially find the person in the database whose brain is most similar to the new subject's brain, the patient's brain. We use that as a prior to infer the missing data that we can't collect, because we only have 20 minutes.
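The inference scheme described here can be illustrated with a toy nearest-neighbor version: match a new subject's small sample of measurements against a database of fully mapped brains, then borrow the best match as a prior for the unmeasured voxels. The actual method uses a variational autoencoder rather than a single nearest neighbor, and all names and data below are made up for illustration.

```python
import numpy as np

def infer_missing_map(partial_map, measured_idx, database):
    """Fill in unmeasured voxels of a new subject's functional map.

    partial_map:  values at the voxels we had time to measure (the 20 minutes)
    measured_idx: indices of those voxels within the full map
    database:     (n_subjects, n_voxels) array of fully mapped reference brains
    """
    # Find the database brain most similar to the new subject,
    # comparing only the voxels we actually measured.
    dists = np.linalg.norm(database[:, measured_idx] - partial_map, axis=1)
    nearest = database[np.argmin(dists)]

    # Use the nearest brain as a prior: keep the measured values,
    # borrow the rest from the most similar fully mapped subject.
    full_map = nearest.copy()
    full_map[measured_idx] = partial_map
    return full_map

# Toy usage with random stand-ins for real brain maps
rng = np.random.default_rng(1)
db = rng.standard_normal((50, 1000))                 # 50 fully mapped "brains"
truth = db[7] + 0.1 * rng.standard_normal(1000)      # new subject resembles brain 7
idx = rng.choice(1000, size=100, replace=False)      # only 100 voxels measured
est = infer_missing_map(truth[idx], idx, db)
```

A variational autoencoder generalizes this idea: instead of copying one database brain, it learns a low-dimensional space of plausible brain maps and infers the most likely complete map consistent with the few measured points.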
And as you can see from the semantic maps at the top and the bottom here, this method actually works amazingly well. This uses a machine learning tool called a variational autoencoder, but that's not important for today's talk. I just want to let you know that this does not seem to be a lost cause; it seems like it should be possible to leverage a small amount of clinical data to make inferences about data that we didn't actually collect. All right. The last thing I'll mention is resolution. Currently in the clinic, the MRI scanner is typically one Tesla. The scanner we have here at Berkeley is a three-Tesla scanner; that's just a measure of magnetic field strength. The scanner we have here at Berkeley can measure a cube of brain tissue about 2 x 2 x 2 millimeters on a side.
Now, the problem is that's kind of large for brain tissue. The brain is actually organized into little things called columns that are about 500 microns across. So we would like to reduce the size of the voxels, the volumetric pixels we're recording, to something on the order of 500 microns. And over the past three or four years, UC Berkeley has made a substantial investment, and I do mean substantial, in a new project that is funded even more substantially by the National Institutes of Health, directed by Dr. David Feinberg here at Berkeley, to create the next generation fMRI scanner.
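The arithmetic behind this resolution gap is simple, using the figures from the talk: a 2 mm voxel is 2,000 microns on a side, so each voxel mixes together several 500-micron columns in every direction.

```python
# Back-of-envelope arithmetic for the resolution argument.
voxel_um = 2000      # current 3T voxel edge: 2 mm = 2,000 microns
column_um = 500      # approximate width of a cortical column
next_gen_um = 400    # projected next-gen scanner resolution

columns_per_edge = voxel_um // column_um    # 4 columns along each edge
columns_per_voxel = columns_per_edge ** 3   # columns mixed into one voxel
print(columns_per_voxel)                    # 64
print(next_gen_um < column_um)              # True: finer than one column
```

So a single conventional voxel averages over roughly 64 columns, while the projected 400-micron resolution falls below the width of a single column.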
And when this scanner is completed, hopefully this summer, it is projected to achieve a resolution of about 400 microns, which will allow us to home in and look at the individual columnar structure of the brain. And this will provide a really great way for us to bridge between animal studies, which are at the local circuit level, and human studies, which have always been at the whole-brain-area level. So we're very, very excited about this next-gen scanner. This scanner, I should mention, will probably never pay for itself, because it's always going to be kind of an MRI physics experiment, and only the most dedicated people are going to use it, but it's going to be great for science.
All right. That's about it for my talk. Thank you for paying attention; I hope I didn't drone on too long. I'd just like to thank all my lab members, the government agencies, the military agencies, private industry and the Weill Neurohub that have supported this research. And I do want to beg you: if you are at all interested in advancing basic human cognitive neuroscience, and in bridging from basic cognitive neuroscience to the clinic so that we can improve outcomes for people with developmental and disease-based mental problems, then please consider supporting either the UCB Brain Imaging Center or the UCB Department of Psychology. Thanks very much for your time.
Christian Gordon: Thank you, professor Gallant for that outstanding presentation. I can hear the audience clapping. Thank you so much for your attention, everyone. Jack, we have some questions in the chat. I think you have access, too. If I could ask you to dive into what looks interesting, but could you, before doing that, if I could offer the first one, could you say just a little bit more about what’s needed to advance your work? We’ll be sharing resources with our attendees, links and information. What specifically is needed to push your work into the next level?
Jack Gallant: Well, I think in particular, in terms of this bridge to the clinic, we really have a chicken-and-egg problem. Clinical medicine is under very, very severe time pressure, and people in the clinic don't generally want to change their workflow. They don't want to introduce any new methods if there's not an overwhelming reason to do so. If it's a method that's an evolution of something they understand, then that's fine, but if it's something new, that's much more fraught. So, it actually turns out to be kind of tricky to do the spadework we need in order to move this into the clinic. Now, I do have collaborations with people at UCSF, Kate Rankin and Maria Gorno-Tempini, on ATL dementia.
We have a new pilot grant that we just got to work on dyslexia, along with the UCSF Dyslexia Center. But those grants are all patient-focused. And as I mentioned at the very end of the talk, we have to use machine learning to leverage the patient data, to basically fill in the inevitably missing parts of the patient data. And that's going to require collecting data from a large number of neurotypical subjects.
So, we need to be able to support individual differences studies at scale, where we're looking at hundreds of individuals in these very high-dimensional, complicated mapping studies. And that's really, from my point of view, the biggest gap, one that we can't plug with any existing NIH funding tool right now. They're happy to pay for clinical things that involve patients. They're happy to pay for small-scale basic neuroscience research. But this kind of bridge between the two, that's where things are lacking.
Christian Gordon: Thank you. Thank you for clarifying that.
Jack Gallant: Christian, would you like me to go through the slides that people submitted before or the questions that people submitted before, the new questions, I can do either one?
Christian Gordon: Yeah. I think if you have some that were submitted in advance that you'd like to address, let's start there.
Jack Gallant: Well, there’s only a few questions that are live. So yeah, it shouldn’t be a problem.
Christian Gordon: Okay, great.
Jack Gallant: Okay. Somebody asked: How does dyslexia show up in functional brain maps? As I just mentioned, we just received a grant to do this. It's an internal grant from UCSF and Berkeley. It might be the Weill Neurohub. No, I think it's from the Schwab Center for Dyslexia. And we are going to start looking at this issue. We've looked at reading ability in neurotypicals, and we have a lot of information about how word forms are created out of the multiple steps of visual processing that happen in early vision. We know how that works in the neurotypical brain. We don't have any idea how it works in dyslexics, because no one has ever used this high-resolution, high-dimensional functional mapping method in dyslexics. All the functional MRI data on dyslexics just asks: What part of the brain is activated in dyslexics, and is it activated more or less than in neurotypicals? And that's not informative as to what is really happening in the dyslexic brain.
Somebody asked: Can this technology be used in the diagnosis and treatment of dementia? Well, I'm glad you asked that, because that is also my hope. If you noticed, at the end of the talk I described the driving experiment. There's a reason we started working on navigation. It's a common clinical report that when people first start feeling the effects of dementia, one of the first problems they have is that they lose the ability to navigate around their town, even if they've lived there for years and know it well. They basically can't find their way anymore. So we think that navigation may be a very sensitive biomarker for early onset of dementias such as Alzheimer's. And we are developing this navigation method, again, in neurotypicals, because for all of these brain mapping problems, we have to find out how the neurotypical brain does it before we can understand how a diseased or damaged brain does it.
We're doing this in neurotypicals now. Getting over the hump to doing this in dementia patients is probably not practical right now, because, as I mentioned, it takes about 10 hours for people to learn the town. But what we're trying to do is find the funding and collaborators to build a virtual Berkeley or a virtual San Francisco, in the computer, in VR, so that we can have people navigate in an environment they already know. Once we have a Berkeley or San Francisco virtual reality environment, then we can put people who are at risk for dementia in the MRI machine and start tracking their brain networks for this and other tasks. Someone asked: What, if anything, does this teach us about plasticity after rehabilitation? Well, as I mentioned, attention has a huge effect on plasticity on a very short time scale.
If you attend to what I'm saying, then as my speech evolves, your attention is being directed to different concepts, and those concepts are represented preferentially in brain networks, at the cost of representing other, irrelevant information. This happens all the time, online; it's a fundamental feature of mammalian brains. And it's something that's completely missing from machine learning, because in machine learning today, usually what happens is you train your AI, then you deploy it, and it doesn't learn after that. Humans learn all the time, constantly, at all timescales, and attention-induced plasticity of brain representations is just the shortest time scale at which you see plasticity. But we can certainly see plastic changes in brain networks over time using these functional mapping methods. In work not done in my lab, but nice work done in other labs, they've mapped brains while people learned musical instruments.
The first study of this did brain mapping while people learned to play the piano, and they found that the representation of the hands increased over months as people learned to play. The question from that was: Well, perhaps this doesn't have to do with playing the piano; maybe it just has to do with physical activity. So another group did a study of the violin, which is a very asymmetric instrument, because one hand is fretting and one hand is bowing. In that case, you also saw that as people learned to play the violin, their hand representations enlarged, but they enlarged very differently in the two hemispheres, depending on whether the hemisphere's task was to bow or to fret. So this is definitely something that could be deployed for rehabilitation. It has never been used in that way so far.
Could brain imaging be used to show changes in brain networks due to a change of diet? Certainly. There's only one caveat I want to point out, because I tend to be a brutally honest person. fMRI is not a perfect method; it has problems. One of the problems is that it's not measuring neurons directly. It actually measures metabolism. Neurons are little chemical engines: a neuron takes in sugar and oxygen, burns the sugar, and creates a fuel that it uses to power its little cellular machinery. And in MRI, we're actually measuring the change in oxygen level associated with that neural metabolism. So, if you were going to look at diet and fMRI, you would have to tease out the various contributions of diet to the various components of the fMRI signal, some of which are directly related to the brain, and some of which might not be directly related to brain function. So that's just a caveat. It could be done; it would just have to be done very carefully.
Should functional brain scans be a routine part of preventative care? Functional brain scans are expensive, so I think at this time, no. We, here at UC Berkeley, pay $600 an hour to do our brain scans. If you go to a hospital that has much greater overhead and doctors who drive Ferraris, then you will pay at least $1,000 an hour for your brain scan. It's not cheap. So should it be used as part of preventative care? Probably not yet, but the cost of brain scanning is going down continuously, and I think that is probably something we should shoot for. How does the next-gen scanner relate to the work being done at the Allen Brain Institute?
For those of you who are not familiar with the Allen Brain Institute: Paul Allen, a billionaire and one of the founders of Microsoft, who died recently, founded the Allen Brain Institute in Seattle. The Allen Brain Institute has tried to do neuroscience at scale: big neuroscience, big projects. Instead of cutting up one mouse brain and looking at it under an electron microscope, the Allen Institute will cut up 10,000 mouse brains and look at them under electron microscopes. That's the scale they work at.
The Allen Brain Institute has been doing a lot of great work, but almost all of it is either in animals or in postmortem human tissue. As far as I know, the Allen Institute has no functional MRI component at all that they're working on. And if they did, they would probably have to talk to me, because our MRI methods at Berkeley are at least as good as, or better than, any other group's in the world, and we can map more information more quickly here than anywhere else. So, I would probably know about it if they were doing it.
Someone asked: What is the mechanism and substrate of consciousness? I implied the answer to this earlier in my talk: I have no idea; no one has any idea. Consciousness is a big mystery. The best studies on consciousness, in my opinion, have used anesthesia, where they've titrated people's level of anesthesia and measured brain activity as they've passed repeatedly in and out of consciousness.
Those are great experiments, but they really haven't told us what consciousness is. When the brain is working more as a cohesive network, where the areas are communicating well with each other, then you tend to be conscious, and that tends to happen when you're not under anesthesia. When you go under anesthesia, the brain doesn't work so well: neurons don't fire as much, and the different parts of the brain become decoupled from one another. It doesn't seem to work as well as a network, and you become unconscious. But that's not a very deep observation. It's just an observation.
What is the intersection between this research and studies of psychedelics? I wanted to get to this question because some of you may have heard, and if you haven't, you should check into it, that Berkeley just started a new center for psychedelic studies, and several members of the psychology department are involved in that center. It is going to try to use basic psychology and neuroscience research, both in humans and in animal models, to better understand the molecular and systems-neuroscience effects of psychedelics and how they alter our thought. So there are plans to do MRI experiments along with the psychedelic center, and I have plans to do that myself. I am not optimistic about how things are going to turn out, and the reason is something known as ground truth. If I have you listen to a story, and I map your brain while you're listening to the story.
I know what story you heard. I controlled the story; I have ground truth about it. So if I think of the story as the X variables in an equation and the brain activity as the Y variables, I know both the X and the Y variables, and I can make an equation that relates them to one another. If I give you a psychedelic drug and you take a trip, I have no idea what you're experiencing. I have no ground truth. You might tell me some story about what you're experiencing, but of course that's a pale shadow of what you really experience under psychedelics. So my X variables are missing; they're just not really measurable very well. All I have are my Y variables. And that's just a really hard way to do science. Every science that finds itself in that situation is less successful than when you control both ends of the system: the things you put into the system you're measuring, and the measurements themselves. You want to control both ends.
Could this technology be used to improve cognitive function, e.g. memory? Yes. In the same way that training might improve rehabilitation, it might also improve memory. There are several efforts, including a big ongoing effort at UCSF, to use brain training, not to improve memory in neurotypicals, but as a way to stave off dementia. The early evidence is that it might work. But to be honest, there have not been a lot of really good studies of whether the results of these brain training paradigms generalize outside the scientific setting. So, if I give you a stack of five video games to train your brain, and you play those five video games for six months, you will get better at those five video games. There's no doubt about that. But then, when you go to the grocery store, will you remember your grocery list any better? Unclear. I don't think anybody really knows at this point.
Okay. So, somebody asked an interesting question, which is: What is the sample size of the brains that were measured, and what kind of range of people were sampled? When I saw the train entering the tunnel on the video, I thought of Benny Hill. I wonder how many in your sample were thinking of Benny Hill? Probably only me, since I think I'm the only one in my lab old enough to remember who Benny Hill is. We generally use graduate students as subjects, and that is bad from the point of view of generalizing to the population.
Because, as you all know, Berkeley graduate students are very, very smart individuals, and they are not your typical human. So, in that sense, our data thus far are biased. But we use Berkeley graduate students because they are very responsible subjects: they know what they're doing, they show up and they take the experiment seriously. If we're going to do any large individual differences study in the population as a whole, we will have to address this. And that should, and must, be done with standard neurotypical subjects drawn from many walks of life, not just from the graduate pool at Berkeley.
I was also asked: What is the sample size of the brains that were measured? In our lab, a typical experiment, believe it or not, is maybe five or 10 people, but we do the full analysis in each one of those people. Essentially, every brain is an experiment, and if we look at five brains, we're doing five replications of the experiment. That's in contrast to the way most neuroimaging studies work, where they'll collect data from 20 subjects, average the subjects together, and then look to see if they got a significant result. We always work at the individual-subject level. Each subject has to show the effect.
Someone else asked: Does this new mapping support Sperry's original research on left and right hemisphere distinctions? The left and right hemispheres are different; they're just not as different as might have been thought. Now, the old Sperry results actually had to do with the visual field, and visual information is indeed, in the early stages of vision, confined to one hemisphere or the other. So if you have a split-brain patient, it will be difficult for information from one visual field to get to the opposite hemisphere. But if you look at the integrated brain, the brain that's not split, then there are some differences between the hemispheres, but they're modest. They're not enormous. Most of them seem to have to do with language production, which is very heavily lateralized. The thought is that you only have one mouth, so you don't want two brain systems competing to operate the same mouth, and language production therefore becomes lateralized to one hemisphere.
Christian Gordon: Can we do one more question, and then we'll move to a close to stay on schedule? Or, I don't want to interrupt your flow.
Jack Gallant: I think there’s a new question. Well, it’s up to you. I should have mentioned that anybody who wants to email me after this talk, I’m [email protected]. I’m easy to find. You can email me and I’m happy to give you any more information I can.
Christian Gordon: Thank you, professor Gallant. So, let me once again thank all of you for joining, and thank professor Gallant. You will hear from my team as a follow-up, with all kinds of useful information: links to the video for this talk and prior talks, and our website, where you can learn more about the 100th anniversary of the UC Berkeley psychology department, in addition to upcoming activities, events and ways to be involved. It will include access to our new psychology newsletter, led by the people of the department, graduate student Juliana Chase and many others. It's outstanding.
We’ll make sure that you have access to that, as well as ways that you can get involved and get behind the work of the people of UC Berkeley psychology. I hope this program gave you an idea of how compelling, how interesting, and how important it is, and we’re grateful for your interest and for your time. And for all of you who have given of yourselves, your time, your energy, your resources, to support Berkeley Psychology and the university, thank you. Hope you all have a wonderful evening. We look forward to sending out additional information and thank you for joining us. Jack, thanks once again for your time and an outstanding presentation.
Jack Gallant: Thank you for listening. I hope this was interesting for people. Take it easy.
Christian Gordon: Thank you, and have a good night everyone.
Jack Gallant: Be safe.
[Music: “Silver Lanyard” by Blue Dot Sessions]
Outro: You’ve been listening to Berkeley Talks, a Berkeley News podcast from the Office of Communications and Public Affairs that features lectures and conversations at UC Berkeley. You can subscribe on Spotify, Apple Podcasts, Acast or wherever you listen. You can find all of our podcast episodes with transcripts and photos on Berkeley News at news.berkeley.edu/podcasts.