108 | Carl Bergstrom on Information, Disinformation, and Bullshit

We are living, in case you haven’t noticed, in a world full of bullshit. It’s hard to say whether the amount is truly increasing, but it seems that everywhere you look someone is trying to convince you of something, regardless of whether that something is actually true. Where is this bullshit coming from, how is it disseminated, and what can we do about it? Carl Bergstrom studies information in the context of biology, which has led him to investigate the flow of information and disinformation in social networks, especially the use of data in misleading ways. In the time of Covid-19 he has become one of the best Twitter feeds for reliable information, and we discuss how the pandemic has been a bounteous new source of bullshit.

Support Mindscape on Patreon.

Carl Bergstrom received his Ph.D. in biology from Stanford University. He is currently a professor of biology at the University of Washington. In addition to his work on information and biology, he has worked on scientific practice and communication, proposing the eigenfactor method of ranking scientific journals. His new book (with Jevin West) is Calling Bullshit: The Art of Skepticism in a Data-Driven World, which grew out of a course taught at the University of Washington.


0:00:00 Sean Carroll: Hello, everyone, and welcome to the Mindscape Podcast. I’m your host, Sean Carroll. In fact, welcome to a special bullshit edition of the Mindscape Podcast. Now, I know what you’re thinking, there’s probably plenty of other episodes of Mindscape which qualify as bullshit, but this is not supposed to be a description of the episode, but rather a description of what the episode is about.

0:00:20 SC: I apologize to anyone who finds the language a little bit spicy, but “bullshit” has become a technical term in philosophy ever since the publication in 2005 of On Bullshit by Harry Frankfurt. The idea was supposed to be that bullshit is different than lying. In lying, you know that there’s some truth and you’re telling the opposite of it, whereas in bullshit, it’s not that you’re trying to tell untruths, just that you don’t care what the truth is. You have something that you want people to believe, and that’s what you’re going to try to make them do.

0:00:54 SC: So today’s guest, Carl Bergstrom, is a biologist at the University of Washington who studies the role of information in biology. Not just information in some high-brow information theory sense, although there is that, but how organisms use information, how organisms share information with each other, and sometimes information that is not true. Basically, even crustaceans and birds can bullshit each other in the right circumstances. So this naturally led him to study the flow of information in social networks, not among animals but among human beings, and now, in the COVID-19 era, it’s made him an expert on the amount of bullshit that we’re hearing over social media and even from respectable news organizations about what this pandemic is doing to us.

0:01:38 SC: He’s perfectly situated to talk about this because Carl and his colleague Jevin West have been teaching a course that has now turned into a published book called Calling Bullshit: The Art of Skepticism in a Data-Driven World. It’s especially about how we can use charts and graphs and numbers and data to make a case that is basically bullshit. One of the features of the modern, highly interconnected internet world is that bullshit can both exist in new ways and travel much more frequently and much more effectively. So we need to understand both sides of the equation: how to recognize bullshit when it’s out there in the world, and also how to prevent ourselves from promulgating bullshit, even if that’s not what we mean to do.

0:02:23 SC: So this is a fun episode. You’ll even see us do a little calculation in real time, and we’ll get to big-picture issues about the nature of truth, but also issues directly relevant to the big crisis that we’re all facing right now. So remember, for those of you who don’t already know, we have a Patreon for the Mindscape Podcast. You can find a link to it at the website preposterousuniverse.com/podcast, or just go right to patreon.com/seanmcarroll and you can find it there. We have enormous gratitude to all the Patreon supporters. My endless thanks to you folks out there. You help support the podcast, keep it going, keep it vibrant, and Patreon supporters get ad-free episodes as well as the ability to ask questions for the monthly Ask Me Anything episodes. So with that, let’s go.

[music]

0:03:26 SC: Carl Bergstrom, welcome to the Mindscape Podcast.

0:03:28 Carl Bergstrom: Oh, thanks a lot, Sean.

0:03:29 SC: So you have a book that has come out called Calling Bullshit. That’s the title, is that right?

0:03:35 CB: That’s right, Calling Bullshit: The Art of Skepticism In A Data-Driven World.

0:03:39 SC: Yeah, we’re going to have to put one of the caveats on this episode saying that occasional bad language will be used. “Bullshit” counts as bad language, I suppose. But just so everyone knows, it’s an interesting journey, you’re a biologist by training, you studied how information comes into biology and the role it plays, so I can kind of see how that segues into bullshit. But why don’t you fill us in on how exactly that happened?

0:04:04 CB: Yeah, it has been kind of an interesting journey. I started out trying to understand animal communication and wrote about animal communication a great deal as a PhD student, thinking in particular about what keeps it honest and why animals tell each other the truth. Because if they lied all the time, then they wouldn’t listen to each other, but they do listen to each other, so what is it that keeps communication honest? And there’s a really elegant set of mathematical models that get at these kinds of questions, and they’re paralleled by really neat models in economic game theory, and so that was what I was really interested in as a graduate student.

0:04:39 CB: From there I moved on and started studying epidemiology, actually, and did a postdoc in epidemiology. I started trying to understand (it is kind of unfortunately prescient) how new diseases emerge from animal hosts into the human population, and what happens then, and what does that spread look like, and what can we do about it. And at the time, I was really thinking about things like H5N1, avian flu, and such. And as I started learning more and more about epidemiology, I was thinking a lot about how things spread on networks, how diseases in particular spread through human contact networks. Because if you just sort of treat a disease as this mass action situation where all of the individuals are just bumping into each other at random, you don’t get a very good picture of what disease spread actually looks like.

0:05:31 CB: And so I took this detour and spent five years thinking about the physics of networks and working on problems in network theory. As I started to emerge from that, we had all of the current cultural issues going on about the spread of misinformation and disinformation on social networks, and that brought together some of these things I’d been interested in for a very long time. I’d kept working, in a sort of back-burner way, on animal communication and on honesty and dishonesty and the economics of information, and that came together with trying to think clearly about networks and how information, or anything else, travels along networks. And with all the stress that was placed on the role of social media in the spread of fake news and such around the 2016 elections, this became something I was really very interested in, and I started trying to work in that area.

0:06:34 SC: But it must be a slightly weird situation, because you’ve written a book with your co-author, Jevin West, based on a course that you’ve been teaching at the UDub on calling bullshit, and presumably, if I know how publishing works, in between when you handed in the manuscript and when it came out, we got hit by a massive pandemic. You were sort of positioned to talk about this in a useful way, and now you’re at least a minor Twitter celebrity as the go-to place to have bullshit called on various takes about COVID-19.

0:07:08 CB: It certainly has been something that I’ve been trying to contribute to during the COVID pandemic. You’re right, we did hand in the manuscript before the pandemic broke out, and it sort of feels, in some way, as if the book was written in a different world, in a different time and place. On the other hand, it’s amazingly relevant to everything that’s going on right now, and I think you could re-write the entire book, swapping out every single example in every chapter with something that’s happening around COVID, because these are the sorts of problems that we’re dealing with. Whether it’s the misuse of numbers and statistics to try to basically push people around and intimidate people, I’ve got all these mathematical models and my models show this or my models show that, or here are the figures and figures don’t lie, which of course they do, or whether it’s the dynamics of the way that social media leaves us vulnerable to disinformation being injected by foreign actors.

0:08:11 CB: All of these things have turned out to be major players during the current pandemic and so, yeah, it has been interesting to write this book and then find sort of immediate application for it, if you will.

0:08:28 SC: I mean, there’s no shortage of applications, of course, but yes, this is a particularly sharp example of some of these things, but so let’s be scholarly about this, what do you mean by the word bullshit, ’cause I’m sure that some people have slightly different ideas in mind.

0:08:43 CB: Yeah, so when I talk about bullshit, I’m really talking about the use of language or figures or statistical arguments that are intended to impress or persuade or intimidate a viewer or reader with blatant disregard for the actual truth of the arguments that are being made. So Harry Frankfurt, a philosopher, wrote a book on bullshit, which was originally a scholarly essay in an obscure journal and then became a surprise best-selling little book through Princeton University Press. He distinguishes bullshit from lying by saying, the liar knows what she wants you to believe and tries to lead you there, away from the truth, whereas the bullshitter doesn’t even care about what the truth is. The bullshitter is just simply trying to bullshit, often trying to impress you or persuade you or something like this, without even necessarily having a target in mind.

0:09:50 CB: So I think of an example… I think we all had this experience in high school of having to write an essay about something in one of our humanities or social sciences classes. We didn’t really understand the reading that we had done, and we left it until too late, and we wanted to write something that would make it look like we’d understood it. We didn’t really give a damn what the teacher believed about whatever we were writing about, but we did want to try to make ourselves look competent, and that’s kind of the epitome of bullshit in Harry Frankfurt’s sense.

0:10:23 SC: So tell me if this is accurate or not: it’s less about lying per se, and more just not caring if what you say is the truth or not. You have some instrumental goal in mind, and you’re going to try to get there by saying whatever it takes to get that reaction in your audience.

0:10:39 CB: Yeah, I think that’s exactly right. And very often that goal, I think, has something to do with trying to impress someone or persuade them of your own competence or your own likeability or whatever that is.

0:10:56 SC: And so is this something that hooks into the biology research that you mentioned? Are there evolutionary reasons why we bullshit? ’Cause clearly it’s prevalent. It’s all over the place.

0:11:07 CB: Yeah, that’s right. I mean, we talk about that in the leading chapter of the book, and we look at sort of where these things have their origins. It may be more along the lines of true straight-up deception, more along the lines of lying than along the lines of Harry Frankfurt’s type of bullshit, but bullshit, or at least deception, goes way way back in the animal kingdom. And we talk about some examples. We talk about stomatopods, these crustaceans that have these very, very powerful claws; they threaten each other and they fight with these powerful claws and they wave their claws around. But when they’re moulting, their claws are basically soft, like a shelled lobster claw; they’re defenseless and useless. But they wave them around anyway and try to intimidate each other, and this kind of deception is often successful.

0:12:04 CB: And we talk about ravens and the way that ravens actually have a theory of mind, and some of the experiments that show that ravens can actually think about what other birds see and what they might be interpreting and the way that they use that to try to fool other conspecifics and so, of course, deception of that sort goes way, way back, because information can be… Manipulating other organisms’ access to information is a very good way to manipulate their behavior, and whether that’s done by manipulating that access to information through a communication channel that sort of evolved for the purpose of communication or whether it’s done through something like camouflage, it can be a very effective way to…

0:12:53 CB: Communication channels, in general, give you handles on other people’s behavior. How do I get you to do something, Sean, where I don’t grab your arm and move it? I talk to you and convince you that that’s a thing that you want to do. And so this goes way, way back, and then as we move toward humans, what we have that’s sort of unique in humans, or I believe unique in humans, is this infinitely expansible combinatorial language that allows us to create an infinite range of meanings and so forth. And so we can construct all of these kinds of arguments and narratives and entire cognitive edifices in the world that we can use to influence what people do and think and so on. We have all of that, and then we have a quite well-developed theory of mind about one another, so we can think about in advance how the stories that I’m telling you are going to manipulate your own belief states and how that might lead you to respond.

0:13:48 CB: So when you put all of this apparatus together with the fact that human incentives are rarely perfectly aligned between any two people, that creates the sort of setting in which we’d have a bunch of lying and a bunch of bullshitting as well.

0:14:06 SC: I guess, I’m going to guess that there is some game-theoretic reason why when it comes to communication between individuals, you usually want to be telling the truth; otherwise, people won’t believe anything you say, right? There’s some sort of optimum mixture of bullshit you should stick in there to maximize the chance that you’ll be believed and also get what you want.

0:14:28 CB: Yeah, it’s spot on and we see things like that. Yeah, you see something like this even in a poker game. It’s a…

0:14:34 SC: That’s what I’m thinking of, that’s my general paradigm for these issues. [chuckle]

0:14:37 CB: Okay, yeah, exactly. So you bluff some, but you can’t bluff always, because otherwise people will call your bluffs all the time, and it’s the same when we interact in person. And then the frequency of the interaction varies. If I do a single appearance on your podcast, I’m able to bullshit a lot more than if you’re a close colleague of mine that I deal with on a daily basis, that sort of thing.
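
The intuition that you can bluff some but not always has a classic quantitative form in simplified poker models (a standard textbook result, not something worked out in the episode): at equilibrium, the bluff fraction is exactly the one that leaves the caller indifferent between calling and folding. A minimal sketch:

```python
from fractions import Fraction

# Classic simplified-poker result: a bettor bets B into a pot P.
# The caller risks B to win P + B, so calling breaks even when the
# probability the bet is a bluff is q = B / (P + 2B). Bluff more
# often than that and calling becomes profitable; bluff less and
# folding does.

def optimal_bluff_fraction(pot, bet):
    """Bluff fraction that leaves the caller indifferent to calling."""
    return Fraction(bet, pot + 2 * bet)

def caller_ev_of_calling(pot, bet, bluff_prob):
    """Caller's EV: win pot + bet vs. a bluff, lose the bet vs. a value hand."""
    return bluff_prob * (pot + bet) - (1 - bluff_prob) * bet

q = optimal_bluff_fraction(pot=100, bet=100)
print(q)                                  # 1/3: a pot-sized bet supports one bluff per two value bets
print(caller_ev_of_calling(100, 100, q))  # 0: the caller is exactly indifferent
```

Which is Carl's point in miniature: bluff every time, and your opponent's best response flips to calling every time.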

0:15:00 SC: I wonder if… And the incentive to put false ideas about the world in the mind of other individuals is pretty obvious to me. What about putting false ideas about the world into our own minds? I know that people have sort of debated how much we trick ourselves, but clearly, we do and sometimes, either we trick ourselves or at least, we’re not very good at always getting to the right picture of the world.

0:15:27 CB: Yeah, of course, there’s a ton of self-deception that goes on. And I think a lot of it, in my personal view, is actually due to relatively ineffective application of heuristics that we have for various ways of interpreting the information we have about the world. There are certainly people who argue that, actually, you have sort of adaptive or optimal self-deception, where you can actually perform better at certain problems or tasks if you believe things that are false and so on. That, to me, clashes with an awful lot of what I think of as basic principles in decision theory. And so I’ve not found those arguments particularly persuasive, but what is completely clear is that we’re all very good at convincing ourselves of things that are wrong; I just don’t think it’s necessarily optimal that we’re doing so.

0:16:22 SC: Well, that’s interesting to me. Is it an open question or considered an open question? Is it optimal to be as correct as possible?

0:16:30 CB: Well, in decision theory, for whatever your objective function is, you’ll certainly do better if you have correct information than if you have incorrect information and are making, say, Bayesian decisions based on bad information. And similarly, in a decision-theoretic context, not in a game-theoretic context, if you’re just trying to make a decision that doesn’t impact anybody else, more information is always at least as good as less information, and usually better.

0:17:06 CB: Things get a little bit more complicated when you move into a game-theoretic context. You can definitely have situations where getting more information can hurt you. It’s less obvious to me that getting false information helps you in those kinds of situations, but there are people who argue, especially given constraints on what we can do, that humans can’t bluff well, so the only way to be effective at bluffing is to believe the thing you’re bluffing about, for example; that would be one argument. So we’ve evolved the ability to self-deceive in order to be better bluffers. I don’t understand why we didn’t just evolve to be good bluffers and believe the truth. But that’s a…
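
The single-agent claim, that more information is always at least as good, can be checked in a few lines with a toy Bayesian decision problem (a hypothetical model, not from the episode): two equally likely states, a payoff for guessing the state, and a signal that reports the true state with some accuracy.

```python
# Toy value-of-information check: two equally likely states (0 or 1),
# payoff 1 for guessing the state correctly, 0 otherwise. A signal
# reports the true state with probability `acc`.

def ev_without_signal():
    # Acting on the uniform prior alone: either guess wins half the time.
    return 0.5

def ev_with_signal(acc):
    # A Bayesian who knows `acc` follows the signal when acc >= 0.5 and
    # does the opposite when acc < 0.5, so they win with probability
    # max(acc, 1 - acc) -- even an anti-correlated signal is informative.
    return max(acc, 1 - acc)

for acc in [0.3, 0.5, 0.6, 0.9]:
    assert ev_with_signal(acc) >= ev_without_signal()
```

The catch, matching Carl's point about bad information, is that the guarantee assumes correct processing: a decision-maker who trusts a signal with acc = 0.3 as though it were reliable wins only 30% of the time, worse than ignoring it entirely.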

0:17:50 SC: I guess what… This is off-topic, but it’s just fascinating to me. Is there some sort of group selection argument we could put down to say that for any individual, they would be best off getting the most accurate view of the world possible, but within a species or within a community, if some people are wrong, there’s some benefit to that, like they act in crazy ways that benefit the group because of their adventurousness or something.

0:18:13 CB: As an evolutionary biologist, I tend to be pretty skeptical of group selection arguments without really, really hard quantitative backing, because they just typically don’t go through. And the basic problem with group selection arguments is what George Williams, a great evolutionary biologist, brought up. He said, “Well, look, sure, it might be true that a group that has a mix of people, ones that have crazy ideas and ones that don’t, does better than a group that all have sensible ideas. But within that group of people with crazy ideas and reasonable ideas, which ones do better? Well, the ones with reasonable ideas do better.” And so pretty soon the genes for having crazy ideas go away, to massively oversimplify things and pretend that there are genes for having crazy ideas. But you see what I’m trying to say about this kind of within-group selection, yup.
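
Williams' within-group argument can be made concrete with standard replicator dynamics (a toy model with made-up fitness numbers, continuing the deliberate oversimplification that there are "genes for crazy ideas"):

```python
# One generation of replicator dynamics inside a single group: each
# type's frequency is reweighted by its fitness relative to the mean.
# Give the "crazy ideas" type a 10% fitness disadvantage.

def next_freq(p_crazy, w_crazy=0.9, w_reasonable=1.0):
    mean_w = p_crazy * w_crazy + (1 - p_crazy) * w_reasonable
    return p_crazy * w_crazy / mean_w

p = 0.5  # start with a 50/50 mix
for _ in range(100):
    p = next_freq(p)
print(p)  # on the order of 1e-5: within the group, the crazy type is essentially gone
```

Even if mixed groups outcompete uniform ones, this within-group erosion is what any group-level benefit has to outrun, and it usually can't.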

0:19:02 SC: I do, I do, and yet, so many people have incorrect ideas about the world, so we have to work to understand this. Good, so back to bullshit, then. Is there a categorization? Is there a grand Aristotelian theory of kinds of bullshit?

0:19:20 CB: There have been… It’s remarkable, there’s been quite a detailed exploration of the nature of bullshit in the philosophy literature since Frankfurt’s original paper. There is not a taxonomy that I’m completely happy with. There are some fundamental debates that come out in that literature. One of the really important ones is: is bullshit in the mind of the speaker or in the mind of the beholder; in other words, does intent matter? And there are arguments that go both ways. For Harry Frankfurt, intent very much matters, and for some other authors, it’s a statement, it’s on paper, it’s just the text; it’s bullshit or it’s not, whichever it is.

0:20:03 SC: Yeah, okay.

0:20:05 CB: So these are the kinds of things people are carving up, I don’t know if there’s sort of a grand taxonomy of bullshit in any other sense.

0:20:13 SC: I mean, I guess then the two strategic questions are, how do we avoid being bullshitted? And then maybe the slightly naughtier one is how do we become better bullshitters when it would benefit us?

0:20:26 CB: Yeah, I mean, both of these are interesting, valid questions. It’s kind of been a challenge, too, as we’ve been thinking about it; we’ve been teaching this course, Calling Bullshit, since 2017. And one of the things we’ve been kind of thinking about is, how do we make the world a better place with this course instead of just simply training people to be better bullshit artists. So let’s see, to get to the first question, how do we avoid being bullshitted. I think we’re actually quite good at that for what I call old school bullshit. Old school bullshit is the kind of weasel words that you’ll hear from company spokespeople, or it might be a sort of political speech; it’s taking rhetoric and words, and maybe this notion of paltering that people use, to kind of bend the truth or get around the truth enough to have plausible deniability, or to diffuse away direct responsibility by using the passive voice, and all of this.

0:21:31 CB: I think we’re pretty attuned to that, we’re good at picking that kind of thing up. When people are bullshitting, we know it and we wince and people have a natural distaste for sort of weasel wordery and all of that. And what the book is really about is that there’s this stuff that I call new school bullshit, and that comes clad in the trappings of science, and in particular in the trappings of numbers and statistics and figures and machine learning algorithms and all of that. And I think that we are more susceptible to this because first of all, it’s a relatively new thing, the world is so much more quantified than it was 20 years ago for reasons we could talk about.

0:22:08 CB: Second of all, I don’t think the education system is really doing as much to train us to be attuned to this kind of quantitative bullshit that’s out there, and third, I think there’s this feeling that we have that numbers are somehow more real or objective than words. Words are opinions and they’re kind of, they’re fuzzy and numbers are these hard things that come straight from nature, and of course, that’s not true, and even when it is true, there’s so much flexibility in the way that people present numbers and facts and figures that they can give you true numbers, but make you feel completely differently about them, depending on how they’re presented.

0:22:47 CB: So that’s really what we’re trying to do in the book, is to say, look, we’re assuming that you know when the corporate spokesperson is bullshitting their way out of taking responsibility for something, but it’s a lot harder when somebody comes rolling in with a statistical analysis and waves it in your face and you don’t quite… You’ve never learned that kind of analysis or you don’t remember what it does if you have learned it, then how can you challenge them?

0:23:15 SC: Yeah, I mean, I guess, I want to get to the quantitative stuff ’cause obviously it’s fascinating, but maybe I’m a little bit less optimistic than you are about the old school bullshit. I mean, the examples that come to mind are just hugely grandiose claims from sketchy sources, which in my experience people are really willing to buy. And as a physicist, people on the street come out saying, “Oh, I’ve solved all of physics,” and there’s a remarkable number of people who are like, “Well, yeah, okay, we should take that seriously.” Or recently, just as we’re recording this, the New York Times had a story about UFOs being captured by the Pentagon. And people are like, “Oh, we should definitely pay attention to this.”

0:23:58 CB: It is really interesting. You know, out of just curiosity, I went to a heterodox science conference that was held on the campus of the University of Washington, essentially because the University is a public institution and anybody can use our facilities. And I was quite interested to see what was going on. Of course, every person there had either disproven general relativity or had unified it with quantum theory. So it was very interesting; they felt that there was this conspiracy by big science to sort of cover up the problems with modern astrophysics and quantum theory and so on, which was interesting in its own right. But the thing that was the most interesting to me was that the rhetoric they used was very much made up of the things that we’ve essentially tried to teach our students in classes: be skeptical, don’t take this for granted, ask.

0:25:04 CB: And somehow it was just this sort of misapplication of those principles, which are quite reasonable principles, that had led people down the wrong path, coupled with various delusions of grandeur and deep misunderstandings about how science works that had led them to this place. So I see what you’re saying, especially as a physicist, how you would feel vulnerable to this. There’s nothing glorious about reversing the popular belief about the life cycle of the three-toed river salamander, and so these guys don’t come after that, but they sure come after grand unified theories of physics and such.

0:25:51 SC: Can you, for our audience members who might be romantically tempted by these theories, is there a… I know it’s difficult to make everything boil down to an algorithm, but are there rules of thumb when we see a scientific claim outside our own area of expertise, like none of us has an area of expertise on all of science, right? But what are the warning signs that, okay, maybe this is a little bit less reliable than something else might be?

0:26:22 CB: I mean, I’m going to give you an answer that contrarians will immediately say is an appeal to authority. But my answer is, particularly if someone makes a really shocking or extraordinary claim, you want to look at the venues in which that claim appears, and the credence that it’s given by people who are well thought of in the field. So if somebody actually was able to mathematically show that there were fundamental contradictions in general relativity and we should throw the whole thing out and replace it with their new framework, we would expect this to appear in Physical Review Letters and in Science, and it had better be in the newspapers, and you’d have top physicists taking it very, very seriously.

0:27:15 CB: And to give you an example, some people say, “Oh, there’s a cover up, they’d never do that.” But of course, people do take these things very seriously. When we had the whole cold fusion story, there were a number of really good groups that investigated it in great detail, even though it seemed like a fairly implausible claim, because it would have been so damned important if it had turned out to be right, and it wasn’t patently wrong on the surface, even though it did raise some major questions that no one had answers to at the time, if it had been true.

0:27:47 CB: So I think one thing to do is just basically look at the venues, look at who’s supporting this and think that, look, the world-changing ideas may not come from the most important, most prominent people, but they will be relatively quickly embraced by some reasonable fraction of serious scholars in that area. And here, I think it just helps enormously to try to explain to people a little bit more about how the scientific community works, it’s not like we all go to a meeting and sit down and decide what we’re going to tell the public, it’s this very red in tooth…

0:28:23 SC: Oh, my goodness, no.

0:28:25 CB: It’s this very red in tooth and claw world, where everybody’s trying to scramble their way to the top, and there’s enormous prestige and so forth to be had by getting in early on something that’s right and disproving something else. And so if somebody had really made this major discovery and you’re one of the first people to pick that up, that completely makes your career, and people would jump on that. So anyway, I think that might help with thinking about some of these sort of contrarian scientific claims.

0:29:00 SC: No, actually, I think that’s a wonderful point, probably the best advice you can give. People talk about the establishment or whatever, as if it’s a monolithic thing, but the reality is, there’s a bunch of people who are competing with each other and they would love to find the new brilliant idea. So if some outsider scientist comes up with it, if it’s at all plausible, someone’s going to jump on that, I think that is a very good thing to point out.

0:29:24 CB: Yeah, absolutely, I think that’s the… And if no one does jump on it, then that tells you something, right?

0:29:30 SC: Yeah, exactly.

0:29:32 CB: This Heterodox Conference was very interesting, because one of the speakers there, one of the organizers actually, was talking about what they needed to do to move forward. These are people that are passionately in love with science, by the way; these are not deniers or anything like that, they adore science and they want to be part of it. But they’re talking about what they need to do, and the organizer said, “Well, one of the problems is when we have our conference, everybody is selling, and nobody is buying.” Everyone comes in and they’ve got their own personal theory, and it’s almost as if everyone knows that no one else’s theory is worth taking seriously, and that’s so different from what science is, right? Because when we go to a meeting, sure, we’re selling, but we are buying; there’d just be no point to go to a major society meeting if you weren’t there to buy, so to speak.

0:30:23 SC: Oh, yeah, yeah, yeah. No, I dream about small meetings with wonderful people at which there are no talks and everyone just sort of talks to each other and hears what each other has to say, but that would be a fantasy…

0:30:34 CB: Pretty much the only ones I go to anymore, I guess I used to go to… Now I don’t go to anything, I just log on to Zoom in the mornings.

0:30:42 SC: Yes. [chuckle] So you mentioned the idea that the quantitative era we’re in now has sort of changed the nature of bullshit. I imagine that’s both in the kinds of bullshit that we can get, but also in how we can spread it. What are the new kinds of bullshit that have arrived since we’ve started to quantify the universe so precisely?

0:31:04 CB: Yeah, so I think there’s just been a fundamental change in the last 20 years in how data and numerical arguments are introduced into the public discourse. If you look at newspaper articles from the 1980s, say, when I was in high school, you would not see a whole lot of data visualization, you wouldn’t see a whole lot of discussion of statistical arguments. If you did, it would just say, “Oh, the Fed did statistics, and here’s the qualitative outcome,” or something like this, but you wouldn’t go into these kinds of details; there was no such thing as data journalism. We just simply didn’t use numbers for persuasive purposes the way we do now. And I think so much has changed in the last 20 years, if we think about the availability of data compared to what was there even in the year 2000. First of all, we’re all tracking ourselves physically in space with the cell phones we have. Then, through the sort of data exhaust we produce with companies, Google knows what we want to know, and Amazon knows what we want to read, and Uber knows where we want to go, and Tinder knows who we want to go there with, and all of this.

0:32:31 CB: So all of this information is out there. We’re creating just unbelievable data sets, and the A/B testing that any of these companies can do in real time on their user interfaces means that every single one of these companies knows more about psychology than sort of the collective discoveries of the entire psych community, at least about very specific things like what color buttons people click on, and what kind of wording works well in headlines. And in fact, not only do they know that, but they know it tailored to each of us individually because of [0:33:09] ____, so there’s all of that data, and then there’s all the environmental sensing that’s out there.

0:33:15 CB: Then we have the internet of things, and so my refrigerator is telling Google a tremendous amount about how often I go to the grocery store, because once a week I leave it open for 20 minutes when I load it in, and yeah… And my car down… So all of this data that we live in is a fundamentally new thing, and we’re taking that on, I think, in the way that we make persuasive arguments about how the world works and about what we should do in the world. And so when you look at even a news broadcast, you see a ton of data visualization. The New York Times has a large and extremely talented data visualization team, a couple of dozen people who do these amazing things, and I see all of that as really having been a massive change in the last 20 years. I feel like our education system hasn’t adequately caught up to that.

0:34:06 SC: And is the biggest problem that people misuse it or people don’t know how to use it when it comes to making persuasive cases on the basis of some data sets?

0:34:16 CB: I think that there’s some of each. I think there’s… There are… I think most of the people that are putting material out there in, say, traditional media are good actors at heart. However, we typically do have perspectives and we try to convey those perspectives as powerfully as we can. And I think that’s completely fine, by the way. I mean, this is the same thing in science, right? It’s like part of why science actually works well is as we’ve talked about it, it’s a somewhat adversarial system, it’s a I would love to prove that you were wrong, and so that sets up these nice arguments where we each try to make our cases as well as possible, and then the universe can adjudicate between us and so that’s…

0:35:03 CB: But anyway, we do the same thing in the media, and even if we don’t necessarily know all of the black arts of manipulating data and telling misleading stories out of data, we certainly get an inclination as we’re playing around with ways to display things: “Oh, well, it sort of seems more persuasive… I want these numbers to look really big, so I’m not going to write them down as percentages, I’m going to write them down as absolute numbers, ’cause there’s 330 million people in the US, and if I do that, the numbers get big.” So you see these sorts of arguments where someone says, “A thousand DACA recipients have been accused of crimes against Americans,” right, and it sounds like this terrible number and, “Oh, my gosh, a thousand crimes…” And then what you’re not pointing out is that that rate of being accused of crimes is more than an order of magnitude lower than that of American citizens, for example, and…

0:36:06 CB: So we talk about those kinds of examples in the book, and we see this… Some of this may be malicious and deliberate, and a lot of it, I think, is just people that are trying to make their case as well as possible and doing that, however… Kind of stumbling into some of these tricks, if you will.
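The rhetorical move discussed above, quoting a scary absolute count while hiding the per-capita rate, can be sketched in a few lines of Python. All of the numbers below are invented for illustration (the group size and baseline rate are assumptions, not real crime statistics); the structure of the comparison is the point:

```python
# Illustrative only: comparing an absolute count against a per-capita rate.
# These numbers are made up to mirror the rhetorical move discussed above.

accused_in_group = 1_000   # the scary-sounding absolute number
group_size = 700_000       # assumed size of the group
baseline_rate = 0.03       # assumed accusation rate in the general population

# The rate in the group: the number only means something with a denominator
group_rate = accused_in_group / group_size

print(f"Group rate: {group_rate:.2%}, baseline: {baseline_rate:.2%}")
print(f"Baseline is about {baseline_rate / group_rate:.0f}x the group rate")
```

With these assumed inputs, the "thousand accusations" headline turns out to describe a rate well below the baseline; without the denominator and a comparison rate, the raw count carries no information either way.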

0:36:32 SC: Well, the example that you just mentioned, I think that is an example of a more general trend, tendency, whatever you want to call it, for people to not take rates or fractions or proportionalities into account. The thing that really bugged me is I read an article about buffets in restaurants and how they’re going away, and it would just say, “Oh, in this town, three people got sick at a buffet,” but there was no comparison to how many people got sick in other ways or at non-buffets or anything like that. Is this an example of a general principle that we should be keeping in mind when we look at other people’s data?

0:37:14 CB: Absolutely. Yeah, we essentially have a chapter on this in the book, and some of the things that you described are there, like people choosing whether to use percentages. If you have a really big number but a really big denominator, and you want to make that big number look small, you can report it as a percentage, or vice versa. And you brought up another very important thing, which is that numbers need to be presented in a way that allows you to make useful comparisons. When numbers are presented in ways that don’t allow you to make useful comparisons, that makes them, almost by their very nature, bullshit. Because if I just say, “Hey, look, Sean, 123 people had a heart attack while listening to a podcast last year. You really need to stop doing this.”

[chuckle]

0:38:15 CB: You might think, “Oh, shit, I need to rethink my whole life.” Not your whole life, but one little element of your life, but in any case… But of course, the right thing to do would be to ask, “Well, how many hours did Americans listen to podcasts, and how many heart attacks do people have per hour?” and so on, and cash that out. Because if I just give you a number, you can’t possibly make meaningful comparisons. This is really something we talk about a lot in the book, this sort of use of quantitative shock and awe, where you just throw these numbers out there, and because you’ve got them, it’s really hard for somebody to come back. And if I make a claim like that… By the way, I have no idea whether I’m off by two orders of magnitude in either direction, but in any case, if I make a claim like that, it’s really hard for you to respond right away, because unless you’re super good at Fermi estimation, which you may be…

0:39:12 SC: Right. I’m not.

0:39:14 CB: You’re going to struggle to be able to come back and say, “Oh, yeah, well, here is the right denominator and when you do this division and here’s the comparator.” And so one of the things that we’re really encouraging people to do when they feel like they’re getting pushed around by numbers like that is to take a step back. We talk a little bit about Fermi estimation and how you could sit down and do that calculation and figure out whether a number like that should be something that’s frightening, or whether that’s entirely to be expected, etcetera, or alternatively, just run a quick Google search and work something like that out that way, but…

0:39:52 SC: Maybe for the listeners, you can fill them in on what a Fermi calculation actually is.

0:39:56 CB: Oh, yeah, so a Fermi calculation is just this notion of doing a quick, back-of-the-envelope estimation of the rough order of magnitude, what power of 10 a particular quantity is, named after Enrico Fermi, who was very, very skilled at this. And I find it a very useful technique, whenever I’m confronted with these various numerical claims, to just see: Is this even in the ballpark of plausible? And then also, if it is in the ballpark of plausible, is there anything I should be surprised by? And so the idea is…

0:40:44 SC: Yeah, you can sit down and you could say, “Well, people live for roughly 100 years. There’s a certain number of people in the United States, they must be dying at a certain rate. Should I be surprised that people die while listening to podcasts?”

0:40:55 CB: Exactly, yeah, yeah, that’s right. And what we encourage for Fermi estimation is that you don’t get hung up on precision, just get it roughly right, just use powers of 10. And so we could even do that one live, and then you can cut it out if it doesn’t work. So, what do you think? People live for 100 years, I’ll agree with you on that. What fraction of Americans… A tenth of Americans are podcast listeners, shall we say, and of these podcast listeners, they…

0:41:30 SC: Ten hours a week.

0:41:32 CB: You think they listen 10 hours a week? Okay, so…

0:41:35 SC: Order of magnitude, yeah.

0:41:37 CB: Yeah, order of magnitude, 10 hours a week. Let’s have them listen… Let’s see, yeah, okay, so then, they’ve got… Then you’ve got 500 hours of listening a year and… See, I shouldn’t be trying to do this live. The…

0:41:58 SC: You could make it 100 hours a year if the numbers would be easier.

0:42:01 CB: Yeah, so you could say you listen 100 hours a year and of course, off the top of my head, how many hours in a year?

0:42:11 SC: Oh, that’s where it gets hard, right? [chuckle]

0:42:13 CB: Right, right, right, right, right, exactly.

0:42:14 SC: Because we don’t live in a sensible universe where years are 1,000 days long. That would have been a much better world.

0:42:18 CB: If it were all metric, this would all be so much damn easier, so yeah. If we had…

0:42:25 SC: Let’s leave this in, but not completely, that would…

0:42:27 CB: We’ll take 10,000… Okay, yeah, fine. You can cut this one out, but 10,000 hours in a year, they’re listening for 100 hours, so there’s a 1 in 100 chance, if somebody dies in a year, that they’re listening to a podcast while they do it. And now, you’ve got 300 million Americans, 330 million Americans. So we’re going to have… And here are your Americans: 300 million sits exactly half a log between powers of 10, which is always unfortunate in doing these calculations, but…

0:43:00 SC: Yeah. [chuckle]

0:43:04 CB: So let’s say you’ve got… Let’s say you have…

0:43:08 SC: And so that’s…

0:43:09 CB: Three million Americans dying a year, 1 in 100 of them is dying while listening to a podcast, if they’re a podcast listener, and it’s only 1 in 10 of them who are, so 1 in 1,000 of the 3 million Americans that die a year die listening to a podcast. So you have about 3,000 podcast deaths a year; not all due to heart attack. And then we could say, well, 1 in 10 deaths is due to heart attack, and we get about 300. And so it’s the same order of magnitude as the 123 that I threw at you at the start.
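The live estimate above can be reproduced in a few lines of Python. Every input is one of the rough powers-of-10 guesses improvised in the conversation, not a measured statistic:

```python
# Fermi estimate of US podcast-listening heart-attack deaths per year,
# using the round numbers guessed in the conversation.

us_population = 300_000_000     # rounded down from ~330 million
lifespan_years = 100            # rough human lifespan
listener_fraction = 1 / 10      # guess: 1 in 10 Americans listens to podcasts
listening_hours_per_year = 100  # rounded down from ~10 hours/week
hours_per_year = 10_000         # rounded up from ~8,766

deaths_per_year = us_population / lifespan_years                     # ~3 million
listening_time_fraction = listening_hours_per_year / hours_per_year  # 1 in 100

# Deaths that happen to occur while listening to a podcast
podcast_deaths = deaths_per_year * listener_fraction * listening_time_fraction

# Guess from the conversation: roughly 1 in 10 deaths is a heart attack
heart_attack_podcast_deaths = podcast_deaths / 10

print(round(podcast_deaths))               # 3000
print(round(heart_attack_podcast_deaths))  # 300
```

Landing at roughly 300 against the invented figure of 123 is agreement to within a factor of three, which is exactly the "same order of magnitude" sanity check the conversation is after.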

0:43:44 SC: Yeah, I know, that’s pretty good. Yeah, you should…

0:43:45 CB: I hope you’ll cut that all out, but that’s… But you should feel free.

0:43:48 SC: Do you really? ‘Cause I’m going to keep it in there ’cause I think that this is…

0:43:50 CB: Oh, absolutely keep it, it’s fine.

0:43:52 SC: Yeah, it’s a wonderful… I just did… I’m doing these videos these days on some of the big ideas in science, and believe it or not, I got the periodic table dramatically wrong. I said that beryllium was the third element, rather than lithium. And I got some comments saying like, “Oh, that was my favorite part because you made a mistake, I love that!” [chuckle]

0:44:13 CB: Yeah, yeah, yeah, that’s totally true, that’s…

0:44:17 SC: Seeing how the sausage is made is very, very useful, I think.

0:44:20 CB: It is useful, actually. And it’s fun to do, right? It’s just…

0:44:23 SC: Yeah, and it’s a demonstration that it’s doable, and it’s also… There is the fact that your IQ gets cut in half when you stand in front of a blackboard or get on the live podcast.

0:44:33 CB: This is certainly true, it’s…

0:44:35 SC: People should cut slack for that. But part of this is how we get the data and what we do with it in our individual brains; I know that you’re interested in the sort of ecosystem questions as well. And there are a lot of journalists, or more broadly, people sharing information, who are not trained, who have not taken your course. Is there some realistic aspiration that we can just train up all the citizens in the country, or in the world, to think a little bit more carefully about statistics and rates and things like that?

0:45:09 CB: I think if we’re going to keep using social media, I don’t think we have an alternative. And we talk about this in the second chapter of the book, the way that information production and distribution has changed, not only the volume of information and its rate, but also what kind of information is out there. There’s been this fundamental change, essentially, for most people in the 1990s, where all of a sudden you get the internet plus digital typesetting, and now everyone can create professional-level content.

0:45:50 CB: Add the web on top of that, plus the broad acceptance of the World Wide Web, and you don’t even have to know LaTeX. And so then you’re starting to get all of this new content. And we were all very excited about this in the 1990s, ’cause it was going to bring all these voices to the table that maybe hadn’t had the political and social and economic capital to be there. And to some degree that promise has been realized, but what we also got, of course, is this enormous glut of information, and we came to all these problems about how we sort through it. And one of the solutions that we’ve come to, especially given the acceleration of the rate at which this information is produced, is social media, where we all take on the role of becoming editors for one another.

0:46:33 CB: And so all of a sudden, I have this really important role in determining what my friends read, or, given the amount of time I’m spending on Twitter these days, what a lot of people I’ve never met read. But we’re all doing this, we’re all filtering information in this way, and we’re all doing the things that professional editors used to do. And one of the big problems is that not only do we not have the training, but we also don’t have the same incentives that professional editors do in terms of protecting our reputation. And we may be trying to signal something about our group identity by sharing a meme, instead of trying to signal something about the truth, and…

0:47:11 CB: But in any case, because we’re using an information exchange system where we are all playing this role of editors, if we’re going to have reasonable information hygiene out there in the world, people have to… What we suggest in the class is they have to learn to think more, share less, right? And to think carefully about what they are sharing before they share it. As I see it, there are kind of three approaches you can take to this problem in the social media universe. You could try to throw technology at it: “Oh, we’re going to have AI that can detect fake news.” And while that might get you a deal with a VC, I don’t think there’s any chance you’re going to be able to actually make that happen.

0:47:56 SC: Yeah.

0:47:57 CB: Because of the power of adversarial AI and everything else, even if someone were able to do it for things as they stand now, it would be easy to overcome. And then you could try regulation, doing things like many places have done, sort of criminalizing misinformation and so on. And as a pretty strong advocate of a very broad interpretation of First Amendment rights, I don’t like this solution at all.

0:48:24 SC: Right.

0:48:24 CB: I would like to see some regulation of the tech industry to make sure that we as individual users have some control over what we see, but other than that, I don’t want to go down that road. And the third leg of the stool that I see there is education. And that’s the only one that I think we can really stand on: to help people understand better how to parse the media environment that we all live in now. A big part of that is media literacy and that kind of education, but at the same time, as we move toward this increasing quantification of our world, I think quantitative literacy becomes very important as well.

0:49:04 SC: But I guess what I worry about with that, and I agree with everything you’ve said, but the pessimist in me says, well, the biggest problem is not people who want to get the truth but don’t have the quantitative education to do it; it’s the people whose incentives are to do something other than get the truth, to identify with their tribe or whatever, and social media makes that so much easier in some ways than it ever was before.

0:49:32 CB: I don’t have a rebuttal to that. I mean, we’ve slammed very, very hard into that during the COVID crisis, and that’s something that actually we never really anticipated or saw coming. I spent the 20-oughts doing pandemic planning work. And while there were a lot of debates in the scientific community and in the political setting about what role should the government versus the free market have in planning for rare health disasters and so on, we all assumed that if something like this ever broke out everyone would be on the same page.

0:50:08 SC: Yeah.

0:50:09 CB: And we’d just try to get this thing done. And instead, of course, we find ourselves in a world where the very existence of the virus is a politicized issue. Whether masks work, politicized. Does hydroxychloroquine work? Politicized. And this is really bonkers, because if you think about it, there are a lot of things where, if I said, “Oh, hey, Sean, there’s going to be this crisis in two years, here’s what the crisis is going to be, some people are going to want to do this response, are they on the left or the right?” you’d say, “Oh, military intervention. They’re on this side, right?” But if I said, “Hey, Sean, there’s going to be a pandemic and some people are going to want hydroxychloroquine,” you’ve got no way to tell me whether that’s going to fall on the left or the right. There’s just no reason to politicize that. It just doesn’t make any sense, but it is this tribal allegiance that you’re talking about. It is facilitated by social media, and it’s been an enormous problem in our national response to COVID, because you get things like people refusing to wear masks simply to signal their tribal affiliations.

0:51:12 SC: Yeah, it seems as if, I’ve never really thought about it in these terms, but it seems as if there’s some stew of interactions that spirals you down: number one, the ability to bullshit quantitatively in this data-driven world we’re in; number two, the ability to spread the bullshit through social media; and number three, the partisan or identity affiliations that make your allegiance to a certain statement stronger than your allegiance to the truth. And somehow we’ve got to change the incentives so that people are punished for giving in to that.

0:51:50 CB: I think that is the sort of thing that is happening. My colleague Joe Bak-Coleman and I have, not completely jokingly, said that the answer to the Fermi paradox, the fact that we don’t see evidence of intelligent life anywhere out there, is not that people invent nuclear weapons, it’s that they invent social media.

[chuckle]

0:52:15 CB: And it launches you exactly into that. Not people, aliens, but it launches you exactly into that spiral that you’re describing. And that could be an existential threat. We have a somewhat more serious paper that we’re working on about what we call human collective decision making as a crisis discipline. The idea is, again, this same sort of thing: once you network people, and you have all these tribal affiliations, what happens? What happens to the way that information flows? What happens to the ability to have an informed electorate? We were working on this before COVID and thinking this was going to be a real problem. And it’s one of those papers that doesn’t seem nearly as prescient when you publish it after it’s already happened. It’s…

0:52:58 SC: Well, we shouldn’t put all the blame on Facebook and Twitter. You also mentioned things like TED Talks or self-help books; there’s a pre-existing set of ways that we bullshit ourselves and each other.

0:53:12 CB: That’s for sure. It’s by no means the sole domain of social media. I keep coming back to this partly ’cause of this interest I talked to you about at the start, about the way that information spreads through networks. And I find that so fascinating, because the patterns of information flow are so fundamentally different from these highly centralized patterns, where you have the TED Foundation or whatever it is that hand-picks some people to groom into media stars, and then puts them up and teaches them how to give a 10-minute talk that starts with a problem and ends with a techno-optimist solution and has three jokes along the way at minutes two, seven, and nine, and so on, and then broadcasts that in a central broadcast fashion. It’s just absolutely so different from the dynamics of what we see playing out on the internet, which can operate at a really enormous and powerful scale, as we see with things like the dreadful misinformation videos around the pandemic and so on taking off.

0:54:23 CB: So understanding those dynamics is so fascinating to me. That’s why I keep coming back to it. But absolutely, the seeds of all of this pre-date social media. There have always been charlatans and hucksters and all of this, and a lot of that takes the form of bullshit rather than outright lying, because a lot of them just want to impress you and make you think, “Wow, that person’s really smart. That person’s a thought leader,” whatever.

0:54:52 SC: Well, yeah, the thought leader terminology is a good one, because it once again comes back to the incentive structures. I know that there’s been a lot of discussion in recent years on the role of public intellectuals. And one claim is that we don’t have public intellectuals anymore; all we have are thought leaders. [laughter] And the difference is that the thought leaders are a little bit more beholden to certain patrons who will pay them to say optimistic things about technology or whatever. And it’s a little bit less rigorous and scholarly than the public intellectual model.

0:55:26 CB: Yeah, that’s very interesting. I’ve just heard it now, but I buy it on first thought. Also, I guess there’s not much intellectual about a thought leader, is the other…

0:55:40 SC: Well, yeah, I don’t know what the origin of the phrase thought leader was. It might be something that was originally very sincere and now has been given so much irony layered on top. But it might be Dan Drezner who made that…

0:55:51 CB: It was, who was it? Sorry?

0:55:55 SC: Daniel Drezner, maybe?

0:55:56 CB: Okay. The place I know it from was in the ’90s. This was the term for… So basically, it was what the pharma companies used to influence what doctors do, to influence their prescribing practices. It’s too expensive to get on-label uses approved through the FDA, but what you can do is have people give talks at meetings about off-label uses, which doctors then have the right to prescribe, and thought leaders are the people that doctors look up to, who would be able to give a compelling talk in front of an audience of, say, 5,000 or 10,000 at a major medical conference. And so a thought leader was precisely the person that a pharma company would seek out to give a talk about off-label use of their product.

0:56:43 SC: But that’s interesting because that, so to the extent that that’s true, built into the idea was the idea that it was more for signalling purposes than for finding truth purposes.

0:56:54 CB: Absolutely, I mean, that was the… And I don’t know if that’s the original origin. That’s just my first place I encountered it.

0:57:00 SC: Maybe… I do want you to have a chance to say a little bit more about solutions to the technology and social media problem. You mentioned that there could be intrinsically technological solutions or regulatory solutions, and neither one of these is something you’re really optimistic about. Maybe say a bit more about that. I know that that’s also very much in the news these days, especially with Facebook and YouTube, Twitter maybe a little bit less; there are algorithms that drive people to places where maybe we don’t want them to go, and maybe that is fixable.

0:57:33 CB: Yeah. So this is a really important issue. And maybe we’ll just start with YouTube. There is this well-known effect that YouTube sort of radicalizes, that it pushes you toward more and more extreme content. We talk about this in the book; there’s a little story where Jevin and his son are watching the live feed from the International Space Station on YouTube, and then in the sidebar are all these flat earth videos. [laughter] And this goes back to the constant A/B tests and psychological experiments: these algorithms are figuring out that people move toward more extreme content over time, and of course, all of these algorithms are optimized to maximize engagement, not education or accuracy or anything else like that.

0:58:21 CB: So I think definitely that kind of algorithmic direction is really important and really problematic. We see something similar if you try to use Facebook or Twitter to get a good sense of what the general zeitgeist in the country is, not just the zeitgeist among the friends of an academic in Seattle, say. So I follow people all over the country with different opinions and things like that. But Twitter quickly learns that when some of my right-wing followers, who I don’t necessarily tend to agree with, post links, I don’t click on them, so it stops showing them to me, and then I get pushed back into my own filter bubble, if you will. So this is where I think regulation could be somewhat useful. I’d like to see, ideally, a sort of opt-out, but even opt-in would be okay; I’d like to see people have a lot more control over the content that they actually see on social media and things like this.

0:59:25 SC: Okay.

0:59:26 CB: But I’d like to see these algorithms, and sort of a recognition that these algorithms are not designed to move us toward this extremely important feature of having an informed electorate; they’re designed for something else. And given that, and given the role these companies are playing, essentially as major media companies, we need to downplay the influence that these algorithms are having. So I’d like to see, if you opt in on Twitter, that you can just see what the people that you’re following have posted, in order, no bullshit. And these kinds of things, I think, would be little steps that would move us in the right direction. I think they don’t trouble me in terms of First Amendment arguments.

1:00:24 CB: If you look back to the origins of the fairness doctrine from the FCC, until it was killed under Reagan, it was not about bandwidth limitation, per se; it was basically that a sine qua non for democracy was that you have to have an informed electorate, and so we need fair and balanced coverage of the issues that matter, ’cause one of the other criteria there was that the news stations had to talk about relevant things. And so I think that there’s quite reasonable precedent in the way we think about media to say that another sine qua non for democracy is for people to be able to get access to the information they want, instead of being manipulated by these algorithms that are able to run non-stop experiments on them 24 hours a day to maximize engagement. So that would be my ideal.

1:01:17 CB: There are these little things you could do that would chip away at the problem, but only chip away at it. Like, why do we allow targeted political advertising on social media? This is so dangerous. I could target all of the unemployed men between 35 and 45 in Tukwila who have indicated racist sympathies. That’s something we’ve never been able to do before, and it’s more or less trivial to do with social media advertising. And then once I do it, the ad that I’ve put out there is dark, in the sense that if I run a national racist broadcast ad, everyone sees that I did it, but if I just push a message to these 400 people, no one even knows that I did it, and I think that’s a really, really dangerous thing. We’ve got very strong restrictions on political advertising overall, and so why we continue to allow this, I have no idea. Again, a tiny piece of the puzzle, but these are the sorts of places where I think regulatory solutions could be useful.

1:02:15 SC: Is there anything that you specifically learned or changed your views about since the COVID-19 pandemic hit? You’ve been out there on the Twitter frontlines, trying to set people straight. Is it more or less, given that you’re an expert in both biology and bullshit, what you would have expected? Or has your world been rocked by how crazy things have been?

1:02:37 CB: It has somewhat been rocked, because I really did think we were all going to be on the same team when this thing hit, and I thought that… Remember what things were like after 9/11?

1:02:50 SC: Yeah.

1:02:51 CB: Everybody just felt like, “We’re all Americans, and we’re going to put these political divides behind us, and we’re going to unite, and we’re going to solve this problem.” And yeah, it wasn’t blissful. There was a lot of latent and not-so-latent racism that came along with that, and it wasn’t all good, but you didn’t see the sort of massive hyperpoliticization of every aspect of the crisis that we were dealing with, and we did have this feeling of a common goal. And when you deal with a pandemic, of course, you may not want to admit it, but we really are all in this together, because of the infectious nature of the disease and the fact that these things spread exponentially through communities early on, and so on. And yet we’ve completely failed to address this as a unified country. Not only are we unable to come up with widely-supported consensus policies about what to do; we can’t even agree on things like whether this is killing one person in 100 or one person in 1,000. We can’t agree on whether schoolchildren who get it are a risk to their parents. We can’t agree on whether masks help. I definitely did not expect that level of politicization of the science.

1:04:11 CB: And I think some of the ways in which that has happened have become quite interesting as you look at what’s changing about the science. In an effort to solve this problem, we’ve gone to the most massively open science that we’ve ever tried, where everyone is posting raw data and pre-prints immediately, people are sharing all the code, all the models are up there for the most part. There are some notable exceptions, but it’s been much more transparent, at much earlier stages, than ever before. Rather than waiting ’til everything goes through peer review, the discussion is taking place on open boards, on Twitter, on PubPeer, places like that, instead of at conferences and by private email. So all of that discussion is out there for the public to see. And the piece that I hadn’t done an adequate job of anticipating was that as soon as that happens, and you have this thing super politicized, now every single paper that goes out there lands on one end or the other of a political spectrum, unless it splits things exactly down the middle, right on the rail. And as soon as that happens, that paper gets picked up by one side and used as a cudgel to bash the other side with.

1:05:23 CB: And so as soon as I release a paper, if my paper is just trying to estimate R0 for this disease, the intrinsic rate of increase, that paper is going to benefit the arguments of one side and harm the arguments of the other. And all of a sudden, it just gets launched into this partisan debate that we, as scientists, aren’t used to. We’re completely used to the fact that you think R0’s 2, and I think it’s 3, and I think you’re an idiot, and I’m going to prove it. But there’s no partisan [1:05:51] ____, and… So this new… And similarly, people will… When your evidence outweighs mine, people will just take your side, and now that this has become polarized like this, even if your evidence outweighs mine, no one in my tribe is going to take your side anyway ’cause they don’t want to risk anyone thinking that they’re in your tribe. And so this is the part that surprised me and has been… I guess in April, I just found it enormously discouraging, and I think I’ve kind of accepted it at this point, and just trying to think of how to work in that environment now, but yeah, that was a real shock.
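Why an R0 of 2 versus 3 is worth fighting over can be seen with a deliberately crude back-of-the-envelope sketch (the five-day serial interval and ten seed cases are assumptions for illustration, and the model ignores immunity, interventions, and depletion of susceptibles, so it only applies early in an outbreak):

```python
def cases_after(days, r0, serial_interval=5, initial=10):
    # Toy exponential model: case counts multiply by roughly R0
    # once per serial interval.
    return initial * r0 ** (days / serial_interval)

# After a month, R0 = 2 vs. R0 = 3 is the difference between
# hundreds and thousands of cases from the same ten seeds.
print(round(cases_after(30, 2)), round(cases_after(30, 3)))  # 640 7290
```

The point of the sketch is just that the two estimates diverge exponentially, which is why a dry parameter-estimation paper can land with real political force.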

1:06:29 SC: I mean, in addition to the sort of political or identity aspects of it, there’s some sort of purely scientific aspects that are curious to me, the idea of how to model the spread of an epidemic like this and there’s even been… And maybe this counts as politics where there’s been little squabbles like economists versus epidemiologists and who has the higher GRE scores, right?

[chuckle]

1:06:53 CB: Yes, right, right, right.

1:06:57 SC: Have you learned anything about that, let’s say, has that surprised you any?

1:07:00 CB: I guess I already knew it, but I’ve learned you shouldn’t hold George Mason against economics. That’s one thing I’ve learned. But I think that’s actually been more common, I’ve done… I’ve always loved doing interdisciplinary work and it’s been tremendous fun and it’s a learned… It’s a learned skill. And one of the key things to doing good interdisciplinary work is not assuming the other person’s an idiot even when they sound like one. Because you’re probably not understanding something. But it is something you have to learn, and you have to learn by doing. So I think you have a lot of people that are all of a sudden coming from different communities working on the same problem, where maybe they haven’t done as much interdisciplinary work in the past, and so you do see more of these kind of clashes, whereas in my own experience it’s like so much fun sitting down with computer scientists and having them tell me about game theory from their perspective, and you just think, “Gosh, everything this brilliant person has said for the last hour is wrong, how can they be so dumb?” And you finally eventually figure out what the difference in assumption is.

1:08:07 CB: You’re thinking about the expected case and they’re thinking about worst case, and that clicks and then it all… If you poison the well before that clicks things… You don’t move forward.

1:08:21 SC: It’s hard.

1:08:21 CB: So I think that’s been some of the struggle there, and I don’t want to… To some degree, I think there are different disciplines that have different cultures and different perspectives of their own superiority, that may be… We may be bumping into a little bit of that too, but I don’t know.

1:08:45 SC: And you already mentioned the fact that some of this science is being done out in the open, but this is… There are issues, so not only is that being done out in the open but then it’s instantly politicized, like you said, but these are not new things because of the pandemic, we’re learning things, we’re seeing things that were always there, it’s just more of a reminder that science is kind of messy and politicized, and we should keep that in mind when we evaluate any bit of science.

1:09:13 CB: I would disagree. I guess the… First of all, one thing I just want to note before I do that is that I’m acting as if this is a surprise, and all my friends in climate science are just thinking like, “You poor innocent naïve child, what the hell did you expect?”

1:09:28 SC: Sweet summer child, yes.

1:09:29 CB: Because they’ve been dealing with this for decades. And the rest of us just kind of missed that, and I missed it even though I was not… Even though I was actually involved in all of this sort of evolution war stuff in the early 2000s around intelligent design, creationism and all that, and I still didn’t anticipate this. Yeah, but I think science is really not politicized in the same way that we’re seeing right now. I think that there are areas where as you move toward applications you start to see more and more of that, but I think what we talk about as being political in science is really a quite different thing. Take, for example, in my own home discipline of evolutionary biology, mathematical population genetics, we talk about political fights that often essentially come down to battles between the great-grandchildren of Sewall Wright and the great-grandchildren of R A Fisher, right? And it’s absolutely remarkable, actually, if you treat… My PhD advisor, Mark Feldman, gives a great lecture where he sort of traces back these ideas, and you actually see that we really are the great-grandchildren having the fights.

1:10:44 CB: Continuing the fight between these two men. And so that’s really politicized, maybe it’s like, “Goddamn guy is a selectionist just through and through, I just don’t understand how he cannot see the way that epistasis works.” So we talk about that as political, but it’s a really different kind of political than Trump thinks hydroxychloroquine will work and damn the conspiracy that’s trying to cover it up.

1:11:12 SC: No, that’s true, just to clarify, I was using politicized in the former sense, more, nothing to do with right versus left, Republican versus Democrat, red versus blue. But there are human attachments, or even just ways that you are brought up as a scientist to pay attention to some things versus another that are not completely rational, that have huge effects over what we think is important and how we go about doing even the most removed science from political cares in the real world.

1:11:42 CB: Yeah, absolutely, I think that’s a really good observation. A friend of mine, Jacob Foster, who’s in sociology at UCLA, and I spent a couple of weeks trying to… And not making a ton of progress on this, but trying to understand whether an adversarial model of science… We have an adversarial model of justice in this country, where you have a prosecutor and a defendant, you don’t have sort of consensus where everyone sits down and decides whether the person’s guilty or innocent, and to some degree, science is more adversarial than it’s usually portrayed in the public, because you do have these different schools and these different political camps in the sense you’re talking about.

1:12:18 CB: And so we’re thinking about whether that could possibly actually be a good thing, and I think it probably is, because it really sharpens the conflict among the data that we have. You rarely have direct observation of what you want to know, and so you have these various indirect ways of trying to get at this question and where there are conflicts there, if everyone kind of just agreed to ignore them, we might not make the same headway as we do when you have, you know, this school is determined to prove that that school is wrong and so forth. I think that’s all good and well. I think what’s been so discouraging with the pandemic is the way that you just see a whole different level of disinformation getting put out there. I mean, things that scientists would never do. I mean, I may really dislike the selectionists in population genetics, but I’m not going to make a video where I tell lies about them. Nobody does that. I’m just going to try to write a paper that shows that their interpretation of the data is stupid, and so like this is a whole different level, and I’m certainly not going to be calling their universities and trying to get them fired and sending threats and all the other things that public health officials in the United States are dealing with right now.

1:13:53 CB: So that’s just sort of a different scale. This may just be sort of appeal to the conventions that I’m comfortable with, but in some ways, I’m kind of saying, boy, people aren’t fighting by the rules this time.

1:14:14 SC: Well, emotions are running high and stakes are high, right? So even if I deplore it, I get it. I’m not completely surprised and even far removed from those kinds of stakes, the examples that are close to my mind are like loop quantum gravity versus string theory, right? And I worry about the… Even though I’m pretty establishmentarian on many physics issues, I do worry about a lack of diversity, viewpoint diversity here, because from a game theory perspective, if you think there’s a 90% chance that string theory is right and a 10% chance that loop quantum gravity is right, every department is going to want to maximize the chance that these people are right, so they will only hire string theorists, right? And it’s hard to build in that kind of multiplicity of efforts.

1:15:05 CB: Yeah, a couple of my friends in the Philosophy of Science have written quite nice things about this, in particular, Cailin O’Connor and Kevin Zollman, and they’ve written about the value of epistemic diversity in scientific research and they study these kind of things where you have, say, epistemic networks where people share their beliefs with one another, and in fact, if you have this too tight of a community where people quickly adopt one another’s beliefs, basically everybody can get off on the wrong track and science doesn’t proceed as fast, and so I think that’s a very important observation.
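The too-tight-community effect can be sketched with a minimal model (not the bandit-based models O’Connor and Zollman actually study; this is a simpler DeGroot-style averaging network, with all numbers invented for illustration): agents repeatedly replace their belief with a weighted average of their neighbors’, and a fully connected community collapses to consensus almost instantly, while a sparse one retains diversity of opinion much longer.

```python
import numpy as np

def run(beliefs, W, rounds):
    # DeGroot updating: each round, every agent adopts a weighted
    # average of its neighbors' current beliefs (rows of W sum to 1).
    b = np.array(beliefs, dtype=float)
    for _ in range(rounds):
        b = W @ b
    return b

def ring_weights(n):
    # Sparse community: each agent listens only to itself
    # and its two ring neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1 / 3
    return W

def complete_weights(n):
    # Tight community: everyone listens to everyone equally.
    return np.full((n, n), 1 / n)

init = np.linspace(0.1, 0.9, 9)            # diverse starting beliefs
tight = run(init, complete_weights(9), 3)
sparse = run(init, ring_weights(9), 3)

def spread(b):
    return b.max() - b.min()

print(spread(tight), spread(sparse))       # tight network hits consensus at once
```

In the tight network the spread of beliefs collapses to zero in a single round, so if the shared starting point happens to be wrong, everyone is wrong together; the ring network still has substantial disagreement after three rounds, which is the epistemic diversity being defended above.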

1:15:40 CB: And this has really been something I’ve been very interested in in the last five years, and it sort of got derailed by the whole COVID crisis, but basically like taking a hard look at the way that science operates and looking at what are our institutions and our norms that we use and thinking about the way that those norms and institutions create the incentives that shape our behavior and the questions we ask and of course that determines the outcome we get, and so there’s this direct tie from the structure of these norms and institutions which evolved more or less haphazardly out of essentially Enlightenment ideas and Western Europe to the beliefs that we have about the world as emerging from science right now.

1:16:24 CB: And so one thing we can do is we can go back and look at the structure of some of these institutions, whether it’s things like publishing or priority rule for credit or tenure system or whatever it is, and ask how are these things affecting epistemic questions about what we believe to be true that is true, what we believe to be true that isn’t, what we don’t know is true, even though it is, etcetera, and thinking about are there ways to nudge the system to make it function better. We see a lot of this going on as people are trying to tackle the reproducibility crisis right now, and you could ask very similar questions around issues such as viewpoint diversity.

1:17:07 SC: Yeah, I mean, the mentioning of the reproducibility crisis brings up the previous thing you mentioned about how science is now being done in the open more, and I know there’s also two sides to that, science is being done in the open a lot more, including the fact that especially because of the COVID crisis, people are noticing that there are preprint servers, right? You can get scientific papers that haven’t yet been peer-reviewed. On the other hand, something I’ve… Even though I’m a believer in peer review, I do try to tell people that just because a paper has passed peer review does not mean it’s correct. You still need to be a little bit skeptical of it. Do you think that this greater openness is improving science or is it a danger? ‘Cause I know there’s a lot of people who say it’s terrible to let the hoi polloi in on all the speculations and preliminary ideas that scientists get around all the time.

1:18:00 CB: Yes, that’s gatekeeping bullshit. I completely disagree with that.

1:18:03 SC: I agree, but I’m…

1:18:04 CB: But it would be… I mean you’re a physicist, you know what it does to science when you open up preprint servers. It blew my mind the first time I wrote a physics paper, it’s a paper on network theory I wrote with a postdoc of mine, Martin Rosvall, I was really excited about it. I thought it was neat work and I really hoped I could get it into a good journal so that people would read it and give me some feedback on it. Martin wasn’t nearly as concerned about this, and we posted it to the arXiv, and within a week, every single person that I wanted to read it had written me back unsolicited and had read the paper and had the comments that I was hoping to hear or in some cases critiques I was hoping wouldn’t exist and so on, but that was a revelation to me.

1:18:48 CB: So having preprint servers and having the timeline between putting an idea out there and it being adopted by the rest of the community being a few days instead of a year is enormous, obviously. And I also think that this notion of gatekeeping is… And we shouldn’t let the public see what’s going on behind the walls of the ivory tower. This is also absolute rubbish. What we could do a better job of is explaining how the whole process works, and if we do that, then we have to hit on the thing that you hit on as well, which is that peer review is no guarantor of correctness either. What I think of peer review as… And this is what we write about, we have a whole chapter about science and how it works and all of this stuff in the book… But what I think of peer review as, is I think of it as kinda like burn-in in engineering, it is enriching for things that are correct and interesting.

[chuckle]

1:19:40 CB: It’s not guaranteeing that anything is correct and interesting, it’s just taking a bunch of the stuff that isn’t correct and a bunch of the stuff that isn’t interesting and throwing that out, and you end up throwing out a little bit of the stuff that is correct and interesting, but that’s the cost of enriching the pool of stuff that you see. And then of course, you’ve got this tier of journals from the highly enriched to the barely enriched and you can sort through all of that. And the highly enriched may be enriching but are not the things you want, and people complain about most of the stuff in Science and Nature is sensationalized, and so on. And that may be the case, whereas most of the stuff in Cell may be enriched for being correct, but not being very interesting or whatever…

[chuckle]

1:20:22 CB: Depending on your perspective. But fundamentally, that’s what peer review is doing. And it’s helping improve the papers as well, and it’s enriching the pools, but it’s not this seal of approval that it’s been presented to the public as, unfortunately.
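The enrichment-not-certification picture can be put in numbers with a small Bayesian sketch (all the rates here are invented for illustration): if half of submitted papers are sound, and review passes 80% of sound papers but also lets through 20% of unsound ones, the accepted pool is enriched to 80% sound, improved but nothing like a seal of approval.

```python
def enriched_fraction(base_rate, pass_if_sound, pass_if_unsound):
    # Bayes' rule: fraction of accepted papers that are actually sound,
    # given the pass rates for sound and unsound submissions.
    accepted_sound = base_rate * pass_if_sound
    accepted_unsound = (1 - base_rate) * pass_if_unsound
    return accepted_sound / (accepted_sound + accepted_unsound)

# Hypothetical rates: 50% of submissions sound; review catches most,
# but not all, problems.  The published pool improves from 50% to 80%.
print(enriched_fraction(0.5, 0.8, 0.2))  # 0.8
```

Varying the pass rates gives the tiering described above: a stricter filter enriches more, but discards more of the sound-and-interesting work along the way.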

1:20:41 SC: Alright, final question, going to be completely unfair here, I know you begin your book by saying that… What is the sentence that you use, do you remember it?

1:20:50 CB: There’s so much bullshit, probably.

[chuckle]

1:20:53 SC: We are awash in bullshit, something like that.

1:20:55 CB: Oh, yeah, the world’s awash in bullshit, and we’re drowning in it.

1:20:57 SC: The world is awash in bullshit. And we’ve mentioned a lot of techniques that people can use off-handedly, but what’s the best piece of advice you have for us people on the street who are awash in bullshit, to separate it out? Is there one or two pithy little things we can say, or is it more that we just have to genuinely keep our wits about us and try our best?

1:21:23 CB: No, there are a few pithy things that you can do. One of them is to just recognize that if some claim seems too good or too bad to be true, it probably is.

[chuckle]

1:21:34 CB: And so these are the things that we’re most likely to share, especially on social media, and yet they’re the most likely to be wrong. I learned today, only today, that there’s a law about this, Twyman’s law for data analysis is that, any figure that looks interesting or different is usually wrong. The more unusual or interesting the data, the more likely they are to have been the result of an error of one kind or another. And this seems quite a reasonable law of data analysis. And it’s a law of media analysis as well, I think, when you think about it. So that would be one. And another thing I really push people on is, especially in those cases, and if you care, track things back to the source.

1:22:10 CB: So one of the things that emerges through the social media environment, but we had it even beforehand, it’s just exacerbated, is that you have this sort of game of telephone, where you have… Maybe you’ll have research studies that are distilled into a scientific paper that is written up in a Medium post that’s picked up by the New York Times that gets tweeted about, and then you see the tweet. And so, if the tweet seems too good or bad to be true, start tracking back and finding… Figure out for yourself what the story is. So I think the most important… And the most important, and this is what we teach in the class, and the students get so excited… I’ll have the students come in and they’ll say, “Oh, I saw this tweet from NBC News. And then I tracked it back and it was actually about this story, and the story came from this research paper… And you know what, Professor Bergstrom, it was bullshit.”

[chuckle]

1:23:11 CB: Which is so cool. And when…

1:23:14 SC: Ah, you’ve done good.

1:23:15 CB: What it gets at for us is… A fear with this class, is that we create a community of nihilists and cynics, whereas what we want to try to do is to show people that despite the bullshit-prone world that we live in, there is truth out there, and you can get to it.

1:23:37 SC: Yeah, that’s a very good motto. I cannot possibly think of a better place to finish the podcast on, so, Carl Bergstrom, thanks very much for being on the Mindscape Podcast.

1:23:45 CB: Thanks. It was great to talk to you, Sean.

[music][/accordion-item][/accordion]

4 thoughts on “108 | Carl Bergstrom on Information, Disinformation, and Bullshit”

  1. Robert Antonucci

    Minor comments – there is already a word that perhaps meets the need for bullshit; the meaning is slightly different but equally handy: propaganda. Like bullshit it has a pejorative feeling, but that’s not essential according to my dictionary.

    When I discuss critical thinking in a class, which I do to a somewhat unusual degree, I use “weasel words” differently, to refer to necessary qualifications. We’ve all written the phrase “it seems obvious” in our papers, and it’s telling. Consider dietary advice. There’s little that’s very solid and generalizable because of randomization, compliance, and other challenges. So an AI could easily recognize bs from a dearth of weasel words. That actually works well for anything much more speculative than Earth orbiting the Sun.

    One way I teach assessment is in the process of converting say statistical null significance to bettable odds. A colleague calls that metastatistics. Decades ago he told me that the first theorem of metastatistics is that half of all 3 sigma results are true. He has since revised it to 5 sigma.

    Finally a gem attributed to Martin Rees.
    He once commented on a paper of mine which repeated the attribution, without denying it. So if he got that far in my paper, he either did say it, or wished he had. The aphorism: “It would be really funny if nothing funny ever happened.”

    For example: if no one ever sneezes in a reality tv show, it’s fake.
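    The “half of all 3 sigma results are true” theorem above can be made quantitative with a standard Bayesian sketch (the 80% power figure and the prior fraction of real effects are assumptions for illustration): whether half of 3-sigma results are true depends entirely on how rare real effects are among the hypotheses being tested.

```python
import math

alpha = math.erfc(3 / math.sqrt(2))  # two-sided 3-sigma false-positive rate, ~0.0027
power = 0.8                          # assumed chance a real effect clears 3 sigma

def true_fraction(prior):
    # Among 3-sigma "discoveries", the fraction reflecting real effects.
    hits = prior * power
    false_alarms = (1 - prior) * alpha
    return hits / (hits + false_alarms)

# If only ~0.3% of tested hypotheses are real effects, roughly half
# of all 3-sigma results are false alarms.
print(round(true_fraction(0.0034), 2))  # 0.5
```

    On these assumed numbers, moving the threshold to 5 sigma shrinks the false-alarm rate by orders of magnitude, which is one way to read the commenter’s revised theorem.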

  2. Pingback: Sean Carroll's Mindscape Podcast: Carl Bergstrom on Information, Disinformation, and Bullshit | 3 Quarks Daily

  3. A couple of months ago the excellent Mindscape podcast was devoted to the topic of bullshit, and I used the occasion to post a comment pertaining to a critique I made of Harry Frankfurt’s theory, which generated one appreciative further comment. Now I see Frankfurt’s notion again uncritically referred to by Prof. Carroll and Prof. Bergstrom, and at this time I was inclined to refrain from bringing up my particular view yet again; but at one point in their conversation, towards the end (I should have noted the time), Prof. Carroll asks Bergstrom about incentives that might be introduced or employed to prevent the proliferation of misleading statements in our society, and especially on social media. And I realized that one of the points I tried to make in my critique of Frankfurt was precisely that the phenomenon of bullshit and its peculiar place in our society and its communicative practices is much better understood if considered on a structural level focused on relations of power, rather than confined to interactions of individuals and the motives of those individuals. So, in the most concise way I can, I will try to recreate my argument one more time. I should mention that my argument in full is contained in an unpublished review of Frankfurt’s book, On Bullshit, I composed in 2005. That piece, playfully entitled “Bullshit for Dummies”, has sat in my drawer for the last 15 years. More recently, I composed a coda to the former piece to account for the Trump phenomenon, with quotations from the original essay. It is from this coda that I have excised the following summary of my critique of Frankfurt:

    Frankfurt stated that the essence of bullshit consists in the fact that the bullshitter does not care about the truth value of statements s/he makes (or, by extension, actions s/he performs, though I don’t remember Frankfurt saying this explicitly), and, therefore, takes no heed in any essential way to refer to a reality that can possibly be shared with anyone else. But that’s pretty much where he ceases the exposition of his thesis, though he does go on to illustrate his meaning to some extent. As I noted in my review, I found this to be an intriguing starting point, but thought that the analysis had to be carried on much further, by posing questions as to what the motivation of such a speaker or actor might be, and what kind of social conditions would make such statements or actions resonate in our particular society. Accordingly, I noted that specific social conventions privilege the bullshitter and put the bullshitted at a disadvantage; and I continued:

    [w]hat seems to make bullshit so exasperating is precisely…the reliance of the perpetrator on accepted, if not always well understood or appreciated legalisms or conventions that somehow appear to contradict at least some of his or her own ulterior intentions. And this often requires different levels of access to or knowledge of the rules, as well as the constant presence of one other decisive factor that [Frankfurt] has [not] considered in the slightest. Indeed, I would say that the thing which in fact provides the word with the emotive power of forbidden fruit, and installs it in its rightful place as one of the language’s most cherished obscenities, lies in the fact that there is no effective means of appeal which might allow the contested statement…to be challenged. We are, in consequence, forced to walk away from situations we find somehow unsatisfactory without having recourse to…devices to tease or tear out of the statement or situation more acceptable ones…Indeed, it is worse than this: we feel these means are, somehow, and to a not insignificant degree, already available; and that we must submit precisely on those occasions when clear-cut rules which we think should automatically generate the objections we hold are clearly those which exist in the wider…community. But far from being encouraged to appeal to these, we sense that we are threatened not to, with anything from implicit ridicule to out-and-out violence.

    And I concluded:

    [i]t is precisely because the bullshitter feels manifestly and aggressively confident that the truth conditions of his or her statements cannot be adequately investigated in the first place, or that adverse consequences cannot follow upon such an investigation even if it is carried out, that he or she can be at all indifferent or hostile to any particular “truth”….

    I would love to see any feedback on this line of thinking Mindscape listeners would like to provide, and I promise not to bring the matter up again if there is no response. Thank you, Prof. Carroll, for a terrific program.

  4. @Laurence Peterson, I haven’t read Frankfurt’s book so I rely on your comment about the author’s main idea. I agree with you in the sense that the bullshit phenomenon needs further analysis. Motivations and social conditions are, whatever they may be, crucial components in this phenomenon without any doubt. But there’s also another factor I think needs some consideration: the bullshitted’s acknowledgement and acceptance of the bullshit. Same as the bullshitter, I think the bullshitted does not care about the truth value of statements either. Those statements agree with their core beliefs and values, which are hardly ever questioned by the people holding them.
    The bullshitter caters to those with said core beliefs and values, and the statements will adjust to an “alternate truth” that resonates with those who hold those beliefs. He/she relies on opportunism rather than the impossibility of his/her statements being investigated. Even when proven wrong with a strong and truthful rebuttal, the bullshitted rejects it because the “real truth” does not jibe with their core beliefs. After all, “reason is the slave of the passions”.

