A characteristic of complex systems is that individual components combine to exhibit large-scale emergent behavior even when the components were not specifically designed for any particular purpose within the collective. Sometimes those individual components are us -- people interacting within societies or online communities. Studying the dynamics of such interactions is interesting both to better understand what is happening and, hopefully, to design better communities. I talk with Petter Törnberg about flows of information, how polarization develops, and how artificial agents can help steer things in better directions.
Support Mindscape on Patreon.
Petter Törnberg received a Ph.D. in complex systems from Chalmers University of Technology. He is now an Assistant Professor at the Institute for Logic, Language and Computation at the University of Amsterdam, Associate Professor in Complex Systems at Chalmers University of Technology, an NWO VENI laureate, and a senior researcher at the University of Neuchâtel.
0:00:00.6 Sean Carroll: Hello everyone, and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. There's an idea in social science circles called physics envy. Economics, especially, is susceptible to this idea. It's not supposed to be a good thing. You're not actually supposed to be envious of physics, but social science is hard. People are messy. There are a lot of variables going on. Physics is able to make enormous progress by simplifying things a great deal. In part, that's because even though the fundamental ingredients that we study sound way out, like quantum mechanics and cosmology and relativity, there aren't a lot of moving parts. The basic things that we're looking at are sufficiently simple. You can describe them using relatively few variables, and you can isolate all the interesting things that are going on in these systems with small numbers of variables. As a result of this, you can make tremendous progress.
0:00:54.8 SC: You can prove theorems. You can do experiments that test your theories to many, many decimal places. It's a lot of fun. Of course, people would be envious of this, but it's a disease, or at least something to be avoided, to therefore try to make your social scientific research too much like physics. When you do social science, you should admit that there are complications there that cannot be abstracted away in the same way that we abstract away air resistance or friction when we're doing physics. Nevertheless, as I'm sure everyone who listens to Mindscape on a regular basis knows, I do think that there are contexts in which physics-like reasoning and concepts borrowed from physics can be really useful and very interesting in the social sciences.
0:01:49.9 SC: Ideas like equilibrium, ideas of emergence in general, ideas of what collective behavior is like when it arises from the sort of mindless, non-directed interaction of many small things. These are things that physicists think about all the time, and they are very, very relevant to the social sciences. So today's guest is Petter Törnberg, who is a professor of computational social science. I promise I didn't know this, but he admits on the podcast that he actually has a physics background, so this makes some sense. But he uses models, agent-based models that we've talked about recently with Doyne Farmer and others, as ways to study the behavior of social systems. Can you make a little model where the individual pieces are either simple agents that always act in some way, or maybe there's a little bit of stochasticity in there, or maybe they're even very complicated?
0:02:45.0 SC: We'll talk about an example where Petter used LLMs, large language models, to model human interactions in social media landscapes. And then you can ask, what is the robust behavior? Do you get things that we observe in the real world? Do you get polarization? Do you get a sort of accumulation of influence in certain people, rather than having it be completely uniform? Is this good or bad? If you intervene in certain ways in a social medium, can you make things better? Is it all the algorithm's fault, or is it just the preferences of the individual actors? All of these kinds of questions can be addressed with this sort of slightly physicsy attitude, but put into a social science context. I don't wanna give away too much about Petter's results, but they're not great. They're not very encouraging for those of us who want social media to work.
0:03:40.0 SC: There are certain natural dynamics that seem to come up that drive things in a bad direction, that drive polarization and echo chambers and things like that. Whether we like it or not, if that's what happens, knowing how it happens, and that it happens under certain circumstances, will hopefully be helpful in making social media, or media, or information ecosystems in general, more functional for the purposes of democracy, but also for the purposes of just learning fun new things, having a good time, being good human beings, connecting with our peeps in various different ways, and building connections that otherwise wouldn't have been possible. So we want to keep the good aspects of these wonderful technologies without being subject to the bad aspects. And I think this kind of study helps us figure out how to do this. So let's go. Petter Törnberg, welcome to the Mindscape podcast.
0:04:50.2 Petter Tornberg: Thank you for having me on. It's a real treat as a longtime listener and fan of the podcast. It's really great.
0:04:57.2 SC: Okay, very good. Well, then you know how it's gonna go. Let's just start setting some stage a little bit. In one of your books, you have this provocative line, power has an epistemology. Do you remember that one? That's from Seeing Like a Platform. So what does that mean to those people in the audience who might not use words like epistemology every day?
0:05:18.4 PT: Yeah, so it's a good starting point, I think. So a lot of my research is informed by this understanding of society being a complex system. I actually have a kind of physics background myself, but from a long time ago, so don't quiz me on that. And I come very much from the complex systems perspective. And in my PhD, I was focusing on taking that perspective to try to see how we can understand society using the methods of complexity science and using computational methods. And so this book is basically centered around this idea: okay, we're seeing these notions of complexity being used throughout the social sciences, but also in urban planning, and also when we talk about digital platforms and how they're shaping society. And so we try to understand, what does that actually mean? I mean, in the social sciences, the way that complexity tends to be understood is as this bottom-up system. So on the one hand you have complicated systems, like a car or a spaceship, that you can take apart, you can decompose them, you have an engine, and it's quite easy to figure out how they fit together.
0:06:36.9 PT: You can kind of understand the system by taking it apart. But then on the other side of this line, you have complex systems, like ant colonies or flocks of birds or whatever. And if you take these systems apart, like you take an individual ant out of an ant colony, you can observe its behavior as much as you want, but it won't tell you very much about how an ant colony functions, because the intelligence of the ant colony emerges through the interaction of a lot of ants. And so the book stems from the observation that we've shifted to more and more talking about society through this complexity lens. And that is intertwined with this notion of new forms of democracy, because obviously a lot of those ideas come from your field, from physics and from computer science, but they're changing quite a lot as they're entering into the social world. And they also begin to have political implications. And so those are the implications that we follow.
0:07:37.6 PT: And ultimately, in the social sciences, this becomes a question of kind of epistemology. So the question of like, kind of how do we envision what society is? And we went from, in the '60s, and in what we social scientists refer to as Fordism or industrial modernity, where we tended to see society as a machine, as a complicated system. So basically, that way of seeing and way of understanding society stemmed, we argue from a society built on large industry, large mass production, and it led to kind of a mass society. And it led to the kind of ambition of, like, we can design, we can plan society, we can build it as if we're building a machine.
0:08:20.2 SC: So this is Fordism, as in Henry Ford and his assembly lines.
0:08:24.8 PT: Exactly. So that image of the factory that Ford produced, because Ford, he didn't only produce a way of producing, but also a way of consuming. And social scientists have looked at how that idea of the industry leaked into society and shaped how schools function, how companies function, how it became a way of organizing society broadly speaking. And so what we are observing in this book is the fact that we've moved from that machine epistemology into an era that's defined by complex systems, where we are talking about society as swarms or as self-organizing. We're using these different metaphors that are much more organic. They're not the machinery. And we've also, in some ways, abandoned the ambition of having the state design society to produce certain outcomes. So this utopia, this hope of improving the world, we've kind of given up on that. And in its place there is this idea implying that the bottom-up outcomes of systems somehow will be inherently better.
0:09:44.0 PT: That's the underlying assumption of this, what you might call ideology. And so that's what we're interrogating and questioning. Because we would argue that this isn't just something that's natural; there are still forms of power that are shaping society. They're just much less visible. And they're operating by shaping how we interact, through things like algorithms. And so they often become difficult to see, but they can still have very large structural outcomes for society. It's by fiddling with the rules of interaction, which is basically what platforms are doing. And that can have really important large-scale outcomes. And there is still power there. It's just power that has a new epistemology.
0:10:29.7 SC: So that's very, very helpful. Thank you. I read the beginning of your book, but I didn't get through the whole thing. So this is why we have the podcast so I can just ask you questions. So in other words, let me try to rephrase it and see if I'm understanding. We might have had this dream of planning an organization where it was top-down and either Henry Ford, who I guess was very capitalist at heart, but still, he was trying to organize his factories, or central planning from a government. And there are various arguments you can make that simply letting things come to an equilibrium is more efficient, whether it's a sort of thermodynamic equilibrium or an economic free market equilibrium or whatever. And you're making the point that maybe so, maybe that's sort of a better way of calculating some optimum, but it still carries with it its structures of domination and power.
0:11:28.8 PT: For sure. I mean, there is this, even emergent outcomes, even when we arrive at an equilibrium, it's still like those outcomes are still defined by the conditions that we came in with. It's like these arguments that you sometimes hear from certain parts of academia where it's, they built some model of the economy and they find that certain people become very poor or like a large majority become very poor and certain people become very rich and they say, "Well, it's an inevitable outcome of this system. We shouldn't try to change it." But that's just not... It's not a good argument.
0:12:04.6 SC: It doesn't follow.
0:12:05.8 PT: Yeah, it doesn't follow, because you could have produced other rules that would produce other outcomes. And if the outcomes produced by your system are problematic and are harming a lot of people, then maybe you should reconsider the rules that you're operating under. And also, this notion that the market would somehow be natural, that it would not be shaped by state power, has been questioned for the better part of 100 years; actually, the market is very much a construction of state power.
0:12:36.0 SC: I remember the epiphany I had when I was thinking about different probability distributions. This is purely a math statement, but still it's important. Like we have uniform probability distributions, power laws, bell curves, whatever. And you might ask, well, which one maximizes the entropy? And the answer is they all do subject to different constraints. It's all about the constraints you put on the system macroscopically. Nothing is there really inevitably.
0:13:05.8 PT: Yeah, and I think it's very much at the core of what I'm interested in, to a certain degree. And I think it's also very much at the core of many of my papers and much of my research on this, which is the fact that we've moved from a kind of spatial society, a society that, as a physicist, one might think of as a kind of lattice.
0:13:32.4 SC: Or nearest neighbors.
0:13:34.6 PT: Yeah, exactly. And then we moved to a digital society, which is characterized by network structures. And those produce, like they are associated with different types of distributions. As a social scientist, everything is a bell curve. But as a computational social scientist, everything is power laws.
0:13:51.8 SC: Everything is power laws, right.
0:13:53.7 PT: And that's not just like a question of how we study these systems and what assumptions we need to make. It's also, that certain people, when we have digital network structures, become very powerful. And most people are powerless, and they don't get attention, and they don't get resources that are important. And that is profoundly problematic for society. And they are attributes of these networks or the structures.
0:14:20.3 SC: I mean, maybe a good inroad here, and I think you just gave us a good segue to it, but there's this classic work on self-organization in society by Thomas Schelling in his segregation model. So, you have an updated version of that, but why don't you tell us what Schelling's version was?
0:14:36.9 PT: Sure. I think it was published in 1969 already, and it's still pointed to. I mean, I use it all the time when I'm teaching, 'cause I think it's still somehow the best model. It's not just the first. Also, we peaked, as computational social scientists. And so, basically, the backstory is that Schelling was walking through the cafeteria at his university, looking around, and he saw, it's weird that all the geographers are sitting at one table and the sociologists at another table. And he was like, yeah, and the same thing with the city. And he was trying to figure out, why is segregation such a common outcome across systems? And so what he did, he actually took a checkers board. It wasn't even a computer simulation back then. And he had coins on the checkers board, and he said, okay, let's imagine that the system is a lattice and agents are randomly distributed on this checkers board. And each agent follows a simple rule: if more than some very high fraction of their neighbors are of a different type than themselves, because there are two types of coins, then they move to a random available space on the lattice. And the rule can be something like, they're completely satisfied even if 70% are of a different type than themselves. But if it's like 90%, they're like, yeah, okay, I'm tolerant, but come on.
0:16:17.1 SC: Right. So, they want some neighbors to be like themselves, but they don't insist that most of them be like themselves necessarily.
0:16:24.3 PT: Exactly, exactly. And then, so they move to a random space. And so, what happens? You would expect that if they're happy with up to 80% of their neighbors being of a different type, the system would settle on something pretty much mixed, like 50-50. You wouldn't expect very high levels of segregation to emerge. But what actually happens is that you get almost complete segregation, even with very high tolerance thresholds. And why is that? Well, basically, you get this cascade effect. So, one person leaves the neighborhood, the neighborhood is left more segregated, their neighbors also move, and you get this cascade. And so, basically, what the model is telling us is that the integrated state is just very unstable. And so, the system tends to tip over, like any neighborhood tends to tip over to one color or the other. And so, to me, looking at it, I mean, it's always been my favorite model. And to me, that seemed to tell us something also about the digital world. So, I was interested in, can we generalize this to other types of interaction structures that are non-spatial? And so, I look at forums and different platforms, because, basically, when looking at platforms, when looking at social media, there's been, for the last 20 years, a debate on...
0:17:50.0 PT: 'Cause we often see echo chambers, we often see spaces being very homogenous. And there's been a longstanding debate about whether that is driven by the algorithms, the filtering algorithms. There's Eli Pariser's notion, from his 2011 book The Filter Bubble, this idea that the algorithms create this cocoon: they only show us content that we already agree with. And then, especially in recent years, there's been more and more argument that that's not true, that, in fact, we want to be segregated. We don't want to be exposed to other ideas. We don't want to encounter someone who disagrees with us. And so that's been the debate between those two positions. And to me, it seemed like maybe it's neither. So, basically, what I do in this paper, and it's a very simple and short paper, is implement the Schelling model, but I move it online, to a certain degree. So instead of having a lattice, I have different groups that can represent subreddits, that can represent websites. And the agents are randomly located. So, it's again very similar to the Schelling model, just as simple as possible. So, agents are randomly allocated to these groups.
0:19:09.8 PT: And then in each round, they interact with some random people in their group. And then it's the same rule as in the Schelling model. So, they're happy as long as at least some of their interlocutors are of their same type. And if there's no one of their same type, or maybe only a very small percentage, they move to a random other group. With this background, you can kind of guess what the conclusion is. But what I find is that actually the Schelling segregation effect is even stronger in these kinds of communities; they're even more prone to segregate. And, of course, that has interesting implications for this debate, because it's not necessarily either that this is driven by filter bubbles, that it is driven by algorithms, nor is it in the interest of anyone. It's just something that follows from having social interaction structured in these ways. And there are also some counterintuitive results from this. So, for instance, actually having filtering algorithms, having a filter bubble, actually reduces segregation. Because if you have a filtering algorithm that always shows you someone who agrees with you, whatever messages you're shown, you always get someone who agrees with you mixed into those messages, and you will become less prone to move.
0:20:31.9 SC: To moving. Okay.
0:20:33.7 PT: And so that will... Like, for the system level, it will reduce the amount of segregation. And so the system will be much likelier to be stable under those conditions.
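For readers who want to play with the checkerboard dynamics Petter just described, here is a minimal sketch of the classic lattice version in Python. The grid size, empty-cell fraction, tolerance threshold, and step count are illustrative choices, not parameters from Schelling's paper or from Petter's study.

```python
import random

def like_fraction(grid):
    """Mean fraction of same-type agents among each agent's occupied neighbors."""
    size, fracs = len(grid), []
    for i in range(size):
        for j in range(size):
            if grid[i][j] is None:
                continue
            same = total = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di or dj) and 0 <= ni < size and 0 <= nj < size \
                            and grid[ni][nj] is not None:
                        total += 1
                        same += grid[ni][nj] == grid[i][j]
            if total:
                fracs.append(same / total)
    return sum(fracs) / len(fracs)

def schelling(size=20, empty=0.1, tolerance=0.3, steps=20000, seed=0):
    """Each step, a random agent moves to a random empty cell if fewer
    than `tolerance` of its occupied neighbors share its type."""
    rng = random.Random(seed)
    n = size * size
    n_empty = int(n * empty)
    half = (n - n_empty) // 2
    cells = [None] * n_empty + [0] * half + [1] * (n - n_empty - half)
    rng.shuffle(cells)
    grid = [cells[r * size:(r + 1) * size] for r in range(size)]
    for _ in range(steps):
        i, j = rng.randrange(size), rng.randrange(size)
        me = grid[i][j]
        if me is None:
            continue
        same = total = 0  # count this agent's occupied neighbors
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < size and 0 <= nj < size \
                        and grid[ni][nj] is not None:
                    total += 1
                    same += grid[ni][nj] == me
        if total and same / total < tolerance:  # unhappy: relocate
            empties = [(a, b) for a in range(size) for b in range(size)
                       if grid[a][b] is None]
            ti, tj = rng.choice(empties)
            grid[ti][tj], grid[i][j] = me, None
    return like_fraction(grid)
```

Running `schelling()` with a tolerance of only 30% typically drives the mean like-neighbor fraction well above the roughly 50% of the random starting configuration, which is exactly the tipping effect described above: the integrated state is unstable even under mild preferences.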
0:20:46.9 SC: So in Schelling, it's literally a checkerboard. Even in my book, The Big Picture, I talked about the Schelling model a little bit. And so it's literally your nearest neighbors, which makes sense if you're talking about racial segregation in a city or something like that. And so you're saying you're putting it on a network, basically, where there's different nodes that you can be in. Or is it more dynamical than that? Is it just like a different spatial structure, or something that changes with time?
0:21:15.2 PT: I focus on groups in this case, so more like subreddits. So it's more like you join a community and then you're exposed to random people within that community. You can also run it on networks. But in that case, network structures are less prone to having this Schelling dynamic emerge, because you need a kind of transitivity. You need something like: if you're leaving the community, the community becomes more segregated, and that increases the chance of someone else moving. So you get this threshold effect where one person triggers another person. In a network, you have to make really strong assumptions for that to be the case. Because usually, if you get annoyed with someone, you just unfollow them. But that doesn't change your friends' networks. And so they're not gonna be more likely to move.
0:21:59.3 SC: So there's not the positive feedback you get in what you did.
0:22:02.9 PT: At least not this kind of Schelling feedback.
0:22:05.5 SC: Good, okay. And by the way, my impression is that Schelling was offering an explanation for urban segregation that did not require like, racism handed down from on high via redlining or whatever. It was all just individual preferences. But in fact, when the social scientists have gone to look at it, the reason why real cities are segregated is in fact because of racism from on high forcing it to happen.
0:22:31.9 PT: Yeah, no, I think this is a really important point. And it's quite funny in some ways. I'm not in geography anymore, but I was a postdoc in a geography department for about four years, and I quickly realized that the only social scientists who do not really know or engage with the Schelling model of segregation are the geographers, because it just seems fundamentally incompatible with that way of thinking. And I'm very much in agreement. And it is quite interesting to a certain degree. And I think it connects to the question of epistemology, because the Thomas Schelling segregation model gives a very deep insight into the dynamics of segregation, but it's also really hard to bring that insight into dialogue with the existing literature on segregation in cities, which, as you say, very much points to structural racism, redlining. But to a certain degree, I mean, both are true, right?
0:23:30.3 SC: Yeah, exactly.
0:23:31.1 PT: It's just difficult to make these theories kind of speak to each other.
0:23:34.9 SC: Well, my line has always been that the Schelling model is really good at explaining exactly what you started with, which is where people sit in the cafeteria. There's not rules like, the jocks and the nerds have to sit on different sides, but they always do because of exactly these preferences.
0:23:51.0 PT: Yeah, no, I think that's a good point. And it's probably also less of a provocative example than using it to think about urban segregation.
0:23:59.4 SC: And so this idea that we do change our social network or social media usage to be just a little bit more within a set of people that we want to hear, where does this apply in the real world? Are we thinking of Twitter or YouTube or TikTok or Facebook or what?
0:24:22.5 PT: Yeah, so this has basically been a long debate in the social sciences, this question of how pervasive echo chambers are or not, and it's still a very, very heated debate. But what I would say is basically that there are suggestions that there are quite a lot of communities that are relatively segregated. Looking at, for instance, subreddits: most subreddits, if they are political, tend to lean toward one side or the other. But I do think it's interesting. I mean, Twitter has historically been a good example of the opposite. It was for a long time quite inclusive, in the sense of having both political sides. And it's interesting because Twitter has functioned as the model organism for social science research on platforms, because it's been one of the few platforms where we can actually get a lot of data, or we could. And so a lot of computational social science research has looked at Twitter and used it as a way of speaking about social media. I would say that Twitter was a very different platform from everything else, because it actually had all political sides, and it was characterized much more by conflictual debate.
0:25:44.7 PT: But if you look at smaller communities, they do tend to be much more segregated in terms of opinion. And that can be a problem, in the sense that when political theorists talk about what conditions need to be fulfilled for us to have functioning political discourse, functioning deliberation, one of those conditions is that we need to have a diversity of opinions. We can't just have one political side, which is, I mean, pretty obvious, I guess.
0:26:16.7 SC: I guess I was going to ask a question about that. How bad is it if people on social media interact with people who are like them? I can imagine maybe a utopian political structure wouldn't be like that, but most people on social media are not there to be utopian political actors. They're there to talk to their friends and be reinforced. Is that so terrible?
0:26:41.3 PT: So I would say that I think in a lot of cases, it can even be very beneficial. I mean, one of the key ways that social media transformed society was this possibility that we could connect with anyone from all over the world. And so for a lot of communities, especially minorities, if you are LGBT and you grow up in a small village somewhere and you don't have anyone to connect to, it's been shown that it's very beneficial for your mental health and for your lived experience. At the same time, the way that it affects politics is not always as beneficial. Because obviously, if the minority that you belong to happens to be some extremist form of neo-Nazism, it seems to have similar consequences for those communities, because it allows them to come together and form a shared sense of community. And it transforms them from being isolated individuals into a confident political community. And that can be quite dangerous in terms of radicalization.
0:27:59.8 SC: And is this... I mean, my impression is that it is something that comes from newfangled technology, social media, things like that. The ability of these smaller groups to come together, like some of them are just going to be people who like to crochet, and others are going to be neo-Nazis. But is there data that backs that up, in the sense that have we seen more viability of these small groups than we did in the 1960s or whatever?
0:28:28.5 PT: I mean, it's very difficult to look at those kinds of changes, because unfortunately, we only have one society, so it's hard to compare how society would look without social media, without digital media. But what we can say is that we've seen an increase in political violence. We've seen democratic backsliding in a lot of countries. And we've seen political extremist movements entering into the political mainstream. And whether or not that is causally linked to social media is very difficult to say, but it is very clear that it is, in our current society, very much entangled with social and digital media. And I've looked... So in my previous book, Intimate Communities of Hate, we look at one of these online communities, and basically we try to answer this question by going in depth and looking at the Stormfront community, which is a very old Nazi community in the US.
0:29:35.9 SC: They pre-date social media, right?
0:29:38.2 PT: Yeah, so basically it goes back to '95. And the nice thing about it is that all of the data, all of the conversations over this long period of time, is available online, if you're able to scrape it and bypass the various protections preventing you from scraping it. So we have all of the data and all of the conversations over this 20-plus-year period. And that allows us to look at how the users are changed by interacting with this community. So we can use natural language processing and various forms of text analysis and see how individuals, when they interact in this community, how it changes their language and different markers of how they perceive themselves and so on. And the image that we come out with is very much a question of community formation, of changing identity and so on. Just an example: we can see how, when they first come in, they use "I" and "my" and speak of themselves. But then over time, they start saying "we," or speak as Stormfront, because that's a marker of them starting to think of themselves as part of a collective and as part of something larger.
0:30:55.4 SC: That's very interesting, because it's not just about people. In the study we were just talking about, you were talking about associating with people who are like-minded. And here you're talking about the feedback acting on yourself. The individual people are sort of changing their identities in response to those interactions.
0:31:14.3 PT: For sure. I mean, I think it's very clearly a feedback process, where we have segregation mixed with a changing of identities. And what we can see in the Stormfront case, especially after the 2008 Obama election, is that in the few days after the election, there was just a huge surge of users coming in, new people joining. And looking at what they're saying, we can see this emotional confusion and anxiety. They somehow feel confused about this world that they're living in, where a black person who's articulate can become president. And what does that mean for their self-identity? And how can they make sense of this? And then the community functions as a kind of emotional talk therapy that allows them to find new narratives and resolve this emotional anxiety and turn it from something passive, like anxiety, into something active, like anger or outrage. And they come out with these absurd narratives about the Jews, and that actually the whites are the superior race, whatever it is that happened. So it's very much a process that operates on the level of identity and emotion and self-narratives.
0:32:37.8 SC: It's very interesting because it all cycles back. In the very first podcast episode of Mindscape, I interviewed Carol Tavris, who's a social psychologist, and she has this idea of the pyramid of choice. If you imagine two people who are basically 50-50 on some choice, what sneakers to wear or whatever: once they make the choice, if they make it in different directions, they start justifying that choice to themselves, and they end up very far apart, even though they were essentially indistinguishable before they collapsed their wave function onto that particular option.
0:33:12.2 PT: Yeah, no, that makes sense. And so, I mean, ultimately, it's kind of the question of the kind of structural context in which people are interacting that can produce these kind of outcomes.
0:33:23.5 SC: Can I ask about either your model or the Schelling model? The physicist in me thinks of the Ising model, which is similar but not exactly the same: you have spins that are interacting on a lattice. And the thing we do there is introduce probabilities by having a temperature, so there's some chance that a spin is going to flip, and so on. In the Schelling model, there's a probability because when a person decides to move, where they move to is random. But the choice about whether to move or not is not random; that's just determined by how many neighbors they have of each kind. So, have people done that? Have people introduced a probability of moving rather than a certainty and seen if that changes anything?
0:34:07.7 PT: Good question. I honestly don't know.
0:34:11.1 SC: Okay. I don't know either. Someone should do that. Someone listening out there.
0:34:15.9 PT: Maybe something for us to do.
0:34:17.9 SC: Yeah. Yeah, absolutely. And then... Okay, so you then did a different study, which came out also very recently, using large language models. And here, well, I'll let you tell the story, but the idea is rather than just having these mindless dots on a grid or whatever that are interacting with each other, you literally had little agents talking to each other and making choices in a social media context. So, how did that go?
0:34:48.5 PT: Yeah. So, basically, the aim of this is to address, in part, a longstanding criticism from social scientists, or at least from a lot of social scientists, when it comes to agent-based modeling, which is that these rule-based agents are just not very good representations of the full spectrum of human behavior, which is, I think, fair enough. In a lot of cases, like the Schelling segregation model, this simplicity is very useful, and it allows us to throw light on some emergent phenomenon that is ultimately structural. But in other contexts, it is also limiting. Looking at politics on social media, for instance, is an example where these richer behaviors can really matter. We cannot really separate the cultural from the structural; we need to look at them as intertwined and interacting. And so that's the background, because what we're interested in here is this question: we've spent 20 years or something criticizing social media, pointing to the problems, and linking it to various problematic outcomes.
0:36:01.6 PT: But now there's more and more kind of interest in, like, can we be a little bit more constructive? Can we actually do something about this? Because ultimately, if social media can shape a politics that is outrage-driven and radicalizing, it should also be able to shape a form of politics that is pro-social, that has healthier political and social outcomes. So, that's the kind of idea. And how do we study that? Well, using observational data doesn't really work. So, that kind of modeling approach can be really beneficial. And so we're using agent-based models, but having, instead of these rule followers, we're using large language models. And they work as kind of stand-ins for humans.
0:36:42.4 SC: Maybe, if I could interrupt you just quickly. I mean, maybe give a little bit of background onto the concept of an agent-based model. Like, as opposed to what? What kind of models are not agent-based, and what are agent-based, and what is that used for?
0:36:57.4 PT: Sure. So, in the social sciences, the way that we have traditionally approached the social world, to link back to where we started, is very much as a kind of complicated system. We tend to think of society as variables interacting, which in a lot of cases can work really well. But if you're thinking of the complex aspects of the social world, where you have interaction between agents leading to unexpected outcomes, those traditional variable-based approaches just don't work at all. Using those variables, how would you study a murmuration of birds? It just wouldn't be possible. So agent-based modeling is a bottom-up modeling approach, and one example would just be the Schelling model: you have agents that are individuals, they follow simple rules, and then you look at the outcomes. And that allows you to think together the micro-behavior of individuals with system-level outcomes that can often be unexpected given the rules that you're coming in with.
0:38:07.0 SC: So, the individual agents need not be very complex themselves.
0:38:12.0 PT: Traditionally, they haven't been. They've traditionally been simple rule followers: you have the Schelling threshold rule, or maybe an optimization rule. Building agents that would mimic human behavior in terms of reasoning or language production was traditionally just impossible; it would be extremely complicated, and you would have to build a kind of reasoning agent, which we just didn't have until a few years ago. But when ChatGPT came out, with the rise of large language models, there was suddenly a huge amount of interest in whether we can use these models as part of agent-based models to simulate social behavior. And that's kind of what we're doing, but we're also trying to use it to contribute to social scientific theory and to our understanding of social media and its dynamics.
0:39:11.5 SC: And so, what exactly was the experiment you did? I think of it as, roughly speaking, letting loose a bunch of LLMs on a fake social network.
0:39:20.5 PT: That's pretty much it. Basically, our idea was to create a social media platform, make it produce the negative outcomes that have been observed on real social media, and then try out a bunch of suggestions from the literature on how to address those problems. Our expectation coming in was that we would have to fiddle a lot with the system to make it produce problematic outcomes, so we could then see how stable those outcomes are and what kinds of solutions are best for addressing the problems. The problems that we focus on are conditions of social media that make public deliberation or public conversation difficult, that make it difficult to have a functioning politics playing out on these platforms, drawing on political theory. There are three different things that we've already touched on a little bit. One of them is echo chambers: if you're going to have a constructive conversation across the political divide, you need to have both sides of the divide present. Otherwise, it's going to be really hard.
0:40:35.6 PT: So, that's one condition you need. And then the second is this kind of question of attention inequality that we also touched on. If you're going to have functioning political discourse, you need to have relative equality among individuals. So, you can't just have like two or three individuals dominating the entire conversation, 'cause that's not a public discourse, that's just broadcasting. And then finally, what's been referred to as the kind of social media prism. So, this is the idea that you need to have a kind of constructive debate where people are actually trying to come to a solution. And so, that speaks to this question of that social media has kind of tended to benefit loud, polarizing, conflictual voices. And that is very much kind of undermining functioning conversations. And so, those are the three outcomes that we were trying to kind of see if we could produce. And we were expecting that to be quite hard, to be honest.
0:41:33.4 SC: That's so sweet that you thought that would be hard.
0:41:36.8 PT: Well, I mean, the literature has argued that a lot of these outcomes, especially the social media prism, this polarizing tendency, would be expressions of engagement algorithms. That is, they would be the result of social media identifying the most outrageous things that are being said and then shoving them in your face to make you upset and increase the probability that you will comment or engage with the post.
0:42:06.6 SC: So, sorry, so the sort of two alternatives that we're trying to test here are one is that when you get these echo chambers and polarization, things like that, it's the algorithm's fault or the platform's fault versus this is just human nature.
0:42:24.5 PT: Well, so I'm not sure if I would put that as the contrast, really, because in this study I was honestly assuming that it came from the algorithms, at least, that it's not just something that would emerge from human behavior. But these are structural outcomes of the interaction between people and the rules of the platform. To me, at least, this social media prism is a rather specific thing. It's a kind of odd outcome that the most extreme voices get more attention. And I've written about this before, arguing for what I've called trigger bubbles. So it's not the filter bubble but the trigger bubble: the social media algorithm trying to trigger you, make you upset in order to make you engage, 'cause that's how the platforms ultimately make money. They make you post something, they draw information, they figure out who you are, and they sell ads. So that was my expectation. But basically what we started with was just building the most bare-bones platform we could imagine, which is just the agents. I should also say that for the agents' personalities, we take the ANES, the American National Election Studies survey, which has very detailed information about US citizens, including their politics and even things like whether they like to go fishing. And basically we turned that into a persona description. Because LLMs really love to impersonate people. You can ask one to explain your microwave in the voice of Shakespeare and it will do an excellent job.
0:44:14.3 SC: So just to be clear, so the LLMs that you let loose on the social media, they weren't all Adam and Eve, they weren't all tabula rasa from the start. You gave them a backstory each one.
0:44:24.8 PT: Exactly. And the point here isn't necessarily that we want them to be completely realistic encapsulations of the particular individual they're enacting. We just want to have diversity and to capture a little bit of that cultural richness. So then they are allowed to interact on this platform. And the platform is very simple. They see the most recent news, a random selection of news from the specific day that we're simulating, and they can choose to write a post about it. Or they can look at their timeline and choose to repost what someone that they follow has written. And based on the posts that they see, they can also choose to follow someone who shows up in their feed; if they consider following someone, they go into that person's timeline, see a little presentation about them, and see their most recent posts.
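The loop being described can be sketched in a few lines. This is my own minimal reconstruction, not the study's code: the hypothetical `ask_llm` hook stands in for a call to a language model conditioned on the agent's persona, and the platform only records posts and a growing follow network.

```python
import random

def run_platform(personas, news, ask_llm, steps=60, seed=0):
    """Bare-bones platform loop. `ask_llm(agent, news_item, feed)` stands in
    for a persona-conditioned language-model call; it returns ("post", text),
    ("repost", i), or ("follow", i), where i indexes into the visible feed."""
    random.seed(seed)
    posts = []                              # (author, text) in posting order
    follows = {p: set() for p in personas}  # follower -> set of followees
    for _ in range(steps):
        agent = random.choice(personas)
        feed = posts[-5:]                   # most recent posts the agent sees
        action = ask_llm(agent, random.choice(news), feed)
        if action[0] == "post":
            posts.append((agent, action[1]))
        elif action[0] == "repost" and feed:
            author, text = feed[action[1] % len(feed)]
            posts.append((agent, "RT @" + author + ": " + text))
        elif action[0] == "follow" and feed:
            author = feed[action[1] % len(feed)][0]
            if author != agent:
                follows[agent].add(author)
    return posts, follows
```

The point made later in the conversation is that the interesting output here is not the generated text but the follow network that accretes out of these choices.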
0:45:27.3 SC: Okay. And could they unfollow people too?
0:45:30.1 PT: No, we don't have unfollowing. It's really a bare bones.
0:45:33.7 SC: Okay. But there is some ability, they can choose to follow someone or not.
0:45:39.7 PT: Exactly. And what we're looking at as the outcome here is the network structure that emerges through this interaction. 'Cause we want to have simple, measurable things, and we don't want to just look at how they're talking or something that's an immediate output of the large language model. We want something that's more an expression of the structure of the platform. And network structures are interesting in the sense that they are very much emergent; they are structural outcomes produced through interaction. So that's what we're focusing on, trying to identify these three attributes. And to our surprise, we didn't actually need to do anything more than provide this bare-bones platform, and we got these three features that are widely considered the problematic aspects of social media. I should say that that doesn't mean engagement algorithms aren't problematic. It might still be that they're making matters worse. But it does imply that removing them will not completely solve the problem.
0:46:43.4 SC: So how many users did your social network have?
0:46:48.2 PT: We ran it with 500 users.
0:46:50.4 SC: 500. Okay.
0:46:51.7 PT: Which is... It's a little bit of a limitation with the approach. It is quite expensive compared to running a conventional agent-based model. And conventional agent-based models have always been criticized for being very expensive to run computationally. So it is kind of a weakness of this approach.
0:47:11.2 SC: And you said that the bad outcomes happen. Remind us what the bad outcomes were? There's a list.
0:47:16.1 PT: Yeah, so basically you get echo chambers. So Democrats and Republicans end up not following each other. They're just talking to themselves. You get high levels of inequality, basically power-law distributions of attention. So a few users kind of dominate the entire discourse. And then finally, you get what Chris Bale has called the social media prism, which is that the more polarized, more extreme users tend to have more attention.
0:47:45.6 SC: Okay, okay. And is there... I mean, it's very, very tempting to speak anthropomorphically about LLMs and to ask about their motivations or something like that. But of course that's illegitimate. They're just faking that. They don't really have motivations. But what is the right way of phrasing the answer to the question, why does an LLM want to follow people on its own political side of the spectrum?
0:48:16.3 PT: So, where these outcomes stem from, I think it's a little bit different for each one. For instance, that we get the power-law distribution, the very unequal distribution of attention, that stems from preferential attachment: the probability of you getting a follower is proportional to how many followers you already have. So I wasn't so surprised that that emerged, 'cause it is a well-known feature of networks, as I mentioned earlier. But there's another dynamic we identify that we haven't really seen studied before, because you do need this special combination of large language models and networks to be able to study it. We know that retweeting, reposting or sharing, is very emotional, very much reactive. We see something that we're upset about or reacting strongly to, and those are the types of things that we tend to share. That is well known, and it's been argued that it shapes the content that we see on social media.
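Preferential attachment is easy to demonstrate in isolation. A minimal sketch of my own: allocate follows one at a time, each landing on an agent with probability proportional to the followers they already have (plus one, so everyone starts reachable). A heavy-tailed follower distribution emerges even though the agents are completely identical.

```python
import random

def preferential_attachment(n_agents=200, n_follows=2000, seed=0):
    """The rich get richer: each new follow goes to agent i with probability
    proportional to (followers_i + 1). A few agents end up dominating."""
    random.seed(seed)
    followers = [0] * n_agents
    for _ in range(n_follows):
        weights = [f + 1 for f in followers]
        target = random.choices(range(n_agents), weights=weights)[0]
        followers[target] += 1
    return followers
```

Even with no intrinsic differences between agents, the top agent typically ends up with many times the median follower count, which is the stochastic inequality described a moment later in the conversation.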
0:49:26.4 PT: But what we're adding to that is that it's not only shaping what kind of content you see, but it's also shaping the kind of construction, the gradual formation of the network structure. And so you have a kind of feedback effect between what is shared, which is very much emotional, and who you follow. And that is what's creating this kind of dynamic where more polarized users tend to have more followers and get more attention because the kind of sharing is part of this kind of feedback process.
0:49:56.0 SC: I understand how an LLM can respond to a query and say something, but did you have to cook up an extra set of instructions for when to follow somebody?
0:50:08.2 PT: We don't provide them with any. It's just like, this is your persona. Based on this persona, how would you act in this situation? And then just have them kind of respond on the basis of that.
0:50:20.0 SC: Okay. And the persona includes their political orientation as well as whether they like fishing or whatever?
0:50:26.2 PT: Exactly. So it contains their political affiliation. And in terms of the echo chambers, it's also maybe not so surprising. We can look at the motivations that the LLM gives for why it chooses to follow someone, and for the echo chambers it's something like: okay, since I'm a Democrat and I feel strongly about it, I don't want to engage with this person from the other side. But I think it's interesting that it goes beyond that individual choice. The emergence of the structural echo chambers is not just because individuals are choosing it, but because their choices impact others: they shape what messages are shared and how the structure of the network is gradually built up.
0:51:08.7 SC: And is there room for the individual agents, as it were, to be affected by their success on the network? Is there audience capture? Do the LLMs learn to be more provocative so they get more retweets?
0:51:24.7 PT: So this is something that I think is really important and I think is really central to how social media is reshaping politics because it's not just that we get more polarized when we go on the platforms, but the platforms are kind of shaping the incentive structures that are shaping society because they are defining who gets attention and who doesn't. And so in that sense, it's reshaping a politics built around being kind of outrageous and so on. But no. Briefly put, no. But this is like a paper that I'm currently working on actually. So very much what I'm interested in.
0:52:00.4 SC: And of course, someone is going to say, look, LLMs are not people. What worries about the limitations of mimicking human behavior should we have in the forefront of our minds when we're using these LLMs?
0:52:16.5 PT: Yeah, so this is, it's a really kind of big debate right now 'cause there's a lot of kind of excitement around this kind of generative simulation. I've tended to be on the more skeptical side of this. So me and my co-author, Mike Leroy, we also have another paper coming out where we were specifically trying to answer this question and like, how much can we trust this and how much can we trust that they're valid as representations of human behavior? And it is kind of... It's a very interesting question to a certain degree because the models are more realistic as representation of human behavior than just a list of rules, but they're also much harder to validate, because you don't know exactly what is playing out and it's very hard to calibrate their behavior to match human behavior. And so it becomes kind of like a difficult question. They're ending up somewhere in between empirical methods that have a high level of validity and formal models that are parsimonious, that are easy to understand and they're neither. And so it's a little bit unclear how we can use them. And the way that we think about that in this paper is precisely, it's linked to the fact that we're not looking at the text that they're producing.
0:53:24.7 PT: We're trying to look at structural features that emerge as aspects of the platform. We're trying to distance ourselves from the agents' behavior and look instead at the structural outcomes of that behavior. And to a certain degree, what we're interested in is the robustness of those outcomes. What do I mean by robustness? Well, similar to the Schelling segregation model... Basically, it's a kind of joke among modelers that whenever you build a geographical model that has any type of lattice in it, you always get the Schelling segregation effect, 'cause it's just such a stable outcome, and you have to really work hard to study any other effect. And that's the kind of effect that we're interested in: seeing how robust these emergent outcomes from the platform are. So if we were just looking at the toxicity of their conversation, I would feel less confident that this is something actually playing out in the real world. But given the robustness of these emergent patterns, I have more confidence. That being said, we definitely also need to do more studies on this.
0:54:37.9 SC: Well, it does seem like the robustness from Schelling to your LLMs, I mean, these are sort of different individual small scale dynamics giving rise to similar large scale behavior. It sounds like there should be a theorem, like the second law of thermodynamics, but about polarization or segregation or something like that being the state with lowest free energy. I don't know what to call it, but are there theorems like that in the statistical mechanics of social media?
0:55:09.3 PT: So there are people doing these much more physicist approaches, kind of opinion dynamics. I have a colleague here who's basically doing opinion dynamics wrapped into Ising models in order to understand social media. My approach tends to be a little bit more based on empirical data, trying to stay closer to the system. But I think it is generally hard, unfortunately, to find those kinds of laws in the social world, because it is such an open system. And linking back to the question of whether society is a complex system or a complicated system, I would argue that it's neither, or it's both to a certain degree.
0:55:55.8 SC: And one of the features of polarization that is a little bit weird to me in the United States is how even it is. We have essentially always in the history of the USA had two political parties with roughly 50% representation each. It never goes to like 70, 30 or anything like that. That's not something you can test in your model. I mean, you basically baked in into your initial conditions how many Democrats there were and how many Republicans.
0:56:25.6 PT: Yeah, no, this is not something I would capture in this model. In general, I'm thinking of these models as just trying to somehow capture one mechanism. And as soon as you start having a lot of mechanisms, it's very, very difficult. So in certain sense of like how the outcomes of this interaction feed back into society and how it drives polarization, that's to a certain degree outside the bounds of this model.
0:56:53.0 SC: So you said that you were tracking like the statistical results, not the individual words that the LLMs were putting into their posts or whatever. But do you know that those backstories you gave them, like, this person lives in Boston, they enjoy theater or whatever. Did that matter? Did that affect how they sorted or polarized or succeeded in the social media game?
0:57:18.1 PT: I mean, to a certain degree, yes, because if I hadn't given them any sense of whether they were Democrats or Republicans, they wouldn't be able to act as that, and you wouldn't trigger this feedback effect. But whether they know if they like to go fishing or not, that probably doesn't matter.
0:57:36.8 SC: I don't know. I would be curious to know.
0:57:39.8 PT: I think of that and more as a kind of a little bit of noise or a little bit of perturbation to make them not only act on the basis of their political personality, but that there's also like, okay, so he likes fishing. He's talking about fishing. I like fishing. I'm going to follow him. So it adds a little bit of that kind of noise and the fact that our lives are not just politics. That's just a small part of everything that we are. We have much more rich identities than that.
0:58:08.4 SC: And you injected news, basically? Is that like, there was an external source of perturbations that said, like, this news event happened or whatever?
0:58:17.9 PT: Yes, exactly. So basically, if you're just letting the agents talk without giving them something to talk about, it just becomes the most generic, uninteresting conversation. And it also doesn't create this richness that functions as a kind of noise. So we focus on a certain day, and we got all the news from that day. And then we present them with a random selection of those news stories and have them discuss it.
0:58:46.5 SC: Okay. So you didn't totally make up the news, you actually were inspired by real news.
0:58:50.4 PT: Exactly. Yeah. We got real news from a particular day.
0:58:54.3 SC: And so I guess you mentioned the power law distribution of attention. So some of these, they're all LLMs, but some of them get a lot more followers than others. Is there any sense in which some of them are just better at social media than others? Or is it purely statistics and randomness?
0:59:15.1 PT: So I would say that it's pretty much stochastic. I wouldn't say that it's just randomly one of them happens to be a great influencer as such. But there are... I mean, being more political and being more extreme does help to become more influential. But it is, I would say, pretty much stochastic. And I mean, that fits also, there's been various experimental studies on these power law distributions and how they can emerge in systems just through the feedback effects. And it doesn't need to be any difference between the things that are being selected. You still get these power laws and it's just kind of random.
0:59:55.1 SC: So did we learn anything about how to make the world a better place through doing this? Does this help us suggest any ways to make social media better?
1:00:04.1 PT: So, I guess we didn't really mention the interventions that we tried out, because we built this platform and then we saw these negative outcomes, and that gave us the baseline where we could ask: okay, can we fix this problem? That became the next step. So we looked at the literature, at what has been suggested, what people are optimistic about in terms of solving these problems. And basically we had a wide variety of more or less sophisticated solutions that have been presented. One of them was the bridging attributes that Jigsaw, a subsidiary of Google, has released. Basically, they analyze the content of messages, and if you're a social media platform, you can use that to sort your newsfeed so that the most constructive comments are the ones that you show, instead of the most upsetting or most partisan ones. So that's one example. Another example is a smaller intervention: just hiding the biography, the little description of the agents, when they follow each other, so they don't know whether the other person is a Democrat or a Republican, for instance.
1:01:29.2 PT: And another is just sorting chronologically instead of showing the most shared posts. So basically we tried out a bunch of those solutions that have been suggested, but I should also say that we are doing fairly extreme versions of them that wouldn't necessarily be realistic to implement on a platform. For instance, we have one algorithm where we show the least liked posts first, which would probably lead to a really awful platform if you implemented it in the real world. But we wanted to see the most extreme versions of the solutions. Unfortunately, none of these solutions really fixed the problems that we're observing, and some of them actually make matters worse. For instance, the chronological timeline actually leads to more of a social media prism, where more extreme users get even more attention. And so what we take from this is that these emergent phenomena seem to be very robust to perturbations. It's basically a little bit like the Schelling segregation effect: a very robust emergent phenomenon.
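The interventions described, engagement ranking, chronological feeds, bridging scores, even least-liked-first, all amount to swapping the feed's sort key. A sketch with hypothetical post fields of my own choosing (`time`, `likes`, and a precomputed `constructive` score standing in for something like Jigsaw's bridging attributes):

```python
def rank_engagement(posts):
    """Baseline: most-liked posts first, the attention-maximizing default."""
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def rank_chronological(posts):
    """Newest first; in the simulation this made the prism effect worse."""
    return sorted(posts, key=lambda p: p["time"], reverse=True)

def rank_least_liked(posts):
    """Deliberately extreme intervention: least-liked posts first."""
    return sorted(posts, key=lambda p: p["likes"])

def rank_bridging(posts):
    """Most 'constructive' first, assuming a per-post bridging-style score."""
    return sorted(posts, key=lambda p: p["constructive"], reverse=True)
```

Framing the interventions as interchangeable ranking functions is what lets one platform simulation test many proposed fixes against the same baseline.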
1:02:47.8 SC: Yeah, and you don't want to have a social media network that just tells every user who to follow. You need to give them some agency there.
1:02:55.8 PT: Yeah, it's also the question of if people are going to use the platform or not. But basically, I mean, to me, what this suggests is that this basic structure that we see across social media platforms, where you have a network, you follow people, you repost things, that that tends to be linked to these problematic outcomes.
1:03:15.9 SC: Well, I guess that was where I was going to go. You already sort of said this is hard to study, but is it something truly new, these social media things? We used to be happy just getting the conventional news on one of the three network stations on TV, and now we have a lot more variety in what we can listen in on. But has this had a big effect? Is it really, like, you can see that it has an effect, but I guess how much of our current political mess can we be tracing to this? I know it's a hard question to answer.
1:03:52.2 PT: Yeah, I mean, it's ultimately impossible to know. And to a certain degree, it's also a bit tricky treating social media as something that's external to society and that happened to society. Because to me, the way social media is structured is very much an expression of, coming back to Fordism, the transition from an industrial society to a post-Fordist society, where the focus of capital is advertising and figuring out information about you: not catering to a mass market where you're selling the same product to everyone, but really trying not only to identify consumer niches, but even to create consumer niches. And that basic fact of how the companies make money, the business model underlying social media and the internet, has very much shaped what social media has become. Of course, it's also feeding back, but it's very difficult to say how social media would be different if it weren't for that context.
1:04:56.2 SC: Well, I guess it's the feeding back I was going to mention very briefly. Like, it's not just that you have social media in addition to mainstream media, but the social media affects the mainstream media. They want those clicks too.
1:05:09.0 PT: Yeah, exactly. I mean, this is something that people often mention: okay, but I can just stop using social media and I won't be affected by these negative consequences. But of course, that's not the case, because social media is, as I mentioned, reshaping our politics, and it's very much reshaping mainstream media too. A student of mine, in a student project in my course last year, for instance, looked at New York Times headlines over time and measured how click-baity they are. And basically what he saw was that when social media entered the scene around 2010, there was a jump: the New York Times changed how they wrote their headlines. That's just one expression of it, but of course the incentives of attention produced by these platforms are reshaping our politics, our media, and our culture overall.
1:06:07.9 SC: And what does click-baity mean? Is it a function of sort of giving less information and saying like, you won't believe what happened next?
1:06:15.3 PT: Yeah, there are actually a bunch of features of text that make it more or less click-baity. But basically the way he did it was to take databases of click-bait news articles and non-click-bait news articles and then train a classifier on them.
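To make the idea concrete, here is a minimal sketch of the kind of headline classifier being described: train on labeled click-bait versus non-click-bait examples, then score new headlines. The student presumably used a real labeled corpus and a standard library classifier; the headlines below are made up for illustration, and the model is a hand-rolled Naive Bayes log-odds score rather than whatever he actually trained.

```python
import math
from collections import Counter

# Hypothetical labeled training headlines (illustrative only).
CLICKBAIT = [
    "you won't believe what happened next",
    "this one weird trick will change your life",
    "10 shocking secrets they don't want you to know",
]
NEWS = [
    "senate passes budget bill after lengthy debate",
    "central bank raises interest rates by a quarter point",
    "study finds link between exercise and heart health",
]

def tokenize(text):
    return text.lower().split()

def train(pos, neg):
    """Per-word log-odds (click-bait vs. news) with add-one smoothing."""
    pos_counts = Counter(w for h in pos for w in tokenize(h))
    neg_counts = Counter(w for h in neg for w in tokenize(h))
    vocab = set(pos_counts) | set(neg_counts)
    pos_total = sum(pos_counts.values()) + len(vocab)
    neg_total = sum(neg_counts.values()) + len(vocab)
    return {w: math.log((pos_counts[w] + 1) / pos_total)
              - math.log((neg_counts[w] + 1) / neg_total)
            for w in vocab}

def clickbait_score(model, headline):
    """Sum of log-odds for known words; positive values lean click-bait."""
    return sum(model.get(w, 0.0) for w in tokenize(headline))

model = train(CLICKBAIT, NEWS)
print(clickbait_score(model, "you won't believe this shocking trick"))
```

Applied to a headline archive year by year, the average of such a score over time would show the kind of post-2010 jump described above.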
1:06:32.3 SC: So, I guess our last thing to talk about: there's polarization, where you had these LLMs sorting themselves in a Schelling-like way, et cetera. But can we say anything about the quality of the information, the truthfulness versus misinformation? Are social media not only leading us to talk just to people like ourselves, but also to get things wrong by sharing mis- and disinformation?
1:06:58.9 PT: So, I would say... I mean, this is not something I'm looking at in this specific model, in part because OpenAI doesn't let the LLMs produce misinformation, so you can't really use them to study that. But I have looked at this in the context of actual social media data. And what I would say broadly is that social media, by removing the gatekeepers that we used to have in conventional mainstream media, and by creating really strong incentives for gaining attention and shaping the conditions for that, produces conditions where not anchoring what you say to truth becomes a beneficial strategy for gaining attention. It allows you to be outrageous, it allows you to trigger people, and you're not really constrained by reality in the same way. And of course, that also becomes interconnected with politics. So I had a paper coming out with my co-author Juliana Chueri earlier this year, where we look at politicians across countries, and we look at their Twitter posts over a five- or six-year period. We look at all the examples of when they've shared links, and we identify misinformation through that.
1:08:27.9 PT: And so we can link each politician to their likelihood of sharing misinformation, and then use that for a kind of comparative model, basically a statistical model to identify the conditions under which politicians spread misinformation. This links to the broader question of the link between social media and the spread of misinformation, which there have been big debates around, especially in the last few years, about whether social media is simply reducing the quality of information overall. And what we argue is basically that it's not just that: social media becomes intertwined with politics, with different political movements. The result is that certain political movements emerge shaped by the interests and incentives of social media in such a way that they use misinformation as a political strategy to gain advantages in political competition. And what we find in that study is that it's specifically the radical-right populist parties that are driving this rise of misinformation. So it's not just a social media phenomenon in itself; it's social media intertwined with politics and political systems.
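The core of the comparative model described here can be sketched as a logistic regression: does an indicator for party family predict the probability that a shared link is misinformation? The actual study surely uses a richer multilevel specification with many covariates across countries; the sketch below is a toy version on simulated data, with all rates and labels hypothetical, just to show the shape of the inference.

```python
import math
import random

random.seed(0)

# Toy data: x = 1 if the politician belongs to a radical-right populist
# party (hypothetical label), y = 1 if a sampled shared link pointed to
# misinformation. The 5% and 25% rates are assumptions for illustration.
def simulate(n=2000):
    data = []
    for _ in range(n):
        x = 1 if random.random() < 0.3 else 0
        p = 0.25 if x else 0.05
        y = 1 if random.random() < p else 0
        data.append((x, y))
    return data

def fit_logistic(data, lr=2.0, epochs=1500):
    """Fit P(y=1) = sigmoid(b0 + b1*x) by batch gradient ascent
    on the log-likelihood (no regularization, two parameters)."""
    b0 = b1 = 0.0
    n = len(data)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

b0, b1 = fit_logistic(simulate())
print(f"odds ratio for radical-right indicator: {math.exp(b1):.2f}")
```

A positive coefficient b1 (odds ratio above 1) corresponds to the study's finding that radical-right populist politicians are disproportionately likely to share misinformation; the real analysis would add country, time, and party-level controls.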
1:09:43.3 SC: So I'll let you give a last big picture kind of thought here. Like, am I getting the impression that social media are just bad, that it was a mistake, that their net effect is negative? Or can we have some shred of optimism to hold on to?
1:09:59.2 PT: I think there are also, to some degree, positive outcomes from it. For certain communities it can be beneficial. And I mean, growing up, I loved the internet. It was great. I grew up in the countryside on this little island in Sweden in the middle of nowhere, and the internet provided a social world for me and allowed me to connect to ideas and everything. To a certain degree, what I would hope for is going back to that innocent era of the internet of the '90s. I was using ICQ, and you had this button where you could click and talk to a random person anywhere in the world, and I just loved that. I spent my days talking to some random person in Arizona. And of course, that was a more innocent time, and maybe if we tried to bring that back, it would lead to something horrible these days. But I do think that we could create structures, platforms and spaces, that would actually be beneficial for us, that would actually be positive. It's just that we might need to rethink it in more fundamental ways than these cosmetic changes to algorithms or designs.
1:11:19.5 SC: All right, that is something; there's homework out there for all the young people. Think about fundamental changes we can make, because social media are not going away. Even if they're a net bad, they also do good, and we're going to have to live with both. So, Petter Törnberg, thanks very much for being on the Mindscape podcast.
1:11:35.7 PT: Thank you so much. This was really great.
[music]
Very cool. Looking forward to the paper he is working on as well.
My question would be: how reproducible are those experiments' results with LLMs, and with a sample size of 500?
Very interesting episode! As a physics guy who moved to engineering, I identify with using simplified computational methods to get reasonably close, knowing you are ignoring many relevant variables. The job has to get done, so you do what you can with a reasonable effort, then rely on testing (or crossing your fingers!). I am wondering if Petter knows about the “virtual voter accounts” (made-up Twitter accounts) that the BBC podcast Americast has been using for a couple of years to try to understand how social media influences voters. He should take a look at that!
One big question for me is the "Bowling Alone" issue. Back to PT's "fishing" interest. Once upon a time, not very long ago, we belonged to groups that were more local and tended to be based on something other than politics: bowling leagues, service organizations (Rotary, etc.), churches, neighborhood organizations, sports teams, and so on. And yes, these had other kinds of sorting/boundaries, but mostly different from just politics, or even class sometimes.
So, this tended to force people to deal with “friends” who happened to think differently about lots of topics – whether politics or religion or even color/gender. The old situation where someone finds out their friend or their child’s boyfriend is gay after knowing them for a while and slowly changes attitude….
So, it would be great to see the LLM based agents where there is a much stronger crossing function perhaps more based on physical locality (ex. bowling leagues) – and see how much of an effect that has to moderate the strong segregation/extreme attention enhancement.
Someone who is as extreme as many online wouldn’t last long in a bowling league or Rotary as they would clearly be perceived as obnoxious and annoying…
Anyway, perhaps thoughts for future experiments…
Great podcast episode…
Pingback: Sean Carroll's Mindscape Podcast: Petter Törnberg on the Dynamics of (Mis)Information - 3 Quarks Daily
Very interesting conversation. My comments would be:
1) It's common to study far-right groups as more radical or prone to hate speech, but I am afraid to say that groups on the left can be pretty extreme too, and hence worth studying; it would be valuable to compare extremist groups across the political spectrum and see how polarizing dynamics play out.
Also, I am not entirely convinced, in general, of the causal links between social media and polarization. Maybe social media make more visible tendencies that already existed in society but were not seen. And the sensationalism of click-bait is also a feature of TV news, and of newspapers before that, so what is now distinctive about social media in this tactic?
2) The interplay between individual preferences and broader structures. His research seems to focus on structures, but it would be interesting to also study models and platforms that want to empower the individual user, like Bluesky. He appears to believe that individuals are structurally overdetermined. Also, there is a distinctly European trust in the state and the legitimacy of the state structure over, say, the social media structure, but this is not guaranteed in many places of the world. In addition, the form of social media is changing. Twitter is not so dominant anymore, whereas Substack and podcasts are on the rise. So the platform model might soon be superseded by another structural form, and this shift in structural dynamics could also be studied. Networks come in many shapes, which is a big advantage over fixed geographical borders.
3) About exposure to a wide spectrum of ideas, and the relation of those ideas to certain values and human rights. It is true that the benefits of being exposed to counter-arguments are valuable (I certainly support J.S. Mill here). But there are also certain red lines, e.g., the growing Muslim population in Europe and some of its views on the position of women. I am sorry to say that I would be in no mood to be exposed to views about having lots of children from age 19, etc. This may sound trivial, but sometimes it is just such simple things: people don't want to revisit old debates.
There is also a certain European tendency toward nostalgia, the idea that the good things are to be found in the past, in the naive early days of every beginning, as he refers to the internet of the 1990s. However, I think we underestimate the significance of the amount of information we have today, and of our access to it. A simple example is how easily you can now search for a study program and get information about the courses, professors, and people you would study with. A tinge of optimism and positivity about our present moment might achieve far more.
And then there is this juxtaposition between groups and broader society. But what is the level of this broader society? National? Nationalism can also be seen as a form of "group polarization," as during the world wars.
There is still a lot of value in social media, in how you can find people across the globe who share your interests, exchange ideas, and expand your knowledge and perspectives; but sure enough, you cannot do that with everyone.