It is not manifestly obvious that universities should be where most scholarly research is performed. One could imagine systems that separated out the tasks of "teaching students" and "generating new knowledge." But it turns out that combining them yields spectacular synergies, both from letting students experience cutting-edge research and from keeping researchers inspired by interacting with bright young minds. Today we talk to Elizabeth Mynatt, Dean of Computer Sciences at Northeastern, both about her own research in "human-centered computing," and about the bigger-picture issues of why basic research is important, and why universities are such good places to do it.
Support Mindscape on Patreon.
Elizabeth Mynatt received a Ph.D. in computer science from the Georgia Institute of Technology. She is currently Dean of the Khoury College of Computer Sciences at Northeastern University. She is a senior investigator with Emory’s Cognitive Empowerment Program and co-PI for the NSF AI-CARING Institute. She is a fellow of the Association for Computing Machinery and the American Association for the Advancement of Science, and a member of the American Academy of Arts and Sciences. She was lead author on the National Academies report, "Information Technology Innovation: Resurgence, Confluence, and Continuing Impact."
Click to Show Episode Transcript
0:00:00.3 Sean Carroll: Hello, everyone. Welcome to the Mindscape Podcast. I'm your host, Sean Carroll. A couple months ago, I did a bonus episode that you might remember on science funding, and I was a little worried when I did the episode in response, of course, to proposed cuts in science funding that the government has put forward. A little worried that it might seem a little dry, a little inside baseball. We were talking about indirect costs, overhead, how different ways of applying for grants played out in the academic environment and so forth. But in fact, I got a lot of positive feedback about that episode. People explained that they knew nothing about this. They'd never heard of any of this stuff, and it turns out that it's really important. I guess maybe I shouldn't have been surprised, because we academics, as much as I love them, my fellow compatriots in academia, we love doing our research. We love doing our work. Some of us love taking that work that we do and explaining it to broader audiences, but mostly we're trained to sit in the office or the lab and do our thing and talk to our colleagues.
0:01:08.9 SC: We're not very good at explaining ourselves to the broader world. And it continues, the assault on the research infrastructure that has been built up here in the United States and the world. So maybe it is useful to talk about how academic research works and its relationship to the important technological breakthroughs that we all benefit from. There's a very clear story where you see the end result of a certain research tradition. You might see the last bit of it, and the last bit of it that leads to some important technological innovation is often carried out in the context of private industry, corporations, right, who want to make money off of something. But there's very often a long lead up to that where important basic research was being done within academia. AI, for example, AI is something that is a big deal right now. We're in the go-go days of AI, but it's an old tradition. AI has been actively explored since at least the 1960s. And since then, it has gone through at least two different episodes of what are called AI winters, periods of time, one roughly in the '70s, another one in the late '80s, where people were giving up, were like, nah, this isn't going to work, this artificial intelligence thing, it'll never happen.
0:02:30.1 SC: Maybe arguably, if it were left to the profit motive, people would have stopped studying AI entirely. But academics are not as driven by the profit motive. They can keep a field alive, keep thinking about it, keep trying to make breakthroughs without worrying about the next quarterly report for their company. So not only can academics do more sort of blue sky research, be more optimistic, look for long shots that may or may not pay off, they can also talk to each other and they can give away what they learn for free, roughly speaking, in ways that would be much harder at least to do in the private sector. So today's guest, Elizabeth Mynatt, is the dean of computer sciences at Northeastern University. And we're going to talk about two things. First, we're going to talk about her own research, which is in human-centered computing, how to make the ubiquity of computers around us more palatable to people, more user-friendly, less invasive, and more helpful overall. But then the other thing is that she's been very active in explicating the ways, I suppose, in which academia has partnered with industry and with the government to generate a hugely effective technological innovation machine.
0:03:55.9 SC: And government and industry and academia all play a crucially important role in this story. Her own expertise is in computer science, so that's where most of the examples that we use come from. But you could tell exactly the same story about physics, about chemistry, about biology, about medicine, about energy, about a whole bunch of things. And there's enough... Beth is very, very good at coming up with examples that are very vivid. So even though it's still a little inside baseball trying to figure out how to justify the system that we have for academic research, it's still also very illuminating about the history and the prospects for the future about how this research is going to go forward. We're not done yet. We haven't come up with all of the technological innovations; there are more yet to come. We still need a healthy collaboration between these different sectors. It's been super-duper successful. We're both a little worried that we've lost sight of that success and are undermining our own successful strategies. But at the end of the day, we choose to be optimistic that we're going to keep it going, or at least we have the opportunities to keep it going and continue discovering wonderful new things both about our universe and about how to build things that help our everyday lives get a little bit better than they were before. So let's go.
[music]
0:05:33.5 SC: Elizabeth Mynatt, welcome to the Mindscape Podcast.
0:05:35.9 Elizabeth Mynatt: Thank you. Glad to be here.
0:05:37.8 SC: So we're going to talk about these big-picture questions about how research done in universities trickles down, as it were, to industry and our everyday lives and so forth. But, of course, for my podcast, I would like to start talking about your own research. You're an active, I guess, Computer Scientist is the right word?
0:05:59.6 EM: Computer Scientist, yes.
0:06:00.0 SC: Right set of words? Okay, good. And I love some of the jargon that goes along with your research activities. One of them is the idea of human-centered computing. So tell us a little bit more about this, especially since you've been doing this for much longer than the last few years we've had this AI resurgence and people have strong feelings about that, but you've been thinking about this for much longer than that.
0:06:23.3 EM: Much longer, back during the AI winter when we didn't call it AI. So human-centered computing is a term, I guess, that's about 20 years old, and it coincided with computing technology moving out of workplaces, official areas of business, and more into everyday lives. And as it was getting closer and closer to mere mortals, then the question was, well, instead of contorting the human being to how they would use computers, how do we actually start designing computers and computing systems such that people would actually want to use them? So my PhD studies were in the forerunner of this, which was human-computer interaction. So it really started with the psychology, human factors of how people interacted with screens. And then the social impacts of these technologies became more obvious. And so human-centered computing has the psychology built into it, but also more the social sciences. So we will do ethnographic work, we will do qualitative research, we will understand deep ethical and privacy trade-offs in how computing shows up in everyday life. And that is the motivation for what is now a pretty robust area called human-centered computing.
0:07:48.9 SC: And are you directly collaborating with psychologists, sociologists, et cetera?
0:07:53.7 EM: Every day.
0:07:56.2 SC: This is a completely unfair question. Is there any way to summarize what we've learned from thinking about this? Is there anything non-intuitive that we figured out?
0:08:05.6 EM: I think the first non-intuitive thing is that people are always the hardest part of the equation. Everyone assumes technology is the hardest part. Understanding what people will do and why they will do it is really challenging. And we get it right sometimes, we get it wrong sometimes. I think one of the things that we got wrong was understanding how deeply personal mobile computing and cell phones turned out to be. When we were looking at them, I worked at Xerox PARC a long time ago. That was back when we called it ubiquitous computing. And when we were looking at these little hand-sized devices, back around the time the PalmPilot was coming out, we really saw these as interchangeable. They were like electronic Post-it notes, digital Post-it notes, and they would be laying around; we really thought of them like paper and pencil. And so we imagined hundreds of them in the environment, but we never really thought of them as being so deeply personal as they ended up becoming. And because these devices are deeply personal, people will share information and use them in ways that no one really predicted back in the 90s.
0:09:24.9 SC: Did you have a PalmPilot?
0:09:26.6 EM: I did have a PalmPilot.
0:09:28.0 SC: I had one. I can't say I ever used it that much. I'm sure that at least half the audience has no idea what we're talking about right now.
0:09:36.1 EM: My favorite story from... So this was the Xerox ParcTab. So again, about PalmPilot size. So a little bigger than your typical smartphone now. And it had a great gestural language, pen language, to interact with it, just awesome. And people immediately started putting little apps on it, and so a calendar was one of the first apps. You could look at it for your calendar. And then the calendar had a little alarm to remind you that you had to go to a meeting. And so socially, everybody knew that sound. So, of course, the first app that someone just kind of added into the mix was a quick gesture to make that sound. So if you were in a boring meeting, you could make the gesture. The sound would go off. You would look, surprised, I'm sorry, I have to get to this meeting, and you would gracefully exit out of the room.
0:10:24.8 SC: Who says human ingenuity is dead? Yeah, it definitely opens up new opportunities. I mean, is there any sense in retrospect that we should have seen the ubiquity of cell phones, pocket little computers?
0:10:41.0 EM: So I always tell people I was setting my status back in 1993. We used a little app, I think it was just called MBone, for multicast backbone. And you could use it to chat over the internet. It was a big deal back at the time. And I would use it to chat with my lab mates. And we had it just because it was a cheap way to always just talk to each other, as we had all disappeared into summer internships. So we were all working at different companies in the Bay Area. And so we would just chat with each other. But what we found was that we could reset the name, the listing of who we were in the app. And so we just naturally started sharing our status. It's like, Beth, I'm out to lunch. Beth, I'm looking for something to do this evening, who's interested. So as soon as those little baby affordances were there on the internet, people started doing that. And so Facebook was many, many years later after that. But it was just this natural inclination. As soon as you let people connect to each other over these types of technologies, they will find really interesting ways to do so.
0:11:51.5 SC: I've definitely told the story several times before, but in the very early days of web browsers, of Mosaic and Netscape, my friends will tell you that I was proselytizing to them. I'm like, this is going to be really, really big, this World Wide Web thing. And they had no idea what it was. And I couldn't explain what it was going to be used for. I would say things like, but look, you could order a pizza using this technology. And they would just say, I can order a pizza already. I don't see what the advantage here is. So I could kind of dimly see the importance of it, but articulating how it comes to be is a very special skill.
0:12:33.5 EM: I was right, but nevertheless felt embarrassed. I was in a National Academies meeting looking at the future of broadband. And so this was all about arguing the size of the pipe for downstream content versus the size of the pipe for upstream content. And watching Seinfeld was the killer app. The focus of the committee was really looking at it from a cable broadcasting point of view. And most of them didn't have real Internet connections. And I'm in there playing with the Internet just as a normal user. And I'm like, people are sharing news. People are sharing baby pictures when babies are born. And they looked at me like I was an alien with three heads. And I said people are going to start getting their news from their social feeds as opposed to traditional news organizations. And they all but kicked me out of the room for being so heretical and so ridiculous. Because why would anyone listen to a bunch of random social feeds for their news when they could go to the major channels or the major newspapers?
0:13:47.6 SC: But I get the feeling nowadays that we've seen obvious successes with the Internet, with Facebook, with social media. It's almost like it's the other way around, like that people make these dramatic proclamations about how this is going to change our lives that maybe you have to learn to draw back a little bit. It's not always, you know, obviously virtual reality is one that I still believe in going forward. But more than once, we've been told any day now that's going to be the next big thing.
0:14:17.4 EM: Any day now, the metaverse, we're all going to walk around with these headsets on. So great science fiction, but part of what we'll talk about today in the study we did is sometimes it is decades in the making in terms of understanding when these technologies will have such an impact. And oftentimes it will be a different application of that technology that really takes hold. So VR, incredibly powerful now in design. Automotive design relies on it day in and day out. But it's just not the sexy consumer walking around the streets version that we hear about in the media.
0:14:59.8 SC: Well, does your experience with handheld devices, cell phones, and so forth give you any insight into what to expect down the road in terms of augmented reality headwear or even brain-computer interfaces that we all are going to have someday?
0:15:20.4 EM: So Fred Brooks, a famous scientist, software engineer who worked in virtual reality, said never underestimate the laziness of the user.
0:15:34.3 SC: Good.
0:15:35.3 EM: And there's a tale where he walked into this fancy VR environment, and you could walk around the environment and interact with things. And the first thing he requested was a stool to sit on, and that he would sit there and he would revolve the environment around him. So I think a number of these technologies, with BCI and all of this, are actually just more work than they're worth. And you can get little bits of information via your texting or whatever media app that you're on. And that's enough for people to do what they want to do. And if you're asking them to do more than that, no.
0:16:24.3 SC: So one of the most difficult things to forecast, if that's what we're in the business of doing, is timescales, right? Like maybe something will happen, but to say this can happen next year versus 10 years or 100 years from now is really, really hard. Is that something we're getting any better at?
0:16:43.6 EM: There are projects now that are trying a machine learning approach to predicting when technology innovations or when research will have an impact. I think one of the things that we've seen is those predictions tend to have a very narrow funnel. So they look at a roadmap trajectory of, for example, the accuracy of a particular AI model. But what they need to also take into account are all the contextual factors, right? So the business readiness and the availability of hardware and even the particular regulatory side of things. And it's really difficult to predict when the stars align. And that's the challenge. Healthcare is one of those fields that in some sense is frustrating because there's so many innovations on the technology side, but because the regulatory and the business side hasn't caught up to that, many things that are possible never make it out into our day-to-day experience. So predictions are very complicated.
0:18:00.4 SC: They are, especially about the future, yes. But healthcare is an important one. And you already mentioned this phrase, ubiquitous computing. My impression is that one of the things that you and your people have been working on is how to help people who might be medical patients or just elderly people, how to design their homes so they can live more independently.
0:18:22.6 EM: So the longest thread of my work, and it's changed names. It was ubiquitous computing in the '90s, and then it became pervasive computing. IBM rebranded it at that point in time, and then it became the Internet of Things. And now I'm not actually sure what it's called anymore. But those are at least three names. And the gist is if you have technology out in the world, out in your environment, and for me in particular in people's homes, how can technology provide supports and services, awareness, in my case so that older adults can age with more independence, sustain quality of life, in contrast to moving to assistive care or other types of more institutional settings. And it's a wonderful mix of we can see the potential, right? So human behavior tends to show up in step functions. You seem to have perfectly reasonable control of your life, and then things fall apart.
0:19:23.2 SC: Yep.
0:19:24.7 EM: And in reality, it's actually a long, slow decline, but we compensate so well for it that it masks what's actually happening. So there's a lot of things in the environment that could notice, for example, cognition slowly declining. And then how do you build supports or services around that? How do you, for example, flag that this person is going to be more vulnerable to fraud and scams? And maybe turn the knob a little bit: a little less privacy and a bit more control by family members or other institutions to keep a tighter rein on things, something you wouldn't do to someone in their 30s, but now in their 70s, it's like, okay, there's more risk here. So that's that human-centered computing approach. The technology can monitor, the technology can provide services and supports, but when and how to deploy them is deeply a human question.
0:20:33.6 SC: Well, and this is where the human side does come in. I can only think of my mom, who is an older human being who lives very successfully and independently, but is of a certain age. And the apartment building she lives in, number one, you can't take the elevator without your cell phone and an app on it. And number two, you can't pay your rent without a different app on your cell phone. And at a certain age, like, you don't want to do that. You don't want to live like that. It's making your life much harder. I'm wondering if there's a balance between the enthusiasm of trying to help people with new technology versus adapting to the fact that not everyone always wants it there.
0:21:15.9 EM: This is one of my longstanding rants: age-proofing technology. So, online banking, online services, right, they're great, but we can measure the psychological impact, the cognitive impact, every time a new update comes out. And so my mom does online banking, and I'm doing more and more of it for her, because, I don't know, about every quarter they whip out a new interface and everything is different to her. And climbing that hill again to learn yet a different way to pay her bills online is just too much. And so it's... Technology can make a difference, but our model of how we spew technology innovations out into the world, which is mostly based on 20-year-olds, is not the right model for how we would design successful solutions for older adults. So it's a real challenge to line up those stars.
0:22:17.8 SC: Yeah, and I'm wondering, like, when we get to the nitty gritty, how does an effort like that play out? I mean, are you sort of exhorting corporations or institutions to adopt better practices, or are they coming to you trying to figure out what the better practices are, or is it sort of natural, or do you have to really nudge them?
0:22:42.2 EM: It is a little bit of both. There is definitely exhorting, and I can't say that that's terribly successful. Every so often, a company will pop up and say, healthcare, older adults, huge market potential, right? How do we do this? And there's a few out there right now that are constantly advertising themselves to me. So you do get kind of the commercial innovation waves that look at this. But healthcare is, in some sense, the most complicated because it is deeply regulated. And so, for example, if having an extra set of technologies in the home would be better preventative care for an older adult, who's paying for that? Where does that come from? And where does the technical support come from to keep things up and running? So as I said, it's tantalizingly close. You can see, right, we have occupational therapists, we have even independent living environments where these technologies can make a difference. But it's lining up the business model alongside the technology capabilities and that long-term technology support. Because, yeah, having to use a cell phone to access the elevator seems like a bit much.
0:24:06.3 SC: It does seem like a bit much. Well, apart from elderly or medical uses, the Internet of Things, I think, in a lot of people's minds has grown a little out of control, right? Like where you can't buy a toaster without it being Wi-Fi connected. There's clearly separate incentives, and I think we'll get to this later. The corporate world has a different set of incentives than either the consumer's or the academic world might have, right? Like they want to collect data on us. They're putting AI and other Wi-Fi connectivity into everything. Not everyone wants it. Is this something that... Is this a research program for people like you?
0:24:49.0 EM: Very much so. There's the old saying, right, nothing's free. If you think it's free, you're the product, right? And it's your data. And I think it... I am hopeful, perhaps forever the optimist, that some corporations will start to realize that simplicity and privacy is a selling point as opposed to... At some point we've gleaned as much data as we can out of these systems. But my mom was having to purchase a new washing machine this week, and my goal was like the simplest thing. Like can I please have a knob? Like no apps, no Wi-Fi, right? Just put your clothes in and be able to wash them. So this exhortation or showing business models where simplicity and privacy start to rule the day is something that I'm hopeful about. And in the meantime, folks here at Northeastern University and others are doing quite a bit of work in understanding the challenges around Internet of Things. And it's in some sense a weakest link challenge from a security and privacy point of view. So you have a bunch of things in your home, all Internet of Things, all Internet connected. They're all behaving, and you get one bad actor in the home, and it's grabbing information from everything else and then broadcasting out to the world. Like, just one bad actor breaks the security and privacy safety net. So lots of work to be done, lots of recent work that we've done on how all the signals that your car, your automobile, is sending out about you, it's also now an Internet of Things place that has tons of information that is being broadcast.
0:26:41.2 SC: And I get the impression that there's a certain amount of exhaustion that sets in on the part of the user to try to keep up their vigilance about this. I don't know if you saw literally just this morning I was reading about WeTransfer. Did you read about that? So WeTransfer is a way that you can send big files over the internet. And so they just sneaked a new paragraph into their agreement that says, oh, by the way, we can train our AIs on any file that you transfer using WeTransfer. So it's not even like we can train AIs on books or public things, like, your private conversations that you send or video or whatever is now there...
0:27:23.1 EM: It's all now fodder for AI?
0:27:24.2 SC: Yeah. And people are outraged by this. But it's some kind of collective action problem. Do you work at the level of government and legislation to try to help people?
0:27:36.5 EM: We do. So faculty from my university, so I'm dean of the Khoury College of Computer Sciences at Northeastern University. And our faculty, part of what they will do under, I guess you would call it a sabbatical, is we've had folks working in the White House with the previous administration. We've had faculty working in the Senate and faculty working at DOJ because it is this intersection of business models, regulation, and technology. And it does feel exhausting because you think you've got everything set, and then some new term of use sneaks in. So you have to put regulation forward. And that's why I'm very distressed over these conversations around no regulation around AI, "because we have to be able to innovate as quickly as possible." Like, are you kidding me? Because there are so many violations of trust and privacy and security that are possible with these technologies. And we have certainly shown that in decades of innovation up until now, we have been able to manage that with regulation in place, and we can do so again.
0:28:53.4 SC: And at the level of the researchers and the people building things, I realize that one of your initiatives at Northeastern has been to push the idea of ethical computing or at least train people that there is an ethical dimension, not just a puzzle-solving dimension to this kind of work.
0:29:12.7 EM: Yes. So when we were talking about human-centered computing, it's not just the psychology and the human factors, but it is the social sciences, including ethics and philosophy. And that is important to not only expose that to, for example, every single student that comes through our program, but then also to foreground that in our research. So one example from my work in aging in place, I joke like we can keep older adults safe, right? You just wrap them in bubble wrap and don't let them do anything, right? You can create a prison of a home that's perfectly safe, but no one would want to live there. And so there are ethical tradeoffs of risk. There's ethical tradeoffs of autonomy. There's ethical tradeoffs of, well, who's in control of this information and how these services work? Because it probably is the kids of an older adult that are setting things up and paying for things. Okay, well, who's the user then? Is it the older adult who has this in their home or is it the person who's paying the bills? And then, well, what about the insurance companies or the medical industry, right? So there's huge tradeoffs that have to be made. And so having a background of ethical frameworks and foregrounding that into technology innovation, I feel should be a requirement of every student studying in this field.
0:30:45.7 SC: Do you find that the students agree with you about that? Are they happy to have this kind of training or do they think it's a distraction?
0:30:52.3 EM: It has really shifted. When I started teaching ethics and technology, again, probably around the 90s, turn of the century, the students hated it. They were complete technology determinists. It was, technology is neutral. It's just what people do with the technology. I just do the coding. That's someone else's problem. And I kind of went and was teaching other classes and then came back to this class a decade later. And by that point, Facebook had happened and so many things had happened. And the students are like, yeah, no, technology is a hot mess. Give us tools and guidance. And I think that's part of the challenge is what are the tools? Because you can have a heady understanding of this, but then you still sit down to write code. And there's a big gap there. And so how do the tools, for example, help you make sense of the data that you're collecting and that you're relying on? And how do they foreground those conversations in the software engineering and the design processes? So that gap between avoiding "unintended consequences" and actually creating technologies, that gap is pretty big. So that's something that we work on every day is closing that gap.
0:32:12.2 SC: It's good. I always like to hear that the students are a little bit more aware these days than they were in the past, which makes perfect sense. You're exposed to more things and you discover them. But it's a good segue into the other thing I wanted to talk about with you, which is you're talking about research being done at universities. You're talking about training students. There's a partnership involved between universities and industry and government to sort of do all the different aspects of technological development. And you're, I guess, the lead author on this report from a couple years ago from the National Academies on how that happens. So before we get into the details, what's your big-picture view of this? Is it a healthy set of relationships between universities, industries, government? Is it working?
0:32:59.6 EM: Up until a few months ago? Yes, yes.
0:33:02.4 SC: I knew you were going to say that.
0:33:04.8 EM: Yeah. So it's a 2019 report, a long title, Information Technology Innovation: Resurgence, Confluence, and Continuing Impact. And I'm the lead author, was the chair of the National Academies study. We finished it during the pandemic; it was our great pandemic project. And it's part of a series of reports that have tackled certain myths out there about how technology has an economic impact in the U.S. And it really points to this amazing partnership, this ecosystem that has evolved since the 1950s of universities, federal government, and industry working together for really significant technical impact that we've seen over the decades in the United States.
0:33:57.3 SC: And so what are your favorite examples of universities doing research without necessarily any obvious technological or application-centered aims that nevertheless turned into something crucially important?
0:34:12.8 EM: So there are some things that are home runs. The report looks at, right, there are these folks who had this research at Stanford around, kind of at the edge of the internet, and then, a few years later, Google pops out the other end, literally walking down the road. Very few things are that clean cut, but sometimes there is something quite direct. One of the ones that I have enjoyed talking to folks about is most folks would not think of dairy farms as a major area of technical innovation, but they are; the success of dairy farms in the U.S. is strongly tied to it. So back in the 50s to the 70s, there was this new technology called radio frequency identification. It had been used in World War II; it was a transponder, the other aircraft would chirp back, and it was a friend-or-foe type of technology, right? So wartime use. And somehow some researchers at Cornell said, this could be useful for cows. And they developed and miniaturized the technology to the point that RFID tags for a long time were just known as cow tags because that turned into the dominant commercial use.
0:35:42.8 EM: Each cow has its own little RFID tag, and it would get tailored feed based on monitoring milk production. Very straightforward data science problem at that point. And no one, I'm sure, in the creation of friend-and-foe technology that was being done in World War II thought that these tags would show up now pervasive across dairy farms. And that was enough of a push to keep that technology going, because now it's in every single inventory system, and it's pervasive now, ubiquitous. When I used these tags at Xerox PARC, we called them cow tags, because that was the name that stuck. And we were coming up with other things we could do with cow tags. And so now you have cows wandering around with these tags, and the milking machines are all, they're just robots. That's what they are now. And so again, you have decades of people creating robots that can open doors and manipulate objects. None of those researchers were thinking about the robot latching onto a cow to milk the cow. But you have all of these investments in the robotic milking machines, and what has been incredibly helpful for the dairy industry, which has had huge labor shortages, is they can continue to be economically self-sufficient with now this very healthy business ecosystem of robotic milking machines.
0:37:18.2 EM: And funny enough, the robotic milking machines rely on the tags. So a cow wanders up, it knows which cow it is, essentially knows how the cow likes to be milked, and can monitor, and of course the robots clean and do everything with that. I didn't know this about cows. Cows like to be milked about every 12 hours. It's kind of hard to have a labor force that wants to be there for the 7 AM milking and the 7 PM milking, if you're lucky. So the cows actually just wander in to be milked on their own schedule. Robots take care of it. They do all the monitoring, cleaning. You get better production, safer, faster, all of these things through cow tags and robots. And who knew that all of those decades of investment in these areas would be a huge boon to the American dairy farm?
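[The "very straightforward data science problem" Mynatt describes, an RFID tag identifying each cow so the system can tailor its feed ration against monitored milk production, can be sketched roughly as below. This is purely illustrative: the tag IDs, yield targets, and adjustment rule are invented for the example, not taken from any real herd-management system.]

```python
# Minimal sketch of the RFID-driven feed loop described above.
# A tag read identifies the cow; recent milk yield is compared to a
# per-cow target, and the daily feed ration is nudged up or down.
# All tag IDs, yields, and feed numbers here are made up.

from dataclasses import dataclass, field

@dataclass
class CowRecord:
    tag_id: str
    target_yield_l: float                 # desired liters per milking
    feed_kg: float                        # current daily ration
    recent_yields_l: list = field(default_factory=list)

def record_milking(cow: CowRecord, yield_l: float, step_kg: float = 0.5,
                   min_feed_kg: float = 4.0, max_feed_kg: float = 12.0) -> float:
    """Log one robotic-milking session and adjust tomorrow's ration.

    If the cow is producing under target, add feed (up to a cap);
    if over target, trim feed (down to a floor). Returns the new
    ration so the feeder keyed to this RFID tag can dispense it.
    """
    cow.recent_yields_l.append(yield_l)
    window = cow.recent_yields_l[-3:]     # smooth over the last few sessions
    avg = sum(window) / len(window)
    if avg < cow.target_yield_l:
        cow.feed_kg = min(cow.feed_kg + step_kg, max_feed_kg)
    elif avg > cow.target_yield_l:
        cow.feed_kg = max(cow.feed_kg - step_kg, min_feed_kg)
    return cow.feed_kg

herd = {"TAG-042": CowRecord("TAG-042", target_yield_l=14.0, feed_kg=8.0)}

# The robot reads the tag, milks, and logs the session.
new_ration = record_milking(herd["TAG-042"], yield_l=12.5)
print(new_ration)  # under target, so the ration steps up
```

[The point of the sketch is the tag: without a stable per-cow identity at the milking stall, there is nothing to key the feed history and yield history against.]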
0:38:09.2 SC: And it's a good example of two stages of innovation. You had to invent it for whatever purpose, and then someone else had to say, oh, we can use this for a completely different purpose.
0:38:19.2 EM: And that's the magic of universities. Universities sit at these intellectual cross streets. Someone is doing something, military funding, I'm sure, and someone is learning about it in the class, and that university, Cornell, happens to have an extension program that works with local industry, so that's dairies. And so it wanders through the university long enough for someone to say, hey, I could do something with that. And then that becomes cow tags. I'm being trained in another university, starting to learn about this idea of ubiquitous computing, but I'm really interested about homes and families. So I start taking cow tags and start putting them on different objects in the home and see what we could do with them there. So universities are these places where people get exposed to research, they get exposed to concepts in their classes, they work in laboratories that are essentially time machines into the future, and then they graduate out and go into all sorts of industries going, hey, did you ever think about using cow tags?
0:39:33.7 SC: I almost have difficulty asking the right questions for this kind of topic, because it's so obvious to me that this is important and helpful and useful, but maybe I look out at the world and it's not obvious to everybody else. But one of the ways, and I'm sort of partly paraphrasing what you just said, but maybe you can expand on it. One of the ways in which universities are different than industry is that the goal at the university is to create new knowledge and then get credit for that, whereas the goal in industry is to make some money. And so sharing your ideas back and forth is a virtue in one of those contexts and maybe dangerous in the other one.
0:40:14.1 EM: Exactly. And that's why, when I refer to this report, it really is combating these myths. So one of the myths is the Silicon Valley garage: that innovations pop up when some industry folks get like-minded about something and then out, poof, comes a new idea. And the report shows that that garage was just the last 10 minutes of years of work, and the people in the garage got trained in the universities and got exposed to a whole bunch of crazy ideas. And then they have the know-how and the insight to come together and make a business proposition. And then they have a very narrow, and appropriately so, kind of path to follow of bringing something to market and having a return on that investment and making a profit. Universities are the opposite of that, right? They are rewarded for thinking big, thinking long horizons, doing something that is foundational, doing something that is creative. We get rewarded on creating something people hadn't done in the past. And we get rewarded on how much other people read about it. So it only counts if you get it published and other people are reading your publications and citing your publications.
0:41:39.5 EM: So we have an entirely different economic model of why we do what we do. And universities are strangely highly efficient, because you don't think of universities as an efficiency-oriented place. Universities are highly efficient because we have these dual business models sitting right next to each other. We have the business model of creative research, which convinces people to work long hours at much lower salaries than they could get in industry, because they're committed to the mission. And then the second business model is education, where people pay us to come hang out and learn things from us and get exposed to new ideas. So you put these two business models next to each other, and that, the modern American university since the 50s, has been the foundation for massive innovation and economic growth in the country.
0:42:37.1 SC: It's an interesting subject because the dual business model idea, universities provide education, they also provide research. Those aren't obviously compatible with each other in some sense, right? I mean, it seems to have worked, but sometimes I think... Like I wrote a blog post years ago, the purpose of Harvard is not to educate people.
0:43:00.9 EM: Yes.
0:43:02.7 SC: I mean, the students are great, we love having students, but kind of we're here to get our research done is the attitude of many faculty members. Do you think that this gluing together of these two different missions is long-term sustainable and compatible? I know that increasingly a lot of universities have their tenured faculty doing research and they teach using adjuncts or lecturers and things like that, and I worry that that's a fissure that is going to grow over time.
0:43:31.7 EM: It has certainly been a set of tensions that universities have had to manage. And I think part of this comes from my 25 years at Georgia Tech, Georgia Institute of Technology, and now I've been here three and a half years at Northeastern University. We're very much an R1 university doing top-tier research with a lot of students, thousands of students coming through. So Harvard is at the end of one very long tail, and other universities that may not be R1, or may be primarily teaching colleges and such, are at the other end of the spectrum. But the heart of the curve is the research university that is public-facing in its educational mission. And what we've especially seen with the National Science Foundation over the past decade or more has been to lean into that. So Panch, the previous NSF director, would talk about the missing millions and really pushed that research investments went out across the United States, not just to the top 10 universities, but getting out everywhere, because that magic of educating students in research, bringing that as close together as possible, he would just say that's where the magic happens.
0:45:00.8 SC: Yeah. And do we do a good enough job in universities at letting different parts of the faculty talk to each other, letting faculty and students talk to each other? I mean, there is a siloing effect in academia, where we're in our own departments and judged by our own criteria sometimes.
0:45:17.4 EM: I think universities have, especially in the past decade or more, really leaned into that interdisciplinarity. Interdisciplinary research, interdisciplinary education is, John Seely Brown would say, where the white space is, right, kind of where the interest is from the students. So, for example, at Northeastern University, over 50% of my students are in a combined major. They're majoring in computer science plus psychology or biology or game design or law or ethics, right? So that's part of what we're seeing in the market: students are more and more interested in how these combine, and universities may be slow, but they also respond to that. It is where the disciplines cross that things get really exciting. So I think most universities have lowered the barriers or increased the reward function for interdisciplinary research. I mean, I remember at Georgia Tech, my previous institution, we saw the inflection point: historically, most grants were single-investigator grants, and they were kind of heads down, domain focused. And we saw it in the data in the 90s when it had flipped to multi-investigator, multi-discipline focused. And so there has been a back and forth between the funding agencies and the research community to move us in that direction.
0:46:57.6 SC: And the other thing I guess that universities are really good at is taking time, doing things that don't have an obvious payoff right away, right? The blue sky, the basic research stuff. I'm spoiled because, or at least I'm biased, I suppose I should say, because I'm an early universe, fundamental physics, physicist, and also a philosopher. Nothing I do has any application technologically whatsoever. So of course, I think this is great. From a more objective point of view, how important is that basic pie in the sky, blue sky research kind of aspect of things?
0:47:39.0 EM: It is amazing how incredibly important that is. Part of what we look at in the report, we call it the pattern of resurgence, right? There'll be a flurry of activity in an area, and then there's no commercial application or it falls out of favor. But there's always someone who's just continuing to work on that, whether it's different vaccine techniques, which we saw during the pandemic. Neural networks, right? This is the heyday of generative AI right now. Longstanding ideas, right? There's a reason we refer to AI winters and summers, because sometimes the winters were long. And there was no money in industry. Very little, because there was no immediate business application. But those stubborn university researchers just dug in and said, no, no, no. The brain is a network, the world is based in networks, these networks will work.
0:48:37.9 EM: And it turned out that we needed enough data and enough computational horsepower to put these things together to now see what looks like, it feels almost instantaneous, like it just appeared. What just appeared was decades, decades and decades in the making. Modern exchange of information on the internet, the foundations of e-commerce, right? Number theory, right? Just, again, people working on the mathematical foundations of information science, they weren't thinking about Amazon, right? Amazon didn't exist. They weren't thinking about any of that, but they were creating the theoretical foundations, which now leads to secure e-commerce that people use day in and day out.
0:49:26.1 SC: I love that point that you made in the report about areas of research that were exciting and then sort of lay fallow for a while and then turned... Like they were just ahead of their time, right?
0:49:38.2 EM: Mm-hmm.
0:49:39.0 SC: And that sounds great, but then part of me worries, how do you distinguish that between that and a failed research program whose advocates are just holding on because that's what they've been doing their whole careers?
0:49:52.9 EM: So, the good news is that there's natural limiters in the system. And so, if you're working in an area and it's just not gaining traction, at some point, it's like, all right, I have to keep publishing and I have to be able to attract students because that's my currency. And people will shift into areas where they can continue to participate in the economic system at the university, publishing and training students. So, there's natural forcing functions that an area may go very quiet, and sometimes it's just you look at it in a different way. Virtualization was a technology that was initially about just supporting multiple jobs running on the same machine. So we're back at punch cards, right?
0:50:43.4 SC: Yeah.
0:50:46.4 EM: And virtualization went fallow for a while because punch cards disappeared. And at some point, a new group at Stanford looked up and said, hey, we would like to be able to share compute for a completely different reason, to run different operating systems. And then that very quickly turned itself into the basis of cloud computing. But you had to have a university kind of looking at, all right, what's a new way to use this other thing that we have on our bench, and to play with it in a new way, because there was no way that would come out of industry; industry would say, virtualization? Well, that's dead, that's a dead technology now. It got reinvented. And it got reinvented because people could ask those creative questions before it could become commercially viable.
0:51:32.1 SC: I guess there's still this problem, though, and this is not a rhetorical question. I truly don't know the answer to these questions. How do you decide how to allocate grant money when one proposal is right in the middle of an exciting renaissance and the other one is like, okay, you've been doing this for 20 years and it hasn't paid off yet? Like, I'm in favor of both, but deciding who gets how much is really hard.
0:51:58.2 EM: It is a really hard question, and there is no way that one can argue that the system would ever be perfectly optimal. There is no Minority Report of research, right, where you can forecast ahead of time. So, what you do is you start to balance across other criteria. So, I talked about Panch and the missing millions, balancing the criteria of making sure that different universities had a healthy amount of research, because that would just provide benefits into the state and into their local communities. We have programs that emphasize junior researchers. All of my junior faculty are about to submit their NSF CAREER proposals, right?
0:52:43.0 SC: Great. Yeah.
0:52:44.4 EM: This is a program that is just for pre-tenure junior faculty. So, we're optimizing because we want to make sure young talent gets funding, and not every one of those gets accepted. To be fair, they get three shots at it, three years in a row, but it nevertheless prioritizes growing junior researchers. Then we have funding programs that prioritize different types of societal impact. So, my work with older adults is an AI institute funded by the NSF, but it was chosen because we were very explicit about having an impact for older adults and aging in place, and that's why it was funded. And there are other AI institutes, for example, looking at agriculture or looking at education. So, at this point, you're managing a portfolio, right? Venture capitalists are managing a portfolio, and they're taking risk, and over time, you understand kind of how to lay down different types of bets and how to make sure you have a mature portfolio.
0:53:45.9 SC: I think that's a very good analogy there. So, this reminds me, I meant to ask this a little bit ago because you brought up the 1950s, as this system has been going on since the 1950s. But it hasn't been going on since the 1700s, right? The idea that government spends a huge amount of money at universities to do research is not obvious or necessary. It's something we invented in the 20th century. Can you tell us a little about the history of that idea?
0:54:16.7 EM: Yep. Vannevar Bush. I think most people just argue about how to pronounce his name. But Vannevar Bush was a scientific advisor during the war and famously wrote a number of articles coming at the close of World War II. And he was fascinated by how technology had turned out to be so incredibly important in the success of the war. Cryptography, targeting, data analytics around weaponry and such. But he was a renaissance kind of guy and was interested in how can technology not just be about wars, right? But how can... It seems so powerful, its capabilities. What could it do in terms of the rest of American society? And so he laid down the rules of how this would work. It's really interesting that we're following a rule book from the 50s that said, we're going to push this out to universities. Universities are conveniently located around the country. We're very interested in the education of our workforce. And so we're going to put those double missions next to each other.
0:55:38.3 EM: And then we're going to have basic rules for mission funded agencies as well as basic science of like how we would set national priorities, but we would be hands off enough to let the system work. And especially with NSF, it was baked in from the very beginning that this peer review function that a bunch of scientists would get together in a room and argue with each other about what was worth investing in, that that was actually part of the secret sauce. And those discussions of the act of writing proposals and reviewing proposals and arguing about proposals, that is part of the research process, because it allows people to, you may not get funded the first time with your proposal, but you get really good feedback that helps you think about it differently, and then the proposal gets better. So that's all baked in, much more so than someone coming in and saying, all right, I have this much money, I'm going to give some to Caltech and some to MIT and some to Chicago or CMU or Johns Hopkins and just go do good things. The peer review process is part of how research gets better.
0:56:51.0 SC: I'm glad you said that out loud and now we have it on the podcast because I have this online discussion with people all the time, why don't we switch away from a model where the government pays for everything to private funding for things? And it's just, private funding is great when it's there, it's so much less reliable. And I think you actually made the point, it's so much less objective, right?
0:57:12.0 EM: Right.
0:57:13.4 SC: Private funders have their favorite things to fund. It's not like the scientific community is taking a bunch of money and deciding what to do with it, which I think is ultimately a much better model.
0:57:23.3 EM: Now, there's so many... And I hear this especially today because people point to, well, look how much money industry is investing, right? Surely they can take it from here in the AI gold rush that we're living in right now. And I always joke, you know, remember the people who made money in the gold rush were the people who sold the shovels?
0:57:43.3 SC: Yeah.
0:57:44.0 EM: Right. So this explains NVIDIA. So when there's a gold rush, sell the hardware. Industry is very narrow and very short-term in terms of what they can invest in. And so the money isn't there in terms of the scale, but more importantly, the breadth and the community ownership of what things to invest in just needs to occur. And one of the things that I always point to in this is that universities will poke at and embrace questions that industry never will. And that is an important part of the discussion. So two examples. One is in the AI space. So we have folks doing research about all the ways that AI fails. How is it leaking information from a privacy perspective? How are the models not secure against data poisoning attacks? How does censorship work in the different models? So these are things that even if industry was working on them, they're not publishing them. And if you're not publishing, then the rest of the world can't learn and improve. So there is just an entire space of the public good of these technologies that universities will tackle, which you're never going to convince industry is part of the ROI.
0:59:20.6 EM: And in that public good space, there's also questions, there's also things that we do that are just not... We live in a capitalist society, right? But we can't view everything from just economic ROI and profit. So for example, universities have a long track record, and I've worked in this space, on technologies for people with disabilities, assistive technologies. These are technologies that enable people who are blind or who have hearing deficits or mobility challenges to continue to work and to continue to engage with society in a meaningful way. That research has come out of universities and, with a little bit of regulation, so the ADA, Section 508, others, has made a profound difference for the lives of those millions of people in the United States. That market is never big enough to exist all by itself, right? But this notion of public good technologies and the kinds of research that people do, that's also a natural part of what attracts university researchers and fuels their creativity. And there is a huge avalanche of evidence that says when you end up doing something really interesting for someone with special needs, you end up creating benefits for everyone.
1:00:44.1 SC: Sure, yep.
0:59:20.6 EM: So it works out that way. It takes time, but it's never gonna be something you would see coming solely out of industry. So this creativity, this public good, this long-range perspective, and this messy ecosystem of it going through peer review and through open discourse is the magic that they figured out in the 50s, and it has been the American playbook since then.
1:01:16.6 SC: Well, let me ask about, in that theme of open discourse, it does seem that there is a place for improvement in the university research system, which is publishing. We have a model where a lot of scientific research is done and then published, and it's immediately put behind a pretty severe paywall that people can't get to. What are your feelings about trying to incentivize researchers to make their own research products a little bit more publicly available?
1:01:47.7 EM: I think researchers very much want to do that. It's the economics of the system that they're in, so journal publishing and such. What we have seen, though, is those walls are coming down. The pandemic did quite a bit for this, because there was an understanding that this was an all-hands-on-deck global emergency and we needed to get results out there as quickly as we could. And so this pre-publishing, in my field it's arXiv, you're getting results out there and people are learning from those results in an open platform. You're not going to get the genie back in the bottle after that. And so what we've seen is more and more researchers, and so ACM, the Association for Computing Machinery, the major professional organization for my faculty, is moving to these open models. And they're having to change their business model to say, okay, we're going to get revenue in other ways, ways that serve the community. But open and transparent publishing is becoming a greater value that's appreciated.
1:02:57.8 SC: I mean, I've certainly heard claims that are music to my ears. So I believe them that say that money spent on university research comes back as an economic impact many times over. Am I just hearing what I want to hear? Or is that something that we can actually accurately measure?
1:03:17.0 EM: No, we can measure it. It gets messy because, for example, let's go back to my dairy farms. If I look at the... If I try to create an economic value for affordable, safe milk in the U.S., and then I say, okay, well, how much does computing have to do with it versus all of the other aspects of dairy farming, right? That's a complicated equation.
1:03:41.1 SC: Very hard. Yeah.
1:03:42.5 EM: Right. The good news is that the scale in almost all of these settings is so dramatic that I have joked, I've written the equation: a small number of dollars here, add time, becomes a large number of dollars on the other side. And every time we've looked at this, it's been: annual investments, low billions; annual revenue, low trillions, right? But the billions to the trillions is the point.
1:04:11.8 SC: Yeah, yeah.
1:04:14.9 EM: So it is strangely economically effective because of universities working with this dual business model. And if you had to take my faculty and just put them in industry to do research, well, most of them wouldn't be willing. But you'd have to pay them twice as much, and they would be segmented out into very specific industries, such that those insights would not cross-pollinate in the ways that they do.
1:04:46.9 SC: You do notice sometimes, like, very clever researchers in industry are willing to make less money and move into academia just because they have more freedom to both work on pie-in-the-sky things, but also talk to various people and talk about their research products.
1:05:03.9 EM: Oh, very much. I was that way. I was at Xerox PARC for three years, and Xerox PARC in the 90s, it only competed with Bell Labs from the perspective of, like, being one of the best research labs in the world. But at some point, I wanted to work on how people interacted with these everyday technologies, and I ran out of ways to twist it into the pretzel of the document company, right?
1:05:32.3 SC: Yeah.
1:05:34.3 EM: It just wasn't possible. Love my Xerox friends. And I see this with my faculty. We have an amazing faculty member joining us in the fall who's coming from Meta, and it's like, no, I want to go back to the foundations. I want to go back to creative scholarship. At Northeastern, we actually facilitate for our faculty to go back and forth. So I have faculty at Google right now. We joke because our students all do co-ops and that the faculty go do a co-op. So our faculty will go on a leave, and they'll work in a company or a startup. And they'll do that for a few years, and then they'll come back. And the best faculty understand the magic of doing that. They can take research results, and then, the rubber hits the road in terms of, like, making it into an actual product. And then they learn a whole bunch of new hard questions, and then they come back and work on it with their students. But having the interactions with the students is really key. That dual purpose and that educational mission is just where they thrive.
1:06:40.6 SC: And as much as I love the universities, there's absolutely a place for industry in this ecosystem, right? I'm a little worried that a lot of universities are very excited about AI right now, but AI is at the sort of shipping product stage, and it's hard for universities to keep up with the financial resources of the biggest companies that are doing this work.
1:07:05.1 EM: Yeah. And universities are very quickly learning to, like, it's like surfing, right? So AI is its own tidal wave right now in the current versions of generative AI. And so universities, or researchers, are already figuring out how to pick up the next wave. And so the next wave is around these smaller models. It's more efficient models. It's models that are going to be applied into very specific domains where universities can do these cross-disciplinary partnerships. And my bet is that a large part of the impact of that tidal wave we're watching right now is actually going to be the next generation of these systems. But universities can go ahead and start working on the next wave while industry is riding that really, really tall tidal wave right now.
1:08:01.7 SC: Well, we would hope that is true. I guess for my wind-up last question, you alluded to the fact that all of this is a different conversation today than it was six months ago. What are the prospects that you see? I mean, I know there's a lot of back and forth in government where there's, like, a bill proposed and the committees change it, and I have no way of keeping up with what the budgets are right now, but the uncertainty certainly isn't helping anybody.
1:08:30.0 EM: No, it's not... And thank you for probably the most important question, even though it may have the most depressing answer. We are really at a dramatic point of failure in the system. One of the versions out there is a 60% cut in the National Science Foundation, including the future of AI and the future of these human-centered technologies. And we're already seeing the immediate impact. So people are leaving the country. You know, is it in droves yet? Maybe, I'm not sure, but it's significant. And they're not coming back. They're going to set up their research operations somewhere else. And inertia is difficult. They're not going to suddenly flock back to the US.
1:09:19.6 SC: Yeah. It's not frictionless.
1:09:21.9 EM: Yes. And so we're already seeing the immediate impact. And to me, it feels like we've got this flourishing forest and we're clear-cutting it with the myth that something else is going to grow, and it's not going to grow. And you don't get a forest back just because an administration changes. So we have been working this really magical, powerful combination since the 50s. And we're just kneecapping it at a time when we look at AI and all of the opportunities and dangers that it poses. This is not the time to kneecap the research ecosystem. We need it more than ever. But we're doing dramatic damage to it unless the American public stands up and says, no, we have to keep investing in the future. And that's what the federal investments in research are all about.
1:10:26.3 SC: When you talk to people in legislatures and bureaucracies and institutions, are they sympathetic to the message? Are they getting it? Have we done an insufficiently good job of making these points as strongly as we could?
1:10:41.4 EM: I think we're getting better, but it's been insufficient. Because we've been running on our own success for a little while. And that's part of why I talk about losing talent, because that is a message that rings true in Washington. It's also part of why I talk about cows, because most folks, when they hear computer science research, they think Meta, Google, Amazon. And like, they're fine. Like, why do we need to put money into them? And I'm like, no, it's actually about farms and health care and safe automobiles and preventing cancer deaths. That's where we're working. And that's where those federal investment dollars are going. And I think we haven't done a good enough job of pointing to that ubiquitous impact that we have.
1:11:36.2 SC: Well, I feel the need after that to at least give you a chance to end on an optimistic note, which I always like to do. You've been optimistic throughout the whole podcast, so it feels a little bit unfair. But what are you most excited about going forward in these partnerships?
1:11:55.2 EM: So I love... I've been dean here at Northeastern for three and a half years, and just adore working with my faculty and our students. They are so... University researchers, they are just so courageous. And I may lose a couple to international competition. I hope not. But what I'm really hearing is, they come to me and say, Beth, you hired me because this is the difference I want to make in the world. Help me be able to make that difference despite everything that is going on. So their commitment and passion is there. The insights and the work that they're doing is amazing. And this is, you know, it's not just my faculty. This is replicated all over the US. So the talent and the passion and willpower is there. We just have to not get in our own way and kneecap ourselves at such a critical time.
1:12:59.4 SC: You know, I gave you the opportunity to say something optimistic, and I think you did a great job at it. Elizabeth Mynatt, thanks so much for being on the Mindscape podcast.
1:13:07.0 EM: Really enjoyed it. Thank you.
[music]