301 | Tina Eliassi-Rad on AI, Networks, and Epistemic Instability

Big data is ruling, or at least deeply infiltrating, all of modern existence. Unprecedented capacity for collecting and analyzing large amounts of data has given us a new generation of artificial intelligence models, as well as everything from medical procedures to recommendation systems that guide our purchases and romantic lives. I talk with computer scientist Tina Eliassi-Rad about how we can sift through all this data, make sure it is deployed in ways that align with our values, and how to deal with the political and social dangers associated with systems that are not always guided by the truth.


Support Mindscape on Patreon.

Tina Eliassi-Rad received her Ph.D. in computer science from the University of Wisconsin-Madison. She is currently Joseph E. Aoun Chair of Computer Sciences and Core Faculty of the Network Science Institute at Northeastern University, External Faculty at the Santa Fe Institute, and External Faculty at the Vermont Complex Systems Center. She is a fellow of the Network Science Society, recipient of the Lagrange Prize, and was named one of the 100 Brilliant Women in AI Ethics.

4 thoughts on “301 | Tina Eliassi-Rad on AI, Networks, and Epistemic Instability”

  1. Mr Declan Brennan

    Thought provoking.
    One little niggle on value functions. At one point there is a throwaway comment: “I think we can all agree we want to live in a stable society”. In fact, societies work best when they are neither boiling chaos nor frozen stability. Moreover, people will have different ideas about the ideal amount of order in society; witness the mixed feelings about HOAs, for example.
    Should AIs optimize their value function for the individual being answered, should they employ some agreed value function for society, or, more likely, some compromise between these? And if so, how is that compromise arrived at?

  2. Good point on the “stable society”. I had the same thought, but you said it better than I probably would have.

    For the value function, probably again some intermediate compromise would be desired. A third option would be for the AI to adopt its own value function and adjust it over time, much as each person chooses (consciously or unconsciously) and adjusts their value function over time. I would prefer not to have an AI that always mimics my values, or an AI that always uses a value function imposed by a committee. It is not easy to describe how to accomplish this, but if each human can do it, then probably AI can get there too.

  3. At 0:27:40.3 TE mentions a Nature Perspective piece about Human-AI coevolution. Does anyone have the link?

  4. Pingback: Donald Trump is Today’s Walter Cronkite – One Inextricable Beam

Comments are closed.
