Jeff Nagy, postdoctoral fellow at DISCO, is a self-proclaimed word puzzle addict and a historian of computing. He holds a PhD in Communication from Stanford University where he wrote the dissertation, “Watching Feeling: Emotional Data from Cybernetics to Social Media.” [Responses have been edited and condensed for clarity.]

Q: Tell me a little about your scholarly background and research. 

A: I took a sideways path to get to the history of computing. In undergrad, I was originally a math major; I was a hard sciences person until I (to my parents’ enduring disappointment) switched to comparative literature. When I went to grad school, I became interested in the history of computing and AI in part because it brought these two interests of mine together. I’ve been studying the history of transforming emotion into the kinds of data that algorithmic systems can measure and manipulate—think ChatGPT and possible therapy chatbots. This kind of technology is called “sentiment analytics” or “affective computing.” My project recovers 70 years of trying to think about emotion as something that could be computational; there’s a hidden history of thinking about emotion and computing together, one that goes back all the way to the very beginnings of computing. [AI isn’t] just the pursuit of machine intelligence and rationality, like IBM’s Deep Blue vs. Garry Kasparov chess matches; it’s also the search for a kind of machine empathy. 

Q: You have a background in history of the behavioral sciences. How does this fit into or influence your media studies work?

A: Many new kinds of machine learning come from strange exchanges between computing and the behavioral sciences. For example, when mid-century psychologists like Silvan Tomkins (and later Paul Ekman) got interested in emotion, part of what inspired them was actually earlier, cybernetic thinking about the way the brain might resemble a computer. This leads to ideas like the basic emotion paradigm: a theory that claims there is a small, specifiable number of universal emotions—like joy, sadness, shame, or fear—that are essentially genetic. These mid-century emotional categories form the basis of much of the way AI and emotion are envisioned together today: they’ve been transformed into a data architecture. These exchanges also often come from the ways technological development has leveraged specific marginalized groups. Often, where [computing with feeling] comes from is technologists leveraging groups of people who are problematically thought to be somehow emotionally deficient in order to build computational prosthetics that claim to fix these supposed problems. In the history of computing emotion, one group repeatedly targeted for what is framed as care or technological assistance is autistic individuals. But I think that targeting is a broader dynamic in the history of AI, where specific marginalized populations are enrolled in a project of technological development that is simultaneously care and control, and that can also reproduce or deepen ableist conceptions of neurological or bodily difference. These projects envision new forms of automated care at the same time that they bring new groups of people into algorithmic surveillance. 

Q: What attracted you to the DSI at U-M?

A: I wanted to work with other scholars interested in the intersection of technology and social justice. As a DISCO postdoc, I’m a member of a community in which everyone is trying to think of ways that we can chart these histories of technology and social justice and share them with not just academic audiences, but people who are affected by these histories and who are empowered to do something about them. 

Q: Tell me a little about the DIGITAL 258: “Feeling Digital” course you’re currently teaching. What can students expect from a class with you?

A: The course content starts from the following conceptual flip: what if we thought about emotions rather than reason when it comes to the digital? We forget that [the digital is] also the habitat for our emotional lives. Emotional media, which we discuss in the class, is everywhere. Zoom is emotional media—[Zoom programmers] have developed systems that operate out of sight on the platform and register your emotional expressions as you’re in a call. [Note: This feature was never rolled out due to public pushback.] For example, this tool could detect which students were bored or frustrated as we shifted to remote schooling [during the pandemic]. Emotional media are also things like mood rings—basically any media that claims to give you some sort of transparency into your unconscious, emotional life. [In class] we’re thinking about how, throughout the 20th century, work moved from manual labor to work that required emotional performance; about surveillance systems; about algorithmic biases; and about who gets to have what kinds of feelings. We’ve been using an online platform to organize our research on emotional media; it’s a great way to grab things that you’re thinking about as you’re moving across the Internet without having to copy and paste the URL or download the image, so it makes doing this digitally based research a bit easier to organize. 

Q: What is your favorite text on your 258 syllabus?

A: Affect and Artificial Intelligence by Elizabeth A. Wilson. When Affect and AI came out in 2010, it was one of the only books that took seriously the role that emotion played in the history of computing. I’m continuously amazed at how Wilson manages to take complicated theoretical arguments and a complicated historical archive and make it feel like you’re watching entertaining trash TV, which is not the right description, but it’s also not not the right description.

Q: What do you hope to accomplish as a postdoc at DISCO?

A: My specific role as part of the DISCO Network is to help bridge the five labs (U-M, University of Maryland, Stony Brook University, Purdue University, and Georgia Tech), all of which have their own fellows, their own research interests, and their own methodologies, and to help them form a cohesive whole. Part of my work is getting the kinds of thinking we’re doing out in the world; I’m organizing a series of publications for us to share our research with academic specialists and with broader audiences online. Hopefully by the fall we’ll have some shorter, accessible pieces around social justice and technology—especially emerging technology—published for non-specialist audiences. 

Q: What is a typical day online for you?

A: I typically open the New York Times while I’m waiting for the coffee to perk and, with one eye open, read a couple of articles and try to make my tired brain make sense of what’s happening in the world. I’m also kind of addicted to word games—I do the crossword and Spelling Bee almost every day. I’m not a big social media person; I left Facebook and Twitter, and I don’t regret it, though I have a very piddling Instagram account that I almost never post to. I’m actually on TikTok, although I don’t post. I’m solely a consumer of TikTok content… whatever they’re doing over there, they have a direct line to my brain.