Bringing Meaning into Technology Deployment: MIT’s SERC Initiative

MIT faculty recently convened to discuss pioneering research integrating social, ethical, and technical expertise, supported by seed grants from the Social and Ethical Responsibilities of Computing (SERC) initiative. This cross-cutting initiative of the MIT Schwarzman College of Computing addresses complex challenges and possibilities at the intersection of computing, ethics, and society.

Nikos Trichakis, co-associate dean of SERC, emphasized the importance of driving progress in this space, stating, “SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space.” Caspar Hare, also co-associate dean of SERC, highlighted the collective community judgment involved in selecting the most exciting work in the social and ethical responsibilities of computing at MIT.

The full-day symposium on May 1, organized around responsible health-care technology, AI governance and ethics, technology in society and civic engagement, and digital inclusion and social justice, featured presentations on algorithmic bias, data privacy, and the social implications of AI. Student researchers in the SERC Scholars program also showcased their projects.

Key highlights included:

Making the kidney transplant system fairer

Dimitris Bertsimas presented an algorithm that cuts the time required to evaluate kidney transplant allocation policies from six hours to 14 seconds. The algorithm weighs criteria such as geographic location, mortality, and age, enabling faster policy changes. James Alcorn from the United Network for Organ Sharing (UNOS) noted that this optimization radically changes the turnaround time for evaluating policy scenarios, improving the system for transplant candidates.
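As a purely illustrative sketch of the kind of criteria-weighing the article describes (this is not the actual UNOS or Bertsimas model; the `Candidate` fields, weights, and `score_candidate` helper are all hypothetical), a policy could be expressed as a weighted score over candidate attributes and then ranked:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    distance_km: float      # geographic distance from donor hospital
    mortality_risk: float   # estimated waitlist mortality, 0..1
    age: int

# Hypothetical policy weights; real allocation policies are far more complex.
WEIGHTS = {"distance_km": -0.01, "mortality_risk": 50.0, "age": -0.1}

def score_candidate(c: Candidate) -> float:
    """Combine criteria into one priority score (higher = higher priority)."""
    return (WEIGHTS["distance_km"] * c.distance_km
            + WEIGHTS["mortality_risk"] * c.mortality_risk
            + WEIGHTS["age"] * c.age)

def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidates by descending priority under the current policy."""
    return sorted(candidates, key=score_candidate, reverse=True)
```

Framed this way, evaluating a new policy scenario amounts to re-scoring and re-ranking under different weights, which hints at why an optimized formulation can turn a six-hour evaluation into seconds.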

The ethics of AI-generated social media content

Adam Berinsky and Gabrielle Péloquin-Skulski explored the implications of disclosing AI-generated content on social media. Their research indicated that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts, suggesting that labels combining process and veracity might be better at countering AI-generated misinformation.

Using AI to increase civil discourse online

Lily Tsai discussed experiments in generative AI and the future of digital democracy. She explained that online discourse has become increasingly "uncivil," in part because users face an overwhelming volume of information. Her team developed DELiberation.io, an AI-integrated platform for deliberative democracy, with initial modules designed to make online spaces better suited to deliberation. Tsai emphasized the need to assess technologies for positive downstream outcomes.

A public think tank that considers all aspects of AI

Catherine D’Ignazio and Nikko Stevens created Liberatory AI, a “rolling public think tank about all aspects of AI.” They gathered researchers to examine the most current academic literature on AI systems and engagement, intentionally grouping the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward. D’Ignazio noted that they aim to contest the status quo and reorganize resources for larger societal transformation.
