
Bringing meaning into technology deployment
As technological change accelerates, ensuring that innovation aligns with human values and societal well-being has become paramount. The MIT Schwarzman College of Computing’s Social and Ethical Responsibilities of Computing (SERC) initiative champions research that integrates social, ethical, and technical considerations. That commitment was on display at the recent MIT Ethics of Computing Research Symposium, where faculty presented work supported by SERC’s inaugural seed grants.
The call for proposals last summer drew nearly 70 applications, with a select few receiving up to $100,000 in funding each. Nikos Trichakis, co-associate dean of SERC, described the grants’ purpose: “SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space.” Caspar Hare, also co-associate dean of SERC, echoed this sentiment, noting, “What you’re seeing here is kind of a collective community judgment about the most exciting work when it comes to research in the social and ethical responsibilities of computing being done at MIT.”
The full-day symposium, held on May 1, was organized around four themes: responsible health-care technology, artificial intelligence governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. The TED Talk-style presentations were complemented by a poster session featuring projects by SERC Scholars. Discussions covered algorithmic bias, data privacy, the societal implications of AI, and the evolving dynamics between humans and machines. A playlist of the symposium presentations is available on YouTube.
Transforming kidney transplant allocation for fairness
A highlight of the symposium’s health-care technology theme was the work of Dimitris Bertsimas, vice provost for open learning and Boeing Professor of Operations Research, who presented his latest advances in analytics for a fairer and more efficient kidney transplant allocation system. Current policies, managed by a national committee, can take months to create and years to implement, a timeline often fatal for patients awaiting organs. Bertsimas’ new algorithm drastically cuts evaluation time, processing criteria such as geographic location, mortality, and age in just 14 seconds, down from the previous six hours.
Working closely with the United Network for Organ Sharing (UNOS), the nonprofit that manages the national donation and transplant system, Bertsimas aims to transform the policy-evaluation process. James Alcorn, senior policy strategist at UNOS, testified to the impact: “This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple months to look at a handful of different policy scenarios, and now it takes a matter of minutes to look at thousands and thousands of scenarios. We are able to make these changes much more rapidly, which ultimately means that we can improve the system for transplant candidates much more rapidly.”
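To make the idea of sweeping “thousands and thousands of scenarios” concrete, here is a deliberately toy sketch of batch policy-scenario evaluation. The criteria names, weights, and scoring function are entirely hypothetical illustrations, not UNOS’s or Bertsimas’ actual model; the point is only the pattern of scoring one candidate pool under a grid of candidate policies.

```python
import itertools

# Toy illustration: each "policy scenario" is a set of weights over
# allocation criteria, and we rank a small candidate pool under every
# scenario in one pass. All names and numbers here are made up.

candidates = [
    {"name": "A", "distance_km": 120, "age": 34, "mortality_risk": 0.42},
    {"name": "B", "distance_km": 640, "age": 58, "mortality_risk": 0.71},
    {"name": "C", "distance_km": 45,  "age": 49, "mortality_risk": 0.55},
]

def score(candidate, w_distance, w_age, w_risk):
    """Higher score means higher priority under this (hypothetical) policy."""
    return (w_risk * candidate["mortality_risk"]
            - w_distance * candidate["distance_km"] / 1000
            - w_age * candidate["age"] / 100)

# Sweep a grid of policy scenarios (a real system would explore far more,
# with a full simulation of outcomes rather than a one-line score).
weights = [0.0, 0.5, 1.0]
scenarios = list(itertools.product(weights, weights, weights))

top_pick = {}
for w_d, w_a, w_r in scenarios:
    best = max(candidates, key=lambda c: score(c, w_d, w_a, w_r))
    top_pick[(w_d, w_a, w_r)] = best["name"]

print(f"Evaluated {len(scenarios)} scenarios")
```

Because each scenario is independent, this kind of sweep parallelizes trivially, which is one reason wall-clock evaluation time can drop so sharply.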
Navigating the ethics of AI-generated social media content
Under the artificial intelligence governance and ethics theme, Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, PhD student in political science, addressed the implications of AI-generated content on social media. Their research explored the impact of disclosing, or not disclosing, AI involvement in posts. Through a series of surveys and experiments, they analyzed how various labels affected users’ perceptions of deception, their willingness to engage, and their judgments of truthfulness.
Their initial findings suggest that one size doesn’t fit all. Péloquin-Skulski noted that applying a process-oriented label to AI-generated images reduced belief in both false and true posts, a problematic outcome. Future labeling strategies may therefore need to combine process and veracity information to combat misinformation without undermining truthful content.
Fostering civil discourse online with AI
In the realm of technology in society and civic engagement, Lily Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, alongside Alex Pentland, Toshiba Professor of Media Arts and Sciences, presented their work on using generative AI to enhance digital democracy. Their research addresses the twin challenges of overwhelming information and increasing incivility in online discourse, which often deter public participation.
Their solution, DELiberation.io, is an AI-integrated platform designed to improve online deliberative spaces. Studies so far have been conducted in a lab setting, with field studies planned, including a partnership with the District of Columbia government. Tsai urged the audience to demand accountability from technology developers: “If you take nothing else from this presentation, I hope that you’ll take away this — that we should all be demanding that technologies that are being developed are assessed to see if they have positive downstream outcomes, rather than just focusing on maximizing the number of users.”
Establishing a public think tank for holistic AI consideration
Finally, under the digital inclusion and social justice theme, Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, postdoc at the Data + Feminism Lab at MIT, introduced Liberatory AI. What began as a proposal for a framework to integrate community methods and participatory design into AI/machine learning work evolved into a “rolling public think tank about all aspects of AI.”
Liberatory AI brought together 25 researchers from diverse institutions and disciplines who authored over 20 position papers, grouped into three themes: the corporate AI landscape, dead ends, and ways forward. D’Ignazio articulated their vision: “Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in this system in hopes of a larger societal transformation.” The initiative reflects a collective effort to steer AI development toward more equitable and beneficial outcomes for all.