CSSI Research Seminar: Soroush Vosoughi

Location
Lederle Graduate Research Center (LGRC) A112
Date
Friday, April 5, 12pm-1:30pm (talk starts at 12:15)

Soroush Vosoughi, Dartmouth College (Dept. of Computer Science, Program in Quantitative Social Science)

Prosocial Language Models

Abstract:
Large language models, such as GPT-4, have marked a significant advancement in the field of natural language processing, achieving near-human performance across a variety of tasks with minimal to no additional training data. The remarkable capabilities of these models can be attributed to their substantial parameter counts, often running into the billions or hundreds of billions, and the extensive web-sourced datasets used for their pre-training. Despite their successes, the very characteristics that empower these models also render them susceptible to mirroring web-based biases and antisocial behaviors. Such reflections pose considerable challenges in deploying these models in real-world scenarios, particularly in socially sensitive applications. In response, our laboratory focuses on developing techniques for the post hoc mitigation of these antisocial tendencies, allowing for the enforcement of prosocial behaviors during model inference without the need for resource-intensive retraining. This presentation will delve into our latest efforts to reduce bias and enhance alignment with human ethical standards in language models through inference-time interventions.
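The talk itself will present the lab's specific techniques; purely as a hypothetical illustration of what an inference-time intervention can look like in general, the sketch below adds a "steering" direction to a small causal language model's hidden activations during generation using a PyTorch forward hook, with no retraining. The model choice (gpt2), layer index, intervention strength, and the random steering vector are all placeholder assumptions, not details from the abstract.

# Minimal sketch (not the speaker's method): steer a causal LM at inference time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any Hugging Face causal LM with a similar block structure
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

layer_idx = 6                       # assumption: which transformer block to intervene on
hidden_size = model.config.hidden_size
steer = torch.randn(hidden_size)    # placeholder; in practice a learned or derived direction
steer = steer / steer.norm()
alpha = 4.0                         # intervention strength (illustrative)

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # shift them along the steering direction and pass the rest through unchanged.
    hidden = output[0] + alpha * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

hook = model.transformer.h[layer_idx].register_forward_hook(add_steering)
ids = tok("The new policy should", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
hook.remove()  # removing the hook restores the unmodified model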
 

Speaker
Soroush Vosoughi
Speaker Title
Assistant Professor
Speaker Institution
Dartmouth College
Speaker Biography

Prof. Soroush Vosoughi leads the Minds, Machines, and Society group at Dartmouth. The group explores the nuances of large language models (LLMs), focusing particularly on mitigating their antisocial tendencies to foster more responsible and transparent AI technology. His research also delves into computational social science, creating tools that offer nuanced perspectives on various social systems and issues. Recently, the group's research has ventured into integrating visual data with language models, aspiring to craft a more comprehensive representation of the extensive data available and thereby inching closer to a nuanced understanding of human cognition. Prof. Vosoughi is a recipient of a Google Research Scholar Award in 2022 and an Amazon Research Award in 2019, and his work has earned several Best Paper awards and nominations, including the Outstanding Paper Award at AAAI 2021. Before joining Dartmouth, Prof. Vosoughi was a postdoctoral associate at MIT and a fellow and later an affiliate at the Berkman Klein Center at Harvard University. He received his Ph.D., MSc, and BSc from MIT in 2015, 2010, and 2008, respectively. Prof. Vosoughi's research has been supported by a variety of entities, such as the NSF, NIH, and the John Templeton Foundation.