About

Niloufar Salehi is an assistant professor in the School of Information at UC Berkeley. She studies human-computer interaction, with research spanning education, healthcare, and restorative justice.

Her research interests are social computing, human-centered AI, and, more broadly, human-computer interaction (HCI). Her work has been published and received awards at premier venues including ACM CHI, CSCW, and EMNLP, and has been covered by VentureBeat, Wired, and The Guardian. She is a W. T. Grant Foundation scholar for her work on promoting equity in student assignment algorithms and is a member of the advisory board on generative AI at NVIDIA. She received her PhD in computer science from Stanford University in 2018.

Research

If you are a current UC Berkeley student who is interested in getting involved with the research described here, please fill out this form [1].

Human-Centered AI: Machine Translation

This work focuses on developing technical methods for more reliable use of AI systems based on machine learning and large language models (LLMs). One example is machine translation (e.g., Google Translate), which has the potential to remove language barriers and is widely used in U.S. hospitals; yet almost 20% of common medical phrases are mistranslated into Chinese, with 8% causing significant clinical harm. Examples of this work include:

The long-term goal of this research effort is to develop new approaches to designing and evaluating reliable and effective AI systems in high-stakes, real-world contexts such as machine translation in medical settings.
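
As an illustration of the kind of reliability check this line of work motivates, below is a minimal sketch of round-trip backtranslation: translate a phrase into the target language, translate it back, and flag outputs whose backtranslation drifts far from the original. The translate function, language pair, and overlap threshold are assumptions made for the example, not part of any specific system studied in this work.

  # Minimal sketch of a round-trip (back)translation check.
  # `translate` stands in for any MT API (hypothetical here); the similarity
  # measure is a deliberately simple token-overlap score.

  def token_overlap(a: str, b: str) -> float:
      """Jaccard overlap of lowercase tokens, between 0.0 and 1.0."""
      tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
      if not tokens_a or not tokens_b:
          return 0.0
      return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

  def flag_unreliable(source: str, translate, threshold: float = 0.5) -> bool:
      """Return True if the round-trip translation diverges from the source."""
      target = translate(source, src="en", tgt="zh")  # forward translation
      back = translate(target, src="zh", tgt="en")    # backtranslation
      return token_overlap(source, back) < threshold

A phrase flagged in this way could, for instance, be routed to a human interpreter rather than shown to a patient verbatim.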

Community Centered Algorithm Design: School Assignment

There is growing awareness of the impact that algorithmic systems have on people. An open question is how algorithmic systems can be designed to center the needs and values of the communities they affect. In our work, we study a matching algorithm that assigns students to public schools across the U.S. Examples of this work include:

Ultimately, our goal is to develop methods and best practices for engaging parents and policymakers in designing algorithmic systems.
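
Many U.S. district assignment systems are based on variants of the student-proposing deferred acceptance (Gale-Shapley) algorithm. The sketch below shows that core mechanism with toy preferences, priorities, and capacities; real systems layer in sibling priorities, attendance zones, lotteries, and other policies, and this is not the implementation of any particular district.

  # Minimal sketch of student-proposing deferred acceptance (Gale-Shapley).
  def deferred_acceptance(student_prefs, school_priorities, capacities):
      """Match students to schools; returns {student: school or None}."""
      # rank[school][student] = position of the student in that school's priority order
      rank = {s: {stu: i for i, stu in enumerate(order)}
              for s, order in school_priorities.items()}
      next_choice = {stu: 0 for stu in student_prefs}  # next school each student proposes to
      tentative = {s: [] for s in school_priorities}   # students a school tentatively holds
      unmatched = list(student_prefs)

      while unmatched:
          stu = unmatched.pop()
          prefs = student_prefs[stu]
          if next_choice[stu] >= len(prefs):
              continue  # student has exhausted their preference list
          school = prefs[next_choice[stu]]
          next_choice[stu] += 1
          tentative[school].append(stu)
          # keep the highest-priority students up to capacity, reject the rest
          tentative[school].sort(key=lambda x: rank[school][x])
          while len(tentative[school]) > capacities[school]:
              unmatched.append(tentative[school].pop())

      assignment = {stu: None for stu in student_prefs}
      for school, students in tentative.items():
          for stu in students:
              assignment[stu] = school
      return assignment

  # Toy example: two schools, one seat each, three students.
  prefs = {"ali": ["oak", "pine"], "bea": ["oak", "pine"], "cam": ["pine", "oak"]}
  priorities = {"oak": ["bea", "ali", "cam"], "pine": ["ali", "cam", "bea"]}
  print(deferred_acceptance(prefs, priorities, {"oak": 1, "pine": 1}))
  # -> {'ali': 'pine', 'bea': 'oak', 'cam': None}

The student-proposing variant is strategy-proof for students and yields a stable matching, which is one reason many districts have adopted mechanisms in this family.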

Restorative Justice Approaches to Addressing Online Harm

Harms such as harassment are pervasive online but extremely difficult to address effectively. Dominant models of content moderation leave out victims and their needs, focusing instead on punishing offenders. We take restorative justice as an alternative approach and ask: Who has been harmed? What are their needs? And whose obligation is it to meet those needs? Examples of this work include:

Our goal in this work is to design better mechanisms and tools to address online harm by centering the needs of those who have been harmed and by supporting those who have caused harm in taking accountability and working to repair it.

Funding

My work is currently supported by the National Science Foundation under the following grants:

  • FAI: A Human-Centered Approach to Developing Accessible and Reliable Machine Translation, with Marine Carpuat (PI) and Ge Gao, [article, award]
  • DASS: Legally & Locally Legitimate: Designing & Evaluating Software Systems to Advance Equal Opportunity, with Catherine Albiston and Afshin Nikzad, [article, award]
  • FOW: Human-Machine Teaming for Effective Data Work at Scale: Upskilling Defense Lawyers Working with Police and Court Process Data, with Aditya Parameswaran (PI), Sarah Chasins, Joseph Hellerstein, and Erin Kerrison, [article, press, award]

Additionally, I am grateful for support from the W. T. Grant Foundation, the Google-BAIR Commons, and Facebook Research.

[1] Unfortunately, we cannot currently accept research assistants from outside the university, but there are some great summer undergraduate research opportunities at CMU HCII, UC San Diego, and the University of Washington, among others. You can find a list of NSF-supported undergraduate research opportunities in computer and information sciences here.