
Niloufar Salehi

Assistant Professor, School of Information, UC Berkeley
Affiliated appointment, EECS

Curriculum Vitae · Google Scholar


Talks and travel

25-30 Apr

20-21 Feb · Impact Fund Class Action Conf. · AI and algorithms 101 [slides]

23 Oct · Design at Large, UCSD · View talk

16 Oct · Social Justice Program for Undergraduate Students, College of Engineering, UC Berkeley

7-10 Aug · Feminist Data Studies Workshop, University of Michigan

20-21 June · UIST PC meeting, Stanford University

14-16 May · Digital Democracies Conference: Artificial Publics, Just Infrastructures, Ethical Learning, Simon Fraser University

Niloufar Salehi is an Assistant Professor at the School of Information at UC Berkeley, with an affiliated appointment in EECS. Her research interests are in social computing, participatory and critical design, human-centered AI, and, more broadly, human-computer interaction (HCI). Her work has been published and received awards in premier HCI venues, including ACM CHI and CSCW. Through building computational social systems in collaboration with existing communities, controlled experiments, and ethnographic fieldwork, her research contributes to the design of alternative social configurations online.

Recent Publications

Random, Messy, Funny, Raw: Finstas as Intimate Reconfigurations of Social Media
Best Paper Honorable Mention
Sijia Xiao, Danaë Metaxa, Joon Sung Park, Karrie Karahalios, Niloufar Salehi, ACM CHI 2020

Feminist Data Manifest-No
Marika Cifor, Patricia Garcia, TL Cowan, Jasmine Rault, Tonia Sutherland, Anita Say Chan, Jennifer Rode, Anna Lauren Hoffmann, Niloufar Salehi, Lisa Nakamura, 2019

Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas
Eva Yiwei Wu, Emily Pedersen, Niloufar Salehi, ACM CSCW 2019

Hive: Collective Design Through Network Rotation
Niloufar Salehi, Michael Bernstein, ACM CSCW 2018

Research Highlights

My group studies and designs social computing systems. Ongoing projects:
  • What would a Restorative and Transformative Justice approach to moderation and governance of online platforms look like?
  • Why don't school assignment algorithms live up to their theoretical promises of efficiency and equity in practice?
  • What could go wrong when affect recognition is used to evaluate and compare people?


publication: Hive: Collective Design Through Network Rotation, CSCW'18
Niloufar Salehi, Michael Bernstein


Hive explores how social systems can help build strong networks by organizing a collective into small teams, then intermixing viewpoints by gradually rotating team membership. I deployed this project with Mozilla to reimagine accessible web browsing with disability advocates online.

In this paper, we introduce a new class of collective design system that intermixes people instead of ideas: instead of receiving mere exposure to others' ideas, participants engage deeply with other members of the collective who represent those ideas, increasing engagement and influence. We thus present Hive: a system that organizes a collective into small teams, then intermixes people by rotating team membership over time. At a technical level, Hive must balance two competing forces: (1) networks are better at connecting diverse perspectives when network efficiency is high, but (2) moving people diminishes tie strength within teams. Hive balances these two needs through network rotation: an optimization algorithm that computes who should move where, and when. A controlled study compared network rotation to alternative rotation systems which maximize only tie strength or network efficiency, finding that network rotation produced higher-rated proposals. Hive has been deployed by Mozilla for a real-world open design drive to improve Firefox accessibility.
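The trade-off above can be made concrete with a toy sketch. The following is an illustrative greedy rotation step, not the actual Hive system or its optimization algorithm: all function names, the scoring weight `alpha`, and the simple proxies used for "network efficiency" (fraction of pairs who have ever collaborated) and "tie strength" (average rounds current teammates have shared) are assumptions made for exposition.

```python
# Illustrative sketch of a single "network rotation" step (NOT the real Hive
# algorithm). It greedily tries every single-member move between teams and
# scores each candidate as a weighted mix of two competing quantities:
#   - efficiency: fraction of all pairs who have collaborated at least once
#     (rises when someone joins a team of strangers),
#   - tie strength: average rounds current teammates have spent together
#     (falls when established teams are broken up).
from itertools import combinations

def tie_strength(teams, rounds_together):
    """Average number of past rounds each current teammate pair has shared."""
    pairs = [(a, b) for team in teams for a, b in combinations(sorted(team), 2)]
    if not pairs:
        return 0.0
    return sum(rounds_together.get(p, 0) for p in pairs) / len(pairs)

def efficiency(people, ever_together):
    """Fraction of all possible pairs who have collaborated at least once --
    a crude stand-in for network efficiency."""
    pairs = list(combinations(sorted(people), 2))
    return sum(1 for p in pairs if p in ever_together) / len(pairs)

def best_rotation(teams, rounds_together, ever_together, alpha=0.5):
    """Try every single-member move and return (move, score) for the best one.
    A move is (person, source_team_index, destination_team_index)."""
    people = [p for team in teams for p in team]
    best = (None, -1.0)
    for i, src in enumerate(teams):
        for j, dst in enumerate(teams):
            if i == j:
                continue
            for person in src:
                new_teams = [set(t) for t in teams]
                new_teams[i].discard(person)
                new_teams[j].add(person)
                # Moving someone onto a new team creates new collaboration pairs.
                new_pairs = {tuple(sorted((person, q))) for q in dst}
                eff = efficiency(people, ever_together | new_pairs)
                score = (alpha * eff
                         + (1 - alpha) * tie_strength(new_teams, rounds_together))
                if score > best[1]:
                    best = ((person, i, j), score)
    return best
```

For example, with two established pairs that have each worked together for two rounds, every candidate move trades some within-team tie strength for new cross-team exposure, and the greedy step picks whichever move scores highest under `alpha`. A real system would optimize over whole rotation schedules (who moves where, and when), not one move at a time.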

We Are Dynamo

publication: We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers, CHI'15
Best Paper Honorable Mention
Niloufar Salehi, Lilly Irani, Michael Bernstein, Ali Alkhatib, Eva Ogbe, Kristy Milland, ClickHappier

Dynamo shows how structured human labor can help move efforts forward when they stall. I undertook this project in collaboration with worker rights advocates on Amazon Mechanical Turk.

Dynamo is a platform to support the Mechanical Turk community in forming publics around issues and then mobilizing. We are researching new approaches, systems, and labor mechanisms for collective action online.

"Despite their variety, Turkers have something in common—a lack of power. They operate in a realm largely untouched by legislation, unions, and guilds. As a result, the inexperienced can find themselves earning well below minimum wage, or abused by underhanded employers. But a project out of Stanford University [and UC San Diego] is hoping to grant Turkers agency—and might begin to revolutionize the industry. Dynamo is a platform that gives Turkers a collective voice and, consequently, the chance to drive change." -The Daily Beast

Selected Press

Amazon's Turker Crowd Has Had Enough, Wired, 8/23/2017
The Vanishing 9-to-5: Ruthless and Liberating, YES! Magazine, 9/3/2016
On Demand, and Demanding Their Rights, The American Prospect magazine, 6/28/2016
Intellectual Piecework, The Chronicle, 2/16/2015
Amazon's Mechanical Turk workers protest: 'I am a human being, not an algorithm', The Guardian, 12/3/2014
Amazon’s Turkers Kick Off the First Crowdsourced Labor Guild, The Daily Beast, 12/3/2014
Amazon's Mechanical Turk workers want to be treated like humans, Engadget, 12/3/2014
Workers of Amazon services published open letters to Bezos saying "We are not an algorithm", Gigazine, 12/4/2014 (Japanese)
Amazon Mechanical Turk workers begin letter-writing campaign, 12/5/2014
Amazon Mechanical Turk: Artificial Artificial Intelligence, Rhizome Today, 12/5/2014
The web proletariat gains class consciousness and launches its first collective action to improve working conditions, Slate, 12/4/2014 (French)
Mechanical Turk workers protest, Xakep, 12/8/2014 (Russian)


1. Have you signed the guidelines for ethical research on MTurk?

Plenty of academic research passes through AMT or is about Turkers, but the ethics boards (IRBs) that review and approve research protocols often don't know how workers want to be treated. Turkers have collectively authored these guidelines to help educate researchers and to let Turkers hold them accountable to a higher standard.
Email us at to add your support.

2. Letter writing campaign to Jeff Bezos: I am a human being not an algorithm
All around the world, people are writing letters about themselves to a distant, all-powerful figure who can make their dreams come true. But this year, many of those letters will be addressed to Amazon boss Jeff Bezos rather than Father Christmas. - The Guardian
The goals of this campaign are to publicly state that:
  • Turkers are human beings, not algorithms, and should be marketed accordingly.
  • Turkers should not be sold as cheap labour, but rather as skilled, flexible labour that needs to be respected.
  • Turkers need to have a method of representing themselves to Requesters and the world via Amazon.