My work examines the places where technology, language, politics, law, and culture intersect. It is interdisciplinary by design, and I have worked on a variety of technical and non-technical projects. Below is a sampling of my work.
Democratic governance, both online and offline, relies on effective jury decision-making. The legitimacy of jury decisions, however, requires that juries' outcomes not be random: outcomes should depend on the case as presented rather than on arbitrary, path-dependent features of the deliberation. In this project, we test this assumption by using a pseudonymous online deliberation platform to reconvene the exact same jury, without the group's knowledge that they are working with the same people again, and measuring the consistency of the jury's judgments. Ultimately, we find that groups and individuals are equally consistent; participating in a group also does not affect an individual's own decision consistency. We also find that minority voices are more influential in deliberation than previously expected. These results are especially interesting given that participants greatly underestimated the consistency of the teams they participated in: jury decisions are consistent despite a widespread perception to the contrary.
I started working on this project in January 2019 and submitted it in May 2020 as my Undergraduate Honors Thesis. This is an area I am incredibly passionate about, and I'll continue to explore and carry this project forward. Download Thesis Abstract
Gender bias has been a pervasive issue in U.S. political coverage since women gained the franchise in 1920. Women are routinely targeted for their appearance, likeability, and familial qualities—characteristics for which men are rarely scrutinized. To help journalists and editors identify and correct gender-biased language in political reporting, we introduce "Disarming Loaded Words" (DLW). DLW is a computational tool, backed by both machine learning and human expert curation, that tags potentially biased words in a document and provides feedback and context for why a word may be problematic. DLW offers a minimally disruptive way to nudge journalists toward understanding unconscious biases. It was successfully prototyped as a standalone Google Docs add-on, and the code for its model, interface, and API is now open-source and available for use on other platforms.
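At its simplest, the tagging step can be thought of as scanning a document against a curated lexicon and attaching an explanation to each hit. The sketch below is a minimal illustration of that idea only; the lexicon entries, function name, and return format are hypothetical, and the real DLW combines machine-learning scores with expert curation rather than a plain word list.

```python
import re

# Hypothetical curated lexicon: flagged term -> explanation shown to the writer.
LOADED_WORDS = {
    "shrill": "Tone descriptor applied disproportionately to women politicians.",
    "likeable": "Likeability framing is rarely applied to male candidates.",
}

def tag_loaded_words(text):
    """Return (word, start, end, explanation) tuples for each flagged term,
    ordered by position, so an interface can highlight them in place."""
    tags = []
    for word, note in LOADED_WORDS.items():
        # Whole-word, case-insensitive matches only.
        for m in re.finditer(r"\b" + re.escape(word) + r"\b", text, re.IGNORECASE):
            tags.append((m.group(0), m.start(), m.end(), note))
    return sorted(tags, key=lambda t: t[1])
```

Returning character offsets rather than just words is what lets an editor add-on highlight the exact span and attach the explanation as contextual feedback.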
This project was originally created for CS 206 (Exploring Computational Journalism) at Stanford. It was accepted for publication at Computation + Journalism 2020.
As a platform, Slack has almost no built-in content moderation. On the vanilla Slack platform, it's possible to curse out your teammates, post hateful content, and disseminate misinformation with impunity.
Thus, enter ModBotHero6. We combine a little bit of Reddit moderation bot with a little bit of Baymax—the friendly robot from Big Hero 6 that always asks its users whether they're 'satisfied with their care.' Like Baymax, our goal is to put humans first. Our philosophy was to use the AI-powered moderation bot to remove or reduce immediate potential threats (e.g., hate speech, sexually explicit material) while clearly signaling our understanding that AI isn't perfect. Some uses of profanity are positive or appropriate in context—nuances that AI cannot yet detect. As such, the content we moderate goes to a human moderator for final approval. Each moderation report contains the API scores, a link to the message for context, and bot-generated suggested actions.
We also recognize that AI isn't perfect at catching every instance of abuse. Some types of abuse are nuanced, and hate speech is almost always culturally specific. As a result, the bot also includes a flow for users to report abusive content, which is likewise directed to the moderator channel.
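The routing logic described above—automated scores trigger removal or review, user reports always reach the moderator channel, and every report carries the scores, message link, and suggested actions—can be sketched roughly as follows. The threshold values, function name, and report fields here are illustrative assumptions, not the bot's actual configuration.

```python
# Hypothetical thresholds; the real bot tunes these against its moderation API.
REVIEW_THRESHOLD = 0.6   # flag for human review
REMOVE_THRESHOLD = 0.9   # remove immediately, pending human approval

def route_message(message_link, scores, user_reported=False):
    """Decide an action for a message and build the report sent to the
    moderator channel. Returns None when no report is needed."""
    top = max(scores.values()) if scores else 0.0
    if top >= REMOVE_THRESHOLD:
        action = "removed"           # immediate potential threat
    elif top >= REVIEW_THRESHOLD or user_reported:
        action = "flagged"           # human moderator decides
    else:
        return None
    return {
        "action": action,
        "scores": scores,            # raw API scores for context
        "link": message_link,        # lets the moderator see the message
        "suggested_actions": ["approve removal", "restore", "warn user"],
        "source": "user report" if user_reported else "automated scan",
    }
```

Note that a user report bypasses the score thresholds entirely: because abuse can be nuanced and culturally specific, a human flag is always enough to put a message in front of a moderator.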
This project was selected as one of the top two projects in the course CS 152 (Trust and Safety Engineering) at Stanford. We were featured on the Stanford Internet Observatory's Live Webinar in March 2020! Check it out here.
Workers on Amazon Mechanical Turk often find it impossible to prove their qualifications and find decent-paying work. Vitae is a resume management system for online gig workers, enabling them to achieve higher wages and more meaningful work.
This project was conducted through the CURIS (Undergraduate CS Research) Program. Download Poster
New employees often face a steep learning curve. Oneboard is an AI-powered chatbot that intelligently recommends resources to help new employees answer questions and get started more effectively. Go to GitHub Repo
Shopping is meant to be social—a way to bring families together. Instead, some family members (predominantly women) shop while others sit on the sidelines. Battleshop is a collaborative shopping game that enables shoppers to complete shopping goals together through a gamified experience—ultimately bringing families and friends together. Go to Project Website
Thousands of the world's languages are endangered, and some disappear each year—their sounds never heard again. What does it mean when we lose a language, and is there a chance to save them?
This project won the Boothe Prize for Excellence in Writing.
I had the immense fortune of studying abroad at Oxford from April to June 2019, where I worked on five papers under an advisor at the Oxford Internet Institute. There, I spent some of the happiest and most intellectually free months of my life so far, deep-diving into the socio-political implications of an increasingly digitized world.
View Collection of Papers
Immigrants are often told to "go back to their country," even when their families have lived in the United States for generations. What does it take to be truly seen as American—and what is the cost? Download Paper
Product design is a powerful tool that can subconsciously shape users' lives and everyday decisions. Used maliciously, it can cause users physical or mental harm. How might we create a framework that introduces ethics into product design, specifically for products that rely on user information? Download Paper