Hyperbias in Web Search

Metaxa-Kakavouli, D. and Torres-Echeverry, N. Google’s Role in Spreading Fake News and Misinformation. Available at SSRN.

Some of my ongoing work focuses on political bias in web search results. In particular, rather than studying outright fake content, I’m interested in content that is factually correct but has a strong partisan bias. Previous iterations of this work have focused on Google search results in the months leading up to the 2016 U.S. elections; I’m currently working on a similar analysis for the upcoming 2018 elections, as well as producing a digital tool for combating hyperbiased content online.

Gender Bias in Web Interfaces

Metaxa-Kakavouli, D., Wang, K., Landay, J., and Hancock, J. Gender-Inclusive Design: Sense of Belonging and Bias in Web Interfaces. CHI 2018.

We interact with dozens of web interfaces on a daily basis, making inclusive web design practices more important than ever. This paper investigates the impacts of web interface design on ambient belonging, or the sense of belonging to a community or culture. Our experiment deployed two content-identical webpages for an introductory computer science course, differing only in aesthetic features such that one was perceived as masculine while the other was gender-neutral. Our results confirm that young women exposed to the masculine page are negatively affected, reporting significantly less ambient belonging, interest in the course and in studying computer science broadly. They also experience significantly more concern about others’ perception of their gender relative to young women exposed to the neutral page, while no similar effect is seen in young men. These results suggest that gender biases can be triggered by web design, highlighting the need for inclusive user interface design for the web.

The Dark Matter Project

Metaxa-Kakavouli, D., Rusak, G., Teevan, J., and Bernstein, M. The Web is Flat: The Inflation of Uncommon Experiences Online. CHI 2016. Best Short Paper.

People populate the web with content relevant to their lives, content that millions of others rely on for information and guidance. However, the web is not a perfect representation of lived experience: some topics appear in greater proportion online than their true incidence in the population, while others are deflated. This paper presents a large-scale data collection study of this phenomenon. We collect roughly 200,000 webpages across 21 topics of interest, and then compare each topic’s popularity online to representative national surveys. We find that rare experiences are inflated on the web (by a median of 7x), while common experiences are deflated (by a median of 0.7x). We call this phenomenon novelty bias.
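The inflation measure above can be sketched as the ratio of a topic’s share of collected webpages to its incidence in survey data. This is a minimal illustration, not the paper’s actual pipeline; the topic names and numbers below are hypothetical:

```python
from statistics import median

def inflation_ratio(web_share: float, survey_incidence: float) -> float:
    """Ratio of a topic's share of webpages to its real-world incidence.
    >1 means the topic is inflated online; <1 means it is deflated."""
    return web_share / survey_incidence

# Hypothetical per-topic data: (share of webpages, survey incidence)
topics = {
    "rare_experience": (0.014, 0.002),    # overrepresented online
    "common_experience": (0.21, 0.30),    # underrepresented online
}

ratios = {t: inflation_ratio(w, s) for t, (w, s) in topics.items()}
print(sorted(ratios.items()))
print(median(ratios.values()))
```

Under this toy data, the rare topic comes out roughly 7x inflated and the common one roughly 0.7x, mirroring the medians reported in the paper.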

This project is ongoing and, with the help of Jared Bitz and Mo Tiwari, will soon emerge in a new form: we are using Common Crawl’s archive of the web to mine the entire web at different points in time and analyze political opinions online in relation to public opinion.

SleepCoacher: Combining Computational and Clinician-Generated Sleep Recommendations

Daskalova, N., Metaxa-Kakavouli, D., Tran, A., Nugent, N., Boergers, J., McGeary, J., Huang, J. SleepCoacher: A Personalized Automated Self-Experimentation System for Sleep Recommendations. UIST 2016.

Personal informatics provides its users with a wealth of data about themselves, but often leaves them without any help interpreting that data or turning it into actionable steps for change. As part of my honors thesis, I worked with Brown PhD student Nediyana Daskalova on this project, which resulted in an experimental design and user study evaluation for helping users perform micro-sized self-experiments to improve their sleep.

Crowdsourcing from Scratch: A Pragmatic Experiment in Data Collection by Novice Requesters

Papoutsaki, A., Guo, H., Metaxa-Kakavouli, D., Gramazio, C., Rasley, J., Xie, W., and Huang, J. Crowdsourcing from Scratch: A Pragmatic Experiment in Data Collection by Novice Requesters. HCOMP 2015. Best Paper Runner-Up.

This project began as a course assignment during a graduate HCI seminar taught by my undergraduate advisor, Jeff Huang, at Brown. I worked with a team of graduate students to define a taxonomy of data collection strategies for crowdsourcing, which were implemented and tested by novice requesters. We analyzed the resulting data as well as student reports of the experience to provide recommendations for novices learning to use crowdsourcing.