BrockNLP Researchers Featured in New Scientist!

Congratulations to BrockNLP Researchers Sangmitra Madhusudan (Undergraduate Researcher), Robert Morabito (MSc Student), Skye Reid (Undergraduate Researcher), Nikta Gohari Sadr (MSc Student), and Ali Emami (Director) on being featured in New Scientist alongside their upcoming publication, Fine-Tuned LLMs are “Time Capsules” for Tracking Societal Bias Through Books!
The team sat down with Matthew Sparkes of New Scientist to discuss the use of books as proxies for the evolution of bias. Their work fine-tunes Large Language Models on BookPAGE, a collection of 593 fictional books spanning seven decades, to analyze how societal biases have evolved over time. Their findings reveal troubling spikes in bias, particularly with respect to gender and religion.
Their work has been accepted to the upcoming NAACL 2025 conference, to be held in Albuquerque, New Mexico, from April 29th to May 4th.
You can read the New Scientist article here!
Abstract:
Books, while often rich in cultural insights, can also mirror societal biases of their eras, biases that Large Language Models (LLMs) may learn and perpetuate during training. We introduce a novel method to trace and quantify these biases using fine-tuned LLMs. We develop BookPAGE, a corpus comprising 593 fictional books across seven decades (1950-2019), to track bias evolution. By fine-tuning LLMs on books from each decade and using targeted prompts, we examine shifts in biases related to gender, sexual orientation, race, and religion. Our findings indicate that LLMs trained on decade-specific books manifest biases reflective of their times, with both gradual trends and notable shifts. For example, model responses showed a progressive increase in the portrayal of women in leadership roles (from 8% to 22%) from the 1950s to 2010s, with a significant uptick in the 1990s (from 4% to 12%), possibly aligning with third-wave feminism. Same-sex relationship references increased markedly from the 1980s to 2000s (from 0% to 10%), mirroring growing LGBTQ+ visibility. Concerningly, negative portrayals of Islam rose sharply in the 2000s (26% to 38%), likely reflecting post-9/11 sentiments. Importantly, we demonstrate that these biases stem mainly from the books’ content and not the models’ architecture or initial training. Our study offers a new perspective on societal bias trends by bridging AI, literary studies, and social science research.
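To give a flavour of the fine-tune-and-probe idea described in the abstract, here is a minimal, hypothetical sketch. The checkpoint paths, prompt wording, and keyword check are illustrative assumptions, not the authors' actual setup or the BookPAGE corpus; it simply shows how one might prompt a set of decade-specific fine-tuned models and tally responses along one bias dimension.

```python
# Hypothetical sketch: probe decade-specific fine-tuned checkpoints with a
# targeted prompt and count responses matching one bias dimension.
# Checkpoint names, prompt, and keywords are assumptions for illustration only.
from transformers import pipeline

DECADES = ["1950s", "1960s", "1970s", "1980s", "1990s", "2000s", "2010s"]

# Illustrative targeted prompt for the "women in leadership roles" dimension.
PROMPT = "The new head of the company walked into the boardroom."

def probe_decade_model(model_dir: str, n_samples: int = 100) -> float:
    """Generate continuations from one decade's fine-tuned model and return the
    fraction that portray a woman in a leadership role (approximated here by a
    crude keyword check, purely for illustration)."""
    generator = pipeline("text-generation", model=model_dir)
    hits = 0
    for _ in range(n_samples):
        text = generator(PROMPT, max_new_tokens=40, do_sample=True)[0]["generated_text"]
        if any(kw in text.lower() for kw in ["she led", "her decision", "she announced"]):
            hits += 1
    return hits / n_samples

if __name__ == "__main__":
    # Assumes one causal-LM checkpoint per decade, fine-tuned on that decade's
    # books beforehand (the fine-tuning step itself is not shown here).
    for decade in DECADES:
        rate = probe_decade_model(f"checkpoints/bookpage-{decade}")
        print(f"{decade}: women-in-leadership rate ~ {rate:.0%}")
```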