I am a postdoctoral fellow at Princeton University's Center for Information Technology Policy, where I am advised by Aleksandra Korolova. My research in machine learning (ML) focuses on sequential learning.
I completed my Ph.D. at the University of Massachusetts, where I was advised by Phil Thomas in the Autonomous Learning Lab. Previously, I was a research intern at IBM, Microsoft, and Meta, where I worked on algorithmic fairness in ML. I earned my bachelor's degree in Computer Science and Mathematics from the University of Maryland, Baltimore County, where I also competed as a track & field athlete.
Google Scholar | bmetevier [at] princeton [dot] edu
ICLR 2026 Workshop Acceptances. Our papers, "Statistical Verification of Fairness in Agentic Alignment" and "Towards Statistical Verification for Trustworthy AI," have been accepted to the Principled Design for Trustworthy AI and Algorithmic Fairness Across Alignment Procedures and Agentic Systems workshops, respectively.
EvalEval Workshop Poster. Our poster on "Measuring Validity in LLM-based Resume Screening" has been accepted to the EvalEval workshop.
IASEAI 2026 Acceptance. Our paper on "Measuring Validity in LLM-based Resume Screening" has been accepted at IASEAI.
NeurIPS 2025 Acceptances. Our papers on "Fair Continuous Resource Allocation with Learning of Impact" and "Managing the Repercussions of Machine Learning Applications" have been accepted at NeurIPS.
Fall 2025 Postdoctoral Position. I started as a Postdoctoral Research Fellow at the Center for Information Technology Policy, working with Aleksandra Korolova.
Spring 2025 Ph.D. Defense. I defended my Ph.D. in Computer Science with the dissertation "Fair Algorithms for Sequential Learning Problems."
RLC 2025 Acceptance. Our paper, "Reinforcement Learning from Human Feedback with High-Confidence Safety Constraints," has been accepted at RLC.
Spring 2024 Thesis Proposal. I proposed my thesis titled "Fair Algorithms for Sequential Learning Problems."
FAccT 2024 Acceptance. Our paper, "Analyzing the Relationship Between Difference and Ratio-Based Fairness Metrics," has been accepted at FAccT.
Fall 2022 Internship. I worked on the Responsible AI Team at Facebook AI Research.
Summer 2022 Internship. I worked with Nicolas Le Roux at MSR FATE Montréal.
ICLR 2022 Acceptance. Our paper, "Fairness Guarantees under Demographic Shift," has been accepted at ICLR.
Summer 2021 Internship. I worked with Dennis Wei and Karthi Ramamurthy in the Trustworthy AI group at IBM.
NERDS 2020 Organizer. Emma Jordan and I organized the first Northeast Reinforcement Learning and Decision Making Symposium (NERDS).
Measuring Validity in LLM-based Resume Screening
Jane Castleman,
Zeyu Shen,
Blossom Metevier,
Max Springer,
Aleksandra Korolova
International Association for Safe & Ethical AI (IASEAI 2026)
Abstract | Paper
Beyond Prediction: Managing the Repercussions of Machine Learning Applications
Aline Weber*,
Blossom Metevier*,
Yuriy Brun,
Philip S. Thomas,
Bruno Castro da Silva
*Equal contribution
Advances in Neural Information Processing Systems (NeurIPS 2025)
Abstract | Paper
Fair Continuous Resource Allocation with Learning of Impact
Blossom Metevier,
Dennis Wei,
Karthi Ramamurthy,
Philip S. Thomas
Advances in Neural Information Processing Systems (NeurIPS 2025)
Abstract | Paper
Reinforcement Learning from Human Feedback with High-Confidence Safety Constraints
Blossom Metevier*,
Yaswanth Chittepu*,
Will Schwarzer,
Scott Niekum,
Philip S. Thomas
*Equal contribution
Reinforcement Learning Conference (RLC 2025)
Abstract | Paper
Analyzing the Relationship Between Difference and Ratio-Based Fairness Metrics
Min-Hsuan Yeh,
Blossom Metevier,
Austin Hoag,
Philip S. Thomas
ACM Conference on Fairness, Accountability, and Transparency (FAccT 2024)
Abstract | Paper
Fairness Guarantees under Demographic Shift
Stephen Giguere,
Blossom Metevier,
Yuriy Brun,
Philip S. Thomas
International Conference on Learning Representations (ICLR 2022)
Abstract | Paper
Reinforcement Learning When All Actions are Not Always Available
Yash Chandak,
Georgios Theocharous,
Blossom Metevier,
Philip S. Thomas
AAAI Conference on Artificial Intelligence (AAAI 2020)
Abstract | Paper
Offline Contextual Bandits with High Probability Fairness Guarantees
Blossom Metevier,
Stephen Giguere,
Sarah Brockman,
Ari Kobren,
Yuriy Brun,
Emma Brunskill,
Philip S. Thomas
Advances in Neural Information Processing Systems (NeurIPS 2019)
Abstract | Paper
Fair Offline Contextual Bandits with Guarantees under Inferred Attributes
Blossom Metevier*,
Yaswanth Chittepu*,
Max Springer,
Sohini Chintala,
Bohdan Turbal,
Scott Niekum,
Aleksandra Korolova
Abstract
The Geometry of Alignment Collapse: When Fine-Tuning Breaks Safety
Max Springer,
Chung Peng Lee,
Blossom Metevier,
Jane Castleman,
Bohdan Turbal,
Hayoung Jung,
Zeyu Shen,
Aleksandra Korolova
Abstract | Paper
Matched Pair Calibration for Ranking Fairness
Hannah Korevaar,
Chris McConnell,
Edmund Tong,
Erik Brinkman,
Alana Shine,
Misam Abbas,
Blossom Metevier,
Sam Corbett-Davies,
Khalid El-Arini
Abstract | arXiv
Apart from the academic grind, I enjoy running, weightlifting, and reading. I’m a fan of the DC Universe, especially the Teen Titans, and I follow a number of Japanese comics. I also love spending time with my cats (featured here) and my dog!