Talks
Preference Learning & LLMs
I wrote a blog-esque document on training LLMs for a lab presentation. Given its audience, it devotes much of its energy to motivating RLHF and DPO from a statistical perspective, via the Bradley-Terry model.
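For concreteness, the Bradley-Terry model ties a pairwise preference probability to a latent score difference; here is a minimal sketch of that link (the notation $x$, $y_w$, $y_l$, $r$ is chosen for illustration and is not taken from the document):

```latex
% Bradley-Terry preference model (notation assumed for illustration):
% a latent reward r(x, y) scores each response y to a prompt x, and the
% probability that y_w is preferred over y_l is a logistic function of
% the reward difference.
\[
  P(y_w \succ y_l \mid x)
  = \frac{\exp\{r(x, y_w)\}}{\exp\{r(x, y_w)\} + \exp\{r(x, y_l)\}}
  = \sigma\!\bigl(r(x, y_w) - r(x, y_l)\bigr).
\]
```

Roughly speaking, RLHF fits $r$ by maximizing this likelihood over preference pairs and then optimizes the policy against the fitted reward, while DPO reparameterizes $r$ in terms of the policy and optimizes the same likelihood directly.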
ML Interpretability
Prof. Giles Hooker and I co-wrote two papers quantifying and reducing the uncertainty in SHAP and other feature importance scores. These slides give an overview of both papers; a toy sketch of the kind of uncertainty at issue appears after the venue list below. I have presented this work at the following venues:
- Center for Human-Compatible AI
- World Conference on Explainable Artificial Intelligence
- Apple, Data Analytics and Quality group (Hardware Technologies)
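As a toy illustration of the underlying question, the sketch below bootstraps the training data, refits a model, and tracks how much a feature importance score moves from refit to refit. It uses scikit-learn's permutation importance as a stand-in score; it is not the estimator from the papers, just an assumed, simplified setup.

```python
# Illustrative only: refit-to-refit variability of a feature-importance score.
# Permutation importance stands in for SHAP here; the papers' own estimators
# and corrections are not reproduced in this sketch.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=500, n_features=8, noise=1.0, random_state=0)
rng = np.random.default_rng(0)

scores = []
for _ in range(30):
    # Resample the training data, refit the model, and recompute importances.
    idx = rng.integers(0, len(X), size=len(X))
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[idx], y[idx])
    result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    scores.append(result.importances_mean)

scores = np.array(scores)  # shape: (n_bootstrap, n_features)
for j in range(X.shape[1]):
    lo, hi = np.percentile(scores[:, j], [2.5, 97.5])
    print(f"feature {j}: mean={scores[:, j].mean():.3f}, 95% interval=({lo:.3f}, {hi:.3f})")
```

Wide intervals here signal that a single importance ranking can be an artifact of one particular fit, which is the uncertainty the papers aim to quantify and shrink.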
Epidemic Severity Rates
Many important epidemiological metrics, such as the case-fatality rate, relate two time series to one another. These “severity rates” are typically estimated with a ratio of aggregate counts, for example, deaths today divided by cases L days ago. While seemingly reasonable, we show that these ratio estimators have stark failure modes. We derive bias expressions and propose robust ML-based alternatives.
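As a concrete sketch (the notation below is assumed for illustration, not taken from the talk), writing $D(t)$ for deaths reported on day $t$ and $C(t)$ for cases, the lagged ratio estimator of the case-fatality rate is

```latex
% Lagged ratio estimator of a severity rate (notation assumed):
% D(t) = deaths reported on day t, C(t) = cases reported on day t,
% L = an assumed fixed lag from case report to death.
\[
  \widehat{\mathrm{CFR}}_L(t) \;=\; \frac{D(t)}{C(t - L)}.
\]
```

This is exact only when every death on day $t$ traces back to a case reported exactly $L$ days earlier; when the case-to-death delay is actually spread over many days and case counts are growing or shrinking, the ratio can be badly biased.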
I have presented on the bias and estimation of severity rates to a wide range of audiences.
- Public health: CDC, California Department of Public Health, Santa Clara DPH
- Academic: Delphi Group, UCSF MINDSCAPE, Tibshirani & Hooker labs