Joseph Janizek

Physician-scientist (MD/PhD) building safe and reliable AI systems for medicine and biotech. During my PhD (Computer Science and Engineering @ UW), I developed methods for AI interpretability and robustness, with applications in computer vision (radiology, dermatology), natural language processing, and biology (bulk and single-cell transcriptomics). Outside of research, I consult on projects at the intersection of AI, medicine, and biology. My clinical interests are in neuroradiology. I'm currently a PGY-1 in internal medicine @ VMFH and an incoming Stanford radiology resident for 2025.


Featured Projects

A selection of projects representative of my research interests.

LAB-Bench

An evaluation dataset intended to benchmark AI systems on capabilities foundational to scientific research in biology.

Download Data | Read Paper

Guideline-Grounded Oncology QA

A system leveraging large language models and retrieval-augmented generation to provide guideline-grounded clinical management recommendations in oncology.

Featured nowhere other than my blog; just a fun exercise in building guideline-directed RAG. It uses hybrid semantic and keyword-based retrieval over the NCCN guidelines, achieving high accuracy on complex clinical decision-making tasks.
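The core idea behind hybrid retrieval is to rank candidate guideline passages under two complementary scorers (a keyword scorer and a semantic one) and fuse the rankings. A minimal, self-contained sketch using reciprocal rank fusion (RRF) is below; note that the two scorers here are toy stand-ins I chose for illustration (a real system would use something like BM25 plus dense-embedding cosine similarity), and the sample passages are invented:

```python
# Hypothetical sketch of hybrid retrieval with reciprocal rank fusion (RRF).
# Both scorers are deliberately simple stand-ins so the example runs without
# any external dependencies; they are NOT the actual system's components.

def keyword_score(query, doc):
    # Stand-in for a BM25-style keyword scorer: fraction of query tokens
    # that appear in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def ngram_score(query, doc, n=3):
    # Stand-in for dense-embedding similarity: Jaccard overlap of
    # character trigrams.
    grams = lambda s: {s[i:i + n] for i in range(len(s) - n + 1)}
    q, d = grams(query.lower()), grams(doc.lower())
    return len(q & d) / max(len(q | d), 1)

def hybrid_rank(query, docs, k=60):
    # Rank docs under each scorer independently, then fuse:
    # fused(d) = sum over scorers of 1 / (k + rank_of_d_under_scorer)
    fused = {i: 0.0 for i in range(len(docs))}
    for scorer in (keyword_score, ngram_score):
        ranked = sorted(range(len(docs)),
                        key=lambda i: scorer(query, docs[i]),
                        reverse=True)
        for rank, i in enumerate(ranked, start=1):
            fused[i] += 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)

# Invented example passages, standing in for retrieved guideline chunks.
passages = [
    "First-line therapy for stage III disease per guideline section 2",
    "Screening recommendations for average-risk patients",
    "Adjuvant chemotherapy options after surgical resection",
]
order = hybrid_rank("first-line therapy stage III", passages)
```

RRF is a common fusion choice because it combines rankings rather than raw scores, so the keyword and semantic scorers never need to be calibrated against each other.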
View Project

Auditing radiology vision models

We used interpretability techniques based on generative image models to audit COVID-19 deep learning classifiers, and proposed changes to dataset construction to improve generalization.

Published in Nature Machine Intelligence and discussed in this Outlook piece in Nature.
Read Paper