Joseph D. Janizek

MD/PhD Student

University of Washington

Biography

I’m an MD/PhD student in the University of Washington’s Medical Scientist Training Program (MSTP). I’m currently in the PhD phase of my training at the Allen School of Computer Science and Engineering, where I’m advised by Su-In Lee. My research focuses on two main topics in machine learning: developing methods for feature attribution and learning models that are robust to domain shift. I’m interested in applications ranging from precision oncology to molecular biology to radiology.

When I’m not in lab, you can probably find me biking, running, or hiking with my partner Sam and our dog Toby.

Interests

  • Explainable AI (XAI)
  • Robust Machine Learning
  • Radiology
  • Computational Biology, Genomics, Transcriptomics

Education

  • PhD (Computer Science)

    University of Washington

  • MD

    University of Washington

  • AB (Biological Sciences), 2016

    University of Chicago

Recent News


[09/25/20] Our paper on Deep Attribution Priors accepted at NeurIPS 2020.

[09/24/20] Our preprint on shortcut learning by COVID-19 radiograph classifiers featured in The Imaging Wire.

[11/21/19] Cost-aware Artificial Intelligence (CoAI) project wins the Madrona Prize (GeekWire, Bloomberg, Madrona Venture Group).

Recent Posts

Finding interactions in deep neural networks with Integrated Hessians

An introduction to our new feature interaction method

Projects

Path Explainer

A repository for explaining feature importances and feature interactions in deep neural networks using path attribution methods (Integrated Gradients and Integrated Hessians)
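
As a rough, self-contained illustration of the path attribution idea (not the path-explain repository’s own API, whose function names and signatures may differ), here is a minimal Integrated Gradients sketch in JAX; the toy quadratic model, zero baseline, and step count are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def integrated_gradients(f, x, baseline, steps=50):
    """Riemann-sum approximation of Integrated Gradients for a scalar-output f."""
    alphas = jnp.linspace(0.0, 1.0, steps)
    # Gradients of f at points interpolated along the straight-line path
    # from the baseline to the input x.
    path_grads = jax.vmap(lambda a: jax.grad(f)(baseline + a * (x - baseline)))(alphas)
    # Scale the averaged path gradients by the input-baseline difference.
    return (x - baseline) * path_grads.mean(axis=0)

# Toy example (an assumption for illustration): attribute a quadratic score
# to three input features against an all-zeros baseline.
f = lambda x: jnp.sum(x ** 2)
x = jnp.array([1.0, 2.0, 3.0])
attributions = integrated_gradients(f, x, baseline=jnp.zeros_like(x))
print(attributions)  # approximately [1., 4., 9.] for this toy f
```

Integrated Hessians extends the same path-integration idea to second-order terms in order to attribute pairwise feature interactions.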

De-confounded Pneumonia Classification

A repository to train confounder-invariant deep learning models for chest radiographs
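
One common recipe for confounder-invariant training is an adversarial game: an auxiliary adversary tries to recover the confounder (for example, which hospital or scanner produced the radiograph) from the learned representation, and the main model is penalized whenever the adversary succeeds. The minimal JAX sketch below illustrates that objective only; the linear encoder, squared-error losses, and parameter names are assumptions for illustration, not the repository’s code.

```python
import jax
import jax.numpy as jnp

def encoder(params, x):
    # Hypothetical linear encoder mapping inputs to an embedding z.
    return x @ params["W_enc"]

def adversary(params, z):
    # Hypothetical linear adversary that tries to predict the confounder from z.
    return z @ params["W_adv"]

def encoder_objective(params, x, y, confounder, lam=1.0):
    z = encoder(params, x)
    task_loss = jnp.mean((z @ params["W_task"] - y) ** 2)
    # The encoder is rewarded when the adversary cannot recover the confounder
    # from the embedding, so the adversary's error enters with a negative sign.
    # In practice this alternates with separate updates that train the adversary.
    adv_loss = jnp.mean((adversary(params, z) - confounder) ** 2)
    return task_loss - lam * adv_loss

# Tiny random example to show the shapes involved.
k1, k2, k3, k4, k5, k6 = jax.random.split(jax.random.PRNGKey(0), 6)
params = {
    "W_enc": jax.random.normal(k1, (10, 4)),
    "W_task": jax.random.normal(k2, (4, 1)),
    "W_adv": jax.random.normal(k3, (4, 1)),
}
x = jax.random.normal(k4, (32, 10))
y = jax.random.normal(k5, (32, 1))
c = jax.random.normal(k6, (32, 1))  # confounder labels, e.g. scanner site
print(encoder_objective(params, x, y, c))
```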

Attribution Priors

A repository with code to regularize a deep learning model’s attributions during training in order to learn models with more desirable properties
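
To sketch the attribution-prior idea, the loss below adds a penalty on a simple input-gradient attribution during training; the actual work uses path-based attributions such as expected gradients and supports richer priors than the L1 sparsity penalty assumed here, and all names in this JAX snippet are illustrative.

```python
import jax
import jax.numpy as jnp

def model(params, x):
    # Hypothetical two-layer network returning a scalar score per example.
    h = jax.nn.relu(x @ params["W1"])
    return (h @ params["W2"]).squeeze(-1)

def regularized_loss(params, x, y, lam=0.1):
    preds = model(params, x)
    task_loss = jnp.mean((preds - y) ** 2)
    # Attribution prior: an L1 penalty on input-gradient attributions,
    # encouraging the model to rely on fewer features. Other priors
    # (smoothness, graph structure, etc.) follow the same pattern.
    attributions = jax.grad(lambda xx: jnp.sum(model(params, xx)))(x)
    prior_penalty = jnp.mean(jnp.abs(attributions))
    return task_loss + lam * prior_penalty

# Tiny random example.
k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)
params = {"W1": jax.random.normal(k1, (10, 16)), "W2": jax.random.normal(k2, (16, 1))}
x = jax.random.normal(k3, (32, 10))
y = jax.random.normal(k4, (32,))
print(regularized_loss(params, x, y))
```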

Recent & Upcoming Talks

Learning to robustly classify chest radiographs

Spotlight presentation at ACM CHIL 2020 for the research paper “An adversarial approach for the robust classification of pneumonia from chest radiographs”

True to the model, or true to the data?

Spotlight presentation at the ICML Workshop on Human Interpretability (WHI) 2020 discussing the choice of set function when using Shapley values for model explanation.

Recent Publications


Learning Deep Attribution Priors Based On Prior Knowledge

Published at NeurIPS 2020. A method for jointly learning a deep neural network and a flexible prior on the attributions of that model using meta-features.

AI for radiographic COVID-19 detection selects shortcuts over signal

Preprint. An investigation of COVID-19 AI classifiers using explainable AI shows that they rely on confounding factors and fail to generalize to external data.

Adversarial Deconfounding Autoencoder for Learning Robust Gene Expression Embeddings

Published in Bioinformatics, selected for oral presentation at the European Conference on Computational Biology 2020.

True to the model, or true to the data?

Short paper selected for spotlight presentation at the ICML Workshop on Human Interpretability 2020.