Publications

2020
Lage I, Doshi-Velez F. Human-in-the-Loop Learning of Interpretable and Intuitive Representations. ICML Workshop on Human Interpretability in Machine Learning. 2020;1:1-10.
Yao J, Brunskill E, Pan W, Murphy S, Doshi-Velez F. Power-Constrained Bandits. ICML Workshop on Theoretical Foundations of Reinforcement Learning. 2020;2:1-30.
Coker B, Fernandez-Pradier M, Doshi-Velez F. PoRB-Nets: Poisson Process Radial Basis Function Networks. UAI. 2020:1-59.
Nair Y, Doshi-Velez F. PAC Imitation and Model-based Batch Learning of Contextual MDPs. ICML Workshop on Theoretical Foundations of Reinforcement Learning. 2020;2:1-21.
Nair Y, Doshi-Velez F. PAC Imitation and Model-based Batch Learning of Contextual MDPs. ICML Workshop on Inductive Biases, Invariances and Generalization in RL. 2020;2:1-21.
Thakur S, Lorsung C, Yacoby Y, Doshi-Velez F, Pan W. Learned Uncertainty-Aware (LUNA) Bases for Bayesian Regression using Multi-Headed Auxiliary Networks. ICML Workshop on Uncertainty in Deep Learning. 2020;2:1-18.
Lu M, Shahn Z, Sow D, Doshi-Velez F, Lehman L. Is Deep Reinforcement Learning Ready for Practical Applications in Healthcare? A Sensitivity Analysis of Duel-DDQN for Sepsis Treatment. AMIA. 2020;1:1-13.
Gottesman O, Futoma J, Liu Y, Parbhoo S, Celi LA, Brunskill E, Doshi-Velez F. Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions. International Conference on Machine Learning (ICML). 2020;2:1-17.
Yacoby Y, Pan W, Doshi-Velez F. Failures of Variational Autoencoders and their Effects on Downstream Tasks. ICML Workshop on Uncertainty in Deep Learning. 2020;1:1-39.
Ghosh S, Doshi-Velez F. Discussions on Horseshoe Regularisation for Machine Learning in Complex and Deep Models. International Statistical Review. 2020;1:1-3.
Downs M, Chu J, Yacoby Y, Doshi-Velez F, Pan W. CRUDS: Counterfactual Recourse Using Disentangled Subspaces. ICML Workshop on Human Interpretability in Machine Learning. 2020:1-23.
Guenais T, Vamvourellis D, Yacoby Y, Doshi-Velez F, Pan W. BaCOUn: Bayesian Classifiers with Out-of-Distribution Uncertainty. ICML Workshop on Uncertainty in Deep Learning. 2020;1:1-24.
Antoran J, Yao J, Pan W, Doshi-Velez F, Hernandez-Lobato J. Amortised Variational Inference for Hierarchical Mixture Models. ICML Workshop on Uncertainty in Deep Learning. 2020:1-11.
Ou HC, Wang K, Doshi-Velez F, Tambe M. Active Screening on Recurrent Diseases Contact Networks with Uncertainty: a Reinforcement Learning Approach. AAMAS Workshop on Multi-Agent Based Simulation. 2020:1-12.
Yacoby Y, Pan W, Doshi-Velez F. Characterizing and Avoiding Problematic Global Optima of Variational Autoencoders. Advances in Approximate Bayesian Inference. 2020;1:1-17.
Prasad N, Engelhardt B, Doshi-Velez F. Defining Admissible Rewards for High-Confidence Policy Evaluation in Batch Reinforcement Learning. ACM Conference on Health, Inference and Learning. 2020;2:1-9.
Ren J, Kunes R, Doshi-Velez F. Prediction Focused Topic Models via Feature Selection. AISTATS. 2020;2:1-19.
Futoma J, Hughes M, Doshi-Velez F. POPCORN: Partially Observed Prediction Constrained Reinforcement Learning. AISTATS. 2020;2:1-18.
Wu M, Parbhoo S, Hughes M, Kindle R, Celi L, Zazzi M, Roth V, Doshi-Velez F. Regional Tree Regularization for Interpretability in Deep Neural Networks. AAAI. 2020;3:1-9.
