[1] F. Doshi-Velez and B. Kim. Towards a rigorous science of interpretable machine learning. arXiv:1702.08608, 2017.
[2] C. Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215, 2019.
[3] J. Amann et al. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20(1):310, 2020.
[4] M. T. Ribeiro, S. Singh, and C. Guestrin. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In KDD, 2016.
[5] S. M. Lundberg and S.-I. Lee. A unified approach to interpreting model predictions. In NeurIPS, 2017.
[6] S. M. Lundberg, G. Erion, et al. From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1):56–67, 2020.
[7] B. Kim, M. Wattenberg, J. Gilmer, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In ICML, 2018.
[8] A. Ghorbani, J. Wexler, J. Zou, and B. Kim. Towards Automatic Concept-Based Explanations. In NeurIPS, 2019.
[9] P. W. Koh et al. Concept Bottleneck Models. In ICML, 2020.
[10] E. L. Kaplan and P. Meier. Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53(282):457–481, 1958.
[11] D. R. Cox. Regression models and life-tables. Journal of the Royal Statistical Society, Series B, 34(2):187–220, 1972.
[12] F. E. Harrell Jr., K. L. Lee, and D. B. Mark. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Statistics in Medicine, 15(4):361–387, 1996.
[13] E. Graf, C. Schmoor, W. Sauerbrei, and M. Schumacher. Assessment and comparison of prognostic classification schemes for survival data. Statistics in Medicine, 18(17–18):2529–2545, 1999.
[14] T. A. Gerds and M. Schumacher. Consistent estimation of the expected Brier score in general survival models with right-censoring. Biometrical Journal, 48(6):1029–1040, 2006.
[15] J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1189–1232, 2001.
[16] T. Chen and C. Guestrin. XGBoost: A scalable tree boosting system. In KDD, 2016.
[17] XGBoost Documentation. Survival Analysis with Accelerated Failure Time (AFT). https://xgboost.readthedocs.io. Accessed 2025.
[18] C. Davidson-Pilon et al. lifelines: survival analysis in Python. Journal of Open Source Software, 4(40):1317, 2019.
[19] F. Pedregosa et al. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[20] T. M. Therneau and P. M. Grambsch. Modeling Survival Data: Extending the Cox Model. Springer, 2000.
[21] T. M. Therneau. A Package for Survival Analysis in R. https://CRAN.R-project.org/package=survival. Accessed 2025.
[22] V. Arel-Bundock. Rdatasets: Datasets from R packages. https://vincentarelbundock.github.io/Rdatasets. Accessed 2025.