Explainable deep learning models for healthcare - CDSS 3

Course overview

Provider: Coursera
Course type: Free online course
Level: Intermediate
Deadline: Flexible
Duration: 39 hours
Certificate: Paid Certificate Available
Language: English
Course author: Fani Deligianni
What you will learn

  • Program global explainability methods in time-series classification

  • Program local explainability methods for deep learning such as CAM and Grad-CAM

  • Understand axiomatic attributions for deep learning networks

  • Incorporate attention in Recurrent Neural Networks and visualise the attention weights

Description

This course introduces the concepts of interpretability and explainability in machine learning applications. Learners will understand the difference between global, local, model-agnostic and model-specific explanations. State-of-the-art explainability methods such as Permutation Feature Importance (PFI), Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are explained and applied to time-series classification. Subsequently, model-specific explanations such as Class Activation Mapping (CAM) and Gradient-weighted CAM (Grad-CAM) are explained and implemented. Learners will understand axiomatic attributions and why they are important. Finally, attention mechanisms are incorporated after recurrent layers, and the attention weights are visualised to produce local explanations of the model.
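To make the global, model-agnostic idea concrete, below is a minimal sketch of permutation feature importance for a multivariate time-series classifier. It is not course material: the data shape (samples, time steps, channels), the scikit-learn-style `model.predict` interface and the metric signature are assumptions made for illustration.

```python
import numpy as np

def permutation_feature_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Global, model-agnostic importance: the drop in score when one channel
    of the time series is shuffled across samples.

    Assumed (illustrative) interfaces: X has shape (samples, time_steps,
    channels), model exposes a scikit-learn-style predict(X), and metric is
    a score function such as sklearn.metrics.accuracy_score(y_true, y_pred).
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))       # score on the intact data
    n_channels = X.shape[2]
    drops = np.zeros((n_channels, n_repeats))
    for ch in range(n_channels):
        for rep in range(n_repeats):
            perm = rng.permutation(X.shape[0])
            X_perm = X.copy()
            # Shuffle this channel across samples to break its link to the labels
            X_perm[:, :, ch] = X[perm, :, ch]
            drops[ch, rep] = baseline - metric(y, model.predict(X_perm))
    return drops.mean(axis=1)                    # mean score drop per channel
```

A large mean score drop for a channel suggests the classifier relies on it; a drop near zero suggests the model does not use that channel, which says nothing about whether the channel is informative in principle.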

Similar courses

Machine Learning
  • Flexible deadline
  • 61 hours
  • Certificate
Neural Networks and Deep Learning
  • Flexible deadline
  • 27 hours
  • Certificate
Introduction to Machine Learning in Production
  • Flexible deadline
  • 10 hours
  • Certificate