Interpretation

Interpreting model outputs.

Sleep Staging

Paediatric polysomnography (PSG) is analysed using machine learning (ML). All trained models are multiclass classifiers that output calibrated probabilities via the softmax function.
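As a minimal illustration of the softmax step (the stage labels and logit values below are hypothetical, not outputs of the product), a vector of raw model scores for one epoch is mapped to a probability distribution over sleep stages:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Map raw scores to probabilities; shift by the max for numerical stability."""
    shifted = logits - logits.max()
    exp = np.exp(shifted)
    return exp / exp.sum()

# Hypothetical logits for one 30-second epoch over five sleep stages.
stages = ["Wake", "N1", "N2", "N3", "REM"]
logits = np.array([0.2, 1.1, 3.4, 0.7, 1.9])

probs = softmax(logits)
for stage, p in zip(stages, probs):
    print(f"{stage}: {p:.3f}")  # the five probabilities sum to 1
```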

These probabilities undergo log-probability smoothing to stabilise sleep states and the transitions between them. The smoothed probabilities are used to produce the PSG hypnogram, while the unmodified softmax probabilities remain available in the detailed per-epoch analysis.
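The exact smoothing scheme is not specified here; the sketch below assumes a simple moving average over log probabilities with an illustrative window length, followed by renormalisation:

```python
import numpy as np

def smooth_log_probs(probs: np.ndarray, window: int = 5) -> np.ndarray:
    """Smooth per-epoch stage probabilities in log space with a moving average.

    probs: array of shape (n_epochs, n_stages). The window length is an
    illustrative assumption, not the product's configured value.
    """
    log_p = np.log(np.clip(probs, 1e-12, None))  # guard against log(0)
    kernel = np.ones(window) / window
    # Convolve each stage's log-probability trace across epochs.
    smoothed = np.column_stack(
        [np.convolve(log_p[:, s], kernel, mode="same") for s in range(log_p.shape[1])]
    )
    # Return to probability space and renormalise each epoch.
    p = np.exp(smoothed)
    return p / p.sum(axis=1, keepdims=True)

# Hypnogram: most probable stage per epoch after smoothing, e.g.
# hypnogram = smooth_log_probs(epoch_probs).argmax(axis=1)
```

Averaging in log space tends to damp isolated, low-confidence stage flips, yielding a steadier hypnogram than a per-epoch argmax over the raw probabilities.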

Model Explanations

Feature importance values, or explanations, estimate the impact of individual features on the model's output on a per-epoch basis. Care should be taken when interpreting these values, and the sign of an explanation value should be read in light of the real-world directionality of the corresponding feature: for example, a feature encoded inversely (where lower raw values indicate greater severity) will carry a sign opposite to its clinical direction.

Global Explanations

To estimate the overall importance of each feature to a prediction, Global Explanations are approximated using the KernelSHAP method. The coefficients of a weighted linear model fitted over feature coalitions provide an approximation of how changes in each feature influence the model's prediction.
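A minimal sketch using the shap package's KernelExplainer; the classifier, synthetic data, and sample counts are illustrative stand-ins rather than the production configuration:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 200 epochs, 8 features, 5 sleep stages.
X = rng.normal(size=(200, 8))
y = rng.integers(0, 5, size=200)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# KernelSHAP fits a weighted linear model over feature coalitions; a small
# background sample stands in for "absent" features in each coalition.
background = shap.sample(X, 25)
explainer = shap.KernelExplainer(model.predict_proba, background)

# SHAP values for a few epochs. Depending on the shap version this is one
# (n_epochs, n_features) array per class or a single 3-D array; nsamples
# controls how many coalition perturbations are evaluated.
shap_values = explainer.shap_values(X[:5], nsamples=100)
```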

Local Explanations

To understand model behaviour near a specific prediction, Local Interpretable Model-agnostic Explanations (LIME) are produced using a K-LASSO approach. Lasso regression selects the K features that are most relevant locally, where K equals 5 plus one additional feature for every 10 features included in the model. A linear model is then trained on perturbed samples around the original data point, with each sample weighted by its proximity to that point. The coefficients of this linear model form the Local Explanations, providing a sparse estimate of how the prediction changes with small, localised variations in the selected features.
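A from-scratch sketch of that recipe; the kernel width, perturbation scale, and the magnitude-based Lasso ranking are illustrative assumptions (the lime package's LimeTabularExplainer implements the same idea using the full Lasso regularisation path):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def k_lasso_explanation(predict_proba, x, n_samples=1000, kernel_width=0.75, seed=0):
    """Local explanation for one epoch x, following the K-LASSO recipe.

    predict_proba: callable returning class probabilities for an array of rows.
    Returns (selected_feature_indices, coefficients) for the predicted class.
    """
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    k = 5 + n_features // 10  # K = 5 plus one feature per 10 model features

    # 1. Perturb around the instance and query the model.
    Z = x + rng.normal(scale=0.5, size=(n_samples, n_features))
    target = predict_proba(x[None, :]).argmax()
    y = predict_proba(Z)[:, target]

    # 2. Weight each perturbed sample by its proximity to x.
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)

    # 3. Lasso on the weighted samples picks the K locally relevant features.
    lasso = Lasso(alpha=0.01).fit(Z, y, sample_weight=w)
    selected = np.argsort(-np.abs(lasso.coef_))[:k]

    # 4. A weighted linear model on the selected features; its coefficients
    #    are the Local Explanation.
    linear = Ridge(alpha=1.0).fit(Z[:, selected], y, sample_weight=w)
    return selected, linear.coef_
```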