Label-Efficient Sleep Staging Using Transformers Pre-trained with Position Prediction
Authors: Sayeri Lala, Hanlin Goh, Christopher Sandino
Summary: Sleep staging is a clinically important task for diagnosing various sleep disorders, but it remains challenging to deploy at scale because it is both labor-intensive and time-consuming. Supervised deep learning-based approaches can automate sleep staging but at the expense of large labeled datasets, which can be infeasible to acquire in various settings, e.g., for rare sleep disorders. While self-supervised learning (SSL) can mitigate this need, recent studies on SSL for sleep staging have shown that performance gains saturate after training with labeled data from only tens of subjects, and hence fail to match the peak performance attainable with larger datasets. We hypothesize that this rapid saturation stems from applying a sub-optimal pretraining scheme that pretrains only a portion of the architecture, i.e., the feature encoder but not the temporal encoder; we therefore propose adopting an architecture that seamlessly couples the feature and temporal encoding, together with a suitable pretraining scheme that pretrains the entire model. On a sample sleep staging dataset, we find that the proposed scheme offers performance gains that do not saturate with the amount of labeled training data (e.g., a 3–5% improvement in balanced sleep staging accuracy across low- to high-labeled data settings), reducing the amount of labeled training data needed for high performance (e.g., by 800 subjects). Based on our findings, we recommend adopting this SSL paradigm for subsequent work on SSL for sleep staging.
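The abstract does not give implementation details, so the following is only a minimal sketch of the general idea: pretraining a coupled feature encoder and temporal transformer end-to-end with a position-prediction pretext task. All module names, hyperparameters, and the exact task formulation (shuffling epochs and predicting each epoch's original index) are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch (hypothetical names/hyperparameters): pretrain the entire
# model, i.e., a per-epoch feature encoder plus a temporal transformer,
# by predicting each shuffled 30-second epoch's original position.
import torch
import torch.nn as nn

class SleepTransformer(nn.Module):
    def __init__(self, in_channels=1, d_model=128, seq_len=20,
                 n_heads=4, n_layers=4):
        super().__init__()
        # Feature encoder: maps one raw epoch (C x T samples) to a d_model vector.
        self.feature_encoder = nn.Sequential(
            nn.Conv1d(in_channels, d_model, kernel_size=50, stride=25),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # Temporal encoder: transformer over the sequence of epoch embeddings.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(layer, n_layers)
        # Pretext head: classify each epoch's original position (seq_len classes).
        self.position_head = nn.Linear(d_model, seq_len)

    def forward(self, x):                      # x: (B, L, C, T)
        B, L, C, T = x.shape
        z = self.feature_encoder(x.reshape(B * L, C, T)).reshape(B, L, -1)
        z = self.temporal_encoder(z)           # no positional encoding is added,
        return self.position_head(z)           # so order must be inferred from content

def pretrain_step(model, x, optimizer):
    """One pretraining step: shuffle epochs, predict their original positions."""
    B, L = x.shape[:2]
    perm = torch.stack([torch.randperm(L) for _ in range(B)])      # (B, L)
    shuffled = torch.gather(x, 1, perm.view(B, L, 1, 1).expand_as(x))
    logits = model(shuffled)                                       # (B, L, L)
    # Target for shuffled position l is perm[l], the epoch's original index.
    loss = nn.functional.cross_entropy(logits.reshape(B * L, L),
                                       perm.reshape(B * L))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = SleepTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 20, 1, 3000)   # toy batch: 8 sequences of 20 one-channel epochs
print(pretrain_step(model, x, opt))
```

Because the pretext loss backpropagates through both the position head and the feature encoder, this setup pretrains the whole architecture rather than the feature encoder alone, which is the distinction the abstract draws; for fine-tuning, the position head would be swapped for a sleep-stage classification head.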