Distilled BERT models in automated ICD-10 coding

dc.contributor.author Simangan, Rencio Noel Q.
dc.date.accessioned 2025-08-18T03:40:01Z
dc.date.available 2025-08-18T03:40:01Z
dc.date.issued 2025-06
dc.identifier.uri http://dspace.cas.upm.edu.ph:8080/xmlui/handle/123456789/3143
dc.description.abstract Accurate and efficient extraction of ICD-10 codes from electronic medical records (EMRs) remains a critical task for automating clinical documentation and supporting healthcare analytics. However, the large size and computational demands of pre-trained language models (PLMs) pose challenges for deployment in real-world and resource-constrained settings. This study investigates the effectiveness of distilled BERT-based models, specifically CompactBioBERT, DistilBioBERT, Roberta-PM-distill, TinyBioBERT, and Bio-MobileBERT, for ICD-10 code prediction using the PLM-ICD framework on the MIMIC-IV dataset. Evaluation metrics including Micro AUC, Micro Precision, Micro F1, and Precision at K (P@K) were used to assess model performance. Among the models tested, Roberta-PM-distill achieved the best results, with a Micro AUC of 97.91% and a Micro F1 score of 46.15%, while also maintaining strong performance on the P@K metrics. Although lower, these results are comparable to those reported in similar studies, providing a basis for the viability of distilled models for scalable and efficient ICD code prediction. A web application was developed to deploy the best-performing model for practical use. en_US
dc.subject ICD-10 Coding en_US
dc.subject Distillation en_US
dc.subject Bidirectional Encoder Representations from Transformers (BERT) en_US
dc.subject Pretrained Language Model (PLM) en_US
dc.subject Natural Language Processing (NLP) en_US
dc.title Distilled BERT models in automated ICD-10 coding en_US
dc.type Thesis en_US
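
The abstract above evaluates models with Micro AUC, Micro Precision, Micro F1, and Precision at K for multi-label ICD-10 prediction. The sketch below shows one common way such metrics can be computed with scikit-learn and NumPy; the toy arrays, the 0.5 decision threshold, and the precision_at_k helper are illustrative assumptions and are not code from the thesis or the PLM-ICD framework.

    # Illustrative sketch (not from the thesis): multi-label metrics for ICD coding.
    import numpy as np
    from sklearn.metrics import roc_auc_score, precision_score, f1_score

    # y_true: binary indicator matrix (n_samples x n_codes), 1 where the ICD-10
    # code is assigned to the note. y_prob: model-predicted probabilities.
    y_true = np.array([[1, 0, 1, 0],
                       [0, 1, 0, 0],
                       [1, 1, 0, 1]])
    y_prob = np.array([[0.9, 0.2, 0.7, 0.1],
                       [0.3, 0.8, 0.2, 0.4],
                       [0.6, 0.7, 0.1, 0.9]])
    y_pred = (y_prob >= 0.5).astype(int)  # fixed 0.5 threshold (an assumption)

    micro_auc = roc_auc_score(y_true, y_prob, average="micro")
    micro_precision = precision_score(y_true, y_pred, average="micro", zero_division=0)
    micro_f1 = f1_score(y_true, y_pred, average="micro", zero_division=0)

    def precision_at_k(y_true, y_prob, k):
        """Mean fraction of correct codes among each sample's top-k scored codes."""
        topk = np.argsort(-y_prob, axis=1)[:, :k]        # indices of the k highest scores
        hits = np.take_along_axis(y_true, topk, axis=1)  # 1 where a top-k code is correct
        return hits.mean()

    print(f"Micro AUC: {micro_auc:.4f}")
    print(f"Micro Precision: {micro_precision:.4f}")
    print(f"Micro F1: {micro_f1:.4f}")
    print(f"P@2: {precision_at_k(y_true, y_prob, 2):.4f}")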

