In:
PLOS ONE, Public Library of Science (PLoS), Vol. 18, No. 9 (2023-09-28), p. e0291865
Abstract:
Because of the strong resemblance in visual appearance among pills, pill misuse is prevalent and has become a critical issue, responsible for one-third of all deaths worldwide. Pill identification is therefore a crucial problem that needs to be investigated thoroughly. Recently, several attempts have been made to exploit deep learning to tackle the pill identification problem. However, most published works consider only single-pill identification and fail to distinguish hard samples with nearly identical appearances. Moreover, most existing pill image datasets contain only single-pill images captured in carefully controlled environments, under ideal lighting conditions and against clean backgrounds. In this work, we are the first to tackle the multi-pill detection problem in real-world settings, aiming to localize and identify pills captured by users during pill intake. We also introduce a multi-pill image dataset taken under unconstrained conditions. To handle hard samples, we propose a novel method for constructing heterogeneous a priori graphs that incorporate three forms of inter-pill relationships: co-occurrence likelihood, relative size, and visual semantic correlation. We then present a framework for integrating these a priori graphs with pills' visual features to enhance detection accuracy. Our experimental results demonstrate the robustness, reliability, and explainability of the proposed framework, which outperforms all detection baselines on every evaluation metric. Specifically, our framework improves the COCO mAP metric by 9.4% over Faster R-CNN and by 12.0% over vanilla YOLOv5. Our study opens up new opportunities for protecting patients from medication errors using an AI-based pill identification solution.
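The co-occurrence relationship mentioned in the abstract can be illustrated with a minimal sketch: estimating, from a set of prescriptions, how likely one pill class is to appear alongside another. The function name, the toy pill labels, and the conditional-probability formulation are assumptions for illustration only; the paper's actual graph construction may differ.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_graph(prescriptions):
    """Estimate directed co-occurrence likelihoods from prescription lists.

    Each prescription is an iterable of pill-class labels. The returned
    dict maps an ordered pair (a, b) to an empirical estimate of
    P(b in prescription | a in prescription). Hypothetical helper, not
    the paper's exact construction.
    """
    pair_counts = defaultdict(int)   # how often (a, b) appear together
    class_counts = defaultdict(int)  # how often each class appears at all
    for pills in prescriptions:
        unique = set(pills)
        for p in unique:
            class_counts[p] += 1
        for a, b in combinations(sorted(unique), 2):
            pair_counts[(a, b)] += 1
            pair_counts[(b, a)] += 1
    # Normalize joint counts by the conditioning class's frequency.
    return {
        (a, b): pair_counts[(a, b)] / class_counts[a]
        for (a, b) in pair_counts
    }

# Toy example with three hypothetical pill classes:
rx = [["A", "B"], ["A", "B", "C"], ["A", "C"]]
g = cooccurrence_graph(rx)
# g[("A", "B")] == 2/3 : B accompanies A in two of A's three prescriptions
# g[("B", "A")] == 1.0 : A accompanies B in every prescription containing B
```

An edge weight like `g[("A", "B")]` could then serve as an a priori signal that, once pill A is detected in an image, class B is a plausible label for a visually ambiguous neighbor.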
Type of Medium:
Online Resource
ISSN:
1932-6203
DOI:
10.1371/journal.pone.0291865
Language:
English
Publisher:
Public Library of Science (PLoS)
Publication Date:
2023
ZDB ID:
2267670-3