Universitätsbibliothek Paderborn
Automated Quantification of Eye Tics Using Computer Vision and Deep Learning Techniques
Details
Author(s) / Contributors
Conelea, Christine
Liang, Hengyue
DuBois, Megan
Raab, Brittany
Kellman, Mia
Wellen, Brianna
Jacob, Suma
Wang, Sonya
Sun, Ju
Lim, Kelvin
Title
Automated Quantification of Eye Tics Using Computer Vision and Deep Learning Techniques
Is part of
Movement disorders, 2024-01, Vol.39 (1), p.183-191
Place / Publisher
Hoboken, USA: John Wiley & Sons, Inc
Year of publication
2024
Source
MEDLINE
Descriptions / Notes
Background: Tourette syndrome (TS) tics are typically quantified using “paper and pencil” rating scales that are susceptible to factors that adversely impact validity. Video‐based methods to more objectively quantify tics have been developed but are challenged by reliance on human raters and procedures that are resource intensive. Computer vision approaches that automate detection of atypical movements may be useful to apply to tic quantification.

Objective: The current proof‐of‐concept study applied a computer vision approach to train a supervised deep learning algorithm to detect eye tics in video, the most common tic type in patients with TS.

Methods: Videos (N = 54) of 11 adolescent patients with TS were rigorously coded by trained human raters to identify 1.5‐second clips depicting “eye tic events” (N = 1775) and “non‐tic events” (N = 3680). Clips were encoded into three‐dimensional facial landmarks. Supervised deep learning was applied to processed data using random split and disjoint split regimens to simulate model validity under different conditions.

Results: Area under receiver operating characteristic curve was 0.89 for the random split regimen, indicating high accuracy in the algorithm's ability to properly classify eye tic vs. non–eye tic movements. Area under receiver operating characteristic curve was 0.74 for the disjoint split regimen, suggesting that algorithm generalizability is more limited when trained on a small patient sample.

Conclusions: The algorithm was successful in detecting eye tics in unseen validation sets. Automated tic detection from video is a promising approach for tic quantification that may have future utility in TS screening, diagnostics, and treatment outcome measurement. © 2023 The Authors. Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
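The distinction between the two evaluation regimens can be sketched in a few lines. In a random split, clips are shuffled freely, so clips from the same patient can land in both training and test sets; in a disjoint (subject-wise) split, all clips from a held-out patient are kept out of training, which better estimates performance on unseen patients. The patient IDs, clip counts, scalar "feature," and the pairwise AUC helper below are hypothetical stand-ins, not the study's actual pipeline.

```python
import random

random.seed(0)
# Hypothetical clip records: (patient_id, model_score, label).
# The study used 3-D facial-landmark clips; a scalar stands in here.
clips = [(pid, random.random(), random.randint(0, 1))
         for pid in range(11) for _ in range(5)]

def random_split(clips, test_frac=0.2):
    """Random split: the same patient may appear in train and test."""
    shuffled = random.sample(clips, len(clips))
    n_test = int(len(clips) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]

def disjoint_split(clips, test_patients):
    """Disjoint split: held-out patients are entirely unseen in training."""
    train = [c for c in clips if c[0] not in test_patients]
    test = [c for c in clips if c[0] in test_patients]
    return train, test

def auc(scored):
    """Area under the ROC curve via pairwise (score, label) comparison."""
    pos = [s for s, y in scored if y == 1]
    neg = [s for s, y in scored if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

train, test = disjoint_split(clips, test_patients={9, 10})
# No patient overlap between the two sides of a disjoint split:
assert not ({c[0] for c in train} & {c[0] for c in test})
```

Because a disjoint split never lets the model see a test patient's idiosyncratic facial dynamics during training, its AUC is typically lower than the random split's, consistent with the 0.74 vs. 0.89 gap reported above.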
Language
English
Identifiers
ISSN: 0885-3185
eISSN: 1531-8257
DOI: 10.1002/mds.29593
Titel-ID: cdi_proquest_miscellaneous_2906178440
Subject headings
Adolescent, adolescents, Algorithms, Automation, Computer vision, Deep Learning, Humans, machine learning, Movement Disorders, Tic Disorders - diagnosis, tics, Tics - diagnosis, Tourette syndrome, Tourette Syndrome - diagnosis, Tourette Syndrome - therapy, Treatment Outcome