Details

Author(s) / Contributors
Title
Large-Scale Modeling of Sparse Protein Kinase Activity Data
Is part of
  • Journal of chemical information and modeling, 2023-06, Vol.63 (12), p.3688-3696
Place / Publisher
United States: American Chemical Society
Year of publication
2023
Source
MEDLINE
Descriptions / Notes
  • Protein kinases are a protein family that plays an important role in several complex diseases such as cancer and cardiovascular and immunological diseases. Protein kinases have conserved ATP binding sites, which when targeted can lead to similar activities of inhibitors against different kinases. This can be exploited to create multitarget drugs. On the other hand, selectivity (lack of similar activities) is desirable in order to avoid toxicity issues. There is a vast amount of protein kinase activity data in the public domain, which can be used in many different ways. Multitask machine learning models are expected to excel for these kinds of data sets because they can learn from implicit correlations between tasks (in this case activities against a variety of kinases). However, multitask modeling of sparse data poses two major challenges: (i) creating a balanced train–test split without data leakage and (ii) handling missing data. In this work, we construct a protein kinase benchmark set composed of two balanced splits without data leakage, using random and dissimilarity-driven cluster-based mechanisms, respectively. This data set can be used for benchmarking and developing protein kinase activity prediction models. Overall, the performance on the dissimilarity-driven cluster-based split is lower than on random split-based sets for all models, indicating poor generalizability of models. Nevertheless, we show that multitask deep learning models, on this very sparse data set, outperform single-task deep learning and tree-based models. Finally, we demonstrate that data imputation does not improve the performance of (multitask) models on this benchmark set.
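  • Illustration: the masked-loss idea described in the abstract can be sketched as follows. This is not the authors' code; the PyTorch model, layer sizes, toy fingerprint features, and the assumed ~5% label density are all hypothetical stand-ins. Only observed compound-kinase pairs contribute to the loss, so missing entries in the sparse activity matrix need no imputation.

    # Minimal sketch (assumptions: PyTorch, toy data, arbitrary layer sizes)
    import torch
    import torch.nn as nn

    n_compounds, n_features, n_kinases = 256, 1024, 100   # hypothetical sizes

    # Toy data: fingerprint-like features X, sparse activity matrix Y (NaN = missing)
    torch.manual_seed(0)
    X = torch.rand(n_compounds, n_features)
    Y = torch.full((n_compounds, n_kinases), float("nan"))
    observed = torch.rand(n_compounds, n_kinases) < 0.05   # ~5% of entries measured (assumed)
    Y[observed] = torch.rand(int(observed.sum()))          # stand-in activity values

    # One shared trunk, one output per kinase (task)
    model = nn.Sequential(
        nn.Linear(n_features, 512),
        nn.ReLU(),
        nn.Linear(512, n_kinases),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(5):
        optimizer.zero_grad()
        pred = model(X)
        mask = ~torch.isnan(Y)                             # restrict loss to observed labels
        loss = ((pred[mask] - Y[mask]) ** 2).mean()        # masked mean squared error
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: masked MSE = {loss.item():.4f}")

    A dissimilarity-driven cluster-based split (challenge (i) in the abstract) would be set up analogously by clustering compound fingerprint distances and assigning whole clusters to either the training or the test fold, so near-duplicate compounds cannot leak across the split.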
Language
English
Identifiers
ISSN: 1549-9596
eISSN: 1549-960X
DOI: 10.1021/acs.jcim.3c00132
Title ID: cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_10302492
