
Details

Author(s) / Contributors
Title
Mini-batch optimization enables training of ODE models on large-scale datasets
Is part of
  • Nature Communications, 2022-01, Vol. 13 (1), p. 34-17, Article 34
Place / Publisher
England: Nature Publishing Group
Year of publication
2022
Source
MEDLINE
Descriptions/Notes
  • Quantitative dynamic models are widely used to study cellular signal processing. A critical step in modelling is the estimation of unknown model parameters from experimental data. As model sizes and datasets are steadily growing, established parameter optimization approaches for mechanistic models become computationally extremely challenging. Mini-batch optimization methods, as employed in deep learning, have better scaling properties. In this work, we adapt, apply, and benchmark mini-batch optimization for ordinary differential equation (ODE) models, thereby establishing a direct link between dynamic modelling and machine learning. On our main application example, a large-scale model of cancer signaling, we benchmark mini-batch optimization against established methods, achieving better optimization results and reducing computation time by more than an order of magnitude. We expect that our work will serve as a first step towards mini-batch optimization tailored to ODE models and enable modelling of even larger and more complex systems than what is currently possible.
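  • Illustrative note: the abstract describes estimating ODE model parameters with mini-batch optimization, i.e. taking gradient steps based on a random subset of experimental conditions rather than the full dataset. The sketch below is a minimal, hypothetical Python illustration of that idea, not the authors' implementation. The model (exponential decay y' = -k*y), the synthetic data, the finite-difference gradient, and all hyperparameters are assumptions chosen for illustration; only NumPy and SciPy are used.

    # Minimal sketch (assumed example, not the paper's code): mini-batch gradient
    # descent on the single parameter k of the decay ODE y' = -k*y.
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(0)

    # Synthetic "experimental" data: many conditions, each with its own initial value.
    t_obs = np.linspace(0.0, 5.0, 20)
    true_k = 0.8
    y0_all = rng.uniform(0.5, 2.0, size=200)                # one initial value per condition
    data = np.array([y0 * np.exp(-true_k * t_obs)           # noisy decay trajectories
                     + rng.normal(0.0, 0.02, t_obs.size)
                     for y0 in y0_all])

    def simulate(k, y0):
        """Solve y' = -k*y for one condition on the observation grid."""
        sol = solve_ivp(lambda t, y: -k * y, (0.0, t_obs[-1]), [y0],
                        t_eval=t_obs, rtol=1e-8, atol=1e-10)
        return sol.y[0]

    def batch_loss_and_grad(k, idx, eps=1e-4):
        """Mean squared error over one mini-batch and a finite-difference gradient."""
        def loss(kk):
            return np.mean([np.mean((simulate(kk, y0_all[i]) - data[i]) ** 2) for i in idx])
        base = loss(k)
        grad = (loss(k + eps) - base) / eps
        return base, grad

    # Mini-batch optimization: each epoch visits all conditions in random batches,
    # so every gradient step only requires simulating a small subset of conditions.
    k_est, lr, batch_size = 0.2, 0.1, 16
    for epoch in range(10):
        order = rng.permutation(len(y0_all))
        for start in range(0, len(order), batch_size):
            idx = order[start:start + batch_size]
            value, grad = batch_loss_and_grad(k_est, idx)
            k_est -= lr * grad
        print(f"epoch {epoch}: batch loss {value:.5f}, estimated k {k_est:.4f}")

    The key design point mirrored from the abstract is that each parameter update needs only batch_size ODE simulations instead of one per experimental condition, which is what gives mini-batch methods their favourable scaling on large datasets.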
Language
English
Identifiers
ISSN: 2041-1723
eISSN: 2041-1723
DOI: 10.1038/s41467-021-27374-6
Titel-ID: cdi_doaj_primary_oai_doaj_org_article_0f2289d315c2486cb8004fb1365b28cf
