
SPARSE-ML

In this project, funded by a Starting Grant from the European Research Council (ERC), CISPA faculty member Dr. Rebekka Burkholz aims to harness the potential of deep learning on a large scale by combining methods from statistical physics with ideas from the fields of complex networks and machine learning.


WHAT IS SPARSE-ML ABOUT?

Deep learning continues to achieve impressive breakthroughs across disciplines and is a major driving force behind a multitude of industry innovations. Most of its successes are achieved by increasingly large neural networks that are trained on massive data sets. Their development incurs costs that only a few labs can afford, which prevents global participation in the creation of related technologies. The huge model sizes also pose computational challenges for algorithms that aim to address properties that are critical in real-world applications, such as fairness, adversarial robustness, and interpretability. The high demand of neural networks for vast amounts of data further limits their utility for highly relevant tasks in biomedicine, economics, and the natural sciences.

To democratize deep learning and broaden its applicability, we have to find ways to learn small-scale models. To this end, we will promote sparsity at multiple stages of the machine learning pipeline and identify models that are scalable, resource- and data-efficient, and robust to noise, and that provide insight into the problems they solve. To achieve this, we need to overcome two challenges: the identification of trainable sparse network structures and the de novo optimization of small-scale models.
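
One standard way to illustrate the first challenge, finding a trainable sparse network structure, is magnitude-based pruning as studied in the lottery ticket literature. The following minimal sketch uses PyTorch's pruning utilities; it is meant purely as an illustration of that general idea, not as the specific methods developed in SPARSE-ML:

import torch.nn as nn
import torch.nn.utils.prune as prune

# A small dense network, e.g. for MNIST-sized inputs (layer sizes chosen arbitrarily).
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

# Remove the 90% of weights with the smallest magnitude in every linear layer.
# The resulting binary masks define a sparse structure whose surviving weights
# can be retrained, or rewound to their initial values in lottery-ticket style.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

# Report the overall fraction of weights that were set to zero.
linear_layers = [m for m in model.modules() if isinstance(m, nn.Linear)]
zeros = sum((m.weight == 0).sum().item() for m in linear_layers)
total = sum(m.weight.numel() for m in linear_layers)
print(f"overall weight sparsity: {zeros / total:.2f}")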

The solutions that we propose combine ideas from statistical physics, complex network science, and machine learning. Our fundamental innovations rely on the insight that neural networks belong to a class of cascade models that we have made analytically tractable on random graphs. Advancing these derivations will enable us to develop novel parameter initialization, regularization, and reparameterization methods that compensate for the missing implicit benefits of overparameterization during learning. The significant reduction in model size achieved by our methods will help unlock the full potential of deep learning to serve society as a whole.
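
To give a rough sense of one such ingredient (again only an illustrative sketch under simplifying assumptions, not the project's actual derivations): standard variance-scaling schemes such as He initialization assume dense layers, so for a sparse layer one can instead scale the weight variance by the number of surviving connections per neuron, which keeps signal magnitudes roughly stable despite the missing overparameterization:

import torch

def sparse_he_init(weight, mask):
    # weight, mask: tensors of shape (out_features, in_features);
    # mask entries are 1 for connections that are kept and 0 for pruned ones.
    fan_in = mask.float().sum(dim=1, keepdim=True).clamp(min=1.0)  # surviving inputs per neuron
    std = (2.0 / fan_in).sqrt()  # He-style variance 2 / fan_in, computed per output neuron
    with torch.no_grad():
        weight.normal_(mean=0.0, std=1.0)
        weight.mul_(std)   # rescale each row to its sparsity-aware standard deviation
        weight.mul_(mask)  # zero out the pruned connections
    return weight

# Example: initialize a 300x784 layer in which only 10% of the connections survive.
mask = (torch.rand(300, 784) < 0.1).float()
weight = sparse_he_init(torch.empty(300, 784), mask)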

NEWS

Rebekka Burkholz wants to democratize machine learning. Her starting point: making artificial neural networks smaller and at the same time more efficient, so that they can eventually be developed on all kinds of devices and become available to more users. The European Research Council (ERC) is now funding her research project, called SPARSE-ML, for five years with an ERC Starting Grant totaling 1.5 million euros.

Dr. Rebekka Burkholz
CISPA Faculty

PUBLICATIONS

Year 2022

NeurIPS: Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022)

ICML: Proceedings of the 39th International Conference on Machine Learning (ICML)

ICLR: The Tenth International Conference on Learning Representations

ICLR: The Tenth International Conference on Learning Representations

WORK WITH US!

For the SPARSE-ML project, Rebekka is still looking for postdocs and PhD students interested in neural network sparsification, mean field theory, general deep learning theory, or graph neural networks. Experience in numerics or statistical physics would also be a valuable asset.

If you are interested, feel free to contact Rebekka directly or apply via: