Keynote Lectures
Towards Transparent, Physically Consistent Machine Learning Models
Robert Babuska, Delft University of Technology / Czech Technical University in Prague, Netherlands
Keynote Lecture
Uzay Kaymak, Eindhoven University of Technology, Netherlands
Keynote Lecture
Michael Berthold, University of Konstanz / KNIME AG, Germany
Towards Transparent, Physically Consistent Machine Learning Models
Robert Babuska
Delft University of Technology / Czech Technical University in Prague
Netherlands
Brief Bio
Robert Babuška received the M.Sc. (with honors) degree from the Czech Technical University in Prague in 1990, and the Ph.D. (cum laude) degree from Delft University of Technology, the Netherlands, in 1997. He holds a part-time appointment as Full Professor and Head of the Learning and Autonomous Control Group at TU Delft, and serves as Vice Director for Research and Head of the Machine Learning Group at the Czech Institute of Informatics, Robotics, and Cybernetics (CIIRC), CTU Prague. He was the founding director of both the TU Delft Robotics Institute and the ELLIS Unit Delft, which is part of the pan-European AI network ELLIS. His research interests include reinforcement learning, adaptive and learning control, and nonlinear system identification. He has applied these techniques across various fields, including process control, robotics, and aerospace.
Abstract
As machine learning is rapidly adopted across a wide range of domains, the need for models that are both accurate and physically interpretable has never been more critical. This talk explores recent advances in symbolic regression, a rapidly evolving field that aims to derive concise, human-understandable models from data. The goal is to provide researchers and practitioners with actionable insights into building next-generation equation learners that combine the rigor of physics with the power of machine learning, even when training on sparse datasets. We begin by examining the evolution from traditional genetic programming (GP)-based symbolic regression to neural architectures that incorporate prior system knowledge and enforce physical plausibility. We then extend this paradigm to neuro-evolutionary algorithms that combine evolutionary search over neural network topologies with gradient-based fine-tuning of their parameters. Finally, we discuss transformer-based architectures that reduce the computational burden by pre-training a large, generic model that can be quickly queried on unseen data at inference time. We give application examples from robotics, where these advances are particularly impactful in providing accurate yet interpretable dynamic models, which are essential for reliable control, planning, and optimization.
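To make the starting point of the abstract concrete, the following is a minimal, self-contained sketch of GP-style symbolic regression: expression trees are grown at random, scored against data, and improved by mutation with elitist selection. This is purely illustrative and not the speaker's method; the toy dataset, the assumed hidden law y = x**2 + x, and all function names are this sketch's own assumptions, and real equation learners use far richer operator sets and genetic operators.

```python
import random

random.seed(0)

# Toy dataset sampled from an assumed hidden law y = x**2 + x
# (purely illustrative; not taken from the talk).
DATA = [(x, x * x + x) for x in [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]]

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}
TERMINALS = ["x", 1.0]

def random_tree(depth=3):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Evaluate an expression tree at input x."""
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Sum of squared errors on the dataset (lower is better)."""
    err = sum((evaluate(tree, x) - y) ** 2 for x, y in DATA)
    return err if err == err and err != float("inf") else 1e18  # guard nan/inf

def mutate(tree):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

pop = [random_tree() for _ in range(50)]
start_best = min(fitness(t) for t in pop)
for _ in range(40):
    pop.sort(key=fitness)              # selection: rank by error
    elites = pop[:10]                  # elitism keeps the best formulas intact
    pop = elites + [mutate(random.choice(elites)) for _ in range(40)]

best = min(pop, key=fitness)
print(best, fitness(best))
```

Because candidate models are explicit expression trees rather than opaque weight matrices, the best individual can be read off directly as a formula, which is the interpretability property the abstract emphasizes; the neural and transformer-based approaches discussed in the talk address the scalability limits of this classical search.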
Keynote Lecture
Uzay Kaymak
Eindhoven University of Technology
Netherlands
https://www.tue.nl/en/research/researchers/uzay-kaymak
Brief Bio
Available soon.
Keynote Lecture
Michael Berthold
University of Konstanz / KNIME AG
Germany
Brief Bio
Available soon.