
Keynote Lectures

Protecting Systems from Violating Constraints using Reference Governors and Related Algorithms
Ilya Kolmanovsky, Aerospace Engineering, University of Michigan, United States

A Learning Algorithm for Neural Networks and General Dynamic Systems
Feng Lin, Wayne State University, United States

Learning and Optimization: Robotic Use Cases
Nuno Lau, Dep. Electrónica, Telecomunicações e Informática, Universidade de Aveiro, Portugal

On the Use of Regulator Theory in Neuroscience with Implications for Robotics
Mireille E. Broucke, Electrical and Computer Engineering, University of Toronto, Canada

 

Protecting Systems from Violating Constraints using Reference Governors and Related Algorithms

Ilya Kolmanovsky
Aerospace Engineering, University of Michigan
United States
 

Brief Bio
Professor Ilya V. Kolmanovsky received his M.S. degree in Aerospace Engineering in 1993, and his Ph.D. degree in Aerospace Engineering and M.A. degree in Mathematics in 1995, all from the University of Michigan, Ann Arbor. He is presently a full professor in the Department of Aerospace Engineering at the University of Michigan. Professor Kolmanovsky’s research interests are in control theory for systems with state and control constraints, and in control applications to aerospace and automotive systems. Before joining the University of Michigan in January 2010, he was with Ford Research and Advanced Engineering in Dearborn, Michigan for close to 15 years. He is a Fellow of IEEE, an Associate Fellow of AIAA, and a recipient of the 2002 Donald P. Eckman Award of the American Automatic Control Council, the 2002 and 2016 IEEE Transactions on Control Systems Technology Outstanding Paper Awards, the SICE Technology Award, and several awards from Ford Research and Advanced Engineering. He has co-authored over 180 journal articles, 3 edited books, 20 book chapters, and over 400 refereed conference papers, and is an inventor on over one hundred granted United States patents.


Abstract
Constraints refer to limits imposed on state and control variables which must be satisfied during system operation. Examples of constraints include, but are not limited to, actuator range and rate limits, pressure and temperature safety limits, and obstacle avoidance requirements. With the continuing trends towards growing system autonomy, improved performance and downsizing, constraint handling and limit protection functions are becoming increasingly important to enable engineered systems to operate safely at the “limits”.
The presentation will introduce the reference governor, an add-on predictive safety supervision algorithm that monitors and, if necessary, modifies commands passed to the nominal system to ensure that constraints are satisfied. Approaches to the design and implementation of reference governors will be described along with the supporting theory. Potential applications of reference governors to automotive and aerospace systems will be illustrated with several examples.
Recent extensions of reference governors will be described, including controller state and reference governors, action governors, and feasibility governors which supervise Model Predictive Controllers. The learning reference governor, which integrates learning into reference governor operation to handle constraints in uncertain systems, will also be introduced.
The presentation will end with the speaker’s perspectives on challenges and opportunities in handling constraints in increasingly autonomous systems.
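As a rough illustration of the mechanism described above (a generic textbook-style scheme, not the speaker's own formulation), a minimal scalar reference governor can be sketched in a few lines. All system matrices, the horizon, and the grid resolution below are illustrative assumptions.

```python
import numpy as np

# Minimal scalar reference governor for a stable discrete-time system
#   x[k+1] = A x[k] + B v[k],  y[k] = C x[k],  constraint |y| <= y_max.
# Each step, the governor picks the largest kappa in [0, 1] such that
# holding the modified command v = v_prev + kappa*(r - v_prev) constant
# keeps the predicted output within the constraint over a finite horizon
# (a finite-horizon approximation of constraint admissibility).
def governed_command(A, B, C, x, v_prev, r, y_max, horizon=50, grid=101):
    best = v_prev  # kappa = 0: hold the previous (assumed feasible) command
    for kappa in np.linspace(0.0, 1.0, grid):
        v = v_prev + kappa * (r - v_prev)
        xk = x.copy()
        feasible = True
        for _ in range(horizon):
            xk = A @ xk + B * v              # propagate under constant v
            if abs(float(C @ xk)) > y_max:   # predicted constraint violation
                feasible = False
                break
        if feasible:
            best = v                          # keep the largest feasible step
    return best
```

For a first-order plant with DC gain 5 and `y_max = 1`, a reference of 1 is trimmed to roughly 0.2 so the predicted output never exceeds the limit, while a small reference of 0.1 passes through unmodified.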



 

 

A Learning Algorithm for Neural Networks and General Dynamic Systems

Feng Lin
Wayne State University
United States
 

Brief Bio
Feng Lin received his B.Eng. degree in electrical engineering from Shanghai Jiao Tong University, Shanghai, China, in 1982, and the M.A.Sc. and Ph.D. degrees in electrical engineering from the University of Toronto, Toronto, ON, Canada, in 1984 and 1988, respectively. He was a Post-Doctoral Fellow at Harvard University, Cambridge, MA, USA, from 1987 to 1988. Since 1988, he has been with the Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA, where he is currently a Professor. His current research interests include discrete event systems, hybrid systems, neural networks, robust control, and their applications in alternative energy, biomedical systems, machine learning, and automotive control. He authored the book “Robust Control Design: An Optimal Control Approach” and coauthored a paper that received the George S. Axelby Outstanding Paper Award from the IEEE Control Systems Society. He was an associate editor of the IEEE Transactions on Automatic Control. He is a Fellow of IEEE.


Abstract
In the past 10 years or so, machine learning has made great progress, in terms of both theory and applications. Among various machine learning methods, deep learning is one of the most widely used. Significant advances have been made in speech recognition, object recognition and detection, and many other applications using deep learning. In this talk, we present a learning algorithm that is mathematically equivalent to the well-known back-propagation learning algorithm in neural networks but has the following advantages over the back-propagation algorithm. (1) It does not require a feedback network to back-propagate the errors. This makes its implementations, especially implementations on silicon, much simpler. (2) It is biologically plausible, as all the information needed for synapses to adapt is available within a biological neuron. Hence, artificial neural networks indeed mimic biological neural networks. (3) It can be applied to general dynamic systems; that is, many dynamic systems can adapt in a way similar to neural networks. This property allows us to introduce “intelligence” into a large class of engineering systems. In particular, we consider adaptation in control systems using the proposed learning algorithm, including adaptive state feedback, adaptive PID control, and model reference adaptive control.
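For context, the back-propagation baseline that the talk's algorithm is said to match can be sketched in a few lines. The network size, task, and learning rate below are illustrative assumptions; the explicit backward pass (`dY @ W2.T`) is the "feedback network" that the proposed algorithm is said to avoid.

```python
import numpy as np

# Minimal back-propagation for a one-hidden-layer sigmoid network on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def mse():
    return float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - T) ** 2).mean())

mse_before = mse()
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)           # forward pass
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)         # output-layer error signal
    dH = (dY @ W2.T) * H * (1 - H)     # errors propagated backward
    W2 -= H.T @ dY;  b2 -= dY.sum(axis=0)
    W1 -= X.T @ dH;  b1 -= dH.sum(axis=0)
mse_after = mse()
```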



 

 

Learning and Optimization: Robotic Use Cases

Nuno Lau
Dep. Electrónica, Telecomunicações e Informática, Universidade de Aveiro
Portugal
 

Brief Bio
Nuno Lau is Associate Professor at Aveiro University, Portugal. He received his Electrical Engineering degree from Oporto University in 1993, a DEA degree in Biomedical Engineering from Claude Bernard University, Lyon, in 1994, and his PhD from Aveiro University in 2003. His research interests include Intelligent Robotics, Artificial Intelligence, Multi-Agent Systems and Simulation. Nuno Lau has participated, often in a coordinating role, in several research projects that have been awarded international prizes. His participation and leading role in RoboCup teams made him a three-time RoboCup World Champion in different leagues, with many other podium finishes. Nuno Lau is the author of more than 150 publications in international conferences and journals, including several Best Paper awards. He has supervised 9 PhD students to the completion of their degrees and is currently supervising 6 PhD students. Nuno Lau has served as President and Vice-President of the Portuguese Robotics Society. He is the Vice-Coordinator of the Institute of Electronics and Informatics Engineering of Aveiro, and Principal Investigator of the Intelligent Robotics and Systems group at the same research unit.


Abstract
Machine Learning and Optimization techniques are now widely used in many scientific disciplines. Using Machine Learning and Optimization in Robotics is a very challenging task, as robots may be expensive and fragile, and the time and effort needed to collect data are, in general, high. With these premises in mind, we have developed several techniques that, through the use of simulators and adapted learning/optimization algorithms that use data very efficiently, make Learning and Optimization an effective alternative to hand-coded approaches in Robotics. These techniques have been applied to robotic tasks at the perception, reasoning, behavior and multi-agent levels.
This talk will present some of these techniques in different contexts, namely those related to ML/optimization-based development of robotic skills, data preparation, the Q-Batch update rule, multi-agent learning, adapted interfaces, and mixed deep learning/heuristic classification.



 

 

On the Use of Regulator Theory in Neuroscience with Implications for Robotics

Mireille E. Broucke
Electrical and Computer Engineering, University of Toronto
Canada
 

Brief Bio
Mireille Broucke obtained the BS degree in Electrical Engineering from the University of Texas at Austin in 1984 and the MSEE and PhD degrees from the University of California, Berkeley in 1987 and 2000, respectively. She was a postdoc in Mechanical Engineering at the University of California, Berkeley during 2000-2001. She has six years of industry experience in control design for the automotive and aerospace industries. During 1993-1996 she was a program manager and researcher at Partners for Advanced Transportation and Highways (PATH) at the University of California, Berkeley. Since 2001 she has been at the University of Toronto, where she is a professor in Electrical and Computer Engineering. Her current research interests are in the area of control theory applied to neuroscience.


Abstract
We explore the potential of using the internal model principle of control theory to explain the cerebellum, a major component of the brain. The cerebellum is involved in motor control, motor learning, posture and balance, gait control, eye movement, language regulation, emotion regulation, etc. It has been described as regulating and coordinating all movements of precision. Neuroscientists hypothesize that the cerebellum contains forward or inverse models of the systems it regulates. Unfortunately, conclusive experimental proof that this hypothesis is correct has not been forthcoming, despite a 30-year effort. This suggests that the problem is particularly challenging. But it also suggests that perhaps it is time for the hypothesis to be re-examined. In this talk we describe an alternative approach to modeling the cerebellum, based on a hypothesis that its primary function is disturbance rejection. We present an overview of the key methods in control theory that allow us to model the cerebellum, some of which have only been developed in the last five to ten years. Evidence of the efficacy of our approach is given in terms of the slow eye movement systems, the optokinetic system, the saccadic system, and visuomotor adaptation. We conclude the talk with implications for robotics.
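A toy illustration of the internal model principle underlying the disturbance-rejection view (our sketch, not the speaker's model): rejecting a constant disturbance requires the controller to embed a model of that disturbance, i.e., an integrator. The plant, gains, and numbers below are illustrative assumptions.

```python
# First-order plant x' = -x + u + d with unknown constant disturbance d.
# A proportional controller leaves a steady-state error; adding an
# integrator (an internal model of the constant disturbance) drives the
# tracking error to zero, as the internal model principle predicts.
def simulate(ki, d=0.5, r=1.0, kp=2.0, dt=0.01, steps=3000):
    x, z = 0.0, 0.0                 # plant state, integrator state
    for _ in range(steps):
        e = r - x
        u = kp * e + ki * z         # ki = 0 recovers pure proportional control
        z += dt * e                 # integrator: internal model of a step
        x += dt * (-x + u + d)      # forward-Euler step of the plant
    return abs(r - x)

err_p  = simulate(ki=0.0)   # proportional only: nonzero steady-state error
err_pi = simulate(ki=1.0)   # with internal model: error decays to ~0
```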


