As artificial intelligence (AI) takes on increasingly critical roles – from managing power grids to piloting autonomous vehicles – making sure these systems are safe has never been more important. But how can we be certain that the AI controlling them can be trusted?

A research team at the University of Waterloo is addressing this question, using tools from applied mathematics and machine learning to rigorously verify the safety of AI-driven systems.

"Any time you're dealing with a dynamic system – something that changes over time, such as an autonomous vehicle or a power grid – you can mathematically model it using differential equations," said Dr. Jun Liu, professor of applied mathematics and Canada Research Chair in Hybrid Systems and Control.
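As a minimal illustration of this idea (not code from the Waterloo work; the pendulum model and every parameter value here are standard textbook assumptions), a damped pendulum can be written as a pair of differential equations and simulated in Python with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, state, damping=0.5, g=9.81, length=1.0):
    """Damped pendulum: a simple dynamic system modelled by ODEs.

    state[0] = angle (rad), state[1] = angular velocity (rad/s).
    """
    theta, omega = state
    return [omega, -damping * omega - (g / length) * np.sin(theta)]

# Simulate 10 seconds from an initial push; friction bleeds off energy,
# so the trajectory decays toward the equilibrium (theta, omega) = (0, 0).
sol = solve_ivp(pendulum, t_span=(0.0, 10.0), y0=[1.0, 0.0], max_step=0.01)
print(f"final state: theta={sol.y[0, -1]:.4f}, omega={sol.y[1, -1]:.4f}")
```

Running this shows the pendulum settling to rest at the bottom: exactly the kind of stable behaviour the team's methods aim to certify for far more complex systems.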

To predict the long-term behaviour of these systems, scientists and engineers rely on a mathematical tool called a 'Lyapunov function.' Intuitively, this tool shows whether a system will naturally settle into a stable and safe state – much like a ball rolling to the bottom of a bowl and staying there. "Finding such a function, however, is often a notoriously difficult task," Liu said.
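In standard textbook notation (general background, not quoted from the paper): for a system whose state x evolves as dx/dt = f(x), with an equilibrium at the origin, a function V is a Lyapunov function if

```latex
V(0) = 0, \qquad V(x) > 0 \;\; \text{for } x \neq 0, \qquad
\dot{V}(x) = \nabla V(x) \cdot f(x) < 0 \;\; \text{for } x \neq 0.
```

The last condition says V can only decrease along the system's trajectories, so the state, like the ball in the bowl, can only roll downhill toward the equilibrium.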

To tackle this challenge, Liu and his team turned to machine learning. They built a neural network that learns to satisfy the mathematical rules that determine whether a system remains stable and safe – the same rules engineers rely on to keep power grids and autonomous vehicles under control.
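In very rough terms, training samples states and penalizes violations of the Lyapunov conditions above. The sketch below conveys the flavour only; the paper's actual method is physics-informed and built on a PDE characterization, and the network architecture, loss weights, and sampling scheme here are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Pendulum dynamics from the earlier sketch: f maps a batch of states
# (angle, angular velocity) to their time derivatives.
def f(x):
    return torch.stack(
        [x[:, 1], -0.5 * x[:, 1] - 9.81 * torch.sin(x[:, 0])], dim=1
    )

# Candidate Lyapunov function: a small neural network V(x) -> scalar.
V = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for step in range(2000):
    # Sample random states in the region of interest [-2, 2]^2.
    x = (torch.rand(256, 2) - 0.5) * 4.0
    x.requires_grad_(True)
    v = V(x)
    # dV/dt along trajectories is the gradient of V dotted with f(x).
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    v_dot = (grad_v * f(x)).sum(dim=1, keepdim=True)
    margin = 0.01 * x.norm(dim=1, keepdim=True) ** 2
    # Hinge penalties: V should be positive away from the origin,
    # dV/dt negative, and V(0) approximately zero.
    loss = (
        torch.relu(margin - v).mean()
        + torch.relu(v_dot + margin).mean()
        + V(torch.zeros(1, 2)).pow(2).sum()
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The margin terms push the network toward satisfying the Lyapunov inequalities strictly rather than marginally, which matters because the learned candidate still has to survive formal verification afterward.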

The researchers then used a separate reasoning system based on rigorous computation and mathematical logic to verify that these neural networks indeed satisfy the conditions required for safety guarantees. Together, these tools provide a way to confirm that AI controllers can safely and reliably manage complex systems.  
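As a toy stand-in for that verification step (the team's verifier uses rigorous computation and mathematical logic, which sampling cannot replace; this snippet reuses V and f from the previous sketch), one can at least sanity-check a learned candidate on a dense grid:

```python
import torch

def grid_check(V, f, lo=-2.0, hi=2.0, n=201, margin=1e-4, exclude=0.1):
    """Sample V > 0 and dV/dt < -margin on a dense grid, away from the origin."""
    axis = torch.linspace(lo, hi, n)
    pts = torch.cartesian_prod(axis, axis)
    pts = pts[pts.norm(dim=1) > exclude]  # skip a small ball around 0
    pts.requires_grad_(True)
    v = V(pts)
    grad_v = torch.autograd.grad(v.sum(), pts)[0]
    v_dot = (grad_v * f(pts)).sum(dim=1, keepdim=True)
    return bool((v > 0).all()) and bool((v_dot < -margin).all())

print("passes grid check:", grid_check(V, f))
```

A grid check like this can falsify a bad candidate but never proves safety between the sampled points; closing that gap with a genuine mathematical guarantee is precisely the job of the logic-based verifier.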


It may sound surprising to use one form of AI to check another, but Liu explains that AI is a broad field. In their work, neural networks – one common type of AI – learn mathematical proofs of safety and sometimes even design the controllers themselves, while a logic-based system – another form of reasoning AI – verifies that those proofs are correct. Both are tasks researchers once had to do manually.

"To be clear, no one is attempting to create factories or systems run entirely by AI without any human input," Liu said. "There are areas, such as ethics, that will always be guided by human judgment. What these AI controllers and proof assistants are doing is taking over computation-intensive tasks, like deciding how to deploy power in a grid or constructing tedious mathematical proofs, which frees up humans for higher-level decisions."

The framework has been tested on several challenging control problems, where it matched or exceeded traditional approaches. Liuโ€™s team is now developing it into an open-source toolbox and exploring industry collaborations to advance safe and trustworthy AI for physical systems. 

This work is part of Waterlooโ€™s broader commitment to advancing safe and trustworthy AI, supported by initiatives such as the TRuST Scholarly Network and recent federal efforts to promote responsible and transparent AI. 

The research, โ€œPhysics-informed neural network Lyapunov functions: PDE characterization, learning, and verification,โ€ appears in Automatica.  

