Toward Reliable AI for Scientific Computing

Hongkee Yoon

KAIST, Korea

Thursday, 15 July 2021, 5 pm

IBS Center for Theoretical Physics of Complex Systems (PCS), Administrative Office (B349), Theory Wing, 3rd floor

Expo-ro 55, Yuseong-gu, Daejeon, South Korea, 34126, Tel: +82-42-878-8633

With recent advances in artificial intelligence (AI), machine learning (ML) has been applied with great success in many fields, including the scientific domain. Despite these successes across a wide range of computational fields, fundamental problems arise when applying ML to scientific problems: accuracy and reliability. In general, one can prepare larger data sets to increase accuracy, but the situation in scientific computing is somewhat different. A scientific problem is essentially the process of finding an ‘answer’ to an ‘unknown problem’; how well can ML work when no training data are available? Moreover, success on a known data set does not guarantee success on problems whose answers we do not yet know. To go beyond the excitement that ML can mimic complex functions and apply it actively to real problems, we must overcome these hurdles. To improve ‘accuracy’ and ‘reliability,’ the concept of explainable AI (XAI) has been proposed, and various approaches are being pursued to achieve it. Explainable or interpretable AI can improve reliability and, in some cases, also reduce training costs. In this presentation, I will cover the concept of XAI and how it can be utilized for scientific problems, including accelerating Monte Carlo simulations and optimization problems.