{"id":134102,"date":"2023-03-01T11:18:22","date_gmt":"2023-03-01T16:18:22","guid":{"rendered":"https:\/\/www.ucf.edu\/news\/?p=134102"},"modified":"2023-03-16T14:03:59","modified_gmt":"2023-03-16T18:03:59","slug":"ucf-researcher-receives-doe-funding-to-advance-human-understanding-of-ai-reasoning","status":"publish","type":"post","link":"https:\/\/www.ucf.edu\/news\/ucf-researcher-receives-doe-funding-to-advance-human-understanding-of-ai-reasoning\/","title":{"rendered":"UCF Researcher Receives DOE Funding to Advance Human Understanding of AI Reasoning"},"content":{"rendered":"
A University of Central Florida (UCF) researcher has received funding from the U.S. Department of Energy (DOE) to enhance the current understanding of artificial intelligence (AI) reasoning.
The project focuses on developing algorithms to create robust multi-modal explanations for foundation, or large, AI models through the exploration of several novel explainable AI methods. The DOE recently awarded $400,000 to fund the project.

The project was one of 22 proposals selected for the DOE’s 2022 Exploratory Research for Extreme-Scale Science (EXPRESS) grant, which promotes the study of innovative, high-impact ideas for advancing scientific discovery.
Unlike task-specific models, foundation models are trained on a large set of data and can be applied to many different tasks.

These models are more efficient than humans at many challenging tasks and are being used in real-world applications such as autonomous vehicles and scientific research. But few methods exist for explaining AI decisions to humans, which blocks the wide adoption of AI in fields that ultimately require human trust, such as science.
The researchers say that creating algorithms that provide meaningful explanations for a model’s decision-making will allow AI systems to be deployed with higher levels of human trust and understanding.
Rickard Ewetz, lead researcher of the project and an associate professor in UCF’s Department of Electrical and Computer Engineering, says AI models need to be transparent in order to be trusted by humans.

“It’s not just a black box that takes an input and gives an output. You need to be able to explain how the neural network reasons,” Ewetz says.

Instead of examining model gradients, which have been the emphasis of many explainable AI efforts over the last decade, the project focuses on providing meaningful explanations of AI models through innovations such as the use of symbolic reasoning to describe AI reasoning with trees, graphs, automata and equations.

The researchers aim not only to provide the needed explanations for a model’s decision-making but also to estimate the model’s explanation accuracy and knowledge limits.

Sumit Jha, co-researcher of the project and a computer science professor at the University of Texas at San Antonio, says that explainable AI is especially necessary given the rapid deployment of AI models.

“In general, AI will not tell you why it made a mistake or provide explanations for what it is doing,” Jha says. “People are accepting AI with a sort of blind trust that it is going to work. This is very worrying because eventually there will be good AI and bad AI.”
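As a rough illustration of the kind of symbolic, tree-structured explanation described above, a shallow decision tree can be fit to a neural network’s predictions so that its if/then rules act as a human-readable surrogate for the black box. This is only a generic sketch built on scikit-learn’s iris dataset, MLPClassifier and DecisionTreeClassifier; it is not the algorithm developed by the UCF and UTSA researchers, which the article does not detail.

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in for an opaque model: a small neural network trained on the iris data.
X, y = load_iris(return_X_y=True)
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
black_box.fit(X, y)

# Surrogate explanation: fit a shallow decision tree to the network's *predictions*
# (not the true labels), so the tree approximates how the network maps inputs to outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree's if/then rules act as a symbolic, human-readable description of the model,
# and the agreement score ("fidelity") estimates how faithful that description is.
print(export_text(surrogate, feature_names=list(load_iris().feature_names)))
print("Surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
```

In this toy setup, the surrogate’s agreement with the network’s predictions loosely parallels the project’s stated goal of estimating how accurate an explanation is, though the researchers’ actual methods go beyond decision trees to graphs, automata and equations.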