Avi Levin
Avi is an AI tech lead at Citi Innovation Lab.
He holds a master's in computer science with a thesis on interpretable machine learning.

Sessions
Large Language Models (LLMs) offer new avenues for explaining and debugging machine learning models through natural language interfaces. This talk explores how LLMs can explain both inherently interpretable models, such as Generalized Additive Models (GAMs), and complex black-box models via post-hoc explanation methods. By analyzing the modular components of interpretable models one at a time, LLMs can provide insights without exceeding context window limitations. We also demonstrate how LLMs leverage their extensive prior knowledge to detect anomalies and suggest potential issues in models. Attendees will learn practical techniques for using LLMs to enhance model transparency and trust in AI systems.
You can find the slides and the code here: https://github.com/avilog/shap2llm/blob/main/examples/XAI%20LLMs.pptx
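To illustrate the core idea of feeding modular model components to an LLM without exceeding its context window, here is a minimal sketch. It assumes per-feature contribution scores (e.g. from a GAM's shape functions or SHAP values) are already available; the feature names, values, model name, and helper functions below are hypothetical, not part of the talk's actual code.

```python
# Hedged sketch: compress a model's per-feature components into a short
# text summary, then wrap it in a prompt asking an LLM to flag anomalies.
# All feature names and contribution values are illustrative assumptions.

def summarize_components(contributions, top_k=3):
    """Rank features by absolute mean contribution and emit a short summary.

    Keeping only the top_k features is what keeps the prompt small enough
    to fit in an LLM context window, regardless of model size.
    """
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: mean contribution {value:+.2f}" for name, value in ranked[:top_k]]
    return "\n".join(lines)

def build_prompt(model_name, contributions, top_k=3):
    """Assemble an audit prompt around the compact component summary."""
    summary = summarize_components(contributions, top_k)
    return (
        f"You are auditing the model '{model_name}'.\n"
        f"Top feature contributions:\n{summary}\n"
        "Using your domain knowledge, flag any contribution that looks "
        "anomalous or suggests a data or modeling issue."
    )

# Illustrative contributions for a hypothetical credit-risk GAM.
contribs = {"age": 0.42, "income": -0.87, "zip_code": 1.35, "tenure": 0.05}
prompt = build_prompt("credit-risk-gam", contribs)
print(prompt)  # this string would be sent to an LLM chat endpoint
```

A reviewer (or an LLM) seeing this summary might flag `zip_code` dominating the predictions as a potential fairness or leakage issue, which is the kind of prior-knowledge check the talk describes.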