Explainable AI (XAI)
One important challenge in machine learning is the “black box” problem, in which an artificial intelligence reaches a result without any humans being able to explain why.
This problem typically arises in deep artificial neural networks, whose hidden layers are effectively impenetrable to inspection. To tackle it, our researchers have introduced the notion of explainable AI (XAI): artificial intelligence whose results can be understood by humans. The XAI position is usually characterized in terms of three properties: transparency, interpretability, and explainability. While the first two have standard definitions, explainability is not understood in a uniform manner.
Businesses increasingly rely on artificial intelligence (AI) systems to make decisions that can significantly affect individual rights, human safety, and critical business operations. But how do these models derive their conclusions? What data do they use? And can we trust the results?
Addressing these questions is the essence of “explainability,” and getting it right is becoming essential. While many companies have begun adopting basic tools to understand how and why AI models render their insights, unlocking the full value of AI requires a comprehensive strategy.
Even as explainability gains importance, it is becoming significantly harder. Modeling techniques that today power many AI applications, such as deep learning and neural networks, are inherently more difficult for humans to understand. For all the predictive insights AI can deliver, advanced machine learning engines often remain a black box. The solution isn’t simply finding better ways to convey how a system works; rather, it’s about creating tools and processes that help even deep experts understand the outcome and then explain it to others.
Complicating matters, different consumers of the AI system’s data have different explainability needs. A bank that uses an AI engine to support credit decisions will need to provide consumers who are denied a loan with a reason for that outcome. Loan officers and AI practitioners might need even more granular information to help them understand the risk factors and weightings used in rendering the decision, to ensure the model is tuned optimally. And the risk function or diversity office may need to confirm that the data used in the AI engine are not biased against certain applicants. Regulators and other stakeholders will also have specific needs and interests.
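As a toy illustration of the kind of per-decision explanation described above (a minimal sketch, not the Tel-Aviv University methods or any bank's actual model), the snippet below shows how a simple linear credit-scoring model can report which features pushed an application toward denial. Every feature name, weight, and applicant value here is invented for the example.

```python
# Illustrative sketch only: a hypothetical logistic-regression credit model whose
# per-feature contributions (weight * value) serve both as a consumer-facing
# reason for denial and as the more granular weighting detail a loan officer
# might review. All names and numbers below are made up for this example.
import numpy as np

FEATURES = ["credit_utilization", "late_payments", "income_to_debt", "account_age_years"]
WEIGHTS = np.array([-1.8, -1.2, 2.0, 0.6])   # hypothetical trained coefficients
BIAS = 0.4
THRESHOLD = 0.5                               # approve if predicted probability >= 0.5

def explain_decision(x: np.ndarray) -> None:
    """Print the approve/deny decision plus each feature's contribution to it."""
    contributions = WEIGHTS * x                      # signed contribution per feature
    score = BIAS + contributions.sum()
    prob = 1.0 / (1.0 + np.exp(-score))              # logistic link
    decision = "approved" if prob >= THRESHOLD else "denied"
    print(f"Decision: {decision} (p_approve = {prob:.2f})")
    # Rank features by how strongly they pushed the score toward denial.
    for name, c in sorted(zip(FEATURES, contributions), key=lambda t: t[1]):
        direction = "lowered" if c < 0 else "raised"
        print(f"  {name}: {direction} the score by {abs(c):.2f}")

# Hypothetical applicant with high utilization and several late payments.
applicant = np.array([0.9, 3.0, 0.4, 2.0])
explain_decision(applicant)
```

For a linear model such attributions fall out directly from the weights; for the deep networks that power many modern AI engines they do not, which is precisely the gap that XAI research aims to close.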
To tackle these explainability challenges in AI and machine learning, a group of leading researchers at Tel-Aviv University has developed innovative, patent-protected models and algorithms.
Application areas:
• Finance
• Health
• Autonomous Vehicles and Robots
• Legal
• Military and Homeland Security
• People Analytics
For further information, please refer to the following contacts and their websites:
• Prof. Irad Ben-Gal, https://lambda.eng.tau.ac.il/
• Prof. Lior Wolf, https://www.cs.tau.ac.il/~wolf/
• Yair Eran, VP Business Development at Ramot, yair.eran@ramot.org