Mapping out a trip to a Digital & Intelligent Future

Key Takeaways

Artificial Intelligence (AI) is an approach to making a computer, machine, robot, or product think like a human being. In essence, AI simulates human intelligence on machines; its core idea is to learn and then mimic or emulate what humans do with high precision and accuracy. As AI's influence grows in the current business context, the black-box model is no longer sufficient for practitioners, who are unwilling to blindly trust an algorithm's predictions. That is where Explainable AI (XAI) comes into the picture. Because AI's decision-making process is difficult for humans to understand, and because AI is being deployed in critical domains such as military technology and medicine, the need for a robust methodology for building models that can explain their decisions has come to the forefront. Mr. Anirban Nandi, an expert in the field of AI, shared in-depth insights into XAI and its significance in the current business environment. With close to 15 years of professional experience, Anirban specializes in Data Science, Business Analytics, and Data Engineering across various verticals of online and offline retail. Here are the key takeaways from the session.

Explainable AI (XAI)

Explainable AI (XAI) is an emerging field in machine learning that aims to explain how the black-box decisions of AI systems are made ('black box' refers to the difficulty of interpreting an AI's decision-making process). The term XAI refers to the initiatives and efforts made to address AI transparency and trust concerns. In layman's terms, XAI helps us understand which parameters in the algorithm contributed to the decision-making process. Model accuracy and explainability are the bedrock of XAI.
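One widely used model-agnostic way to peek inside a black box is permutation importance: shuffle one input feature at a time and measure how much the model's error grows. The sketch below is a minimal, self-contained illustration of the idea (the `predict` function, data, and feature weights are all hypothetical, not from the session):

```python
import random

# Hypothetical black-box model: the reader only sees predict(), not the
# internals. Here it secretly depends heavily on feature 0, barely on feature 1.
def predict(row):
    return 5.0 * row[0] + 0.1 * row[1]

def permutation_importance(predict, X, y):
    """Error increase when one feature's column is shuffled: a model-agnostic
    signal of which inputs actually drive the predictions."""
    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    baseline = mse(X)
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    importances = []
    for j in range(len(X[0])):
        col = [r[j] for r in X]
        rng.shuffle(col)                      # break the feature-target link
        X_perm = [list(r) for r in X]
        for r, v in zip(X_perm, col):
            r[j] = v
        importances.append(mse(X_perm) - baseline)  # larger = more important
    return importances

# Toy data generated from the model itself, so the baseline error is zero.
X = [[i, 10 - i] for i in range(10)]
y = [predict(r) for r in X]
imp = permutation_importance(predict, X, y)
```

Shuffling feature 0 degrades the error far more than shuffling feature 1, exposing the model's real dependence without ever opening the black box. Production libraries (e.g. scikit-learn's `permutation_importance`) implement the same idea with repeated shuffles and proper scoring.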

Machine Learning Interpretability

ML interpretability is a subset of XAI and a core concept that helps humans build trust in AI systems. Interpretability lets us evaluate a model's decisions and obtain information that the end result alone cannot convey. Trust in the system is built by concentrating on three aspects:

  • Fairness (Are predictions made without discernible bias?)
  • Accountability (Is it possible to trace these predictions reliably back to something or someone?)
  • Transparency (Can we explain why and how predictions are made?)
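The fairness question above can be made concrete with a simple audit: compare the model's positive-prediction rates across groups defined by a sensitive attribute. This is a minimal sketch of one common fairness measure (demographic parity difference); the predictions and group labels are invented for illustration:

```python
# Minimal fairness audit (hypothetical data): a large gap between groups'
# positive-prediction rates signals discernible bias in the model's outputs.
def positive_rate(predictions, groups, group):
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

preds  = [1, 0, 1, 1, 0, 0, 0, 1]   # binary model outputs (assumed)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive attribute

# Group "a" receives a positive prediction 3/4 of the time, group "b" 1/4.
gap = abs(positive_rate(preds, groups, "a") - positive_rate(preds, groups, "b"))
```

A gap of 0.5 here would be a red flag worth investigating; libraries such as Fairlearn compute this and related disparity metrics at scale.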

As Machine Learning and Artificial Intelligence gain traction in the current business milieu, it is vital that data leaders invest in more robust and understandable data models that help organizations make data-driven decisions focusing on both accuracy and explainability.

To get an in-depth understanding of XAI and the most commonly used XAI algorithms, watch the full session by Anirban Nandi.

Anirban Nandi
Head of Analytics (Vice President)

Watch his full session here


Institute of Product Leadership is Asia's first business school providing accredited degree programs and certification courses exclusively in Product Management and Marketing, Data Science, and Technology Management.

Copyright © 2021 Institute of Product Leadership