Machine Learning Interpretability Toolkit
We will briefly discuss what it means to develop AI in a transparent way and introduce our interpretability toolkit, which lets you apply different state-of-the-art interpretability methods to explain your model's decisions. By using this toolkit during the training phase of the AI development cycle, you can use a model's interpretability output to verify hypotheses and build trust with stakeholders. You can also use the insights for debugging, validating model behavior, and checking for bias.
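As a minimal sketch of the kind of explanation this workflow produces (using scikit-learn's permutation importance as a stand-in for the toolkit's own explainers, since the toolkit API is not shown here), the example below trains an illustrative classifier and ranks the features its decisions depend on:

```python
# Minimal sketch: global feature importance as an interpretability signal.
# scikit-learn's permutation importance stands in for the toolkit's explainers;
# the dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permute each feature on held-out data and measure the drop in score;
# large drops indicate features the model relies on for its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features so stakeholders can verify (or challenge) hypotheses about
# which inputs drive predictions -- useful for debugging and bias checks.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")
```

In practice, the same pattern applies with the toolkit's explainers: generate explanations on a validation set during training, review the top-ranked features with stakeholders, and investigate any feature whose influence contradicts domain expectations.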