Prettifying Partial Dependence Plots in Python
Give your non-technical stakeholders a glimpse into black-box models with more engaging partial dependence plots.
May 14 · 4 min read
People don’t trust what they don’t understand. Artificial intelligence and machine learning algorithms are some of the most powerful technologies we have at our disposal, but they are also the most misunderstood. Hence, one of the most critical responsibilities of a data scientist is to communicate complex information in easy-to-understand ways.
Black Box Models
Perhaps one of the biggest concerns about neural networks is that we can't see directly into the models that produce results. We can see our inputs and outputs, and we can measure the results, but we don't truly understand the relationship between them. From a practical standpoint, this is problematic because, as with human judgment, the relationships a model has learned can change with time. What a vision AI perceives as a truck today may not reflect what trucks will look like tomorrow.
Most changes, however, are not as jarring as Tesla’s Cybertruck. How do we know algorithms are keeping up with gradual changes to common assumptions if we can’t see inside them? We open the box. And one of the best tools we have at our disposal is the partial dependence plot (PDP).
Partial Dependence Plots
The creators of Scikit-Learn describe partial dependence plots this way:
Partial dependence plots (PDP) show the dependence between the target response and a set of ‘target’ features, marginalizing over the values of all other features (the ‘complement’ features).
In other words, a PDP lets us see how a change in a predictor variable affects the predicted target. Below is a sample of PDPs showing the effect that different traits of a home have on its predicted price.
From these plots, we can see that as the median income and the age of the home increase, the predicted price tends to increase. However, as the average occupancy in an area increases, the predicted price decreases. The tick marks at the bottom represent the distribution of observations.
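The marginalization described in the quote above is easy to sketch by hand: pin the target feature to each grid value, predict for every row, and average the predictions. The sketch below is a minimal illustration of that idea on synthetic data, not scikit-learn's actual implementation:

```python
# A minimal hand-rolled partial dependence, mirroring the definition above:
# pin one feature to each grid value and average the model's predictions
# over all rows (marginalizing over the complement features).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression


def manual_partial_dependence(model, X, feature, grid):
    averages = []
    for value in grid:
        X_pinned = X.copy()
        X_pinned[feature] = value                        # fix the target feature
        averages.append(model.predict(X_pinned).mean())  # average out the rest
    return np.array(averages)


# Tiny demonstration: a noiseless linear target, price = 3 * income + 0.5 * age.
rng = np.random.default_rng(0)
X = pd.DataFrame({'income': rng.uniform(1, 10, 200),
                  'age': rng.uniform(1, 50, 200)})
y = 3 * X['income'] + 0.5 * X['age']
model = LinearRegression().fit(X, y)

grid = np.linspace(1, 10, 25)
pd_values = manual_partial_dependence(model, X, 'income', grid)
```

For this linear model, the resulting curve is a straight line with slope 3, exactly the coefficient on income, which is what a PDP should recover.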
These plots are incredibly easy to understand and easy to create. With a fitted model, dataset (X features only), and a list of input features, you can generate the above plots with a single line of code after importing the relevant libraries:
import matplotlib.pyplot as plt
from sklearn.inspection import partial_dependence, plot_partial_dependence

plot_partial_dependence(model, X, features)
These plots work for almost any type of regression model. However, I have found that non-technical stakeholders sometimes have difficulty interpreting the results when PDPs are applied to classification tasks. What's more, they are not particularly engaging to look at in a presentation. Let's dress them up and add some functionality.
Prettified PDPs
For illustrative purposes, we'll use the Titanic dataset. We'll build a simple XGBoost classifier that attempts to identify survivors based on several input features. We are most interested in figuring out how our model uses age as a predictor of survivorship (no pun intended).
import pandas as pd
from xgboost import XGBClassifier

df = pd.read_csv('titanic.csv')
X = df[['Age', 'SibSp', 'Parch', 'Fare']]
y = df['Survived']

model = XGBClassifier()
model.fit(X, y)

fig = plt.figure(figsize=(10, 9))
plot_partial_dependence(model, X, ['Age'], fig=fig)
plt.show()
As we can see, our model has identified that older persons are less likely to survive, all other factors being equal. We can also see that most passengers were between 20 and 40 years old.
Wouldn't it be great if we could get a clearer picture of the age distribution by plotting a histogram on the same chart? What about displaying partial dependence values as percentages? Wouldn't it be nice if we could also visualize the decision boundary? We can do all of this by grabbing the partial dependence values with the partial_dependence method and plotting the results ourselves. Fortunately, I have already created a function that will do this for you.
from sklearn.inspection import partial_dependence
The above function produces a PDP for a single input variable and accepts a target name for the axis labels and chart title. Furthermore, it provides options to display the y-ticks as percentages, change the decision boundary, and return the partial dependence values for further analysis. Sticking with the default settings and passing a name for the target, we get the following:
plot_pdp(model, X, 'Age', target='Survival')
With this, we get a much richer view of the age distribution, and we can clearly see where age crosses the decision boundary. The axes are labeled in a way that makes the chart easier for non-technical stakeholders to read and understand. From here, you can play around with the options to see how they change the chart, or modify the code to your liking.
If nothing else, I would encourage you to come up with new ways to share your work to create more engagement with non-technical audiences.