Note that for small datasets with 10 classes, e.g., CIFAR10, we can find WordNet hypotheses for all nodes. However, for large datasets with 1000 classes, e.g., ImageNet, we can only find WordNet hypotheses for a subset of nodes.
Trying NBDTs in under a minute
Interested in trying out an NBDT now? Without installing anything, you can view more example outputs online and even try out our web demo. Alternatively, use our command-line utility to run inference (install with `pip install nbdt`). Below, we run inference on a picture of a cat.
```
nbdt "https://images.pexels.com/photos/126407/pexels-photo-126407.jpeg?auto=compress&cs=tinysrgb&dpr=2&w=32"  # this can also be a path to a local image
```
This outputs both the class prediction and all the intermediate decisions.
```
Prediction: cat // Decisions: animal (99.47%), chordate (99.20%), carnivore (99.42%), cat (99.86%)
```
You can load a pretrained NBDT in just a few lines of Python as well. Use the following to get started. We support several neural networks and datasets.
```python
from nbdt.model import HardNBDT
from nbdt.models import wrn28_10_cifar10

model = wrn28_10_cifar10()
model = HardNBDT(
    pretrained=True,
    dataset='CIFAR10',
    arch='wrn28_10_cifar10',
    model=model)
```
For reference, see the script for the command-line tool we ran above; only ~20 lines are directly involved in transforming the input and running inference. For more instructions on getting started and examples, see our GitHub repository.
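To make this concrete, below is a minimal end-to-end sketch of loading the pretrained model and classifying a local image. It is an illustration, not the CLI script itself: the preprocessing assumes standard CIFAR10 normalization statistics, and `'cat.jpg'` is a placeholder path.

```python
import torch
from PIL import Image
from torchvision import transforms
from nbdt.model import HardNBDT
from nbdt.models import wrn28_10_cifar10

# Load the pretrained NBDT, as in the snippet above.
model = HardNBDT(
    pretrained=True,
    dataset='CIFAR10',
    arch='wrn28_10_cifar10',
    model=wrn28_10_cifar10())
model.eval()

# Preprocessing: standard CIFAR10 statistics; the transforms used by
# the CLI script may differ slightly.
preprocess = transforms.Compose([
    transforms.Resize(32),
    transforms.CenterCrop(32),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),
                         (0.2471, 0.2435, 0.2616)),
])

image = Image.open('cat.jpg').convert('RGB')  # placeholder local path
x = preprocess(image)[None]                   # add a batch dimension
with torch.no_grad():
    outputs = model(x)  # the NBDT forward pass traverses the hierarchy
print('Predicted class index:', outputs.argmax(dim=1).item())
```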
How it Works
The training and inference process for a Neural-Backed Decision Tree can be broken down into four steps.
- Construct a hierarchy for the decision tree. This hierarchy determines which sets of classes the NBDT must decide between. We refer to this hierarchy as an Induced Hierarchy.
- This hierarchy yields a particular loss function that we call the Tree Supervision Loss⁵. Train the original neural network, without any modifications, using this new loss (see the loss sketch after this list).
- Start inference by passing the sample through the neural network backbone. The backbone is all neural network layers before the final fully-connected layer.
- Finish inference by running the final fully-connected layer as a sequence of decision rules, which we call Embedded Decision Rules (sketched below). These decisions culminate in the final prediction.
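To make step 2 concrete, here is a minimal sketch of a hard tree supervision loss: each internal node on the path to the true leaf contributes a cross-entropy term over its children, where a child's representative is the mean of the final fully-connected layer's weight rows for the leaves beneath it. The `Node` structure, `node_logits` helper, and path representation are hypothetical scaffolding for illustration, not the library's API.

```python
import torch
import torch.nn.functional as F
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    """Hypothetical hierarchy node: internal nodes have children,
    and `leaves` lists the class indices beneath this node."""
    children: List['Node'] = field(default_factory=list)
    leaves: List[int] = field(default_factory=list)
    cls: int = -1  # class index, for leaf nodes

def node_logits(features: torch.Tensor, node: Node,
                fc_weight: torch.Tensor) -> torch.Tensor:
    """Score each child by the inner product between the featurized
    sample and the child's representative (mean of FC weight rows)."""
    return torch.stack([
        features @ fc_weight[child.leaves].mean(0)
        for child in node.children])

def tree_supervision_loss(features: torch.Tensor,
                          path: List[Tuple[Node, int]],
                          fc_weight: torch.Tensor,
                          factor: float = 1.0) -> torch.Tensor:
    """`path` pairs each internal node on the root-to-true-leaf path
    with the index of the correct child at that node."""
    loss = features.new_zeros(())
    for node, correct_child in path:
        logits = node_logits(features, node, fc_weight)
        loss = loss + F.cross_entropy(
            logits[None], torch.tensor([correct_child]))
    return factor * loss
```

The total training loss would add this term, scaled by `factor`, to the standard cross-entropy on the network's outputs.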
For more detail, see our paper (Sec 3).
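In the same spirit, step 4's embedded decision rules can be sketched as a greedy descent of the hierarchy, reusing the hypothetical `Node` and `node_logits` from the sketch above. Again, this illustrates the idea rather than reproducing the library's code.

```python
def embedded_decision_rules(features: torch.Tensor, root: Node,
                            fc_weight: torch.Tensor) -> int:
    """Hard inference: at each internal node, follow the child whose
    representative best matches the featurized sample."""
    node = root
    while node.children:
        logits = node_logits(features, node, fc_weight)
        node = node.children[logits.argmax().item()]
    return node.cls  # leaf reached: the final class prediction
```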
Conclusion
Explainable AI does not fully explain how the neural network reaches a prediction: Existing methods explain the image’s impact on model predictions but do not explain the decision process. Decision trees address this, but unfortunately, images⁷ are kryptonite for decision tree accuracy.
We thus combine neural networks and decision trees. Unlike predecessors that arrived at the same hybrid design, our neural-backed decision trees (NBDTs) simultaneously address the failures (1) of neural networks to provide justification and (2) of decision trees to attain high accuracy. This primes a new category of accurate, interpretable NBDTs for applications like medicine and finance. To get started, see the project page.
By Alvin Wan,* Lisa Dunlap,* Daniel Ho, Jihan Yin, Scott Lee, Henry Jin, Suzanne Petryk, Sarah Adel Bargal, Joseph E. Gonzalez
where * denotes equal contribution