Context: About a month ago, Carles asserted on Twitter that Bayesian Neural Networks make no sense. This generated lots of good discussion, including a thorough response from Andrew Gordon Wilson defending BNNs. However, we feel that most responses missed the point of our critique. This blog post is a more thorough justification of our original arguments.
Proponents of Bayesian neural networks often claim that trained BNNs output distributions which capture epistemic uncertainty. Epistemic uncertainty is incredibly valuable for a wide variety of applications, and we agree with the Bayesian approach in general. However, we argue that BNNs require highly informative priors to handle uncertainty. We show that if the prior does not distinguish between functions that generalize and functions that don’t, Bayesian inference cannot provide useful uncertainties. This puts into question the standard argument that “uninformative priors” are appropriate when the true prior distribution is unknown.
What is Bayesian Inference?
In discussions on Twitter, many researchers seem to believe that “Bayesian” is synonymous with “uncertainty-aware”, or that any algorithm that uses sets or distributions of outcomes must be Bayesian. We would like to make it clear that in our view, this is not a fair characterization. The Bayesian approach to uncertainty, which involves updating prior distributions into posterior distributions using Bayes’ Rule, is certainly one of the most popular approaches. But there are other, non-Bayesian approaches as well; for example, concentration inequalities are clearly non-Bayesian, but they allow us to compute confidence intervals and uncertainty sets. (For reference, the word “Bayesian” in Bayesian Neural Network is, in fact, a reference to Rev. Bayes. Surprising but true!)
At its core, Bayes' Rule is nothing more than a relationship between conditional probability distributions:

$$p(A \mid B = b) = \frac{p(B = b \mid A)\, p(A)}{p(B = b)}$$

This is a powerful, fundamental relationship, to be sure; but any conceptions of "belief updating" or "distributions over possible worlds" are nothing more than post-hoc interpretations. Bayes' Rule simply says that for any two non-independent random variables $A$ and $B$, seeing that $B$ took a specific value $b$ changes the distribution of the random variable $A$. In standard lingo, the term $p(A)$ is called the prior, $p(B = b \mid A)$ is the likelihood, and $p(A \mid B = b)$ is the posterior. This wording stems from the fact that we have an original (prior) distribution for the random variable $A$, and then use the observed $b$ to provide an updated distribution (the posterior).
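To make this concrete, here is a minimal numerical sketch of Bayes' Rule for a discrete random variable $A$ with three possible values. The prior and likelihood numbers below are made up purely for illustration (this is our addition, not part of the original argument).

```python
import numpy as np

# Prior p(A): the distribution over A before observing anything (illustrative numbers).
prior = np.array([0.5, 0.3, 0.2])        # p(A = a) for a in {0, 1, 2}

# Likelihood p(B = b | A = a): how probable the observed value b is under each a.
likelihood = np.array([0.1, 0.7, 0.4])   # p(B = b | A = a) for a in {0, 1, 2}

# Bayes' Rule: p(A = a | B = b) = p(B = b | A = a) p(A = a) / p(B = b),
# where p(B = b) is just the normalizer sum_a p(B = b | A = a) p(A = a).
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()

print("posterior p(A | B = b):", posterior)   # approximately [0.147, 0.618, 0.235]
```

Observing $b$ shifts mass toward the values of $A$ under which $b$ was most probable; nothing about "beliefs" is required to carry out the computation.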
Let's consider how we might apply the Bayesian framework to a classification problem. We have some input space $\mathcal{X}$ and some output space $\mathcal{Y}$, which we assume are discrete (for the sake of simplicity). There exists some ground-truth function

$$f^* : \mathcal{X} \to \mathcal{Y}$$

that we are interested in. It's useful to think of a function $f$ as a vector in $\mathcal{Y}^{|\mathcal{X}|}$, where indexing the vector at index $x$ equates to evaluating the function, $f_x = f(x)$. A Bayesian approach to the problem says that in the real world there is a random variable $F$ of classification tasks, and that $f^*$ is just a sample from it. We will use $p(F = f)$ to denote the distribution of $F$. (From now on, we will just abbreviate it to $p(f)$.) Since a dataset $D$ of input-output pairs $(x, f^*(x))$ is most definitely not independent from $F$, we can use Bayes' Rule to work out the distribution of $F$, given that we have observed $D$:

$$p(f \mid D) = \frac{p(D \mid f)\, p(f)}{p(D)}$$
The term $p(D \mid f)$ just denotes that if $F = f$, then the dataset would contain labels equivalent to the outputs of $f$; the likelihood is $1$ for functions consistent with $D$ and $0$ otherwise. Why is this conditional distribution interesting? Because if the dataset was informative enough, the distribution of $F$ might collapse to a single point and we might not have any uncertainty over what $f^*$ is. Even if the distribution does not collapse to a single point, we could still do many interesting things with $p(f \mid D)$. For example, we can provide estimates by marginalizing over it,

$$\hat{y}(x) = \arg\max_{y \in \mathcal{Y}} \sum_{f} p(f \mid D)\, \mathbb{1}[f(x) = y],$$

or by finding the maximum a posteriori estimator,

$$\hat{f} = \arg\max_{f} p(f \mid D).$$

But even more interestingly, we can use the distribution to provide uncertainty: the distribution of what the particular outputs $f^*(x)$ might be. Given a test point $x$, we can output the probability that $f^*(x) = y$. This can be very important; for example, in many sensitive applications, it is essential to abstain from making predictions when uncertain. Up to this point, Bayesian methods look very appealing.
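Because $\mathcal{X}$ and $\mathcal{Y}$ are discrete, all of the quantities above can be computed exactly by enumeration when the spaces are tiny. The sketch below is our own toy illustration (with a uniform prior chosen purely for concreteness): it enumerates every $f \in \mathcal{Y}^{|\mathcal{X}|}$, applies the $0/1$ likelihood, and reads off the marginal predictive probabilities and the MAP estimate.

```python
import itertools
import numpy as np

X = [0, 1, 2, 3]        # tiny input space
Y = [0, 1]              # tiny output space (binary classification)
D = [(0, 1), (1, 0)]    # observed dataset of (x, f*(x)) pairs

# Enumerate every function f: X -> Y as a tuple (f(0), f(1), f(2), f(3)).
functions = list(itertools.product(Y, repeat=len(X)))

# Prior p(f): uniform here, purely for illustration.
prior = np.full(len(functions), 1.0 / len(functions))

# Likelihood p(D | f): 1 if f reproduces every label in D, else 0.
likelihood = np.array([1.0 if all(f[x] == y for x, y in D) else 0.0
                       for f in functions])

# Posterior p(f | D) via Bayes' Rule.
posterior = likelihood * prior
posterior /= posterior.sum()

# Marginal predictive p(f*(x) = y | D) at a test point, and the MAP function.
x_test = 2
for y in Y:
    prob = sum(p for f, p in zip(functions, posterior) if f[x_test] == y)
    print(f"p(f*({x_test}) = {y} | D) = {prob:.2f}")
print("MAP estimate:", functions[int(np.argmax(posterior))])
```

Note what happens at the unseen input: the posterior collapses only where $D$ pins the labels down, and the predictive probability at $x = 2$ stays at $0.5$, so a system could choose to abstain there. Whether that residual uncertainty is meaningful depends entirely on the prior, which is the issue we turn to next.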
But there is one core problem with the Bayesian framework. In practice, we never have access to the prior distribution $p(f)$! Who could ever claim to know the real-world distribution of functions that solve classification tasks? Not us, and certainly not Bayesians. Instead, BNNs simply choose an arbitrary prior distribution $q(f)$ over functions, and Bayesian inference is performed to compute $q(f \mid D)$. The question of whether $q(f)$ is close to the true distribution $p(f)$ is swept under the rug. In the face of this issue, some Bayesians justify the validity of BNN methods by claiming that choosing "uninformative" distributions is sound when the true distribution is unknown. However, the quality of the uncertainties outputted by BNNs is completely dependent on the prior $q(f)$, so as we will show, questions about the mismatch of $q(f)$ and $p(f)$ should not be dismissed so quickly. Others take the opposite perspective, and claim that because neural networks convert the uninformative prior in weight space into a structured prior in function space, the prior is actually close enough to be good. But as we will also discuss, known properties of neural networks call that into question.
Uncertainties from Bayesian Neural Nets with Generalization-Agnostic Priors
In order to show the profound importance of priors in Bayesian neural networks, we introduce generalization-agnostic priors. Performing Bayesian inference with such priors cannot reduce the uncertainty of $f^*(x)$ for any test point $x$ outside the dataset. This will show that, for the Bayesian framework to be useful to deep learning, the priors used must be connected with the generalization properties of neural networks, by assigning higher probability to functions that generalize well than to those that don't. To our knowledge, there is no current work determining whether the priors in use today satisfy this necessary condition, and in fact we provide some informal arguments for why it's likely that they don't.
Consider a dataset $D'$ which contains all the pairs in $D$ (i.e. $D \subset D'$), but also contains some "corrupted" input-output pairs, $(x', y')$, s.t. $y' \neq f^*(x')$. It was shown by Zhang et al. 2017 that we can train a neural network to perfectly fit $D'$. In other words, there exists a network $f_{\text{bad}}$ s.t. $f_{\text{bad}}(x) = y$ for every $(x, y) \in D'$. Thus, our networks have so much capacity that not only can they fit the correct labels, they can fit arbitrary corrupted labels! Of course, even though any network trained on $D'$ will achieve perfect accuracy on $D'$, the performance on any test set is going to be terrible. Define a prior $q$ to be "generalization-agnostic" if $q(f^*) \approx q(f_{\text{bad}})$. In other words, if it assigns similar probability to functions that generalize well ($f^*$ or functions close to it) and to functions that generalize poorly (like $f_{\text{bad}}$). What is the problem with these priors? Since the likelihood of the data is the same for $f^*$ and $f_{\text{bad}}$ (both fit every pair in $D$), and since the prior probabilities are similar, this implies that the posterior probabilities are also similar. This can be easily seen:

$$q(f^* \mid D) = \frac{q(D \mid f^*)\, q(f^*)}{q(D)} \approx \frac{q(D \mid f_{\text{bad}})\, q(f_{\text{bad}})}{q(D)} = q(f_{\text{bad}} \mid D)$$
By construction, $f_{\text{bad}}$ yields the wrong output for some test point $x'$: $f_{\text{bad}}(x') \neq f^*(x')$. Thus, under a generalization-agnostic prior, no matter how big the dataset $D$ is, we will never be able to reduce the uncertainty on what the right output is. Clearly, for Bayesian inference to make sense, it's crucial that our priors are capable of distinguishing between functions that generalize well and functions that don't.
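A minimal numerical sketch of this argument (ours, with made-up prior values): whenever $f^*$ and $f_{\text{bad}}$ both fit every pair in $D$, the ratio of their posteriors equals the ratio of their priors, independent of how much data $D$ contains.

```python
import numpy as np

# Hypothetical prior masses under a generalization-agnostic prior q (illustrative numbers).
prior_f_star = 0.003   # q(f*): the function that generalizes
prior_f_bad  = 0.002   # q(f_bad): fits D perfectly but is wrong at some test point x'

# Both functions reproduce every label in D, so q(D | f*) = q(D | f_bad) = 1.
# (For any larger dataset, the Zhang et al. construction supplies another such f_bad.)
likelihood = 1.0

# Unnormalized posteriors; the shared normalizer q(D) cancels in the ratio.
post_ratio = (likelihood * prior_f_star) / (likelihood * prior_f_bad)
print("q(f* | D) / q(f_bad | D) =", post_ratio)   # 1.5, i.e. exactly the prior ratio
```

Since $f_{\text{bad}}(x')$ disagrees with $f^*(x')$, a constant fraction of posterior mass sits on the wrong answer at $x'$; collecting more data cannot resolve this, only a prior that downweights functions like $f_{\text{bad}}$ can.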
Are Current BNNs Generalization-Agnostic?
It's common to use simple priors for BNNs, e.g. independent Gaussian distributions over the weights. Are we really to believe that such a simple distribution is capable of distinguishing between nicely-generalizing networks and poorly-generalizing networks? The following two facts provide an intuitive argument that suggests that simple priors (like Gaussians) are in fact generalization-agnostic: 1) Gaussian priors are smooth, in the sense that they assign similar probability to nearby points, and 2) training a neural network on a dataset (corrupted or not) results in a tiny change in the weights from the initialization. Thus, it seems reasonable to expect that the prior probability assigned to good weights (those trained on $D$) and bad weights (those trained on $D'$) should be similar.
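A rough numerical sketch of that intuition (our own; the "trained" weights below are simulated as small perturbations of the initialization rather than actually trained on $D$ or $D'$): under an isotropic Gaussian prior, two weight vectors that both sit close to the same initialization receive nearly identical log prior density.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100_000                                   # number of weights (illustrative)

w_init = rng.normal(0.0, 1.0, size=d)         # initialization drawn from the N(0, 1) prior

# Stand-ins for weights after training: small displacements from the initialization.
# (In reality these would come from training on the clean D and the corrupted D'.)
w_good = w_init + 0.01 * rng.normal(size=d)   # pretend: trained on D
w_bad  = w_init + 0.01 * rng.normal(size=d)   # pretend: trained on D'

def log_prior(w, sigma=1.0):
    """Log density of an isotropic Gaussian prior N(0, sigma^2 I), up to a constant."""
    return -0.5 * np.sum((w / sigma) ** 2)

lp_good, lp_bad = log_prior(w_good), log_prior(w_bad)
print("log q(w_good) - log q(w_bad):", lp_good - lp_bad)
print("relative difference:", abs(lp_good - lp_bad) / abs(lp_good))
```

The relative gap in log prior density is tiny (on the order of $10^{-4}$ here), which is exactly what "generalization-agnostic" means in weight space. How faithfully this toy mirrors real networks rests on the empirical claim that trained weights stay near their initialization.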
But in practice, BNNs do generalize to test points, and do seem to output reasonable uncertainty estimates. (Although it's worth noting that simpler approaches, like ensembles, consistently outperform BNNs.) Wouldn't that be impossible if their priors were generalization-agnostic? Well, there is another piece to the puzzle: approximation. Computing $q(f \mid D)$ is a highly non-trivial task called Bayesian inference; a large community studies tractable approximations to this quantity. (For example, variational inference formulates the problem of computing $q(f \mid D)$ as an optimization problem.) The trickiness of computing $q(f \mid D)$ could actually be the key to why BNNs with generalization-agnostic priors do something reasonable, despite their true posteriors being useless. They might not be learning anything close to the true posterior! In other words, the B in BNN might…not be doing much.
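To make "formulating posterior computation as an optimization problem" concrete, here is a deliberately tiny sketch of mean-field variational inference for a single-weight Gaussian model; the model, data, and numbers are our own toy choices, not anything specific to BNN practice. Because everything is Gaussian, both the expected log-likelihood and the KL term have closed forms, and we can compare the variational fit against the exact posterior.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: prior w ~ N(0, 1), likelihood y_i ~ N(w * x_i, sigma^2).
rng = np.random.default_rng(0)
sigma = 0.5
x = rng.normal(size=20)
y = 1.3 * x + sigma * rng.normal(size=20)     # data generated with true w = 1.3

def negative_elbo(params):
    """Negative ELBO for the variational posterior q(w) = N(mu, s^2)."""
    mu, log_s = params
    s2 = np.exp(2.0 * log_s)
    # E_q[log p(y | w)] in closed form, since the likelihood is Gaussian in w.
    expected_loglik = np.sum(
        -0.5 * np.log(2.0 * np.pi * sigma**2)
        - ((y - mu * x) ** 2 + s2 * x**2) / (2.0 * sigma**2)
    )
    # KL(N(mu, s^2) || N(0, 1)) in closed form.
    kl = 0.5 * (mu**2 + s2 - 1.0 - np.log(s2))
    return -(expected_loglik - kl)

result = minimize(negative_elbo, x0=np.array([0.0, 0.0]), method="L-BFGS-B")
mu_hat, s_hat = result.x[0], np.exp(result.x[1])

# This conjugate model has an exact posterior, so we can check how close VI gets.
precision = 1.0 + np.sum(x**2) / sigma**2
exact_mu = (np.sum(x * y) / sigma**2) / precision
exact_s = np.sqrt(1.0 / precision)
print(f"VI:    mu = {mu_hat:.3f}, s = {s_hat:.3f}")
print(f"Exact: mu = {exact_mu:.3f}, s = {exact_s:.3f}")
```

In this one-dimensional conjugate case the optimization recovers the exact posterior; with millions of weights and a non-conjugate likelihood, the approximation gap can be large, which is precisely the loophole described above.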
We do want to emphasize, though, that this section contains pure speculation. Also, it could be the case that there exists work already addressing these questions that we are unaware of. If that is the case, we would highly appreciate any references, and we will update this blog post accordingly.
A Sober Look at Bayesian Neural Networks
Good uncertainty estimates must be centered around the generalization properties of NNs. To have any guarantees that the uncertainties provided by BNNs are useful, we first need to understand what makes a specific neural network
generalize well or generalize badly. That would allow us to define priors with which to perform Bayesian inference. But we simply don’t have that understanding yet.
So viewed through this lens, BNNs with arbitrary priors are nothing more than an architectural decision. A BNN is just a neural network that maps its input to a distribution over outputs; the prior is just a hyperparameter of the model. Just making the network Bayesian bought us nothing. It will only be helpful if we find a good prior, and validate that we are actually doing accurate inference. If you personally believe that exploring this space of priors (similar to exploring the space of architectures or hyperparameters) is particularly promising, then that is a good reason to keep working on BNNs.
But when a Bayesian tells you that BNNs provide good uncertainty estimates, that is equivalent to claiming that they have access to a good prior in weight or function space. We should ask, “what evidence are you providing that your priors are any good?” The onus is on the Bayesian community to demonstrate that they are.
Regardless of whether you believe that we can find good generalization-aware priors, it’s important that we, as a field, stop ignoring the crucial importance that the prior plays in the Bayesian framework. We need to think critically and not be swayed by sloppy arguments like “uninformative priors are good under uncertainty.”
A note on the Neural Tangent Kernel line of work: NTK is interesting because we actually have gained something from thinking about infinite-width nets as Bayesian models. They become linear! And thus, simple to analyze. They provide insights into how to think about standard neural network learning. We think the likelihood that NTK will be of practical use is low, since the computational cost of these models is much higher than that of standard deep nets, but we do expect this research direction to have long-term impact on the field from a theoretical standpoint. We are personally very excited about this line of work. For example, we think it can provide valuable insights on solving the notorious stability issues of deep RL methods.
A Final Note
A little bit of context for this blog post. We've been thinking about uncertainty in NNs because our current research is centered on showing how taking uncertainties into account is essential for sample-efficient RL. The tweet thread linked at the beginning summarized our opinions on uncertainties and BNNs, and it received a fair bit of attention. Overall, it led to a set of very lively and interesting discussions, a blog post reply, and an enormous amount of interesting references. But surprisingly, some senior members of the community responded with personal attacks on Carles to an audience of tens of thousands. This behaviour strongly discourages young researchers from publicly thinking through and discussing ideas. We all understand that putting your ideas out in the world means that they will get critiqued and picked apart, especially when you are critiquing the foundations of a whole field, but the discussion should stay exclusively on the science. Even more so when the two parties have different levels of seniority and influence. The right to talk about which research directions are promising and which ones aren't should not be reserved for professors and well-established researchers. If young people don't feel comfortable talking openly, we will all miss important ideas.
Hit us up on Twitter (Carles and Jacob) to continue the discussion! Come at us, Bayesians ;-)