- Good decision making is based on some assessment of uncertainties
  - Medical diagnosis
  - Asymmetric-cost situations
  - Benefit/risk evaluation
  - Multi-factorial decisions
  - Self-driving cars
  - ...
@////////////////////////////////////////
@////////// Sources of uncertainty
@eval-header: return highlightLi(part+=1)
# @copy: global
## What is uncertainty?
> Uncertainty refers to epistemic situations involving **imperfect or unknown information**.
> \
> \
> It applies to predictions of **future events**, to **physical measurements** that are already made, or to the **unknown**. \
> Uncertainty arises in partially observable and/or stochastic environments, as well as due to **ignorance, indolence, or both**. \
> \
> It arises in any number of fields, including insurance, philosophy, physics, **statistics**, economics, finance, psychology, sociology, **engineering**, metrology, meteorology, ecology and information science.
> Wikipedia
// machine learning is the engineering of statistical software
## Sources of Uncertainty in Models
- Traditional ideal (deterministic) models, like rules in physics
  - e.g., $x_{t+1} = f(x_t)$ (e.g., dynamics, where $f$ includes gravity, etc.)
  - e.g., $Y = f(X)$ (e.g., the ideal gas law, $PV = nRT$); a small sketch below illustrates how noise breaks this ideal
@:.helped-svg
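To make this concrete, a minimal numpy sketch (hypothetical numbers) of how an ideal deterministic law meets noisy, finite data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized deterministic law: Y = f(X)
f = lambda x: 2.0 * x + 1.0

# Real observations: finite data (epistemic) + measurement noise (aleatoric)
x = rng.uniform(0.0, 1.0, size=20)
y = f(x) + rng.normal(0.0, 0.3, size=20)

# A fit recovers f only approximately; with fewer points or more noise,
# the recovered coefficients vary more from dataset to dataset.
print(np.polyfit(x, y, deg=1))  # close to [2.0, 1.0], but not exactly
```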
## Toy 2D-Dataset: *Epistemic*{.epi} Uncertainty
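(Hypothetical stand-in for the slide's figure: sklearn's two-moons data with a small ensemble; member disagreement is a simple proxy for epistemic uncertainty, and it typically grows away from the data.)

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.15, random_state=0)

# A small ensemble: each member sees the same data but starts differently.
ensemble = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=s).fit(X, y) for s in range(5)]

def epistemic(points):
    """Disagreement of ensemble members, used as epistemic uncertainty."""
    probs = np.stack([m.predict_proba(points)[:, 1] for m in ensemble])
    return probs.std(axis=0)

print(epistemic(np.array([[0.5, 0.25]])))  # near the data: members agree
print(epistemic(np.array([[4.0, 4.0]])))   # far away: usually more disagreement
```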
## A Quick Visual Summary of Uncertainty
## How Do Neural Nets Do Classification? *(reminder, example with 3 classes)*{.dense}
{.halfwidth .centered .will-alea .step}
Case 2!
{.halfwidth .centered .will-epi .step style="vertical-align: top"}
@anim: %+alea: .will-alea
@anim: %+epi: .will-epi +
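For the reminder, a minimal numpy version of the final softmax step over 3 class logits (illustrative values only):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])  # network outputs for 3 classes
print(softmax(logits))               # ~[0.79, 0.18, 0.04] -> predict class 0
```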
## ReLU Networks are Overconfident (Hein et al., CVPR 2019)
{style="position: absolute; top: 10px; right: 0;"}
- Over-confident predictions
- A deep model doesn't know *what it doesn't know*{.epi} {.challenge}
@anim: .moons
- NB: it is also over-confident in *regions of inherent uncertainty*{.alea} {.dense}
(Image from the companion webpage https://github.com/max-andr/relu_networks_overconfident of:
  "Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem", Hein et al., CVPR 2019)
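A hedged numpy sketch of the phenomenon (not the paper's experiment): for a ReLU net, scaling an input away from the data makes the logits grow roughly linearly, so the max softmax probability tends to 1:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)  # toy random weights
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def confidence(x):
    h = np.maximum(0.0, W1 @ x + b1)     # ReLU layer
    logits = W2 @ h + b2
    p = np.exp(logits - logits.max())
    return (p / p.sum()).max()           # max softmax probability

x = np.array([1.0, -0.5])
for scale in [1, 10, 100, 1000]:
    print(scale, confidence(scale * x))  # confidence -> 1 as we move away
```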
- "Bayesianism"
  - Everything as random variables
  - Use (conditional) probabilities ... a lot // A generalization of traditional logic
  - .{.empty}
- Two probability rules
    - .{.empty}
    - Product rule:
      $P(A, B) = P(A|B) ~ P(B) = P(B|A) ~ P(A)$
      {.dense}
    - .{.empty}
    - Marginalization, Sum rule:
      $P(B) = \sum_A P(A,B) \triangleq \sum_a P(A=a,B)$
      {.dense}
    - @anim: .floatright +
- And in the continuous case, sums become integrals: $p(B) = \int p(A, B) \, dA$
- Use probabilities to
  - represent non-deterministic laws
  - represent uncertainty (*aleatoric*{.alea} and *epistemic*{.epi})
  - reason about uncertainty (do learning, inference)
  {.dense}
- Considering
  - some parameters (e.g., weights of the network, $W$)
  - some dataset (e.g., both training inputs and labels, $X$)
- We have *﹏*{.pen}
  - $P(W|X) = \frac{P(X|W) ~ P(W)}{P(X)} \propto P(X|W) ~ P(W)$
- More verbosely
  - $P_\text{posterior}(\text{weights} | \text{trainset}) = \frac{P_\text{likelihood}(\text{trainset} | \text{weights}) ~ P_\text{prior}(\text{weights})}{P_\text{constant}(\text{trainset})}$
{.denser .no-bullet .challenge}
- Posterior probability
  - probability distribution of the parameters given the training set
  - i.e., what we know about the parameters after seeing the training set (see the numeric sketch below)
  {.dense}
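The numeric sketch, with a hypothetical 2x2 joint distribution standing in for parameters and data:

```python
import numpy as np

# Hypothetical joint P(W, X) over 2 parameter values and 2 datasets.
# Rows: w in {w0, w1}; columns: x in {x0, x1}.
joint = np.array([[0.30, 0.10],
                  [0.20, 0.40]])

# Sum rule (marginalization): P(X) = sum_w P(W=w, X)
p_x = joint.sum(axis=0)          # [0.5, 0.5]

# Product rule: P(W, X) = P(W|X) P(X), hence P(W|X) = P(W, X) / P(X)
p_w_given_x = joint / p_x        # each column is a posterior over W

# Bayes' rule, posterior over parameters after observing dataset x1:
print(p_w_given_x[:, 1])         # [0.2, 0.8]
```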
## Principle of Bayesian Neural Networks
- (Biraja Ghoshal, Allan Tucker)
- DropWeights on a Bayesian CNN (BCNN), in the MC-dropout family (sketched below)
- improves classification performance
- good uncertainty quantification
- enables human/machine combination
@: .no-libyli .paper-with-image .two-lines
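A generic sketch of the principle (MC dropout with toy, untrained numpy weights; not the paper's exact DropWeights scheme): keep dropout active at test time and read uncertainty from the spread of the Monte Carlo samples.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(32, 2)), rng.normal(size=(2, 32))  # toy, untrained

def forward(x, p_drop=0.5):
    h = np.maximum(0.0, W1 @ x)                              # ReLU layer
    h = h * (rng.random(h.shape) > p_drop) / (1 - p_drop)    # dropout at test time
    logits = W2 @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Monte Carlo over dropout masks: mean = prediction, spread = uncertainty
samples = np.stack([forward(np.array([0.3, -0.7])) for _ in range(100)])
print("mean prediction:", samples.mean(axis=0))
print("per-class std  :", samples.std(axis=0))
```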
## Towards safe deep learning: accurately quantifying biomarker uncertainty in neural network predictions
- (Zach Eaton-Rosen, Felix Bragman, Sotirios Bisdas, Sebastien Ourselin, M. Jorge Cardoso)
{style="max-width:300px"}
@: .no-libyli .paper-with-image
## ... (cont)
@: .no-libyli .paper-with-image .two-lines
## Propagating uncertainty across cascaded medical imaging tasks for improved deep learning inference
- (Raghav Mehta, Thomas Christinck, Tanya Nair, Paul Lemaitre, Douglas L. Arnold, Tal Arbel)
{style="max-width:300px"}
## Gaussian processes (GP)
(Avoiding pathologies in very deep networks, Duvenaud et al.)
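For reference, textbook GP regression with an RBF kernel in plain numpy (hypothetical 1D data): the predictive variance returns to the prior away from the training inputs.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

x = np.array([-1.0, 0.0, 1.0])          # tiny 1D training set
y = np.sin(x)
noise = 1e-2

K = rbf(x, x) + noise * np.eye(len(x))
x_star = np.array([0.5, 3.0])           # near vs far from the data
K_s = rbf(x_star, x)

mean = K_s @ np.linalg.solve(K, y)
var = rbf(x_star, x_star).diagonal() - np.einsum(
    "ij,ji->i", K_s, np.linalg.solve(K, K_s.T))
print(mean)  # posterior mean
print(var)   # small at 0.5, back near the prior (1.0) at 3.0
```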
## Going beyond probabilities?
- Probabilities are a way of representing belief
- It might be necessary to also represent confidence
- Some possible directions
  - second-order distributions, i.e., distributions over distributions (sketched below)
(e.g., Deep Evidential Regression (above))
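A hedged sketch of that direction, following Deep Evidential Regression (Amini et al., 2020): the network outputs Normal-Inverse-Gamma parameters, and belief vs. confidence are read off separately (values below are hypothetical).

```python
# Normal-Inverse-Gamma parameters a network head could output (hypothetical)
gamma, nu, alpha, beta = 0.8, 2.0, 3.0, 1.5

prediction = gamma                      # E[mu]: the belief (point prediction)
aleatoric  = beta / (alpha - 1)         # E[sigma^2]: inherent data noise
epistemic  = beta / (nu * (alpha - 1))  # Var[mu]: shrinks as evidence nu grows
print(prediction, aleatoric, epistemic)
```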