Conditional expectation

Conditional expectations and conditional probabilities with respect to a sub-σ-algebra are a generalization of conditional probabilities. Among other things, they are used in the formulation of martingales.



Introduction

In a probability space <math>(\Omega, \mathcal{A}, P)</math>, the conditional probability <math>P(A|B)</math> indicates how probable the event <math>A</math> is given the information that <math>B</math> has occurred.

More generally, one can ask for the probability <math>P(A|\mathcal{B})</math>, which indicates how probable <math>A</math> is given information about the occurrence or non-occurrence of a collection <math>\mathcal{B}</math> of events. The events about which one has such information always form a σ-algebra. If, for example, the information consists of knowing the values of the random variables <math>X_1,\dots,X_n</math>, then one knows the answer for all events of the form <math>\{(X_1,\dots,X_n) \in E\}</math>, i.e. <math>\mathcal{B}</math> is in this case the σ-algebra <math>\sigma(X_1,\dots,X_n)</math> generated by the random variables. The probability <math>P(A|\mathcal{B})</math> then depends on which values <math>X_1,\dots,X_n</math> assume, or more generally on which events of <math>\mathcal{B}</math> have occurred and which have not, i.e. <math>P(A|\mathcal{B})</math> is a function of <math>\omega \in \Omega</math> that is measurable with respect to <math>\mathcal{B}</math>. In analogy to the law of total probability, one obtains for all <math>B \in \mathcal{B}</math>

<math>\int_B P(\mathrm{d}\omega)\, P(A|\mathcal{B})(\omega) \;=\; P(B \cap A)</math>,

which can also be written as <math>E(\mathrm{1}_B\, P(A|\mathcal{B})) = E(\mathrm{1}_B \mathrm{1}_A)</math>, where <math>\mathrm{1}_A</math> and <math>\mathrm{1}_B</math> denote the indicator functions of <math>A</math> and <math>B</math>, respectively.

This approach is one way of generalizing the notion of the conditional probability <math>P(A|B)</math>. Another possibility is to consider the expectation of a random variable <math>X:\Omega\to\mathbb{R}</math> with respect to the conditional probability distribution <math>P(\;\cdot\;|B)</math>. Both approaches are combined in the following definition.
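As a simple illustration (the coin-tossing model is chosen here only as an example), let <math>X_1, X_2</math> be two independent fair coin tosses with values in <math>\{0,1\}</math>, let <math>\mathcal{B} = \sigma(X_1)</math>, and let <math>A = \{X_1 = 1, X_2 = 1\}</math>. Then <math>P(A|\mathcal{B})(\omega) = \tfrac{1}{2}</math> if <math>X_1(\omega) = 1</math> and <math>P(A|\mathcal{B})(\omega) = 0</math> otherwise; for <math>B = \{X_1 = 1\}</math> both sides of the above equation equal <math>\tfrac{1}{4}</math>, and for <math>B = \{X_1 = 0\}</math> both sides equal <math>0</math>.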

Definition

Let <math>X</math> be a random variable with values in <math>[-\infty,+\infty]</math> on a probability space <math>(\Omega, \mathcal{A}, P)</math>, and let <math>\mathcal{B} \subset \mathcal{A}</math> be a sub-σ-algebra.

A random variable <math>Y</math> with values in <math>[-\infty,+\infty]</math> is called a conditional expectation of <math>X</math> given <math>\mathcal{B}</math>, written <math>E(X|\mathcal{B})</math>, if the following conditions are fulfilled:

  • <math>Y</math> is measurable with respect to <math>\mathcal{B}</math>.
  • For all <math>B \in \mathcal{B}</math>, <math>E(\mathrm{1}_B Y)</math> is defined (finite or infinite) and <math>E(\mathrm{1}_B Y) = E(\mathrm{1}_B X)</math> holds.

Two different conditional expectations of <math>X</math> given <math>\mathcal{B}</math> differ at most on a null set in <math>\mathcal{B}</math>, which justifies the common notation <math>E(X|\mathcal{B})</math>.

If <math>\mathcal{B} = \sigma(X_1,\dots,X_n)</math> is the σ-algebra generated by random variables <math>X_1,\dots,X_n</math>, then one also writes <math>E(X|X_1,\dots,X_n)</math>.
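For a worked illustration of the definition (the fair die is chosen here only as an example), take <math>\Omega = \{1,\dots,6\}</math> with the uniform distribution, <math>X(\omega) = \omega</math> and <math>\mathcal{B} = \{\emptyset, \{2,4,6\}, \{1,3,5\}, \Omega\}</math>. The random variable <math>Y</math> with <math>Y(\omega) = 4</math> for even <math>\omega</math> and <math>Y(\omega) = 3</math> for odd <math>\omega</math> is measurable with respect to <math>\mathcal{B}</math>, and for <math>B = \{2,4,6\}</math> one checks <math>E(\mathrm{1}_B Y) = 4 \cdot \tfrac{1}{2} = 2 = \tfrac{2+4+6}{6} = E(\mathrm{1}_B X)</math>, and similarly for the other sets in <math>\mathcal{B}</math>; hence <math>Y = E(X|\mathcal{B})</math>.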


The conditional probability of an event <math>A \in \mathcal{A}</math> given <math>\mathcal{B}</math> is defined as the random variable

<math>P(A|\mathcal{B}) = E(\mathrm{1}_A|\mathcal{B})</math>.

Since the conditional probabilities <math>P(A|\mathcal{B})</math> for different events <math>A \in \mathcal{A}</math> are defined without reference to each other and are not uniquely determined, <math>P(\;\cdot\;|\mathcal{B})(\omega)</math> need not, in general, be a probability measure. If, however, this is the case, i.e. if the conditional probabilities <math>P(A|\mathcal{B})</math>, <math>A \in \mathcal{A}</math>, can be combined into a stochastic kernel <math>\pi</math> from <math>(\Omega, \mathcal{B})</math> to <math>(\Omega, \mathcal{A})</math>,

<math>P(A|\mathcal{B})(\omega) = \pi(\omega; A)</math> for all <math>\omega \in \Omega</math>, <math>A \in \mathcal{A}</math>,

then one speaks of a regular conditional probability.
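As an illustrative special case (not needed for the general theory), suppose <math>\mathcal{B}</math> is generated by a countable partition <math>B_1, B_2, \dots</math> of <math>\Omega</math> with <math>P(B_i) > 0</math> for all <math>i</math>. Setting <math>\pi(\omega; A) = \tfrac{P(A \cap B_i)}{P(B_i)}</math> for <math>\omega \in B_i</math> defines a stochastic kernel, since <math>\pi(\omega;\,\cdot\,)</math> is a probability measure for every <math>\omega</math> and <math>\omega \mapsto \pi(\omega; A)</math> is <math>\mathcal{B}</math>-measurable; thus a regular conditional probability always exists in this discrete setting.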


Factorization: The conditional expectation <math>E(X|X_1,\dots,X_n)</math>, which is defined as a function of <math>\omega</math>, can be represented as a function of <math>X_1,\dots,X_n</math>: there is a measurable function <math>f</math> such that

<math>E(X|X_1,\dots,X_n)(\omega) \,=\, f(X_1(\omega),\dots,X_n(\omega))</math> for all <math>\omega \in \Omega</math>.
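For example (using the calculation rules listed below), if <math>X_1</math> and <math>X_2</math> are independent and integrable, then <math>E(X_1 + X_2|X_1) = X_1 + E(X_2)</math> almost everywhere, so one may take <math>f(x) = x + E(X_2)</math> as the factorizing function.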


Existence: The general existence of conditional expectations for integrable random variables (random variables with a finite expectation) can be shown using the Radon–Nikodym theorem. With the definition given here, the conditional expectation <math>E(X|\mathcal{B})</math> exists exactly if there is a set <math>B \in \mathcal{B}</math> such that <math>\mathrm{1}_B X</math> and <math>\mathrm{1}_{B^c} X</math> are quasi-integrable, and then <math>E(X|\mathcal{B}) = E(X^+|\mathcal{B}) - E(X^-|\mathcal{B})</math> holds almost everywhere. (One could also use the latter expression for the definition, in order to cover cases like <math>E(X\,|\,|X|) = 0</math> for a Cauchy-distributed random variable, but one would then obtain inconsistent expectations.)
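To spell out this example: if <math>X</math> is Cauchy distributed and <math>\mathcal{B} = \sigma(|X|)</math>, then every <math>B \in \mathcal{B}</math> has the form <math>\{|X| \in E\}</math>, so <math>\mathrm{1}_B X</math> is symmetrically distributed and is quasi-integrable only if it is integrable; since <math>\mathrm{1}_B X</math> and <math>\mathrm{1}_{B^c} X</math> cannot both be integrable (their sum <math>X</math> is not), no such set <math>B</math> exists and <math>E(X\,|\,|X|)</math> does not exist in the sense of this definition, whereas <math>E(X^+\,|\,|X|) - E(X^-\,|\,|X|) = \tfrac{|X|}{2} - \tfrac{|X|}{2} = 0</math>.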

Regular conditional probabilities, also in factorized form, exist on Polish spaces equipped with the Borel σ-algebra. More generally, the following holds: if <math>Z</math> is an arbitrary random variable with values in a Polish space, then there exists a version of the conditional distribution <math>P(Z \in \,\cdot\,\,|X_1,\dots,X_n)</math> in the form of a stochastic kernel <math>\pi</math>:

<math>P(Z \in \,\cdot\,\,|X_1,\dots,X_n)(\omega) \,=\, \pi(X_1(\omega),\dots,X_n(\omega);\;\cdot\;)</math> for all <math>\omega \in \Omega</math>


Examples

Simple σ-algebras: If <math>B \in \mathcal{B}</math> with <math>P(B) > 0</math>, and <math>B</math> has no subsets in <math>\mathcal{B}</math> other than itself and the empty set, then the value of <math>P(A|\mathcal{B})</math> on <math>B</math> agrees with the ordinary conditional probability:

<math>P(A|\mathcal{B})(\omega) = \frac{P(A \cap B)}{P(B)}</math> for all <math>\omega \in B</math>
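Continuing the die illustration from above (chosen only as an example), with <math>\mathcal{B} = \{\emptyset, \{2,4,6\}, \{1,3,5\}, \Omega\}</math> and <math>A = \{6\}</math> this gives <math>P(A|\mathcal{B})(\omega) = \tfrac{1/6}{1/2} = \tfrac{1}{3}</math> for even <math>\omega</math> and <math>P(A|\mathcal{B})(\omega) = 0</math> for odd <math>\omega</math>.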


Computing with densities: If <math>f_{X,Y}: (a,b) \times (c,d) \to (0,\infty)</math> is a bounded density of the joint distribution of the random variables <math>X, Y</math>, then

<math>f_{X|Y}(x,y) = \frac{f_{X,Y}(x,y)}{\int_a^b f_{X,Y}(u,y)\,\mathrm{d}u}</math>

is the density of a regular conditional distribution <math>P(X \in \,\cdot\,\,|Y)</math> in factorized form.
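As a concrete illustration (the density is chosen here only as an example), take <math>f_{X,Y}(x,y) = x + y</math> on <math>(0,1) \times (0,1)</math>. Then <math>\int_0^1 f_{X,Y}(u,y)\,\mathrm{d}u = \tfrac{1}{2} + y</math>, hence <math>f_{X|Y}(x,y) = \tfrac{x+y}{1/2+y}</math>, and the corresponding factorized conditional expectation is <math>E(X|Y) = g(Y)</math> with

<math>g(y) = \int_0^1 x\, f_{X|Y}(x,y)\,\mathrm{d}x = \frac{\tfrac{1}{3} + \tfrac{y}{2}}{\tfrac{1}{2} + y} = \frac{2 + 3y}{3(1 + 2y)}</math>.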

Calculation rules

Unless stated otherwise, the equations are to be understood in the sense that the left side exists (in the sense of the above definition) exactly when the right side exists.

  • For the trivial σ-algebra <math>\mathcal{B} = \{\emptyset, \Omega\}</math>, one obtains ordinary expectations and probabilities:
    <math>E(X|\mathcal{B})(\omega) = E(X)</math> for all <math>\omega \in \Omega</math>
    <math>P(A|\mathcal{B})(\omega) = P(A)</math> for all <math>\omega \in \Omega</math>
  • If <math>X</math> is independent of <math>\mathcal{B}</math>, then <math>E(X|\mathcal{B}) = E(X)</math> almost everywhere.
  • If <math>\mathcal{B} = \mathcal{A}</math> or <math>X</math> is measurable with respect to <math>\mathcal{B}</math>, then <math>E(X|\mathcal{B}) = X</math> almost everywhere.
  • For sub-σ-algebras <math>\mathcal{C} \subset \mathcal{B} \subset \mathcal{A}</math>, <math>E(E(X|\mathcal{B})|\mathcal{C}) = E(X|\mathcal{C})</math> and <math>E(E(X|\mathcal{C})|\mathcal{B}) = E(X|\mathcal{C})</math> hold almost everywhere.
  • <math>E(X_1 + X_2|\mathcal{B}) = E(X_1|\mathcal{B}) + E(X_2|\mathcal{B})</math> holds almost everywhere if <math>X_1</math> or <math>X_2</math> has a finite expectation.
  • <math>E(aX|\mathcal{B}) = a\,E(X|\mathcal{B})</math> holds almost everywhere for real numbers <math>a \ne 0</math>.
  • Monotonicity: <math>X_1 \le X_2</math> implies <math>E(X_1|\mathcal{B}) \le E(X_2|\mathcal{B})</math> almost everywhere, if the conditional expectations exist.
  • Monotone convergence: <math>X_n \uparrow X</math> implies <math>E(X_n|\mathcal{B}) \uparrow E(X|\mathcal{B})</math> almost everywhere, if the conditional expectations exist and <math>E(X_1|\mathcal{B}) > -\infty</math> almost everywhere.
  • Jensen's inequality: If <math>f: \mathbb{R} \rightarrow \mathbb{R}</math> is a convex function, then <math>f(E(X|\mathcal{B})) \le E(f(X)|\mathcal{B})</math> holds almost everywhere, if the conditional expectations exist.
  • If <math>Y</math> is measurable with respect to <math>\mathcal{B}</math>, then <math>E(YX|\mathcal{B}) = Y\,E(X|\mathcal{B})</math> almost everywhere, if the conditional expectations exist. In particular, <math>E\bigl(Y\,(X - E(X|\mathcal{B}))\bigr) = 0</math> almost everywhere, i.e. the conditional expectation <math>E(X|\mathcal{B})</math> is the orthogonal projection of <math>X</math>, with respect to the inner product of <math>L^2(P)</math>, onto the space of <math>\mathcal{B}</math>-measurable functions, as made explicit below the list.
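To make the projection property explicit (a standard consequence of the last rule, stated here for square-integrable <math>X</math>): for every <math>\mathcal{B}</math>-measurable, square-integrable <math>Y</math>,

<math>E\bigl((X - Y)^2\bigr) = E\bigl((X - E(X|\mathcal{B}))^2\bigr) + E\bigl((E(X|\mathcal{B}) - Y)^2\bigr) \ge E\bigl((X - E(X|\mathcal{B}))^2\bigr),</math>

since the cross term <math>E\bigl((X - E(X|\mathcal{B}))(E(X|\mathcal{B}) - Y)\bigr)</math> vanishes by the last rule. Thus <math>E(X|\mathcal{B})</math> is the best approximation of <math>X</math> in <math>L^2(P)</math> by a <math>\mathcal{B}</math>-measurable random variable.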
 
