# Expected value

The expected value is a quantity in probability theory. Heuristically speaking, the expected value of a random variable is the value that results, when the underlying experiment is repeated many times, as the average of the actual outcomes. In most cases the law of large numbers guarantees that the strictly defined term agrees with this heuristic explanation.

In the discrete case, the expected value is computed as the sum of the products of the probabilities of each possible outcome of the experiment and the values of these outcomes. The expected value need not be finite, nor need it be a possible outcome of the random experiment.

## Definition

### Expected value of a discrete random variable

If $X$ is a discrete random variable that takes the values $x_1, x_2, \ldots$ with the respective probabilities $p_1, p_2, \ldots$, the expected value $E(X)$ is computed as:

$$E(X) = \sum_{i} x_i \cdot p_i$$

If the random variable $X$ takes countably infinitely many values, the sum becomes an infinite series. In this case the expected value $E(X)$ exists only if this series converges absolutely.
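The discrete formula translates directly into code. The following is a minimal Python sketch (the function name `expected_value` and the example distribution are our own illustrative choices):

```python
def expected_value(values, probs):
    """Expected value of a discrete random variable:
    the probability-weighted sum of its values."""
    if abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return sum(x * p for x, p in zip(values, probs))

# A random variable taking 0 with probability 0.7 and 1 with probability 0.3.
print(expected_value([0, 1], [0.7, 0.3]))  # → 0.3
```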

### Expected value of a random variable with a density function

If a random variable $X$ has a probability density function $f(x)$, the expected value is computed as

$$E(X) = \int_{-\infty}^\infty x f(x)\,dx.$$

The expected value exists only if the integral $\int_{-\infty}^\infty |x|\, f(x)\,dx$ converges.
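For densities without a closed-form mean, the integral can be approximated numerically. A sketch using a plain midpoint Riemann sum, with the exponential density $f(x) = e^{-x}$ (whose expected value is known to be 1) as a check; the function name and integration bounds are our own choices:

```python
import math

def expected_value_continuous(f, lo, hi, n=100_000):
    """Approximate E(X) = integral of x*f(x) dx with a midpoint Riemann sum
    over [lo, hi]; the density must be negligible outside this interval."""
    h = (hi - lo) / n
    return sum((lo + (i + 0.5) * h) * f(lo + (i + 0.5) * h)
               for i in range(n)) * h

# Exponential density with rate 1: f(x) = e^{-x} for x >= 0, so E(X) = 1.
f = lambda x: math.exp(-x)
print(round(expected_value_continuous(f, 0.0, 50.0), 4))  # → 1.0
```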

### Expected value of two random variables with a joint density function

If a random variable $X$ and a random variable $Y$ have a joint probability density function $f(x, y)$, the expected value of a function $g(X, Y)$ of $X$ and $Y$ is computed as

$$E(g(X, Y)) = \int_{-\infty}^\infty \int_{-\infty}^\infty g(x, y)\, f(x, y)\,dx\,dy.$$

The expected value exists only if the integral $\int_{-\infty}^\infty \int_{-\infty}^\infty |g(x, y)|\, f(x, y)\,dx\,dy$ converges.

In particular:

$$E(X) = \int_{-\infty}^\infty \int_{-\infty}^\infty x\, f(x, y)\,dx\,dy.$$

### General definition

In general, the expected value is defined as the integral with respect to the probability measure: if $X$ is a $P$-integrable or quasi-integrable random variable on a probability space $(\Omega, \Sigma, P)$ with values in $(\overline{\mathbb{R}}, \mathcal{B})$, where $\mathcal{B}$ is the Borel σ-algebra over $\overline{\mathbb{R}}$, then one defines

$$E(X) = \int_\Omega X \, dP.$$

If the random variable $X$ is discrete or has a density, this expected value agrees with the representations above.

## Examples

### Dice

The experiment is rolling a die. The random variable $X$ is the number rolled. The probabilities $p_i$ of rolling each of the numbers $1, \ldots, 6$ are $1/6$ each.

$$E(X) = \sum_{i=1}^6 i \cdot \frac{1}{6} = 3.5$$

If one thus rolls the die 1000 times, adds up the numbers rolled and divides by 1000, one obtains with high probability a value close to 3.5. With a single roll, however, one will never obtain 3.5.
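This behavior of the empirical mean can be checked with a small simulation (the seed is fixed only so the run is reproducible; the specific seed is arbitrary):

```python
import random

random.seed(42)  # fixed seed for reproducibility

def average_of_rolls(n):
    """Roll a fair die n times and return the average face value."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# Law of large numbers: the empirical mean approaches E(X) = 3.5
# as the number of rolls grows.
for n in (10, 1000, 100_000):
    print(n, average_of_rolls(n))
```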

### St. Petersburg game

The so-called St. Petersburg game is a game with infinite expected value: one tosses a coin; if it shows heads, one receives 2 €; if it shows tails, one may toss again. If one now tosses heads, one receives 4 €; if tails again, one may toss a third time, and so on. One sees immediately that the expected value

$$E(X) = 2 \cdot \frac{1}{2} + 4 \cdot \frac{1}{4} + \cdots = \sum_{i=1}^\infty 2^i \cdot \frac{1}{2^i} = 1 + 1 + \cdots = \infty$$

is. However often one plays the game, one will never end up with a sequence of games in which the average of all payoffs is infinite.
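The divergence is easy to make concrete: if the game is cut off after $k$ tosses, every allowed toss contributes exactly 1 to the expected payoff, so the truncated expectation is $k$ and grows without bound. A small sketch (the function name is our own):

```python
def truncated_expectation(k):
    """Expected payoff of the St. Petersburg game cut off after k tosses:
    the game pays 2^i with probability 2^-i, so each term contributes 1."""
    return sum(2**i * 0.5**i for i in range(1, k + 1))

# Each extra allowed toss adds 1 to the expected payoff -> divergence.
print(truncated_expectation(10))   # → 10.0
print(truncated_expectation(100))  # → 100.0
```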

## Calculation rules

The expected value is linear, since the integral is a linear operator. From this, the following two very useful rules result:

### Expected value of a sum of n random variables

$$E\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n E(X_i)$$

### Linear transformation

For a linear transformation $kX + d$:

$$E(kX + d) = kE(X) + d$$

In particular:

$$E(cX) = cE(X)$$
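The linearity rule can be verified by exact enumeration, here for a fair die with the illustrative choice $k = 2$, $d = 1$:

```python
# Exact expectation of a fair die via enumeration of all six outcomes.
E_X = sum(i for i in range(1, 7)) / 6          # 3.5

# Linear transformation: E(kX + d) = k*E(X) + d, here with k = 2, d = 1.
k, d = 2, 1
E_kXd = sum((k * i + d) for i in range(1, 7)) / 6

print(E_kXd, k * E_X + d)  # → 8.0 8.0, both sides agree
```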

### Expected value of the product of two stochastically independent random variables

$$E(X\,Y) = E(X)\,E(Y)$$
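For two independent fair dice this can be checked exactly by enumerating all 36 equally likely outcome pairs (using exact rational arithmetic to avoid rounding):

```python
from fractions import Fraction

# Two independent fair dice: all 36 outcome pairs have probability 1/36.
sixth = Fraction(1, 6)
E_X = sum(i * sixth for i in range(1, 7))                       # 7/2
E_XY = sum(i * j * sixth * sixth
           for i in range(1, 7) for j in range(1, 7))

print(E_XY, E_X * E_X)  # → 49/4 49/4, i.e. 12.25 on both sides
```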

### Probabilities as expected values

Probabilities of events can also be expressed via the expected value. For every event $A$,

$$P(A) = E(\mathbf{1}_A),$$

where $\mathbf{1}_A$ is the indicator function of $A$.

This connection is often useful, for instance in the proof of the Chebyshev inequality.
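A tiny concrete check of $P(A) = E(\mathbf{1}_A)$, where the event $A$ = "an even number is rolled" on a fair die is our own illustrative choice:

```python
from fractions import Fraction

# Event A = "an even number is rolled" on a fair die.
outcomes = range(1, 7)
indicator = lambda x: 1 if x % 2 == 0 else 0  # the indicator function 1_A

# P(A) computed as the expected value of the indicator function.
P_A = sum(indicator(x) * Fraction(1, 6) for x in outcomes)
print(P_A)  # → 1/2
```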

## Expected values of functions of random variables

If $Y = g(X)$ is again a random variable, its expected value can be computed as follows:

$$E(Y) = \int_{-\infty}^\infty g(x)\, f(x)\,dx.$$

Also in this case the expected value exists only if $\int_{-\infty}^\infty |g(x)|\, f(x)\,dx$ converges.

For a discrete random variable one uses a sum:

$$E(Y) = \sum_{i} g(x_i) \cdot p_i$$

If the sum is not finite, the series must converge absolutely for the expected value to exist.
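As an example of the discrete formula, $E(g(X))$ for $g(x) = x^2$ and a fair die (the choice of $g$ is our own; exact fractions avoid rounding):

```python
from fractions import Fraction

# g(X) = X^2 for a fair die: E(g(X)) = sum of g(x_i) * p_i.
E_X2 = sum(Fraction(i * i, 6) for i in range(1, 7))

print(E_X2)  # → 91/6, about 15.17
```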

## Quantum-mechanical expected value

If $\psi(r, t) = \langle r|\psi(t)\rangle$ is the wave function of a particle in a certain state $|\psi(t)\rangle$ and $\hat{O}$ is an operator, then

$$\langle \hat{O} \rangle_{|\psi(t)\rangle} := \langle \psi(t)|\hat{O}|\psi(t)\rangle = \int_{M^2} d^d r\, d^d r'\, \psi^*(r, t)\, \langle r|\hat{O}|r'\rangle\, \psi(r', t)$$

is the quantum-mechanical expected value of $\hat{O}$ in the state $|\psi(t)\rangle$. Here $M$ is the spatial region in which the particle moves, $d$ is the dimension of $M$, and a superscript star denotes complex conjugation.

If $\hat{O}$ can be written as a formal power series $O(\hat{r}, \hat{p})$ (and that is often the case), then one uses the formula

$$\langle \hat{O} \rangle_\psi = \int_M d^d r\, \psi^*(r, t)\, O\!\left(r, \frac{\hbar}{i}\nabla_r\right) \psi(r, t).$$

The index on the expectation-value bracket is not only abbreviated, as here, but sometimes omitted entirely.

### Example

The expected value of the position is

$$\langle \hat{r} \rangle = \int_M d^d r\, \psi^*(r, t)\, r\, \psi(r, t) = \int_M d^d r\, r\, |\psi(r, t)|^2 = \int_M d^d r\, r\, f(r, t),$$

where we identified $|\psi(r, t)|^2$ as the probability density function of quantum mechanics in position space. In physics one writes $\rho$ (rho) instead of $f$.
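Numerically, the position expectation is just the continuous expected value from above, evaluated with $\rho = |\psi|^2$. A sketch for an assumed toy state, a one-dimensional Gaussian probability density centered at $x_0 = 2$ (all parameter values are our own illustrative choices):

```python
import math

# 1D Gaussian probability density |psi|^2 centered at x0 (assumed toy state).
x0, sigma = 2.0, 1.0
rho = lambda x: math.exp(-((x - x0) ** 2) / (2 * sigma**2)) \
      / (sigma * math.sqrt(2 * math.pi))

def position_expectation(lo=-10.0, hi=14.0, n=100_000):
    """Midpoint-rule approximation of <x> = integral of x*rho(x) dx."""
    h = (hi - lo) / n
    return sum((lo + (i + 0.5) * h) * rho(lo + (i + 0.5) * h)
               for i in range(n)) * h

print(round(position_expectation(), 4))  # → 2.0, i.e. x0
```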

## Expected value of matrices

If $X$ is an $m \times n$ matrix, the expected value of the matrix is defined as:

$$\mathrm{E}[X] = \mathrm{E}\begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,n} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,n} \\ \vdots & & & \vdots \\ x_{m,1} & x_{m,2} & \cdots & x_{m,n} \end{bmatrix} = \begin{bmatrix} \mathrm{E}(x_{1,1}) & \mathrm{E}(x_{1,2}) & \cdots & \mathrm{E}(x_{1,n}) \\ \mathrm{E}(x_{2,1}) & \mathrm{E}(x_{2,2}) & \cdots & \mathrm{E}(x_{2,n}) \\ \vdots & & & \vdots \\ \mathrm{E}(x_{m,1}) & \mathrm{E}(x_{m,2}) & \cdots & \mathrm{E}(x_{m,n}) \end{bmatrix}$$
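Since the expectation of a random matrix is taken element-wise, an empirical estimate is just the element-wise average of sampled matrices. A minimal sketch using nested lists and two made-up, equally likely sample matrices:

```python
def matrix_expectation(samples):
    """Element-wise average of a list of equally sized matrices
    (nested lists), an empirical estimate of E[X]."""
    m, n, k = len(samples[0]), len(samples[0][0]), len(samples)
    return [[sum(s[i][j] for s in samples) / k for j in range(n)]
            for i in range(m)]

# Two equally likely 2x2 sample matrices (illustrative values).
samples = [[[1, 2], [3, 4]],
           [[3, 6], [5, 0]]]
print(matrix_expectation(samples))  # → [[2.0, 4.0], [4.0, 2.0]]
```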