# Linear algebra

Linear algebra (also called vector algebra) is a branch of mathematics concerned with vector spaces and the linear maps between them. This includes in particular the study of systems of linear equations and of matrices.

Since vector spaces and their linear maps are an important tool in many areas of mathematics, linear algebra counts among the foundations of mathematics. Outside of abstract mathematics it has applications in, among other fields, the natural sciences and economics (e.g. in optimization).

Linear algebra developed from two concrete problems: on the one hand, the solution of systems of linear equations; on the other, the computational description of geometric objects, known as analytic geometry. (For this reason some authors call linear algebra linear geometry.)

## History

The history of modern linear algebra reaches back to the years 1843 and 1844. In 1843 William Rowan Hamilton (from whom the term *vector* originates) devised the quaternions, an extension of the complex numbers. In 1844 Hermann Grassmann published his book „Die lineale Ausdehnungslehre“ (The Theory of Linear Extension). In 1857 Arthur Cayley then introduced, with the $2 \times 2$ matrices, one of the most fundamental algebraic ideas.

## Systems of linear equations

Main article: System of linear equations

A system of linear equations is a collection of equations of the kind

$$x_1 + x_2 = 1$$
$$3x_1 + 6x_2 = 4$$

Such systems of equations arise from many everyday questions, for example:

In what ratio must a 30% solution and a 60% solution be mixed in order to obtain a 40% solution?

The essential abstraction step of linear algebra consists in viewing the left-hand sides as a function $A$ of the unknowns $x = (x_1, x_2)$:

$$A(x) = \begin{pmatrix} x_1 + x_2 \\ 3x_1 + 6x_2 \end{pmatrix}$$

Solving the system of equations then becomes the task: find an $x$ such that

$$A(x) = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$$

holds. Stacking the numbers on top of each other is merely a formalism that makes it possible to deal with more than one number at a time.

Instead of $A$ one also simply writes down the relevant numbers in the form of a rectangle and calls the object a matrix:

$$A = \begin{pmatrix} 1 & 1 \\ 3 & 6 \end{pmatrix}.$$

One observes that the function $A$ has special properties: it is a linear map. If $x$ is a solution of the system $A(x) = b$, and $y$ is a solution of the system $A(y) = c$, then

$$z = x + y = \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \end{pmatrix}$$

is a solution of $A(z) = b + c$. One can also write this in the form $A(x+y) = A(x) + A(y)$. If, furthermore, $\lambda$ is any real number, then $A(\lambda x) = \lambda \cdot A(x)$, where

$$\lambda x = \begin{pmatrix} \lambda x_1 \\ \lambda x_2 \end{pmatrix}.$$
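The two properties just described can be checked directly in code. The following sketch (plain Python with the standard `fractions` module; all helper names are illustrative) defines the map $A$ from the example, verifies linearity, and solves $A(u) = (1, 4)$ using Cramer's rule for the 2×2 case, a closed formula valid when the determinant is nonzero.

```python
# Illustrative sketch: the map A from the text, its linearity, and the
# solution of A(u) = (1, 4). Uses exact rational arithmetic.
from fractions import Fraction

def A(x):
    """A(x1, x2) = (x1 + x2, 3*x1 + 6*x2), the left-hand sides above."""
    x1, x2 = x
    return (x1 + x2, 3 * x1 + 6 * x2)

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def scale(lam, x):
    return (lam * x[0], lam * x[1])

x, y, lam = (Fraction(1), Fraction(2)), (Fraction(4), Fraction(-1)), Fraction(5, 2)

# Linearity: A(x + y) = A(x) + A(y) and A(lam * x) = lam * A(x).
assert A(add(x, y)) == add(A(x), A(y))
assert A(scale(lam, x)) == scale(lam, A(x))

# Cramer's rule for [[1, 1], [3, 6]] u = (1, 4): det = 1*6 - 1*3 = 3.
det = Fraction(3)
u = ((1 * 6 - 1 * 4) / det, (1 * 4 - 3 * 1) / det)   # u = (2/3, 1/3)
assert A(u) == (1, 4)
```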

## Analytic geometry

The other origin of linear algebra lies in the computational description of two- and three-dimensional (Euclidean) space, also called "visual space". With the aid of a coordinate system, points in space can be described by triples $(x_1, x_2, x_3)$ of numbers. The map type of a translation leads to the concept of a vector, which indicates the direction and distance of the translation. Many physical quantities, for example forces, always have this directional aspect.

Since vectors, too, can be described by number triples $(a_1, a_2, a_3)$, the distinction between vectors and points blurs: a point $P$ corresponds to its position vector, which points from the origin to $P$.

Many of the map types considered in classical geometry, for example rotations about axes through the origin or reflections in planes through the origin, belong to the class of linear maps already mentioned above.

## Vector spaces and linear algebra

The concept of a vector space arises as an abstraction of the above examples: a vector space is a set, whose elements are called vectors, together with

• an addition of vectors,
• a multiplication of vectors by real numbers, the scalar multiplication.

This addition and the scalar multiplication must satisfy a few simple properties, which also hold for the vectors in visual space.

One could say that vector spaces are defined precisely so that one can speak of linear maps between them.

In a further generalization, the real numbers can be replaced by other fields.

## Related concepts

In a certain sense the concept of a vector space is already too general. Every vector space can be assigned a dimension; for example, the plane has dimension 2 and space has dimension 3. There are, however, vector spaces whose dimension is not finite, and many of the familiar properties are then lost. It has proved very successful, though, to equip infinite-dimensional vector spaces with an additional topological structure; the study of topological vector spaces is the subject of functional analysis.

The remainder of this article deals with the case of finite dimensions.

## Vectors and matrices

Vectors can be described by their components and written (depending on the application) as a (here three-dimensional) column vector

$$\mathbf{a} = \begin{pmatrix} 3 \\ 7 \\ 2 \end{pmatrix}$$

or as a (here four-dimensional) row vector

$$\mathbf{b} = \begin{pmatrix} 4 & 6 & 3 & 7 \end{pmatrix}.$$

In the literature, vectors are distinguished from other quantities in different ways: lowercase letters, boldface lowercase letters, underlined lowercase letters, or lowercase letters with an arrow above them are used. This article uses lowercase letters.

A matrix is written as a "grid" of numbers. Here is a matrix with 4 rows and 3 columns:

$$\mathbf{M} = \begin{pmatrix} 8 & 2 & 9 \\ 4 & 8 & 2 \\ 8 & 3 & 7 \\ 5 & 9 & 1 \end{pmatrix}$$

Matrices are usually denoted by capital letters.

Individual elements of a column vector are usually indicated by a subscript: the second element of the vector $\mathbf{a}$ shown above would then be $a_2 = 7$. For row vectors a superscript is sometimes used, in which case one must be careful whether a vector index or an exponent is meant: in the example $\mathbf{b}$ above one has, for instance, $b^4 = 7$.

Matrix elements are indicated by two indices, and the elements are written with lowercase letters: $m_{2,3} = 2$ is the element in the 2nd row and 3rd column.
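These conventions can be mirrored with plain Python nested lists (a minimal sketch; note that Python indices start at 0 while the text counts from 1):

```python
# Illustrative sketch of the indexing conventions above, using nested lists.
a = [3, 7, 2]                  # column vector a
b = [4, 6, 3, 7]               # row vector b
M = [[8, 2, 9],
     [4, 8, 2],
     [8, 3, 7],
     [5, 9, 1]]                # the 4x3 matrix M

assert a[2 - 1] == 7           # a_2 = 7
assert b[4 - 1] == 7           # b^4 = 7
assert M[2 - 1][3 - 1] == 2    # m_{2,3} = 2
```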

The generalized concept behind all these objects is the tensor: scalars are tensors of order 0, vectors are tensors of order 1, and matrices are tensors of order 2. A tensor of order $n$ can be represented by an $n$-dimensional array of numbers.

### Matrices of special form

In linear algebra the problem frequently arises of bringing matrices into a special form by means of elementary row operations or basis changes. Important examples include upper and lower triangular form, diagonal form, and the Jordan normal form, which appear in the following sections.

## Endomorphisms and square matrices

In the matrix representation of a linear map, as described above, there is the special case of a linear map $f$ of a finite-dimensional vector space to itself (a so-called endomorphism). One can then use the same basis $v$ for the domain and image coordinates and obtains a square matrix $A = {}_v f_v$, so that applying the linear map corresponds to left multiplication by $A$. Applying the map twice in succession then corresponds to multiplication by $A^2$, and so on, and one can regard all polynomial expressions in $A$ (sums of multiples of powers of $A$) as linear maps of the vector space.

### Invertibility

Analogously to the rule $x^0 = 1$ for numbers, the zeroth power of a square matrix is the identity matrix $E$, with ones everywhere on the diagonal; it corresponds to the identity map of every vector to itself. Negative powers of a square matrix $A$ can be computed only if the linear map given by $A$ is invertible, that is, if no two different vectors $u_1$ and $u_2$ are mapped to the same vector ($Au_1 = Au_2$). Put differently, for an invertible matrix $A$, $u_1 - u_2 \ne 0$ must always imply $A(u_1 - u_2) \ne 0$; the linear system $Au = 0$ may therefore only have the solution $u = 0$. For an invertible matrix $A$ there exists an inverse matrix $A^{-1}$ with $A^{-1}A = AA^{-1} = E$.
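In the 2×2 case the inverse can be written down explicitly. A minimal sketch (plain Python with exact rational arithmetic; the helper names are illustrative) checks $A^{-1}A = AA^{-1} = E$ for the matrix $A$ from the earlier example:

```python
# Illustrative sketch: explicit inverse of a 2x2 matrix, checked against
# the defining property A^{-1} A = A A^{-1} = E.
from fractions import Fraction

def matmul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse2x2(M):
    """Inverse of [[a, b], [c, d]]; requires det = a*d - b*c != 0."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[1, 1], [3, 6]]
E = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
Ainv = inverse2x2(A)
assert matmul(Ainv, A) == E and matmul(A, Ainv) == E
```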

### Determinants

A determinant is a special function that assigns a number to a square matrix. This number gives information about some properties of the matrix. For example, it reveals whether a matrix is invertible. Another important application is the computation of the characteristic polynomial and hence of the eigenvalues of the matrix. There are closed formulas for computing determinants, such as the Laplace expansion theorem or the Leibniz formula. These formulas are, however, mainly of theoretical importance, since their cost grows sharply for larger matrices. In practice, determinants are most easily computed by bringing the matrix into upper or lower triangular form with the help of the Gaussian algorithm; the determinant is then simply the product of the main diagonal entries.
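The practical method just described can be sketched in a few lines (plain Python, exact arithmetic via `fractions`; the names are illustrative): reduce to upper triangular form by Gaussian elimination, tracking the sign flips from row swaps, then multiply the diagonal entries.

```python
# Illustrative sketch: determinant via Gaussian elimination to upper
# triangular form; the determinant is the product of the diagonal entries
# (with a sign flip for each row swap).
from fractions import Fraction

def det(M):
    n = len(M)
    U = [[Fraction(x) for x in row] for row in M]
    sign = 1
    for col in range(n):
        # Find a row with a nonzero pivot; swapping rows flips the sign.
        pivot = next((r for r in range(col, n) if U[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot: matrix is singular
        if pivot != col:
            U[col], U[pivot] = U[pivot], U[col]
            sign = -sign
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = U[r][col] / U[col][col]
            U[r] = [x - factor * y for x, y in zip(U[r], U[col])]
    product = Fraction(sign)
    for i in range(n):
        product *= U[i][i]
    return product

assert det([[1, 1], [3, 6]]) == 3       # the matrix A from the earlier example
```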

### Computing powers by means of diagonalization

Motivation: the Fibonacci sequence $f_n$ is defined by the recursion $f_0 = 0$, $f_1 = 1$ and $f_{n+1} = f_n + f_{n-1}$, which is equivalent to

$$\binom{f_1}{f_0} = \binom{1}{0}$$

and

$$\binom{f_{n+1}}{f_n} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \cdot \binom{f_n}{f_{n-1}},$$

and thus to the non-recursive formula

$$\binom{f_{n+1}}{f_n} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n \cdot \binom{1}{0},$$

in which the $n$-th power of a matrix $A$ occurs.
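The non-recursive formula can be checked directly. The following plain-Python sketch (illustrative helper names; `matpow` is a naive repeated multiplication, not an efficient implementation) compares the components of $A^n \binom{1}{0}$ with the recursively computed Fibonacci numbers.

```python
# Illustrative sketch: A^n applied to (1, 0) yields (f_{n+1}, f_n).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matpow(M, n):
    """n-th power of a square matrix by repeated multiplication (n >= 1)."""
    P = M
    for _ in range(n - 1):
        P = matmul(P, M)
    return P

A = [[1, 1], [1, 0]]

fib = [0, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])     # the recursion f_{n+1} = f_n + f_{n-1}

for n in range(1, 10):
    top, bottom = matpow(A, n)
    f_next = top[0] * 1 + top[1] * 0      # first component of A^n (1, 0)
    f_n    = bottom[0] * 1 + bottom[1] * 0
    assert (f_next, f_n) == (fib[n + 1], fib[n])
```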

The behavior of such a matrix under exponentiation is not easy to recognize; the $n$-th power of a diagonal matrix, by contrast, is computed simply by raising each individual diagonal entry to the $n$-th power. If there is an invertible matrix $T$ such that $T^{-1}AT$ has diagonal form, the exponentiation of $A$ can be reduced to the exponentiation of a diagonal matrix via the equation $(T^{-1}AT)^n = T^{-1}A^nT$ (the left side of this equation is then the $n$-th power of a diagonal matrix). In general, the behavior of a matrix (under exponentiation, but also under other operations) can be read off more easily from its diagonalization.

If one regards $A = {}_v f_v$ as the matrix of a linear map, then the transformation matrix $T$ is the basis change matrix to another basis $v'$, i.e. $T = {}_v e_{v'}$ (where the identity map $e$ maps each vector to itself). Then $T^{-1} A T = {}_{v'} f_{v'}$.

In the example mentioned above, a transformation matrix $T$ can be found such that

$$T^{-1} \cdot A \cdot T = \begin{pmatrix} \Phi & 0 \\ 0 & 1 - \Phi \end{pmatrix}$$

is a diagonal matrix in which the golden ratio $\Phi = 1/2 + \sqrt{5}/2$ occurs. (From this one finally obtains the formula $f_n = \frac{1}{\sqrt{5}} \cdot \left[ (1/2 + \sqrt{5}/2)^n - (1/2 - \sqrt{5}/2)^n \right]$.)
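The closed formula can be verified numerically; this sketch uses floating-point arithmetic, so the result is rounded to the nearest integer before comparing it with the recursively computed sequence.

```python
# Illustrative sketch: numerical check of the closed formula above.
import math

sqrt5 = math.sqrt(5)

def closed_form(n):
    """f_n = [ (1/2 + sqrt5/2)^n - (1/2 - sqrt5/2)^n ] / sqrt5."""
    return ((0.5 + sqrt5 / 2) ** n - (0.5 - sqrt5 / 2) ** n) / sqrt5

fib = [0, 1]
for _ in range(15):
    fib.append(fib[-1] + fib[-2])

for n in range(15):
    assert round(closed_form(n)) == fib[n]
```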

### Definition of the eigenvalue

How does one get from the matrix $A$ to the number $\Phi$? From the diagonal matrix one recognizes immediately that

$$\begin{pmatrix} \Phi & 0 \\ 0 & 1 - \Phi \end{pmatrix} \cdot \binom{1}{0} = \binom{\Phi}{0},$$

so there is a vector $u$ different from zero that is multiplied componentwise (more precisely: multiplied by $\Phi$) under multiplication by the diagonal matrix: $(T^{-1}AT)u = \Phi u$. Because of this property, $\Phi$ is called an eigenvalue of the matrix $T^{-1}AT$ (with eigenvector $u$). In the case of diagonal matrices, the eigenvalues are equal to the diagonal entries.

$\Phi$ is, however, at the same time also an eigenvalue of the original matrix $A$ (with eigenvector $Tu$): eigenvalues remain unchanged under transformation of the matrix. The diagonal form of the matrix $A$ thus arises from its eigenvalues, and to find the eigenvalues of $A$ one must determine for which numbers $x$ the linear system $Au = xu$ has a solution $u$ different from zero (or, put differently, for which the matrix $xE - A$ is not invertible).

The numbers $x$ sought are exactly those that make the determinant of the matrix $xE - A$ zero. This determinant is a polynomial expression in $x$ (the so-called characteristic polynomial of $A$); in the case of the 2×2 matrix $A$ given above, this yields the quadratic equation $x^2 - x - 1 = 0$ with the two solutions $x = \Phi$ and $x = 1 - \Phi$. The associated eigenvectors are solutions of the linear systems $Au = \Phi u$ and $Au = (1 - \Phi)u$ respectively; from them one then obtains the transformation matrix $T$.
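For the 2×2 case this recipe can be carried out directly. The sketch below (plain Python, illustrative names) forms the characteristic polynomial $x^2 - (\operatorname{tr} A)\,x + \det A$ of $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, finds its roots with the quadratic formula, and checks $Au = \Phi u$ for the eigenvector $u = (\Phi, 1)$ (not derived in the text; it is one solution of that linear system).

```python
# Illustrative sketch: eigenvalues of a 2x2 matrix via the characteristic
# polynomial det(xE - A) = x^2 - (a + d)x + (a*d - b*c).
import math

A = [[1, 1], [1, 0]]
(a, b), (c, d) = A

trace, det = a + d, a * d - b * c           # here: trace = 1, det = -1
disc = math.sqrt(trace ** 2 - 4 * det)
phi, psi = (trace + disc) / 2, (trace - disc) / 2   # the roots Phi and 1 - Phi

assert abs(phi - (0.5 + math.sqrt(5) / 2)) < 1e-12
assert abs(psi - (1 - phi)) < 1e-12

# Check A u = phi * u for the eigenvector u = (phi, 1).
u = (phi, 1.0)
Au = (a * u[0] + b * u[1], c * u[0] + d * u[1])
assert all(abs(Au[i] - phi * u[i]) < 1e-12 for i in range(2))
```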

### Diagonalizability

Whether a matrix is diagonalizable depends on the number range used: $A$ is, for example, not diagonalizable over the rational numbers, because its eigenvalues $\Phi$ and $1 - \Phi$ are irrational numbers. Diagonalizability can also fail independently of the number range if there are not "enough" eigenvalues; for instance, the Jordan form matrix

$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

has only the eigenvalue 1 (as solution of the quadratic equation $(x - 1)^2 = 0$) and is not diagonalizable. Over a sufficiently large number range (e.g. the complex numbers), however, every matrix can be diagonalized or transformed into Jordan normal form.

Since the transformation of a matrix corresponds to a change of basis of a linear map, this last statement means that, over a sufficiently large number range, one can always choose a basis that is mapped "in a simple way" by a given linear map: in the case of diagonalizability, each basis vector is mapped to a multiple of itself (and is thus an eigenvector); in the case of the Jordan form, to a multiple of itself plus possibly the previous basis vector. This theory of linear maps can be generalized to fields that are not "sufficiently large"; in them, other normal forms (e.g. the Frobenius normal form) must be considered alongside the Jordan form.

## Literature

• Wikibooks: Linear algebra - learning and teaching materials