
Linear systems and matrices – Serlo

From Wikibooks

A practical example


Linear dependencies often occur in practice.

Example (Linear dependence in a practical situation)

Imagine a company that launches two types of home fragrance essences (“Spring” and “Exotic”) by mixing two raw materials that the company purchases (violet fragrance and jasmine oil) in different compositions:

  • 10kg of the essence “Spring” consists of 7kg of violet fragrance and 3kg of jasmine oil.
  • 10kg of the “Exotic” fragrance consists of 2kg of violet fragrance and 8kg of jasmine oil.

The prices of violet fragrance and jasmine oil increase slightly from time to time. We therefore do not commit ourselves to specific values, but instead denote:

  • the variable price (in your preferred currency) for 1kg of violet fragrance by $x_1$,
  • and the variable price for 1kg of jasmine oil by $x_2$.

The company is now interested in two things:

  • How much does it have to pay for the raw materials needed for 10kg of “Spring”?

Answer: $y_1 = 7x_1 + 3x_2$

  • How much does it have to pay for the raw materials required for 10kg of “Exotic”?

Answer: $y_2 = 2x_1 + 8x_2$

Here we have a dependency of the two variables $y_1$ and $y_2$ on the two other variables $x_1$ and $x_2$. We can describe it as the system of equations:

$$y_1 = 7x_1 + 3x_2$$
$$y_2 = 2x_1 + 8x_2$$

Alternatively, we can write this dependency as a matrix multiplication:

$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 7 & 3 \\ 2 & 8 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$
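The matrix-vector description of the cost calculation can be sketched in code. A minimal Python sketch: the coefficients 7, 3, 2, 8 come from the recipes above, while the concrete prices are made-up example values.

```python
# Cost of raw materials for 10kg of each essence, computed as a
# matrix-vector product y = A x.
A = [[7, 3],   # "Spring": 7kg violet fragrance, 3kg jasmine oil
     [2, 8]]   # "Exotic": 2kg violet fragrance, 8kg jasmine oil

def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Assumed example prices: 4 per kg of violet fragrance, 5 per kg of jasmine oil.
x = [4, 5]
y = matvec(A, x)
print(y)  # [43, 48]: 7*4 + 3*5 = 43 and 2*4 + 8*5 = 48
```

Changing the prices in `x` changes the costs `y`, but the matrix `A`, i.e. the recipes, stays fixed; this is exactly the dependency the system of equations expresses.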

Definition


Definition (Linear system)

Let $K$ be a field. A linear system of equations over $K$ with $m$ equations for $n$ variables $x_1, \dots, x_n$ has the form:

$$\begin{aligned} a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n &= b_1 \\ a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n &= b_2 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \dots + a_{mn} x_n &= b_m \end{aligned}$$

where $a_{ij}, b_i \in K$ for $1 \le i \le m$ and $1 \le j \le n$.

The linear system is called homogeneous if $b_1 = b_2 = \dots = b_m = 0$, otherwise it is called inhomogeneous.

Definition (Matrix of coefficients)

The matrix $A = (a_{ij})_{ij} \in K^{m \times n}$ is called the coefficient matrix of the linear system from the above definition. The vector $b = (b_1, \dots, b_m)^T \in K^m$ is called the right-hand side.

Definition (Solution set of a linear system)

The vector $x = (x_1, \dots, x_n)^T \in K^n$ is called a solution of the above linear system if $Ax = b$. The set

$$L = \{ x \in K^n : Ax = b \}$$

is called the solution set of the system.
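Membership in the solution set can be tested mechanically: a vector $x$ belongs to $L$ exactly if $Ax = b$. A minimal sketch over the rationals; the concrete system is an assumed example, not one from the text.

```python
from fractions import Fraction

def is_solution(A, b, x):
    """Check whether x satisfies A x = b, i.e. whether x lies in the
    solution set L = {x : A x = b}."""
    return all(
        sum(a_ij * x_j for a_ij, x_j in zip(row, x)) == b_i
        for row, b_i in zip(A, b)
    )

# Assumed example system: x1 + x2 = 3 and x1 - x2 = 1.
A = [[Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(-1)]]
b = [Fraction(3), Fraction(1)]

print(is_solution(A, b, [Fraction(2), Fraction(1)]))  # True: 2+1=3, 2-1=1
print(is_solution(A, b, [Fraction(0), Fraction(0)]))  # False here, though the
# zero vector always solves the homogeneous system A x = 0
```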

Examples


Example

is a linear system of equations over with 2 equations for the variables with coefficient matrix and right-hand side .

Linear system with a unique solution


Example (Linear system with a unique solution)

We have already distinguished between different “types” of linear systems in the previous chapter.

This linear system can be represented by the matrix

Applying the Gaussian algorithm we obtain:

We get no solution for . Or to put it another way, we are looking for a solution to

Therefore, the system of equations is not solvable in this case. However, if we fix, e.g., , we obtain a unique solution.

By multiplying the third row by 1/2 and then subtracting the third row from the second and from the first row, we obtain the Gaussian normal form.

Now we have found a unique solution. Thanks to the Gaussian normal form, we can read it off directly:
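The elimination steps used here can be sketched as a small program. Below is a minimal Gaussian-elimination solver over the rationals for square systems with a unique solution; the 3×3 system at the bottom is an assumed example, not the one from this section.

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve A x = b for a square system with a unique solution,
    via forward elimination to triangular form and back substitution."""
    n = len(A)
    # Work on an augmented copy [A | b] over the rationals.
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it up.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [m_r - factor * m_c for m_r, m_c in zip(M[r], M[col])]
    # Back substitution, from the last row upwards.
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# Assumed example system (not the chapter's):
#   x + y + z = 6,  2y + 5z = -4,  2x + 5y - z = 27
A = [[1, 1, 1], [0, 2, 5], [2, 5, -1]]
b = [6, -4, 27]
print(gauss_solve(A, b))  # solution: x = 5, y = 3, z = -2
```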

Linear system with a non-unique solution


Example (Linear system with a non-unique solution)

Linear systems of equations do not always have a unique solution. To see this, let us consider the following linear system of equations over .

After running the Gaussian algorithm, we realize that there is no unique solution. The system of equations has infinitely many solutions, as it has more variables than equations. Formally speaking, the rank of the matrix is less than $n$, where $n$ is the number of variables.

Consequently, the system of equations has no unique solution but infinitely many solutions. We may explicitly convince ourselves that this is true, proceeding as follows:

  1. Choose a variable to fix and assign it an arbitrary parameter. It does not matter which variable is fixed.
  2. Solve all other equations depending on the fixed variable.

For the example above, set . Now solve the first equation for . Thus :

As we set , we have:

We now solve the second equation for .

This process is repeated for all equations. In this example, we can stop here, as the third row consists only of zeros. The solution space now has the form

Now we insert the rearranged equations. The parameter in our example is arbitrary, and since our linear system of equations is defined over the field in question, the parameter also ranges over that field. This gives us the following solution space:

We can now represent the solution space more simply. To do so, we factor out the free parameter. The solution space can then be written as
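A parametrized solution space of this kind is easy to verify in code: every choice of the parameter must solve the system. A minimal sketch; the system, the particular solution p and the direction vector v are assumed examples, not the ones from this section.

```python
from fractions import Fraction

def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

# Assumed underdetermined system (illustrative, not the chapter's):
#   x1 + 2*x3 = 1
#   x2 -   x3 = 2
A = [[1, 0, 2],
     [0, 1, -1]]
b = [1, 2]

# Parametrized solution set: x = p + t * v for an arbitrary parameter t.
p = [1, 2, 0]    # a particular solution (the "support vector")
v = [-2, 1, 1]   # direction vector spanning the homogeneous solutions

for t in [Fraction(0), Fraction(1), Fraction(-3, 2)]:
    x = [pi + t * vi for pi, vi in zip(p, v)]
    assert matvec(A, x) == b  # every choice of t yields a solution
print("all parameter choices solve A x = b")
```

Note that v alone solves the homogeneous system (A v = 0), which is why adding any multiple of it to p stays inside the solution set.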

A solution space trick


Example (Linear system with a non-unique solution)

As we have already seen, setting up a solution space is not complicated, but it is error-prone. There is, however, a trick that simplifies setting up the solution space. Let's look at the previous solution:

For this trick, it is important that the matrix is in Gaussian normal form, which means the following properties hold:

  • All entries below the diagonal are zero
  • The first nonzero element of each row is a 1 and lies on the diagonal
  • All other elements in the column of such a 1, above and below it, are zero

If you try to use this trick on a matrix that does not meet these requirements, the trick will produce an incorrect result!

Proceed as follows:

  1. Replace each zero on the diagonal with -1
  2. Each column in which a diagonal zero has been replaced with -1 is automatically a solution vector.
  3. The vector $b$ is the support vector of the solution set

Let's see how this trick works on the system of equations considered above.

We have replaced the last diagonal zero with -1. This column is our solution vector. Thus, our first (and as we will see later, our only) solution vector is .


The support vector is the vector b, that is, the rightmost column. So the support vector is .

This gives us the solution space:
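The trick itself is mechanical enough to automate. A minimal sketch of the -1 trick, applied to an assumed matrix in Gaussian normal form (not the chapter's); each produced column is verified to solve the homogeneous system.

```python
def minus_one_trick(R):
    """Given a square matrix R in Gaussian normal form, replace each zero
    on the diagonal by -1; every column in which this happened is a
    solution vector of the homogeneous system R x = 0."""
    n = len(R)
    M = [row[:] for row in R]   # work on a copy
    vectors = []
    for i in range(n):
        if M[i][i] == 0:
            M[i][i] = -1
            vectors.append([M[r][i] for r in range(n)])
    return vectors

# Assumed matrix in Gaussian normal form (illustrative, not the chapter's):
R = [[1, 0, 2],
     [0, 1, -1],
     [0, 0, 0]]

vecs = minus_one_trick(R)
print(vecs)  # [[2, -1, -1]]: only the third diagonal entry was 0

# Each produced vector v indeed satisfies R v = 0:
for v in vecs:
    assert all(sum(r * x for r, x in zip(row, v)) == 0 for row in R)
```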

Linear systems and linear maps


As the name suggests, there is an important connection between linear systems of equations and linear maps. For example, let's look at the following linear system

We can combine these two equations into one equation by using the vector notation. The left-hand side is a vector of variables , the right-hand side is a constant vector. We can thus write the columns of the system of equations as vectors and obtain

The left-hand side depends on the variables and . These can be described by a function

This gives us the equation

In particular, is a linear map.

Exercise (Linear map)

Show that is a linear map.

Solution (Linear map)

The domain and the target space of the map are both vector spaces.

Proof step: Proof of additivity

For all we have

This proves additivity of .

Proof step: Proof of homogeneity

For all we have

This shows the homogeneity of and we have thus proven that this map is linear (i.e., a homomorphism).

We have rewritten the above system of equations into an equation for a linear map that results in a constant vector. In other words, the search for a solution to the above system of equations is the same as the search for a preimage of the right-hand side under . In fact, we obtain the coefficient matrix of the above system of equations as the representation matrix of with respect to the basis .

More generally, for a linear system with $m$ equations and $n$ variables over a field $K$, we find a corresponding linear map as follows: Let $A \in K^{m \times n}$ be the coefficient matrix and $b \in K^m$ the right-hand side of the linear system. Then we set $f\colon K^n \to K^m$, $f(x) = Ax$. This map is linear and agrees with the above construction. The search for a solution to the system of equations then corresponds to the search for a preimage of $b$ under $f$, that is, an $x \in K^n$ satisfying $f(x) = b$.
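This correspondence can be checked numerically. A small sketch using the coefficient matrix from the fragrance example at the start of the chapter; the test vectors, the scalar, and the right-hand side are assumed values.

```python
from fractions import Fraction

# Coefficient matrix of the fragrance example from the start of the chapter.
A = [[Fraction(7), Fraction(3)],
     [Fraction(2), Fraction(8)]]

def f(x):
    """The linear map f(x) = A x induced by the coefficient matrix."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

u, v = [Fraction(1), Fraction(2)], [Fraction(-3), Fraction(5)]
s = Fraction(4)

# Additivity: f(u + v) = f(u) + f(v)
assert f([ui + vi for ui, vi in zip(u, v)]) == [a + c for a, c in zip(f(u), f(v))]
# Homogeneity: f(s * u) = s * f(u)
assert f([s * ui for ui in u]) == [s * fi for fi in f(u)]

# Solving A x = b means finding a preimage of b under f:
b = [Fraction(13), Fraction(18)]
x = [Fraction(1), Fraction(2)]   # assumed candidate preimage
assert f(x) == b
print("f is linear, and x is a preimage of b under f")
```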