The zero solution of a system. Homogeneous systems of linear algebraic equations. How to find a fundamental system of solutions of a linear system

A system of m linear equations in n unknowns is called a system of linear homogeneous equations if all of its free terms are equal to zero. Such a system has the form:

a_11 x_1 + a_12 x_2 + … + a_1n x_n = 0,
a_21 x_1 + a_22 x_2 + … + a_2n x_n = 0,
. . . . . . . . . . . . . . . . . . . . . . . . .
a_m1 x_1 + a_m2 x_2 + … + a_mn x_n = 0,   (1)

where a_ij (i = 1, 2, …, m; j = 1, 2, …, n) are given numbers and x_j are the unknowns.

A system of linear homogeneous equations is always consistent, since r(A) = r(Ā): appending the zero column of free terms does not change the rank. It always has at least the zero (trivial) solution (0; 0; …; 0).

Let us consider under what conditions homogeneous systems have non-zero solutions.

Theorem 1. A system of linear homogeneous equations has nonzero solutions if and only if the rank r of its main matrix is less than the number of unknowns n, i.e. r < n.

1). Suppose a system of linear homogeneous equations has a nonzero solution. Since the rank cannot exceed the size of the matrix, obviously r ≤ n. Suppose r = n. Then one of the minors of size n × n is different from zero, and therefore the corresponding system of linear equations has the unique solution x_1 = x_2 = … = x_n = 0, so there are no solutions other than the trivial one. Hence, if a nontrivial solution exists, then r < n.

2). Let r < n. Then the homogeneous system, being consistent, is indeterminate. This means that it has infinitely many solutions, i.e. it has nonzero solutions.

Consider a homogeneous system of n linear equations in n unknowns:

a_11 x_1 + a_12 x_2 + … + a_1n x_n = 0,
. . . . . . . . . . . . . . . . . . . . . . . . .
a_n1 x_1 + a_n2 x_2 + … + a_nn x_n = 0.   (2)

Theorem 2. The homogeneous system (2) of n linear equations in n unknowns has nonzero solutions if and only if its determinant is equal to zero: Δ = 0.

If system (2) has a nonzero solution, then Δ = 0, because when Δ ≠ 0 the system has only the single zero solution. Conversely, if Δ = 0, then the rank r of the main matrix of the system is less than the number of unknowns, i.e. r < n, and therefore the system has infinitely many solutions, i.e. it has nonzero solutions.
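Both criteria are easy to check numerically. The following minimal sketch is not part of the original notes: the matrix is invented for illustration, and NumPy is assumed to be available.

```python
import numpy as np

# Hypothetical square homogeneous system A x = 0 (matrix chosen for illustration).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # second row = 2 * first row, so det(A) = 0
              [0.0, 1.0, 5.0]])

n = A.shape[1]                      # number of unknowns
det_A = np.linalg.det(A)            # Theorem 2: nonzero solutions iff det(A) = 0
rank_A = np.linalg.matrix_rank(A)   # Theorem 1: nonzero solutions iff rank < n

# det_A may print as a tiny number instead of exactly 0 due to floating point.
print(f"det(A) = {det_A:.3g}, rank(A) = {rank_A}, n = {n}")
if rank_A < n:
    print("rank < n, so the homogeneous system has nonzero (nontrivial) solutions")
else:
    print("rank = n, so the only solution is the trivial one x = 0")
```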

Let us denote a solution of system (1), x_1 = k_1, x_2 = k_2, …, x_n = k_n, as the row e = (k_1, k_2, …, k_n).

Solutions of a system of linear homogeneous equations have the following properties:

1. If the row e = (k_1, k_2, …, k_n) is a solution of system (1), then for any number λ the row λe = (λk_1, λk_2, …, λk_n) is also a solution of system (1).

2. If the rows e_1 and e_2 are solutions of system (1), then for any values c_1 and c_2 their linear combination c_1 e_1 + c_2 e_2 is also a solution of system (1).

The validity of these properties can be verified by directly substituting them into the equations of the system.

From the formulated properties it follows that any linear combination of solutions to a system of linear homogeneous equations is also a solution to this system.

A system of linearly independent solutions e_1, e_2, …, e_p is called fundamental if every solution of system (1) is a linear combination of these solutions e_1, e_2, …, e_p.

Theorem 3. If the rank r of the coefficient matrix of the system of linear homogeneous equations (1) is less than the number of variables n, then any fundamental system of solutions of system (1) consists of n – r solutions.

Therefore, the general solution of the system of linear homogeneous equations (1) has the form:

X = c_1 e_1 + c_2 e_2 + … + c_p e_p,

where e_1, e_2, …, e_p is any fundamental system of solutions of system (1), c_1, c_2, …, c_p are arbitrary numbers, and p = n – r.

Theorem 4. The general solution of a consistent system of m linear equations in n unknowns is equal to the sum of the general solution of the corresponding system of linear homogeneous equations (1) and an arbitrary particular solution of the original system.

Example. Solve the system

Solution. For this system m = n = 3. The determinant of the main matrix is different from zero (Δ ≠ 0); therefore, by Theorem 2, the system has only the trivial solution: x = y = z = 0.

Example. 1) Find general and particular solutions of the system

2) Find the fundamental system of solutions.

Solution. 1) For this system m = n = 3. The determinant of the main matrix equals zero (Δ = 0); therefore, by Theorem 2, the system has nonzero solutions.

Since the system contains only one independent equation,

x + y – 4z = 0,

we express from it x = 4z – y. This gives an infinite number of solutions: (4z – y, y, z) is the general solution of the system.

For z = 1, y = –1 we obtain one particular solution: (5, –1, 1). Putting z = 3, y = 2, we obtain a second particular solution: (10, 2, 3), and so on.

2) In the general solution (4z – y, y, z) the variables y and z are free, and the variable x depends on them. To find a fundamental system of solutions, assign values to the free variables: first y = 1, z = 0, then y = 0, z = 1. We obtain the particular solutions (–1, 1, 0) and (4, 0, 1), which form a fundamental system of solutions.
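The fundamental system found above can be verified with SymPy (a hedged sketch; SymPy is assumed to be installed). Note that nullspace() may return a different but equivalent basis.

```python
from sympy import Matrix

# Only the single independent equation x + y - 4z = 0 survives in this example.
A = Matrix([[1, 1, -4]])

# SymPy returns some basis of the solution space; here it coincides with the
# fundamental system found by hand: (-1, 1, 0) and (4, 0, 1).
print(A.nullspace())

# Check the particular solutions listed above: each must satisfy the equation.
for sol in [(-1, 1, 0), (4, 0, 1), (5, -1, 1), (10, 2, 3)]:
    assert A * Matrix(sol) == Matrix([0])
print("all listed solutions satisfy x + y - 4z = 0")
```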

Illustrations:

Fig. 1 Classification of systems of linear equations

Fig. 2 Study of systems of linear equations

Presentations:

· Solving SLAEs: the matrix method

· Solving SLAEs: Cramer's method

· Solving SLAEs: the Gauss method

· Software packages for solving mathematical problems (Mathematica, MathCad): finding analytical and numerical solutions of systems of linear equations

Control questions:

1. Define a linear equation

2. What is the general form of a system of m linear equations in n unknowns?

3. What is called a solution of a system of linear equations?

4. What systems are called equivalent?

5. Which system is called inconsistent?

6. Which system is called consistent?

7. Which system is called definite?

8. Which system is called indefinite?

9. List the elementary transformations of systems of linear equations

10. List the elementary transformations of matrices

11. Formulate a theorem on the application of elementary transformations to a system of linear equations

12. What systems can be solved using the matrix method?

13. What systems can be solved by Cramer's method?

14. What systems can be solved by the Gauss method?

15. List 3 possible cases that arise when solving systems of linear equations using the Gauss method

16. Describe the matrix method for solving systems of linear equations

17. Describe Cramer’s method for solving systems of linear equations

18. Describe Gauss’s method for solving systems of linear equations

19. What systems can be solved using an inverse matrix?

20. List 3 possible cases that arise when solving systems of linear equations using the Cramer method





We will continue to polish our technique of elementary transformations on a homogeneous system of linear equations.
Judging by the first paragraphs, the material may seem plain and unremarkable, but this impression is deceptive. Besides further practice of the technique, there will be a lot of new information, so please try not to neglect the examples in this article.

What is a homogeneous system of linear equations?

The answer suggests itself. A system of linear equations is homogeneous if the free term of every equation of the system is zero. For example:

It is quite clear that a homogeneous system is always consistent, that is, it always has a solution. First of all, what catches the eye is the so-called trivial solution. "Trivial", for those not familiar with the adjective, means "without any frills". Not academic, of course, but intelligible =) ...Why beat around the bush, let's find out whether this system has any other solutions:

Example 1


Solution: to solve a homogeneous system it is necessary to write down the system matrix and bring it to echelon form using elementary transformations. Note that there is no need to write the vertical bar and the zero column of free terms here: whatever you do with zeros, they will remain zeros:

(1) The first row, multiplied by –2, was added to the second row. The first row, multiplied by –3, was added to the third row.

(2) The second row, multiplied by –1, was added to the third row.

There is not much point in dividing the third row by 3.

As a result of elementary transformations, an equivalent homogeneous system is obtained, and, applying the reverse pass of the Gauss method, it is easy to verify that the solution is unique.

Answer: the system has only the trivial solution (all unknowns equal zero).

Let us formulate an obvious criterion: a homogeneous system of linear equations has only the trivial solution if the rank of the system matrix (in this case 3) is equal to the number of variables (in this case also 3).
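This criterion is one line of NumPy. The matrix below is only a stand-in for the one in Example 1 (which is not reproduced here), so treat this as a sketch under that assumption.

```python
import numpy as np

# Stand-in for the echelon matrix obtained in Example 1 (the real one is not shown here).
A = np.array([[1, 2, 1],
              [0, 1, 3],
              [0, 0, 2]], dtype=float)

n = A.shape[1]
r = np.linalg.matrix_rank(A)
print(f"rank = {r}, unknowns = {n}, dimension of solution space = {n - r}")
# n - r == 0 means the homogeneous system has only the trivial solution;
# otherwise a fundamental system of solutions contains n - r vectors.
```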

Let's warm up and tune our radio to the wave of elementary transformations:

Example 2

Solve a homogeneous system of linear equations

To finally consolidate the algorithm, let’s analyze the final task:

Example 7

Solve a homogeneous system, write the answer in vector form.

Solution: let's write down the matrix of the system and, using elementary transformations, bring it to echelon form:

(1) The sign of the first row was changed. Once again I draw attention to this frequently encountered technique, which allows you to simplify the next action considerably.

(2) The first row was added to the 2nd and 3rd rows. The first row, multiplied by 2, was added to the 4th row.

(3) The last three rows are proportional; two of them were removed.

As a result, a standard echelon matrix is obtained, and the solution continues along the well-trodden path:

– basic variables;
– free variables.

Let us express the basic variables in terms of free variables. From the 2nd equation:

– substitute into the 1st equation:

So the general solution is:

Since in the example under consideration there are three free variables, the fundamental system contains three vectors.

Let's substitute a triple of values into the general solution and obtain a vector whose coordinates satisfy each equation of the homogeneous system. And again, I repeat that it is highly advisable to check each received vector - it will not take much time, but it will completely protect you from errors.

For the next triple of values we find the vector:

And finally, for the third triple of values we obtain the third vector:

Answer: , where

Those wishing to avoid fractional values ​​may consider triplets and get an answer in equivalent form:

Speaking of fractions: let's look at the matrix obtained in the problem and ask ourselves whether the further solution could be simplified. After all, here we first expressed one basic variable through fractions, then another basic variable through fractions, and, I must say, this process was neither the simplest nor the most pleasant.

Second solution:

The idea is to try to choose other basic variables. Let's look at the matrix and notice two ones in the third column. So why not get a zero at the top? Let's carry out one more elementary transformation:

Homogeneous systems of linear algebraic equations

In the lessons on the Gauss method and on Incompatible systems / systems with a general solution we considered inhomogeneous systems of linear equations, where the free term (which usually sits on the right) of at least one of the equations was different from zero. Now, after a good warm-up with matrix rank, we continue to polish the technique of elementary transformations on homogeneous systems.


For Example 2, recall from the article How to find the rank of a matrix? the rational technique of reducing the matrix entries along the way; otherwise you will have to carve up a large, and often biting, fish. An approximate sample of the write-up is given at the end of the lesson.

Zeros are good and convenient, but in practice it is much more common for the rows of the system matrix to be linearly dependent. And then the appearance of a general solution is inevitable:

Example 3

Solve a homogeneous system of linear equations

Solution: let's write down the matrix of the system and, using elementary transformations, bring it to echelon form. The first action is aimed not only at obtaining a one in the corner, but also at decreasing the numbers in the first column:

(1) The third row, multiplied by –1, was added to the first row. The third row, multiplied by –2, was added to the second row. In the top left corner we obtained a one with a "minus", which is often much more convenient for further transformations.

(2) The first two rows turned out to be identical, and one of them was deleted. Honestly, I did not rig the solution; it just came out that way. If you perform the transformations in the standard way, the linear dependence of the rows would have revealed itself a little later.

(3) The second row, multiplied by 3, was added to the third row.

(4) The sign of the first row was changed.

As a result of elementary transformations, an equivalent system was obtained:

The algorithm works exactly the same as for inhomogeneous systems. The variables "sitting on the steps" are the basic ones; the variable that did not get a "step" is free.

Let's express the basic variables through a free variable:

Answer: the general solution is:

The trivial solution is included in the general formula, and it is unnecessary to write it down separately.

The check is also carried out according to the usual scheme: the resulting general solution must be substituted into the left side of each equation of the system and a legal zero must be obtained for all substitutions.

We could quietly and peacefully finish here, but the solution of a homogeneous system of equations often has to be represented in vector form using a fundamental system of solutions. Please forget about analytic geometry for now, since here we will talk about vectors in the general algebraic sense, which I touched on a little in the article about matrix rank. There is no need to shy away from the terminology; everything is quite simple.
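For the vector form, SymPy's nullspace() produces exactly a fundamental system of solutions. The system below is a made-up stand-in (the matrices of the examples above are not reproduced here), so this is only a sketch of the workflow.

```python
from sympy import Matrix, symbols

# Hypothetical homogeneous system matrix with linearly dependent rows.
A = Matrix([[1, -2, 1, 0],
            [2, -4, 3, 1],
            [3, -6, 4, 1]])   # third row = first + second, so rank = 2

basis = A.nullspace()          # fundamental system of solutions (here 4 - 2 = 2 vectors)
print(basis)

# General solution in vector form: X = c1*e1 + c2*e2 with arbitrary c1, c2.
c1, c2 = symbols('c1 c2')
X = c1 * basis[0] + c2 * basis[1]
print(X)
assert (A * X).expand() == Matrix([0, 0, 0])   # zero column for every c1, c2
```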


Solving systems of linear algebraic equations (SLAEs) is undoubtedly the most important topic in a linear algebra course. A huge number of problems from all branches of mathematics come down to solving systems of linear equations. These factors explain the reason for this article. The material of the article is selected and structured so that with its help you can

  • choose the optimal method for solving your system of linear algebraic equations,
  • study the theory of the chosen method,
  • solve your system of linear equations by considering detailed solutions to typical examples and problems.

Brief description of the article material.

First, we give all the necessary definitions, concepts and introduce notations.

Next, we will consider methods for solving systems of linear algebraic equations in which the number of equations is equal to the number of unknown variables and which have a unique solution. Firstly, we will focus on Cramer’s method, secondly, we will show the matrix method for solving such systems of equations, and thirdly, we will analyze the Gauss method (the method of sequential elimination of unknown variables). To consolidate the theory, we will definitely solve several SLAEs in different ways.

After this, we will move on to solving systems of linear algebraic equations of general form, in which the number of equations does not coincide with the number of unknown variables or the main matrix of the system is singular. Let us formulate the Kronecker-Capelli theorem, which allows us to establish the compatibility of SLAEs. Let us analyze the solution of systems (if they are compatible) using the concept of a basis minor of a matrix. We will also consider the Gauss method and describe in detail the solutions to the examples.

We will definitely dwell on the structure of the general solution of homogeneous and inhomogeneous systems of linear algebraic equations. Let us give the concept of a fundamental system of solutions and show how the general solution of a SLAE is written using the vectors of the fundamental system of solutions. For a better understanding, let's look at a few examples.

In conclusion, we will consider systems of equations that can be reduced to linear ones, as well as various problems in the solution of which SLAEs arise.


Definitions, concepts, designations.

We will consider systems of p linear algebraic equations in n unknown variables (p may be equal to n) of the form

a_11 x_1 + a_12 x_2 + … + a_1n x_n = b_1,
a_21 x_1 + a_22 x_2 + … + a_2n x_n = b_2,
. . . . . . . . . . . . . . . . . . . . . . . . .
a_p1 x_1 + a_p2 x_2 + … + a_pn x_n = b_p,

where x_1, x_2, …, x_n are the unknown variables, a_ij (i = 1, 2, …, p; j = 1, 2, …, n) are the coefficients (some real or complex numbers), and b_1, b_2, …, b_p are the free terms (also real or complex numbers).

This form of recording SLAE is called coordinate.

In matrix form this system of equations is written as A·X = B,
where A is the main matrix of the system, X is the column matrix of unknown variables, and B is the column matrix of free terms.

If we append the column of free terms to the matrix A as an (n+1)-th column, we obtain the so-called extended matrix of the system of linear equations. Typically the extended matrix is denoted by the letter T, and the column of free terms is separated from the remaining columns by a vertical line, that is, T = (A | B).

A solution of a system of linear algebraic equations is a set of values of the unknown variables that turns all equations of the system into identities. For these values of the unknown variables the matrix equation A·X = B also becomes an identity.

If a system of equations has at least one solution, then it is called consistent.

If a system of equations has no solutions, then it is called inconsistent.

If a SLAE has a unique solution, then it is called definite; if it has more than one solution, then it is called indefinite.

If the free terms of all equations of the system are equal to zero, then the system is called homogeneous, otherwise it is called inhomogeneous.

Solving elementary systems of linear algebraic equations.

If the number of equations of a system is equal to the number of unknown variables and the determinant of its main matrix is ​​not equal to zero, then such SLAEs will be called elementary. Such systems of equations have a unique solution, and in the case of a homogeneous system, all unknown variables are equal to zero.

We started studying such SLAEs in high school. When solving them, we took one equation, expressed one unknown variable in terms of others and substituted it into the remaining equations, then took the next equation, expressed the next unknown variable and substituted it into other equations, and so on. Or they used the addition method, that is, they added two or more equations to eliminate some unknown variables. We will not dwell on these methods in detail, since they are essentially modifications of the Gauss method.

The main methods for solving elementary systems of linear equations are the Cramer method, the matrix method and the Gauss method. Let's sort them out.

Solving systems of linear equations using Cramer's method.

Suppose we need to solve a system of linear algebraic equations

in which the number of equations is equal to the number of unknown variables and the determinant of the main matrix of the system is different from zero, that is, det(A) ≠ 0.

Let Δ be the determinant of the main matrix of the system, and Δ_1, Δ_2, …, Δ_n the determinants of the matrices obtained from A by replacing the 1st, 2nd, …, n-th column, respectively, with the column of free terms:

With this notation, the unknown variables are calculated by the formulas of Cramer's method as x_i = Δ_i / Δ, i = 1, 2, …, n. This is how the solution of a system of linear algebraic equations is found by Cramer's method.

Example.

Solve the system of linear equations by Cramer's method.

Solution.

The main matrix of the system has the form . Let's calculate its determinant (if necessary, see the article):

Since the determinant of the main matrix of the system is nonzero, the system has a unique solution that can be found by Cramer’s method.

Let us compose and calculate the necessary determinants (Δ_1 is obtained by replacing the first column of matrix A with the column of free terms, Δ_2 by replacing the second column, and Δ_3 by replacing the third column of matrix A with the column of free terms):

We find the unknown variables using the formulas x_i = Δ_i / Δ:

Answer:

The main disadvantage of Cramer's method (if it can be called a disadvantage) is the complexity of calculating determinants when the number of equations in the system is more than three.
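A minimal numerical sketch of Cramer's formulas is given below. It is not the worked example from the text: the 3×3 system is invented purely for illustration, and NumPy is assumed to be available.

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule; A must be square with det(A) != 0."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    delta = np.linalg.det(A)
    if np.isclose(delta, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        Ai = A.copy()
        Ai[:, i] = b                           # replace the i-th column with the free terms
        x[i] = np.linalg.det(Ai) / delta       # x_i = Delta_i / Delta
    return x

# Invented 3x3 example; its exact solution is (1, 2, 3).
A = [[2, 1, -1],
     [1, 3,  2],
     [1, 0,  1]]
b = [1, 13, 4]
print(cramer(A, b))                            # [1. 2. 3.]
print(np.linalg.solve(np.array(A, float), b))  # cross-check with the library solver
```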

Solving systems of linear algebraic equations using the matrix method (using an inverse matrix).

Let a system of linear algebraic equations be given in matrix form A·X = B, where the matrix A has dimension n by n and its determinant is nonzero.

Since det(A) ≠ 0, the matrix A is invertible, that is, there exists an inverse matrix A⁻¹. If we multiply both sides of the equality A·X = B by A⁻¹ on the left, we obtain the formula X = A⁻¹·B for finding the column matrix of unknown variables. This is how a solution of a system of linear algebraic equations is obtained by the matrix method.

Example.

Solve system of linear equations matrix method.

Solution.

Let's rewrite the system of equations in matrix form:

Since the determinant of the main matrix of the system is different from zero,

the SLAE can be solved by the matrix method. Using the inverse matrix, the solution of this system can be found as X = A⁻¹·B.

Let us construct the inverse matrix A⁻¹ using the matrix of algebraic complements (cofactors) of the elements of matrix A (if necessary, see the article):

It remains to calculate the column matrix of unknown variables by multiplying the inverse matrix A⁻¹ by the column matrix of free terms B (if necessary, see the article):

Answer:

or in another notation x 1 = 4, x 2 = 0, x 3 = -1.

The main problem when finding solutions to systems of linear algebraic equations using the matrix method is the complexity of finding the inverse matrix, especially for square matrices of order higher than third.
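A sketch of the matrix method X = A⁻¹·B in NumPy follows; the system is the same invented one as in the Cramer sketch above (not the example from the text). In practice np.linalg.solve is preferred to forming the inverse explicitly.

```python
import numpy as np

# Invented system with a nonsingular 3x3 matrix; its solution is (1, 2, 3).
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0]])
B = np.array([1.0, 13.0, 4.0])

assert not np.isclose(np.linalg.det(A), 0.0)   # the method requires det(A) != 0

A_inv = np.linalg.inv(A)       # inverse matrix
X = A_inv @ B                  # X = A^{-1} B
print(X)                       # [1. 2. 3.]

# Numerically more robust alternative that avoids the explicit inverse:
print(np.linalg.solve(A, B))
```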

Solving systems of linear equations using the Gauss method.

Suppose we need to find a solution of a system of n linear equations with n unknown variables
whose main matrix has a determinant different from zero.

The essence of the Gauss method consists in the sequential elimination of unknown variables: first x_1 is eliminated from all equations of the system, starting from the second; then x_2 is eliminated from all equations, starting from the third; and so on, until only the unknown variable x_n remains in the last equation. This process of transforming the equations of the system for the sequential elimination of unknown variables is called the forward pass of the Gauss method. After the forward pass is completed, x_n is found from the last equation; using this value, x_{n-1} is calculated from the penultimate equation, and so on, until x_1 is found from the first equation. The process of calculating the unknown variables when moving from the last equation of the system to the first is called the backward pass of the Gauss method.

Let us briefly describe the algorithm for eliminating unknown variables.

We will assume that a_11 ≠ 0, since we can always achieve this by rearranging the equations of the system. Let us eliminate the unknown variable x_1 from all equations of the system, starting with the second. To do this, we add to the second equation of the system the first equation multiplied by –a_21/a_11, to the third equation the first multiplied by –a_31/a_11, and so on, to the n-th equation the first multiplied by –a_n1/a_11. After such transformations the system of equations takes the form

where a_ij^(1) = a_ij – (a_i1/a_11)·a_1j and b_i^(1) = b_i – (a_i1/a_11)·b_1 for i, j = 2, 3, …, n.

We would have arrived at the same result if we had expressed x 1 in terms of other unknown variables in the first equation of the system and substituted the resulting expression into all other equations. Thus, the variable x 1 is excluded from all equations, starting from the second.

Next, we proceed in a similar way, but only with part of the resulting system, which is marked in the figure

To do this, we add to the third equation of the system the second equation multiplied by –a_32^(1)/a_22^(1), to the fourth equation the second multiplied by –a_42^(1)/a_22^(1), and so on, to the n-th equation the second multiplied by –a_n2^(1)/a_22^(1). After such transformations the system of equations takes the form

where a_ij^(2) and b_i^(2) denote the new coefficients and free terms. Thus, the variable x_2 is eliminated from all equations, starting from the third.

Next, we proceed to eliminating the unknown x 3, while we act similarly with the part of the system marked in the figure

So we continue the direct progression of the Gaussian method until the system takes the form

From this moment we begin the backward pass of the Gauss method: we calculate x_n from the last equation as x_n = b_n^(n-1)/a_nn^(n-1); using the obtained value of x_n we find x_{n-1} from the penultimate equation, and so on, until we find x_1 from the first equation.
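The forward and backward passes described above can be sketched in a few lines of Python. This is a hedged illustration, not the author's code: the helper gauss_solve and the test system are invented, and only a simple row swap is used to keep the pivot nonzero.

```python
import numpy as np

def gauss_solve(A, b):
    """Forward elimination + back substitution for a square nonsingular system."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):                      # forward pass: eliminate x_k
        if np.isclose(A[k, k], 0.0):            # ensure a nonzero pivot by a row swap
            swap = k + int(np.argmax(np.abs(A[k:, k])))
            A[[k, swap]], b[[k, swap]] = A[[swap, k]], b[[swap, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]            # add the k-th row times (-m)
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # backward pass
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Invented test system; the exact solution is (1, 2, 3).
print(gauss_solve([[2, 1, -1], [1, 3, 2], [1, 0, 1]], [1, 13, 4]))   # [1. 2. 3.]
```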

Example.

Solve system of linear equations Gauss method.

Solution.

Let us eliminate the unknown variable x_1 from the second and third equations of the system. To do this, we add to both sides of the second and third equations the corresponding sides of the first equation, multiplied by the appropriate factors:

Now we eliminate x_2 from the third equation by adding to its left and right sides the left and right sides of the second equation, multiplied by the appropriate factor:

This completes the forward pass of the Gauss method; we begin the backward pass.

From the last equation of the resulting system of equations we find x 3:

From the second equation we get .

From the first equation we find the remaining unknown variable and thereby complete the reverse of the Gauss method.

Answer:

X 1 = 4, x 2 = 0, x 3 = -1.

Solving systems of linear algebraic equations of general form.

In general, the number of equations of the system p does not coincide with the number of unknown variables n:

Such SLAEs may have no solutions, have a single solution, or have infinitely many solutions. This statement also applies to systems of equations whose main matrix is ​​square and singular.

Kronecker–Capelli theorem.

Before finding a solution to a system of linear equations, it is necessary to establish its compatibility. The answer to the question when SLAE is compatible and when it is inconsistent is given by Kronecker–Capelli theorem:
In order for a system of p equations with n unknowns (p can be equal to n) to be consistent, it is necessary and sufficient that the rank of the main matrix of the system be equal to the rank of the extended matrix, that is, Rank(A)=Rank(T).
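The Kronecker–Capelli check is essentially one comparison of ranks. The sketch below is not from the original text; the two-equation system is invented to show the inconsistent case, and NumPy is assumed.

```python
import numpy as np

# Invented system: the two equations contradict each other.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
B = np.array([1.0, 3.0])

T = np.column_stack([A, B])                 # extended matrix (A | B)
rank_A = np.linalg.matrix_rank(A)           # = 1
rank_T = np.linalg.matrix_rank(T)           # = 2

if rank_A == rank_T:
    print("consistent: the system has at least one solution")
else:
    print("inconsistent: no solutions")     # this branch is taken here
```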

Let us consider, as an example, the application of the Kronecker–Capelli theorem to determine the compatibility of a system of linear equations.

Example.

Find out whether the system of linear equations has solutions.

Solution.

Let us find the rank of the main matrix of the system using the method of bordering minors. There is a second-order minor different from zero. Let us examine the third-order minors bordering it:

Since all the bordering minors of the third order are equal to zero, the rank of the main matrix is ​​equal to two.

In turn, the rank of the extended matrix is equal to three, since it has a third-order minor

different from zero.

Thus, Rank(A) ≠ Rank(T), and therefore, by the Kronecker–Capelli theorem, we conclude that the original system of linear equations is inconsistent.

Answer:

The system has no solutions.

So, we have learned to establish the inconsistency of a system using the Kronecker–Capelli theorem.

But how to find a solution to an SLAE if its compatibility is established?

To do this, we need the concept of a basis minor of a matrix and a theorem about the rank of a matrix.

A nonzero minor of the highest order of a matrix A is called a basis minor.

From the definition of a basis minor it follows that its order is equal to the rank of the matrix. A nonzero matrix A may have several basis minors, but it always has at least one.

For example, consider the matrix .

All third-order minors of this matrix are equal to zero, since the elements of the third row of this matrix are the sum of the corresponding elements of the first and second rows.

The following second-order minors are basis minors, since they are nonzero:

Other second-order minors are not basis minors, since they are equal to zero.

Matrix rank theorem.

If the rank of a matrix of size p by n is equal to r, then all rows (and columns) of the matrix that do not take part in forming the chosen basis minor are linearly expressed in terms of the corresponding rows (and columns) that form the basis minor.

What does the matrix rank theorem tell us?

If, by the Kronecker–Capelli theorem, we have established the consistency of the system, then we choose any basis minor of the main matrix of the system (its order is equal to r) and exclude from the system all equations that do not take part in forming the selected basis minor. The SLAE obtained in this way is equivalent to the original one, since the discarded equations are redundant (by the matrix rank theorem, they are linear combinations of the remaining equations).

As a result, after discarding unnecessary equations of the system, two cases are possible.

    If the number of equations r in the resulting system is equal to the number of unknown variables, then it will be definite and the only solution can be found by the Cramer method, the matrix method or the Gauss method.

    Example.

    .

    Solution.

    The rank of the main matrix of the system is equal to two, since it has a second-order minor different from zero. The rank of the extended matrix is also equal to two, since its only third-order minor is zero

    and the second-order minor considered above is different from zero. Based on the Kronecker–Capelli theorem, we can assert that the original system of linear equations is consistent, since Rank(A) = Rank(T) = 2.

    As the basis minor we take the second-order minor found above. It is formed by the coefficients of the first and second equations:

    The third equation of the system does not participate in the formation of the basis minor, so we exclude it from the system based on the theorem on the rank of the matrix:

    This is how we obtained an elementary system of linear algebraic equations. Let's solve it using Cramer's method:

    Answer:

    x 1 = 1, x 2 = 2.

    If the number of equations r in the resulting SLAE is less than the number of unknown variables n, then on the left sides of the equations we leave the terms that form the basis minor, and we transfer the remaining terms to the right sides of the equations of the system with the opposite sign.

    The unknown variables (there are r of them) remaining on the left-hand sides of the equations are called basic.

    The unknown variables (there are n – r of them) that end up on the right-hand sides are called free.

    Now we assume that the free unknown variables can take arbitrary values, while the r basic unknown variables are expressed through the free ones in a unique way. This expression can be found by solving the resulting SLAE by Cramer's method, the matrix method or the Gauss method.

    Let's look at it with an example.

    Example.

    Solve a system of linear algebraic equations .

    Solution.

    Let us find the rank of the main matrix of the system by the method of bordering minors. As a nonzero first-order minor we take the element a_11 = 1. Let us start searching for a nonzero second-order minor bordering it:

    This is how we found a non-zero minor of the second order. Let's start searching for a non-zero bordering minor of the third order:

    Thus, the rank of the main matrix is ​​three. The rank of the extended matrix is ​​also equal to three, that is, the system is consistent.

    We take the found non-zero minor of the third order as the basis one.

    For clarity, we show the elements that form the basis minor:

    We leave the terms involved in the basis minor on the left side of the system equations, and transfer the rest with opposite signs to the right sides:

    Let us give the free unknown variables x_2 and x_5 arbitrary values, treating them as parameters. In this case the SLAE takes the form

    Let us solve the resulting elementary system of linear algebraic equations using Cramer’s method:

    Hence, .

    In your answer, do not forget to indicate free unknown variables.

    Answer:

    where the free parameters are arbitrary numbers.

Summarize.

To solve a system of linear algebraic equations of general form, we first determine its consistency using the Kronecker–Capelli theorem. If the rank of the main matrix is not equal to the rank of the extended matrix, then we conclude that the system is inconsistent.

If the rank of the main matrix is ​​equal to the rank of the extended matrix, then we select a basis minor and discard the equations of the system that do not participate in the formation of the selected basis minor.

If the order of the basis minor is equal to the number of unknown variables, then the SLAE has a unique solution, which can be found by any method known to us.

If the order of the basis minor is less than the number of unknown variables, then on the left side of the system equations we leave the terms with the main unknown variables, transfer the remaining terms to the right sides and give arbitrary values ​​to the free unknown variables. From the resulting system of linear equations we find the main unknown variables using the Cramer method, the matrix method or the Gauss method.
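SymPy can carry out this whole procedure (rank check, choice of free unknowns, expression of the basic ones) automatically. The sketch below uses an invented consistent system with rank 2 and 4 unknowns; it is an illustration of the workflow, not the example from the text.

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')

# Invented consistent system: the third equation is the sum of the first two.
A = Matrix([[1, 1, 1, 1],
            [1, 2, 0, 3],
            [2, 3, 1, 4]])
B = Matrix([4, 5, 9])

# Kronecker-Capelli: both ranks equal 2, so the system is consistent.
print(A.rank(), A.row_join(B).rank())

# linsolve discards the redundant equation and expresses the basic unknowns
# (x1, x2) through the free ones (x3, x4).
print(linsolve((A, B), x1, x2, x3, x4))
```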

Gauss method for solving systems of linear algebraic equations of general form.

The Gauss method can be used to solve systems of linear algebraic equations of any kind without first testing them for consistency. The process of sequential elimination of unknown variables makes it possible to draw a conclusion about both the compatibility and incompatibility of the SLAE, and if a solution exists, it makes it possible to find it.

From a computational point of view, the Gaussian method is preferable.

See its detailed description and analyzed examples in the article Gauss method for solving systems of general linear algebraic equations.

Writing a general solution to homogeneous and inhomogeneous linear algebraic systems using vectors of the fundamental system of solutions.

In this section we will talk about consistent homogeneous and inhomogeneous systems of linear algebraic equations that have an infinite number of solutions.

Let us first deal with homogeneous systems.

A fundamental system of solutions of a homogeneous system of p linear algebraic equations in n unknown variables is a collection of (n – r) linearly independent solutions of this system, where r is the order of the basis minor of the main matrix of the system.

If we denote the linearly independent solutions of a homogeneous SLAE as X^(1), X^(2), …, X^(n-r) (X^(1), X^(2), …, X^(n-r) are column matrices of dimension n by 1), then the general solution of this homogeneous system is represented as a linear combination of the vectors of the fundamental system of solutions with arbitrary constant coefficients C_1, C_2, …, C_{n-r}, that is, X = C_1·X^(1) + C_2·X^(2) + … + C_{n-r}·X^(n-r).

What does the term general solution of a homogeneous system of linear algebraic equations mean?

The meaning is simple: the formula specifies all possible solutions of the original SLAE, in other words, taking any set of values ​​of arbitrary constants C 1, C 2, ..., C (n-r), using the formula we will obtain one of the solutions of the original homogeneous SLAE.

Thus, if we find a fundamental system of solutions, then we can describe all solutions of this homogeneous SLAE as X = C_1·X^(1) + C_2·X^(2) + … + C_{n-r}·X^(n-r).

Let us show the process of constructing a fundamental system of solutions to a homogeneous SLAE.

We select the basis minor of the original system of linear equations, exclude all other equations from the system and transfer all terms containing the free unknown variables to the right-hand sides of the equations with opposite signs. Then we give the free unknown variables the values 1, 0, 0, …, 0 and calculate the basic unknowns by solving the resulting elementary system of linear equations in any way, for example, by Cramer's method. This gives X^(1), the first solution of the fundamental system. If we give the free unknowns the values 0, 1, 0, …, 0 and calculate the basic unknowns, we obtain X^(2). And so on. Finally, if we give the free unknown variables the values 0, 0, …, 0, 1 and calculate the basic unknowns, we obtain X^(n-r). In this way a fundamental system of solutions of the homogeneous SLAE is constructed, and its general solution can be written in the form X = C_1·X^(1) + C_2·X^(2) + … + C_{n-r}·X^(n-r).

For inhomogeneous systems of linear algebraic equations, the general solution is represented in the form X = X_0 + C_1·X^(1) + C_2·X^(2) + … + C_{n-r}·X^(n-r), where C_1·X^(1) + … + C_{n-r}·X^(n-r) is the general solution of the corresponding homogeneous system and X_0 is a particular solution of the original inhomogeneous SLAE, which we obtain by giving the free unknowns the values 0, 0, …, 0 and calculating the values of the basic unknowns.
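The decomposition "particular solution + fundamental system of the homogeneous part" can be sketched with SymPy. The system is the same invented one as in the previous sketch, so this only illustrates the structure of the answer.

```python
from sympy import Matrix, symbols

A = Matrix([[1, 1, 1, 1],
            [1, 2, 0, 3],
            [2, 3, 1, 4]])
B = Matrix([4, 5, 9])

# Particular solution: set the free parameters (i.e. the free unknowns) to 0.
X0, params = A.gauss_jordan_solve(B)       # X0 is parametrized by the free unknowns
X_part = X0.subs({p: 0 for p in params})   # free unknowns = 0  ->  (3, 1, 0, 0)
print(X_part)

# Fundamental system of solutions of the homogeneous system A X = 0.
fss = A.nullspace()
print(fss)

# General solution: X = X_part + C1*fss[0] + C2*fss[1] with arbitrary C1, C2.
C1, C2 = symbols('C1 C2')
X = X_part + C1 * fss[0] + C2 * fss[1]
assert (A * X).expand() == B               # holds for every C1, C2
```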

Let's look at examples.

Example.

Find the fundamental system of solutions and the general solution of a homogeneous system of linear algebraic equations .

Solution.

The rank of the main matrix of a homogeneous system of linear equations is always equal to the rank of the extended matrix. Let us find the rank of the main matrix using the method of bordering minors. As a nonzero first-order minor we take the element a_11 = 9 of the main matrix of the system. Let us find a bordering nonzero second-order minor:

A minor of the second order, different from zero, has been found. Let's go through the third-order minors bordering it in search of a non-zero one:

All the bordering third-order minors are equal to zero; therefore, the rank of the main and extended matrices is equal to two. As the basis minor we take the second-order minor found above. For clarity, let us mark the elements of the system that form it:

The third equation of the original SLAE does not participate in the formation of the basis minor, therefore, it can be excluded:

We leave the terms containing the basic unknowns on the left-hand sides of the equations and transfer the terms with the free unknowns to the right-hand sides:

Let us construct a fundamental system of solutions of the original homogeneous system of linear equations. The fundamental system of solutions of this SLAE consists of two solutions, since the original SLAE contains four unknown variables and the order of its basis minor is equal to two. To find X^(1), we give the free unknown variables the values x_2 = 1, x_4 = 0 and then find the basic unknowns from the system of equations.

Let us consider a homogeneous system of m linear equations in n variables:

a_11 x_1 + a_12 x_2 + … + a_1n x_n = 0,
. . . . . . . . . . . . . . . . . . . . . . . . .
a_m1 x_1 + a_m2 x_2 + … + a_mn x_n = 0.   (15)

A system of homogeneous linear equations is always consistent, because it always has a zero (trivial) solution (0,0,…,0).

If in system (15) m = n and the determinant of the system is different from zero, then the system has only the zero solution, which follows from Cramer's theorem and formulas.

Theorem 1. The homogeneous system (15) has a nontrivial solution if and only if the rank of its matrix is less than the number of variables, i.e. r(A) < n.

Proof. The existence of a nontrivial solution to system (15) is equivalent to a linear dependence of the columns of the system matrix (i.e., there are numbers x 1, x 2,...,x n, not all equal to zero, such that equalities (15) are true).

According to the basis minor theorem, the columns of a matrix are linearly dependent if and only if not all columns of this matrix are basis columns, i.e. if and only if the order r of the basis minor of the matrix is less than the number n of its columns. Q.E.D.

Corollary. A square homogeneous system has nontrivial solutions if and only if |A| = 0.

Theorem 2. If columns x (1), x (2),..., x (s) are solutions to a homogeneous system AX = 0, then any linear combination of them is also a solution to this system.

Proof. Consider an arbitrary linear combination of these solutions, X = c_1 x^(1) + c_2 x^(2) + … + c_s x^(s).

Then AX = A(c_1 x^(1) + … + c_s x^(s)) = c_1 Ax^(1) + … + c_s Ax^(s) = 0. Q.E.D.

Corollary 1. If a homogeneous system has a nontrivial solution, then it has infinitely many solutions.

Thus, it is necessary to find solutions x^(1), x^(2), …, x^(s) of the system Ax = 0 such that any other solution of this system is represented as their linear combination and, moreover, in a unique way.

Definition. A system of k = n – r (where n is the number of unknowns of the system and r = rank A) linearly independent solutions x^(1), x^(2), …, x^(k) of the system Ax = 0 is called a fundamental system of solutions of this system.

Theorem 3. Let a homogeneous system Ax = 0 with n unknowns and r = rank A be given. Then there exists a set of k = n – r solutions x^(1), x^(2), …, x^(k) of this system that forms a fundamental system of solutions.

Proof. Without loss of generality, we may assume that the basis minor of the matrix A is located in the upper left corner. Then, by the basis minor theorem, the remaining rows of the matrix A are linear combinations of the basis rows. This means that if the values x_1, x_2, …, x_n satisfy the first r equations (i.e. the equations corresponding to the rows of the basis minor), then they also satisfy the remaining equations. Consequently, the set of solutions of the system will not change if we discard all equations starting from the (r+1)-th one. We obtain the system:

Let us move the free unknowns x_{r+1}, x_{r+2}, …, x_n to the right-hand side and leave the basic ones x_1, x_2, …, x_r on the left:

a_11 x_1 + … + a_1r x_r = –a_{1,r+1} x_{r+1} – … – a_1n x_n,
. . . . . . . . . . . . . . . . . . . . . . . . .
a_r1 x_1 + … + a_rr x_r = –a_{r,r+1} x_{r+1} – … – a_rn x_n.   (16)

Since in this case all b_i = 0, instead of the formulas

c_j = (M_j(b_i) – c_{r+1} M_j(a_{i,r+1}) – … – c_n M_j(a_{in})), j = 1, 2, …, r,   (13)

we obtain

c_j = –(c_{r+1} M_j(a_{i,r+1}) – … – c_n M_j(a_{in})), j = 1, 2, …, r.   (13)

If we give the free unknowns x_{r+1}, x_{r+2}, …, x_n arbitrary values, then with respect to the basic unknowns we obtain a square SLAE with a nonsingular matrix, which has a unique solution. Thus, any solution of the homogeneous SLAE is uniquely determined by the values of the free unknowns x_{r+1}, x_{r+2}, …, x_n. Consider the following k = n – r series of values of the free unknowns:

x_{r+1}^(1) = 1, x_{r+2}^(1) = 0, …, x_n^(1) = 0,

x_{r+1}^(2) = 0, x_{r+2}^(2) = 1, …, x_n^(2) = 0,   (17)

………………………………………………

x_{r+1}^(k) = 0, x_{r+2}^(k) = 0, …, x_n^(k) = 1.

(The series number is indicated by a superscript in parentheses, and the series of values are written in the form of columns. In each series x_{r+j}^(i) = 1 if i = j and x_{r+j}^(i) = 0 if i ≠ j.)

To the i-th series of values of the free unknowns there correspond unique values x_1^(i), x_2^(i), …, x_r^(i) of the basic unknowns. Together, the values of the free and the basic unknowns give solutions of system (16).

Let us show that the columns e_i composed of these values, i = 1, 2, …, k,   (18)

form a fundamental system of solutions.

Since these columns are, by construction, solutions of the homogeneous system Ax = 0 and their number equals k, it remains to prove the linear independence of the solutions (18). Suppose there is a linear combination of the solutions e_1, e_2, …, e_k equal to the zero column:

α_1 e_1 + α_2 e_2 + … + α_k e_k = 0.

Then the left-hand side of this equality is a column whose components with numbers r+1, r+2, …, n must be equal to zero. But its (r+1)-th component equals α_1·1 + α_2·0 + … + α_k·0 = α_1. Similarly, the (r+2)-th component equals α_2, …, and the n-th component equals α_k. Therefore α_1 = α_2 = … = α_k = 0, which means that the solutions e_1, e_2, …, e_k are linearly independent.

The constructed fundamental system of solutions (18) is called normal. By virtue of formula (13), it has the following form:

(20)

Corollary 2. Let e_1, e_2, …, e_k be a normal fundamental system of solutions of a homogeneous system; then the set of all solutions can be described by the formula

x = c_1 e_1 + c_2 e_2 + … + c_k e_k,   (21)

where c_1, c_2, …, c_k take arbitrary values.

Proof. By Theorem 2, the column (21) is a solution of the homogeneous system Ax = 0. It remains to prove that any solution y of this system can be represented in the form (21). Consider the column X = y_{r+1} e_1 + … + y_n e_k. This column coincides with the column y in the elements with numbers r+1, …, n and is a solution of (16). Therefore the columns X and y coincide, because the solutions of system (16) are uniquely determined by the set of values of their free unknowns x_{r+1}, …, x_n, and for the columns y and X these sets are the same. Hence y = X = y_{r+1} e_1 + … + y_n e_k, i.e. the solution y is a linear combination of the columns e_1, …, e_k of the normal fundamental system of solutions. Q.E.D.

The statement just proved is true not only for a normal fundamental system of solutions, but also for an arbitrary fundamental system of solutions of a homogeneous SLAE.

X = c_1 X_1 + c_2 X_2 + … + c_{n-r} X_{n-r} is the general solution of a system of linear homogeneous equations,

where X_1, X_2, …, X_{n-r} is any fundamental system of solutions,

and c_1, c_2, …, c_{n-r} are arbitrary numbers.


Let us establish a connection between the solutions of the inhomogeneous SLAE (1) and the corresponding homogeneous SLAE (15)

Theorem 4. The sum of any solution of the inhomogeneous system (1) and any solution of the corresponding homogeneous system (15) is a solution of system (1).

Proof. If c_1, …, c_n is a solution of system (1) and d_1, …, d_n is a solution of system (15), then, substituting the numbers c_1 + d_1, …, c_n + d_n for the unknowns into any (say, the i-th) equation of system (1), we obtain:

b_i + 0 = b_i. Q.E.D.

Theorem 5. The difference between two arbitrary solutions of the inhomogeneous system (1) is a solution to the homogeneous system (15).

Proof. If c 1 ,…,c n and c 1 ,…,c n are solutions of system (1), then substituting the unknown numbers c into any (for example, i-th) equation of system (1) 1 -с 1 ,…,c n -с n , we get:

B i -b i =0 p.t.d.

From the proved theorems it follows that the general solution of a system of m linear equations in n variables is equal to the sum of the general solution of the corresponding system of homogeneous linear equations (15) and an arbitrary particular solution of the original system (1).

X_inhom = X_gen.hom + X_part.inhom   (22)

As a particular solution of the inhomogeneous system it is natural to take the solution obtained if, in the formulas c_j = (M_j(b_i) – c_{r+1} M_j(a_{i,r+1}) – … – c_n M_j(a_{in})), j = 1, 2, …, r (13), we set all the numbers c_{r+1}, …, c_n equal to zero, i.e.

X_0 = (c_1, …, c_r, 0, 0, …, 0).   (23)

Adding this particular solution to the general solution X = c_1 X_1 + c_2 X_2 + … + c_{n-r} X_{n-r} of the corresponding homogeneous system, we obtain:

X_inhom = X_0 + c_1 X_1 + c_2 X_2 + … + c_{n-r} X_{n-r}.   (24)

Consider a system of two equations in two variables:

a_11 x_1 + a_12 x_2 = b_1,
a_21 x_1 + a_22 x_2 = b_2,

in which at least one of the coefficients a_ij ≠ 0.

To solve it, we eliminate x_2 by multiplying the first equation by a_22 and the second by (–a_12) and adding them: (a_11 a_22 – a_12 a_21) x_1 = b_1 a_22 – a_12 b_2. We eliminate x_1 by multiplying the first equation by (–a_21) and the second by a_11 and adding them: (a_11 a_22 – a_12 a_21) x_2 = a_11 b_2 – a_21 b_1. The expression in parentheses is the determinant of the system:

Denoting Δ = a_11 a_22 – a_12 a_21, Δ_1 = b_1 a_22 – a_12 b_2 and Δ_2 = a_11 b_2 – a_21 b_1, the system takes the form Δ·x_1 = Δ_1, Δ·x_2 = Δ_2. Hence, if Δ ≠ 0, the system has the unique solution x_1 = Δ_1/Δ, x_2 = Δ_2/Δ.

If Δ = 0 and Δ_1 ≠ 0 (or Δ_2 ≠ 0), then the system is inconsistent, because it reduces to the form 0·x_1 = Δ_1, 0·x_2 = Δ_2. If Δ = Δ_1 = Δ_2 = 0, then the system is indeterminate, because it reduces to the form 0·x_1 = 0, 0·x_2 = 0.
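This three-way case analysis is easy to express directly from the determinants Δ, Δ_1, Δ_2. The following sketch is not part of the original notes; the three sample systems are invented to exercise each branch.

```python
def classify_2x2(a11, a12, a21, a22, b1, b2):
    """Classify and, if possible, solve a11*x1 + a12*x2 = b1, a21*x1 + a22*x2 = b2."""
    d  = a11 * a22 - a12 * a21      # Delta
    d1 = b1 * a22 - a12 * b2        # Delta_1 (first column replaced by the free terms)
    d2 = a11 * b2 - a21 * b1        # Delta_2 (second column replaced by the free terms)
    if d != 0:
        return "unique solution", (d1 / d, d2 / d)
    if d1 != 0 or d2 != 0:
        return "inconsistent", None
    return "indeterminate (infinitely many solutions)", None

print(classify_2x2(1, 2, 3, -1, 5, 1))   # unique solution (1.0, 2.0)
print(classify_2x2(1, 2, 2, 4, 1, 3))    # inconsistent
print(classify_2x2(1, 2, 2, 4, 1, 2))    # indeterminate
```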
