Inhomogeneous systems of equations. Introduction

The term "system" is used in various sciences. Accordingly, different definitions of the system are used in different situations: from philosophical to formal. For the purposes of the course, the following definition is best suited: a system is a set of elements united by links and functioning together to achieve a goal.

Systems are characterized by a number of properties, the main of which are divided into three groups: static, dynamic and synthetic.

1.1 Static properties of systems

Static properties are the features of a particular state of the system, that is, what the system possesses at any fixed moment in time.

Integrity. Every system acts as something unified, whole, isolated, different from everything else. This property is called system integrity. It allows you to divide the whole world into two parts: the system and the environment.

Openness. The system, although distinguished from everything else, is not isolated from its environment. On the contrary, the two are connected and exchange resources of various kinds (matter, energy, information, etc.). This feature is referred to as "openness".

The connections of the system with the environment are directional: through some of them the environment acts on the system (the system's inputs), and through others the system acts on the environment, does something in it, gives something to it (the system's outputs). A description of the inputs and outputs of the system is called the black box model. Such a model contains no information about the internal features of the system. Despite its apparent simplicity, this model is often sufficient for working with the system.

In many cases, when controlling equipment or people, information only about the inputs and outputs of the system allows you to successfully achieve the goal. However, this model must meet certain requirements. For example, the user may experience difficulties if he does not know that in some TV models the power button does not need to be pressed, but pulled out. Therefore, for successful management, the model must contain all the information necessary to achieve the goal. When attempting to satisfy this requirement, four types of errors can arise, which stem from the fact that the model always contains a finite number of connections, while the number of connections in a real system is unlimited.

An error of the first kind occurs when the subject mistakenly considers a connection significant and decides to include it in the model. This leads to the appearance of superfluous elements in the model. An error of the second kind, on the contrary, is made when a supposedly insignificant connection is excluded from the model, even though without it the achievement of the goal is in fact difficult or even impossible.

The answer to the question of which error is worse depends on the context in which it is asked. It is clear that using a model containing an error inevitably leads to losses. The losses can be small, acceptable, barely tolerable or unacceptable. The damage caused by an error of the first kind is due to the fact that the information introduced by it is redundant: when working with such a model, resources have to be spent on recording and processing unnecessary information, for example computer memory and processing time. This may not affect the quality of the solution, but it certainly affects its cost and timeliness. The losses from an error of the second kind are the damage caused by there not being enough information to achieve the goal in full; the goal cannot then be achieved completely.

Now it is clear that the worse error is the one whose losses are greater, and this depends on the specific circumstances. For example, if time is a critical factor, then an error of the first kind becomes much more dangerous than an error of the second kind: a decision made on time, even if not the best one, is preferable to an optimal but late one.

An error of the third kind is a consequence of ignorance. To assess the significance of some connection, one must know that it exists at all. If this is not known, the question of including the connection in the model does not even arise. If such a connection is insignificant, then in practice its presence in reality and its absence in the model will go unnoticed. If the connection is significant, the difficulties will be similar to those of an error of the second kind. The difference is that an error of the third kind is harder to correct: it requires the extraction of new knowledge.

An error of the fourth kind occurs when a known significant connection is erroneously assigned to the inputs or to the outputs of the system. For example, it is well established that in 19th-century England the health of men wearing top hats far exceeded that of men wearing caps. It hardly follows from this that the type of headgear can be considered an input of a system for predicting the state of health.

Internal heterogeneity of systems, distinguishability of parts. If we look inside the "black box", it turns out that the system is not homogeneous, not monolithic: different qualities differ from one part of the system to another. Describing the internal heterogeneity of the system comes down to identifying relatively homogeneous areas and drawing boundaries between them. This is how the concept of the parts of a system appears. On closer examination it turns out that the selected large parts are also inhomogeneous, which requires distinguishing even smaller parts. The result is a hierarchical description of the parts of the system, which is called the composition model.

Information about the composition of a system can be used for working with it. The goals of interacting with a system can differ, and therefore the composition models of the same system can also differ. At first glance it is not difficult to distinguish the parts of a system: they "catch the eye". In some systems parts arise spontaneously, in the course of natural growth and development (organisms, societies, etc.). Artificial systems are deliberately assembled from previously known parts (mechanisms, buildings, etc.). There are also mixed types of systems, such as nature reserves and agricultural systems. On the other hand, from the points of view of the rector, a student, an accountant and a facilities manager, a university consists of different parts. An airplane consists of different parts from the points of view of the pilot, the flight attendant and a passenger. The difficulties of building a composition model can be summarized in three points.

First, the whole can be divided into parts in different ways. In this case, the method of division is determined by the goal. For example, the composition of a car is presented in different ways to novice motorists, future professional drivers, mechanics preparing to work in a car service center, and salespeople in car dealerships. It is natural to ask whether parts of the system "really" exist? The answer is contained in the formulation of the property in question: we are talking about the distinguishability, and not about the separability of parts. One can distinguish between the parts of the system necessary to achieve the goal, but one cannot separate them.

Secondly, the number of parts in the composition model also depends on the level at which the fragmentation of the system is stopped. The pieces on the terminal branches of the resulting hierarchical tree are called elements. In various circumstances, decomposition is terminated at different levels. For example, when describing upcoming work, you have to give instructions to an experienced worker and a novice in varying degrees of detail. Thus, the composition model depends on what is considered elementary. There are cases when an element has a natural, absolute character (cell, individual, phoneme, electron).

Thirdly, any system is part of a larger system, and sometimes several systems at once. Such a metasystem can also be divided into subsystems in different ways. This means that the outer boundary of the system has a relative, conditional character. The definition of the boundaries of the system is made taking into account the goals of the subject who will use the system model.

Structuredness. The property of structuredness lies in the fact that the parts of the system are not isolated and not independent of each other; they are interconnected and interact with each other. At the same time, the properties of the system essentially depend on how exactly its parts interact, which is why information about the connections between the elements of the system is so important. The list of essential links between the elements of the system is called the structure model of the system. The fact that every system is endowed with a certain structure is called structuredness.

The concept of structuredness further deepens the idea of the integrity of the system: connections, as it were, hold the parts together and keep them as a whole. Integrity, noted earlier as an external property, receives a reinforcing explanation from within the system, through its structure.

When building a model of the structure, certain difficulties are also encountered. The first of these is related to the fact that the structure model is determined after the composition model is chosen, and depends on what exactly the composition of the system is. But even with a fixed composition, the structure model is variable. This is due to the possibility of different ways to determine the significance of relationships. For example, a modern manager is recommended, along with the formal structure of his organization, to take into account the existence of informal relations between employees, which also affect the functioning of the organization. The second difficulty stems from the fact that each element of the system, in turn, is a "little black box". So all four types of errors are possible when determining the inputs and outputs of each element included in the structure model.

1.2 Dynamic properties of systems

If we consider the state of the system at a new point in time, then again we can find all four static properties. But if you superimpose the “photographs” of the system at different points in time on top of each other, then it will be found that they differ in details: during the time between two points of observation, some changes occurred in the system and its environment. Such changes may be important when working with the system, and, therefore, should be reflected in the descriptions of the system and taken into account when working with it. Features of changes over time inside the system and outside it are called the dynamic properties of the system. Generally, four dynamic properties of a system are distinguished.

Functionality. Processes Y(t) occurring at the outputs of the system are considered as its functions. The functions of the system are its behavior in the external environment, the results of its activities, the products produced by the system.

From the multiplicity of outputs follows the multiplicity of functions, each of which can be used by someone and for something. Therefore, the same system can serve different purposes. The subject using the system for his own purposes will naturally evaluate its functions and arrange them in relation to his needs. This is how the concepts of main, secondary, neutral, undesirable, superfluous function, etc. appear.

Stimulability. Certain processes X(t) also occur at the inputs of the system; they affect the system and, after a series of transformations inside it, turn into Y(t). The impacts X(t) are called stimuli, and the susceptibility of a system to external influences and the change of its behavior under these influences is called stimulability.

Variability of the system over time. In any system there are changes that must be taken into account. In terms of the system model, the values of the internal variables (parameters) Z(t), the composition and the structure of the system, or any combination of these, can change. The nature of these changes can also differ, so further classifications of changes can be considered.

The most obvious classification is by the rate of change (slow, fast). The rate of change is measured relative to some rate taken as a reference, and a large number of speed gradations can be introduced. Changes can also be classified by how they affect the structure and composition of the system.

We can speak of changes that do not affect the structure of the system: some elements are replaced by other, equivalent ones, or the parameters Z(t) change without a change of structure. This type of system dynamics is called its functioning. Changes can be quantitative in nature: the composition of the system grows, and although its structure automatically changes as well, this does not affect the properties of the system up to a certain point (for example, the expansion of a garbage dump). Such changes are called the growth of the system. With qualitative changes, the essential properties of the system change. If such changes go in a positive direction, they are called development. With the same resources, a developed system achieves better results, and new positive qualities (functions) may appear. This is connected with an increase in the level of coherence and organization of the system.

Growth occurs mainly through the consumption of material resources; development occurs through the assimilation and use of information. Growth and development may occur simultaneously, but they are not necessarily linked. Growth is always limited (because material resources are limited), whereas development is not limited from outside, since information about the external environment is inexhaustible. Development is the result of learning, but learning cannot be done for the learner, so there is an internal restriction on development: if the system "does not want" to learn, it cannot and will not develop.

In addition to the processes of growth and development, the reverse processes can also occur in a system. Changes opposite to growth are called decline, contraction, decrease. Changes opposite to development are called degradation, the loss or weakening of useful properties.

The considered changes are monotonous, that is, they are directed "in one direction". Obviously, monotonous changes cannot last forever. In the history of any system, periods of decline and rise, stability and instability can be distinguished, the sequence of which forms an individual life cycle of the system.

You can use other classifications of processes occurring in the system: according to predictability, processes are divided into random and deterministic; according to the type of time dependence, processes are divided into monotonous, periodic, harmonic, impulse, etc.

Existence in a changing environment. Not only the given system changes; all other systems change as well. For the system under consideration this looks like a continuous change of the environment. This circumstance has many consequences for the system itself, which must adapt to new conditions in order not to perish. When considering a specific system, attention is usually paid to the features of a particular reaction of the system, for example its reaction speed. For systems that store information (books, magnetic media), the speed of reaction to changes in the external environment should be minimal, to ensure the preservation of the information. The reaction speed of a control system, on the other hand, must be many times greater than the rate of change of the environment, since the control action has to be chosen before the state of the environment changes irreversibly.

1.3 Synthetic properties of systems

Synthetic properties include generalizing, integral, collective properties that describe the interaction of the system with the environment and take into account integrity in the most general sense.

Emergence. Combining elements into a system gives rise to qualitatively new properties that cannot be derived from the properties of the parts, that are inherent in the system as a whole and that exist only as long as the system remains one whole. Such qualities of a system are called emergent (from the English "to emerge").

Examples of emergent properties can be found in various fields. For example, none of the parts of an airplane can fly, but the airplane still flies. The properties of water, many of which are not fully understood, do not follow from the properties of hydrogen and oxygen.

Let there be two black boxes, each of which has one input and one output and performs one operation: it adds one to the number at its input. When such elements are connected according to the scheme shown in the figure, we get a system without inputs but with two outputs. At each cycle of work the system gives out ever larger numbers, with only even numbers appearing at one output and only odd numbers at the other.




Fig. 1.1. Connection of system elements: a) a system with two outputs; b) parallel connection of elements.
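
The following small sketch (added for illustration, not part of the original text) simulates the ring connection of Fig. 1.1a in Python; the initial value 0 is an assumption:

```python
def plus_one(x):
    """Each 'black box' performs a single operation: add one to its input."""
    return x + 1

# ring connection as in Fig. 1.1a: the output of each box feeds the other box
a_out, b_out = 0, 0
for cycle in range(5):
    a_out = plus_one(b_out)   # output of box A: 1, 3, 5, ... (odd numbers)
    b_out = plus_one(a_out)   # output of box B: 2, 4, 6, ... (even numbers)
    print(cycle, a_out, b_out)
```

Neither box alone can produce a growing sequence of odd or even numbers; the behavior appears only in the connected whole, which is exactly the emergent property described above.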

The emergent properties of a system are determined by its structure. This means that different combinations of the same elements produce different emergent properties. For example, if the elements are connected in parallel, the new system will not differ functionally from a single element; emergence will instead manifest itself as increased reliability of the system due to the parallel connection of two identical elements, that is, due to redundancy.

An important case worth noting is when the elements of a system possess all of its properties. This situation is typical of a fractal construction of the system, in which the principles of structuring the parts are the same as those of the system as a whole. An example of a fractal system is an organization in which management is built identically at all levels of the hierarchy.

Non-separability into parts. This property is, in fact, a consequence of emergence. It is emphasized separately because its practical importance is great and its underestimation is very common.

When a part is removed from a system, two important things happen. First, the composition of the system changes, and hence its structure; it becomes a different system with different properties. Second, the element removed from the system will behave differently, because its environment has changed. All this suggests that caution is needed when considering an element separately from the rest of the system.

Inherence. A system is the more inherent (from the English inherent, "being an integral part of something"), the better it is coordinated with and adapted to its environment and the more compatible it is with it. The degree of inherence varies and can change. The expediency of considering inherence as one of the properties of a system is connected with the fact that the degree and quality of the implementation of the system's chosen function depend on it. In natural systems inherence is increased by natural selection; in artificial systems it should be a special concern of the designer.

In a number of cases inherence is provided with the help of intermediate, mediating systems. Examples include adapters for using foreign electrical appliances with Soviet-style sockets, and middleware (for example, the COM service in Windows) that allows two programs from different manufacturers to interact with each other.

Expediency. In systems created by man, the subordination of both structure and composition to the achievement of the set goal is so obvious that it can be recognized as a fundamental property of any artificial system. This property is called expediency. The goal for which the system is created determines which emergent property will ensure the achievement of the goal, and this, in turn, dictates the choice of the structure and composition of the system. In order to extend the concept of expediency to natural systems, it is necessary to clarify the concept of purpose. The refinement is carried out on the example of an artificial system.

The history of any artificial system begins at some moment of time t = 0, when the existing value of the state vector Y_0 turns out to be unsatisfactory, that is, a problem situation arises. The subject is dissatisfied with this state and would like to change it. Suppose he would be satisfied with the value Y* of the state vector. This is the first definition of a goal. Further, it turns out that Y* does not exist now and cannot, for a number of reasons, be achieved in the near future. The second step in defining the goal is therefore to recognize it as a desired future state. It immediately becomes clear that "the future" is unbounded. The third step in refining the notion of a goal is to estimate the time T* by which the desired state Y* can be reached under the given conditions. The goal now becomes two-dimensional: it is the point (T*, Y*) on the graph. The task is to move from the point (0, Y_0) to the point (T*, Y*). It turns out, however, that this path can be traversed along different trajectories, only one of which can be realized. Suppose the choice fell on the trajectory Y*(t). Thus, the goal is now understood not only as the final state (T*, Y*) but also as the entire trajectory Y*(t) ("intermediate goals", "the plan"). So the goal is the desired future states Y*(t).

After the time T* has passed, the state Y* becomes real. This makes it possible to define the goal as a future real state and to say that natural systems also possess the property of expediency, which allows systems of any nature to be described from a unified standpoint. The main difference between natural and artificial systems is that natural systems, obeying the laws of nature, realize objective goals, while artificial systems are created for the realization of subjective goals.


Solving systems of linear algebraic equations (SLAEs) is undoubtedly the most important topic of a linear algebra course. A huge number of problems from all branches of mathematics reduce to solving systems of linear equations. These factors explain the reason for this article. The material of the article is selected and structured so that with its help you can

  • choose the optimal method for solving your system of linear algebraic equations,
  • study the theory of the chosen method,
  • solve your system of linear equations, having considered in detail the solutions of typical examples and problems.

Brief description of the material of the article.

First, we give all the necessary definitions, concepts, and introduce some notation.

Next, we consider methods for solving systems of linear algebraic equations in which the number of equations is equal to the number of unknown variables and which have a unique solution. First, we focus on Cramer's method; second, we show the matrix method for solving such systems of equations; and third, we analyze the Gauss method (the method of successive elimination of unknown variables). To consolidate the theory, we solve several SLAEs in various ways.

After that, we turn to solving systems of linear algebraic equations of general form, in which the number of equations does not coincide with the number of unknown variables or the main matrix of the system is singular. We formulate the Kronecker-Capelli theorem, which allows us to establish the consistency of a SLAE. We analyze the solution of systems (in the case of their consistency) using the concept of the basis minor of a matrix. We also consider the Gauss method and describe the solutions of the examples in detail.

We will be sure to dwell on the structure of the general solution of homogeneous and inhomogeneous systems of linear algebraic equations. We give the concept of a fundamental system of solutions and show how the general solution of a SLAE is written using the vectors of the fundamental system of solutions. For a better understanding, we look at a few examples.

In conclusion, we consider systems of equations that are reduced to linear ones, as well as various problems, in the solution of which SLAEs arise.


Definitions, concepts, designations.

We will consider systems of p linear algebraic equations with n unknown variables (p may be equal to n) of the form

a_11·x_1 + a_12·x_2 + ... + a_1n·x_n = b_1,
a_21·x_1 + a_22·x_2 + ... + a_2n·x_n = b_2,
...
a_p1·x_1 + a_p2·x_2 + ... + a_pn·x_n = b_p,

where x_1, x_2, ..., x_n are the unknown variables, a_ij are the coefficients (some real or complex numbers), and b_1, b_2, ..., b_p are the free terms (also real or complex numbers).

This form of writing a SLAE is called the coordinate form.

In matrix form this system of equations has the form A·X = B,
where A is the main matrix of the system, X is the column matrix of unknown variables, and B is the column matrix of free terms.

If we add the column of free terms to the matrix A as the (n + 1)-th column, we get the so-called augmented matrix of the system of linear equations. The augmented matrix is usually denoted by the letter T, and the column of free terms is separated from the remaining columns by a vertical line, that is, T = (A | B).

A solution of a system of linear algebraic equations is a set of values of the unknown variables that turns every equation of the system into an identity. The matrix equation A·X = B also turns into an identity for these values of the unknown variables.

If a system of equations has at least one solution, it is called consistent (compatible).

If a system of equations has no solutions, it is called inconsistent (incompatible).

If a SLAE has a unique solution, it is called definite; if it has more than one solution, it is called indefinite.

If the free terms of all equations of the system are equal to zero, the system is called homogeneous; otherwise it is called inhomogeneous.

Solution of elementary systems of linear algebraic equations.

If the number of system equations is equal to the number of unknown variables and the determinant of its main matrix is ​​not equal to zero, then we will call such SLAEs elementary. Such systems of equations have a unique solution, and in the case of a homogeneous system, all unknown variables are equal to zero.

We began to study such SLAEs in high school. When solving them, we took one equation, expressed one unknown variable in terms of the others and substituted it into the remaining equations, then took the next equation, expressed the next unknown variable and substituted it into the other equations, and so on. Or we used the addition method, that is, we added two or more equations to eliminate some unknown variables. We will not dwell on these methods in detail, since they are essentially modifications of the Gauss method.

The main methods for solving elementary systems of linear equations are the Cramer method, the matrix method and the Gauss method. Let's sort them out.

Solving systems of linear equations by Cramer's method.

Suppose we need to solve a system of linear algebraic equations

in which the number of equations is equal to the number of unknown variables and the determinant of the main matrix of the system is different from zero, that is, det A ≠ 0.

Let Δ be the determinant of the main matrix of the system, and let Δ_1, Δ_2, ..., Δ_n be the determinants of the matrices obtained from A by replacing the 1st, 2nd, ..., nth column, respectively, with the column of free terms.

With this notation, the unknown variables are calculated by the formulas of Cramer's method as x_i = Δ_i / Δ, i = 1, 2, ..., n. This is how the solution of a system of linear algebraic equations is found by Cramer's method.
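
As an illustration (not part of the original article), here is a minimal NumPy sketch of Cramer's rule; the 3x3 system at the bottom is an arbitrary example:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by the free terms b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule is not applicable")
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b                      # replace the i-th column by b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = [[2, 1, -1],
     [1, 3,  2],
     [3, 1,  1]]
b = [1, 5, 4]
print(cramer_solve(A, b))                  # agrees with np.linalg.solve(A, b)
```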

Example.

Solve the system of linear equations by Cramer's method.

Solution.

The main matrix of the system has the form . Calculate its determinant (if necessary, see the article):

Since the determinant of the main matrix of the system is nonzero, the system has a unique solution that can be found by Cramer's method.

We compose and calculate the necessary determinants (Δ_1 is obtained by replacing the first column of the matrix A with the column of free terms, Δ_2 by replacing the second column, and Δ_3 by replacing the third column of the matrix A with the column of free terms):

We find the unknown variables using the formulas x_i = Δ_i / Δ:

Answer:

The main disadvantage of Cramer's method (if it can be called a disadvantage) is the complexity of calculating the determinants when the number of system equations is more than three.

Solving systems of linear algebraic equations by the matrix method (using the inverse matrix).

Let a system of linear algebraic equations be given in matrix form A·X = B, where the matrix A has dimension n by n and its determinant is nonzero.

Since det A ≠ 0, the matrix A is invertible, that is, there exists an inverse matrix A^(-1). If we multiply both sides of the equality A·X = B by A^(-1) on the left, we obtain the formula X = A^(-1)·B for finding the column matrix of unknown variables. This is how the solution of a system of linear algebraic equations is obtained by the matrix method.
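
A short sketch of the matrix method with NumPy (added here for illustration; the system is an arbitrary example, not the one from the article):

```python
import numpy as np

A = np.array([[3.0, 2.0, 1.0],
              [2.0, 3.0, 1.0],
              [2.0, 1.0, 3.0]])
B = np.array([5.0, 1.0, 11.0])

if not np.isclose(np.linalg.det(A), 0.0):
    X = np.linalg.inv(A) @ B        # X = A^(-1) * B, the matrix method
    print(X)

# in practice np.linalg.solve(A, B) is preferred: it avoids forming the
# inverse explicitly and is numerically more stable
print(np.linalg.solve(A, B))
```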

Example.

Solve the system of linear equations by the matrix method.

Solution.

Let's rewrite the system of equations in matrix form:

Since the determinant of the main matrix of the system is nonzero, the SLAE can be solved by the matrix method. Using the inverse matrix, the solution of this system can be found as X = A^(-1)·B.

Let us build the inverse matrix A^(-1) using the matrix of algebraic complements (cofactors) of the elements of the matrix A (if necessary, see the article):

It remains to calculate X, the matrix of unknown variables, by multiplying the inverse matrix A^(-1) by the column matrix of free terms B (if necessary, see the article):

Answer:

or, in another notation, x_1 = 4, x_2 = 0, x_3 = -1.

The main problem in finding solutions to systems of linear algebraic equations by the matrix method is the complexity of finding the inverse matrix, especially for square matrices of order higher than the third.

Solving systems of linear equations by the Gauss method.

Suppose we need to find a solution to a system of n linear equations with n unknown variables, the determinant of whose main matrix is different from zero.

The essence of the Gauss method consists in the successive elimination of unknown variables: first, x_1 is eliminated from all equations of the system starting from the second, then x_2 is eliminated from all equations starting from the third, and so on, until only the unknown variable x_n remains in the last equation. This process of transforming the equations of the system in order to eliminate the unknown variables successively is called the forward course of the Gauss method. After the forward course is completed, x_n is found from the last equation, then x_(n-1) is calculated from the penultimate equation using this value, and so on, until x_1 is found from the first equation. The process of calculating the unknown variables while moving from the last equation of the system to the first is called the reverse course (back substitution) of the Gauss method.

Let us briefly describe the algorithm for eliminating unknown variables.

We will assume that a_11 ≠ 0, since we can always achieve this by rearranging the equations of the system. We eliminate the unknown variable x_1 from all equations of the system, starting from the second. To do this, we add to the second equation of the system the first equation multiplied by -a_21/a_11, to the third equation the first multiplied by -a_31/a_11, and so on, to the nth equation the first multiplied by -a_n1/a_11. After such transformations the system of equations takes the form

where a_ij^(1) = a_ij - (a_i1/a_11)·a_1j and b_i^(1) = b_i - (a_i1/a_11)·b_1, for i, j = 2, 3, ..., n.

We would come to the same result if we expressed x 1 in terms of other unknown variables in the first equation of the system and substituted the resulting expression into all other equations. Thus, the variable x 1 is excluded from all equations, starting from the second.

Next, we act similarly, but only with a part of the resulting system, which is marked in the figure

To do this, we add to the third equation of the system the second multiplied by -a_32^(1)/a_22^(1), to the fourth equation the second multiplied by -a_42^(1)/a_22^(1), and so on, to the nth equation the second multiplied by -a_n2^(1)/a_22^(1). After such transformations the system of equations takes the form

where a_ij^(2) = a_ij^(1) - (a_i2^(1)/a_22^(1))·a_2j^(1) and b_i^(2) = b_i^(1) - (a_i2^(1)/a_22^(1))·b_2^(1), for i, j = 3, 4, ..., n. Thus, the variable x_2 is eliminated from all equations, starting from the third.

Next, we proceed to the elimination of the unknown x_3, acting similarly with the part of the system marked in the figure

So we continue the forward course of the Gauss method until the system takes a triangular form

From this moment we begin the reverse course of the Gauss method: from the last equation we calculate x_n = b_n^(n-1) / a_nn^(n-1); using the obtained value of x_n we find x_(n-1) from the penultimate equation, and so on, until x_1 is found from the first equation.
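
For illustration, here is a compact sketch of the Gauss method in Python (not from the original article); row swapping (partial pivoting) is added so that the pivot element is never zero, which slightly extends the scheme described above:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with back substitution."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # forward course: eliminate x_k from equations below the k-th one
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))      # pivot row (keeps a_kk != 0)
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # reverse course: back substitution from the last equation upwards
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]        # arbitrary illustrative system
b = [8, -11, -3]
print(gauss_solve(A, b))                         # expected: [ 2.  3. -1.]
```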

Example.

Solve the system of linear equations by the Gauss method.

Solution.

Let's exclude the unknown variable x 1 from the second and third equations of the system. To do this, to both parts of the second and third equations, we add the corresponding parts of the first equation, multiplied by and by, respectively:

Now we exclude x 2 from the third equation by adding to its left and right parts the left and right parts of the second equation, multiplied by:

On this, the forward course of the Gauss method is completed, we begin the reverse course.

From the last equation of the resulting system of equations, we find x 3:

From the second equation we get .

From the first equation we find the remaining unknown variable and this completes the reverse course of the Gauss method.

Answer:

x_1 = 4, x_2 = 0, x_3 = -1.

Solving systems of linear algebraic equations of general form.

In the general case, the number of equations of the system p does not coincide with the number of unknown variables n:

Such SLAEs may have no solutions, have a single solution, or have infinitely many solutions. This statement also applies to systems of equations whose main matrix is ​​square and degenerate.

Kronecker-Capelli theorem.

Before finding a solution to a system of linear equations, it is necessary to establish its consistency. The answer to the question of when a SLAE is consistent and when it is inconsistent is given by the Kronecker-Capelli theorem:
for a system of p equations with n unknowns (p may be equal to n) to be consistent, it is necessary and sufficient that the rank of the main matrix of the system be equal to the rank of the augmented matrix, that is, Rank(A) = Rank(T).
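
A small sketch (added for illustration) of how the Kronecker-Capelli check can be done numerically; matrix_rank relies on a tolerance, so for ill-conditioned matrices the verdict should be treated with care:

```python
import numpy as np

def classify_slae(A, b):
    """Check consistency of A x = b via the Kronecker-Capelli theorem."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    T = np.hstack([A, b])                       # augmented matrix (A | b)
    rA = np.linalg.matrix_rank(A)
    rT = np.linalg.matrix_rank(T)
    n = A.shape[1]
    if rA != rT:
        return "inconsistent (no solutions)"
    return "consistent, unique solution" if rA == n else \
           f"consistent, infinitely many solutions ({n - rA} free variables)"

# arbitrary illustrative system with no solutions
A = [[1, 1], [2, 2]]
b = [1, 3]
print(classify_slae(A, b))                      # inconsistent (no solutions)
```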

Let us consider the application of the Kronecker-Capelli theorem to determining the consistency of a system of linear equations using an example.

Example.

Find out if the system of linear equations has solutions.

Solution.

Let us find the rank of the main matrix of the system using the method of bordering minors. There is a second-order minor different from zero. Let us go over the third-order minors bordering it:

Since all the bordering third-order minors are equal to zero, the rank of the main matrix is two.

In turn, the rank of the augmented matrix is equal to three, since there is a third-order minor different from zero.

Thus, Rank(A) ≠ Rank(T), and therefore, by the Kronecker-Capelli theorem, we conclude that the original system of linear equations is inconsistent.

Answer:

The system has no solutions.

So, we have learned to establish the inconsistency of the system using the Kronecker-Capelli theorem.

But how to find the solution of the SLAE if its compatibility is established?

To do this, we need the concept of the basis minor of a matrix and the theorem on the rank of a matrix.

A nonzero minor of the highest order of the matrix A is called a basis minor.

It follows from the definition of the basis minor that its order is equal to the rank of the matrix. A nonzero matrix A can have several basis minors; there is always at least one.

For example, consider the matrix .

All third-order minors of this matrix are equal to zero, since the elements of the third row of this matrix are the sum of the corresponding elements of the first and second rows.

The following minors of the second order are basic, since they are nonzero

Minors are not basic, since they are equal to zero.

Matrix rank theorem.

If the rank of a matrix of size p by n is r, then all the rows (and columns) of the matrix that do not enter the chosen basis minor are expressed linearly in terms of the corresponding rows (and columns) that form the basis minor.

What does the matrix rank theorem give us?

If, by the Kronecker-Capelli theorem, we have established the compatibility of the system, then we choose any basic minor of the main matrix of the system (its order is equal to r), and exclude from the system all equations that do not form the chosen basic minor. The SLAE obtained in this way will be equivalent to the original one, since the discarded equations are still redundant (according to the matrix rank theorem, they are a linear combination of the remaining equations).

As a result, after discarding the excessive equations of the system, two cases are possible.

    If the number of equations r in the resulting system is equal to the number of unknown variables, then it will be definite and the only solution can be found by the Cramer method, the matrix method or the Gauss method.

    Example.

    .

    Solution.

    The rank of the main matrix of the system is equal to two, since there is a nonzero minor of the second order. The rank of the augmented matrix is also equal to two, since its only minor of the third order is equal to zero,

    while the second-order minor considered above is different from zero. Based on the Kronecker-Capelli theorem, one can assert the consistency of the original system of linear equations, since Rank(A) = Rank(T) = 2.

    As a basis minor, we take . It is formed by the coefficients of the first and second equations:

    The third equation of the system does not participate in the formation of the basic minor, so we exclude it from the system based on the matrix rank theorem:

    Thus we have obtained an elementary system of linear algebraic equations. Let's solve it by Cramer's method:

    Answer:

    x_1 = 1, x_2 = 2.

    If the number of equations r in the resulting SLAE is less than the number of unknown variables n , then we leave the terms that form the basic minor in the left parts of the equations, and transfer the remaining terms to the right parts of the equations of the system with the opposite sign.

    The unknown variables (there are r of them) remaining on the left-hand sides of the equations are called main.

    Unknown variables (there are n - r of them) that ended up on the right side are called free.

    Now we assume that the free unknown variables can take arbitrary values, while the r main unknown variables will be expressed in terms of the free unknown variables in a unique way. Their expression can be found by solving the resulting SLAE by the Cramer method, the matrix method, or the Gauss method.

    Let's take an example.

    Example.

    Solve the system of linear algebraic equations.

    Solution.

    Find the rank of the main matrix of the system by the method of bordering minors. We take a_11 = 1 as a nonzero first-order minor. Let us start searching for a nonzero second-order minor bordering it:

    So we found a non-zero minor of the second order. Let's start searching for a non-zero bordering minor of the third order:

    Thus, the rank of the main matrix is ​​three. The rank of the augmented matrix is ​​also equal to three, that is, the system is consistent.

    The found non-zero minor of the third order will be taken as the basic one.

    For clarity, we show the elements that form the basis minor:

    We leave the terms participating in the basic minor on the left side of the equations of the system, and transfer the rest with opposite signs to the right sides:

    We give the free unknown variables x_2 and x_5 arbitrary values. In this case, the SLAE takes the form

    We solve the obtained elementary system of linear algebraic equations by the Cramer method:

    Hence, .

    In the answer, do not forget to indicate free unknown variables.

    Answer:

    where the values given to the free unknown variables are arbitrary numbers.

Let us summarize.

To solve a system of linear algebraic equations of a general form, we first find out its compatibility using the Kronecker-Capelli theorem. If the rank of the main matrix is ​​not equal to the rank of the extended matrix, then we conclude that the system is inconsistent.

If the rank of the main matrix is ​​equal to the rank of the extended matrix, then we choose the basic minor and discard the equations of the system that do not participate in the formation of the chosen basic minor.

If the order of the basis minor is equal to the number of unknown variables, then the SLAE has a unique solution, which can be found by any method known to us.

If the order of the basis minor is less than the number of unknown variables, then we leave the terms with the main unknown variables on the left side of the equations of the system, transfer the remaining terms to the right sides and assign arbitrary values ​​to the free unknown variables. From the resulting system of linear equations, we find the main unknown variables by the Cramer method, the matrix method or the Gauss method.
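
As an illustration of this last case, here is a small SymPy sketch (not from the original article) that expresses the basic unknowns through a free one symbolically; the two-equation system is an arbitrary example:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# one equation fewer than unknowns: rank 2, so 3 - 2 = 1 free variable
eqs = [sp.Eq(x1 + x2 + x3, 6),
       sp.Eq(x1 - x2 + 2*x3, 5)]

# solve for the basic unknowns x1, x2; x3 stays as a free parameter
sol = sp.solve(eqs, [x1, x2], dict=True)[0]
print(sol)   # {x1: 11/2 - 3*x3/2, x2: x3/2 + 1/2}
```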

Gauss method for solving systems of linear algebraic equations of general form.

Using the Gauss method, one can solve systems of linear algebraic equations of any kind without a preliminary investigation of their consistency. The process of successive elimination of unknown variables makes it possible to conclude whether the SLAE is consistent or inconsistent, and if a solution exists, it makes it possible to find it.

From the point of view of computational work, the Gaussian method is preferable.

See its detailed description and analyzed examples in the article Gauss method for solving systems of linear algebraic equations of general form.

Recording the general solution of homogeneous and inhomogeneous linear algebraic systems using the vectors of the fundamental system of solutions.

In this section, we focus on consistent homogeneous and inhomogeneous systems of linear algebraic equations that have an infinite number of solutions.

Let's deal with homogeneous systems first.

A fundamental system of solutions of a homogeneous system of p linear algebraic equations with n unknown variables is a set of (n - r) linearly independent solutions of this system, where r is the order of the basis minor of the main matrix of the system.

If we denote the linearly independent solutions of a homogeneous SLAE by X^(1), X^(2), ..., X^(n-r) (X^(1), X^(2), ..., X^(n-r) are column matrices of dimension n by 1), then the general solution of this homogeneous system is represented as a linear combination of the vectors of the fundamental system of solutions with arbitrary constant coefficients C_1, C_2, ..., C_(n-r), that is, X = C_1·X^(1) + C_2·X^(2) + ... + C_(n-r)·X^(n-r).

What does the term "general solution of a homogeneous system of linear algebraic equations" mean?

The meaning is simple: this formula specifies all possible solutions of the original SLAE. In other words, taking any set of values of the arbitrary constants C_1, C_2, ..., C_(n-r), we obtain by this formula one of the solutions of the original homogeneous SLAE.

Thus, once we have found a fundamental system of solutions, we can describe all solutions of this homogeneous SLAE as X = C_1·X^(1) + C_2·X^(2) + ... + C_(n-r)·X^(n-r).

Let us show the process of constructing a fundamental system of solutions for a homogeneous SLAE.

We choose the basis minor of the original system of linear equations, exclude all other equations from the system, and transfer to the right-hand sides of the equations, with opposite signs, all the terms containing free unknown variables. We give the free unknown variables the values 1, 0, 0, ..., 0 and calculate the basic unknowns by solving the resulting elementary system of linear equations in any way, for example by Cramer's method. This yields X^(1), the first solution of the fundamental system. If we give the free unknowns the values 0, 1, 0, ..., 0 and calculate the basic unknowns, we obtain X^(2), and so on. If we give the free unknown variables the values 0, 0, ..., 0, 1 and calculate the basic unknowns, we obtain X^(n-r). This is how the fundamental system of solutions of the homogeneous SLAE is constructed, and its general solution can be written in the form X = C_1·X^(1) + C_2·X^(2) + ... + C_(n-r)·X^(n-r).
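
Below is a small sketch (not from the original article) that follows this recipe with NumPy; the greedy choice of basic columns and the 2x4 example system are illustrative assumptions, and in practice a library routine such as scipy.linalg.null_space would normally be used instead.

```python
import numpy as np

def fundamental_system(A):
    """Sketch: build a fundamental system of solutions of A x = 0 by giving
    the free unknowns the values (1,0,...), (0,1,...), ... as described above."""
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    r = np.linalg.matrix_rank(A)
    basic = []                            # indices of the "basic" columns
    for j in range(n):                    # take the first r independent columns
        if np.linalg.matrix_rank(A[:, basic + [j]]) > len(basic):
            basic.append(j)
        if len(basic) == r:
            break
    free = [j for j in range(n) if j not in basic]
    solutions = []
    for jf in free:
        x = np.zeros(n)
        x[jf] = 1.0                       # one free unknown set to 1, the rest to 0
        rhs = -A[:, jf]                   # move its column to the right-hand side
        x[basic] = np.linalg.lstsq(A[:, basic], rhs, rcond=None)[0]
        solutions.append(x)
    return solutions

A = [[1, 1, 1, 1],
     [1, 2, 3, 4]]                        # illustrative homogeneous system, r = 2
for X in fundamental_system(A):
    print(X, "check:", np.asarray(A) @ X)   # both products should be ~ 0
```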

For consistent inhomogeneous systems of linear algebraic equations, the general solution is represented as X = X* + C_1·X^(1) + C_2·X^(2) + ... + C_(n-r)·X^(n-r), where X* is some particular solution of the inhomogeneous system and X^(1), X^(2), ..., X^(n-r) is a fundamental system of solutions of the corresponding homogeneous system.

Let's look at examples.

Example.

Find the fundamental system of solutions and the general solution of a homogeneous system of linear algebraic equations .

Solution.

The rank of the main matrix of a homogeneous system of linear equations is always equal to the rank of its augmented matrix. Let us find the rank of the main matrix by the method of bordering minors. As a nonzero minor of the first order, we take the element a_11 = 9 of the main matrix of the system. Let us find a nonzero bordering minor of the second order:

A minor of the second order, different from zero, is found. Let's go through the third-order minors bordering it in search of a non-zero one:

All the bordering minors of the third order are equal to zero; therefore, the rank of the main (and of the augmented) matrix is two. We take the found nonzero second-order minor as the basis minor. For clarity, we mark the elements of the system that form it:

The third equation of the original SLAE does not participate in the formation of the basic minor, therefore, it can be excluded:

We leave the terms containing the basic unknowns on the left-hand sides of the equations, and transfer the terms with the free unknowns to the right-hand sides:

Let us construct a fundamental system of solutions of the original homogeneous system of linear equations. The fundamental system of solutions of this SLAE consists of two solutions, since the original SLAE contains four unknown variables and the order of its basis minor is two. To find X^(1), we give the free unknown variables the values x_2 = 1, x_4 = 0 and find the basic unknowns from the resulting system of equations.

§6. Inhomogeneous system of linear equations

    If in the system of linear equations (7.1) at least one of the free terms b_i is different from zero, then such a system is called inhomogeneous.

    Let an inhomogeneous system of linear equations be given, which can be written in the form

    a_i1·x_1 + a_i2·x_2 + ... + a_in·x_n = b_i,   i = 1, 2, ..., k.   (7.13)

    Consider the corresponding homogeneous system

    a_i1·x_1 + a_i2·x_2 + ... + a_in·x_n = 0,   i = 1, 2, ..., k.   (7.14)

    Let the vector X* be a solution of the inhomogeneous system (7.13) and the vector X~ be a solution of the homogeneous system (7.14). Then it is easy to see that the vector X* + X~ is also a solution of the inhomogeneous system (7.13). Indeed, substituting X* + X~ into the left-hand side of (7.13) gives, for every i, b_i + 0 = b_i.


    Now, using formula (7.12) for the general solution of the homogeneous system, we have

    X = X* + C_1·X^(1) + C_2·X^(2) + ... + C_(n-r)·X^(n-r),   (7.15)

    where C_1, ..., C_(n-r) are any numbers from R, and X^(1), ..., X^(n-r) are the fundamental solutions of the homogeneous system.

    Thus, a solution of an inhomogeneous system is the sum of a particular solution of it and the general solution of the corresponding homogeneous system.

    Solution (7.15) is called the general solution of the inhomogeneous system of linear equations. It follows from (7.15) that a consistent inhomogeneous system of linear equations has a unique solution if the rank r(A) of the main matrix A coincides with the number n of unknowns of the system (Cramer's system); if r(A) < n, then the system has an infinite set of solutions, and this set of solutions corresponds to the subspace of solutions of the corresponding homogeneous system of equations, which has dimension n - r.
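
    The following sketch (added for illustration, not part of the original text) shows formula (7.15) numerically: a particular solution plus any combination of null-space vectors again solves the system. The 2x4 system is an arbitrary example, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.linalg import null_space

# arbitrary illustrative system with r(A) = 2 < n = 4
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0, 4.0]])
b = np.array([10.0, 30.0])

X_part = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution X*
N = null_space(A)                               # basis of solutions of A x = 0

# any choice of constants C gives another solution of A x = b, as in (7.15)
C = np.array([2.0, -1.0])
X = X_part + N @ C
print(np.allclose(A @ X, b))                    # True
```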

    Examples.

    1. Let an inhomogeneous system of equations be given in which the number of equations k = 3 and the number of unknowns n = 4:

    x_1 - x_2 + x_3 - 2x_4 = 1,

    x_1 - x_2 + 2x_3 - x_4 = 2,

    5x_1 - 5x_2 + 8x_3 - 7x_4 = 3.

    Let us determine the ranks of the main matrix A and the augmented matrix A* of this system. Since A and A* are nonzero matrices and k = 3 < n, we have 1 ≤ r(A), r*(A*) ≤ 3. Consider the second-order minors of the matrices A and A*:

    Thus, among the second-order minors of the matrices A and A* there is a nonzero minor, so 2 ≤ r(A), r*(A*) ≤ 3. Now consider the third-order minors.

    The minors containing the first and second columns are equal to zero, since these columns are proportional; the remaining third-order minors of A also turn out to be zero.

    Thus, all the third-order minors of the main matrix A are equal to zero, and therefore r(A) = 2. For the augmented matrix A* there remain third-order minors that include the column of free terms:

    Among the third-order minors of the augmented matrix A* there is a nonzero one, so r*(A*) = 3. This means that r(A) ≠ r*(A*), and then, on the basis of the Kronecker-Capelli theorem, we conclude that this system is inconsistent.

    2. Solve the system of equations

    3x_1 + 2x_2 + x_3 + x_4 = 1,

    3x_1 + 2x_2 - x_3 - 2x_4 = 2.

    For this system the matrices A and A* are nonzero, and therefore 1 ≤ r(A), r*(A*) ≤ 2. Consider the second-order minors of the matrices A and A*:

    Thus, r(A) = r*(A*) = 2, and hence the system is consistent. As the basic variables we choose any two variables for which the second-order minor composed of their coefficients is not equal to zero. Such variables can be, for example,

    x_3 and x_4, because the corresponding second-order minor is different from zero. Then we have

    x_3 + x_4 = 1 - 3x_1 - 2x_2,

    x_3 + 2x_4 = 3x_1 + 2x_2 - 2.

    We find a particular solution of the inhomogeneous system. For this we set x_1 = x_2 = 0:

    x_3 + x_4 = 1,

    x_3 + 2x_4 = -2.

    The solution of this system is x_3 = 4, x_4 = -3; therefore, X* = (0, 0, 4, -3).

    We now find the general solution of the corresponding homogeneous system

    x_3 + x_4 = -3x_1 - 2x_2,

    x_3 + 2x_4 = 3x_1 + 2x_2.

    Let us put x_1 = 1, x_2 = 0:

    x_3 + x_4 = -3,

    x_3 + 2x_4 = 3.

    The solution of this system is x_3 = -9, x_4 = 6.

    Thus X^(1) = (1, 0, -9, 6).

    Now let us put x_1 = 0, x_2 = 1:

    x_3 + x_4 = -2,

    x_3 + 2x_4 = 2.

    Solution: x_3 = -6, x_4 = 4, and then X^(2) = (0, 1, -6, 4).

    After the particular solution X* of the inhomogeneous system and the fundamental solutions X^(1) and X^(2) of the corresponding homogeneous system have been determined, we write down the general solution of the inhomogeneous system:

    X = X* + C_1·X^(1) + C_2·X^(2) = (0, 0, 4, -3) + C_1·(1, 0, -9, 6) + C_2·(0, 1, -6, 4),

    where C_1 and C_2 are any numbers from R.
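
    As a quick numerical cross-check (a sketch added here, not part of the original text), one can verify that this general solution satisfies the system of Example 2 for any choice of the constants:

```python
import numpy as np

A = np.array([[3.0, 2.0,  1.0,  1.0],
              [3.0, 2.0, -1.0, -2.0]])
b = np.array([1.0, 2.0])

X_part = np.array([0.0, 0.0, 4.0, -3.0])     # particular solution X*
X1 = np.array([1.0, 0.0, -9.0, 6.0])         # fundamental solution X^(1)
X2 = np.array([0.0, 1.0, -6.0, 4.0])         # fundamental solution X^(2)

for C1, C2 in [(0, 0), (1, -2), (3.5, 0.25)]:
    X = X_part + C1 * X1 + C2 * X2
    assert np.allclose(A @ X, b)             # every choice of constants works
print("general solution verified")
```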

    Walras's theory of general equilibrium, which is the ideological basis of a centralized economy, has a number of undoubted advantages, namely the integrity and definiteness of its conclusions, which make it very attractive for economic analysis.

    However, within the framework of this theory, it is impossible to adequately describe a decentralized economy. We are talking about the mechanism of coordination, the temporal aspect of economic processes, the nature of flows and agents.

    The practice of “groping” equilibrium in Walrasian theory essentially implies that none of the market participants can influence prices, that each agent has perfect knowledge of supply and demand, that the process of “groping” occurs instantly, and, finally, that transactions are absolutely unacceptable until “true prices” are established by “groping”, i.e. centralized control over all flows. Thus, this model, which implies very significant limitations, is very similar to the ideal image of the Soviet economy.

    As the Polish economist Lange argued, “there is nothing more important than understanding the laws of a decentralized economy. First of all, because it is the only reality with which we are dealing.”

    The French economist Jean-Paul Fitoussi argues that there is something intermediate between the state and the market, and by this intermediate he understands the variety of forms of coordination of their relations and connections. These two-way connections are not limited to either the transmission of an order or the direct contact of the participants in the exchange within the framework of a specific contract. An order matters only to the extent that it is carried out. This creates some asymmetry between the positions of superior and subordinate in favor of the latter. It is in the power of the subordinate that the execution of the order lies. Of course, the boss can check the execution of orders and, as Stalin did in his time, punish the executor. But verification is also an order that reproduces the original asymmetry. Each check is followed by a validation check. Thus, already at the very foundation of a centralized economy are the origins of decentralization - operational and informational asymmetry - heterogeneity.

    According to Jacques Sapir, five such forms of heterogeneity can be distinguished.

    1. Heterogeneity of products associated with unequal possibilities of their substitution. This is determined not only by the nature of the product, but also by the specific way it is included in a particular technological or economic process.

    2. The heterogeneity of economic agents, which is not limited to differences between the employee, entrepreneur and capitalist. Dominance means a situation in which around some types of behavior or around some agents there is a spontaneous organization of other types of behavior or agents, i.e., the formation of a team. The transition from the individual to the collective level is carried out through cooperation within the collective of organizations that act as economic agents. They, in turn, imply heterogeneity in the methods of interaction and coordination.

    3. Heterogeneity of time. It can take two distinct and complementary forms. One is related to the fact that the acts of consumption, saving or production performed by different agents have different durations, forming a continuum; this is the problem of the non-uniformity of "action time". The other form of time heterogeneity is related to what we call the time frame within which the decision of each agent remains valid; in this case we can speak of "time intervals".

    4. Heterogeneity of enterprises as local production systems. Even if the products are identical, the behavior of a small enterprise differs significantly from that of an enterprise with a large number of employees. In addition, producing a simple product differs from producing a complex one, and so on.

    5. Heterogeneity of the spaces in which economic activities take place. The unequal provision of different regions with factors of production, both material and human, naturally affects the relative price of these factors.

    The typology of heterogeneities by J. Sapir would be incomplete without two more heterogeneities:

    6. Heterogeneity of the information space, due to the geographical, historical and cultural features of the economic space.

    7. The political heterogeneity of regions and countries, which determines the security of investments and access to information sources and significantly affects their investment attractiveness. The example of China's economic development illustrates this very clearly.


    The most general feature of any inhomogeneous system is the presence of two (or more) phases separated from each other by a pronounced interface. In this respect heterogeneous systems differ from solutions, which also consist of several components but form a homogeneous mixture. The continuous phase is called the dispersion medium, and the finely divided phase distributed within it is called the dispersed phase. Depending on the type of dispersion medium, heterogeneous systems are divided into liquid and gas systems. Table 5.1 gives a classification of inhomogeneous systems according to the type of dispersion medium and dispersed phase.

    Table 5.1
    Classification of heterogeneous systems

    Dispersion medium    Dispersed phase    Type of system
    Liquid               Solid              Suspension
    Liquid               Liquid             Emulsion
    Liquid               Gas                Foam (gas emulsion)
    Gas                  Solid              Dust, smoke
    Gas                  Liquid             Fog

    Classification and characteristics of heterogeneous systems

    A heterogeneous system is a system that consists of two or more phases. Each phase has its own interface and can be mechanically separated from the others.

    An inhomogeneous system consists of an internal (dispersed) phase and an external phase (the dispersion medium) that contains the particles of the dispersed phase. Systems in which the external phase is a liquid are called inhomogeneous liquid systems; those in which it is a gas are called inhomogeneous gas systems. Non-uniform systems are called heterogeneous, and uniform ones homogeneous. A homogeneous liquid system is a pure liquid or a solution of some substances in it. An inhomogeneous, or heterogeneous, liquid system is a liquid containing undissolved substances in the form of tiny particles. Heterogeneous systems are often called dispersed systems.

    There are the following types of heterogeneous systems: suspensions, emulsions, foams, dusts, smokes and fogs.

    A suspension is a system consisting of a continuous liquid phase in which solid particles are suspended, for example sauces with flour, starch milk, or molasses with sugar crystals.

    Depending on particle size, suspensions are divided into coarse (particle size over 100 microns), fine (0.1-100 microns) and colloidal solutions, which contain solid particles 0.1 micron or smaller.
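
    These thresholds can be written down directly. A minimal Python sketch follows; the function name and labels are my own, and the 0.1 micron boundary, which the text assigns to both fine suspensions and colloids, is resolved here in favor of colloids.

        def classify_suspension(particle_size_um: float) -> str:
            """Classify a suspension by particle size in micrometres,
            using the thresholds quoted in the text."""
            if particle_size_um > 100:
                return "coarse suspension (over 100 um)"
            if particle_size_um > 0.1:
                return "fine suspension (0.1-100 um)"
            return "colloidal solution (0.1 um or less)"

        print(classify_suspension(250))    # coarse suspension
        print(classify_suspension(5))      # fine suspension
        print(classify_suspension(0.05))   # colloidal solution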

    An emulsion is a system consisting of a liquid and drops of another liquid distributed in it that do not dissolve in the first, for example milk, or a mixture of vegetable oil and water. There are also gas emulsions, in which the dispersion medium is a liquid and the dispersed phase is a gas.

    A foam is a system consisting of a liquid and gas bubbles distributed in it, for example creams and other whipped products. In their properties, foams are close to emulsions.

    Emulsions and foams are characterized by the possibility of the transition of the dispersed phase into the dispersion medium and vice versa. This transition, which is possible at a certain mass ratio of phases, is called phase inversion or simply inversion.

    Aerosols are dispersed systems with a gaseous dispersion medium and a solid or liquid dispersed phase consisting of particles from quasi-molecular to microscopic size that can remain suspended for a more or less long time. This concept combines dust, smoke and fog. Examples are flour dust formed during grain grinding, sifting and transportation of flour, and sugar dust formed during the drying of sugar. Smoke is formed during the combustion of solid fuel, fog during the condensation of steam.

    In aerosols the dispersion medium is a gas or air, while the dispersed phase consists of solid particles in dusts and smokes and of liquid droplets in fogs.

    Dust and smoke are systems consisting of a gas and solid particles distributed in it with sizes of 5-50 microns and 0.3-5 microns, respectively. Fog is a system consisting of a gas and liquid droplets 0.3-3 microns in size distributed in it, formed as a result of condensation.
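
    The size ranges quoted above for gas-dispersed systems can be put into the same kind of rule of thumb. The sketch below (names and out-of-range wording are my own) distinguishes dust, smoke and fog by the state of the dispersed phase and the particle size.

        def classify_aerosol(dispersed_phase: str, size_um: float) -> str:
            """Rough classification of an aerosol (gaseous dispersion medium)
            by the state of the dispersed phase and the particle size,
            using the ranges quoted in the text."""
            if dispersed_phase == "solid":
                if 5 <= size_um <= 50:
                    return "dust (5-50 um solid particles)"
                if 0.3 <= size_um < 5:
                    return "smoke (0.3-5 um solid particles)"
                return "solid aerosol outside the quoted ranges"
            if dispersed_phase == "liquid":
                if 0.3 <= size_um <= 3:
                    return "fog (0.3-3 um droplets)"
                return "liquid aerosol outside the quoted range"
            raise ValueError("dispersed_phase must be 'solid' or 'liquid'")

        print(classify_aerosol("solid", 20))   # dust
        print(classify_aerosol("solid", 1))    # smoke
        print(classify_aerosol("liquid", 1))   # fog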

    A qualitative indicator characterizing the uniformity of aerosol particle sizes is the degree of dispersity. An aerosol is called monodisperse when its constituent particles are all of the same size, and polydisperse when it contains particles of different sizes. Monodisperse aerosols practically do not exist in nature; only some aerosols approach monodisperse systems in terms of particle size (fungal hyphae, specially produced fogs, etc.).
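
    One simple way to quantify how close a measured sample is to being monodisperse is to look at the relative spread of the particle sizes. The sketch below uses the coefficient of variation with an arbitrary illustrative threshold, which is my own assumption rather than anything stated in the text.

        import numpy as np

        def is_nearly_monodisperse(sizes_um, cv_threshold: float = 0.1) -> bool:
            """Treat a particle-size sample as nearly monodisperse if the
            coefficient of variation (std / mean) is below a threshold.
            The 0.1 threshold is illustrative, not a standard value."""
            sizes = np.asarray(sizes_um, dtype=float)
            return float(sizes.std() / sizes.mean()) < cv_threshold

        print(is_nearly_monodisperse([2.0, 2.1, 1.9]))    # True: narrow spread
        print(is_nearly_monodisperse([0.5, 5.0, 50.0]))   # False: polydisperse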

    Dispersed, or heterogeneous, systems can be single- or multi-component depending on the number of dispersed phases. For example, milk is a multicomponent system (it has two dispersed phases: fat and protein), as are sauces (the dispersed phases are flour, fat, etc.).

    Methods for separating heterogeneous systems are classified depending on the size of the suspended particles of the dispersed phase, the difference between the densities of the dispersed and continuous phases, and the viscosity of the continuous phase. The following main separation methods are used: sedimentation, filtration, centrifugation, wet separation and electropurification.

    Sedimentation is a separation process in which solid or liquid particles of the dispersed phase suspended in a liquid or gas are separated from the continuous phase under the action of gravitational, centrifugal or electrostatic forces. Sedimentation under the action of gravity alone is called settling.

    Filtration is a separation process using a porous partition capable of passing liquid or gas while retaining the solid particles suspended in the medium. Filtration is carried out under the action of a pressure difference and is used for a finer separation of suspensions and dusts than sedimentation.

    Centrifugation is the process of separating suspensions and emulsions under the action of centrifugal force.

    Wet separation is the process of capturing particles suspended in a gas with the help of a liquid.

    Electropurification is the purification of gases under the action of electric forces.

    Methods for separating heterogeneous liquid systems and heterogeneous gas systems are based on the same principles, but the equipment used has a number of specific features.