Classification of heterogeneous systems. §6

Solving systems of linear algebraic equations (SLAE) is undoubtedly the most important topic of a linear algebra course. A huge number of problems from all branches of mathematics reduce to solving systems of linear equations. These factors explain the reason for this article. The material of the article is selected and structured so that with its help you can

  • choose the optimal method for solving your system of linear algebraic equations,
  • study the theory of the chosen method,
  • solve your system of linear equations by considering detailed solutions to typical examples and problems.

Brief description of the article material.

First, we give all the necessary definitions, concepts and introduce notations.

Next, we will consider methods for solving systems of linear algebraic equations in which the number of equations equals the number of unknown variables and which have a unique solution. First, we will focus on Cramer's method; second, we will show the matrix method for solving such systems of equations; third, we will analyze the Gauss method (the method of sequential elimination of unknown variables). To consolidate the theory, we will solve several SLAEs in different ways.

After this, we will move on to solving systems of linear algebraic equations of general form, in which the number of equations does not coincide with the number of unknown variables or the main matrix of the system is singular. We will formulate the Kronecker-Capelli theorem, which allows us to establish the consistency of an SLAE, and analyze the solution of systems (when they are consistent) using the concept of a basis minor of a matrix. We will also consider the Gauss method and describe the solutions of the examples in detail.

We will also dwell on the structure of the general solution of homogeneous and inhomogeneous systems of linear algebraic equations. We will give the concept of a fundamental system of solutions and show how the general solution of an SLAE is written in terms of the vectors of the fundamental system of solutions. For better understanding, we will look at a few examples.

In conclusion, we will consider systems of equations that can be reduced to linear ones, as well as various problems in whose solution SLAEs arise.


Definitions, concepts, notation.

We will consider systems of p linear algebraic equations with n unknown variables (p can be equal to n) of the form

a_11·x_1 + a_12·x_2 + … + a_1n·x_n = b_1,
a_21·x_1 + a_22·x_2 + … + a_2n·x_n = b_2,
…
a_p1·x_1 + a_p2·x_2 + … + a_pn·x_n = b_p,

where x_1, x_2, …, x_n are the unknown variables, a_ij are the coefficients (some real or complex numbers), and b_1, b_2, …, b_p are the free terms (also real or complex numbers).

This form of writing an SLAE is called the coordinate form.

In matrix form this system of equations is written as A·X = B,
where A is the main matrix of the system, X is the column matrix of unknown variables, and B is the column matrix of free terms.

If we append the column matrix of free terms to the matrix A as its (n+1)-th column, we obtain the so-called extended matrix of the system of linear equations. Typically, the extended matrix is denoted by the letter T, and the column of free terms is separated from the remaining columns by a vertical line, that is, T = (A | B).

A solution of a system of linear algebraic equations is a set of values of the unknown variables that turns all equations of the system into identities. The matrix equation also becomes an identity for these values of the unknown variables.

If a system of equations has at least one solution, it is called consistent.

If a system of equations has no solutions, it is called inconsistent.

If an SLAE has a unique solution, it is called definite; if it has more than one solution, it is called indefinite.

If the free terms of all equations of the system are equal to zero, the system is called homogeneous; otherwise it is called inhomogeneous.

Solving elementary systems of linear algebraic equations.

If the number of equations of a system is equal to the number of unknown variables and the determinant of its main matrix is not equal to zero, we will call such an SLAE elementary. Such systems of equations have a unique solution, and in the case of a homogeneous system all unknown variables equal zero.

We began to study such SLAEs in high school. When solving them, we took one equation, expressed one unknown variable in terms of the others and substituted it into the remaining equations, then took the next equation, expressed the next unknown variable and substituted it into the other equations, and so on. Or we used the addition method, that is, we added two or more equations to eliminate some unknown variables. We will not dwell on these methods in detail, since they are essentially modifications of the Gauss method.

The main methods for solving elementary systems of linear equations are Cramer's method, the matrix method and the Gauss method. Let us examine each of them.

Solving systems of linear equations using Cramer's method.

Suppose we need to solve a system of linear algebraic equations

in which the number of equations is equal to the number of unknown variables and the determinant of the main matrix of the system is different from zero, that is, det(A) ≠ 0.

Let Δ be the determinant of the main matrix of the system, and let Δ_1, Δ_2, …, Δ_n be the determinants of the matrices obtained from A by replacing the 1st, 2nd, …, n-th column, respectively, with the column of free terms.

With this notation, the unknown variables are calculated by the formulas of Cramer's method as x_i = Δ_i / Δ, i = 1, 2, …, n. This is how a solution of a system of linear algebraic equations is found by Cramer's method.
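To make the computation concrete, here is a minimal Python sketch of Cramer's method (NumPy is assumed for the determinants; the matrix and right-hand side below are made-up illustration data, not an example from this article):

```python
import numpy as np

def cramer(A, b):
    """Solve A*x = b by Cramer's rule; A must be square with a nonzero determinant."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("determinant is zero, Cramer's method is not applicable")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace the i-th column with the free terms
        x[i] = np.linalg.det(Ai) / d      # x_i = delta_i / delta
    return x

# made-up example data
print(cramer([[2, 1, -1], [1, 3, 2], [1, 0, 1]], [3, 12, 3]))
```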

Example.

Solve the system of linear equations by Cramer's method.

Solution.

Let us write out the main matrix of the system and calculate its determinant (if necessary, see the article on calculating the determinant of a matrix):

Since the determinant of the main matrix of the system is nonzero, the system has a unique solution that can be found by Cramer’s method.

Let us compose and calculate the necessary determinants (we obtain Δ_1 by replacing the first column of matrix A with the column of free terms, Δ_2 by replacing the second column, and Δ_3 by replacing the third column):

We find the unknown variables by the formulas x_i = Δ_i / Δ:

Answer:

The main disadvantage of Cramer's method (if it can be called a disadvantage) is the complexity of calculating determinants when the number of equations in the system is more than three.

Solving systems of linear algebraic equations using the matrix method (using an inverse matrix).

Let a system of linear algebraic equations be given in matrix form A·X = B, where the matrix A has dimension n by n and its determinant is nonzero.

Since det(A) ≠ 0, the matrix A is invertible, that is, the inverse matrix A⁻¹ exists. If we multiply both sides of the equality A·X = B by A⁻¹ on the left, we obtain the formula X = A⁻¹·B for finding the column matrix of unknown variables. This is how the matrix method yields the solution of a system of linear algebraic equations.
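A minimal sketch of the matrix method in Python (NumPy assumed; the system below is made-up illustration data):

```python
import numpy as np

A = np.array([[3.0, 2.0, 1.0],
              [2.0, 3.0, 1.0],
              [2.0, 1.0, 3.0]])   # main matrix of the system
B = np.array([12.0, 13.0, 9.0])   # column of free terms

assert abs(np.linalg.det(A)) > 1e-12   # the method requires det(A) != 0
X = np.linalg.inv(A) @ B               # X = A^(-1) * B
print(X)
```

In numerical practice np.linalg.solve(A, B) is usually preferred, because it avoids forming the inverse matrix explicitly.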

Example.

Solve the system of linear equations by the matrix method.

Solution.

Let's rewrite the system of equations in matrix form:

Since the determinant of the main matrix of the system

is nonzero, the SLAE can be solved by the matrix method. The solution of this system can be found as X = A⁻¹·B.

Let us construct the inverse matrix A⁻¹ using the matrix of cofactors of the elements of matrix A (if necessary, see the article on finding the inverse matrix):

It remains to calculate the column matrix of unknown variables by multiplying the inverse matrix A⁻¹ by the column matrix of free terms (if necessary, see the article on matrix multiplication):

Answer:

or, in another notation, x_1 = 4, x_2 = 0, x_3 = −1.

The main problem when finding solutions to systems of linear algebraic equations using the matrix method is the complexity of finding the inverse matrix, especially for square matrices of order higher than third.

Solving systems of linear equations using the Gauss method.

Suppose we need to find a solution of a system of n linear equations with n unknown variables, the determinant of whose main matrix is different from zero.

The essence of the Gauss method consists in sequentially eliminating unknown variables: first x_1 is eliminated from all equations of the system starting from the second, then x_2 is eliminated from all equations starting from the third, and so on, until only the unknown variable x_n remains in the last equation. This process of transforming the equations of the system to eliminate the unknown variables one after another is called the forward pass of the Gauss method. After the forward pass is completed, x_n is found from the last equation; using this value, x_(n-1) is calculated from the penultimate equation, and so on, until x_1 is found from the first equation. The process of calculating the unknown variables while moving from the last equation of the system to the first is called the backward pass (back substitution) of the Gauss method.

Let us briefly describe the algorithm for eliminating unknown variables.

We will assume that a_11 ≠ 0, since we can always achieve this by rearranging the equations of the system. Let us eliminate the unknown variable x_1 from all equations of the system starting from the second. To do this, to the second equation of the system we add the first multiplied by −a_21/a_11, to the third equation we add the first multiplied by −a_31/a_11, and so on, to the n-th equation we add the first multiplied by −a_n1/a_11. The system of equations after such transformations will take the form

where a_ij^(1) = a_ij − (a_i1/a_11)·a_1j and b_i^(1) = b_i − (a_i1/a_11)·b_1.

We would have arrived at the same result if we had expressed x 1 in terms of other unknown variables in the first equation of the system and substituted the resulting expression into all other equations. Thus, the variable x 1 is excluded from all equations, starting from the second.

Next, we proceed in a similar way, but only with part of the resulting system, which is marked in the figure

To do this, to the third equation of the system we add the second multiplied by −a_32^(1)/a_22^(1), to the fourth equation we add the second multiplied by −a_42^(1)/a_22^(1), and so on, to the n-th equation we add the second multiplied by −a_n2^(1)/a_22^(1). The system of equations after such transformations will take the form

where a_ij^(2) = a_ij^(1) − (a_i2^(1)/a_22^(1))·a_2j^(1) and b_i^(2) = b_i^(1) − (a_i2^(1)/a_22^(1))·b_2^(1). Thus, the variable x_2 is eliminated from all equations starting from the third.

Next, we proceed to eliminate the unknown x_3, acting similarly on the part of the system marked in the figure.

We continue the forward pass of the Gauss method in this way until the system takes a triangular form.

From this moment we begin the backward pass of the Gauss method: we calculate x_n from the last equation, then, using the obtained value of x_n, we find x_(n-1) from the penultimate equation, and so on, until we find x_1 from the first equation.
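The forward pass and the back substitution described above can be sketched in Python as follows (a simplified illustration with only a basic row swap as a safeguard; the example data are made up):

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with back substitution for a square system with det(A) != 0."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # forward pass: eliminate x_k from equations k+1 .. n
    for k in range(n - 1):
        if abs(A[k, k]) < 1e-12:                  # rearrange equations if the pivot is zero
            swap = k + int(np.argmax(np.abs(A[k:, k])))
            A[[k, swap]], b[[k, swap]] = A[[swap, k]], b[[swap, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # backward pass: find x_n, then x_(n-1), ..., x_1
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print(gauss_solve([[1, 2, 1], [2, 3, 3], [3, 5, 2]], [4, 9, 7]))
```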

Example.

Solve the system of linear equations by the Gauss method.

Solution.

Let us eliminate the unknown variable x_1 from the second and third equations of the system. To do this, to both sides of the second and third equations we add the corresponding sides of the first equation, multiplied by the appropriate factors:

Now we eliminate x_2 from the third equation by adding to its left and right sides the left and right sides of the second equation, multiplied by the appropriate factor:

This completes the forward pass of the Gauss method; we begin the backward pass.

From the last equation of the resulting system of equations we find x 3:

From the second equation we get .

From the first equation we find the remaining unknown variable, thereby completing the backward pass of the Gauss method.

Answer:

x_1 = 4, x_2 = 0, x_3 = −1.

Solving systems of linear algebraic equations of general form.

In general, the number of equations of the system p does not coincide with the number of unknown variables n:

Such SLAEs may have no solutions, a unique solution, or infinitely many solutions. This statement also applies to systems of equations whose main matrix is square and singular.

Kronecker–Capelli theorem.

Before finding a solution of a system of linear equations, it is necessary to establish its consistency. The answer to the question of when an SLAE is consistent and when it is inconsistent is given by the Kronecker–Capelli theorem:
For a system of p equations with n unknowns (p can be equal to n) to be consistent, it is necessary and sufficient that the rank of the main matrix of the system be equal to the rank of the extended matrix, that is, Rank(A) = Rank(T).
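As an illustration, the consistency check given by the theorem can be sketched in Python by comparing matrix ranks (NumPy assumed; the system is made-up example data):

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0],
              [1.0, 1.0,  1.0]])     # main matrix
b = np.array([[1.0], [3.0], [2.0]])  # column of free terms
T = np.hstack([A, b])                # extended matrix

rank_A = np.linalg.matrix_rank(A)
rank_T = np.linalg.matrix_rank(T)
if rank_A == rank_T:
    print("consistent, Rank(A) = Rank(T) =", rank_A)
else:
    print("inconsistent: Rank(A) =", rank_A, "but Rank(T) =", rank_T)
```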

Let us consider, as an example, the application of the Kronecker–Capelli theorem to determining the consistency of a system of linear equations.

Example.

Find out whether the system of linear equations has solutions.

Solution.

Let us use the method of bordering minors. There is a second-order minor different from zero. Let us consider the third-order minors bordering it:

Since all the bordering minors of the third order are equal to zero, the rank of the main matrix is equal to two.

In turn, the rank of the extended matrix is equal to three, since there is a third-order minor

different from zero.

Thus, Rank(A) ≠ Rank(T); therefore, by the Kronecker–Capelli theorem, the original system of linear equations is inconsistent.

Answer:

The system has no solutions.

So, we have learned to establish the inconsistency of a system using the Kronecker–Capelli theorem.

But how do we find a solution of an SLAE once its consistency has been established?

To do this, we need the concept of a basis minor of a matrix and a theorem about the rank of a matrix.

A non-zero minor of the highest order of a matrix A is called a basis minor.

From the definition of a basis minor it follows that its order is equal to the rank of the matrix. A non-zero matrix A may have several basis minors; at least one basis minor always exists.

For example, consider the matrix .

All third-order minors of this matrix are equal to zero, since the elements of the third row of this matrix are the sum of the corresponding elements of the first and second rows.

The following second-order minors are basis minors, since they are non-zero:

The minors that are equal to zero are not basis minors.

Matrix rank theorem.

If the rank of a matrix of size p by n is equal to r, then all rows (and columns) of the matrix that do not participate in forming the chosen basis minor are expressed linearly in terms of the corresponding rows (and columns) that form the basis minor.

What does the matrix rank theorem tell us?

If, according to the Kronecker–Capelli theorem, we have established the consistency of the system, then we choose any basis minor of the main matrix of the system (its order is equal to r) and exclude from the system all equations that do not participate in forming the selected basis minor. The SLAE obtained in this way is equivalent to the original one, since the discarded equations are redundant (by the matrix rank theorem, they are linear combinations of the remaining equations).

As a result, after discarding unnecessary equations of the system, two cases are possible.

    If the number of equations r in the resulting system is equal to the number of unknown variables, then the system is definite and its unique solution can be found by Cramer's method, the matrix method or the Gauss method.

    Example.

    .

    Solution.

    The rank of the main matrix of the system is equal to two, since there is a second-order minor different from zero. The rank of the extended matrix is also equal to two, since the only third-order minor is zero

    and the second-order minor considered above is different from zero. Based on the Kronecker–Capelli theorem, we can assert the consistency of the original system of linear equations, since Rank(A) = Rank(T) = 2.

    As a basis minor we take . It is formed by the coefficients of the first and second equations:

    The third equation of the system does not participate in the formation of the basis minor, so we exclude it from the system based on the theorem on the rank of the matrix:

    In this way we obtain an elementary system of linear algebraic equations. Let us solve it by Cramer's method:

    Answer:

    x_1 = 1, x_2 = 2.

    If the number of equations r in the resulting SLAE is less than the number of unknown variables n, then on the left sides of the equations we leave the terms that form the basis minor, and we transfer the remaining terms to the right sides of the equations of the system with the opposite sign.

    The unknown variables (r of them) remaining on the left-hand sides of the equations are called the main variables.

    The unknown variables (there are n − r of them) that end up on the right-hand sides are called free.

    We now assume that the free unknown variables can take arbitrary values, while the r main unknown variables are expressed through the free unknown variables in a unique way. Their expressions can be found by solving the resulting SLAE by Cramer's method, the matrix method or the Gauss method.

    Let's look at it with an example.

    Example.

    Solve a system of linear algebraic equations .

    Solution.

    Let us find the rank of the main matrix of the system by the method of bordering minors. We take a_11 = 1 as a non-zero first-order minor. Let us start searching for a non-zero second-order minor bordering it:

    This is how we found a non-zero minor of the second order. Let's start searching for a non-zero bordering minor of the third order:

    Thus, the rank of the main matrix is equal to three. The rank of the extended matrix is also equal to three, that is, the system is consistent.

    We take the found non-zero minor of the third order as the basis one.

    For clarity, we show the elements that form the basis minor:

    We leave the terms involved in the basis minor on the left side of the system equations, and transfer the rest with opposite signs to the right sides:

    Let us give the free unknown variables x_2 and x_5 arbitrary values, that is, we set them equal to arbitrary numbers. In this case, the SLAE takes the form

    Let us solve the resulting elementary system of linear algebraic equations using Cramer’s method:

    Hence, .

    In the answer, do not forget to indicate the free unknown variables.

    Answer:

    where the free unknowns are arbitrary numbers.

Let us summarize.

To solve a system of linear algebraic equations of general form, we first determine its consistency using the Kronecker–Capelli theorem. If the rank of the main matrix is not equal to the rank of the extended matrix, we conclude that the system is inconsistent.

If the rank of the main matrix is equal to the rank of the extended matrix, we select a basis minor and discard the equations of the system that do not participate in forming the selected basis minor.

If the order of the basis minor is equal to the number of unknown variables, then the SLAE has a unique solution, which can be found by any method known to us.

If the order of the basis minor is less than the number of unknown variables, then on the left-hand sides of the equations we leave the terms with the main unknown variables, transfer the remaining terms to the right-hand sides, and give arbitrary values to the free unknown variables. From the resulting system of linear equations we find the main unknown variables by Cramer's method, the matrix method or the Gauss method.
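The whole procedure can also be carried out symbolically. Below is a minimal sketch using SymPy (an assumption of this illustration; the system itself is made-up example data): linsolve returns the empty set for an inconsistent system and a parametric family when free unknowns remain.

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')

# made-up system with fewer independent equations than unknowns
eqs = [
    sp.Eq(x1 + 2*x2 - x3 + x4, 1),
    sp.Eq(2*x1 + 4*x2 - 2*x3 + 2*x4, 2),   # a multiple of the first equation
    sp.Eq(x1 - x2 + x3, 0),
]

solution = sp.linsolve(eqs, [x1, x2, x3, x4])
print(solution)   # a parametric family: the free unknowns remain as parameters
```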

Gauss method for solving systems of linear algebraic equations of general form.

The Gauss method can be used to solve systems of linear algebraic equations of any kind without first testing them for consistency. The process of sequential elimination of unknown variables makes it possible to draw a conclusion about both the compatibility and incompatibility of the SLAE, and if a solution exists, it makes it possible to find it.

From a computational point of view, the Gaussian method is preferable.

See the detailed description and worked examples in the article on the Gauss method for solving systems of linear algebraic equations of general form.

Writing a general solution to homogeneous and inhomogeneous linear algebraic systems using vectors of the fundamental system of solutions.

In this section we will discuss consistent homogeneous and inhomogeneous systems of linear algebraic equations that have an infinite number of solutions.

Let us first deal with homogeneous systems.

A fundamental system of solutions of a homogeneous system of p linear algebraic equations with n unknown variables is a collection of (n − r) linearly independent solutions of this system, where r is the order of the basis minor of the main matrix of the system.

If we denote the linearly independent solutions of a homogeneous SLAE by X(1), X(2), …, X(n−r) (these are column matrices of dimension n by 1), then the general solution of this homogeneous system is represented as a linear combination of the vectors of the fundamental system of solutions with arbitrary constant coefficients C_1, C_2, …, C_(n−r), that is, X = C_1·X(1) + C_2·X(2) + … + C_(n−r)·X(n−r).

What does the term general solution of a homogeneous system of linear algebraic equations mean?

The meaning is simple: this formula describes all possible solutions of the original SLAE; in other words, taking any set of values of the arbitrary constants C_1, C_2, …, C_(n−r), we obtain from the formula one of the solutions of the original homogeneous SLAE.

Thus, if we find a fundamental system of solutions, then we can write all solutions of the homogeneous SLAE as X = C_1·X(1) + C_2·X(2) + … + C_(n−r)·X(n−r).

Let us show the process of constructing a fundamental system of solutions to a homogeneous SLAE.

We select a basis minor of the original system of linear equations, exclude all other equations from the system, and transfer all terms containing free unknown variables to the right-hand sides of the equations with opposite signs. We give the free unknown variables the values 1, 0, 0, …, 0 and calculate the main unknowns by solving the resulting elementary system of linear equations in any way, for example by Cramer's method. This yields X(1), the first solution of the fundamental system. If we give the free unknowns the values 0, 1, 0, …, 0 and calculate the main unknowns, we obtain X(2), and so on. If we give the free unknown variables the values 0, 0, …, 0, 1 and calculate the main unknowns, we obtain X(n−r). In this way the fundamental system of solutions of the homogeneous SLAE is constructed, and its general solution can be written in the form X = C_1·X(1) + C_2·X(2) + … + C_(n−r)·X(n−r).
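The construction just described (setting the free unknowns to 1, 0, …, 0, then 0, 1, …, 0, and so on) amounts to computing a basis of the null space of the main matrix. A minimal SymPy sketch (assumed library; made-up coefficients):

```python
import sympy as sp

# main matrix of a made-up homogeneous system A*x = 0
A = sp.Matrix([
    [9, 1, -2,  4],
    [3, 2,  1, -1],
    [12, 3, -1, 3],   # the sum of the first two rows, so the rank is 2
])

basis = A.nullspace()      # the fundamental system of solutions X(1), X(2), ...
print(len(basis))          # n - r = 4 - 2 = 2 linearly independent solutions
for v in basis:
    print(v.T)
# general solution: C_1*basis[0] + C_2*basis[1] for arbitrary constants C_1, C_2
```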

For inhomogeneous systems of linear algebraic equations, the general solution is represented as the sum of a particular solution of the original inhomogeneous SLAE (which we obtain by giving the free unknowns the values 0, 0, …, 0 and calculating the values of the main unknowns) and the general solution C_1·X(1) + C_2·X(2) + … + C_(n−r)·X(n−r) of the corresponding homogeneous system.

Let's look at examples.

Example.

Find the fundamental system of solutions and the general solution of a homogeneous system of linear algebraic equations .

Solution.

The rank of the main matrix of a homogeneous system of linear equations is always equal to the rank of the extended matrix. Let us find the rank of the main matrix by the method of bordering minors. As a non-zero first-order minor we take the element a_11 = 9 of the main matrix of the system. Let us find a non-zero bordering minor of the second order:

A minor of the second order, different from zero, has been found. Let's go through the third-order minors bordering it in search of a non-zero one:

All third-order bordering minors are equal to zero; therefore, the rank of the main matrix and of the extended matrix is equal to two. We take the found second-order minor as the basis minor. For clarity, we mark the elements of the system that form it:

The third equation of the original SLAE does not participate in the formation of the basis minor, therefore, it can be excluded:

We leave the terms containing the main unknowns on the left-hand sides of the equations, and transfer the terms with the free unknowns to the right-hand sides:

Let us construct a fundamental system of solutions of the original homogeneous system of linear equations. The fundamental system of solutions of this SLAE consists of two solutions, since the original SLAE contains four unknown variables and the order of its basis minor is equal to two. To find X(1), we give the free unknown variables the values x_2 = 1, x_4 = 0 and then find the main unknowns from the system of equations.

The term “system” is used in various sciences. Accordingly, different definitions of a system are used in different situations, from philosophical to formal. For the purposes of this course, the following definition is best suited: a system is a set of elements united by connections and functioning together to achieve a goal.

Systems are characterized by a number of properties, the main of which are divided into three groups: static, dynamic and synthetic.

1.1 Static properties of systems

Static properties are the features of a certain state of the system. This is what the system has at any given point in time.

Integrity. Every system appears as something unified, whole, separate, different from everything else. This property is called system integrity. It allows you to divide the whole world into two parts: the system and the environment.

Openness. The system singled out from everything else is not isolated from its environment. On the contrary, they are connected and exchange various kinds of resources (matter, energy, information, etc.). This feature is designated by the term “openness”.

The connections between the system and the environment are directional: through some of them the environment influences the system (the system's inputs), through others the system influences the environment, does something in the environment and outputs something to it (the system's outputs). The description of the inputs and outputs of a system is called the black box model. Such a model contains no information about the internal features of the system. Despite its apparent simplicity, it is often quite sufficient for working with the system.

In many cases, when managing equipment or people, information only about the inputs and outputs of the system allows you to successfully achieve the goal. However, for this, the model must meet certain requirements. For example, the user may experience difficulties if he does not know that on some TV models the power button must be pulled out rather than pressed. Therefore, for successful management, the model must contain all the information necessary to achieve the goal. When trying to satisfy this requirement, four types of errors can occur, which stem from the fact that the model always contains a finite number of connections, whereas in a real system the number of connections is unlimited.

An error of the first type occurs when a subject mistakenly views a relationship as significant and decides to include it in the model. This leads to the appearance of extra, unnecessary elements in the model. An error of the second type, on the contrary, is made when a decision is made to exclude a supposedly insignificant connection from the model, without which, in fact, achieving the goal is difficult or even impossible.

The answer to the question of which error is worse depends on the context in which it is asked. It is clear that using a model containing an error inevitably leads to losses. Losses can be small, acceptable, intolerable or unacceptable. The damage caused by a type I error is due to the fact that the information it contains is superfluous. When working with such a model, you will have to spend resources on recording and processing unnecessary information, for example, wasting computer memory and processing time on it. This may not affect the quality of the solution, but it will certainly affect the cost and timeliness. Losses from an error of the second type are damage from the fact that there is not enough information to fully achieve the goal; the goal cannot be fully achieved.

Now it is clear that the worse mistake is the one from which the losses are greater, and this depends on specific circumstances. For example, if time is a critical factor, then an error of the first type becomes much more dangerous than an error of the second type: a decision made on time, even if not the best, is preferable to an optimal, but late one.

An error of the third kind is a consequence of ignorance. To assess the significance of a certain connection, one needs to know that it exists at all. If this is not known, the question of including the connection in the model does not even arise. If such a connection is insignificant, then in practice its presence in reality and its absence in the model will go unnoticed. If the connection is significant, difficulties will arise similar to those caused by an error of the second type. The difference is that an error of the third kind is harder to correct: doing so requires acquiring new knowledge.

An error of the fourth kind occurs when a known essential connection is erroneously attributed to the number of inputs or outputs of the system. For example, it is well established that in 19th-century England the health of men wearing top hats was significantly superior to that of men wearing caps. It hardly follows from this that the type of headdress can be considered as an input for a system for predicting health status.

Internal heterogeneity of systems, distinguishability of parts. If you look inside the “black box”, it turns out that the system is not homogeneous, not monolithic: different parts of the system have different qualities. Describing the internal heterogeneity of the system comes down to identifying relatively homogeneous areas and drawing boundaries between them. This is how the concept of parts of the system appears. On closer examination it turns out that the identified large parts are also heterogeneous, which requires identifying even smaller parts. The result is a hierarchical description of the parts of the system, which is called a composition model.

Information about the composition of the system can be used to work with it. The goals of interaction with a system may differ, and therefore the composition models of the same system may also differ. At first glance it is not difficult to distinguish the parts of a system; they “catch the eye”. In some systems, parts arise spontaneously, in the process of natural growth and development (organisms, societies, etc.); artificial systems are deliberately assembled from previously known parts (mechanisms, buildings, etc.); there are also mixed types of systems, such as nature reserves and agricultural systems. On the other hand, from the point of view of the rector, a student, an accountant and a business manager, a university consists of different parts, and an airplane consists of different parts from the point of view of the pilot, the flight attendant and a passenger. The difficulties of creating a composition model can be summarized in three points.

First, the whole can be divided into parts in different ways. In this case, the method of division is determined by the goal. For example, the composition of a car is presented differently to novice car enthusiasts, future professional drivers, mechanics preparing to work in a car service center, and salespeople in car dealerships. It is natural to ask whether parts of the system “really” exist? The answer is contained in the formulation of the property in question: we are talking about distinguishability, and not about the separability of parts. You can distinguish between the parts of the system needed to achieve the goal, but you cannot separate them.

Secondly, the number of parts in the composition model also depends on the level at which the fragmentation of the system is stopped. The parts on the terminal branches of the resulting hierarchical tree are called elements. In different circumstances, decomposition is terminated at different levels. For example, when describing upcoming work, it is necessary to give an experienced worker and a novice instructions of varying degrees of detail. Thus, the composition model depends on what is considered elemental. There are cases when an element has a natural, absolute character (cell, individual, phoneme, electron).

Thirdly, any system is part of a larger system, and sometimes of several systems at once. Such a metasystem can also be divided into subsystems in different ways. This means that the external boundary of the system is relative, conditional. The boundaries of the system are determined taking into account the goals of the subject who will use the system model.

Structure. The property of structuredness is that the parts of the system are not isolated, not independent of each other; they are interconnected and interact with each other. Moreover, the properties of the system significantly depend on how exactly its parts interact. Therefore, information about the connections of system elements is so important. The list of essential connections between system elements is called a system structure model. The endowment of any system with a certain structure is called structuring.

The concept of structuredness further deepens the idea of the integrity of the system: the connections, as it were, bind the parts together and hold them as a whole. Integrity, noted earlier as an external property, thereby receives a supporting explanation from within the system, through its structure.

Certain difficulties also arise when building a structure model. The first is due to the fact that the structure model is determined after the composition model has been chosen and depends on what exactly the composition of the system is. But even with a fixed composition the structure model is variable, because the significance of connections can be assessed in different ways. For example, a modern manager is advised to take into account, along with the formal structure of his organization, the existence of informal relationships between employees, which also affect its functioning. The second difficulty stems from the fact that each element of the system is, in turn, a “little black box”, so all four types of errors are possible when defining the inputs and outputs of each element included in the structure model.

1.2 DYNAMIC PROPERTIES OF SYSTEMS

If we consider the state of the system at a new point in time, we can again detect all four static properties. But if we superimpose “photographs” of the system taken at different points in time, we will find that they differ in detail: during the time between the two moments of observation, some changes occurred in the system and its environment. Such changes may be important when working with the system and must therefore be reflected in its descriptions and taken into account. The features of changes over time inside and outside the system are called the dynamic properties of the system. Four dynamic properties of systems are usually distinguished.

Functionality. Processes Y(t) occurring at the outputs of the system are considered as its functions. The functions of a system are its behavior in the external environment, the results of its activities, and the products produced by the system.

From the multiplicity of outputs follows a multiplicity of functions, each of which can be used by someone and for something. Therefore, the same system can serve different purposes. A subject using a system for his own purposes will naturally evaluate its functions and organize them in relation to his needs. This is how the concepts of main, secondary, neutral, undesirable, superfluous function, etc. appear.

Stimulability. Certain processes X(t) also occur at the system's inputs, affecting the system and, after a series of transformations in the system, turning into Y(t). The impacts X(t) are called stimuli, and the susceptibility of a system to external influences and the change of its behavior under these influences is called stimulability.

Variability of the system over time. In any system, changes occur that must be taken into account. In terms of the system model, this means that the values of the internal variables (parameters) Z(t), the composition and the structure of the system, and any combinations thereof can change. The nature of these changes may also differ, so further classifications of changes may be considered.

The most obvious classification is by the rate of change (slow, fast). The rate of change is measured relative to some rate taken as a standard, and a larger number of gradations of speed can be introduced. It is also possible to classify the trends of changes in the system with respect to its structure and composition.

We can talk about changes that do not affect the structure of the system: some elements are replaced by other, equivalent ones; the parameters Z(t) can change without a change of structure. This type of system dynamics is called its functioning. Changes can be quantitative in nature: the composition of the system grows and, although its structure automatically changes, this does not affect the properties of the system up to a certain point (for example, the expansion of a landfill). Such changes are called system growth. With qualitative changes in the system, its essential properties change. If such changes go in a positive direction, they are called development. With the same resources, a developed system achieves better results, and new positive qualities (functions) may appear. This is associated with an increase in the level of consistency and organization of the system.

Growth occurs mainly through the consumption of material resources; development occurs through the assimilation and use of information. Growth and development can occur simultaneously, but they are not necessarily related. Growth is always limited (because material resources are limited), whereas development is not limited from the outside, since information about the external environment is inexhaustible. Development is the result of learning, but learning cannot be done in place of the learner, so there is an internal limitation on development: if the system “does not want” to learn, it cannot and will not develop.

In addition to the processes of growth and development, reverse processes can also occur in the system. Changes opposite to growth are called decline, contraction, decrease. A change that is opposite to development is called degradation, loss or weakening of beneficial properties.

The changes considered so far are monotonic, that is, directed “in one direction”. Obviously, monotonic changes cannot last forever. In the history of any system one can distinguish periods of decline and rise, stability and instability, whose sequence forms the individual life cycle of the system.

Other classifications of the processes occurring in a system can also be used: by predictability, processes are divided into random and deterministic; by the type of time dependence, processes are divided into monotonic, periodic, harmonic, pulsed, etc.

Existence in a changing environment. Not only the given system changes; all other systems change as well. For the system under consideration, this looks like a continuous change of the environment. This circumstance has many consequences for the system itself, which must adapt to new conditions in order not to perish. When considering a specific system, attention is usually paid to the characteristics of a particular reaction of the system, for example the reaction rate. If we consider systems that store information (books, magnetic media), the speed of response to changes in the external environment should be minimal in order to ensure the preservation of the information. On the other hand, the response speed of a control system must be many times greater than the rate of change of the environment, since the system must select a control action before the state of the environment changes irreversibly.

1.3 SYNTHETIC PROPERTIES OF SYSTEMS

Synthetic properties include generalizing, integral, collective properties that describe the interaction of the system with the environment and take its integrity into account in the most general sense.

Emergence. The combination of elements into a system leads to the appearance of qualitatively new properties that cannot be derived from the properties of the parts, are inherent only in the system itself and exist only as long as the system is a single whole. Such qualities of a system are called emergent (from the English “to emerge”, to arise).

Examples of emergent properties can be found in various fields. For example, none of the parts of the plane can fly, but the plane, nevertheless, flies. The properties of water, many of which are not fully understood, do not follow from the properties of hydrogen and oxygen.

Let there be two black boxes, each of which has one input and one output and performs one operation: adding one to the number at its input. When such elements are connected according to the diagram shown in the figure, we obtain a system without inputs but with two outputs. At each cycle of operation the system will output ever larger numbers; moreover, only even numbers will appear at one output and only odd numbers at the other.

Fig. 1.1. Connection of system elements: a) a system with two outputs; b) parallel connection of elements
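A tiny simulation of this two-element loop (a hypothetical Python sketch) makes the emergent behaviour visible: neither element can count by two on its own, yet the closed loop produces only odd numbers on one output and only even numbers on the other.

```python
def add_one(x):
    """A single 'black box' element: all it can do is add 1 to its input."""
    return x + 1

signal = 0
out_a, out_b = [], []
for _ in range(5):            # five cycles of operation of the closed loop
    signal = add_one(signal)  # first element
    out_a.append(signal)
    signal = add_one(signal)  # second element, fed by the first
    out_b.append(signal)

print(out_a)   # [1, 3, 5, 7, 9]   -> only odd numbers at one output
print(out_b)   # [2, 4, 6, 8, 10]  -> only even numbers at the other
```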

The emergent properties of a system are determined by its structure. This means that different combinations of elements give rise to different emergent properties. For example, if the same two elements are connected in parallel, then functionally the new system will not differ from a single element; emergence will manifest itself as an increase in the reliability of the system due to the parallel connection of two identical elements, that is, due to redundancy.

It is worth noting an important case when the elements of the system possess all its properties. This situation is typical for the fractal construction of the system. At the same time, the principles of structuring the parts are the same as those of the system as a whole. An example of a fractal system is an organization in which management is structured identically at all levels of the hierarchy.

Inseparability into parts. This property is, in fact, a consequence of emergence. It is specially emphasized because its practical importance is great and underestimating it is very common.

When a part is removed from a system, two important things happen. First, the composition of the system changes, and with it its structure; it becomes a different system with different properties. Second, the element removed from the system will behave differently, because its environment has changed. All of this means that caution is needed when considering an element in isolation from the rest of the system.

Inherence (from the English inherent, “being an integral part of something”). The more inherent a system is, the better it is coordinated with and adapted to its environment and the more compatible it is with it. The degree of inherence varies and can change. It makes sense to treat inherence as a property of the system because the degree and quality with which the system realizes its chosen function depend on it. In natural systems inherence increases through natural selection; in artificial systems it should be a special concern of the designer.

In some cases, inherence is ensured with the help of intermediate, intermediary systems. Examples include adapters for using foreign electrical appliances in conjunction with Soviet-style sockets; middleware (such as the COM service in Windows) that allows two programs from different manufacturers to communicate with each other.

Expediency. In systems created by man, the subordination of both structure and composition to the achievement of the set goal is so obvious that it can be recognized as a fundamental property of any artificial system. This property is called expediency. The goal for which the system is created determines which emergent property will ensure the achievement of the goal, and this, in turn, dictates the choice of the structure and composition of the system. To extend the concept of expediency to natural systems, the concept of a goal must be clarified; the clarification is carried out using an artificial system as an example.

The history of any artificial system begins at some moment 0, when the existing value of the state vector Y_0 turns out to be unsatisfactory, that is, a problem situation arises. The subject is dissatisfied with this state and would like to change it. Suppose he would be satisfied with the value Y* of the state vector. This is the first definition of the goal. It is then discovered that Y* does not exist now and cannot, for a number of reasons, be achieved in the near future. The second step in defining the goal is therefore to recognize it as a desired future state. It immediately becomes clear that the future is unbounded, so the third step in clarifying the concept of a goal is to estimate the time T* at which the desired state Y* can be achieved under the given conditions. The goal now becomes two-dimensional: it is a point (T*, Y*) on the graph, and the task is to move from the point (0, Y_0) to the point (T*, Y*). But this path can be traversed along different trajectories, only one of which can be realized. Suppose the choice falls on the trajectory Y*(t). Thus, the goal now means not only the final state (T*, Y*) but the entire trajectory Y*(t) (“intermediate goals”, “plan”). So, the goal is the desired future states Y*(t).

After the time T* has passed, the state Y* becomes a real state. It therefore becomes possible to define the goal as a future real state. This makes it possible to say that natural systems also possess the property of expediency, which allows systems of any nature to be described from a unified position. The main difference between natural and artificial systems is that natural systems, obeying the laws of nature, realize objective goals, whereas artificial systems are created to realize subjective goals.

The most common feature of any heterogeneous system is the presence of two (or more) phases separated from each other by a pronounced interface. This feature distinguishes heterogeneous systems from solutions, which also consist of several components but form a homogeneous mixture. One of the phases, the continuous one, will be called the dispersion medium, and the other, finely divided and distributed in the first, the dispersed phase. Depending on the type of dispersion medium, liquid and gas heterogeneous mixtures are distinguished. Table 5.1 shows the classification of inhomogeneous systems according to the type of dispersion medium and dispersed phase.

Table 5.1

Classification of heterogeneous systems

Classification and characteristics of heterogeneous systems

A heterogeneous system is a system that consists of two or more phases. Each phase has its own interface and can be mechanically separated from the others.

A heterogeneous system consists of an internal (dispersed) phase and an external phase (dispersion medium) in which the particles of the dispersed phase are distributed. Systems in which the external phase is a liquid are called inhomogeneous liquid systems; if it is a gas, they are called inhomogeneous gas systems. Inhomogeneous systems are also called heterogeneous, and uniform ones homogeneous. A homogeneous liquid system is a pure liquid or a solution of some substances in it. A heterogeneous, or inhomogeneous, liquid system is a liquid containing undissolved substances in the form of tiny particles. Heterogeneous systems are often called dispersed systems.

The following types of inhomogeneous systems are distinguished: suspensions, emulsions, foams, dusts, fumes, mists.

A suspension is a system consisting of a continuous liquid phase in which solid particles are suspended, for example, sauces with flour, starch milk, or molasses with sugar crystals.

Depending on the particle size, suspensions are divided into coarse (particle size over 100 microns), fine (0.1-100 microns) and colloidal solutions containing solid particles 0.1 micron in size or less.

An emulsion is a system consisting of a liquid and drops of another liquid distributed in it that have not dissolved in the first, for example, milk or a mixture of vegetable oil and water. There are also gas emulsions, in which the dispersion medium is a liquid and the dispersed phase is a gas.

A foam is a system consisting of a liquid and gas bubbles distributed in it, for example, creams and other whipped products. Foams are close to emulsions in their properties.

Emulsions and foams are characterized by the possibility of transition of the dispersed phase into a dispersion medium and vice versa. This transition, possible at a certain mass ratio of phases, is called phase inversion or simply inversion.

Aerosols are dispersed systems with a gaseous dispersion medium and a solid or liquid dispersed phase consisting of particles from quasi-molecular to microscopic size that can remain suspended for a more or less long time. This concept combines dusts, smokes and fogs. For example, flour dust forms during grain grinding and during the sifting and transport of flour, and sugar dust forms during the drying of sugar. Smoke is formed when solid fuel is burned; fog is formed when steam condenses.

In aerosols, the dispersion medium is gas or air, and the dispersed phase in dust and smoke is solids, in fogs - liquid.

Dust and smoke are systems consisting of a gas and solid particles distributed in it, with particle sizes of 5-50 microns and 0.3-5 microns, respectively. Fog is a system consisting of a gas and liquid droplets 0.3-3 microns in size distributed in it, formed as a result of condensation.

A qualitative indicator characterizing the uniformity of aerosol particles in size is the degree of dispersion. An aerosol is called monodisperse when its constituent particles are of the same size, and polydisperse when it contains particles of different sizes. Monodisperse aerosols practically do not exist in nature. There are only a few aerosols whose particle sizes only approach monodisperse systems (fungal hyphae, specially produced mists, etc.).

Depending on the number of dispersed phases, dispersed (heterogeneous) systems can be single-component or multicomponent. For example, milk is a multicomponent system (it has two dispersed phases: fat and protein), as are sauces (the dispersed phases are flour, fat, etc.).

Methods for separating heterogeneous systems are classified according to the size of the suspended particles of the dispersed phase, the difference in density between the dispersed and continuous phases, and the viscosity of the continuous phase. The following main separation methods are used: sedimentation, filtration, centrifugation, wet separation, and electrical cleaning.

Sedimentation is a separation process in which solid or liquid particles of the dispersed phase suspended in a liquid or gas are separated from the continuous phase under the action of gravity, centrifugal or electrostatic forces. Sedimentation under gravity alone is called settling.

Filtration is a separation process that uses a porous partition capable of passing a liquid or gas while retaining the solid particles suspended in it. Filtration is carried out under the action of pressure forces and is used for a finer separation of suspensions and dusts than sedimentation.

Centrifugation is the process of separating suspensions and emulsions under the action of centrifugal force.

Wet separation is the process of capturing particles suspended in a gas by means of a liquid.

Electrical cleaning is the purification of gases under the action of electrical forces.

Methods for separating inhomogeneous liquid and gas systems are based on the same principles, but the equipment used has a number of specific features.


2.4.1. Definition. Let us be given an inhomogeneous system of linear equations

Consider a homogeneous system

whose matrix of coefficients coincides with the matrix of coefficients of system (2.4.1). Then system (2.4.2) is called the reduced homogeneous system of system (2.4.1).

2.4.2. Theorem. The general solution of an inhomogeneous system is equal to the sum of some particular solution of the inhomogeneous system and the general solution of the reduced homogeneous system.

Thus, to find a general solution to the inhomogeneous system (2.4.1) it is sufficient:

1) Examine it for consistency. If it is consistent:

2) Find the general solution of the reduced homogeneous system.

3) Find any particular solution of the original (inhomogeneous) system.

4) Adding the found particular solution and the general solution of the reduced system, find the general solution of the original system.
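A minimal SymPy sketch of this four-step scheme (the coefficient matrix and right-hand side are made-up illustration data): find the nullspace of the reduced homogeneous system, one particular solution of the inhomogeneous system, and add them.

```python
import sympy as sp

# made-up consistent inhomogeneous system A*x = b with infinitely many solutions
A = sp.Matrix([[1, 1, 1, 1],
               [1, 2, 0, 3]])
b = sp.Matrix([4, 7])

# 1)-2) consistency holds here; the general solution of the reduced homogeneous system
null_basis = A.nullspace()

# 3) one particular solution: set the free unknowns to zero
syms = sp.symbols('x1:5')
particular = sp.Matrix(next(iter(sp.linsolve((A, b), list(syms)))))
particular = particular.subs({s: 0 for s in syms})

# 4) general solution = particular + linear combination of the nullspace vectors
C = sp.symbols('C1:%d' % (len(null_basis) + 1))
general = particular + sum((c * v for c, v in zip(C, null_basis)), sp.zeros(4, 1))
print(general.T)
```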

2.4.3. Exercise. Investigate the system for consistency and, if it is consistent, find its general solution as the sum of a particular solution and the general solution of the reduced homogeneous system.

Solution. a) To solve the problem, we use the above scheme:

1) We examine the system for consistency (by the method of bordering minors): the rank of the main matrix is 3 (see the solution of Exercise 2.2.5 a), and a non-zero minor of maximal order is composed of the elements of the 1st, 2nd and 4th rows and the 1st, 3rd and 4th columns. To find the rank of the extended matrix, we border this minor with the 3rd row and the 6th column of the extended matrix; the resulting minor equals 0. Hence the rank of the main matrix equals the rank of the extended matrix, both are 3, and the system is consistent. In particular, it is equivalent to the system

2) Let us find the general solution X_0 of the reduced homogeneous system:

X_0 = {(−2a − b; a; b; b; b) | a, b ∈ R}

(see solution to Exercise 2.2.5, a)).

3) Let us find any particular solution x_h of the original system. To do this, in system (2.4.3), which is equivalent to the original one, we set the free unknowns x_2 and x_5 equal to, for example, zero (this is the most convenient choice):

and solve the resulting system: x_1 = − , x_3 = − , x_4 = −5. Thus, (− ; 0; − ; −5; 0) is a particular solution of the system.

4) We find the general solution X_n of the original system:

X_n = {x_h} + X_0 = {(− ; 0; − ; −5; 0)} + {(−2a − b; a; b; b; b)} =

= {(− − 2a − b; a; − + b; −5 + b; b)}.

Comment. Compare the answer obtained with the second answer to Example 1.2.1 c). To obtain the answer in the first form for 1.2.1 c), the basic unknowns taken are x_1, x_3, x_5 (the minor formed by them is also non-zero), and the free ones are x_2 and x_4.

§3. Some applications.

3.1. On matrix equations. Recall that a matrix equation over a field F is an equation in which the unknown is a matrix over the field F.


The simplest matrix equations are equations of the form

AX = B,  XA = B  (2.5.1)

where A and B are given (known) matrices over the field F, and X denotes the matrices whose substitution turns equations (2.5.1) into true matrix equalities. In particular, the matrix method for solving definite systems reduces to solving a matrix equation.

In the case when the matrix A in equations (2.5.1) is non-singular, they have the solutions X = A⁻¹B and X = BA⁻¹, respectively.

In the case when at least one of the matrices on the left-hand side of equations (2.5.1) is singular, this method is no longer suitable, since the corresponding inverse matrix A⁻¹ does not exist. In this case, finding the solutions of equations (2.5.1) is reduced to solving systems of linear equations.

But first, let's introduce some concepts.

Let us call the set of all solutions of a system its general solution, and a single solution of an indefinite system a particular solution.
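The reduction mentioned above can be sketched with SymPy: treat the entries of X as unknowns, expand X·A = B entrywise and solve the resulting linear system (the matrices below are made-up illustration data):

```python
import sympy as sp

# made-up data: A is singular (det A = 0), so X = B*A**(-1) is not available
A = sp.Matrix([[1, 2],
               [2, 4]])
B = sp.Matrix([[5, 10],
               [1, 2]])

syms = sp.symbols('x0:4')
X = sp.Matrix(2, 2, syms)             # unknown 2x2 matrix
equations = list(X * A - B)           # entrywise conditions of X*A = B
solution = sp.solve(equations, syms, dict=True)
print(solution)   # a parametric family if solvable, [] if there are no solutions
```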

3.1.1. Example. Solve the matrix equations over the field R.

a) X = ; b) X = ; c) X = .

Solution. a) Since det A = 0, the formula X = BA⁻¹ is not suitable for solving this equation. If in the product XA = B the matrix A has 2 rows, then the matrix X has 2 columns. The number of rows of X must coincide with the number of rows of B, so X has 2 rows. Thus, X is a square matrix of the second order: X = . Let us substitute X into the original equation:

Multiplying the matrices on the left side of (2.5.2), we arrive at the equality

Two matrices are equal if and only if they have the same dimensions and their corresponding elements are equal. Therefore (2.5.3) is equivalent to the system

This system is equivalent to the system

Solving it, for example, by the Gauss method, we arrive at the set of solutions (5 − 2b, b, −2d, d), where b and d run over R independently of each other. Thus, X = .

b) Similarly to a), we have X = and obtain the corresponding system.

This system is inconsistent (check it out!). Therefore, this matrix equation has no solutions.

c) Let us denote this equation by AX = B. Since A has 3 columns and B has 2 columns, X is a matrix of dimension 3×2: X = . Therefore we have the following chain of equivalences:

We solve the last system using the Gaussian method (we omit comments)

Thus, we arrive at the system

whose solution is (11 + 8z, 14 + 10z, z, −49 + 8w, −58 + 10w, w), where z and w run over R independently of each other.

Answer: a) X = , where b, d ∈ R.

b) There are no solutions.

c) X = , where z, w ∈ R.

3.2. On the commutativity of matrices. In general, matrix multiplication is not commutative: if A and B are such that both AB and BA are defined, then, generally speaking, AB ≠ BA. But the example of the identity matrix E shows that commutativity is possible: AE = EA for any matrix A, as long as AE and EA are defined.

In this section we will consider problems of finding the set of all matrices that commute with a given one. Thus,

The unknowns x_1, y_2 and z_3 can take any values: x_1 = α, y_2 = β, z_3 = γ. Then

Thus, X = .

Answer. a) X = , where d is any number.

b) X is the set of matrices of the form , where α, β and γ are any numbers.
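A hedged SymPy sketch of this kind of computation: to describe all matrices commuting with a given A, treat the entries of X as unknowns and solve A·X − X·A = 0 (the matrix A below is made-up example data).

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [0, 1]])               # made-up matrix whose commuting matrices we seek

syms = sp.symbols('x0:4')
X = sp.Matrix(2, 2, syms)
commutator = A * X - X * A            # X commutes with A iff this is the zero matrix
solution = sp.solve(list(commutator), syms, dict=True)
print(solution)   # entries absent from the dict remain arbitrary parameters
```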


