The simplest event flows. Markov processes and chains. Elements of queuing theory and the modeling of such processes

Federal Agency for Education of the Russian Federation

FGOU SPO "Perevozsky Construction College"

Course work

in the discipline "Mathematical Methods"

on the topic "QS with Limited Waiting Time. Closed QS"

Introduction

1. Fundamentals of queuing theory

1.1 The concept of a random process

1.2 Markov random process

1.3 Event streams

1.4 Kolmogorov equations for state probabilities. Final state probabilities

1.5 Problems of queuing theory

1.6 Classification of queuing systems

2. Queuing systems with waiting

2.1 Single-channel QS with waiting

2.2 Multi-channel QS with waiting

3. Closed QS

Solution of the problem

Conclusion

Bibliography


In this course work we will look at various queuing systems (QS) and queuing networks.

A queuing system (QS) is understood as a dynamic system designed to efficiently service the flow of requests (service requirements) under restrictions on system resources.

QS models are convenient for describing individual subsystems of modern computing systems, such as the processor with main memory, an input-output channel, etc. A computing system as a whole is a set of interconnected subsystems whose interaction is probabilistic in nature. A request to solve a certain problem entering the computing system passes through a sequence of stages of computation, access to external storage devices, and input-output. After a certain sequence of such stages, whose number and duration depend on the complexity of the program, the request is considered served and leaves the computing system. Thus, the computing system as a whole can be represented by a set of QSs, each of which reflects the operation of an individual device or a group of similar devices that are part of the system.

A set of interconnected QSs is called a queuing network (stochastic network).

We will begin with the basics of the theory of QS, then examine in detail QS with waiting and closed QS. The course work also includes a practical part, in which we will learn how to apply the theory in practice.


Queuing theory is one of the branches of probability theory. This theory considers probabilistic problems and mathematical models (before that we considered deterministic mathematical models). Let us remind you that:

A deterministic mathematical model reflects the behavior of an object (system, process) under the assumption of full certainty in the present and future.

A probabilistic mathematical model takes into account the influence of random factors on the behavior of an object (system, process) and therefore evaluates the future from the standpoint of the probability of certain events.

That is, here, as for example in game theory, problems are considered under conditions of uncertainty.

Let us first consider some concepts that characterize “stochastic uncertainty”, when the uncertain factors included in the problem are random variables (or random functions), the probabilistic characteristics of which are either known or can be obtained from experience. Such uncertainty is also called “favorable”, “benign”.

Strictly speaking, random disturbances are inherent in any process. It is easier to give examples of a random process than a “non-random” process. Even, for example, the process of running a clock (it seems to be a strictly calibrated work - “works like a clock”) is subject to random changes (moving forward, lagging behind, stopping). But as long as these disturbances are insignificant and have little effect on the parameters of interest to us, we can neglect them and consider the process as deterministic, non-random.

Let there be some system S (a technical device, a group of such devices, a technological system: a machine, a section, a workshop, an enterprise, an industry, etc.). A random process takes place in the system S if it changes its state over time (passes from one state to another) in a previously unknown, random manner.

Examples:

1. System S is a technological system (a section of machines). Machines break down from time to time and are repaired. The process taking place in this system is random.

2. System S- an aircraft flying at a given altitude along a specific route. Disturbing factors - weather conditions, crew errors, etc., consequences - bumpiness, violation of the flight schedule, etc.

A random process occurring in a system is called a Markov process if, for any moment of time t0, the probabilistic characteristics of the process in the future depend only on its state at the moment t0 and do not depend on when and how the system reached this state.

Let the system be in a certain state S0 at the moment t0. We know the characteristics of the state of the system in the present and everything that happened at t < t0 (the history of the process). Can we predict the future, i.e., what will happen at t > t0? Not exactly, but some probabilistic characteristics of the process can be found for the future: for example, the probability that after some time the system S will be in state S1 or will remain in state S0, etc.

Example. System S is a group of aircraft participating in air combat. Let x be the number of "red" aircraft and y the number of "blue" aircraft. By the moment t0 the numbers of surviving (not shot down) aircraft are, respectively, x0 and y0. We are interested in the probability that at some moment t > t0 the numerical superiority will be on the side of the "reds". This probability depends on what state the system was in at the moment t0, and not on when and in what sequence the aircraft were shot down before the moment t0.

In practice, Markov processes are usually not found in their pure form. But there are processes for which the influence of "prehistory" can be neglected, and Markov models can be used when studying them (queuing theory also considers non-Markov queuing systems, but the mathematical apparatus describing them is much more complex).

In operations research, Markov random processes with discrete states and continuous time are of great importance.

A process is called a discrete-state process if its possible states S1, S2, ... can be enumerated in advance, and the transition of the system from state to state occurs "in a jump," almost instantly.

A process is called a continuous-time process if the moments of possible transitions from state to state are not fixed in advance but are uncertain, random, and can occur at any moment.

Example. Technological system (section) S consists of two machines, each of which can fail (fail) at a random moment in time, after which the repair of the unit immediately begins, which also continues for an unknown, random time. The following system states are possible:

S0 – both machines are working;

S1 – the first machine is being repaired, the second is working;

S2 – the second machine is being repaired, the first is working;

S3 – both machines are being repaired.

Transitions of the system S from state to state occur almost instantly, at the random moments when one of the machines fails or a repair is completed.

When analyzing random processes with discrete states, it is convenient to use a geometric scheme - state graph. The vertices of the graph are the states of the system. The arcs of the graph are possible transitions from state to state. For our example, the state graph is shown in Fig. 1.

Fig. 1. System state graph

Note. The transition from state S0 to S3 is not shown in the figure because the machines are assumed to fail independently of each other; we neglect the possibility of simultaneous failure of both machines.

An event stream is a sequence of homogeneous events following one after another at random moments in time.

In the previous example, this is a flow of failures and a flow of restorations. Other examples: the flow of calls at a telephone exchange, the flow of customers in a store, etc.

The flow of events can be visually represented by a series of points on the time axis Ot (Fig. 2).

Fig. 2. An event stream depicted on the time axis

The position of each point is random; only one realization of the flow is depicted here.

The intensity of a flow of events (λ) is the average number of events per unit of time.

Let's look at some properties (types) of event streams.

The stream of events is called stationary, if its probabilistic characteristics do not depend on time.

In particular, the intensity of the stationary flow is constant. The flow of events inevitably has condensations or rarefactions, but they are not of a regular nature, and the average number of events per unit of time is constant and does not depend on time.

A stream of events is called a flow without consequences if, for any two non-overlapping time intervals τ1 and τ2 (see Fig. 2), the number of events falling on one of them does not depend on how many events fall on the other. In other words, the events forming the flow appear at their moments in time independently of one another, each caused by its own causes.

The stream of events is called ordinary, if events appear in it one by one, and not in groups of several at once.

The stream of events is called simplest (or stationary Poisson), if it has three properties at once:

1) stationary;

2) ordinary;

3) has no consequences.

The simplest flow has the simplest mathematical description. It plays the same special role among flows as the normal distribution plays among other distribution laws: the superposition of a sufficiently large number of independent, stationary, and ordinary flows (of comparable intensity) results in a flow close to the simplest.

For the simplest flow with intensity λ, the interval T between neighboring events has the so-called exponential distribution with density

f(t) = λe^(−λt), t ≥ 0,

where λ is the parameter of the exponential law.

For a random variable T that has an exponential distribution, the mathematical expectation is the reciprocal of the parameter, and the standard deviation is equal to the mathematical expectation:

m_T = 1/λ, σ_T = 1/λ.
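These two properties are easy to check empirically. A minimal sketch in Python (λ = 2.0 is an arbitrary illustrative intensity, not a value from the text):

```python
import random
import statistics

# Sample inter-event intervals of a simplest (stationary Poisson) flow
# with intensity lam: they are exponential with mean 1/lam and a
# standard deviation equal to that mean.
lam = 2.0
random.seed(42)
intervals = [random.expovariate(lam) for _ in range(100_000)]

mean_t = statistics.fmean(intervals)   # should be close to 1/lam = 0.5
std_t = statistics.pstdev(intervals)   # should be close to the mean

print(f"mean ~ {mean_t:.3f}, std ~ {std_t:.3f} (theory: {1/lam:.3f} for both)")
```

With 100,000 samples both estimates land within a couple of percent of 1/λ.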

When considering Markov processes with discrete states and continuous time, it is assumed that all transitions of the system S from state to state occur under the influence of simple event flows (call flows, failure flows, recovery flows, etc.). If all event flows transferring the system S from state to state are the simplest, then the process occurring in the system will be Markovian.

So, let a system in state Si be subject to a simple flow of events. As soon as the first event of this flow occurs, the system "jumps" from state Si to state Sj (along the corresponding arrow on the state graph).

For clarity, on the state graph each arc is labeled with the intensity of the flow of events that moves the system along that arc (arrow); λij denotes the intensity of the flow of events transferring the system from state Si to Sj. Such a graph is called labeled. For our example, the labeled graph is shown in Fig. 3.

Fig. 3. Labeled state graph of the system

In this figure λ1, λ2 are the intensities of the failure flows, and μ1, μ2 are the intensities of the recovery (repair-completion) flows.

We assume that the average time to repair a machine does not depend on whether one machine or both are being repaired at once, i.e., each machine is repaired by a separate specialist.

Let the system be in state S0. Into state S1 it is transferred by the flow of failures of the first machine, whose intensity is

λ1 = 1/t1,

where t1 is the average failure-free operation time of the first machine.

From state S1 to S0 the system is transferred by the "repair completion" flow of the first machine, whose intensity is

μ1 = 1/t_rep1,

where t_rep1 is the average repair time of the first machine.

The intensities of the event flows transferring the system along all the other arcs of the graph are calculated in a similar way. Having at our disposal the labeled state graph of the system, we construct a mathematical model of the process.

Let the system S under consideration have n possible states S1, S2, ..., Sn. The probability of the i-th state, pi(t), is the probability that at time t the system is in state Si. Obviously, for any moment of time the sum of all state probabilities equals one:

p1(t) + p2(t) + ... + pn(t) = 1.

To find all state probabilities as functions of time, one composes and solves the Kolmogorov equations, a special type of equations in which the unknown functions are the state probabilities. The rule for composing these equations is given here without proof. But before introducing it, let us explain the concept of the final state probability.

What happens to the state probabilities as t → ∞? Will they tend to limits? If these limits exist and do not depend on the initial state of the system, they are called final state probabilities:

pi = lim(t→∞) pi(t), i = 1, 2, ..., n,

where n is the finite number of system states.

Final state probabilities are no longer variable quantities (functions of time) but constant numbers. Obviously,

p1 + p2 + ... + pn = 1.

The final probability of a state is, in essence, the average relative time the system spends in that state.

For example, let the system S have three states S1, S2, and S3 with final probabilities 0.2, 0.3, and 0.5 respectively. This means that in the limiting stationary regime the system spends on average 2/10 of its time in state S1, 3/10 in state S2, and 5/10 in state S3.

The rule for composing the Kolmogorov system of equations: on the left-hand side of each equation stands the final probability of the given state multiplied by the total intensity of all flows leading out of this state; on the right-hand side stands the sum of the products of the intensities of all flows entering this state and the probabilities of the states from which these flows come.

Using this rule, let us write the system of equations for our example:

(λ1 + λ2)·p0 = μ1·p1 + μ2·p2,
(λ2 + μ1)·p1 = λ1·p0 + μ2·p3,
(λ1 + μ2)·p2 = λ2·p0 + μ1·p3,
(μ1 + μ2)·p3 = λ2·p1 + λ1·p2.

This system of four equations with four unknowns, it would seem, can be solved completely. But these equations are homogeneous (they have no free term) and therefore determine the unknowns only up to an arbitrary factor. However, we can use the normalization condition p0 + p1 + p2 + p3 = 1 and solve the system with its help; in this case one (any) of the equations can be discarded, since it follows as a consequence of the others.

Continuation of the example. Let the flow intensities be λ1 = 1, λ2 = 2, μ1 = 2, μ2 = 3 (values consistent with the final probabilities obtained below).

We discard the fourth equation and add the normalization condition in its place:

p0 + p1 + p2 + p3 = 1.

Solving the system, we obtain: p0 = 0.40, p1 = 0.20, p2 = 0.27, p3 = 0.13.

That is, in the limiting stationary regime the system S will spend on average 40% of the time in state S0 (both machines operational), 20% in state S1 (the first machine being repaired, the second working), 27% in state S2 (the second machine being repaired, the first working), and 13% in state S3 (both machines being repaired). Knowing these final probabilities helps estimate the average efficiency of the system and the workload of the repair service.
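The balance equations above can be solved mechanically as a linear system. A sketch in Python with exact rational arithmetic; the intensities are the values assumed above (recovered from the quoted probabilities, since the source leaves them blank):

```python
from fractions import Fraction

l1, l2 = Fraction(1), Fraction(2)  # failure intensities of machines 1 and 2
m1, m2 = Fraction(2), Fraction(3)  # repair intensities of machines 1 and 2

# Balance equations for S0, S1, S2 (the S3 equation is redundant)
# plus the normalization condition p0 + p1 + p2 + p3 = 1.
# Each row holds the coefficients of p0..p3 and the right-hand side.
A = [
    [l1 + l2, -m1,         -m2,         Fraction(0), Fraction(0)],
    [-l1,     l2 + m1,     Fraction(0), -m2,         Fraction(0)],
    [-l2,     Fraction(0), l1 + m2,     -m1,         Fraction(0)],
    [Fraction(1), Fraction(1), Fraction(1), Fraction(1), Fraction(1)],
]

# Gauss-Jordan elimination in exact rational arithmetic.
n = 4
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    pivval = A[col][col]
    A[col] = [x / pivval for x in A[col]]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]

p = [row[n] for row in A]
print([round(float(x), 3) for x in p])  # -> [0.4, 0.2, 0.267, 0.133]
```

Rounded to two digits these are the 40%, 20%, 27%, 13% quoted in the text.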

Let the system S in state S0 (fully operational) bring an income of 8 conventional units per unit of time, in state S1 an income of 3 units, in state S2 an income of 5 units, and in state S3 no income. Then in the limiting stationary regime the average income per unit of time is W = 0.4·8 + 0.2·3 + 0.27·5 ≈ 5.15 conventional units.

Machine 1 is under repair for a fraction of time equal to p1 + p3 = 0.33; machine 2, for p2 + p3 = 0.40. An optimization problem arises: we may be able to reduce the average repair time of the first or second machine (or both), but this will cost a certain amount. The question is whether the increased income from faster repairs will pay for the increased repair costs. This will require solving a system of four equations with four unknowns.

Examples of queuing systems (QS): telephone exchanges, repair shops, ticket offices, information desks, machine tools and other technological systems, control systems of flexible production systems, etc.

Each QS consists of a certain number of service units, called service channels (machines, transport carts, robots, communication lines, cashiers, salespeople, etc.). Every QS is designed to serve some flow of requests (demands) arriving at random moments in time.

Service of the request continues for some, generally speaking, random time, after which the channel is freed and ready to receive the next request. The random nature of the flow of applications and service time leads to the fact that at some periods of time an excessively large number of applications accumulate at the input of the QS (they either queue up or leave the QS unserved). In other periods, the system will work with underload or be completely idle.

The QS operation process is a random process with discrete states and continuous time. The state of the QS changes abruptly when certain events occur (the arrival of a new application, the end of service, the moment when an application that is tired of waiting leaves the queue).

The subject of queuing theory is the construction of mathematical models connecting the given operating conditions of a QS (number of channels, their productivity, operating rules, nature of the request flow) with the characteristics of interest: the performance indicators of the QS. These indicators describe the ability of the QS to cope with the flow of requests. They may include: the average number of requests served per unit of time; the average number of busy channels; the average number of requests in the queue; the average waiting time for service, etc.

Mathematical analysis of the operation of a QS is greatly simplified if the process is Markovian, i.e., if the streams of events transferring the system from state to state are the simplest. Otherwise the mathematical description becomes very complicated and can rarely be brought to concrete analytical dependencies. In practice, non-Markov processes are approximately reduced to Markov ones. The mathematical apparatus that follows describes Markov processes.

First division (based on the presence of queues):

1. QS with failures;

2. QS with a queue.

In a QS with refusals, a request arriving at a moment when all channels are busy is rejected, leaves the QS, and is not served afterwards.

In a QS with a queue, a request arriving when all channels are busy does not leave but joins the queue and waits for an opportunity to be served.

QS with queues are subdivided into different types depending on how the queue is organized: limited or unlimited. Restrictions may concern the length of the queue, the waiting time, and the "service discipline."

So, for example, the following QSs are considered:

· QS with impatient requests (queue length and waiting time are limited);

· QS with priority service, i.e. some applications are processed out of turn, etc.

In addition, QSs are divided into open QSs and closed QSs.

In an open QS the characteristics of the request flow do not depend on the state of the QS itself (on how many channels are busy). In a closed QS they do. For example, if one worker services a group of machines that require adjustment from time to time, the intensity of the flow of "demands" from the machines depends on how many of them are already running and how many are awaiting adjustment.

The classification of QS is far from exhausted by the varieties listed above, but these will suffice for our purposes.

Let us consider the simplest QS with waiting: a single-channel system (n = 1) that receives a flow of requests with intensity λ; the service intensity is μ (i.e., on average, a continuously busy channel completes μ serviced requests per unit of time). A request arriving when the channel is busy joins the queue and waits for service.

System with limited queue length. Let us first assume that the number of places in the queue is limited by the number m, i.e., if a request arrives when there are already m requests in the queue, it leaves the system unserved. Later, by letting m tend to infinity, we will obtain the characteristics of a single-channel QS without restrictions on the queue length.

We will number the states of the QS according to the number of requests in the system (both being served and waiting):

S0 – the channel is free;

S1 – the channel is busy, there is no queue;

S2 – the channel is busy, one request is in the queue;

Sk – the channel is busy, k − 1 requests are in the queue;

S(m+1) – the channel is busy, m requests are in the queue.

The state-and-transition graph is shown in Fig. 4. All intensities of the event flows moving the system along the arrows from left to right are equal to λ, and from right to left, μ. Indeed, the flow of requests moves the system along the arrows from left to right (as soon as a request arrives, the system passes to the next state), while from right to left acts the "release" flow of the busy channel, which has intensity μ (as soon as the next request is served, the channel either becomes free or the number of requests in the queue decreases).

Fig. 4. Single-channel QS with waiting

The scheme shown in Fig. 4 is a birth-and-death scheme. Let us write the expressions for the limiting state probabilities:

p0 = [1 + λ/μ + (λ/μ)^2 + ... + (λ/μ)^(m+1)]^(−1), pk = (λ/μ)^k · p0 (5)

or, introducing the notation ρ = λ/μ:

p0 = [1 + ρ + ρ^2 + ... + ρ^(m+1)]^(−1), pk = ρ^k · p0, k = 1, ..., m + 1. (6)

The bracketed sum in (6) is a geometric progression with first term 1 and denominator ρ, from which we obtain:

p0 = (1 − ρ)/(1 − ρ^(m+2)), (7)

so that the limiting probabilities take the form:

pk = ρ^k · (1 − ρ)/(1 − ρ^(m+2)), k = 1, ..., m + 1. (8)

Expression (7) is valid only for ρ < 1 (at ρ = 1 it gives an indeterminacy of the form 0/0). The sum of a geometric progression with denominator ρ = 1 equals m + 2, and in this case all states are equally probable: pk = 1/(m + 2).

Let us determine the characteristics of the QS: the probability of refusal P_rej, the relative throughput q, the absolute throughput A, the average queue length r, the average number of requests associated with the system k, the average waiting time in the queue t_w, and the average time a request spends in the QS t_s.

Probability of refusal. Obviously, a request is refused only if the channel is busy and all m places in the queue are occupied:

P_rej = p(m+1) = ρ^(m+1) · (1 − ρ)/(1 − ρ^(m+2)). (9)

Relative throughput:

q = 1 − P_rej = 1 − ρ^(m+1) · (1 − ρ)/(1 − ρ^(m+2)), (10)

and the absolute throughput is A = λq.

Average queue length. Let us find the average number of requests in the queue as the mathematical expectation of the discrete random variable R, the number of requests in the queue.

With probability p2 there is one request in the queue, with probability p3 there are two, and in general with probability pk there are k − 1 requests in the queue, from which:

r = 1·p2 + 2·p3 + ... + (k − 1)·pk + ... + m·p(m+1). (11)

Since pk = ρ^k · p0, the sum in (11) can be expressed through the derivative of the sum of a geometric progression with respect to ρ. Substituting this expression into (11) and using p0 from (8), we finally obtain:

r = ρ^2 · [1 − ρ^m·(m + 1 − mρ)] / [(1 − ρ^(m+2))·(1 − ρ)]. (12)

The average number of requests in the system. Next we obtain a formula for the average number k of requests associated with the system (both waiting in the queue and being served). Since k = r + w, where w is the average number of requests under service, and r is already known, it remains to determine w. Since there is only one channel, the number of requests being served can be 0 (with probability p0) or 1 (with probability 1 − p0), from which:

w = 1 − p0 = ρ·(1 − ρ^(m+1))/(1 − ρ^(m+2)),

and the average number of requests associated with the QS is:

k = r + 1 − p0. (13)

Average waiting time of a request in the queue. Let us denote it t_w. If a request arrives when the channel is free (probability p0), it will not have to wait at all (the waiting time is zero). Most likely it will arrive while some request is being served: if there is no queue ahead of it (probability p1), it will wait on average 1/μ (the average service time of one request); with probability p2 there will be one more request in the queue ahead of it, and the average waiting time will be 2/μ, and so on.

If k = m + 1, i.e., a newly arriving request finds the channel busy and m requests in the queue (the probability of this is p(m+1)), it does not join the queue (and is not served), so its waiting time is zero. The average waiting time is therefore:

t_w = p1·(1/μ) + p2·(2/μ) + ... + pm·(m/μ);

substituting here the expressions for the probabilities (8), we get:

t_w = ρ^2 · [1 − ρ^m·(m + 1 − mρ)] / [λ·(1 − ρ^(m+2))·(1 − ρ)]. (14)

Here we used relations (11) and (12) (the derivative of a geometric progression), as well as p0 from (8). Comparing this expression with (12), we notice that

t_w = r/λ, (15)

i.e., the average waiting time equals the average number of requests in the queue divided by the intensity of the request flow.

Average time a request spends in the system. Let us denote by t_s the mathematical expectation of the time a request spends in the QS: the sum of the average waiting time in the queue and the average service time. If every request were served (q = 1), the average service time would be 1/μ; in general it is served only with probability q, so:

t_s = t_w + q/μ.

Example 1. A gas station is a service station with one service channel (one pump).

The area at the station allows no more than three cars to wait in line for refueling at the same time (m = 3). If there are already three cars in the queue, the next car arriving at the station does not join the queue. The flow of cars arriving for refueling has intensity λ = 1 (one car per minute). The refueling process lasts on average 1.25 minutes.

Define:

probability of failure;

relative and absolute throughput of the gas station;

average number of cars waiting to refuel;

average number of cars at a gas station (including those being serviced);

average waiting time for a car in queue;

average time a car spends at a gas station (including service).


We first find μ = 1/1.25 = 0.8 and the reduced intensity of the request flow ρ = λ/μ = 1/0.8 = 1.25.

According to formulas (8): p0 = (1 − ρ)/(1 − ρ^5) ≈ 0.122, and

P_rej = p4 = ρ^4 · p0 ≈ 0.297.

The probability of refusal is 0.297.

Relative throughput of the QS: q = 1 − P_rej = 0.703.

Absolute throughput of the QS: A = λq = 0.703 cars per minute.

We find the average number of cars in the queue using formula (12):

r = ρ^2 · [1 − ρ^3·(4 − 3ρ)] / [(1 − ρ^5)·(1 − ρ)] ≈ 1.56,

i.e., the average number of cars waiting in line at the gas station is 1.56.

Adding to this value the average number of cars being served,

w = 1 − p0 ≈ 0.88,

we get the average number of cars associated with the gas station: k ≈ 2.44.

The average waiting time of a car in the queue, by formula (15): t_w = r/λ ≈ 1.56 min.

Adding the average service time multiplied by the relative throughput, we get the average time a car spends at the gas station: t_s = t_w + q/μ ≈ 1.56 + 0.703·1.25 ≈ 2.44 min.
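The chain of formulas (8)-(15) used in this example can be collected into one small function. A sketch in Python (the function name and dictionary keys are mine, not from the source); the final call checks it against the gas-station numbers:

```python
def mm1m(lam, mu, m):
    """Characteristics of a single-channel QS with at most m waiting places."""
    rho = lam / mu
    # State probabilities p_0 .. p_{m+1}: p_k = rho^k * p0, formula (8).
    weights = [rho ** k for k in range(m + 2)]
    p0 = 1.0 / sum(weights)
    p = [wt * p0 for wt in weights]
    p_rej = p[m + 1]                       # (9): channel busy, queue full
    q = 1.0 - p_rej                        # (10): relative throughput
    A = lam * q                            # absolute throughput
    r = sum((k - 1) * p[k] for k in range(2, m + 2))  # (11): queue length
    k_sys = r + (1.0 - p[0])               # (13): requests in the system
    t_wait = r / lam                       # (15): mean wait in the queue
    t_sys = t_wait + q / mu                # mean time in the system
    return {"p_rej": p_rej, "q": q, "A": A, "r": r,
            "k": k_sys, "t_wait": t_wait, "t_sys": t_sys}

# Gas-station example: lam = 1 car/min, mean service 1.25 min, m = 3.
gas = mm1m(1.0, 0.8, 3)
print({key: round(v, 3) for key, v in gas.items()})
```

Summing the probabilities directly (instead of using the closed form (7)) also keeps the function valid in the degenerate case ρ = 1.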

Systems with unlimited waiting. In such systems the value of m is not limited, so the main characteristics can be obtained by passing to the limit m → ∞ in the previously obtained expressions (5), (6), etc.

Note that the denominator in the last formula of (6) is the sum of an infinite number of terms of a geometric progression. This sum converges when the progression is infinitely decreasing, i.e., when ρ < 1.

It can be proven that ρ < 1 is the condition under which a limiting steady-state regime exists in a QS with waiting; otherwise no such regime exists, and the queue grows without bound as t → ∞. Therefore, in what follows we assume ρ < 1.

As m → ∞, relations (8) take the form:

pk = ρ^k · (1 − ρ), k = 0, 1, 2, ... (16)

If there are no restrictions on the queue length, every request entering the system will be served, therefore q = 1 and A = λ.

We obtain the average number of requests in the queue from (12) at m → ∞:

r = ρ^2/(1 − ρ).

The average number of requests in the system, by formula (13) at m → ∞:

k = r + ρ = ρ/(1 − ρ).

The average waiting time is obtained from formula (14) at m → ∞:

t_w = ρ^2/(λ·(1 − ρ)).

Finally, the average time a request spends in the QS:

t_s = t_w + 1/μ = ρ/(λ·(1 − ρ)).
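These limiting formulas can be corroborated by a direct simulation of a single-channel QS with an unlimited queue. A sketch under illustrative parameters λ = 0.8, μ = 1 (my choice, not from the text), for which t_s = ρ/(λ(1 − ρ)) = 1/(μ − λ) = 5:

```python
import random

lam, mu = 0.8, 1.0
n = 200_000
random.seed(7)

# Arrival moments form the simplest flow: exponential inter-arrival times.
t = 0.0
arrivals = []
for _ in range(n):
    t += random.expovariate(lam)
    arrivals.append(t)

# Single channel: a request starts service when both it has arrived
# and the previous request has departed (Lindley-style recursion).
depart_prev = 0.0
total_time = 0.0
for a in arrivals:
    start = max(a, depart_prev)           # wait until the channel frees up
    depart_prev = start + random.expovariate(mu)
    total_time += depart_prev - a         # time this request spent in the QS

t_sys_sim = total_time / n
print(f"simulated t_s ~ {t_sys_sim:.2f}, theory {1.0 / (mu - lam):.2f}")
```

With 200,000 requests the simulated mean lands close to the theoretical value of 5 minutes, though at ρ = 0.8 the estimate converges slowly because waiting times are strongly correlated.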

System with limited queue length. Let us consider an n-channel QS with waiting, which receives a flow of requests with intensity λ; the service intensity of each channel is μ; the number of places in the queue is m.

The states of the system are numbered according to the number of requests associated with the system:

no queue:

S0 – all channels are free;

S1 – one channel is busy, the rest are free;

Sk – k channels are busy, the rest are not;

Sn – all n channels are busy, there is no queue;

there is a queue:

S(n+1) – all n channels are busy, one request is in the queue;

S(n+r) – all n channels are busy, r requests are in the queue;

S(n+m) – all n channels are busy, m requests are in the queue.

The state-and-transition graph is shown in Fig. 17. Each arrow is marked with the corresponding event-flow intensity. Along the arrows from left to right the system is always transferred by the same flow of requests with intensity λ; along the arrows from right to left it is transferred by the service-completion flow, whose intensity equals kμ when k channels are busy (and nμ once all n channels are busy).

Fig. 17. Multi-channel QS with waiting

The graph is typical of birth-and-death processes, for which the solution was obtained earlier. Let us write the expressions for the limiting state probabilities using the notation ρ = λ/μ and χ = ρ/n (here we use the expression for the sum of a geometric progression with denominator χ):

p0 = [1 + ρ/1! + ρ^2/2! + ... + ρ^n/n! + (ρ^n/n!)·χ·(1 − χ^m)/(1 − χ)]^(−1), (17)

pk = (ρ^k/k!)·p0 for 1 ≤ k ≤ n, p(n+r) = (ρ^n/n!)·χ^r·p0 for 1 ≤ r ≤ m.

Thus, all state probabilities have been found.

Let us determine the performance characteristics of the system.

Probability of refusal. An arriving request is refused if all n channels and all m places in the queue are occupied:

P_rej = p(n+m) = (ρ^n/n!)·χ^m·p0. (18)

The relative throughput complements the probability of refusal to one: q = 1 − P_rej.

Absolute throughput of the QS:

A = λq = λ·(1 − P_rej). (19)

Average number of busy channels. For QS with refusals, it coincided with the average number of applications in the system. For a QS with a queue, the average number of busy channels does not coincide with the average number of applications in the system: the latter value differs from the first by the average number of applications in the queue.

Let us denote the average number of busy channels by z. Each busy channel serves on average μ requests per unit of time, and the QS as a whole serves on average A requests per unit of time; dividing one by the other, we get:

z = A/μ.

The average number of requests in the queue can be calculated directly as the mathematical expectation of a discrete random variable:

r = 1·p(n+1) + 2·p(n+2) + ... + m·p(n+m). (20)

Here again the derivative of the sum of a geometric progression appears (see (11), (12)-(14) above); using the relation for it, we obtain:

r = (ρ^(n+1)/(n·n!)) · [1 − χ^m·(m + 1 − mχ)]/(1 − χ)^2 · p0.

Average number of requests in the system:

k = r + z.

Average waiting time for an application in the queue. Let's consider a number of situations that differ in the state in which a newly arrived request will find the system and how long it will have to wait for service.

If a request does not find all channels busy, it will not have to wait at all (the corresponding terms of the mathematical expectation are zero). If a request arrives when all n channels are busy and there is no queue (probability pn), it will wait on average 1/(nμ) (since the "release flow" of the n channels has intensity nμ). If a request finds all channels busy and one request ahead of it in the queue (probability p(n+1)), it will wait on average 2/(nμ) (1/(nμ) for each request ahead), and so on. If a request finds a queue of r requests ahead of it, it will wait on average (r + 1)/(nμ). If a newly arrived request finds m requests already in the queue (probability p(n+m)), it does not wait at all (but is not served). The average waiting time is found by multiplying each of these values by the corresponding probability:

t̄_q = (1/(nμ))·p_n + (2/(nμ))·p_(n+1) + … + (m/(nμ))·p_(n+m−1). (21)

Just as in the case of a single-channel QS with waiting, we note that this expression differs from the expression for the average queue length (20) only by the factor 1/λ, i.e.

t̄_q = r̄/λ.

The average residence time of a request in the system, as for a single-channel QS, differs from the average waiting time by the average service time multiplied by the relative throughput:

t̄_sys = t̄_q + q/μ.

Systems with unlimited queue length. We have considered an n-channel QS with waiting in which no more than m requests can be in the queue at the same time.

Just as before, to analyze systems without this restriction it is necessary to consider the relations obtained above as m → ∞.

We obtain the state probabilities from these formulas by passing to the limit as m → ∞. Note that the sum of the corresponding geometric progression converges for ρ/n < 1 and diverges for ρ/n > 1. Assuming that ρ/n < 1 and letting m tend to infinity, we obtain expressions for the limiting state probabilities:

p0 = [ Σ(k=0..n) ρ^k/k! + ρ^(n+1)/(n!·(n − ρ)) ]^(−1),
p_k = (ρ^k/k!)·p0 for k = 1, …, n,   p_(n+r) = (ρ^(n+r)/(n^r·n!))·p0 for r ≥ 1. (22)

Probability of failure, relative and absolute throughput. Since each request will sooner or later be served, the throughput characteristics of the QS are: P_rej = 0, q = 1, A = λ.

The average number of applications in the queue is obtained from (20):

r̄ = ρ^(n+1)·p0 / (n·n!·(1 − ρ/n)²),

and the average waiting time is from (21):

t̄_q = r̄/λ.

The average number of occupied channels, as before, is determined through the absolute throughput:

k̄ = A/μ = λ/μ = ρ.

The average number of requests associated with the QS is defined as the average number of requests in the queue plus the average number of requests under service (the average number of busy channels): L̄ = r̄ + ρ.

Example 2. A gas station with two pumps (n = 2) serves a flow of cars with intensity λ = 0.8 (cars per minute). The average service time per car is t_serv = 1/μ = 2 min, i.e. μ = 0.5.

There is no other gas station nearby, so the line of cars in front of the gas station can grow almost without limit. Find the characteristics of the QS.

Since ρ/n = 1.6/2 = 0.8 < 1, the queue does not grow without limit and it makes sense to speak of a limiting stationary operating mode of the QS. Using formulas (22) we find the state probabilities:

p0 = 1/9 ≈ 0.111, p1 = ρ·p0 ≈ 0.178, p2 = (ρ²/2)·p0 ≈ 0.142, etc.

We find the average number of busy channels by dividing the absolute throughput of the QS, A = λ = 0.8, by the service intensity μ = 0.5: k̄ = 1.6.

The probability of no queue at the gas station: p0 + p1 + p2 ≈ 0.43.

Average number of cars in the queue: r̄ ≈ 2.84.

Average number of cars at the gas station: L̄ = r̄ + k̄ ≈ 4.44.

Average waiting time in the queue: t̄_q = r̄/λ ≈ 3.6 min.

Average time a car spends at the gas station: t̄_sys = t̄_q + 1/μ ≈ 5.6 min.
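The gas-station numbers can be checked numerically. The sketch below is not from the original text: the function and variable names are ours, and it implements the standard M/M/n formulas for an n-channel QS with an unlimited queue.

```python
import math

# Characteristics of an n-channel QS with unlimited queue (M/M/n).
# Applied to the gas-station example: lam = 0.8 cars/min, mu = 0.5, n = 2.
def mmn_unlimited(lam, mu, n):
    rho = lam / mu
    chi = rho / n                    # load per channel, must be < 1 for a steady state
    p0 = 1.0 / (sum(rho**k / math.factorial(k) for k in range(n))
                + rho**n / (math.factorial(n) * (1 - chi)))
    Lq = p0 * rho**n * chi / (math.factorial(n) * (1 - chi) ** 2)  # mean queue length
    Wq = Lq / lam                    # mean waiting time in the queue
    L = Lq + rho                     # mean number of requests in the system
    W = Wq + 1 / mu                  # mean time in the system
    return p0, Lq, Wq, L, W

p0, Lq, Wq, L, W = mmn_unlimited(0.8, 0.5, 2)
print(round(p0, 3), round(Lq, 2), round(Wq, 2), round(L, 2), round(W, 2))
```

Running it reproduces p0 = 1/9, a queue of about 2.84 cars and a waiting time of about 3.6 minutes, matching the worked example.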

QS with limited waiting time. Previously we considered systems in which waiting was limited only by the queue length (the number m of requests simultaneously in the queue). In such a QS, a request that has joined the queue does not leave it until its service begins. In practice there are QS of other types, in which a request, after waiting some time, may leave the queue (so-called "impatient" requests).

Let's consider a QS of this type, assuming that the waiting time constraint is a random variable.

Let us assume that there is an n-channel QS with waiting in which the number of places in the queue is unlimited, but the time a request stays in the queue is a random variable with mean value t̄_w; thus, each request standing in the queue is subject to a kind of Poisson "flow of departures" with intensity ν = 1/t̄_w.

If this flow is Poisson, then the process occurring in the QS will be Markovian. Let us find the state probabilities for it. The numbering of system states is associated with the number of applications in the system - both being served and standing in queue:

no queue:

All channels are free;

One channel is busy;

Two channels are busy;

All n channels are busy;

there is a queue:

All n channels are busy, one request is in the queue;

All n channels are busy, r requests are in the queue, etc.

The graph of states and transitions of the system is shown in Fig. 23.

Fig. 23. QS with limited waiting time

Let us mark up this graph as before: all arrows leading from left to right carry the intensity λ of the flow of requests. For states without a queue, the arrows leading from them from right to left carry, as before, the total intensity of the service flow of all occupied channels. For states with a queue, the arrows leading from them from right to left carry the total intensity of the service flow of all n channels plus the corresponding intensity of the flow of departures from the queue: if there are r requests in the queue, the total intensity of departures is nμ + rν.

As can be seen from the graph, this is again a birth-and-death scheme; using the general expressions for the limiting state probabilities in this scheme (in abbreviated notation), we write:

p_k = (ρ^k/k!)·p0 for k ≤ n,   p_(n+r) = (ρ^n/n!)·λ^r/((nμ+ν)(nμ+2ν)···(nμ+rν))·p0,
p0 = [ Σ(k=0..n) ρ^k/k! + (ρ^n/n!)·Σ(r=1..∞) λ^r/((nμ+ν)···(nμ+rν)) ]^(−1). (24)
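The series in (24) has no closed form, but it converges quickly and is easy to evaluate numerically. The sketch below is ours, not the original text's: it truncates the infinite series after `rmax` terms and derives the queue and throughput characteristics discussed next.

```python
import math

# State probabilities of an n-channel QS with "impatient" requests:
# arrivals lam, service mu per channel, each queued request leaves with
# intensity nu.  The infinite series in (24) is truncated at rmax terms.
def impatient_qs(lam, mu, nu, n, rmax=2000):
    rho = lam / mu
    terms = [rho**k / math.factorial(k) for k in range(n + 1)]  # states 0..n
    tail, prod = [], terms[n]
    for r in range(1, rmax + 1):                  # states n+1, n+2, ...
        prod *= lam / (n * mu + r * nu)
        tail.append(prod)
    p0 = 1.0 / (sum(terms) + sum(tail))
    p = [t * p0 for t in terms] + [t * p0 for t in tail]
    Lq = sum(r * p[n + r] for r in range(1, rmax + 1))  # mean queue length
    q = 1 - nu * Lq / lam                         # relative throughput
    k_busy = lam * q / mu                         # mean number of busy channels
    return p0, Lq, q, k_busy

# illustrative (hypothetical) values: lam = 4, mu = 2, nu = 1, n = 3
print(impatient_qs(4, 2, 1.0, 3))
```

As a sanity check, letting ν tend to zero reproduces the "patient" formulas (22) whenever ρ/n < 1.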

Let us note some features of a QS with limited waiting compared to the previously considered QS with “patient” requests.

If the queue length is not limited and the requests are "patient" (do not leave the queue), the stationary limiting regime exists only for ρ/n < 1 (for ρ/n ≥ 1 the corresponding infinite geometric progression diverges, which physically corresponds to unlimited growth of the queue).

On the contrary, in a QS with "impatient" requests that sooner or later leave the queue, a steady-state service regime is always reached, regardless of the reduced intensity ρ = λ/μ of the flow of requests. This follows from the fact that the series in the denominator of formula (24) converges for any positive values of λ, μ and ν.

For a QS with “impatient” requests, the concept of “probability of failure” does not make sense - each request gets in line, but may not wait for service, leaving ahead of time.

Relative throughput, average number of requests in the queue. The relative throughput q of such a QS can be calculated as follows. Obviously, all requests will be served except those that leave the queue early. Let us calculate the average number of requests that leave the queue early. To do this, we first calculate the average number of requests in the queue:

r̄ = Σ(r=1..∞) r·p_(n+r). (25)

Each of these requests is subject to a "flow of departures" with intensity ν. This means that out of the average number r̄ of requests in the queue, on average ν·r̄ requests per unit of time will leave without being served, and in total, on average, λ − ν·r̄ requests per unit of time will be served. The relative throughput of the QS will be:

q = 1 − ν·r̄/λ.

We still obtain the average number of occupied channels by dividing the absolute throughput A by μ:

k̄ = A/μ = λ·(1 − ν·r̄/λ)/μ. (26)

Average number of requests in the queue. Relation (26) allows us to calculate the average number of requests in the queue without summing the infinite series (25). From (26) we obtain:

r̄ = (λ − μ·k̄)/ν,

and the average number of occupied channels k̄ entering this formula can be found as the mathematical expectation of a random variable Z taking the values 0, 1, 2, …, n with probabilities p0, p1, p2, …:

k̄ = 0·p0 + 1·p1 + … + (n−1)·p_(n−1) + n·(1 − p0 − p1 − … − p_(n−1)).

In conclusion, we note that if in formulas (24) we pass to the limit as ν → 0 (or, equivalently, as t̄_w → ∞), then formulas (22) are obtained, i.e. the "impatient" requests become "patient".

So far we have considered systems in which the incoming flow is in no way connected with the outgoing flow. Such systems are called open. In some cases, however, served requests return to the input after a delay. Such QS are called closed. A clinic serving a given area, or a team of workers assigned to a group of machines, are examples of closed systems.

In a closed QS, the same finite number of potential requirements circulates. Until a potential requirement has been realized as a service request, it is considered to be in a delay block. At the moment of implementation, it enters the system itself. For example, workers maintain a group of machines. Each machine is a potential requirement, turning into a real one at the moment of its breakdown. While the machine is working, it is in the delay block, and from the moment of breakdown until the end of the repair, it is in the system itself. Each worker is a service channel.

Let n be the number of service channels, s the number of potential requests (n < s), λ the intensity of the flow of requests from each potential source, and μ the service intensity; denote ρ = λ/μ.

The probability of system downtime is determined by the formula

P0 = [ Σ(k=0..n) C(s,k)·ρ^k + Σ(k=n+1..s) s!·ρ^k/((s−k)!·n!·n^(k−n)) ]^(−1).

Final probabilities of system states:

P_k = C(s,k)·ρ^k·P0 for k = 1, …, n;   P_k = s!·ρ^k·P0/((s−k)!·n!·n^(k−n)) for n < k ≤ s.

The average number of occupied channels is expressed through these probabilities

k̄ = P1 + 2P2 + … + n·(P_n + P_(n+1) + … + P_s), or

k̄ = P1 + 2P2 + … + (n−1)·P_(n−1) + n·(1 − P0 − P1 − … − P_(n−1)).

Using this we find the absolute throughput of the system:

A = μ·k̄,

as well as the average number of requests in the system:

M = s − A/λ = s − μ·k̄/λ.
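These closed-QS formulas follow directly from the birth-and-death scheme with arrival intensities (s − k)λ and service intensities min(k, n)μ, which makes them easy to compute. The sketch below is ours (names and the illustrative values of one worker and four machines are not from this passage):

```python
import math

# Limiting probabilities of a closed QS with n channels (workers) and
# s potential requests (machines), built from the birth-and-death scheme:
# lam_k = (s - k) * lam,  mu_k = min(k, n) * mu.
def closed_qs(lam, mu, n, s):
    weights = [1.0]
    for k in range(1, s + 1):
        weights.append(weights[-1] * (s - k + 1) * lam / (min(k, n) * mu))
    p0 = 1.0 / sum(weights)
    p = [w * p0 for w in weights]
    k_busy = sum(min(k, n) * p[k] for k in range(s + 1))  # mean busy channels
    A = mu * k_busy                                       # absolute throughput
    M = sum(k * p[k] for k in range(s + 1))               # mean requests in system
    return p0, k_busy, A, M

# illustrative values: one worker, four machines, lam = 0.5, mu = 1.25
p0, k_busy, A, M = closed_qs(0.5, 1.25, 1, 4)
print(round(p0, 2), round(A, 2), round(M, 2))
```

Note that the identity M = s − A/λ holds exactly here, which is a useful consistency check on the implementation.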

Example 1. The input of a three-channel QS with refusals receives a flow of requests with intensity λ = 4 requests per minute; the time for one channel to serve a request is t_serv = 1/μ = 0.5 min. From the point of view of QS throughput, is it advantageous to force all three channels to serve each request at once, with the average service time reduced threefold? How will this affect the average time a request spends in the QS?

Solution. We find the probability of downtime of a three-channel QS using the formula

ρ = λ/μ = 4/2 = 2, n = 3,

P0 = (1 + 2 + 2²/2! + 2³/3!)^(−1) = (1 + 2 + 2 + 4/3)^(−1) = 3/19 ≈ 0.158.

The probability of refusal is determined by the formula:

P_rej = P_n = (ρ^n/n!)·P0 = (8/6)·(3/19) = 4/19,

P_rej ≈ 0.21.

Relative system throughput:

q = P_serv = 1 − P_rej = 1 − 0.21 = 0.79.

Absolute system throughput:

A = λ·q ≈ 3.16.

The average number of occupied channels is determined by the formula:

k̄ = A/μ ≈ 1.58; the share of channels occupied by servicing is k̄/n ≈ 0.53.

The average time a request stays in the QS is found as the probability that the request is accepted for service multiplied by the average service time: t_QS = q·t_serv ≈ 0.395 min.

Combining all three channels into one, we get a single-channel system with parameters μ = 6, ρ = 2/3. For a single-channel system the probability of downtime is:

P0 = 1/(1 + ρ) = 1/(1 + 2/3) = 0.6,

probability of refusal:

P_rej = ρ·P0 = (2/3)·0.6 = 0.4,

relative throughput:

q = 1 − P_rej = 0.6,

absolute throughput:

A = λ·q = 2.4.

t_QS = q·t_serv = 0.6/6 = 0.1 min.

As a result of combining the channels into one, the system throughput decreased, since the probability of refusal increased. The average time a request spends in the system decreased.
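Both variants of Example 1 can be recomputed with the standard formulas for a QS with refusals (the Erlang loss model). This sketch is ours; the function name is not from the text:

```python
import math

# Characteristics of an n-channel QS with refusals (Erlang loss model).
def erlang_loss(lam, mu, n):
    rho = lam / mu
    p0 = 1.0 / sum(rho**k / math.factorial(k) for k in range(n + 1))
    p_rej = rho**n / math.factorial(n) * p0   # probability of refusal
    q = 1 - p_rej                             # relative throughput
    A = lam * q                               # absolute throughput
    return p0, p_rej, q, A

# three separate channels, each with mu = 2
p0, p_rej, q, A = erlang_loss(4, 2, 3)
# one combined channel working three times faster: mu = 6
p0_1, p_rej_1, q_1, A_1 = erlang_loss(4, 6, 1)
print(round(p_rej, 2), round(A, 2), "vs", round(p_rej_1, 2), round(A_1, 2))
```

The output reproduces the comparison above: the three-channel variant serves about 3.16 requests per minute against 2.4 for the combined channel.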

Example 2. The input of a three-channel QS with an unlimited queue receives a flow of requests with intensity λ = 4 requests per hour; the average time to serve one request is t_serv = 1/μ = 0.5 h. Find the system performance indicators.

For the system under consideration n = 3, λ = 4, μ = 1/0.5 = 2, ρ = λ/μ = 2, ρ/n = 2/3 < 1. We determine the probability of downtime by the formula:

P0 = [ Σ(k=0..n) ρ^k/k! + ρ^(n+1)/(n·n!·(1 − ρ/n)) ]^(−1),

P0 = (1 + 2 + 2 + 4/3 + 8/3)^(−1) = 1/9.

We find the average number of requests in the queue by the formula:

L_q = ρ^(n+1)·P0/(n·n!·(1 − ρ/n)²),

L_q = 16·(1/9)/(18·(1/9)) = 8/9 ≈ 0.89.

We calculate the average waiting time of a request in the queue by the formula:

t̄_q = L_q/λ = 2/9 ≈ 0.22 h.

Average time a request stays in the system:

T = t̄_q + t_serv = 0.22 + 0.5 = 0.72 h.
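A direct numerical check of Example 2, assuming the standard M/M/n formulas quoted above (the variable names are ours):

```python
import math

# Example 2: three channels, unlimited queue, lam = 4/h, mu = 2/h.
n, lam, mu = 3, 4.0, 2.0
rho, chi = lam / mu, lam / (n * mu)
p0 = 1.0 / (sum(rho**k / math.factorial(k) for k in range(n + 1))
            + rho**(n + 1) / (n * math.factorial(n) * (1 - chi)))
Lq = rho**(n + 1) * p0 / (n * math.factorial(n) * (1 - chi) ** 2)
tq = Lq / lam              # mean waiting time in the queue, hours
T = tq + 1 / mu            # mean time in the system, hours
print(round(p0, 4), round(Lq, 3), round(tq, 2), round(T, 2))
```

This reproduces P0 = 1/9, L_q = 8/9 and T ≈ 0.72 h.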

Example 3. Three hairdressers work in a salon, and there are 3 chairs in the waiting room. The flow of clients has intensity λ = 12 clients per hour. The average service time is t_serv = 20 min. Determine the relative and absolute throughput of the system, the average number of occupied chairs, the average queue length, and the average time a client spends in the salon.

For this problem n = 3, m = 3, λ = 12, μ = 3, ρ = 4, ρ/n = 4/3. The probability of downtime is determined by the formula:

P0 = [ Σ(k=0..n) ρ^k/k! + (ρ^n/n!)·Σ(i=1..m) (ρ/n)^i ]^(−1),

P0 ≈ 0.012.

The probability of denial of service is determined by the formula:

P_rej = P_(n+m) = (ρ^(n+m)/(n!·n^m))·P0,

P_rej ≈ 0.307.

Relative system throughput, i.e. the probability of service:

q = P_serv = 1 − P_rej = 1 − 0.307 = 0.693.

Absolute throughput:

A = λ·q ≈ 8.31 clients per hour.

Average number of busy channels:

k̄ = A/μ ≈ 2.77.

The average queue length is determined by the formula:

L_q = (ρ^(n+1)/(n·n!))·P0·[1 − (ρ/n)^m·(m + 1 − m·ρ/n)]/(1 − ρ/n)²,

L_q ≈ 1.56.

Average waiting time for service in the queue:

t̄_q = L_q/λ ≈ 0.13 h.

Average number of clients in the QS:

M = L_q + k̄ ≈ 4.33.

Average time a client stays in the QS:

T = M/λ ≈ 0.36 h.
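Example 3 can also be computed straight from the birth-and-death scheme rather than the closed formulas, which sidesteps the special case ρ/n = 1. The sketch below is ours:

```python
# Example 3 via the birth-and-death scheme of an n-channel QS with at most
# m waiting places: n = 3 hairdressers, m = 3 chairs, lam = 12/h, mu = 3/h.
n, m, lam, mu = 3, 3, 12.0, 3.0
weights = [1.0]
for k in range(1, n + m + 1):
    weights.append(weights[-1] * lam / (min(k, n) * mu))
p0 = 1.0 / sum(weights)
p = [w * p0 for w in weights]
p_rej = p[n + m]                                 # all channels and chairs busy
q = 1 - p_rej                                    # probability of service
A = lam * q                                      # clients served per hour
k_busy = A / mu                                  # mean busy hairdressers
Lq = sum(r * p[n + r] for r in range(1, m + 1))  # mean queue length
M = Lq + k_busy                                  # mean number of clients inside
T = M / lam                                      # mean time in the salon, hours
print(round(p0, 3), round(p_rej, 3), round(Lq, 2), round(T, 2))
```

The numbers agree with the example: P0 ≈ 0.012, P_rej ≈ 0.307, L_q ≈ 1.56, T ≈ 0.36 h.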

Example 4. A worker operates 4 machines. Each machine fails with intensity λ = 0.5 failures per hour; the average repair time is t_rep = 1/μ = 0.8 h. Determine the throughput of the system.

This problem considers a closed QS with μ = 1.25 and ρ = 0.5/1.25 = 0.4. The probability of worker downtime is determined by the formula:

P0 = [ Σ(k=0..4) 4!·ρ^k/(4−k)! ]^(−1) ≈ 0.15.

The probability that the worker is busy is P_busy = 1 − P0 ≈ 0.85. The throughput is A = (1 − P0)·μ ≈ 0.85·1.25 ≈ 1.06 machines per hour.

Task:

Two workers operate a group of four machines. Stops of a working machine occur on average after 30 minutes. The average setup time is 15 minutes. Operating and setup time are distributed according to an exponential law.

Find the average share of free time for each worker and the average operating time of the machine.

Find the same characteristics for a system in which:

a) each worker is assigned two machines;

b) two workers always service the machine together, and with double intensity;

c) the only faulty machine is serviced by both workers at once (with double intensity), and when at least one more faulty machine appears, they begin to work separately, each serving one machine (first describe the system in terms of the processes of death and birth).

Solution:

The following states of system S are possible:

S0 – all machines are operational;

S1 – machine 1 is being repaired, the rest are operational;

S2 – machine 2 is being repaired, the rest are operational;

S3 – machine 3 is being repaired, the rest are operational;

S4 – machine 4 is being repaired, the rest are operational;

S5 – machines 1 and 2 are being repaired, the rest are operational;

S6 – machines 1 and 3 are being repaired, the rest are operational;

S7 – machines 1 and 4 are being repaired, the rest are operational;

S8 – machines 2 and 3 are being repaired, the rest are operational;

S9 – machines 2 and 4 are being repaired, the rest are operational;

S10 – machines 3 and 4 are being repaired, the rest are operational;

S11 – machines 1, 2 and 3 are being repaired, machine 4 is operational;

S12 – machines 1, 2 and 4 are being repaired, machine 3 is operational;

S13 – machines 1, 3 and 4 are being repaired, machine 2 is operational;

S14 – machines 2, 3 and 4 are being repaired, machine 1 is operational;

S15 – all machines are being repaired.

System state graph...

This system S is an example of a closed system, since each machine is a potential requirement, turning into a real one at the moment of its breakdown. While the machine is working, it is in the delay block, and from the moment of breakdown until the end of the repair, it is in the system itself. Each worker is a service channel.

If a worker is busy, he services μ machines per unit of time; the system throughput is:

Answer:

The average share of free time for each worker is ≈ 0.09.

Average machine operating time ≈ 3.64.

a) Each worker is assigned two machines.

The probability of worker downtime is determined by the formula:

Probability of worker employment:

If a worker is busy, he services μ machines per unit of time; the system throughput is:

Answer:

The average share of free time for each worker is ≈ 0.62.

Average machine operating time ≈ 1.52.

b) Two workers always service the machine together, and with double intensity.

c) The only faulty machine is serviced by both workers at once (with double intensity), and when at least one more faulty machine appears, they begin to work separately, each serving one machine (first describe the system in terms of the processes of death and birth).

Comparison of the answers:

The most effective way to organize workers at machines will be the initial version of the task.

Examples of the simplest queuing systems (QS) were discussed above. Here "simplest" does not mean "elementary": mathematical models of these systems are applicable and successfully used in practical calculations.

The applicability of queuing theory is determined by the following factors:

1. The number of requests in the system (which is considered as a QS) must be quite large (the flow must be of a mass character).

2. All applications received at the input of the QS must be of the same type.

3. To calculate using formulas, you need to know the laws that determine the receipt of applications and the intensity of their processing. Moreover, the order flows must be Poisson.

4. Structure of the QS, i.e. the set of incoming requirements and the sequence of application processing must be strictly fixed.

5. It is necessary to exclude subjects from the system or describe them as requirements with a constant processing intensity.

To the restrictions listed above, we can add one more, which has a strong impact on the dimension and complexity of the mathematical model.

6. The number of priorities used should be minimal. The priorities of applications must be constant, i.e. they cannot change during processing within the QS.

In the course of the work the main goal was achieved: the main material on the topics "QS with limited waiting time" and "Closed QS", set by the teacher of the academic discipline, was studied. We also became acquainted with the application of the acquired knowledge in practice, i.e. consolidated the material covered.


1) http://www.5ballov.ru.

2) http://www.studentport.ru.

3) http://vse5ki.ru.

4) http://revolution..

5) Fomin G.P. Mathematical methods and models in commercial activities. M: Finance and Statistics, 2001.

6) Gmurman V.E. Theory of Probability and Mathematical Statistics. M: Higher School, 2001.

7) Sovetov B.A., Yakovlev S.A. Systems modeling. M: Higher School, 1985.

8) Lifshits A.L. Statistical modeling of QS. M., 1978.

9) Ventzel E.S. Operations research. M: Nauka, 1980.

10) Ventzel E.S., Ovcharov L.A. Probability theory and its engineering applications. M: Nauka, 1988.

The QS process is a random process. A random (probabilistic or stochastic) process is understood as a process of changing the state of a system over time in accordance with probabilistic laws.

A process is called a process with discrete states if its possible states S1, S2, S3... can be listed in advance, and transitions of the system from state to state occur instantly (jump). A process is called a process with continuous time if the moments of possible transitions of the system from state to state are not fixed in advance, but are random.

The QS operation process is a random process with discrete states and continuous time.

A random process is called a Markov or random process without consequences if, for any moment of time t0, the probabilistic characteristics of the process in the future depend only on its state at a given moment t0 and do not depend on when and how the system came to this state.

An example of a Markov process: system S is a taxi meter. The state of the system at moment t is characterized by the number of kilometers traveled by the car up to this moment. Let the counter show S0 at time t0. The probability that at the moment t>t0 the counter will show this or that number of kilometers (more precisely, the corresponding number of rubles) S1 depends on S0, but does not depend on at what points in time the counter readings changed before the moment t0.

In some cases, the prehistory of the processes under consideration can simply be neglected and Markov models can be used to study them.

When analyzing random processes with discrete states, it is convenient to use a geometric scheme - the so-called state graph. Typically, system states are depicted by rectangles (circles), and possible transitions from state to state are depicted by arrows (oriented arcs) connecting the states (Fig. 1).

Figure 1 - State graph

For a mathematical description of a Markov random process with discrete states and continuous time flowing in a QS, we will get acquainted with one of the important concepts of probability theory - the concept of a flow of events.

The flow of events is understood as a sequence of homogeneous events following one after another at some random moments in time

Examples could be:

  • - flow of calls at the telephone exchange;
  • - flow of switching on of devices in the household electrical network;
  • - flow of freight trains arriving at the railway station:
  • - flow of computer malfunctions (failures);
  • - a stream of shots aimed at the target.

A flow is characterized by its intensity λ: the frequency of occurrence of events, or the average number of events entering the QS per unit of time.

The flow of events is called regular if events follow one another at certain intervals. Such a flow is relatively rare in practice, but is of some interest as an extreme case.

A flow of events is called stationary if its probabilistic characteristics do not depend on time. In particular, the intensity of a stationary flow is constant: λ(t) = λ = const.

A flow of events is called a flow without aftereffect if, for any two non-overlapping intervals of time, the number of events falling on one of them does not depend on the number of events falling on the other. For example, the flow of passengers entering the subway has practically no aftereffect. The flow of customers leaving a counter with their purchases, on the other hand, does have an aftereffect (if only because the time interval between individual customers cannot be less than the minimum service time for each of them).

A flow of events is called ordinary if the probability of two or more events occurring in a small (elementary) time interval is negligible compared to the probability of one event occurring. In other words, a stream of events is ordinary if events appear in it singly and not in groups.

A flow of events is called simplest (or stationary Poisson) if it is simultaneously stationary, ordinary and without aftereffect.

The simplest flow arises as a limit in the theory of random processes just as naturally as the normal distribution arises in probability theory: when a sufficiently large number n of independent, stationary and ordinary flows of comparable intensity are superposed, the result is a flow close to the simplest, with intensity λ equal to the sum of the intensities of the component flows.
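The additivity of intensities under superposition is easy to observe in simulation. The sketch below is ours (the chosen intensities are illustrative): it merges three independent Poisson flows and measures the intensity of the resulting flow.

```python
import random

# Superpose three independent simplest (Poisson) flows and check that the
# merged flow has intensity equal to the sum of the component intensities.
random.seed(1)
rates = [0.5, 1.0, 1.5]           # intensities of the component flows
horizon = 10000.0                 # observation window

events = []
for lam in rates:
    t = random.expovariate(lam)   # exponential gaps generate a Poisson flow
    while t < horizon:
        events.append(t)
        t += random.expovariate(lam)

merged_rate = len(events) / horizon
print(round(merged_rate, 2))      # close to 0.5 + 1.0 + 1.5 = 3.0
```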

Let us consider the simplest flow of events on the time axis as an unlimited sequence of random points. (Fig. 2)

Figure 2 - Flow of events

It can be shown that for the simplest flow the number m of events (points) falling on an arbitrary time interval of length τ is distributed according to Poisson's law:

P_m = ((λτ)^m/m!)·e^(−λτ),

for which the mathematical expectation of the random variable equals its variance: a = λτ.

In particular, the probability that no event occurs during time τ (m = 0) is P_0 = e^(−λτ).

Let us find the distribution of the time interval T between two arbitrary neighboring events of the simplest flow.

In accordance with the formula above, the probability that none of the subsequent events occurs during an interval of length t is

P(T > t) = e^(−λt),

and the probability of the opposite event, i.e. the distribution function of the random variable T, is

F(t) = P(T < t) = 1 − e^(−λt).

The probability density of the random variable is the derivative of its distribution function:

f(t) = λ·e^(−λt).

The distribution given by this probability density (or distribution function) is called exponential. Thus, the time interval between two neighboring events has an exponential distribution, for which the mathematical expectation equals the standard deviation:

m_T = σ_T = 1/λ,

i.e. the reciprocal of the flow intensity λ.

The most important property of the exponential distribution (inherent in the exponential distribution alone) is the following: if an interval of time distributed according to the exponential law has already lasted for some time τ, this in no way affects the distribution law of the remaining part of the interval T − τ: it is the same as the distribution law of the entire interval T.

In other words, for the interval T between two successive neighboring events of a flow with exponential distribution, any information about how long this interval has already lasted does not affect the distribution law of the remaining part.

For the simplest flow with intensity λ, the probability that at least one flow event occurs in an elementary (small) time interval Δt is approximately λ·Δt.
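Both the mean interval 1/λ and the memorylessness property can be checked by simulation. This sketch is ours (parameter values are illustrative): it compares the mean of all intervals with the mean of the residual intervals after a fixed elapsed time φ.

```python
import random

# For the simplest flow the interval between neighboring events is
# exponential with mean 1/lam, and the residual interval after a fixed
# elapsed time phi has the same mean (memorylessness).
random.seed(2)
lam, N, phi = 2.0, 200000, 0.3

intervals = [random.expovariate(lam) for _ in range(N)]
mean_all = sum(intervals) / N

residuals = [t - phi for t in intervals if t > phi]
mean_residual = sum(residuals) / len(residuals)

print(round(mean_all, 3), round(mean_residual, 3))  # both close to 1/lam = 0.5
```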

Study questions:

Basic concepts of Markov processes.

Event streams.

Poisson flow.

Discrete Markov chains.

Ergodic and absorbing chains.

Continuous Markov chains.

Applications of Markov processes.

Theory of Markov random processes.

Probability theory has a very interesting history. The roots of science go far back centuries; in the most ancient states - China, India, Egypt, Greece, some elements of probability theory were used for population censuses and even to determine the number of enemy troops.

The founder of the theory is considered to be mathematician, physicist and philosopher B. Pascal. He first took up the theory of probability under the influence of questions posed to him by one of the courtiers of the French court - the Chevalier de Mere, a brilliant gentleman, philosopher, art critic and gambler. But the game was also a reason for deep reflection. De Mere asked B. Pascal two famous questions:

1. How many times do you need to throw two dice so that the number of times two sixes fall out at once is more than half of the total number of throws?

2. How to fairly divide the money bet by two players if for some reason they stopped the game prematurely?

These problems served as the reason for the initial introduction of the concept of “mathematical expectation” and the formulation of the basic theorems of addition and multiplication of probabilities. Practical applications were soon identified: insurance, demography, etc.

Jacob Bernoulli discovered the law of large numbers, which made it possible to establish a connection between the probability of any random event and the frequency of its occurrence, observed directly from experience.

Further advances in the development of probability theory are associated with P. Laplace, C. Gauss, S. Poisson and others.

In Russia, at the beginning of the 19th century, the mathematician V.Ya. Bunyakovsky wrote the first textbook on probability theory and developed its terminology in its modern form. P.L. Chebyshev, A.A. Markov and A.M. Lyapunov introduced the concept of a "random variable", with which a new branch of probability theory, the theory of random processes, began to develop.

Basic concepts of Markov processes

The functioning of various systems is a sequence of transitions from one state to another. If the state of the system changes randomly over time, then the sequence of states can be considered as a random process.

The system is called discrete state system, if the set of its states is finite, and transitions from one state to another are carried out abruptly.

The transition process is called chain.

Definition of a Markov chain

Consider a physical system having a finite number K of possible phase states. Let the system, depending on the intervention of chance, change its phase state step by step (at moments of time t0, t1, …), that is, let transitions Q0 → Q1 → … take place, where Qn = Q(tn) is the state of the system after n steps and Q0 = Q(t0) is the initial state.

Here each Qn is one of the K possible phase states.

The probability of a transition at the m-th step is the conditional probability P(Q_m | Q_0, …, Q_(m−1)). Thus, to calculate the joint probabilities P(Q0, …, Qn) it is necessary to set the initial state of the system and indicate the physical mechanism of the change of states, which allows one to calculate the transition probabilities.

1. A special (degenerate) case of a Markov chain. The change of all states occurs independently, that is, the probability of any state at the mth step does not depend on what states the system was in at previous times.

– sequence of independent tests.

2. The probability of the phase state Qn at moment tn depends only on the state the system was in at the immediately preceding moment t(n−1), and does not depend on the states the system was in at earlier moments t0, …, t(n−2).

3. A Markov chain of order m: the probability of a new state depends only on the m states of the system immediately preceding it.

The time a system stays in a certain state can be either discrete or continuous. Depending on this, systems with discrete or continuous time are distinguished.

The simplest probabilistic characteristic of a random process is the set of state probabilities P1(t), P2(t), …, Pn(t), where Pi(t) is the probability that the system is in state Si at moment t. Normalization condition: P1 + P2 + … + Pn = 1.

If during operation the system is in state Si, then in general the probability of its transition to state Sj depends not only on Si but also on the preceding states.

A random process occurring in a system is called Markov (a process without aftereffect) if, for any moment t0, the probability of any state of the system in the future (for t > t0) depends only on its state in the present (at t = t0) and does not depend on how and by what path the system came to this state (i.e. does not depend on the previous history).

Event Streams

The transition of the system to a certain state is event.

The sequence of system transitions to the state S i represents stream of events.

The stream of events is called ordinary, if the event in it occurs one at a time.

The time intervals T1, T2, …, Tn of an ordinary flow can be the same or different, discrete or continuous, random or non-random.

If the time intervals T1, T2, …, Tn are non-random values, the flow is called regular or deterministic, and such a flow is described by specifying the values T1, T2, …, Tn.

If T1, T2, …, Tn are random, the flow is called random, and it is characterized by the distribution law of the quantities T1, T2, …, Tn.

In practice one often encounters systems in which Ti is a continuous random variable. In these cases the flow can be described by the probability density f(t1, t2, …, tn), where ti is a specific value of the random variable Ti.

A flow is called stationary if its probabilistic characteristics do not change over time, i.e. the probability that a particular number m of events falls on an interval of the time axis of length τ depends only on the length τ of the interval and does not depend on where on the time axis the interval is located.

The intensity (density) of the flow of events (the average number of events per unit time) is constant.

If the time intervals Ti are dependent random variables, then such a flow is called a flow with aftereffect, and its state depends probabilistically on the preceding states.

If the random variables Ti are independent, then such a flow is called a flow with limited aftereffect, and the probability density of this flow is equal to the product of the individual probability densities:

f(t1, t2, …, tn) = f1(t1)·f2(t2)·…·fn(tn). (6.5)

A flow with limited aftereffect can be stationary and homogeneous in time. In this case all intervals between adjacent events have the same distribution law:

fi(ti) = f(ti). (6.6)

A flow is called a flow without aftereffect if, for any non-overlapping time intervals, the number of events falling on one of them does not depend on how many events fall on the others.

Poisson flow

A flow of random events is called Poisson if the number m of flow events falling on any interval of length τ is distributed according to Poisson's law:

P_m = (a^m/m!)·e^(−a), (6.7)

where a is the average number of events on the interval τ.

A Poisson flow is stationary if the event density λ is constant; then the average number of events is a = λτ. Otherwise the flow is non-stationary.

A random flow of events that possesses the properties of stationarity and ordinariness and has no aftereffect is called the simplest, and it is a stationary Poisson flow.

Sifted streams

The process of transitions of a system operating in discrete time can be regarded as driven by a discrete flow of events, characterized by events occurring at the moments t1, t2, ..., tn with probabilities Pi.

Sifting a flow of events S1, S2, ..., Sn, which occur at certain moments of time with probabilities p1, p2, ..., pn, means transforming these probabilities into p̄1, p̄2, ..., p̄n. If the flow is stationary, these probabilities are equal: p̄1 = p̄2 = ... = 1 − p.

Here p is the sifting constant, determined either by the influence of some destabilizing factor or by the exclusion of certain events from the set of states of the system.

Examples of flows with limited aftereffect are Erlang flows. They are formed by regular sifting of the simplest flow, where regular sifting means a procedure that excludes a fixed number of consecutive events of the original flow. If every odd event is removed from the simplest flow, the remaining events form a second-order Erlang flow. The time interval between neighboring events of such a flow is the sum of two independent random variables τ′ and τ″, each distributed according to the exponential law (τ = τ′ + τ″).

If only every third event of the simplest flow is kept, we obtain a third-order Erlang flow, and so on. In general, a k-th order Erlang flow is obtained from the simplest flow by discarding (k − 1) consecutive events and keeping every k-th event.
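The regular-sifting construction above can be sketched in code. This is a minimal illustration (the values λ = 5 and n = 10000 are chosen arbitrarily): the simplest flow is built from exponential intervals, and keeping every second event yields a second-order Erlang flow whose mean interval is 2/λ.

```python
import random

def simplest_flow(lam, n, seed=0):
    """Generate n event moments of the simplest (stationary Poisson) flow
    with intensity lam: independent exponential inter-event intervals."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(lam)
        times.append(t)
    return times

def erlang_flow(times, k):
    """Regular sifting: keep every k-th event of the original flow,
    producing an Erlang flow of order k."""
    return times[k - 1::k]

base = simplest_flow(lam=5.0, n=10000)
erl2 = erlang_flow(base, 2)          # every odd event of the base flow is dropped

# Each interval of the Erlang-2 flow is a sum of two independent
# exponential intervals, so its mean should be close to 2/lam = 0.4.
gaps = [b - a for a, b in zip(erl2, erl2[1:])]
mean_gap = sum(gaps) / len(gaps)
```

The same helper with `k = 3` gives a third-order Erlang flow, matching the general rule above.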

Discrete Markov chains

A Markov random process with discrete states and discrete time describes a system S with a finite number of states, in which transitions are possible only at the fixed moments t1, t2, ..., tk. The process occurring in such a system can be represented as a chain of random events

S1(0) → S2(1) → ... → Si(n) → ... → Sn(k).

This sequence is called a discrete Markov chain if for each step n = 1, 2, ..., k the probability of a transition Si → Sj does not depend on how the system arrived at the state Si. Each transition of the system corresponds to a conditional probability

P(Sj | Si). (6.9)

For each step number n, the possible transitions form a complete group of events.

A discrete Markov chain is called homogeneous if the transition probabilities do not depend on the step number. A complete description of such a chain is given by the square matrix of transition probabilities

        | P11  P12  ...  P1n |
Pij =   | P21  P22  ...  P2n |
        | ...  ...  ...  ... |
        | Pn1  Pn2  ...  Pnn |

and the vector of the initial probability distribution over all the states at the moment t = 0:

P(0) = (P1(0), P2(0), ..., Pn(0)). (6.10)

The transition probabilities corresponding to impossible transitions are equal to 0, and the probabilities along the main diagonal correspond to the fact that the system has not changed its state.

A discrete Markov chain is called heterogeneous if the transition probabilities change with the step number. To describe such a chain it is necessary to specify k transition probability matrices Pij (k is the number of steps considered). The main task of the analysis of Markov processes is to determine the probabilities of all states of the system after any number of steps. If the matrix of transition probabilities and the vector of the initial distribution are known, the state probabilities after each step are determined by the total probability formula:

P(A) = Σi P(Bi) · P(A | Bi). (6.11)

After the first step the probability Pi can be determined as follows:

Pi(1) = Σj Pj(0) · Pji, (6.12)

where Pj(0) is the vector of initial state probabilities and Pji are the elements of the matrix of conditional transition probabilities. Similarly, after the second step:

Pi(2) = Σj Pj(1) · Pji = Σj Pj(0) · Pji(2). (6.13)

After k steps:

Pi(k) = Σj Pj(k−1) · Pji = Σj Pj(0) · Pji(k), (6.14)

where Pji(k) is the probability that the system passes from state Sj to state Si in k steps.

If a transition from state Si to state Sj in k steps is possible, then Pji(k) > 0. If the reverse transition is also possible in the same number of steps, the state Si is called recurrent: for recurrent states, the probability that the system leaves the state Si and returns to it within k steps equals 1.

The state Si is non-recurrent (transient) if this probability is less than 1.

States Si and Sj are called communicating if the transitions Si → Sj and Sj → Si are possible in a finite number of steps.
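The recurrences (6.12)-(6.14) amount to repeated multiplication of the state-probability row vector by the transition matrix. A minimal sketch for a homogeneous chain; the 3-state matrix and initial vector below are invented purely for illustration:

```python
import numpy as np

# Hypothetical transition matrix of a homogeneous 3-state chain
# (each row sums to 1, i.e. forms a complete group of events).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.4, 0.6]])

p0 = np.array([1.0, 0.0, 0.0])   # system starts in state S1 at t = 0

def state_probs(p0, P, k):
    """State probabilities after k steps: p(k) = p(0) P^k,
    i.e. k applications of the total probability formula (6.12)."""
    p = p0.copy()
    for _ in range(k):
        p = p @ P
    return p

p5 = state_probs(p0, P, 5)       # distribution over S1, S2, S3 after 5 steps
```

Because each row of P is a probability distribution, the resulting vector stays non-negative and sums to 1 at every step.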

In previous lectures we learned how to simulate the occurrence of random events, that is, to play out which of the possible events occurs and in what quantity. To determine this, one needs to know the statistical characteristics of the occurrence of events; such a characteristic may be the probability of an event occurring, or the probability distribution over different events if there are infinitely many types of them.

But often it is also important to know when exactly this or that event will occur in time.

When there are many events and they follow one another, they form a flow. Note that the events must be homogeneous, that is, similar to one another in some respect: for example, the appearance at a gas station of drivers wanting to refuel their cars. Homogeneous events thus form a certain series. It is assumed that the statistical characteristic of this phenomenon, the intensity of the flow of events, is given. The intensity of the flow indicates how many such events occur on average per unit of time. Exactly when each specific event occurs must be determined by modeling methods. What matters is that when we generate, for example, 1000 events over 200 hours, their rate will be approximately equal to the average intensity 1000/200 = 5 events per hour, a statistical value characterizing the flow as a whole.

The flow intensity is, in a sense, the mathematical expectation of the number of events per unit time. In reality it may turn out that 4 events appear in one hour and 6 in another, although on average there are 5 events per hour, so one value is not enough to characterize the flow. The second quantity, characterizing how widely the events are spread around the mathematical expectation, is, as before, the variance. It is this value that determines the randomness of the occurrence of an event, the weak predictability of the moment of its occurrence. We will talk about this value in the next lecture.

A flow of events is a sequence of homogeneous events occurring one after another at random time intervals. On the time axis these events look as shown in Fig. 28.1.


An example of a flow of events is the sequence of moments when planes arriving at an airport touch down on the runway.

The flow intensity λ is the average number of events per unit of time. The flow intensity can be estimated experimentally by the formula λ = N/Tn, where N is the number of events that occurred during the observation time Tn.

If the interval between events τj is constant or is determined by some formula of the form τj = f(τj−1), the flow is called deterministic. Otherwise the flow is called random.

Random flows are classified as:

  • ordinary: the probability of the simultaneous occurrence of two or more events is zero;
  • stationary: the frequency of occurrence of events is constant, λ(t) = const;
  • without aftereffect: the probability of an event occurring does not depend on the moments of occurrence of previous events.

Poisson flow

It is customary to take the Poisson flow as the reference flow in modeling.

A Poisson flow is an ordinary flow without aftereffect.

As previously stated, the probability that m events occur during the time interval (t0, t0 + τ) is determined by Poisson's law:

Pm = (a^m / m!) · e^(−a),

where a is the Poisson parameter.

If λ(t) = const, this is a stationary Poisson flow (the simplest flow); in this case a = λ · τ. If λ = var(t), this is a non-stationary Poisson flow.

For the simplest flow, the probability that m events occur during time τ equals:

Pm = ((λτ)^m / m!) · e^(−λτ).

The probability that no event occurs (that is, none, m = 0) during time τ equals:

P0 = e^(−λτ). (*)
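Both probabilities follow directly from the Poisson law; the short numerical check below uses arbitrary illustrative values of λ and τ and verifies that P0 = e^(−λτ) and that the probabilities over all m sum to one.

```python
import math

def poisson_pm(m, lam, tau):
    """Probability of exactly m events of the simplest flow
    with intensity lam on an interval of length tau."""
    a = lam * tau                      # Poisson parameter a = lambda * tau
    return a ** m / math.factorial(m) * math.exp(-a)

lam, tau = 5.0, 0.5                    # illustrative values
p0 = poisson_pm(0, lam, tau)           # should equal exp(-lam * tau)
total = sum(poisson_pm(m, lam, tau) for m in range(100))
```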

Fig. 28.2 illustrates the dependence of P0 on time. Obviously, the longer the observation time, the less likely it is that no event occurs. Moreover, the higher the value of λ, the steeper the graph, that is, the faster the probability decreases: if the intensity of events is high, the probability that no event occurs falls off quickly with observation time.

The probability that at least one event occurs is

P≥1 = 1 − P0 = 1 − e^(−λτ),

because P≥1 + P0 = 1 (either at least one event appears, or none does; there is no third possibility).

From the graph in Fig. 28.3 it is clear that the probability of at least one event occurring tends to unity with time: with sufficiently long observation the event will certainly happen sooner or later. The longer we observe (the greater τ), the greater the probability that the event occurs; the graph of the function increases monotonically.

The greater the intensity of the event (the greater λ), the sooner the event occurs and the faster the function tends to unity. On the chart the parameter λ is represented by the steepness of the curve (the slope of the tangent).

If you increase λ , then when observing an event during the same time τ , the probability of an event occurring increases (see Fig. 28.4). Obviously, the graph starts from 0, since if the observation time is infinitesimal, then the probability that the event will occur during this time is negligible. And vice versa, if the observation time is infinitely long, then the event will definitely occur at least once, which means that the graph tends to a probability value equal to 1.

By studying this law one can determine that mx = 1/λ and σ = 1/λ, that is, for the simplest flow mx = σ. The equality of the mathematical expectation and the standard deviation means that this flow is a flow without aftereffect. The spread of such a flow is large: physically this means that the moment of occurrence of an event (the distance between events) is poorly predictable and random, lying mostly within the interval mx − σ < τj < mx + σ, although on average it is approximately τj = mx = Tn/N. An event can occur at any moment, but mostly within the range ±σ around mx. Fig. 28.5 shows the possible positions of event 2 relative to the time axis for a given σ. In this case one says that the first event does not influence the second, the second does not influence the third, and so on: there is no aftereffect.

In its meaning P0 plays the role of r (see lecture 23, Modeling a random event. Modeling a complete group of incompatible events); therefore, expressing τ from the formula (*), we finally obtain for the interval between two random events:

τ = −(1/λ) · ln(r),

where r is a random number uniformly distributed from 0 to 1, taken from the RNG, and τ is the interval between random events (the random variable τj).

Example 1. Consider the flow of products arriving at a technological operation. Products arrive randomly, on average eight pieces per day (flow intensity λ = 8/24 [units/hour]). It is necessary to simulate this process for Tn = 100 hours. The mean interval is m = 1/λ = 24/8 = 3 hours, that is, on average one part every three hours; note that σ = 3 as well. Fig. 28.6 presents an algorithm that generates the stream of random events.

Fig. 28.7 shows the result of the algorithm: the moments in time when parts arrived at the operation. As can be seen, over the period Tn = 100 hours the production unit processed N = 33 products. If the algorithm is run again, N may turn out to be, for example, 34, 35 or 32, but on average, over K runs of the algorithm, N will be close to 33.3. If one computes the intervals between the event moments, their mean will be about 3 hours and their spread about σ = 3.
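The generation loop described above can be sketched as follows, under the example's assumptions (λ = 8/24 events per hour, Tn = 100 hours), accumulating event moments from the inverse-transform formula τ = −ln(r)/λ:

```python
import math
import random

def poisson_stream(lam, t_end, seed=1):
    """Event moments of the simplest flow with intensity lam on [0, t_end].
    Each interval is drawn as tau = -ln(1 - r) / lam with r uniform on [0, 1)."""
    rng = random.Random(seed)
    t, moments = 0.0, []
    while True:
        r = rng.random()
        t += -math.log(1.0 - r) / lam   # interval to the next event
        if t > t_end:
            break
        moments.append(t)
    return moments

# Flow intensity 8 products per day = 8/24 per hour, observed for 100 hours.
arrivals = poisson_stream(8.0 / 24.0, 100.0)
n = len(arrivals)                        # on average about 100/3 ≈ 33 events
```

Repeated runs with different seeds give N fluctuating around 33.3, as described above.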

Modeling non-ordinary flows of events

If the flow is known to be non-ordinary, then in addition to the moment of occurrence of an event one must also model the number of events that appear at that moment. For example, railway cars arrive at a station as part of trains at random moments of time (an ordinary flow of trains), but a train may contain a different (random) number of cars. In this case the flow of cars is spoken of as a flow of non-ordinary events.

Assume Mk = 10 and σ = 4 (that is, on average, in 68 cases out of 100 a train has from 6 to 14 cars) and that the number of cars is distributed according to the normal law. At the place marked (*) in the previous algorithm (see Fig. 28.6) one must insert the fragment shown in Fig. 28.8.
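The idea can be sketched in code as follows, a minimal illustration assuming a normal batch size with m = 10, σ = 4, truncated at zero since a train cannot bring a negative number of cars (the intensity and observation time are arbitrary):

```python
import random

def train_flow(lam, t_end, m_cars=10.0, sigma_cars=4.0, seed=2):
    """Non-ordinary flow: trains arrive as a simplest flow with intensity lam;
    each train brings a normally distributed number of cars (rounded,
    truncated at zero).  Returns a list of (moment, number_of_cars)."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam)                    # next train arrival
        if t > t_end:
            break
        cars = max(0, round(rng.gauss(m_cars, sigma_cars)))
        events.append((t, cars))
    return events

ev = train_flow(lam=0.5, t_end=200.0)   # about 100 trains over 200 hours
```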

Example 2. The solution of the following problem is very useful in production. What is the average daily downtime of the equipment of a technological node if the node processes each product for a random time, specified by the intensity λ2 of a flow of random events? It has been established experimentally that products are brought for processing, also at random moments, by a flow with intensity λ1, in batches of about 8 pieces, the batch size fluctuating randomly according to the normal law with m = 8, σ = 2 (see lecture 25). Before the simulation (T = 0) there were no products in stock. It is necessary to simulate this process for Tn = 100 hours.

Fig. 28.9 presents an algorithm that randomly generates the flow of arrivals of batches of products for processing and the flow of random events of batches leaving processing.

Fig. 28.10 shows the result of the algorithm: the moments in time when parts arrived at the operation and the moments when parts left it. The third line shows how many parts were waiting in the processing queue (lying in the node's warehouse) at different points in time.

By noting the times when the processing node was idle waiting for the next part (see the time sections highlighted in red in Fig. 28.10), we can calculate the total downtime of the node over the whole observation time and then the average downtime per day. For this realization the average daily downtime is calculated as:

Tidle.avg = 24 · (t1 + t2 + t3 + t4 + ... + tN) / Tn.

Exercise 1. By varying the value of σ, establish the dependence Tidle.avg(σ). Setting the cost of node downtime at 100 euros/hour, determine the enterprise's annual losses from the irregularity of suppliers' deliveries. Propose wording for the clause of the enterprise's contract with suppliers "The amount of the fine for delay in delivery of products".

Task 2. By changing the initial stock in the warehouse, determine how the enterprise's annual losses from irregular supplier deliveries depend on the amount of inventory held by the enterprise.

Simulation of non-stationary event streams

In some cases the flow intensity changes over time: λ(t). Such a flow is called non-stationary. For example, the average number of ambulances per hour leaving a station in response to calls from the population of a large city may vary throughout the day; it is known, for example, that the largest number of calls falls on the intervals from 23:00 to 01:00 and from 05:00 to 07:00, while in other hours it is about half as large (see Fig. 28.11).

In this case the distribution λ(t) can be specified by a graph, a formula, or a table, and in the algorithm shown in Fig. 28.6, at the place marked (**), one must insert the fragment shown in Fig. 28.12.
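One standard way to generate such a non-stationary flow is the thinning method (a technique of my choosing, not named in the text): generate the simplest flow at the peak intensity λmax and keep each event with probability λ(t)/λmax. The intensity profile below is an assumption modeled on the ambulance example: 10 calls/hour in the busy intervals 23:00-01:00 and 05:00-07:00, 5 calls/hour otherwise.

```python
import random

def intensity(t):
    """Assumed illustrative daily profile lambda(t), t in hours."""
    h = t % 24.0
    return 10.0 if (h >= 23.0 or h < 1.0 or 5.0 <= h < 7.0) else 5.0

def nonstationary_stream(t_end, lam_max=10.0, seed=4):
    """Thinning: draw events at the constant peak rate lam_max, then
    accept each event at moment t with probability intensity(t)/lam_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > t_end:
            break
        if rng.random() < intensity(t) / lam_max:
            events.append(t)
    return events

calls = nonstationary_stream(48.0)   # two simulated days of calls
```

The accepted events form a Poisson flow with the time-varying intensity λ(t), which is exactly what the fragment at (**) must achieve.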

