Process Management Subsystem. Process Concept and States. Process Context and Descriptor.

The task management subsystem ensures the passage of processes through the computer. Depending on the state of a process, one resource or another must be allocated to it. For example, a new process must be placed in memory by allocating address space to it, and must be included in the list of tasks competing for processor time.

One of the main subsystems of a multiprogram OS that directly affects the functioning of the computer is the process and thread management subsystem. It handles the creation and destruction of processes and threads, and also distributes processor time among the processes and threads that exist simultaneously in the system.

When multiple tasks run on a system simultaneously, threads are created and executed asynchronously, yet they may need to interact, for example when exchanging data. Thread synchronization is therefore one of the important functions of the process and thread management subsystem.

Communication between processes is carried out using shared variables and special basic operations called primitives.

The process and thread management subsystem is able to perform the following operations on processes:

– creating (spawning)/destroying a process;

– suspending/resuming a process;

– blocking/waking a process;

– starting a process;

– changing the priority of a process.

The process and thread management subsystem is responsible for providing processes with the necessary resources. The OS maintains special information structures in memory, in which it records which resources are allocated to each process. A resource can be assigned to a process for sole use or for shared use with other processes. Some of the resources are allocated to a process when it is created, and some are allocated dynamically based on requests at runtime. Resources can be assigned to a process for its entire life or only for a certain period. When performing these functions, the process management subsystem interacts with other OS subsystems responsible for resource management, such as the memory management subsystem, input/output subsystem, and file system.

1. Creating and deleting processes and threads

Creating a process means first of all creating a process descriptor (handle): one or more information structures containing all the information about the process that the operating system needs to manage it. This issue was discussed in detail earlier; here we simply recall that such information may include, for example, the process identifier, data on the location of the executable module in memory, and the degree of privilege of the process (priority and access rights).
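By way of illustration, a process descriptor can be pictured as a structure of roughly the following kind. This is a minimal sketch in C; the field set and names are assumptions made for the example, and real descriptors, such as Linux's task_struct, contain far more information.

    /* A minimal sketch of a process descriptor (PCB). Field names are
     * illustrative; a real descriptor holds much more information. */
    #include <stddef.h>
    #include <stdint.h>

    typedef enum { PROC_READY, PROC_RUNNING, PROC_WAITING } proc_state_t;

    typedef struct process_descriptor {
        int           pid;            /* process identifier                */
        proc_state_t  state;          /* current scheduling state          */
        int           priority;       /* scheduling priority               */
        uint32_t      access_rights;  /* degree of privilege               */
        void         *code_base;      /* location of the executable module */
        size_t        code_size;      /* size of the loaded module         */
        struct process_descriptor *parent;  /* the spawning ("ancestor") process */
    } process_descriptor_t;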

Creating a process involves loading the codes and data of the process's executable program from disk into RAM. In this case, the process management subsystem interacts with the memory management subsystem and the file system. In a multi-threaded system, when a process is created, the OS creates at least one thread of execution for each process. When creating a thread, just like when creating a process, the OS generates a special information structure - a thread descriptor, which contains the thread identifier, data on access rights and priority, thread state, etc. Once created, a thread (or process) is in a state of readiness for execution (or in an idle state in the case of a special-purpose OS).

Tasks are created and deleted in response to the corresponding requests from users or from other tasks. A task can spawn a new task: in many systems, a thread can ask the OS to create so-called child threads. The spawning task is called the "ancestor" or "parent", and the child task the "descendant" or "child". An "ancestor" can suspend or delete its child task, while a "child" cannot manage its "ancestor".

Different operating systems structure the relationship between child threads and their parents differently. In some operating systems, their execution is synchronized (after the parent thread terminates, all its children are removed from execution), in others, children are executed asynchronously with respect to the parent thread.
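In UNIX-like systems, for example, a process spawns a "descendant" with the fork() system call. The short example below shows the usual pattern; here the parent synchronizes with its child by waiting for it with waitpid(), which corresponds to the synchronized variant described above.

    /* POSIX example: a parent ("ancestor") creates and waits for a child. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* spawn a child process */
        if (pid < 0) {
            perror("fork");              /* creation failed */
            exit(EXIT_FAILURE);
        } else if (pid == 0) {
            /* child ("descendant"): executes asynchronously to the parent */
            printf("child:  pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
            _exit(EXIT_SUCCESS);
        }
        /* parent ("ancestor"): may wait for, suspend, or kill the child */
        waitpid(pid, NULL, 0);
        printf("parent: child %d has terminated\n", (int)pid);
        return 0;
    }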

After a process completes, the OS "cleans up the traces" of its presence in the system: it closes all files the process worked with and frees the areas of RAM allocated for the process's code, data, and system information structures. OS queues and resource lists that contained references to the terminated process are corrected.

2. Scheduling and dispatching processes and threads

The scheduling strategy determines which processes are selected for execution in order to achieve the desired goal. Strategies can differ, for example:

– if possible, finish calculations in the same order in which they were started;

– give preference to shorter processes;

– provide all users (user tasks) with the same services, including the same waiting time.

During the life of a process, the execution of its threads can be interrupted and continued many times.

The transition from the execution of one thread to another is carried out as a result of scheduling and dispatching.

Thread scheduling is implemented based on information stored in the process and thread descriptors. Scheduling may take into account thread priority, waiting time in the queue, accumulated execution time, intensity of I/O access, and other factors. The OS schedules threads for execution regardless of whether they belong to the same process or to different ones. Scheduling is understood as the task of selecting a set of processes such that they conflict as little as possible during execution and use the computing system as efficiently as possible.

Different sources interpret the concepts of "scheduling" and "dispatching" differently. Some authors divide scheduling into long-term (global) and short-term (dynamic, i.e., the current most efficient distribution), calling the latter dispatching. According to other sources, dispatching is the implementation of a decision made at the scheduling stage. We will stick with this second interpretation.

Scheduling includes solving two problems:

determining the moment of time for changing the active thread;

selecting a thread to execute from a queue of ready threads.

There are many scheduling algorithms that solve these problems in different ways. It is the specifics of scheduling that determine the character of an operating system. We will look at scheduling algorithms a little later.

In most operating systems, scheduling is carried out dynamically, i.e. decisions are made during work based on an analysis of the current situation. Threads and processes appear at random times and terminate unpredictably.

Static scheduling can be used in specialized systems in which the entire set of simultaneously executed tasks is known in advance (real-time systems). The scheduler builds a schedule based on knowledge of the characteristics of the task set. This schedule is then used by the operating system for dispatching.
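Such a precomputed schedule can be as simple as a table that a cyclic executive replays. The following is a minimal sketch; the task names, table contents, and cycle count are invented for illustration, and in a real RTOS each slot would be started on a timer tick.

    /* Sketch of a static, table-driven (cyclic) schedule. */
    #include <stddef.h>
    #include <stdio.h>

    typedef void (*task_fn)(void);

    static void read_sensors(void)   { puts("read_sensors");   }
    static void update_control(void) { puts("update_control"); }
    static void log_state(void)      { puts("log_state");      }

    /* The schedule is computed in advance from the known task set. */
    static task_fn schedule[] = { read_sensors, update_control,
                                  read_sensors, log_state };

    int main(void) {
        for (int cycle = 0; cycle < 2; cycle++)        /* two major cycles */
            for (size_t i = 0; i < sizeof schedule / sizeof schedule[0]; i++)
                schedule[i]();                         /* replay the table */
        return 0;
    }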

Dispatching consists in implementing the decision found as a result of scheduling, i.e., in switching the processor from one thread to another. Dispatching comes down to the following:

saving the context of the current thread that is being switched out;

loading the context of the thread selected as a result of scheduling;

launching the new thread for execution.

The context of a thread reflects, firstly, the state of the computer hardware at the moment of interruption (the value of the program counter, the contents of the general-purpose registers, the processor operating mode, flags, interrupt masks, and other parameters) and, secondly, the parameters of the operating environment (references to open files, data on unfinished I/O operations, error codes of the system calls executed by the thread, etc.).

In the context of a thread, we can distinguish a part that is common to all threads of a given process (links to open files), and a part that relates only to a given thread (the contents of registers, the program counter, the processor mode). For example, in the NetWare environment there are three types of contexts - global context (process context), thread group context, and individual thread context. The relationship between the data of these contexts is similar to the relationship between global and local variables in a program. The hierarchical organization of contexts speeds up thread switching: when switching from a thread of one group to a thread of another group within the same process, the global context does not change, but only the group context changes. Global context switching occurs only when moving from the thread of one process to the thread of another process.
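Putting the dispatching steps and the hierarchical context organization together, a context switch can be pictured schematically as follows. The register set, structure layout, and function names are illustrative assumptions; in a real kernel the save and load routines are architecture-specific assembly code, and the stubs below only mark where they would act.

    /* Schematic sketch of a thread context switch with a hierarchical
     * (per-process and per-thread) context, as described above. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t pc, sp, gpr[16], flags; } hw_context_t;

    struct process { int id; };          /* global (process) context  */

    typedef struct {
        hw_context_t    ctx;             /* individual thread context */
        struct process *owner;           /* owning process            */
        const char     *name;
    } thread_t;

    /* Stubs standing in for architecture-specific assembly routines. */
    static void save_context(hw_context_t *c) { (void)c; }
    static void load_context(hw_context_t *c) { (void)c; }
    static void switch_address_space(struct process *p) {
        printf("  global context switch to process %d\n", p->id);
    }

    static thread_t *current;

    static void dispatch(thread_t *next) {
        save_context(&current->ctx);            /* save the current thread   */
        if (next->owner != current->owner)      /* the global context changes */
            switch_address_space(next->owner);  /* only across processes      */
        load_context(&next->ctx);               /* load the selected thread  */
        current = next;
        printf("now running %s\n", current->name);
    }

    int main(void) {
        struct process p1 = {1}, p2 = {2};
        thread_t a = { .owner = &p1, .name = "A" };
        thread_t b = { .owner = &p1, .name = "B" };
        thread_t c = { .owner = &p2, .name = "C" };
        current = &a;
        dispatch(&b);   /* same process: no global context switch */
        dispatch(&c);   /* other process: global context switches */
        return 0;
    }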

3. Scheduling algorithms

From the point of view of solving the first scheduling problem (selecting the moment of time to change the active thread), scheduling algorithms are divided into two large classes - preemptive and non-preemptive algorithms:

non-preemptive – the active thread can execute until it itself transfers control to the system so that the system can select another ready thread from the queue;

preemptive – the operating system decides to change the executing task and switches the processor to another thread.

The main difference between these scheduling algorithms is the degree of centralization of the thread scheduling mechanism. Let's consider the main characteristics, advantages and disadvantages of each class of algorithms.

Non-preemptive algorithms. An application program, having received control from the OS, itself determines the moment at which its next execution cycle completes, and only then transfers control to the OS via some system call. Consequently, control of the application by the user is lost for an arbitrary period of time. Developers need to take this into account and build applications so that they work "in parts", periodically interrupting themselves and transferring control to the system; that is, during development they also perform the functions of a scheduler.

Advantages of this approach:

– interruption of a thread at an inconvenient moment is excluded;

– the problem of simultaneous use of data is solved, because during each execution cycle a task uses data exclusively and can be sure that no one else will change it;

– higher speed of switching from thread to thread.

The disadvantages are more difficult program development and increased demands on the programmer's qualifications, as well as the possibility of one thread seizing the processor if it accidentally or deliberately loops.

Preemptive algorithms – a cyclic, or round-robin, type of scheduling in which the operating system itself decides whether to interrupt the active application and switches the processor from one task to another in accordance with one criterion or another. In systems with such algorithms, the programmer does not have to worry about his application being executed simultaneously with other tasks. Examples include the operating systems UNIX, Windows NT/2000, and OS/2. Algorithms of this class are oriented toward high-performance execution of applications.

Preemptive algorithms can be based on the concept of quantization or on a priority mechanism.

Algorithms based on quantization. Each thread is given a limited, continuous slice of processor time, a quantum (its value should not be less than 1 ms; usually it is several tens of milliseconds). A thread is moved from the running state to the ready state when its quantum is exhausted. Quanta can be the same for all threads or different.

When allocating quanta to a thread, different principles can be used: the quanta can be of fixed size or can change during different periods of the thread's life. For example, for a specific thread the first quantum can be quite large, and each subsequent quantum allocated to it can be shorter (down to specified limits). This favors shorter threads, while long-running tasks recede into the background. Another principle is based on the observation that processes that frequently perform I/O operations do not fully use the time slices allocated to them. To compensate for this injustice, a separate queue can be formed from such processes, which has privileges over the other threads: when selecting the next thread for execution, this queue is scanned first, and only if it is empty is a thread selected from the general ready queue.
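The privileged-queue idea from the previous paragraph can be sketched as follows; the queue structures and function names are invented for the example.

    /* Sketch: threads that blocked before exhausting their quantum
     * (typically I/O-bound) go to a privileged queue that the
     * scheduler always scans first. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct thread { struct thread *next; } thread_t;
    typedef struct { thread_t *head, *tail; } queue_t;

    static void push(queue_t *q, thread_t *t) {
        t->next = NULL;
        if (q->tail) q->tail->next = t; else q->head = t;
        q->tail = t;
    }
    static thread_t *pop(queue_t *q) {
        thread_t *t = q->head;
        if (t) { q->head = t->next; if (!q->head) q->tail = NULL; }
        return t;
    }

    static queue_t privileged;   /* threads that did not use up their quantum */
    static queue_t general;      /* all other ready threads                   */

    /* Selection rule: the privileged queue first, then the general one. */
    thread_t *select_next(void) {
        thread_t *t = pop(&privileged);
        return t ? t : pop(&general);
    }

    /* Requeue a thread when it leaves the processor. */
    void requeue(thread_t *t, bool quantum_exhausted) {
        push(quantum_exhausted ? &general : &privileged, t);
    }

    int main(void) {
        thread_t a, b;
        push(&general, &a);          /* a used its whole quantum          */
        requeue(&b, false);          /* b blocked early: privileged queue */
        return select_next() == &b ? 0 : 1;   /* b is chosen ahead of a   */
    }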

These algorithms use no prior information about the tasks; service differentiation in this case is based on the "history of existence" of a thread in the system.

From the point of view of the second scheduling problem (the principle of choosing the next thread to execute), algorithms can also be conditionally divided into classes: non-priority and priority algorithms. With non-priority service, the next task is selected in a certain predetermined order without taking into account the tasks' relative importance or service time. With priority disciplines, some tasks are given a preferential right to enter the running state.

Now let's look at some of the most common scheduling disciplines.


First come, first served. The processor is allocated on the FIFO (First In, First Out) principle, i.e., in the order in which requests for service arrive. This approach implements the strategy "whenever possible, finish computations in the order in which they were started". Tasks that were blocked during execution are, after returning to the ready state, queued ahead of the tasks that have not yet run. Thus, two queues are formed: one of tasks that have not yet been executed, and another of tasks that have returned from the waiting state.

This discipline is implemented as non-preemptive: tasks release the processor voluntarily.

The advantage of this algorithm is its ease of implementation. The disadvantage is that under heavy load short tasks are forced to wait in the system for a long time. The following approach eliminates this drawback.

The shortest process is served first. Under this algorithm, the thread with the minimum estimated time required to complete its work is scheduled for execution next. Preference is given to threads that have little time left before completion, which reduces the number of waiting tasks in the system. The disadvantage is the need to know the estimated times in advance, which is not always possible; as a rough approximation, the time the thread used when it last had control can be taken.

The algorithm belongs to the category of non-preemptive, priority-free.
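The two selection rules can be contrasted in a short sketch. The task data are invented; note that the shortest-process rule relies on the estimated times being known, which, as noted above, is its main practical weakness.

    /* Sketch: "first come, first served" vs. "shortest process first"
     * selection from the same ready queue. */
    #include <stdio.h>

    typedef struct { int id; int arrival; int est_time; } task_t;

    /* FIFO: pick the task that arrived earliest. */
    static int pick_fcfs(const task_t *q, int n) {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (q[i].arrival < q[best].arrival) best = i;
        return best;
    }

    /* Shortest process first: pick the minimal estimated time. */
    static int pick_sjf(const task_t *q, int n) {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (q[i].est_time < q[best].est_time) best = i;
        return best;
    }

    int main(void) {
        task_t ready[] = { {1, 0, 90}, {2, 3, 5}, {3, 7, 40} };
        printf("FCFS picks task %d\n", ready[pick_fcfs(ready, 3)].id);
        printf("SJF picks task %d\n", ready[pick_sjf(ready, 3)].id);
        return 0;
    }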

The named algorithms can be used for batch operating modes, when the user does not expect the system to respond. For interactive computing, it is necessary first of all to ensure acceptable response time and equal service for multi-terminal systems. For single-user systems, it is desirable that those programs that are directly worked with have better response times than background jobs. In addition, some applications, while running without direct user interaction, must still be guaranteed to receive their share of processor time (for example, an e-mail program). To solve such problems, priority service methods and the concept of quantization are used.


Carousel, or round-robin (RR), discipline. This discipline belongs to the preemptive algorithms and is based on quantization. Each task receives processor time in portions, quanta. When its time quantum ends, the task is removed from the processor and placed at the end of the queue of processes ready for execution, and the processor takes up the next task. For optimal operation of the system, it is necessary to correctly select the law according to which time slices are allocated to tasks.

The quantum value is chosen as a compromise between an acceptable system response time to user requests (so that the simplest requests do not cause long waits) and the overhead of frequent task switching. When a task is interrupted, the OS must save a sufficiently large amount of information about the current process, put the descriptor of the preempted task into a queue, and load the context of the new task. With a small time slice and frequent switches, the relative share of this overhead becomes large, which degrades the performance of the system as a whole. If the time slice is too large and the queue of ready tasks grows, the system's responsiveness becomes poor.

In some operating systems, it is possible to explicitly specify the value of the time slice or the permissible range of its values. For example, in OS/2 the TIMESLICE operator in the CONFIG.SYS file specifies the minimum and maximum values of the time slice: TIMESLICE=32,256 indicates that the time slice can vary from 32 to 256 milliseconds.

This service discipline is among the most common. In some cases, when the OS does not explicitly support a round-robin scheduling discipline, such service can be organized artificially. For example, some RTOSs use scheduling with absolute priorities, and when priorities are equal the queuing principle applies, i.e., only a task with a higher priority can remove a task from execution. If service needs to be even and equal, so that all jobs receive the same time slices, the system operator can implement such service himself. To do this, it is enough to assign the same priority to all user tasks and to create one high-priority task that does nothing except get scheduled for execution on a timer at specified time intervals. This task merely preempts the currently running application, which then moves to the end of the queue; the high-priority task itself immediately leaves the processor and gives it to the next process in the queue.

In its simplest implementation, the round-robin discipline assumes that all jobs have the same priority. If a priority mechanism needs to be introduced, several queues are usually organized by priority, and a lower-priority queue is serviced only when the higher-priority queue is empty. This algorithm is used for scheduling in the OS/2 and Windows NT systems.
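The effect of a fixed quantum can be shown with a small round-robin simulation; the task lengths and quantum value are invented for illustration, and the in-order pass over the array stands in for the circular ready queue.

    /* Minimal round-robin simulation: each task runs for at most one
     * quantum, then goes to the back of the ready queue. */
    #include <stdio.h>

    #define QUANTUM 30   /* ms */

    int main(void) {
        int remaining[] = { 70, 20, 55 };   /* ms of CPU work per task */
        int n = 3, done = 0, clock = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] <= 0) continue;   /* already finished */
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                clock += slice;
                remaining[i] -= slice;
                printf("t=%3d ms: task %d ran %2d ms%s\n", clock, i, slice,
                       remaining[i] ? "" : " (finished)");
                if (remaining[i] == 0) done++;
            }
        }
        return 0;
    }

Running the simulation shows the short task (20 ms) finishing at t=50 ms, long before the 70 ms task, which is exactly the responsiveness that round robin is chosen for.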

Scheduling according to priorities.

An important concept underlying many preemptive algorithms is priority-based service. Such algorithms use information found in the thread descriptor, namely its priority. Different systems define priority differently: in some, the highest priority is the numerically largest value; in others, on the contrary, zero is considered the highest priority.

Typically, the priority of a thread is directly related to the priority of the process within which it runs. The priority of a process is assigned by the operating system when the process is created, taking into account whether the process is a system or an application process, the status of the user who launched it, and whether the user explicitly requested that a certain priority be assigned to the process. The priority value is included in the process descriptor and is used when assigning priorities to its threads. If a thread is initiated not by a user command but as a result of a system call made by another thread, the OS must take the parameters of that system call into account when assigning it a priority.

When scheduling according to the previously described algorithms, a situation may arise in which some control or management tasks cannot be carried out for a long period because of the growing load in the system (this is especially acute in an RTOS). The consequences of the untimely completion of such tasks may be more serious than those of failing to complete some higher-priority program. In this case, it is advisable to temporarily raise the priority of "emergency" tasks (those whose processing deadline has passed) and to restore the previous value after execution. Introducing mechanisms for dynamically changing priorities makes it possible to implement a faster system response to short user requests (which is important during interactive work) while at the same time guaranteeing the fulfillment of all requests.

So priority can be static (fixed) or dynamic (changed by the system depending on the situation in it). The so-called base priority of a thread directly depends on the base priority of the process that spawned it. In some cases the system can raise the priority of a thread (and to varying degrees), for example if the thread did not fully use the processor time slice allotted to it, or lower it otherwise. For example, the OS raises the priority of threads waiting for keyboard input more, and of threads performing disk operations less. In some systems that use a dynamic priority mechanism, rather complex formulas are used to change priority, involving the base priority values, the degree of load on the computing system, the initial priority value specified by the user, and so on.

There are two types of priority scheduling: service with relative priorities and service with absolute priorities. In both cases the thread is selected for execution in the same way, the thread with the highest priority is chosen, but the moment of changing the active thread is determined differently. In a system with relative priorities, the active thread runs until it leaves the processor itself (it goes into a waiting state, an error occurs, or the thread terminates). In a system with absolute priorities, the active thread is interrupted, in addition to the reasons listed, whenever a thread with a priority higher than that of the active thread appears in the ready queue: the running thread is interrupted and put into the ready state.

A system with relative priority scheduling minimizes switching costs, but a single task can occupy the processor for a long time. This service mode is not suitable for time-sharing and real-time systems, but in batch processing systems (for example, OS/360) it is widely used. Absolute priority scheduling is suitable for facility management systems where rapid response to events is important.
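The difference between the two schemes comes down to when the scheduler reconsiders its choice. A sketch of the absolute-priority rule, preemption at the moment a higher-priority thread becomes ready, is shown below; the names and the simplified ready-queue handling are assumptions of the example.

    /* Sketch: with absolute priorities, the arrival of a higher-priority
     * thread immediately preempts the running one. With relative
     * priorities this check would happen only when the running thread
     * left the processor by itself. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { const char *name; int priority; } thread_t;

    static thread_t *running;

    /* Called when a thread enters the ready state. */
    static bool on_thread_ready(thread_t *t) {
        if (running && t->priority > running->priority) {
            printf("preempting %s in favour of %s\n", running->name, t->name);
            /* the preempted thread would be put back into the ready queue */
            running = t;
            return true;             /* preemption occurred */
        }
        return false;                /* t simply waits in the ready queue */
    }

    int main(void) {
        thread_t low = { "low", 1 }, high = { "high", 8 };
        running = &low;
        on_thread_ready(&high);      /* low is preempted at once */
        return 0;
    }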

A mixed type of scheduling is used in many operating systems: priority-based scheduling algorithms are combined with the concept of quantization.

DEVELOPMENT OF A TRAINING OPERATING SYSTEM MODULE

Guidelines

for course design in the discipline

"OS"

for full-time students

directions

INTRODUCTION

1. Theoretical section

1.1. Process control subsystem

1.1.1. Context and process handle

1.1.2. Process planning algorithms

1.1.3. Preemptive and non-preemptive scheduling algorithms

1.1.4. Process model and functions of the process management subsystem of the educational operating system

1.2. Memory management subsystem

1.2.1. Page distribution

1.2.2. Segment distribution

1.2.3. Page-segment distribution

1.2.4. Page replacement algorithms

1.3. File management

1.3.1. File names

1.3.2. File types

1.3.3. Physical organization and file address

2. The procedure for completing the course project

3. Options for tasks

Bibliography

APPENDIX A

INTRODUCTION

The purpose of the course project is to study the theoretical basis of building operating system modules and to gain practical skills in developing a program that is part of an operating system.

Theoretical section

The functions of a stand-alone computer's operating system are typically grouped either according to the types of local resources that the OS manages or according to specific tasks that apply to all resources. Sometimes such groups of functions are called subsystems. The most important subsystems are the process, memory, file, and external device management subsystems, and the subsystems common to all resources are the user interface, data protection, and administration subsystems.

Process control subsystem

The most important part of the operating system, directly affecting the functioning of the computer, is the process control subsystem. For each newly created process, the OS generates system information structures that contain data about the process's needs for computer system resources, as well as about the resources actually allocated to it. Thus, a process can also be defined as a request for the consumption of system resources.

In order for a process to execute, the operating system must assign it an area of RAM to house the process's code and data, and provide it with the required amount of processor time. In addition, the process may need access to resources such as files and I/O devices.

In a multitasking system, a process can be in one of three main states:

RUNNING - the active state of a process, in which the process has all the necessary resources and is directly executed by the processor;

WAITING - the passive state of a process: the process is blocked and cannot execute for its own internal reasons, waiting for some event to occur, for example the completion of an I/O operation, the receipt of a message from another process, or the release of a resource it needs;

READY - also a passive state of a process, but in this case the process is blocked by circumstances external to it: the process has all the resources it requires and is ready to execute, but the processor is busy executing another process.

During the life cycle, each process moves from one state to another in accordance with the process scheduling algorithm implemented in a given operating system. A typical process state graph is shown in Figure 1.1.

Figure 1.1 — Process state graph in a multitasking environment

In a single-processor system, there can be only one process in the RUNNING state, and in each of the WAITING and READY states there can be several processes; these processes form queues of waiting and ready processes, respectively.

The life cycle of a process begins in the READY state, when the process is ready to execute and waits its turn. When activated, the process goes into the RUNNING state and remains there until either it releases the processor itself, entering the WAITING state to wait for some event, or it is forcibly evicted from the processor, for example because its allotted quantum of CPU time is exhausted. In the latter case the process returns to the READY state. A process also moves to READY from the WAITING state after the expected event occurs.
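The transitions of Figure 1.1 can be written down as a small state machine; the event names here are illustrative assumptions.

    /* Sketch of the process state transitions from Figure 1.1. */
    #include <stdio.h>

    typedef enum { READY, RUNNING, WAITING } state_t;

    typedef enum {
        DISPATCH,          /* the scheduler selects the process        */
        QUANTUM_EXPIRED,   /* preempted: back to the ready queue       */
        WAIT_FOR_EVENT,    /* e.g. the process starts an I/O operation */
        EVENT_OCCURRED     /* e.g. the I/O operation completes         */
    } event_t;

    static state_t next_state(state_t s, event_t e) {
        switch (s) {
        case READY:   if (e == DISPATCH)        return RUNNING; break;
        case RUNNING: if (e == QUANTUM_EXPIRED) return READY;
                      if (e == WAIT_FOR_EVENT)  return WAITING; break;
        case WAITING: if (e == EVENT_OCCURRED)  return READY;   break;
        }
        return s;    /* any other combination is not a legal transition */
    }

    int main(void) {
        state_t s = READY;
        s = next_state(s, DISPATCH);        /* READY   -> RUNNING */
        s = next_state(s, WAIT_FOR_EVENT);  /* RUNNING -> WAITING */
        s = next_state(s, EVENT_OCCURRED);  /* WAITING -> READY   */
        printf("final state: %d (0 = READY)\n", (int)s);
        return 0;
    }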

Articles to read:

Basics of programming. Process management

    One of the main subsystems of any modern multiprogram OS that directly affects the functioning of the computer is the process and thread management subsystem. Its main functions are:

      • creating processes and threads;
      • providing processes and threads with the necessary resources;
      • process isolation;
      • scheduling the execution of processes and threads (in general, we should also talk about scheduling tasks);
      • thread dispatching;
      • organization of interprocess interaction;
      • synchronization of processes and threads;
      • termination and destruction of processes and threads.

    Five main events lead to the creation of a process:

      • system startup (initialization of the OS);
      • the fulfillment by a running process of a request to create a process;
      • a user request to create a process, for example when logging on interactively;
      • the initiation of a batch job;
      • the creation by the operating system of a process needed for the operation of a service.
    Typically, when the OS boots, several processes are created. Some of them are high-priority processes that interact with users and perform assigned work. The rest are background processes not associated with specific users; they perform special functions related, for example, to e-mail, Web pages, printing, transferring files over the network, or the periodic launch of programs (such as disk defragmentation). Background processes are called daemons.

    A new process can be created at the request of the current process. Creating new processes is useful when the task being performed can most easily be structured as a set of related, but nevertheless independent, interacting processes. In interactive systems, the user can start a program by typing a command on the keyboard or by double-clicking the program icon; in both cases a new process is created and the program is launched in it. In batch processing systems on mainframes, users submit a job (possibly using remote access), and the OS creates a new process and starts the next job from the queue when the necessary resources are freed.

    From a technical point of view, in all of these cases a new process is formed in the same way: the current process executes a system request to create a new process. The process and thread management subsystem is responsible for providing processes with the necessary resources. The OS maintains special information structures in memory in which it records which resources are allocated to each process. It can assign resources to a process for sole use or for shared use with other processes. Some resources are allocated to the process when it is created, and some are allocated dynamically in response to requests at runtime. Resources can be allocated to a process for its entire life or only for a certain period. When performing these functions, the process management subsystem interacts with other OS subsystems responsible for resource management, such as the memory management subsystem, the input/output subsystem, and the file system.

    So that processes cannot interfere with the allocation of resources or damage each other's code and data, the most important task of the OS is to isolate one process from another. To do this, the operating system provides each process with a separate virtual address space, so that no process can directly access the commands and data of another process.

    In an OS where both processes and threads exist, a process is considered as a request for the consumption of all types of resources except one: processor time. This critical resource is distributed by the operating system among other units of work, threads, which got their name because they represent sequences (threads of execution) of commands. The work of determining when the current thread should be interrupted and which thread should be allowed to run is called scheduling. The transition from the execution of one thread to another occurs as a result of scheduling and dispatching. Thread scheduling is done based on information stored in the process and thread descriptors; it can take into account thread priority, waiting time in the queue, accumulated execution time, intensity of I/O access, and other factors.

    Dispatching consists of implementing the solution found as a result of scheduling, i.e., in switching the processor from one thread to another. Dispatching takes place in three stages:

    • saving the context of the current thread;
    • loading the context of the thread selected as a result of scheduling;
    • launching a new thread for execution.

    When a system runs multiple independent tasks simultaneously, additional problems arise. Although threads are created and executed asynchronously, they may need to interact, such as when exchanging data. To communicate with each other, processes and threads can use a wide range of facilities: pipes (in UNIX), mailboxes (Windows), remote procedure calls, and sockets (which in Windows connect processes on different machines). Coordinating the speeds of threads is also very important to prevent race conditions (where multiple threads try to modify the same file), deadlocks, and other collisions that occur when sharing resources.

    Thread synchronization is one of the most important functions of the process and thread management subsystem. Modern operating systems provide a variety of synchronization mechanisms, including semaphores, mutexes, critical regions, and events. All these mechanisms work with threads, not processes, so when a thread blocks on a semaphore, the other threads of that process can continue running.
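    As a minimal illustration using the real POSIX threads API, the example below protects a counter shared by two threads of one process with a mutex; while one thread holds the lock, only the other thread trying to take it blocks, not the whole process.

        /* Two threads of one process increment a shared counter under a mutex. */
        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker(void *arg) {
            (void)arg;
            for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&lock);     /* enter the critical region */
                counter++;                     /* the shared variable       */
                pthread_mutex_unlock(&lock);   /* leave the critical region */
            }
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, worker, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("counter = %ld (expected 200000)\n", counter);
            return 0;
        }

    Without the mutex the two threads would race on the counter, and the final value would typically be less than 200000.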

    Every time a process exits, due to one of the following events: normal exit, exit with an error, exit on a fatal error, or being killed by another process, the OS takes steps to clean up the traces of its presence in the system. The process management subsystem closes all files the process worked with and frees the areas of RAM allocated for the process's code, data, and system information structures. All OS queues and resource lists that contained references to the terminated process are corrected.

    As already noted, to support multiprogramming, the OS must define for itself the internal units of work among which the processor and other computer resources will be divided. This raises the questions: what is the fundamental difference between these units of work, what multiprogramming effect can be obtained from their use, and in what cases should these units of work be created?

    Obviously, any operation of a computer system consists of executing some program. Therefore, a certain program code, issued in the form of an executable module, is associated with both the process and the thread. In the simplest case a process consists of a single thread, and in some modern operating systems this is still the situation. In such OSs, multiprogramming is carried out at the level of processes. When interaction is necessary, processes turn to the operating system, which, acting as an intermediary, provides them with means of interprocess communication: pipes, mailslots, shared memory sections, and so on.

    However, in systems that lack the concept of a thread, problems arise when organizing parallel computation within a process, and such a need may well arise. The point is that a single process can never be executed faster than in single-program mode. However, an application running within a single process may possess internal parallelism, which could in principle speed up its execution. If, for example, the program accesses an external device, then for the duration of this operation it is possible not to block the execution of the whole process, but to continue computation along another branch of the program.

    Running multiple jobs in parallel within a single interactive application improves user efficiency. Thus, when working with a text editor, it is desirable to be able to combine the typing of new text with such lengthy operations as reformatting a significant part of the text or saving it to a local or remote disk.

    It is not hard to imagine a future version of a compiler that automatically compiles source code files during pauses in typing a program. Then warnings and error messages would appear in real time, and the user would immediately see where he went wrong. Modern spreadsheets recalculate data in the background as soon as the user changes anything. Word processors break text into pages, check it for spelling and grammatical errors, print in the background, and save the text every few minutes. In all these cases, threads are used as a means of parallelizing computation.

    These tasks could be assigned to the programmer, who would have to write a dispatcher program implementing parallelism within a single process. However, this is very difficult, and the program itself would turn out to be very confusing and hard to debug.

    Another solution is to create several processes for one application, one for each of the parallel jobs.


    In a multiprogram operating system, several processes can exist simultaneously. Some processes are generated at the initiative of users and their applications; such processes are usually called user processes. Other processes, called system processes, are initialized by the operating system itself to perform its functions. An important task of the operating system is to protect the resources allocated to a given process from the other processes. One of the most carefully protected process resources is the set of areas of RAM in which the process's code and data are stored. The set of all areas of RAM allocated by the operating system to a process is called its address space; each process is said to operate in its own address space, referring to the protection of address spaces provided by the OS. Other types of resources are protected as well, such as files and external devices. The operating system can not only protect resources allocated to one process, but also organize their sharing, for example by allowing several processes to access a certain area of memory.

    During the life of a process, its execution can be interrupted and resumed many times. To resume the execution of a process, the state of its operating environment must be restored. That state is identified by the state of the registers and the program counter, the operating mode of the processor, pointers to open files, information about uncompleted I/O operations, error codes of the system calls performed by the process, and so on. This information is called the process context; when the running process changes, a context switch is said to occur.

    Thus, the process management subsystem schedules the execution of processes, that is, distributes processor time among the processes existing simultaneously in the system; creates and destroys processes; provides processes with the necessary system resources; maintains synchronization of processes; and ensures interaction between processes.
