An operating system is a collection of system programs that together control the operation of the computer system. It manages the computer hardware, provides a basis for application programs, and acts as an intermediary between the user and the hardware. It offers a simplified man-machine interface (MMI) that hides the complexity of the actual machine.
Operating System Evolution Operating systems have evolved over time and are commonly divided into four generations.
First Generation (1945-1955, User Driven)
The earliest digital computers had no operating systems.
Machines of that time were so primitive that programs were often entered one bit at a time on rows of mechanical switches.
No programming languages (not even assembly language) were known.
Operating systems were unknown and not used.
Second Generation (1955-1965, Batch processing)
Punched cards were introduced in the early 1950s, which somewhat improved the routine.
The first operating system was implemented in the early 1950s by the General Motors Research Laboratories for their IBM 701.
They generally did one job at a time.
They were called single-stream batch processing systems because programs and data were submitted in groups or batches.
Third Generation (1965-1980, Multiprogramming)
The systems of the 1960s were also batch processing systems, but they could run multiple jobs at once.
So the multiprogramming concept was developed, in which several jobs are kept in main memory at once; the processor is switched from job to job as needed to keep several jobs advancing while keeping the peripheral devices in use.
Another major feature of third-generation operating systems was the technique called spooling (Simultaneous Peripheral Operations On-Line). In spooling, a high-speed device such as a disk is interposed between a running program and a low-speed device involved in the program's input/output. Instead of writing directly to a printer, for example, output is written to the disk. Programs can run to completion faster, and other programs can be initiated sooner. When the printer becomes available, the output can be printed.
Another feature was time sharing, a variant of multiprogramming in which each user has an on-line (directly connected) terminal. Timesharing systems were developed to multiprogram a large number of simultaneous interactive users.
With the development of LSI (Large Scale Integration) circuits and chips, operating systems entered the personal computer and workstation age.
Microprocessor technology evolved to the point that it became possible to build desktop computers as powerful as the mainframes of the 1970s.
Popular examples include the Windows operating systems (XP, Vista, Windows 7, Windows 8, Windows 10) and the Mac operating systems.
Computer Hardware Review An operating system is intimately tied to the hardware of the computer it runs on. It extends the computer's instruction set and manages its resources. To work, it must know a great deal about the hardware, at least about how the hardware appears to the programmer. Conceptually, a simple computer can be abstracted to a model resembling that of figure-a. The CPU, memory, and I/O devices are all connected by a system bus and communicate with each other over it. Modern computers have a more complicated structure involving multiple buses.
Some of the hardware components are discussed below.
Processors The Central Processing Unit (CPU; sometimes just called the processor) is a machine that can execute computer programs. It is sometimes referred to as the brain of the computer. There are four stages that almost all CPUs use in their operation: fetch, decode, execute, and writeback. The first step, fetch, involves retrieving an instruction from program memory. In the decode step, the instruction is separated into parts that have significance to other portions of the CPU. During the execute step, different parts of the CPU, for example the arithmetic logic unit (ALU) and the floating point unit (FPU), are connected so that they can perform the desired operation. The last step, writeback, simply writes the result of the execute step back to some form of memory.
Memory Memory is one of the major components in any computer. Ideally, a memory should be extremely fast (faster than instruction execution, so that the CPU is not held up by memory), abundantly large, and cheap. No current technology satisfies all these properties, so a layered approach is taken based on requirements, as shown in figure-c. The top layers have higher speed, smaller capacity, and greater cost per bit than the lower ones.
Power Supply The power supply, as its name suggests, is the device that supplies power to every component in the PC. Its case holds a transformer, voltage regulator, and usually a cooling fan. The power supply converts roughly 100-120 volts of AC power into low-voltage DC power for the internal components to use. The most widely used computer power supplies are built to conform to the ATX form factor. This allows different power supplies to be interchangeable with various parts inside the PC. ATX power supplies are also designed to turn on and off using a signal from the motherboard, and provide support for modern features such as standby mode.
I/O devices The CPU and memory are not the only resources that the operating system must manage. I/O devices also interact with the operating system heavily. I/O devices generally consist of two parts: a controller and the device itself. The controller is a chip or a set of chips that physically controls the device by accepting the commands from the operating system. In most cases, the actual control of the device is very complicated and detailed, so it is the job of the controller to present a simpler interface to the operating system. Some of the I/O devices are keyboard, mouse, monitor, microphone, earphone, printer, scanner, etc.
Operating System Structure For efficient execution and implementation, an OS should be partitioned into separate subsystems, each with precisely characterized tasks, inputs, outputs, and execution attributes. These subsystems can then be organized in different architectural arrangements. Some of them are monolithic systems, layered systems, virtual machines and client-server model.
Monolithic Systems
In a monolithic system, the OS is written as a collection of procedures, each of which can call any of the others whenever it needs to.
This structure has been subtitled "The Big Mess": the structure is that there is no structure.
In this technique, each procedure is free to call any other one, provided the latter performs some useful computation that the former needs.
To construct the actual object program of the operating system when this approach is used, one first compiles all the individual procedures (or the files containing them) and then binds them all together into a single object file using the system linker.
In terms of information hiding, there is essentially none – every procedure is visible to every other procedure.
This organization suggests a basic structure for the operating system:
A main program that invokes the requested service procedure.
A set of service procedures that carry out the system calls.
A set of utility procedures that help the service procedures.
Layered Systems
To generalize the monolithic system into a hierarchy of layers, the first system constructed this way was THE system, built by E.W. Dijkstra in 1968.
This system had six layers, each with its own function, as shown in figure-f.
This approach permits each layer to be developed and debugged independently, on the assumption that all lower layers have already been debugged and can be trusted to deliver proper services.
The problem is deciding in what order to put the layers, since no layer can call upon the services of any higher layer, and so numerous chicken-and-egg situations may arise.
Layered approaches can also be less efficient, as a request for service from a higher layer has to filter through all lower layers before it reaches the hardware, possibly with significant processing at every step.
Virtual Machines
The initial releases of OS/360 were strictly batch systems.
Many 360 users wanted time sharing, so IBM built a system initially called CP/CMS and later renamed VM/370.
The heart of the system, known as the virtual machine monitor, runs on the bare hardware and does the multiprogramming, providing not one but several virtual machines to the next layer up.
The virtual machines are not extended machines; rather, they are exact copies of the bare hardware, including kernel/user mode, I/O, interrupts, and everything else the real machine has.
CMS (the Conversational Monitor System) is for interactive timesharing users.
VM/370 gains much in simplicity by moving a large part of the traditional OS code into a higher layer (CMS), but it was still a complex program, because simulating a number of virtual 370s in their entirety is not simple.
Client-Server Model
The trend in modern operating systems is to move code up into higher layers and remove as much as possible from the kernel, leaving a minimal kernel. The usual approach is to implement most of the operating system in user processes: to request a service, a user process (the client) sends the request to a server process, which does the work and sends back the answer.
Communication between clients and servers is often by message passing.
An obvious generalization of this idea is to have the clients and servers run on different computers connected by a local or wide-area network.
The client-server model is an abstraction that can be used for a single machine or for a network of machines.
Operating System Concepts
Batch Operating System
A "batch" is the name given to doing the same work over and over, the main difference being the input data supplied for each iteration of the job, and perhaps the output file.
The users of a batch operating system do not interact with the computer directly. Each user prepares his or her job on an off-line device such as punched cards and submits it to the computer operator.
To speed up processing, jobs with similar needs are batched together and run as a group.
This sort of operating system is typical on mainframe computers bought specifically for massive, repetitive data processing.
The CPU is often idle, because the mechanical I/O devices are much slower than the CPU.
Real-Time Operating System
It is a multitasking operating system intended for real-time applications, where time is a key parameter.
It is often used as a control device in a dedicated application, for example industrial robots, spacecraft, medical imaging systems, industrial control, and scientific research equipment.
An RTOS facilitates the creation of a real-time system; however, it does not guarantee that the final result will be real-time, which requires correct development of the software.
Completing a task before its deadline is what matters; overrunning the deadline can, in the worst case, bring down the whole system.
A real-time system may be either a hard real-time system or a soft real-time system.
A hard real-time system has well-defined, fixed time constraints, and processing must be done within those constraints or the system will fail.
A soft real-time system has less stringent timing constraints and does not support deadline scheduling; the value of a job degrades after its deadline passes.
Multiprogramming Operating System
Multiprogramming is the interleaved execution of multiple jobs by the same computer.
Multiprogramming operating systems were introduced to overcome the problem of underutilization of the CPU and main memory.
Multiprogramming is a common approach to resource management.
In a multiprogramming system, when one program is waiting for an I/O transfer, another program is ready to use the CPU, so it is feasible for several jobs to share the CPU's time.
The simultaneous execution of programs improves the utilization of system resources and increases system throughput compared with batch and serial processing.
In this system, when a process requests some I/O, the CPU is assigned to another ready process in the meantime. So when a process switches to an I/O operation, the CPU is not left idle.
Time Sharing Operating System
Time-sharing is the sharing of a computing resource among numerous users by means of multitasking.
By permitting multiple users to connect simultaneously to a single computer, time-sharing drastically lowered the cost of providing computing, while at the same time making the computing experience more interactive.
The CPU executes multiple jobs by switching among them, but the switches occur so frequently that each user can interact with his or her program while it is running.
It requires complex job scheduling and memory management.
Mainframe Operating System
These computers are distinguished from PCs by their I/O capacity.
A mainframe computer with 1000 disks and thousands of gigabytes of data is not unusual.
They may be used as high-end web servers, servers for large-scale e-commerce sites, and servers for business-to-business transactions.
They are heavily oriented towards processing many jobs at once, most of which need enormous amounts of I/O.
They typically offer three kinds of services: batch, transaction processing, and timesharing.
Personal Computer Operating Systems
Each user has a dedicated machine, so CPU utilization is not a prime concern.
Hence, some of the design decisions made for larger systems may not be suitable for these smaller ones. Other design decisions, however, such as those for security, remain relevant, because PCs can now be connected to other computers and users through networks and the web.
There are different kinds of operating systems for PCs, for example Windows, Mac OS, Linux, and UNIX.
Operating system functionality changes with hardware and users.
They support multiprogramming, often with dozens of programs started up at boot time.
They are widely used for word processing, spreadsheets, and internet access.
System Calls A system call is the mechanism used by an application program to request a service from the kernel. The interface between the OS and user programs is defined by the set of system calls the operating system provides, which are generally available as assembly language instructions. The system calls available in the interface vary from operating system to operating system. Languages designed to replace assembly language for system programming (e.g. C, C++) permit system calls to be made directly. UNIX has a system call read with three parameters: count = read(fd, buffer, nbytes). One specifies the file, one tells where the data are to be put, and one tells how many bytes to read.
Steps in making the system call read(fd, buffer, nbytes):
Push the parameters onto the stack (1-3)
Call the library procedure (4)
Put the system call number in a register (5)
Trap to the kernel, switching from user mode to kernel mode, and start executing (6)
Examine the system call number and dispatch to the correct system call handler via a table of pointers (7)
Run the system call handler (8)
Once the system call handler has completed its work, control returns to the library procedure (9)
This procedure then returns to the user program in the usual way (10)
Increment SP after the call to remove the parameters from the stack (11)
System calls can be grouped into categories: system calls for process management, for file management, for directory and file system management, and miscellaneous system calls.