
Figure 1 multiprocessor
Figure 2 multicore processor
Figure 3 multitasking
Figure 4 multiprogramming
Figure 5 task scheduling
Figure 6 computer interrupt
Figure 7 parallel and concurrency processing
Figure 8 pipelining
Multiprocessing and multicore processors are fundamental concepts in computer architecture that
significantly impact the design and performance of modern computing systems, including Real-
Time Embedded Systems.

Multiprocessing is the ability of a computer to use two or more processors (multiprocessors) for
its operations. The CPUs are interconnected so that a job can be divided among them for faster
execution. When a job finishes, the results from all CPUs are collected and compiled to give the
final output. Jobs may need to share main memory, and they may also share other system
resources among themselves. Multiple CPUs can also be used to run multiple jobs
simultaneously. An example of a multiprocessing system is as follows:

Server farms: large-scale data centers and cloud computing environments often utilize
multiprocessing systems to handle multiple user requests simultaneously. Servers within these
environments typically have multiple processors or cores that execute tasks concurrently,
enabling efficient utilization of resources and scalability to accommodate varying workloads.
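As an illustrative sketch (in Python, assuming a system where the standard `multiprocessing` module can start worker processes), the divide, collect, and compile pattern described above might look like this: a job is split into chunks, each worker computes a partial result, and the partial results are combined into the final output.

```python
# Sketch: divide a job (summing a large range of numbers) among worker
# processes, then collect and combine the partial results. The function
# names and chunking scheme are chosen for illustration only.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker computes the sum of its own slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the job into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # Partial results from all workers are collected and compiled here.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```

In practice the speedup depends on how evenly the job divides and on the cost of moving data between processes.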

The basic organization of a typical multiprocessing system is shown in the given figure.

Figure 1 multiprocessor

The advantages of multiprocessor systems are as follows. If multiple processors are working at
the same time, more processes can be executed in parallel, so the throughput of the system
increases. Multiprocessor systems are also more reliable: because there is more than one
processor, the failure of any single processor does not bring the system to a halt. The system
becomes slower if this happens, but it keeps working. Finally, the electricity consumption of a
multiprocessor system can be lower than that of a single-processor system: in a single-processor
system, one processor must execute all the processes and therefore carries a heavy load, whereas
in a multiprocessor system the load is spread across many processors, so the load on each
processor is comparatively smaller. [1]

A multicore processor is an integrated circuit that has two or more processor cores attached, for
enhanced performance and reduced power consumption. These processors also enable more
efficient simultaneous processing of multiple tasks, as in parallel processing and multithreading.
A dual-core setup is similar to having multiple separate processors installed in a computer;
however, because the two cores are plugged into the same socket, the connection between them
is faster. The heart of every processor is an execution engine, also known as a core. The core is
designed to process instructions and data according to the direction of the software programs in
the computer's memory. Over the years, designers found that every new processor design had
limits, and numerous technologies were developed to accelerate performance, including higher
clock speeds, hyper-threading, and adding more chips.

Figure 2 multicore processor

The advantages of multi-core CPUs are as follows. Increased system performance: with multiple
cores, the CPU can handle multiple tasks simultaneously, improving performance and execution
speed. Maximum processor utilization: when the load is shared among many cores, higher CPU
utilization can be achieved than with a single core. Support for parallel processing: a process is
divided into small chunks which are assigned to different cores and executed simultaneously,
reducing the time taken to complete its execution. Reduced process waiting time: the time a
process waits before it is assigned to a processor is reduced, since many cores are executing
many processes. Better system response time: from the user's point of view the system responds
faster, which gives a better user experience.

The disadvantages of multi-core systems are as follows. Software incompatibility: not all
software is designed to take advantage of multiple cores; for the system to benefit, the software
must be written with that capability, which much current software lacks. Higher power
consumption: more cores mean the system needs more power to run, so multi-core systems
consume more power than single-core systems. Complex design: designing both the hardware
and software to support multi-core systems is complex, which means it takes more time to design
and develop the system. High development cost: manufacturing and developing these systems is
expensive, so by the time a multi-core computer reaches the market it costs more than a
single-core one, and the more cores the system has, the higher the price. [2]

Multitasking is the process of having a computer perform multiple tasks seemingly
simultaneously. During multitasking, tasks such as listening to music or browsing the Internet
are performed in the background while others are used in the foreground. The multiple tasks,
also known as processes, share common processing resources such as the CPU. The operating
system keeps track of where you are in each of these jobs and allows you to transition between
them without losing data.

Figure 3 multitasking

Early operating systems could run various programs, but multitasking was not fully supported.
As a result, a single program could consume the computer's entire CPU while completing a
certain activity. Basic operating system functions, such as file copying, prevented the user from
completing other tasks, such as opening and closing windows. Fortunately, because modern
operating systems have complete multitasking capability, numerous programs can run
concurrently without interfering with one another. In addition, many operating system processes
can run at the same time. There are mainly two types of multitasking, as follows:

Preemptive multitasking is a scheme in which the operating system decides how much time one
task gets before another task is allowed to use the CPU. Because the operating system controls
the entire process, it is referred to as 'preemptive'. Preemptive multitasking is used in desktop
operating systems. Unix was the first operating system to use this method of multitasking.
Windows NT and Windows 95 were the first versions of Windows to use preemptive
multitasking, and with OS X the Macintosh also acquired preemptive multitasking. The
operating system notifies programs when it is time for another program to take over the CPU.

The term 'non-preemptive multitasking' refers to cooperative multitasking. The main idea of
cooperative multitasking is that the currently running task voluntarily releases the CPU to allow
another task to run. In an RTOS this is done by calling taskYIELD(); when the taskYIELD()
function is called, a context switch is executed. Early versions of Windows and Mac OS used
cooperative multitasking: a Windows program would respond to a message by performing some
short unit of work before handing the CPU back to the operating system until the program
received another message. This worked well as long as all programs were written with other
programs in mind and were bug-free.
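The taskYIELD() idea can be sketched in plain Python using generators, where `yield` stands in for the voluntary context switch; the round-robin scheduler and task names below are invented for illustration.

```python
# Sketch of cooperative (non-preemptive) multitasking: each task runs
# until it voluntarily yields control, analogous to calling taskYIELD().
def task(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")  # do one short unit of work
        yield                      # voluntarily give up the CPU

def run_cooperatively(tasks):
    """A minimal round-robin scheduler: resume each task in turn until all finish."""
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)          # run the task up to its next yield
            tasks.append(current)  # context switch: back of the queue
        except StopIteration:
            pass                   # task finished; drop it

log = []
run_cooperatively([task("A", 2, log), task("B", 2, log)])
print(log)  # tasks interleave: ['A:0', 'B:0', 'A:1', 'B:1']
```

Note the weakness the text describes: if one task never yields, no other task ever runs, which is exactly why preemptive schemes took over.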

The advantages of multitasking are as follows. Improved efficiency: multitasking allows better
utilization of CPU resources by keeping the CPU busy with useful work even when some tasks
are waiting. Enhanced user experience: users can perform multiple tasks simultaneously, such as
browsing the web while listening to music or working on a document, improving productivity
and convenience. Faster response times: multitasking systems can provide faster responses to
user inputs, as tasks can be switched quickly in response to user actions. Resource sharing:
multitasking enables efficient sharing of system resources, such as memory and peripherals,
among multiple tasks. Task management: multitasking operating systems provide mechanisms
for managing tasks, such as scheduling algorithms that prioritize tasks and prevent one task from
monopolizing the CPU.

The disadvantages are as follows. Resource competition: concurrent tasks may compete for
resources, leading to potential bottlenecks and reduced performance if resources are not managed
effectively. Complexity: multitasking adds complexity to the operating system and to application
development, as developers need to consider task synchronization, resource allocation, and task
prioritization. Overhead: context switching between tasks incurs overhead, because the system
must save and restore the state of each task, which can impact performance, especially in
real-time systems. [3]

Multiprogramming in an operating system: as the name suggests, multi means more than one and
programming means the execution of programs. When more than one program can execute in an
operating system, it is termed a multiprogramming operating system.

Before the concept of multiprogramming, computing took place in a way that did not use the
CPU efficiently: the CPU executed only one program at a time. The problem in early computing
was that when a program entered a waiting state for an input/output operation, the CPU remained
idle, which led to underutilization of the CPU and thus poor performance. Multiprogramming
addresses and solves this issue. Multiprogramming was developed in the 1950s and was first
used in mainframe computing. The major goal of multiprogramming is to maximize the
utilization of resources. Multiprogramming is broadly classified into two types, namely the
multi-user operating system and the multitasking operating system, which differ in many
aspects. A multitasking operating system allows you to run more than one program
simultaneously; the operating system does this by moving each program in and out of memory
one at a time. When a program is moved out of memory, it is temporarily stored on disk until it
is needed again.

A multi-user operating system allows many users to share processing time on a powerful central
computer from different terminals. The operating system does this by quickly switching between
terminals, each receiving a limited amount of CPU time on the central computer. The operating
system switches between terminals so rapidly that each user appears to have constant access to
the central computer, although if there are many users on such a system, the time the central
computer takes to respond may become more apparent.

Figure 4 multiprogramming

In a multiprogramming system, multiple programs are stored in memory, and each program is
given a specific portion of memory; a program under execution is known as a process. The
operating system manages all these processes and their states. Before a process undergoes
execution, the operating system selects a ready process by checking which process should be
executed next. While the chosen process is executing on the CPU, it may need an input/output
operation; at that time the process leaves the CPU, may be temporarily moved out of main
memory into secondary storage, and the CPU switches to the next ready process. When the
process that went for the I/O operation is ready again after completing that work, the CPU can
switch back to it. This switching happens so fast and so repeatedly that it creates an illusion of
simultaneous execution. [4] [5]
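A minimal sketch of this idea in Python, using threads and `time.sleep` as a stand-in for I/O waits (the durations are made up for illustration): three "programs" whose waits would take 0.6 s back to back finish in roughly 0.2 s, because while one waits on I/O, another can proceed.

```python
# Illustrative sketch: when one "program" waits on I/O (simulated with
# time.sleep), another can make progress, so the total elapsed time is
# close to the longest single wait rather than the sum of all waits.
import threading
import time

def program(io_seconds):
    time.sleep(io_seconds)  # simulated I/O wait; the CPU is free meanwhile

start = time.perf_counter()
threads = [threading.Thread(target=program, args=(0.2,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
# Three 0.2 s I/O waits overlap instead of taking 0.6 s in sequence.
print(f"elapsed: {elapsed:.2f} s")
```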
Task Scheduling: Task scheduling is the process of deciding the order in which tasks should be
executed on a computer system. It ensures efficient utilization of resources and timely
completion of tasks. For example, imagine a to-do list with different tasks like studying, doing
laundry, and cooking dinner. Task scheduling would involve deciding whether to study first, then
do laundry, and finally cook dinner, or perhaps do laundry first, then study, and cook dinner last.
The goal is to optimize your time and get everything done efficiently. [6]

[Flowchart: study → do laundry → cook dinner → stop; or: do laundry → study → cook dinner → stop]

Figure 5 task scheduling
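The to-do list example can be sketched as a simple first-come, first-served schedule; the task durations below are invented for illustration.

```python
# Sketch: a first-come, first-served (FCFS) schedule for the to-do list
# example. Durations are in minutes and are made up for this sketch.
def fcfs_schedule(tasks):
    """Return (name, start, finish) tuples for tasks run back to back."""
    schedule, clock = [], 0
    for name, duration in tasks:
        schedule.append((name, clock, clock + duration))
        clock += duration
    return schedule

plan = fcfs_schedule([("study", 60), ("do laundry", 30), ("cook dinner", 45)])
for name, start, finish in plan:
    print(f"{name}: {start}-{finish} min")
```

Changing the order of the input list changes the start and finish times, which is exactly the decision a scheduler makes.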


Computer Interrupts: Interrupts are signals sent by hardware or software to the processor,
indicating that an event requiring immediate attention has occurred. They temporarily halt the
current process to handle the urgent task. For example, suppose a game is being played on the
computer, and suddenly a message arrives from a friend. The game gets interrupted by the
message notification, causing a temporary pause until the message is read and responded to.
After handling the message, the game continues from the point where it was paused. [7]

Figure 6 computer interrupt
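The interrupt idea can be sketched in Python on a POSIX system with the standard `signal` module: a handler is registered, the "notification" arrives as a signal, the normal flow pauses while the handler runs, and then execution resumes where it left off. The event names here are illustrative.

```python
# Sketch: a software interrupt -- a registered handler temporarily takes
# over from the normal flow, then control returns where it left off.
# Assumes a POSIX system (SIGUSR1 is not available on Windows).
import signal

events = []

def handler(signum, frame):
    events.append("interrupt handled")  # the "message notification"

signal.signal(signal.SIGUSR1, handler)  # register the interrupt handler
events.append("game running")
signal.raise_signal(signal.SIGUSR1)     # the interrupt arrives
events.append("game resumes")           # normal flow continues afterwards
print(events)
```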

Parallel Processing and Concurrency: Parallel processing involves breaking down tasks into
smaller subtasks that can be executed simultaneously by multiple processors or cores, speeding
up overall processing time. Concurrency, on the other hand, refers to the ability of a system to
make progress on multiple tasks, even if they are not necessarily processed at the same instant.
For example, think of a group of people assembling a puzzle. In parallel processing, each person
works on a separate section of the puzzle at the same time, making the overall assembly faster.
For concurrency, imagine cooking in the kitchen while also listening to music on the phone:
each activity does not need your full attention at every instant; rather, attention is seamlessly
switched between the tasks, allowing both activities to be accomplished effectively. [8] [9]

Figure 7 parallel and concurrency processing
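The cooking-while-listening-to-music analogy can be sketched with Python's `asyncio`: two coroutines interleave on a single thread, so both make progress even though only one is running at any instant. The activity names are illustrative.

```python
# Sketch of concurrency (not parallelism): two coroutines -- "cook" and
# "music" -- are interleaved on a single thread by the event loop.
import asyncio

async def activity(name, steps, log):
    for i in range(steps):
        log.append(f"{name} step {i}")
        await asyncio.sleep(0)  # yield so the other activity can run

async def main():
    log = []
    await asyncio.gather(activity("cook", 2, log),
                         activity("music", 2, log))
    return log

log = asyncio.run(main())
print(log)  # steps interleave even though only one runs at any instant
```

Swapping the event loop for a process pool would turn this into true parallelism; the code structure is what distinguishes the two ideas.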

Pipelining: Pipelining is a technique used to improve the efficiency of processing by breaking
down tasks into a series of smaller stages, with each stage performing a specific part of the
overall task. These stages operate concurrently, and each stage passes its output to the next stage
without waiting for the entire task to complete. For example, consider an assembly line in a car
manufacturing plant. Each station along the assembly line performs a specific task, such as
installing the engine, fitting the doors, or painting the body. Instead of waiting for one car to
complete the entire assembly process before starting the next one, each car moves through the
line, with different cars at different stages simultaneously. This continuous flow improves
efficiency, similar to how pipelining works in computer processing. [10] [11]

Figure 8 pipelining
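The assembly-line example can be sketched with Python generators, one per stage; each car flows on to the next stage without waiting for the whole batch to finish. The stage and car names are invented for illustration.

```python
# Sketch: a three-stage "assembly line" built from chained generators.
# Each stage does its own step and passes the item to the next stage.
def install_engine(cars):
    for car in cars:
        yield car + ["engine"]

def fit_doors(cars):
    for car in cars:
        yield car + ["doors"]

def paint(cars):
    for car in cars:
        yield car + ["paint"]

# Chain the stages; cars stream through the line one at a time.
line = paint(fit_doors(install_engine([["car1"], ["car2"]])))
finished = list(line)
print(finished)
```

Because generators are lazy, "car2" enters the engine stage while "car1" is still moving through later stages, which is the overlap that makes pipelining efficient.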

Microcontrollers are integrated circuits (ICs) that are designed to perform specific tasks within a
larger electronic system. They consist of a microprocessor core, memory, and various
peripherals, all housed in a single chip. The microcontroller's behavior is defined by its
instruction set, which is a collection of machine-level instructions that the microcontroller can
execute.

The instruction set of a microcontroller determines the operations it can perform, the data types it
can handle, and the addressing modes it supports. Here are some key aspects of microcontroller
instruction sets:

Basic operations: Microcontroller instruction sets typically include basic arithmetic and logical
operations such as addition, subtraction, multiplication, division, bitwise AND, OR, XOR, and
shift operations. These operations allow the microcontroller to perform mathematical
calculations and manipulate binary data.
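The basic arithmetic and logical operations listed above can be tried directly with Python's integer operators, masking results to 8 bits to mimic a small microcontroller register; the operand values are arbitrary.

```python
# The ALU-style operations named above, shown on 8-bit values.
a, b = 0b1100_1010, 0b0101_0110

and_result = a & b            # bitwise AND
or_result  = a | b            # bitwise OR
xor_result = a ^ b            # bitwise XOR
shifted    = (a << 1) & 0xFF  # left shift, masked to stay within 8 bits
total      = (a + b) & 0xFF   # addition with 8-bit wraparound

print(f"{and_result:08b} {or_result:08b} {xor_result:08b} "
      f"{shifted:08b} {total:08b}")
```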

Data movement: Instructions for moving data between different memory locations and registers
are an essential part of microcontroller instruction sets. These instructions include loading data
from memory into registers, storing data from registers to memory, and transferring data between
registers.

Control flow: Microcontrollers need instructions to control the flow of program execution. These
instructions include conditional branching, which allows the microcontroller to make decisions
based on certain conditions, and unconditional branching, which allows it to jump to a different
part of the program unconditionally.

Input/output (I/O) operations: Microcontrollers often have built-in peripherals for interfacing
with the external world, such as GPIO (General Purpose Input/Output) pins, timers, UART
(Universal Asynchronous Receiver/Transmitter), SPI (Serial Peripheral Interface), and I2C
(Inter-Integrated Circuit) interfaces. The instruction set includes specific instructions for
interacting with these peripherals, such as reading from or writing to GPIO pins, initializing and
configuring timers, and transmitting/receiving data through UART, SPI, or I2C.

Stack operations: Microcontrollers usually have a small amount of stack memory for storing
return addresses and local variables. The instruction set includes instructions for pushing data
onto the stack and popping data from the stack.

Interrupt handling: Microcontrollers often support interrupts, which are signals that can pause the
normal program execution to handle time-critical events or respond to external stimuli. The
instruction set includes instructions for enabling or disabling interrupts, handling interrupt
requests, and returning from interrupt service routines. [12]
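Several of these instruction-set categories can be sketched together in a toy interpreter. The mnemonics below (LOAD, ADD, DEC, JNZ, PUSH, POP) are invented for illustration and are not taken from any real microcontroller's instruction set.

```python
# A toy interpreter covering data movement (LOAD), arithmetic (ADD, DEC),
# control flow (JNZ: jump if non-zero), and stack operations (PUSH, POP).
def run(program):
    """Execute a list of (mnemonic, operands...) tuples on a two-register machine."""
    regs = {"A": 0, "B": 0}
    stack = []
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":      # data movement: immediate value -> register
            regs[args[0]] = args[1]
        elif op == "ADD":     # arithmetic: destination += source register
            regs[args[0]] += regs[args[1]]
        elif op == "DEC":     # arithmetic: register -= 1
            regs[args[0]] -= 1
        elif op == "PUSH":    # stack: save a register's value
            stack.append(regs[args[0]])
        elif op == "POP":     # stack: restore the most recently pushed value
            regs[args[0]] = stack.pop()
        elif op == "JNZ":     # control flow: jump if register is non-zero
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return regs

# Sum 3 + 2 + 1 in B by looping while A counts down, then round-trip B
# through the stack to exercise PUSH and POP.
program = [
    ("LOAD", "A", 3),   # 0
    ("LOAD", "B", 0),   # 1
    ("ADD", "B", "A"),  # 2: B += A
    ("DEC", "A"),       # 3: A -= 1
    ("JNZ", "A", 2),    # 4: repeat while A != 0
    ("PUSH", "B"),      # 5: save B on the stack
    ("LOAD", "B", 0),   # 6: clobber B
    ("POP", "B"),       # 7: restore B from the stack
]
final = run(program)
print(final)  # {'A': 0, 'B': 6}
```

A real microcontroller adds flags, addressing modes, and I/O instructions on top of this skeleton, but the fetch-decode-execute loop is the same idea.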

References

[1] W. Wolf, "Multiprocessor Systems-on-Chips," Morgan Kaufmann, 2011.

[2] S. Rajasekaran et al., "Multicore Computing: Algorithms, Architectures, and Applications," Chapman and Hall/CRC, 2011.

[3] "Multitasking Operating System," JavaTpoint. [Online]. Available: https://www.javatpoint.com/multitasking-operating-system. [Accessed 18 February 2024].

[4] "Multiprogramming in Operating System," GeeksforGeeks. [Online]. Available: https://www.geeksforgeeks.org/multiprogramming-in-operating-system/. [Accessed 18 February 2024].

[5] A. Silberschatz, P. B. Galvin and G. Gagne, "Operating System Concepts," Wiley, 2018.

[6] "Task Scheduling in a RTOS," LinkedIn, 19 Sep 2023. [Online]. Available: https://www.linkedin.com/pulse/task-scheduling-rtos-madhavan-vivekanandan.

[7] R. Awati, "What is an interrupt?," TechTarget, 2024. [Online]. Available: https://www.techtarget.com/whatis/definition/interrupt.

[8] A. Kamat, "What's the difference between concurrent and parallel programming?," LinkedIn, 7 Feb 2024. [Online]. Available: https://www.linkedin.com/advice/0/whats-difference-between-concurrent-parallel-programming.

[9] "Difference between Concurrency and Parallelism," GeeksforGeeks, 25 Nov 2020. [Online]. Available: https://www.geeksforgeeks.org/difference-between-concurrency-and-parallelism/.

[10] R. Awati, "Pipelining," TechTarget, 2024. [Online]. Available: https://www.techtarget.com/whatis/definition/pipelining.

[11] "Computer Organization and Architecture | Pipelining | Set 1 (Execution, Stages and Throughput)," GeeksforGeeks, 9 Aug 2023. [Online]. Available: https://www.geeksforgeeks.org/computer-organization-and-architecture-pipelining-set-1-execution-stages-and-throughput/.

[12] M. A. Mazidi, J. G. Mazidi and R. D. McKinlay, "The 8051 Microcontroller and Embedded Systems: Using Assembly and C," Pearson Education, 2008.
