
Multicore computers

A multicore processor is an integrated circuit that has two or more processor cores
attached for enhanced performance and reduced power consumption. These
processors also enable more efficient simultaneous processing of multiple tasks,
such as with parallel processing and multithreading. A dual core setup is similar to
having multiple, separate processors installed on a computer. However, because
the two processors are plugged into the same socket, the connection between them
is faster.
The use of multicore processors is one approach to boosting processor
performance without exceeding the practical limits of semiconductor design and
fabrication. Multicore designs also help keep factors such as heat generation
within safe operating limits.
In addition to the multiple cores, contemporary multicore chips also include L2
cache and, in some cases, L3 cache. The individual cores can execute multiple
instructions in parallel, increasing the performance of software which is written to
take advantage of the unique architecture.
The improvement in performance gained by the use of a multi-core processor
depends very much on the software algorithms used and their implementation. In
the best case, so-called embarrassingly parallel problems may realize speedup
factors near the number of cores.
However, for some workloads there is a practical upper limit to how many cores
yield improvements relative to the cost of adding them. Every process is
governed by a primary thread that can occupy only a single core, so the speed of
a program such as a game or a video renderer is ultimately limited by the
capability of the core running that primary thread.
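This limit can be illustrated with Amdahl's law, which relates overall speedup to the fraction of a program that can run in parallel. The sketch below is illustrative only; the parallel fractions are hypothetical.

```python
# Amdahl's law: overall speedup is limited by the serial fraction of a program.
# The parallel fractions below are hypothetical, for illustration only.
def speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A program that is 95% parallelizable gains far less than 8x on 8 cores:
print(round(speedup(0.95, 8), 2))   # about 5.93, not 8
print(round(speedup(1.00, 8), 2))   # a perfectly parallel task scales to 8.0
```

Even a small serial portion, such as the work done by the primary thread, caps the benefit of adding further cores.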
Architecture of multicore computers
The architecture of multicore processors enables communication between all
available cores to ensure that the processing tasks are divided and assigned
accurately. When tasks are complete, the processed data from each core is
delivered back to the motherboard through a single shared gateway. This
technique significantly enhances performance compared to a single-core processor
of similar speed. Multicore technology is very effective in challenging tasks and
applications, such as encoding.
Cache organization

Multiple Levels of Cache


The cache we were looking at before is referred to as a level-1 cache, or simply L1
cache. It’s right up close by the core for quick, easy access, and it’s usually split
into separate sections for data and instructions. It’s optimized to be fast, but it’s not
typically very large. In a multicore system, each core normally gets its own L1 cache.
The next level of cache is usually bigger and a bit slower than L1 cache. Unsurprisingly, it’s
called L2 cache. L2 cache may be shared between cores or dedicated to each core.
Unlike L1 cache, however, it usually mixes data and instructions rather than
separating them out.
The next level of cache above this – L3 cache – is most typically shared between
cores. That means that data fetched for one core is also available to the other cores
so they don’t have to do separate fetches. The other benefit of sharing the cache is
that, if one core changes the value in the shared cache, then all the other cores have
access to the new value.
There is even sometimes an L4 cache, although it’s less common. It’s also shared,
and bigger than L3 (and also slower, but still faster than the main DRAM).
The challenge is that, when fetching new data from memory, it first goes into the
highest-level cache (L3 or L4), and then, from there, moves into the lower levels.
So it has to pass through them all to get to the L1 cache, where it is finally
available for use.
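The lookup order described above can be sketched as a toy model. The latencies and cache contents below are made-up illustrative numbers, not figures for any real processor.

```python
# Toy model of a multi-level cache lookup: check L1 first, then L2, then L3,
# falling back to main memory. Latencies are illustrative, in "cycles".
LATENCY = {"L1": 1, "L2": 4, "L3": 20, "RAM": 100}

def access(address, caches):
    """Return (total cycles, level that served the request)."""
    cost = 0
    for level in ("L1", "L2", "L3"):
        cost += LATENCY[level]
        if address in caches[level]:
            return cost, level
    return cost + LATENCY["RAM"], "RAM"

caches = {"L1": {0x10}, "L2": {0x10, 0x20}, "L3": {0x10, 0x20, 0x30}}
print(access(0x10, caches))  # (1, 'L1')  - fastest case
print(access(0x20, caches))  # (5, 'L2')  - L1 miss, L2 hit
print(access(0x40, caches))  # (125, 'RAM') - misses every level
```

The growing cost of each miss is why keeping frequently used data in the lower cache levels matters so much for performance.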
Advantages of Multicore Computers:
Increased Performance: Multicore computers provide the ability to execute
multiple tasks simultaneously by dividing the workload among multiple cores. This
parallel processing capability leads to increased performance and faster execution
of applications, especially those that can be effectively parallelized.
Enhanced Responsiveness: Multicore systems can allocate dedicated cores for
specific tasks, such as background tasks or system maintenance processes. This
allocation ensures that critical tasks and user interactions receive immediate
attention, resulting in improved system responsiveness.
Scalability: Multicore architectures offer scalability by allowing additional cores
to be added to the system. This scalability enables systems to adapt to changing
workloads and handle increasing computational demands without requiring a
complete hardware overhaul.
Power Efficiency: Compared to using multiple single-core processors, multicore
computers can provide better energy efficiency. By consolidating multiple cores on
a single chip, multicore systems can reduce power consumption, heat generation,
and overall system costs.
Disadvantages of Multicore Computers:
Software Compatibility: Not all software applications are designed to take full
advantage of multicore architectures. Some older or poorly optimized applications
may not effectively utilize multiple cores, resulting in limited performance gains.
Developers need to specifically optimize software to ensure efficient utilization of
the available cores.
Complexity of Parallel Programming: Developing parallel software that
effectively utilizes multiple cores can be challenging. Parallel programming
requires careful management of shared data, synchronization mechanisms, and
load balancing among cores. Writing efficient and bug-free parallel code can be
more complex and time-consuming than writing sequential code.
Increased Complexity and Cost: Multicore systems are more complex than their
single-core counterparts. The design, manufacturing, and maintenance of multicore
processors involve additional complexities, which can result in higher costs
compared to single-core systems. Moreover, the development of multicore-specific
hardware and software tools also adds to the overall complexity and cost.
Parallel processing
Parallel processing is a computing technique in which multiple streams of
calculation or data-processing tasks run concurrently across numerous central
processing units (CPUs). This method is employed to increase the computational
speed of a computer system.
Parallel processing uses two or more processors or CPUs simultaneously to handle
various components of a single activity. Systems can slash a program’s execution
time by dividing a task’s many parts among several processors. Multi-core
processors, frequently found in modern computers, and any system with more than
one CPU are capable of performing parallel processing.
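As a minimal sketch of dividing one activity among several workers, the example below splits a sum into chunks handled by a pool of threads. The threads stand in for processors here; on CPython, CPU-bound work would normally use processes to run truly in parallel.

```python
# Splitting one task (summing a list) into chunks handled by separate workers.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    chunk = len(data) // workers
    # Give each worker a slice; the last worker takes any leftover elements.
    parts = [data[i * chunk: (i + 1) * chunk if i < workers - 1 else len(data)]
             for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))

print(parallel_sum(list(range(1000))))  # 499500, same as sum(range(1000))
```

The result is identical to the sequential sum; only the division of work changes.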
The interest in parallel computing began in the late 1950s, and developments in
supercomputers started to appear in the 1960s and 1970s. These multiprocessors
used shared memory space and carried out parallel operations on a single data set.
Parallel processing arises at multiple levels of complexity. At the lowest
level, the type of register used distinguishes parallel from serial operation:
shift registers work one bit at a time in serial fashion, while parallel
registers work on all bits of a word simultaneously. At higher levels of
complexity, parallel processing comes from having multiple functional units
that perform separate or similar operations simultaneously.
Parallel processing is achieved by distributing data among several functional
units. For example, arithmetic, shift, and logic operations can be divided among
three units, with operations routed to each unit under the supervision of a
control unit.
The main advantage of parallel processing is better utilization of system
resources: increasing the multiplicity of resources raises overall system
throughput.
Cache Organization: In parallel processing systems, each processor typically has
its own private cache, which stores frequently used data for quick access.
However, maintaining cache coherence in a parallel system can be challenging, as
multiple caches may possess different copies of the same memory block.
Techniques such as cache coherence protocols are used to ensure that changes in
the values of shared operands are propagated throughout the system in a timely
fashion.
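One common coherence idea, write-invalidate, can be sketched as follows. This toy model is not a real protocol such as MESI; it simply shows stale copies being discarded when another core writes a shared block.

```python
# Toy write-invalidate coherence: writing a block invalidates copies held by
# other cores' private caches, so later reads fetch the fresh value.
class CoherentCaches:
    def __init__(self, n_cores):
        self.caches = [dict() for _ in range(n_cores)]  # one private cache per core
        self.memory = {}

    def read(self, core, addr):
        if addr not in self.caches[core]:               # cache miss
            self.caches[core][addr] = self.memory.get(addr, 0)
        return self.caches[core][addr]

    def write(self, core, addr, value):
        for i, cache in enumerate(self.caches):
            if i != core:
                cache.pop(addr, None)                   # invalidate other copies
        self.caches[core][addr] = value
        self.memory[addr] = value                       # write-through, for simplicity

machine = CoherentCaches(2)
machine.write(0, 0x100, 7)
print(machine.read(1, 0x100))  # core 1 misses and fetches the new value: 7
```

Without the invalidation step, core 1 could go on reading a stale copy after core 0's write.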
Memory Organization: Memory organization in parallel computers can be
categorized into two main types: shared memory and distributed memory. In a
shared memory system, multiple processors operate independently but share the
same memory resources. In contrast, in a distributed memory system, each
processor has its own private memory. The choice between shared and distributed
memory can have significant implications for the design and programming of
parallel systems.
Registers Organization: In parallel processing, the type of registers used
distinguishes between parallel and serial operations. Shift registers work one bit at
a time in a serial fashion, while parallel registers work simultaneously with all bits
of the word. The use of parallel registers enables multiple operations to be
performed simultaneously, contributing to the overall speedup achieved by parallel
processing.
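The serial-versus-parallel distinction can be made concrete with a small sketch: a shift register needs one clock per bit, while a parallel register latches a whole word in a single clock.

```python
# Shift register: bits enter one per clock cycle (MSB first here).
def shift_in(bits):
    reg = 0
    for b in bits:                  # one loop iteration = one clock cycle
        reg = ((reg << 1) | b) & 0xFF
    return reg

# Parallel register: all 8 bits are latched in a single clock cycle.
def parallel_load(value):
    return value & 0xFF

word = [1, 0, 1, 1, 0, 0, 1, 0]
assert shift_in(word) == 0b10110010                  # 8 clocks to load serially
assert parallel_load(0b10110010) == shift_in(word)   # 1 clock, same contents
```

Both registers end up holding the same word; the parallel register simply gets there in one cycle instead of eight.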

Parallel processing, the execution of multiple tasks or instructions simultaneously,
offers several advantages and disadvantages. Let's explore them:
Advantages of Parallel Processing:
Increased Performance: One of the primary advantages of parallel processing is
improved performance. By dividing a task into smaller subtasks and executing
them concurrently, parallel processing can significantly reduce the overall
execution time. This is especially beneficial for computationally intensive tasks
that can be effectively parallelized.
Scalability: Parallel processing allows for scalability by distributing the workload
across multiple processors or cores. As the workload increases, additional
processors or cores can be added to the system, enabling efficient utilization of
available resources and accommodating higher computational demands.
Enhanced Throughput: Parallel processing can improve system throughput by
simultaneously executing multiple tasks or processing multiple data streams. This
is particularly advantageous in scenarios where multiple independent tasks need to
be completed in a time-efficient manner.
Real-Time Processing: Parallel processing is often utilized in real-time systems
where tasks must be executed within strict timing constraints. By parallelizing the
workload, real-time processing systems can meet the required deadlines and ensure
timely responses.
Fault Tolerance: In certain parallel processing configurations, redundancy can be
introduced to enhance fault tolerance. By replicating tasks or data across multiple
processors, parallel processing systems can continue functioning even if one or
more processors fail.

Disadvantages of Parallel Processing:


Complexity: Parallel processing introduces increased complexity compared to
sequential processing. Developing parallel algorithms and software requires careful
consideration of issues such as task partitioning, load balancing, synchronization,
and data sharing. Parallel programming is generally more challenging and can be
more error-prone than sequential programming.
Overhead and Communication: Parallel processing involves communication and
coordination between multiple processors or cores. This communication overhead
can impact performance, especially when tasks frequently need to exchange data or
synchronize their execution. Efficient management of communication and
minimizing synchronization overhead is crucial to ensure optimal parallel
processing performance.
Limited Parallelizability: Not all tasks or algorithms can be effectively
parallelized. Some tasks are inherently sequential, and attempting to parallelize
them may lead to diminished performance or incorrect results. Identifying
parallelizable portions of a problem and designing efficient parallel algorithms can
be a complex task.
Dependency and Data Consistency: Dependencies and data consistency issues
can arise in parallel processing. If tasks depend on the results of other tasks, proper
synchronization mechanisms must be implemented to ensure the correct order of
execution and data consistency. Managing dependencies and maintaining data
integrity across multiple parallel tasks can be challenging.
Cost and Resource Constraints: Implementing parallel processing systems can
involve significant costs. It requires specialized hardware, such as multiple
processors or cores, along with additional infrastructure for communication and
synchronization. The cost of developing parallel algorithms and software, as well
as the complexity of system design, should also be considered.
Intel 8085 Processor: The Intel 8085 is an 8-bit microprocessor introduced by
Intel in March 1976. Its key components include the accumulator, registers,
program counter, stack pointer, instruction register, flags register, data bus,
address bus, and control bus. The 8085 has six general-purpose registers (B, C,
D, E, H, and L), which can be combined to form 16-bit register pairs, and it
uses a multiplexed address/data bus (AD0-AD7).
The registers of the Intel 8085 are effective because they serve the following purposes:
Temporary storage: Registers are used as temporary storage locations for data
that needs to be processed by the microprocessor. For example, when performing
arithmetic operations, the operands are typically stored in registers.
Addressing: Registers are used for addressing memory locations in the 8085
microprocessor. The program counter (PC) register keeps track of the memory
location of the current instruction, while the stack pointer (SP) register keeps track
of the top of the stack.
Input/Output: Registers are used for communicating with input/output (I/O)
devices. For example, the accumulator (A) register is used for communicating with
the data bus, which is connected to I/O devices.
Status information: Registers are used for storing status information about the
state of the microprocessor. For example, the flag register stores information about
the results of arithmetic and logical operations, including whether a result is
negative, zero, or carry.
Optimization: Registers are used to optimize the performance of the
microprocessor. By using registers to store frequently used data and instructions,
the microprocessor can access this information more quickly than if it had to
retrieve it from memory.
Registers in 8085:
(a) General Purpose Registers – The 8085 has six general-purpose registers to
store 8-bit data; these are identified as- B, C, D, E, H, and L. These can be
combined as register pairs – BC, DE, and HL, to perform some 16-bit operation.
These registers are used to store or copy temporary data, by using instructions,
during the execution of the program.
(b) Specific Purpose Registers –
Accumulator: The accumulator is an 8-bit register (it can store 8-bit data) that
is part of the arithmetic and logic unit (ALU). After performing arithmetic or
logical operations, the result is stored in the accumulator. The accumulator is
also referred to as register A.
Flag register: The flag register is a special-purpose register, quite different
from the other registers in the microprocessor. It consists of 8 bits, of which
only 5 are used; the other three are left vacant, reserved for future use. These
5 flags are set or reset (a flag is said to be set when its value is 1 and reset
when its value is 0) after an operation, according to the data condition of the
result in the accumulator and other registers. The 5 flags are:
Sign Flag: It occupies the seventh bit of the flag register, which is also known as
the most significant bit. It helps the programmer to know whether the number
stored in the accumulator is positive or negative. If the sign flag is set, it means
that number stored in the accumulator is negative, and if reset, then the number is
positive.
Zero Flag: It occupies the sixth bit of the flag register. It is set, when the operation
performed in the ALU results in zero (all 8 bits are zero), otherwise it is reset. It
helps in determining if two numbers are equal or not.
Auxiliary Carry Flag: It occupies the fourth bit of the flag register. In an
arithmetic operation, when a carry is generated by the third bit and passed on
to the fourth bit, the auxiliary carry flag is set; otherwise, it is reset. This
flag is used internally for BCD (binary-coded decimal) operations. Note: this is
the only flag in the 8085 that is not accessible to the user.
Parity Flag: It occupies the second bit of the flag register. This flag tests
the number of 1’s in the accumulator. If the accumulator holds an even number of
1’s, the flag is set and the result is said to have even parity; if the number
of 1’s is odd, the flag is reset and the result is said to have odd parity.
Carry Flag: It occupies the zeroth bit of the flag register. If an arithmetic
operation results in a carry (the result is larger than 8 bits), the carry flag
is set; otherwise it is reset.
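The five flag definitions above can be checked with a small sketch that derives each flag from an 8-bit addition. Bit positions follow the text (S = bit 7, Z = bit 6, AC = bit 4, P = bit 2, CY = bit 0); this is an illustration, not a full 8085 model.

```python
# Derive the 8085 flags after an 8-bit addition.
def add_flags(a, b):
    total = a + b
    r8 = total & 0xFF                                     # result wraps to 8 bits
    flags = {
        "S":  (r8 >> 7) & 1,                              # sign: MSB of result
        "Z":  1 if r8 == 0 else 0,                        # zero result
        "AC": 1 if ((a & 0xF) + (b & 0xF)) > 0xF else 0,  # carry out of bit 3
        "P":  1 if bin(r8).count("1") % 2 == 0 else 0,    # even number of 1's
        "CY": 1 if total > 0xFF else 0,                   # carry out of bit 7
    }
    return r8, flags

result, flags = add_flags(0x99, 0x67)   # 0x99 + 0x67 = 0x100
print(hex(result), flags)               # wraps to 0x0: Z, AC, P and CY are set
```

Running the example shows the interaction described above: the sum overflows 8 bits, so the result in the accumulator is zero while the carry records the overflow.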
Advantages:
Fast access: Registers provide a fast and efficient way to access data and perform
operations. Since the registers are located inside the processor, they can be
accessed quickly without having to wait for data to be fetched from memory.
Reduced memory access: The use of registers can help reduce the number of
memory accesses required, which can improve the overall performance of the
system.
Specialized functionality: Each register in the 8085 microprocessor has a specific
function, such as the accumulator for arithmetic operations and the program
counter for storing the address of the next instruction. This specialized
functionality can make programming and debugging easier.
Reduced complexity: By providing dedicated registers for specific purposes, the
8085 microprocessor reduces the complexity of the programming and execution
process.
Disadvantages:
Limited storage capacity: The 8085 microprocessor has a limited number of
registers, which can restrict the amount of data that can be stored and manipulated
at any given time.
Complex addressing modes: Some of the addressing modes used in the 8085
microprocessor can be complex, which can make programming more difficult.
Context switching: In some cases, switching between different sets of registers
can add overhead and complexity to the programming process.
Lack of flexibility: The fixed number and function of registers in the 8085
microprocessor can limit the flexibility of the system and make it more difficult to
adapt to changing requirements.
The 8086 microprocessor is a 16-bit microprocessor designed by Intel in the
late 1970s. It is the first member of the x86 family of microprocessors, which
includes many popular CPUs used in personal computers.
The architecture of the 8086 microprocessor is based on a complex instruction set
computer (CISC) architecture, which means that it supports a wide range of
instructions, many of which can perform multiple operations in a single instruction.
The 8086 microprocessor has a 20-bit address bus, which can address up to 1 MB
of memory, and a 16-bit data bus, which can transfer data between the
microprocessor and memory or I/O devices.
The 8086 microprocessor has a segmented memory architecture, which means that
memory is divided into segments that are addressed using both a segment register
and an offset. The segment register points to the start of a segment, while the offset
specifies the location of a specific byte within the segment. This allows the 8086
microprocessor to access large amounts of memory, while still using a 16-bit data
bus.
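The segment:offset scheme above amounts to a one-line calculation: the 16-bit segment value is shifted left by 4 bits (multiplied by 16) and added to the 16-bit offset, producing a 20-bit physical address.

```python
# 8086 physical address = segment * 16 + offset, truncated to 20 bits.
def physical_address(segment, offset):
    return (((segment & 0xFFFF) << 4) + (offset & 0xFFFF)) & 0xFFFFF

print(hex(physical_address(0x1234, 0x0010)))  # 0x12350
print(hex(physical_address(0xFFFF, 0x0010)))  # 0x0 - the sum wraps past 1 MB
```

The second example shows a side effect of the scheme: segment:offset pairs near the top of memory wrap around to low addresses on the original 8086, since only 20 address bits exist.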
The 8086 microprocessor has two main units: the execution unit (EU) and the bus
interface unit (BIU). The BIU is responsible for fetching instructions from
memory and for managing data transfer between the microprocessor and memory or
I/O devices, while the EU decodes and executes the instructions.
The 8086 microprocessor has a rich set of registers, including general-purpose
registers, segment registers, and special registers. The general-purpose registers
can be used to store data and perform arithmetic and logical operations, while the
segment registers are used to address memory segments. The special registers
include the flags register, which stores status information about the result of the
previous operation, and the instruction pointer (IP), which points to the next
instruction to be executed.
A Microprocessor is an Integrated Circuit with all the functions of a CPU.
However, it cannot be used stand-alone since unlike a microcontroller it has no
memory or peripherals.
8086 does not have a RAM or ROM inside it. However, it has internal registers for
storing intermediate and final results and interfaces with memory located outside it
through the System Bus.
 Cache Organization: The 8086 processor does not have a built-in cache. Its design
predates the use of cache in processors. However, it has internal registers for
storing intermediate and final results and interfaces with memory located outside it
through the System Bus.
 Memory Organization: The 8086 microprocessor has a 20-bit address bus, which
can address up to 1 MB of memory. The memory in an 8086 based system is
organized as segmented memory. One megabyte is physically organized as an odd
bank and an even bank, each of 512 Kbytes, addressed in parallel by the
processor. Byte data with an even address is transferred on D7-D0, while byte
data with an odd address is transferred on the D15-D8 bus lines.
 Registers Organization: The 8086 microprocessor has 8 registers each of 8 bits,
AH, AL, BH, BL, CH, CL, DH, DL. Each register can store 8 bits. To store more
than 8 bits, we have to use two registers in pairs. There are 4 register pairs AX,
BX, CX, DX. Each register pair can store a maximum of 16 bits of data. General-
purpose registers are used for holding variables or data. They can also be used
as counters or as temporary storage for intermediate results during any
operation. These register pairs serve the following functions: Accumulator (AX),
Base (BX), Counter (CX), and Data (DX). The 8086
microprocessor has a 20-bit wide physical address to access 1MB memory
location. But the registers of the 8086 microprocessor that holds the logical address
are only 16-bits wide. Thus 8086 microprocessor implements memory
segmentation for 1MB physical memory where the memory is divided into sections
or segments.
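Pairing two 8-bit halves into a 16-bit register (for example, AH and AL forming AX) is just a shift and an OR, as this small sketch shows.

```python
# Combine two 8-bit register halves (e.g. AH, AL) into a 16-bit pair (AX).
def make_pair(high, low):
    return ((high & 0xFF) << 8) | (low & 0xFF)

# Split a 16-bit pair back into its high and low bytes.
def split_pair(value):
    return (value >> 8) & 0xFF, value & 0xFF

ax = make_pair(0x12, 0x34)
print(hex(ax))                          # 0x1234
print(tuple(map(hex, split_pair(ax))))  # ('0x12', '0x34')
```

This is why the 8086 can treat AX either as one 16-bit register or as two independent 8-bit registers: the halves simply occupy the high and low bytes of the same word.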

Advantages of Architecture of 8086:


The architecture of the 8086 microprocessor provides several advantages,
including:
Wide range of instructions: The 8086 microprocessor supports a wide range of
instructions, allowing programmers to write complex programs that can perform
many different operations.
Segmented memory architecture: The segmented memory architecture allows
the 8086 microprocessor to address large amounts of memory, up to 1 MB, while
still using a 16-bit data bus.
Powerful instruction set: The instruction set of the 8086 microprocessor includes
many powerful instructions that can perform multiple operations in a single
instruction, reducing the number of instructions needed to perform a given task.
Multiple execution units: The 8086 microprocessor has two main execution units,
the execution unit and the bus interface unit, which work together to efficiently
execute instructions and manage data transfer.
Rich set of registers: The 8086 microprocessor has a rich set of registers,
including general-purpose registers, segment registers, and special registers,
allowing programmers to efficiently manipulate data and control program flow.
Backward compatibility: The architecture of the 8086 microprocessor is
backward compatible with earlier 8-bit microprocessors, allowing programs
written for these earlier microprocessors to be easily ported to the 8086
microprocessor.
Dis-advantages of Architecture of 8086:
The architecture of the 8086 microprocessor has some disadvantages, including:
Complex programming: The architecture of the 8086 microprocessor is complex
and can be difficult to program, especially for novice programmers who may not
be familiar with the assembly language programming required for the 8086
microprocessor.
Segmented memory architecture: While the segmented memory architecture
allows the 8086 microprocessor to address a large amount of memory, it can be
difficult to program and manage, as it requires programmers to use both segment
registers and offsets to address memory.
Limited performance: The 8086 microprocessor has a limited performance
compared to modern microprocessors, as it has a slower clock speed and a limited
number of execution units.
Limited instruction set: While the 8086 microprocessor has a wide range of
instructions, it has a limited instruction set compared to modern microprocessors,
which can limit its functionality and performance in certain applications.
Limited memory addressing: The 8086 microprocessor can only address up to 1
MB of memory, which can be limiting in applications that require large amounts of
memory.
Lack of built-in features: The 8086 microprocessor lacks some built-in features
that are commonly found in modern microprocessors, such as hardware floating-
point support and virtual memory management.

RISC
A Reduced Instruction Set Computer is a type of microprocessor architecture that
utilizes a small, highly-optimized set of instructions rather than the highly-
specialized set of instructions typically found in other architectures. RISC is
an alternative to the Complex Instruction Set Computing (CISC) architecture and
is widely regarded as one of the most efficient CPU architecture approaches
available today.
With RISC, a central processing unit (CPU) implements the processor design
principle of simplified instructions that can do less but can execute more rapidly.
The result is improved performance. A key RISC feature is that it allows
developers to enlarge the register set and increase internal parallelism by
raising the number of parallel threads executed by the CPU and the speed at
which the CPU executes instructions. ARM, or “Advanced RISC Machine,” is
a specific family of instruction set architecture that’s based on reduced instruction
set architecture developed by Arm Ltd. Processors based on this architecture are
common in smartphones, tablets, laptops, gaming consoles and desktops, as well as
a growing number of other intelligent devices.
RISC systems use hard-wired control logic with a simple instruction set that
needs a less costly CPU than a CISC device. RISC processors are used in
smartphones, printers, tablets and devices that perform a specific set of
repeatable activities. RISC CPU technology is increasingly popular in data
center systems because of its performance and efficiency.

Characteristics of RISC
 Simpler instruction, hence simple instruction decoding.
 Instructions fit within one word.
 Instruction takes a single clock cycle to get executed.
 More general-purpose registers.
 Simple Addressing Modes.
 Fewer Data types.
 A pipeline can be achieved.

Advantages of RISC
 Simpler instructions: RISC processors use a smaller set of simple
instructions, which makes them easier to decode and execute quickly. This
results in faster processing times.
 Faster execution: Because RISC processors have a simpler instruction set,
they can execute instructions faster than CISC processors.
 Lower power consumption: RISC processors consume less power than CISC
processors, making them ideal for portable devices.
Disadvantages of RISC
 More instructions required: RISC processors require more instructions to
perform complex tasks than CISC processors.
 Increased memory usage: RISC processors require more memory to store the
additional instructions needed to perform complex tasks.
 Higher cost: Developing and manufacturing RISC processors can be more
expensive than CISC processors.
Examples of RISC architectures:
ARM: ARM (Advanced RISC Machine) is a widely used RISC architecture. It is
known for its energy efficiency and is commonly found in mobile devices,
embedded systems, and microcontrollers. ARM processors are used in popular
devices such as smartphones, tablets, and smartwatches.
MIPS: MIPS (Microprocessor without Interlocked Pipeline Stages) is another
well-known RISC architecture. It has been used in various applications, including
embedded systems, networking devices, and gaming consoles. MIPS processors
are known for their simplicity and high performance.
PowerPC: PowerPC is a RISC architecture that was originally developed by IBM,
Motorola, and Apple. It has been used in a variety of systems, including personal
computers, game consoles, and high-performance computing. PowerPC processors
have been used in Apple Macintosh computers in the past.
SPARC: SPARC (Scalable Processor Architecture) is a RISC architecture
developed by Sun Microsystems (now part of Oracle). It has been primarily used in servers and high-
performance computing systems. SPARC processors are known for their
scalability and support for multi-threading.
The design philosophy of CISC processors is to build the complexity into the CPU,
so the computing process would not be so taxing on the software and other
hardware components. This allows CISC processors to tackle complex workloads
very quickly and efficiently, and they can benefit from a technique known
as multithreading.
In addition to the inherent benefits of CISC, x86 processors enjoy a complete and
comprehensive software and hardware ecosystem, thanks in part to Intel and
AMD's long years of investing in PC. While the champion of the RISC
architecture, the ARM processor, has been making inroads into the server market,
x86 is still ubiquitous in today's server rooms and IT infrastructure. Innovative new
techniques, such as liquid cooling and immersion cooling, have been invented to
help deal with the relatively high power consumption and heat dissipation of CISC
machines.
Instruction Set Complexity: CISC architectures are known for their complex
instruction sets, which typically include a wide range of instructions with varying
formats and functionalities. These instructions can perform operations such as
arithmetic calculations, memory access, data movement, logic operations, and
control flow. The complexity of the instruction set allows programmers to
accomplish tasks with fewer instructions, potentially reducing the overall code size
and development time.

Variable-Length Instructions: In CISC architectures, instructions are often
encoded using variable-length formats. This means that instructions can have different
lengths in terms of the number of bytes they occupy in memory. Variable-length
instructions allow for a more compact encoding of complex operations and enable
the instruction set to incorporate a larger number of instructions. However, the
variable-length encoding can make instruction decoding more challenging and can
affect the efficiency of instruction fetching and pipeline design.
Addressing Modes: CISC architectures support various addressing modes to
facilitate accessing data from memory. These modes provide flexibility in
specifying the location of operands. Common addressing modes include immediate
addressing (where the operand is embedded within the instruction itself), direct
addressing (where the operand is specified using a memory address), register
indirect addressing (where the operand is stored in a register specified by the
instruction), and indexed addressing (where an index register is used to calculate
the memory address).
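The four modes described above can be illustrated with a toy operand fetcher. The memory contents, register names, and mode labels here are hypothetical, for illustration only, not real x86 encodings.

```python
# Toy illustration of common CISC addressing modes.
memory = {100: 42, 200: 7}
registers = {"R1": 100, "IX": 50}

def fetch_operand(mode, arg):
    if mode == "immediate":            # operand is embedded in the instruction
        return arg
    if mode == "direct":               # arg is the memory address of the operand
        return memory[arg]
    if mode == "register_indirect":    # arg names a register holding the address
        return memory[registers[arg]]
    if mode == "indexed":              # address = base + index register
        return memory[arg + registers["IX"]]
    raise ValueError(f"unknown mode: {mode}")

print(fetch_operand("immediate", 5))             # 5
print(fetch_operand("direct", 100))              # 42
print(fetch_operand("register_indirect", "R1"))  # 42
print(fetch_operand("indexed", 150))             # 7  (150 + 50 = 200)
```

Note how the same operand value can be reached several different ways; the mode only changes how the processor computes where to look.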

Memory Access: CISC architectures typically provide instructions that allow direct
memory access. These instructions can load data from memory into registers or
store data from registers into memory. Additionally, CISC architectures often
support memory access instructions that enable efficient manipulation of data
structures, such as strings or arrays, by providing operations like block moves
and string comparisons.
Characteristics of CISC
 Complex instruction, hence complex instruction decoding.
 Instructions are larger than one-word size.
 Instruction may take more than a single clock cycle to get executed.
 Fewer general-purpose registers, as operations are performed in
memory itself.
 Complex Addressing Modes.
 More Data types.
Advantages of CISC
 Reduced code size: CISC processors use complex instructions that can
perform multiple operations, reducing the amount of code needed to perform
a task.
 More memory efficient: Because CISC instructions are more complex, they
require fewer instructions to perform complex tasks, which can result in
more memory-efficient code.
 Widely used: CISC processors have been in use for a longer time than RISC
processors, so they have a larger user base and more available software.
Disadvantages of CISC
 Slower execution: CISC processors take longer to execute instructions
because they have more complex instructions and need more time to decode
them.
 More complex design: CISC processors have more complex instruction sets,
which makes them more difficult to design and manufacture.
 Higher power consumption: CISC processors consume more power than
RISC processors because of their more complex instruction sets.
Examples of CISC architectures:
x86: The x86 architecture, developed by Intel and AMD, is one of the most widely
used CISC architectures. It powers most personal computers and servers today. x86
processors support a wide range of instructions and have evolved over several
generations, including the Intel 8086, 80286, 80386, Pentium, and the modern Intel
Core series.
Motorola 68k: The Motorola 68k architecture, also known as the Motorola 68000,
was a popular CISC architecture used in personal computers and workstations in
the 1980s and early 1990s. It was used in early Apple Macintosh computers and
Atari ST machines.
VAX: The VAX (Virtual Address extension) architecture was developed by
Digital Equipment Corporation (DEC) and was widely used in minicomputers and
mainframes during the 1970s and 1980s. VAX processors supported a rich
instruction set, including complex operations such as string manipulation and
decimal arithmetic.
IBM System/360 and z/Architecture: IBM's System/360 and its successor, the
z/Architecture, are examples of CISC architectures used in mainframe computers.
These architectures support a broad range of instructions and provide extensive
hardware support for virtualization, security, and transaction processing.
