
SEMICONDUCTOR MEMORIES /
DATA STORAGE DEVICES 1

V S Imbulpitiya
Instructor (Electronic)
Technical College
Gampaha
What is Semiconductor memory?
Semiconductor memory is a digital electronic semiconductor device used
for digital data storage, such as computer memory.
It typically refers to MOS memory, where data is stored within metal–oxide–semiconductor (MOS) memory cells on a silicon integrated-circuit memory chip. There are numerous types, built with different semiconductor technologies.
Memory is required in computers to store data and instructions. Memory is physically organized as a large number of cells, each capable of storing one bit. Logically, the cells are organized into groups of bits called words, and each word is assigned an address. Data and instructions are accessed through these memory addresses. The speed with which these addresses can be accessed largely determines the cost of the memory: the faster the memory, the higher the price.
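As a rough picture of this word-and-address organization, the sketch below models memory as a list of fixed-width words read and written by address; the word width and memory size are made-up example values, not figures from the text.

```python
# Minimal sketch of word-addressable memory (illustrative only).
# Word width and memory size here are arbitrary example values.
WORD_BITS = 8          # each word groups 8 one-bit cells
MEMORY_WORDS = 16      # 16 addressable words

memory = [0] * MEMORY_WORDS   # every cell starts cleared

def write_word(address, value):
    """Store a word at the given address (value masked to the word width)."""
    memory[address] = value & ((1 << WORD_BITS) - 1)

def read_word(address):
    """Return the word stored at the given address."""
    return memory[address]

write_word(0x3, 0b10110001)   # place data at address 3
print(read_word(0x3))         # -> 177
```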

RAM chips for computers usually come on removable memory modules like these.
Key differences between primary memory and secondary memory
• Primary memory is also called internal memory, whereas secondary memory is also
known as backup memory or auxiliary memory.
• Primary memory is accessed directly over the data bus, whereas secondary memory is
accessed through I/O channels.
• Primary memory data is accessed directly by the processing unit, whereas secondary
memory data cannot be accessed directly by the processor.
• Primary storage devices are costlier than secondary storage devices of comparable
capacity.
• Primary memory may be either volatile or non-volatile, whereas secondary memory is
always non-volatile.
Cache and Registers
Caches are used to give the CPU instantly available data storage. This is accomplished by building a small amount of memory, known as primary or level 1 cache, right into the CPU. Level 1 cache is very small, normally ranging between 2 kilobytes (KB) and 64 KB.

https://youtu.be/yi0FhRqDJfo
The secondary or level 2 cache typically resides on a memory card located near the CPU.
The level 2 cache has a direct connection to the CPU. A dedicated integrated circuit on
the motherboard, the L2 controller, regulates the use of the level 2 cache by the CPU.
Depending on the CPU, the size of the level 2 cache ranges from 256 KB to 2 megabytes
(MB). In most systems, data needed by the CPU is accessed from the cache
approximately 95 percent of the time, greatly reducing the overhead needed when the
CPU has to wait for data from the main memory.
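The benefit of that roughly 95 percent hit rate can be illustrated with a simple average-access-time calculation; the cache and main-memory latencies below are assumed example figures, not values given in the text.

```python
# Illustrative average memory access time (AMAT) calculation.
# The latencies are assumed example values in nanoseconds.
hit_rate = 0.95        # fraction of accesses served by the cache
cache_time_ns = 2      # assumed cache access latency
memory_time_ns = 60    # assumed main-memory access latency

# On a miss the CPU still pays the cache lookup plus the memory access.
amat_ns = cache_time_ns + (1 - hit_rate) * memory_time_ns
print(f"Average access time: {amat_ns:.1f} ns")   # -> 5.0 ns
```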

High-performance CPUs now have the level 2 cache built into the CPU chip itself. Therefore, the size of the level 2 cache, and whether it is onboard (on the CPU), is a major determining factor in the performance of a CPU.
Level 3 cache
A memory bank built onto the motherboard or within the CPU module. L3 memory is typically slower than L2 memory but faster than main memory. The L3 cache feeds the L2 cache, which feeds the L1 cache, which feeds the processor.
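One way to picture the L1 → L2 → L3 → main memory chain is a lookup that falls through each level in turn, as in the sketch below; the sizes, latencies and fill policy are illustrative assumptions only.

```python
# Illustrative lookup through a three-level cache hierarchy.
# Each level is modelled as a dict of {address: data}; latencies are assumed.
levels = [
    ("L1", {}, 1),        # smallest, fastest
    ("L2", {}, 4),
    ("L3", {}, 12),
]
main_memory = {addr: f"data@{addr}" for addr in range(64)}
MAIN_MEMORY_LATENCY = 60

def load(address):
    """Return (data, total latency), searching L1, then L2, then L3, then RAM."""
    latency = 0
    for name, store, cost in levels:
        latency += cost
        if address in store:
            return store[address], latency
    data = main_memory[address]
    latency += MAIN_MEMORY_LATENCY
    for _, store, _ in levels:      # fill every level on the way back
        store[address] = data
    return data, latency

print(load(5))   # first access: misses everywhere, slow
print(load(5))   # second access: L1 hit, fast
```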

Cache memory
What is primary storage?
Primary storage (also known as main memory or internal memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
The two main types of random-access memory (RAM) are static RAM (SRAM), which
uses several MOS transistors per memory cell, and dynamic RAM (DRAM), which uses a
MOS transistor and a MOS capacitor per cell. Non-volatile memory (such as EPROM,
EEPROM and flash memory) uses floating-gate memory cells, which consist of a single
floating-gate MOS transistor per cell.
Random Access Memory (RAM)
RAM is considered the fastest storage and can achieve very high data transfer rates. When programs or files are accessed, the data is temporarily loaded from your hard drive into RAM, where it can be accessed smoothly. However, if your RAM becomes full, your operating system will adjust and send some of the open programs and files to your hard drive's paging file. This file is slower than RAM because it resides on your hard drive, and it is one of the causes of a computer becoming unresponsive. For this reason, having enough RAM to handle all your work lets you keep multiple programs open without slowdowns or unresponsiveness.
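As a very rough illustration of RAM spilling into the paging file, the toy model below evicts the oldest pages to a simulated swap area once a fixed capacity is exceeded; the capacity and page names are invented example values.

```python
from collections import OrderedDict

# Toy model of RAM spilling to a paging file when it fills (illustrative only).
RAM_CAPACITY = 3                 # assumed capacity, in "pages"
ram = OrderedDict()              # fast storage, limited size
paging_file = {}                 # slow storage on the hard drive

def load_page(name, data):
    """Put a page in RAM, evicting the oldest page to the paging file if full."""
    if len(ram) >= RAM_CAPACITY:
        old_name, old_data = ram.popitem(last=False)
        paging_file[old_name] = old_data      # spilled to disk: slower to reach
    ram[name] = data

for i in range(5):
    load_page(f"program{i}", f"pages of program {i}")

print(list(ram))          # the three most recently loaded programs
print(list(paging_file))  # older programs pushed out to the paging file
```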

Random access (more precisely and more generally called direct access) is the ability to access an arbitrary element of a sequence in equal time, or any datum from a population of addressable elements roughly as easily and efficiently as any other, no matter how many elements may be in the set. In computer science it is typically contrasted with sequential access, which requires data to be retrieved in the order it was stored.
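The contrast can be shown with a short sketch: reaching element i of an array-like list costs the same regardless of i (direct access), while reaching the i-th node of a linked list means walking past every earlier node (sequential access). The data structures below are illustrative only.

```python
# Illustrative contrast between direct (random) access and sequential access.
data = list(range(1_000))

# Direct access: the cost of reaching element i does not depend on i.
def direct_access(i):
    return data[i]                # one indexing step

# Sequential access: modelled as walking a singly linked list from the head.
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

head = None
for value in reversed(data):
    head = Node(value, head)

def sequential_access(i):
    node = head
    for _ in range(i):            # must pass every earlier element
        node = node.next
    return node.value

print(direct_access(900), sequential_access(900))   # same value, very different cost
```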
Static random-access memory (static RAM or SRAM)
It is a type of random-access memory (RAM) that uses latching circuitry (flip-flop) to
store each bit. SRAM is volatile memory; data is lost when power is removed.
A typical SRAM cell is made up of six MOSFETs. Each bit in an SRAM is stored on four transistors (M1, M2, M3, M4) that form two cross-coupled inverters. This storage cell has two stable states, which are used to denote 0 and 1. Two additional access transistors serve to control access to the storage cell during read and write operations.
In addition to such six-transistor (6T) SRAM, other kinds of SRAM chips use 4, 8, 10 (4T, 8T, 10T SRAM), or more transistors per bit. Four-transistor SRAM is quite common in stand-alone SRAM devices (as opposed to SRAM used for CPU caches), implemented in special processes with an extra layer of polysilicon, allowing for very high-resistance pull-up resistors. The principal drawback of 4T SRAM is increased static power due to the constant current flow through one of the pull-down transistors.
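A logic-level way to see why the cross-coupled inverters hold a bit without refreshing is the toy model below, which treats the two inverters as functions that keep reinforcing each other's outputs; it is a behavioural sketch, not a transistor-level model.

```python
# Logic-level sketch of an SRAM cell: two cross-coupled inverters holding a bit.
class SramCell:
    def __init__(self):
        self.q = 0                 # stored bit; q_bar is always "not q"

    def write(self, bit):
        """The access transistors force the new value onto the latch."""
        self.q = bit & 1

    def read(self):
        """Reading is non-destructive: the latch keeps reinforcing itself."""
        q_bar = 1 - self.q         # output of inverter 1
        self.q = 1 - q_bar         # inverter 2 drives q right back
        return self.q

cell = SramCell()
cell.write(1)
print(cell.read(), cell.read())    # -> 1 1  (value held as long as power is on)
```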

Static RAM
Dynamic random access memory
DRAM is a specific type of random access memory that allows for higher densities at a lower cost. The memory modules found in laptops and desktops use DRAM.

Invented by Robert Dennard at IBM in 1966, DRAM works quite differently from other types of memory. The fundamental storage cell within DRAM is composed of two elements: a transistor and a capacitor.

When a bit needs to be put in memory, the transistor is used to charge or discharge the capacitor. A charged capacitor represents a logic high, or '1', while a discharged capacitor represents a logic low, or '0'. The charging and discharging are done via the wordline and the bitline. During a read or write, the wordline goes high and the transistor connects the capacitor to the bitline. Whatever value is on the bitline ('1' or '0') gets stored in or retrieved from the capacitor.

The charge stored on each capacitor is too small to be read directly and is instead measured by a circuit called a sense amplifier. The sense amplifier detects the minute difference in charge and outputs the corresponding logic level. The act of reading from the bitline forces the charge to flow out of the capacitor; thus, in DRAM, reads are destructive. To get around this, the value that was sensed is written back into the capacitor at the end of the access, restoring the cell's charge.

In addition, the capacitors leak charge over time. Therefore, to maintain the data stored in memory, the capacitors must be refreshed periodically. A refresh works just like a read and ensures data is never lost.
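The capacitor-and-transistor cell, the destructive read, the write-back and the need for refresh can all be pictured with the toy model below; the leak rate and sensing threshold are made-up example numbers.

```python
# Toy model of a DRAM cell: a leaky capacitor read through a sense amplifier.
LEAK_PER_TICK = 0.1      # assumed fraction of charge lost per time step
THRESHOLD = 0.5          # sense amplifier decides 1 vs 0 at this charge level

class DramCell:
    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0      # charge or discharge the capacitor

    def leak(self, ticks=1):
        self.charge *= (1 - LEAK_PER_TICK) ** ticks

    def read(self):
        bit = 1 if self.charge > THRESHOLD else 0   # sense amplifier decision
        self.charge = 0.0                           # the read drains the capacitor
        self.write(bit)                             # sensed value is written back
        return bit

cell = DramCell()
cell.write(1)
cell.leak(ticks=5)        # some charge leaks away...
print(cell.read())        # ...but the cell still reads as 1 and is restored

cell.leak(ticks=50)       # without a refresh in time, too much charge is lost
print(cell.read())        # the stored 1 has decayed to a 0
```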
Random access memory
