
Education Ministry of the Republic of Moldova

Technical University of Moldova


Radioelectronics and Telecommunications Faculty
Optoelectronic Systems Chair

YEAR THESIS

Texas Instruments DSPs. Applications of the Integra DSP+ARM Processor

Executant
Coseri Marina,
Degree MMRT-101M group

Thesis leader
Veaceslav Perju, dr.hab.
Chisinau 2011
ANNOTATION

Executant: Coşeri Marina

Title: Applications of DSP Integra DSP+ARM Processor

Location: Republic of Moldova, Chisinau

Date: 2011-01-26

Thesis structure: Introduction, Chapter 1, Chapter 2, Conclusion, Bibliography, 4 figures

Keywords: DSP, OMAP (Open Multimedia Application Platform), Applications, Texas Instruments, Applications Processor.

Field of study: DSP (Digital Signal Processing)

Goals and objectives: The main goal of this thesis is to explain what DSP is and where it is
implemented, to describe the applications of the Integra DSP+ARM processor and the processors
that work on that basis, and to analyze the Applications Processor block diagram information
for the OMAP-L138 and OMAP-L137.

Material presented in thesis: The material in this thesis is drawn from several books on DSP and
OMAP, and from articles in newspapers and magazines covering this new technology.

STATEMENT OF ACCOUNTABILITY

The undersigned declares on personal responsibility that the material in this thesis is the
result of my own work. I understand that, otherwise, I will bear the consequences in accordance
with the law.

CONTENTS

INTRODUCTION
1. General features of Digital Signal Processing…………………...…………………………..6
1.1. DSP Fundamentals …………………..………….………………………………………6
1.2. Applications of DSP ……………………………………………………………...…..10
2. Applications of DSP Integra DSP+ARM Processor ...........................................................18
2.1. Texas Instruments OMAP …………………………………………………………….…18
2.2 Applications of Integra DSP+ARM Processor ………….…………..……………..…….21
2.2.1 OMAP-L138 Applications Processor……………………………………21
2.2.2 OMAP-L137 Applications Processor……………………………………27
General conclusions
BIBLIOGRAPHY

INTRODUCTION

During the 1980s, quality became a focus area at Texas Instruments. During the early 80s a
quality program was instituted. This included widespread Juran training, as well as
promoting statistical process control, Taguchi methods and Design for Six Sigma. In the late 80s
TI, along with Eastman Kodak and Allied Signal, began involvement
with Motorola, institutionalizing Motorola's Six Sigma methodology. [8] Motorola, which originally
developed the Six Sigma methodology, began this work in 1982. Note that TI's Six Sigma
program began well before 1995, when GE started its legendary Six Sigma policy. In 1992 the
quality improvement efforts of Texas Instruments' DSEG division were rewarded with
the Malcolm Baldrige National Quality Award for manufacturing. [11]
TI's innovative analog and DSP technologies, along with its other semiconductor
products, help customers meet real-world signal processing requirements. TI is the world leader
in digital signal processing and analog technologies. Texas Instruments was also active in
the defense electronics market starting in 1942 with submarine detection equipment, building on
the seismic exploration technology developed for the oil industry. This business was known over
time as the Laboratory & Manufacturing Division, the Apparatus Division, the Equipment Group
and the Defense Systems & Electronics Group (DSEG).[1]
Digital Signal Processing is one of the most powerful technologies that will shape science and
engineering in the twenty-first century. Revolutionary changes have already been made in a
broad range of fields: communications, medical imaging, radar & sonar, high fidelity music
reproduction, and oil prospecting, to name just a few. Each of these areas has developed a deep
DSP technology, with its own algorithms, mathematics, and specialized techniques. This
combination of breadth and depth makes it impossible for any one individual to master all of the
DSP technology that has been developed. DSP education involves two tasks: learning general
concepts that apply to the field as a whole, and learning specialized techniques for your
particular area of interest. This chapter starts our journey into the world of Digital Signal
Processing by describing the dramatic effect that DSP has made in several diverse fields.
Information processed by a DSP can be used by a computer to control such things as security,
telephone, and home theater systems, or to perform video compression. Signals may be compressed so that
they can be transmitted quickly and more efficiently from one place to another (e.g.
teleconferencing can transmit speech and video via telephone lines). Signals may also be
enhanced or manipulated to improve their quality or provide information that is not sensed by
humans (e.g. echo cancellation for cell phones or computer-enhanced medical images). Although
real-world signals can be processed in their analog form, processing signals digitally provides
the advantages of high speed and accuracy.[3]

This recent history is more than a curiosity; it has a tremendous impact on your ability to
learn and use DSP. Suppose you encounter a DSP problem, and turn to textbooks or other
publications to find a solution. What you will typically find is page after page of equations,
obscure mathematical symbols, and unfamiliar terminology. It's a nightmare! Much of the DSP
literature is baffling even to those experienced in the field. It's not that there is anything wrong
with this material, it is just intended for a very specialized audience. State-of-the-art researchers
need this kind of detailed mathematics to understand the theoretical implications of the work.
[11]

1. General features of Digital Signal Processing

1.1. DSP Fundamentals

Digital Signal Processing is one of the most powerful technologies that will shape science
and engineering in the twenty-first century. Revolutionary changes have already been made in a
broad range of fields: communications, medical imaging, radar & sonar, high fidelity music
reproduction, and oil prospecting, to name just a few. Each of these areas has developed a deep
DSP technology, with its own algorithms, mathematics, and specialized techniques. This
combination of breadth and depth makes it impossible for any one individual to master all of the
DSP technology that has been developed. DSP education involves two tasks: learning general
concepts that apply to the field as a whole, and learning specialized techniques for your
particular area of interest. This chapter starts our journey into the world of Digital Signal
Processing by describing the dramatic effect that DSP has made in several diverse fields. The
revolution has begun.[9]
Digital Signal Processing is distinguished from other areas in computer science by the
unique type of data it uses: signals. In most cases, these signals originate as sensory data from
the real world: seismic vibrations, visual images, sound waves, etc. DSP is the mathematics, the
algorithms, and the techniques used to manipulate these signals after they have been converted
into a digital form. This includes a wide variety of goals, such as: enhancement of visual images,
recognition and generation of speech, compression of data for storage and transmission, etc.
Suppose we attach an analog-to-digital converter to a computer and use it to acquire a chunk of
real world data. [3]
The roots of DSP are in the 1960s and 1970s when digital computers first became available.
Computers were expensive during this era, and DSP was limited to only a few critical
applications. Pioneering efforts were made in four key areas: radar & sonar, where national
security was at risk; oil exploration, where large amounts of money could be made; space
exploration, where the data are irreplaceable; and medical imaging, where lives could be saved.
The personal computer revolution of the 1980s and 1990s caused DSP to explode with new
applications. Rather than being motivated by military and government needs, DSP was suddenly
driven by the commercial marketplace. Anyone who thought they could make money in the
rapidly expanding field was suddenly a DSP vendor. DSP reached the public in such products as:
mobile telephones, compact disc players, and electronic voice mail.[11]
Most DSP techniques are based on a divide-and-conquer strategy called superposition. The
signal being processed is broken into simple components, each component is processed
individually, and the results reunited. This approach has the tremendous power of breaking a
single complicated problem into many easy ones. Superposition can only be used with linear

systems, a term meaning that certain mathematical rules apply. Fortunately, most of the
applications encountered in science and engineering fall into this category. This chapter presents
the foundation of DSP: what it means for a system to be linear, various ways for breaking signals
into simpler components, and how superposition provides a variety of signal processing
techniques.
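The superposition property described above can be demonstrated directly. The short Python sketch below is illustrative only; the moving-average filter and the test signals are arbitrary choices, not from the thesis. It shows that a linear system applied to the sum of two signals gives the same result as summing the individually processed signals:

```python
# Illustrative sketch: superposition in a linear system.
# A 3-point moving average is a simple linear (FIR) system.

def moving_average(signal, width=3):
    """Simple linear system: moving average with zero padding at the start."""
    padded = [0.0] * (width - 1) + list(signal)
    return [sum(padded[i:i + width]) / width for i in range(len(signal))]

a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, -1.0, 0.25, 2.0]
combined = [x + y for x, y in zip(a, b)]

# Process the sum, and sum the processed parts: the results agree.
lhs = moving_average(combined)
rhs = [x + y for x, y in zip(moving_average(a), moving_average(b))]
```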
The
term DSP is somewhat misleading because it is usually associated with the FFTs, digital filters,
and spectral analysis theories that are taught as part of the classic DSP curriculum. This is not to
say that DSP is unrelated to frequency domain analysis techniques–spectral analysis remains an
integral part of DSP technology.[9]
A more appropriate definition for DSP as it applies toward the computer industry can be
derived from the name digital signal processing itself–DSP is the processing of analog signals in
the digital domain. Real-world signals, such as voltages, pressures, and temperatures, are
converted to their digital equivalents at discrete time intervals for processing by the CPU of a
digital computer. The result is an array of numerical values stored in memory, ready to be
processed. DSP is useful in almost any application that requires the high-speed processing of a
large amount of numerical data. The data can be anything from position and velocity information
for a closed-loop control system, to two-dimensional video images, to digitized audio and
vibration signals. This chapter describes DSP from an application point of view to
demonstrate the many different and effective uses for digital signal and array processing. The
common factor in all of these applications is the need to do extremely high-speed calculations on
large amounts of data in real time.
Digital signal processing algorithms typically require a large number of mathematical
operations to be performed quickly and repetitively on a set of data. Signals (perhaps from audio
or video sensors) are constantly converted from analog to digital, manipulated digitally, and then
converted again to analog form, as diagrammed below. Many DSP applications have constraints

on latency; that is, for the system to work, the DSP operation must be completed within some
fixed time, and deferred (or batch) processing is not viable.

Fig. 1.1 DSP algorithms


Most general-purpose microprocessors and operating systems can execute DSP
algorithms successfully, but are not suitable for use in portable devices such as mobile phones
and PDAs because of power supply and space constraints. A specialized digital signal processor,
however, will tend to provide a lower-cost solution, with better performance, lower latency, and
no requirements for specialized cooling or large batteries.
The architecture of a digital signal processor is optimized specifically for digital signal
processing. Most also support some of the same features as an applications processor or
microcontroller, since signal processing is rarely the only task of a system. Some useful features
for optimizing DSP algorithms are outlined below.[12]
In many cases, the signal of interest is initially in the form of an analog electrical voltage
or current, produced for example by a microphone or some other type of transducer. In some
situations, such as the output from the readout system of a CD (compact disc) player, the data is
already in digital form. An analog signal must be converted into digital form before DSP
techniques can be applied. An analog electrical voltage signal, for example, can be digitised
using an electronic circuit called an analog-to-digital converter or ADC. This generates a digital
output as a stream of binary numbers whose values represent the electrical voltage input to the
device at each sampling instant. [2]
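The ADC behaviour just described can be sketched in a few lines of Python. This is an idealized illustration: the `adc_sample` helper, the 8 kHz rate, and the 8-bit resolution are assumed example values, not a model of any particular converter.

```python
import math

def adc_sample(analog, sample_rate, duration, bits=8, v_ref=1.0):
    """Sample a continuous signal and quantize each sample to an n-bit code.

    `analog` maps time (seconds) to a voltage in [-v_ref, +v_ref].
    """
    levels = 2 ** bits
    codes = []
    for n in range(int(sample_rate * duration)):
        v = analog(n / sample_rate)                           # sampling instant
        v = max(-v_ref, min(v_ref, v))                        # clip to input range
        codes.append(int((v + v_ref) / (2 * v_ref) * (levels - 1)))  # quantize
    return codes

# Digitize one cycle of a 1 kHz sine at 8 kHz with an 8-bit converter.
samples = adc_sample(lambda t: math.sin(2 * math.pi * 1000 * t),
                     sample_rate=8000, duration=0.001)
# -> 8 integer codes in the range 0..255
```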
By the standards of general purpose processors, DSP instruction sets are often highly
irregular. One implication for software architecture is that hand optimized assembly is
commonly packaged into libraries for re-use, instead of relying on unusually advanced compiler
technologies to handle essential algorithms.
Hardware features visible through DSP instruction sets commonly include:
• Hardware modulo addressing, allowing circular buffers to be implemented without
having to constantly test for wrapping.
• A memory architecture designed for streaming data, using DMA extensively and
expecting code to be written to know about cache hierarchies and the associated delays.
• Driving multiple arithmetic units may require memory architectures to support several
accesses per instruction cycle

• Separate program and data memories (Harvard architecture), and sometimes concurrent
access on multiple data busses
• Special SIMD (single instruction, multiple data) operations
• Some processors use VLIW techniques so each instruction drives multiple arithmetic
units in parallel
• Special arithmetic operations, such as fast multiply-accumulates (MACs). Many
fundamental DSP algorithms, such as FIR filters or the Fast Fourier transform (FFT)
depend heavily on multiply-accumulate performance.
• Bit-reversed addressing, a special addressing mode useful for calculating FFTs
• Special loop controls, such as architectural support for executing a few instruction words
in a very tight loop without overhead for instruction fetches or exit testing
• Deliberate exclusion of a memory management unit. DSPs frequently use multi-tasking
operating systems, but have no support for virtual memory or memory protection.
Operating systems that use virtual memory require more time for context
switching among processes, which increases latency.
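Two of the hardware features listed above, modulo (circular) addressing and the multiply-accumulate (MAC) operation, can be emulated in ordinary software to show what they do. The Python sketch below is illustrative only; real DSPs perform the modulo address update and the MAC in hardware, typically in a single cycle.

```python
# Illustrative sketch (not TI-specific): a software emulation of circular
# addressing for an FIR delay line, with a MAC inner loop.

class FirFilter:
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)
        self.delay = [0.0] * len(coeffs)   # circular delay line
        self.pos = 0                       # write index, wraps via modulo

    def step(self, x):
        self.delay[self.pos] = x
        acc = 0.0
        for k, c in enumerate(self.coeffs):
            # Modulo addressing: walk backwards through the circular buffer.
            acc += c * self.delay[(self.pos - k) % len(self.delay)]  # MAC
        self.pos = (self.pos + 1) % len(self.delay)
        return acc

# A 4-tap averaging filter applied to a unit step input.
fir = FirFilter([0.25, 0.25, 0.25, 0.25])
out = [fir.step(1.0) for _ in range(6)]
# -> [0.25, 0.5, 0.75, 1.0, 1.0, 1.0]
```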

Memory architecture

• DSPs often use special memory architectures that are able to fetch multiple data and/or
instructions at the same time:
− Harvard architecture
− Modified von Neumann architecture
• Use of direct memory access
• Memory-address calculation unit

1.2. Applications of DSP


In DSP, engineers usually study digital signals in one of the following domains: time
domain (one-dimensional signals), spatial domain (multidimensional signals), frequency
domain, autocorrelation domain, and wavelet domains. They choose the domain in which to
process a signal by making an informed guess (or by trying different possibilities) as to which
domain best represents the essential characteristics of the signal. A sequence of samples from a
measuring device produces a time or spatial domain representation, whereas a discrete Fourier
transform produces the frequency domain information, that is the frequency spectrum.

Autocorrelation is defined as the cross-correlation of the signal with itself over varying intervals
of time or space. [1]
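The time-to-frequency step mentioned above can be made concrete with a naive discrete Fourier transform. The following Python sketch is purely illustrative — an O(N²) textbook DFT, not the FFT a real DSP would use:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: time domain -> frequency domain."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# 8 samples of one cycle of a cosine: all energy lands in bins 1 and N-1.
N = 8
x = [cmath.cos(2 * cmath.pi * n / N).real for n in range(N)]
spectrum = [abs(X) for X in dft(x)]
# -> peaks of magnitude N/2 = 4 at k = 1 and k = 7, near zero elsewhere
```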
The main applications of DSP are audio signal processing, audio compression, digital
image processing, video compression, speech processing, speech recognition, digital
communications, RADAR, SONAR, seismology, and biomedicine. Specific examples are speech
compression and transmission in digital mobile phones, room correction of sound in hi-fi
and sound reinforcement applications, weather forecasting, economic forecasting, seismic data
processing, analysis and control of industrial processes, medical imaging such as CAT scans
and MRI, MP3 compression, computer graphics, image manipulation, hi-fi
loudspeaker crossovers and equalization, and audio effects for use with electric
guitar amplifiers.[10]
DSP has revolutionized many areas in science and engineering. A few of these diverse
applications:

• Telecommunications

Telecommunications is about transferring information from one location to another. This
includes many forms of information: telephone conversations, television signals, computer files,
and other types of data. To transfer the information, you need a channel between the two
locations. This may be a wire pair, radio signal, optical fiber, etc. Telecommunications
companies receive payment for transferring their customers' information, while they must pay to
establish and maintain the channel. The financial bottom line is simple: the more information
they can pass through a single channel, the more money they make. DSP has revolutionized the
telecommunications industry in many areas: signaling tone generation and detection, frequency
band shifting, filtering to remove power line hum, etc. Three specific examples from the
telephone network will be discussed here: multiplexing, compression, and echo control.

− Multiplexing:

There are approximately one billion telephones in the world. At the press of a few buttons,
switching networks allow any one of these to be connected to any other in only a few seconds.
The immensity of this task is mind boggling! Until the 1960s, a connection between two
telephones required passing the analog voice signals through mechanical switches and
amplifiers. One connection required one pair of wires. In comparison, DSP converts audio
signals into a stream of serial digital data. Since bits can be easily intertwined and later
separated, many telephone conversations can be transmitted on a single channel. For example, a
telephone standard known as the T-carrier system can simultaneously transmit 24 voice signals.
Each voice signal is sampled 8000 times per second using an 8-bit companded (logarithmically
compressed) analog-to-digital conversion. This results in each voice signal being represented as
64,000 bits/sec, and all 24 channels being contained in 1.544 megabits/sec. This signal can be
transmitted about 6000 feet using ordinary telephone lines of 22 gauge copper wire, a typical
interconnection distance. The financial advantage of digital transmission is enormous. Wire and
analog switches are expensive; digital logic gates are cheap.
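The T-carrier arithmetic quoted above can be verified directly. A small hedged sketch; the extra 8,000 bits/sec of framing overhead is the standard T1 figure of one framing bit per 193-bit frame:

```python
# Checking the T1 arithmetic quoted above.
sample_rate = 8000          # samples per second per voice channel
bits_per_sample = 8         # companded (logarithmic) 8-bit conversion
channels = 24               # voice signals multiplexed on one T1 line

bits_per_channel = sample_rate * bits_per_sample   # 64,000 bits/sec per voice
payload = channels * bits_per_channel              # 1,536,000 bits/sec
framing = sample_rate * 1                          # one framing bit per 193-bit frame
t1_rate = payload + framing                        # 1,544,000 bits/sec = 1.544 Mbit/s
```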

− Compression

When a voice signal is digitized at 8000 samples/sec, most of the digital information is
redundant. That is, the information carried by any one sample is largely duplicated by the
neighboring samples. Dozens of DSP algorithms have been developed to convert digitized voice
signals into data streams that require fewer bits/sec. These are called data compression
algorithms. Matching uncompression algorithms are used to restore the signal to its original
form. These algorithms vary in the amount of compression achieved and the resulting sound
quality. In general, reducing the data rate from 64 kilobits/sec to 32 kilobits/sec results in no loss
of sound quality. When compressed to a data rate of 8 kilobits/sec, the sound is noticeably
affected, but still usable for long distance telephone networks. The highest achievable
compression is about 2 kilobits/sec, resulting in sound that is highly distorted, but usable for
some applications such as military and undersea communications.
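One simple building block of such voice compression is the logarithmic (mu-law) companding already used on telephone channels. The Python sketch below is illustrative; mu = 255 is the common North American telephone parameter, and real codecs add quantization on top of this curve.

```python
import math

MU = 255.0  # North American telephone companding parameter

def mu_law_compress(x):
    """Compress a sample in [-1, 1] logarithmically (mu-law companding)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse mapping: restore the (approximate) original sample."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Quiet samples keep fine resolution: a tiny input uses much of the
# output range, so few bits are wasted on loud samples.
x = 0.01
y = mu_law_compress(x)        # ~0.23
restored = mu_law_expand(y)   # back to ~0.01
```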

− Echo control
Echoes are a serious problem in long distance telephone connections. When you speak into a
telephone, a signal representing your voice travels to the connecting receiver, where a portion of
it returns as an echo. If the connection is within a few hundred miles, the elapsed time for
receiving the echo is only a few milliseconds. The human ear is accustomed to hearing echoes

with these small time delays, and the connection sounds quite normal. As the distance becomes
larger, the echo becomes increasingly noticeable and irritating. The delay can be several hundred
milliseconds for intercontinental communications, and is particularly objectionable. Digital
Signal Processing attacks this type of problem by measuring the returned signal and generating
an appropriate antisignal to cancel the offending echo. This same technique allows speakerphone
users to hear and speak at the same time without fighting audio feedback (squealing). It can also
be used to reduce environmental noise by canceling it with digitally generated antinoise.
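The echo-cancellation idea above, measuring the returned signal and generating an antisignal, is commonly realized with an adaptive filter. The Python sketch below is a toy illustration: the simulated echo path (0.6 gain, 2-sample delay), the 4-tap filter, and the LMS step size are all assumed values.

```python
# Illustrative sketch: cancelling an echo with an adaptive LMS filter.
import random

random.seed(0)
taps = 4
weights = [0.0] * taps
history = [0.0] * taps        # newest sample first
mu = 0.1                      # adaptation step size (assumed)

def echo_path(hist):
    return 0.6 * hist[2]      # simulated echo: 0.6 gain, 2-sample delay

residuals = []
for n in range(2000):
    x = random.uniform(-1, 1)            # far-end (loudspeaker) sample
    history = [x] + history[:-1]
    echo = echo_path(history)            # what the microphone picks up
    estimate = sum(w * h for w, h in zip(weights, history))
    e = echo - estimate                  # residual after cancellation
    weights = [w + mu * e * h for w, h in zip(weights, history)]
    residuals.append(abs(e))

# After adaptation the filter has learned the echo path and the
# residual echo is essentially zero.
```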

• Audio Processing

The two principal human senses are vision and hearing. Correspondingly, much of DSP is
related to image and audio processing. People listen to both music and speech. DSP has made
revolutionary changes in both these areas.

− Music

The path leading from the musician's microphone to the audiophile's speaker is remarkably
long. Digital data representation is important to prevent the degradation commonly associated
with analog storage and manipulation. This is very familiar to anyone who has compared the
musical quality of cassette tapes with compact disks. In a typical scenario, a musical piece is
recorded in a sound studio on multiple channels or tracks. In some cases, this even involves
recording individual instruments and singers separately. This is done to give the sound engineer
greater flexibility in creating the final product. The complex process of combining the individual
tracks into a final product is called mix down. DSP can provide several important functions
during mix down, including: filtering, signal addition and subtraction, signal editing, etc. One of
the most interesting DSP applications in music preparation is artificial reverberation. If the
individual channels are simply added together, the resulting piece sounds frail and diluted, much
as if the musicians were playing outdoors. This is because listeners are greatly influenced by the
echo or reverberation content of the music, which is usually minimized in the sound studio. DSP
allows artificial echoes and reverberation to be added during mix down to simulate various ideal
listening environments. Echoes with delays of a few hundred milliseconds give the impression of
cathedral-like locations. Adding echoes with delays of 10-20 milliseconds provides the
perception of more modest-sized listening rooms.
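A minimal version of artificial reverberation is a feedback comb filter: the output is fed back into itself after a delay, producing a train of decaying echoes. The sketch below is illustrative only; studio reverberators combine many such delay lines.

```python
# Illustrative sketch: artificial reverberation as a feedback comb filter.
# Each pass through the delay line adds another, quieter echo; longer
# delays suggest larger spaces.

def comb_reverb(dry, delay_samples, decay=0.5):
    out = []
    for n, x in enumerate(dry):
        wet = x
        if n >= delay_samples:
            wet += decay * out[n - delay_samples]   # fed-back echo
        out.append(wet)
    return out

# An impulse (a single clap) picks up a train of decaying echoes.
impulse = [1.0] + [0.0] * 9
wet = comb_reverb(impulse, delay_samples=3)
# -> [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```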

− Speech generation

Speech generation and recognition are used to communicate between humans and machines.
Rather than using your hands and eyes, you use your mouth and ears. This is very convenient
when your hands and eyes should be doing something else, such as: driving a car, performing
surgery, or (unfortunately) firing your weapons at the enemy. Two approaches are used for
computer generated speech: digital recording and vocal tract simulation. In digital recording, the
voice of a human speaker is digitized and stored, usually in a compressed form. During
playback, the stored data are uncompressed and converted back into an analog signal. An entire
hour of recorded speech requires only about three megabytes of storage, well within the
capabilities of even small computer systems. This is the most common method of digital speech
generation used today.
Vocal tract simulators are more complicated, trying to mimic the physical mechanisms by which
humans create speech. The human vocal tract is an acoustic cavity with resonant frequencies
determined by the size and shape of the chambers. Sound originates in the vocal tract in one of
two basic ways, called voiced and fricative sounds. With voiced sounds, vocal cord vibration
produces near periodic pulses of air into the vocal cavities. In comparison, fricative sounds
originate from the noisy air turbulence at narrow constrictions, such as the teeth and lips. Vocal
tract simulators operate by generating digital signals that resemble these two types of excitation.
The characteristics of the resonant chamber are simulated by passing the excitation signal
through a digital filter with similar resonances. This approach was used in one of the very early
DSP success stories, the Speak & Spell, a widely sold electronic learning aid for children.
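The source-filter idea behind vocal tract simulation can be sketched as a pulse-train excitation driving a resonant digital filter. Everything specific below (the 100 Hz pitch, the single 700 Hz "formant", the pole radius) is an assumed illustration, not a real vocoder.

```python
import math

# Illustrative sketch of the source-filter model: voiced excitation
# (a periodic pulse train) shaped by a two-pole digital resonator whose
# peak frequency stands in for one vocal-tract formant.

def resonator(signal, freq_hz, sample_rate, r=0.95):
    """Two-pole resonator: y[n] = x[n] + 2r*cos(w)*y[n-1] - r^2*y[n-2]."""
    w = 2 * math.pi * freq_hz / sample_rate
    a1, a2 = 2 * r * math.cos(w), -r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x + a1 * y1 + a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# 100 Hz pulse train (glottal pulses) shaped by a 700 Hz "formant".
rate = 8000
excitation = [1.0 if n % 80 == 0 else 0.0 for n in range(400)]
speechlike = resonator(excitation, freq_hz=700, sample_rate=rate)
```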

− Speech recognition

The automated recognition of human speech is immensely more difficult than speech generation.
Speech recognition is a classic example of things that the human brain does well, but digital
computers do poorly. Digital computers can store and recall vast amounts of data, perform
mathematical calculations at blazing speeds, and do repetitive tasks without becoming bored or
inefficient. Unfortunately, present day computers perform very poorly when faced with raw
sensory data. Teaching a computer to send you a monthly electric bill is easy. Teaching the same
computer to understand your voice is a major undertaking.
Digital Signal Processing generally approaches the problem of voice recognition in two steps:
feature extraction followed by feature matching. Each word in the incoming audio signal is
isolated and then analyzed to identify the type of excitation and resonant frequencies. These
parameters are then compared with previous examples of spoken words to identify the closest
match. Often, these systems are limited to only a few hundred words; can only accept speech
with distinct pauses between words; and must be retrained for each individual speaker. While

this is adequate for many commercial applications, these limitations are humbling when
compared to the abilities of human hearing. There is a great deal of work to be done in this area,
with tremendous financial rewards for those that produce successful commercial products.

• Echo Location

A common method of obtaining information about a remote object is to bounce a wave off of
it. For example, radar operates by transmitting pulses of radio waves, and examining the received
signal for echoes from aircraft. In sonar, sound waves are transmitted through the water to detect
submarines and other submerged objects. Geophysicists have long probed the earth by setting off
explosions and listening for the echoes from deeply buried layers of rock. While these
applications have a common thread, each has its own specific problems and needs. Digital Signal
Processing has produced revolutionary changes in all three areas.

− Radar

Radar is an acronym for Radio Detection And Ranging. In the simplest radar system, a radio
transmitter produces a pulse of radio frequency energy a few microseconds long. This pulse is
fed into a highly directional antenna, where the resulting radio wave propagates away at the
speed of light. Aircraft in the path of this wave will reflect a small portion of the energy back
toward a receiving antenna, situated near the transmission site. The distance to the object is
calculated from the elapsed time between the transmitted pulse and the received echo. The
direction to the object is found more simply; you know where you pointed the directional
antenna when the echo was received. The operating range of a radar system is determined by two
parameters: how much energy is in the initial pulse, and the noise level of the radio receiver.
Unfortunately, increasing the energy in the pulse usually requires making the pulse longer. In
turn, the longer pulse reduces the accuracy and precision of the elapsed time measurement. This
results in a conflict between two important parameters: the ability to detect objects at long range,
and the ability to accurately determine an object's distance. DSP has revolutionized radar in three
areas, all of which relate to this basic problem. First, DSP can compress the pulse after it is
received, providing better distance determination without reducing the operating range. Second,
DSP can filter the received signal to decrease the noise. This increases the range, without
degrading the distance determination. Third, DSP enables the rapid selection and generation of
different pulse shapes and lengths. Among other things, this allows the pulse to be optimized for
a particular detection problem. Now the impressive part: much of this is done at a sampling rate

comparable to the radio frequency used, as high as several hundred megahertz! When it comes to
radar, DSP is as much about high-speed hardware design as it is about algorithms.
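The basic range calculation described above is simple enough to state in code. A hedged sketch; the 1 ms example delay is arbitrary:

```python
# Radar ranging: distance from the round-trip time of the pulse.
C = 299_792_458.0            # speed of light, m/s

def radar_range_m(echo_delay_s):
    """Target distance: the pulse travels out and back, hence the /2."""
    return C * echo_delay_s / 2

# A 1 ms round trip corresponds to a target about 150 km away.
d = radar_range_m(1e-3)      # ~149,896 m
```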

− Sonar

Sonar is an acronym for SOund NAvigation and Ranging. It is divided into two categories,
active and passive. In active sonar, sound pulses between 2 kHz and 40 kHz are transmitted into
the water, and the resulting echoes detected and analyzed. Uses of active sonar include: detection
& localization of undersea bodies, navigation, communication, and mapping the sea floor. A
maximum operating range of 10 to 100 kilometers is typical. In comparison, passive sonar
simply listens to underwater sounds, which includes: natural turbulence, marine life, and
mechanical sounds from submarines and surface vessels. Since passive sonar emits no energy, it
is ideal for covert operations. You want to detect the other guy, without him detecting you. The
most important application of passive sonar is in military surveillance systems that detect and
track submarines. Passive sonar typically uses lower frequencies than active sonar because they
propagate through the water with less absorption. Detection ranges can be thousands of
kilometers. DSP has revolutionized sonar in many of the same areas as radar: pulse generation,
pulse compression, and filtering of detected signals. In one view, sonar is simpler than radar
because of the lower frequencies involved. In another view, sonar is more difficult than radar
because the environment is much less uniform and stable. Sonar systems usually employ
extensive arrays of transmitting and receiving elements, rather than just a single channel. By
properly controlling and mixing the signals in these many elements, the sonar system can steer
the emitted pulse to the desired location and determine the direction that echoes are received
from. To handle these multiple channels, sonar systems require the same massive DSP
computing power as radar.
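The beam steering described above can be illustrated with a delay-and-sum model of a line array. The Python sketch below is a narrowband idealization; the element count, spacing, and angles are assumed example values.

```python
import math

# Illustrative sketch: delay-and-sum beamforming for a line array.
# Phasing each element before summing reinforces sound arriving from the
# chosen steering angle and attenuates other directions.

def array_gain(steer_deg, arrival_deg, n_elems=8, spacing=0.5):
    """Relative response of an n-element line array (spacing in wavelengths)."""
    to_phase = lambda deg: 2 * math.pi * spacing * math.sin(math.radians(deg))
    dphi = to_phase(arrival_deg) - to_phase(steer_deg)
    resp = sum(complex(math.cos(k * dphi), math.sin(k * dphi))
               for k in range(n_elems))
    return abs(resp) / n_elems

on_target = array_gain(steer_deg=30, arrival_deg=30)    # -> 1.0
off_target = array_gain(steer_deg=30, arrival_deg=-20)  # much smaller
```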

− Reflection seismology

As early as the 1920s, geophysicists discovered that the structure of the earth's crust could be
probed with sound. Prospectors could set off an explosion and record the echoes from boundary
layers more than ten kilometers below the surface. These echo seismograms were interpreted
by eye to map the subsurface structure. The reflection seismic method rapidly became the
primary method for locating petroleum and mineral deposits, and remains so today.
In the ideal case, a sound pulse sent into the ground produces a single echo for each boundary
layer the pulse passes through. Unfortunately, the situation is not usually this simple. Each echo
returning to the surface must pass through all the other boundary layers above where it
originated. This can result in the echo bouncing between layers, giving rise to echoes of echoes
being detected at the surface. These secondary echoes can make the detected signal very
complicated and difficult to interpret. Digital Signal Processing has been widely used since the
1960s to isolate the primary from the secondary echoes in reflection seismograms. How did the
early geophysicists manage without DSP? The answer is simple: they looked in easy places,
where multiple reflections were minimized. DSP allows oil to be found in difficult locations,
such as under the ocean.
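The distinction between primary and secondary echoes can be made concrete with a toy one-dimensional model. In the sketch below (Python; the reflection coefficient r and two-way travel time T are invented for illustration), a strong boundary produces a train of "echoes of echoes" with alternating sign, and a simple two-term inverse filter, in the spirit of classic dereverberation processing, collapses the train back toward the single primary echo.

```python
def convolve(x, h):
    """Full-length discrete convolution."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Toy echo train: a primary at lag 0 followed by bounce-of-bounce
# multiples every T samples, decaying geometrically with alternating
# sign (r and T are invented for illustration).
r, T = 0.5, 4
reverb = [0.0] * 13
for n in range(4):
    reverb[n * T] = (-r) ** n

# A two-term inverse filter (1 at lag 0, r at lag T) cancels the
# multiples, leaving essentially just the primary echo.
filt = [1.0] + [0.0] * (T - 1) + [r]
cleaned = convolve(reverb, filt)
```

The multiples at lags T, 2T, and 3T cancel exactly; the only residual in `cleaned` comes from truncating the modeled echo train.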

• Image Processing

Images are signals with special characteristics. First, they are a measure of a parameter over
space (distance), while most signals are a measure of a parameter over time. Second, they
contain a great deal of information. For example, more than 10 megabytes can be required to
store one second of television video. This is more than a thousand times greater than for a similar
length voice signal. Third, the final judge of quality is often a subjective human evaluation,
rather than objective criteria. These special characteristics have made image processing a
distinct subgroup within DSP.
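The data-volume claim can be checked with back-of-the-envelope arithmetic; the frame size, frame rate, and voice sample rate below are assumed typical values, not figures from the text.

```python
# One second of video versus one second of voice, with assumed figures:
# 512x512 frames, 3 bytes per pixel, 30 frames per second for video;
# 8000 one-byte samples per second for telephone-quality voice.
frame_bytes = 512 * 512 * 3
video_bytes_per_s = frame_bytes * 30   # about 23.6 megabytes
voice_bytes_per_s = 8000 * 1           # 8 kilobytes
ratio = video_bytes_per_s / voice_bytes_per_s
```

Even with these modest assumptions, one second of video exceeds 10 megabytes and is roughly three thousand times the size of one second of voice, consistent with the "more than a thousand times" figure above.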

2. Applications of DSP Integra DSP+ARM Processor

2.1. Texas Instruments OMAP

Texas Instruments OMAP (Open Multimedia Application Platform) is a category of
proprietary systems on chips (SoCs) for portable and mobile multimedia applications developed
by Texas Instruments. OMAP devices generally include a general-purpose ARM
architecture processor core plus one or more specialized co-processors. Earlier OMAP variants
commonly featured a variant of the Texas Instruments TMS320 series digital signal processor.
The OMAP family consists of three product groups classified by performance and intended
application:

 High-performance applications processors
 Basic multimedia applications processors
 Integrated modem and applications processors

Additionally, there are two primary distribution channels, and not all parts are available in
both. The OMAP product line originated in partnerships with cell phone vendors, and the main
distribution channel involves sales directly to such wireless handset vendors. Parts developed to
suit evolving cell phone requirements are flexible and powerful enough to support sales through
less specialized catalog channels; some OMAP1 parts, and many OMAP3 parts, have catalog
versions with different sales and support models. Parts that are obsolete from the perspective of
handset vendors may still be needed to support products developed using catalog parts and
distributor-based inventory management.
Recently, the catalog channels have received more focus, with OMAP35x and OMAP-L13x
parts being marketed for use in various applications where capable and power-efficient
processors are useful.[6]
These are parts originally intended for use as application processors in smart phones, with
processors hefty enough to run significant operating systems (such as Linux or Symbian OS),
support connectivity to personal computers, and support various audio and video applications.

OMAP1

The OMAP1 family started with a TI-enhanced ARM core, and then switched to a standard
ARM926 core. It included numerous variants, most easily distinguished according to
manufacturing technology (130 nm except for the OMAP171x series), CPU, peripheral set, and
distribution channel (direct to large handset vendors, or through catalog-based distributors). As
of March 2009, the OMAP 1710 family chips were still available to handset vendors.
Products using OMAP1 processors include hundreds of cell phone models, and the Nokia
770 Internet Tablet.

 OMAP171x - 220 MHz ARM926EJ-S + C55x DSP, low-voltage 90 nm technology
 OMAP162x - 204 MHz ARM926EJ-S + C55x DSP + 2MB Internal SRAM, 130 nm
technology
 OMAP5912 - catalog availability version of OMAP1621 (or OMAP1611b in older
versions)
 OMAP161x - 204 MHz ARM926EJ-S + C55x DSP, 130 nm technology
 OMAP1510 - 168 MHz ARM925T (TI-enhanced) + C55x DSP
 OMAP5910 - catalog availability version of OMAP 1510

OMAP2

These parts were never marketed except to handset vendors. Products using these include
both Internet Tablets and mobile phones:

 OMAP2431 - 330 MHz ARM1136 + 220 MHz C64x DSP
 OMAP2430 - 330 MHz ARM1136 + 220 MHz C64x DSP + PowerVR MBX lite GPU
 OMAP2420 - 330 MHz ARM1136 + 220 MHz C55x DSP + PowerVR MBX GPU

OMAP3

The OMAP3 is broken into 3 distinct groups: the OMAP34x, the OMAP35x, and the
OMAP36x. OMAP34x and OMAP36x are distributed directly to large handset (such as cell
phone) manufacturers. OMAP35x is a variant of OMAP34x intended for catalog distribution
channels. The OMAP36x is a 45nm version of the 65nm OMAP34x with higher clock speed.[2]

The video technology in the higher end OMAP3 parts is derived in part from
the DaVinci product line, which first packaged higher end C64x+ DSPs and image processing
controllers with ARM9 processors last seen in the older OMAP1 generation.

Not highlighted in the list below is that each OMAP3 SoC has an "Image, Video, Audio" (IVA2)
accelerator. These units do not all have the same capabilities. Most devices support 12 megapixel
camera images, though some support 5 or 3 megapixels. Some support HD imaging.

OMAP4

OMAP4430 and OMAP4440 use a dual-core ARM Cortex-A9, a PowerVR SGX540 integrated
3D graphics accelerator, and an IVA3 multimedia hardware accelerator with a programmable
DSP that enables 1080p Full HD and multi-standard video encode/decode. OMAP4 uses
Cortex-A9s with ARM's SIMD engine (Media Processing Engine, also known as NEON), which
may have a significant performance advantage in some cases over the Nvidia Tegra 2's
Cortex-A9s with their non-vector floating-point units.
Basic multimedia applications processors are marketed only to handset manufacturers. They
are intended to be highly integrated, low cost chips for consumer products. The OMAP-DM
series are intended to be used as digital media coprocessors for mobile devices with high
megapixel digital still and video cameras.

 OMAP331 - ARM9
 OMAP310 - ARM9
 OMAP-DM270 - ARM7 + C54x DSP
 OMAP-DM299 - ARM7 + ISP + stacked mDDR SDRAM
 OMAP-DM500 - ARM7 + ISP + stacked mDDR SDRAM
 OMAP-DM510 - ARM926 + ISP + 128Mb stacked mDDR SDRAM
 OMAP-DM515 - ARM926 + ISP + 256Mb stacked mDDR SDRAM
 OMAP-DM525 - ARM926 + ISP + 256Mb stacked mDDR SDRAM

Integrated modem and applications processors are marketed only to handset manufacturers.
Many of the newer versions are highly integrated for use in very low cost cell phones.

 OMAPV1035 - single-chip EDGE (discontinued in 2009 when TI announced its
withdrawal from the baseband chipset market).
 OMAPV1030 - EDGE digital baseband
 OMAP850 - 200 MHz ARM926EJ-S + GSM/GPRS digital baseband + stacked EDGE
co-processor
 OMAP750 - 200 MHz ARM926EJ-S + GSM/GPRS digital baseband + DDR Memory
support
 OMAP733 - 200 MHz ARM926EJ-S + GSM/GPRS digital baseband + stacked SDRAM
 OMAP730 - 200 MHz ARM926EJ-S + GSM/GPRS digital baseband + SDRAM
Memory support
 OMAP710 - 133 MHz ARM925 + GSM/GPRS digital baseband

OMAP-L1x

The OMAP-L1x parts are marketed only through catalog channels, and have a different
technological heritage than the other OMAP parts. Rather than deriving directly from cell phone
product lines, they grew from the video-oriented DaVinci product line by removing the video-
specific features while using upgraded DaVinci peripherals. A notable feature is the use of a
floating-point DSP, instead of the more customary fixed-point one.
The Hawkboard uses the OMAP-L138.

 OMAP-L137 - 300 MHz ARM926EJ-S + C674x floating-point DSP
 OMAP-L138 - 300 MHz ARM926EJ-S + C674x floating-point DSP [6]

2.2 Applications of Integra DSP+ARM Processor

2.2.1 OMAP-L138 Applications Processor

The device is a low-power applications processor based on an ARM926EJ-S™ and a
C674x DSP core. It provides significantly lower power consumption than other members of the
TMS320C6000™ platform of DSPs.
The device enables OEMs and ODMs to quickly bring to market devices featuring robust
operating system support, rich user interfaces, and high processing performance through the
maximum flexibility of a fully integrated mixed-processor solution.
The dual-core architecture of the device provides benefits of both DSP and Reduced Instruction
Set Computer (RISC) technologies, incorporating a high-performance TMS320C674x DSP core
and an ARM926EJ-S core.
The ARM926EJ-S is a 32-bit RISC processor core that performs 32-bit or 16-bit
instructions and processes 32-bit, 16-bit, or 8-bit data. The core uses pipelining so that all parts
of the processor and memory system can operate continuously.[4]

The ARM core has a coprocessor 15 (CP15), protection module, and Data and program
Memory Management Units (MMUs) with table look-aside buffers. It has separate 16K-byte
instruction and 16K-byte data caches. Both are four-way associative with virtual index virtual
tag (VIVT). The ARM core also has an 8KB RAM (Vector Table) and 64KB ROM.
The device DSP core uses a two-level cache-based architecture. The Level 1 program cache
(L1P) is a 32KB direct mapped cache and the Level 1 data cache (L1D) is a 32KB 2-way set-
associative cache. The Level 2 program cache (L2P) consists of a 256KB memory space that is
shared between program and data space. L2 also has a 1024KB Boot ROM. L2 memory can be
configured as mapped memory, cache, or combinations of the two. Although the DSP L2 is
accessible by ARM and other hosts in the system, an additional 128KB RAM shared memory is
available for use by other hosts without affecting DSP performance.
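The cache figures quoted above determine each cache's geometry through the relation sets = size / (ways × line size). The short sketch below works this out in Python; the line sizes are assumptions for illustration only, since the text does not state them.

```python
def cache_sets(size_bytes, ways, line_bytes):
    """Number of sets in a set-associative cache (direct mapped = 1 way)."""
    assert size_bytes % (ways * line_bytes) == 0
    return size_bytes // (ways * line_bytes)

# ARM926EJ-S L1 caches: 16KB, four-way (32-byte lines assumed)
arm_l1_sets = cache_sets(16 * 1024, 4, 32)   # 128 sets
# DSP L1D: 32KB, 2-way (64-byte lines assumed)
dsp_l1d_sets = cache_sets(32 * 1024, 2, 64)  # 256 sets
# DSP L1P: 32KB, direct mapped (32-byte lines assumed)
dsp_l1p_sets = cache_sets(32 * 1024, 1, 32)  # 1024 sets
```

The actual line sizes should be taken from the device documentation; the point here is only how size, associativity, and line size relate.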
The peripheral set includes: a 10/100 Mb/s Ethernet MAC (EMAC) with a Management
Data Input/Output (MDIO) module; one USB2.0 OTG interface; one USB1.1 OHCI interface;
two inter-integrated circuit (I2C) Bus interfaces; one multichannel audio serial port (McASP)
with 16 serializers and FIFO buffers; two multichannel buffered serial ports (McBSP) with FIFO
buffers; two SPI interfaces with multiple chip selects; four 64-bit general-purpose timers (one
configurable as watchdog); a configurable 16-bit host port interface (HPI); up
to 9 banks of 16 pins of general-purpose input/output (GPIO) with programmable interrupt/event
generation modes, multiplexed with other peripherals; three UART interfaces (each with RTS
and CTS); two enhanced high-resolution pulse width modulator (eHRPWM) peripherals; 3 32-
bit enhanced capture (eCAP) module peripherals which can be configured as 3 capture inputs or
3 auxiliary pulse width modulator (APWM) outputs; and 2 external memory interfaces: an
asynchronous and SDRAM external memory interface (EMIFA) for slower memories or
peripherals, and a higher speed DDR2/Mobile DDR controller.
The Ethernet Media Access Controller (EMAC) provides an efficient interface between the
device and a network. The EMAC supports both 10Base-T and 100Base-TX, or 10 Mbits/second
(Mbps) and 100 Mbps in either half- or full-duplex mode. Additionally, a Management Data
Input/Output (MDIO) interface is available for PHY configuration. The EMAC supports both
MII and RMII interfaces.[4]
The SATA controller provides a high-speed interface to mass data storage devices. The
SATA controller supports both SATA I (1.5 Gbps) and SATA II (3.0 Gbps).
The Universal Parallel Port (uPP) provides a high-speed interface to many types of data
converters, FPGAs, or other parallel devices. The uPP supports programmable data widths
from 8 to 16 bits on each of two channels. Single-data-rate and double-data-rate transfers are
supported as well as START, ENABLE and WAIT signals to provide control for a variety of
data converters.
A Video Port Interface (VPIF) is included providing a flexible video input/output port.
The OMAP-L138 Applications Processor contains two primary CPU cores: an ARM
RISC CPU for general-purpose processing and systems control; and a powerful DSP to
efficiently handle communication and audio processing tasks. The OMAP-L138 Applications
Processor consists of the following primary components:

• ARM926 RISC CPU core and associated memories
• DSP and associated memories
• A set of I/O peripherals
• A powerful DMA subsystem and SDRAM EMIF interface

Fig. 2.2.1. OMAP-L138 Applications Processor Block Diagram

Features

• Dual-Core SoC
– 375/456-MHz ARM926EJ-S™ RISC MPU
• ARM926EJ-S Core
• ARM9 Memory Architecture
• C674x Two-Level Cache Memory Architecture
• Enhanced Direct Memory Access Controller 3 (EDMA3)
• Three Configurable 16550-Type UART Modules
• LCD Controller
• Two Serial Peripheral Interfaces (SPI)
• Two Multimedia Card (MMC)/Secure Digital (SD) Card Interfaces
• Two Master/Slave Inter-Integrated Circuit (I2C) Modules
• One Host Port Interface (HPI) With 16-Bit-Wide Muxed Address/Data Bus for High
Bandwidth
• Programmable Real-Time Unit Subsystem (PRUSS)
• USB 1.1 OHCI (Host) With Integrated PHY (USB1)
• USB 2.0 OTG Port With Integrated PHY (USB0)
• One Multichannel Audio Serial Port (McASP)
• Two Multichannel Buffered Serial Ports (McBSP)
• 10/100 Mb/s Ethernet MAC (EMAC) [4]

ARM subsystem
This section describes the ARM subsystem and its associated memories. The ARM subsystem
consists of the following components:
• ARM926EJ-S - 32-bit RISC processor
• 16-KB Instruction cache
• 16-KB Data cache
• MMU
• CP15 to control MMU, cache, etc.
• Java accelerator
• ARM Internal Memory
– 8 KB RAM
– 64 KB built-in ROM
• Embedded Trace Module and Embedded Trace Buffer (ETM/ETB)
• Features:
– The main write buffer has a 16-word data buffer and a 4-address buffer
– Support for 32/16-bit instruction sets
– Fixed little-endian memory format
– Enhanced DSP instructions

The ARM926EJ-S processor is a member of the ARM9 family of general-purpose
microprocessors. The ARM926EJ-S processor targets multi-tasking applications where full
memory management, high performance, low die size, and low power are all important.
The ARM926EJ-S processor supports the 32-bit ARM and the 16-bit THUMB
instruction sets, enabling the developer to trade off between high performance and high code
density. It also includes features for efficient execution of Java bytecodes, providing Java
performance similar to a Just-in-Time (JIT) compiler without the associated code overhead.[5]
The ARM926EJ-S processor supports the ARM debug architecture and includes logic to
assist in both hardware and software debugging. The ARM926EJ-S processor has a Harvard
architecture and provides a complete high performance subsystem, including the following:
• An ARM926EJ-S integer core
• A Memory Management Unit (MMU)
• Separate instruction and data AMBA AHB bus interfaces

The ARM926EJ-S processor implements ARM architecture version 5TEJ.

The ARM926EJ-S core includes new signal processing extensions to enhance 16-bit
fixed-point performance using a single-cycle 32 × 16 multiply-accumulate (MAC) unit. The
ARM core also has 8 KB RAM (typically used for vector table) and 64 KB ROM (for boot
images) associated with it. The RAM/ROM locations are not accessible by the DSP or any other
master peripherals. Furthermore, the ARM has DMA and CFG bus master ports via the AHB
interface.[7]
The ARM can operate in two states: ARM (32-bit) mode and Thumb (16-bit) mode. You
can switch the ARM926EJ-S processor between ARM mode and Thumb mode using the BX
instruction.
The ARM can operate in the following modes:
• User mode (USR): Non-privileged mode, usually for the execution of most application
programs.
• Fast interrupt mode (FIQ): Fast interrupt processing
• Interrupt mode (IRQ): Normal interrupt processing
• Supervisor mode (SVC): Protected mode of execution for operating systems
• Abort mode (ABT): Mode of execution after a data abort or a pre-fetch abort
• System mode (SYS): Privileged mode of execution for operating systems
• Undefined mode (UND): Executing an undefined instruction causes the ARM to enter
undefined mode.[4]

DSP subsystem
The DSP subsystem (Figure 2.2.2) includes TI's standard TMS320C674x megamodule
and several blocks of internal memory (L1P, L1D, and L2). This section provides an overview
of the DSP subsystem and the following considerations associated with it:
• Memory mapping
• Interrupts
• Power management

Fig. 2.2.2. TMS320C674x Megamodule Block Diagram

The C674x megamodule (Figure 2.2.2) consists of the following components:
• TMS320C674x CPU
• Internal memory controllers:
– Program memory controller (PMC)
– Data memory controller (DMC)
– Unified memory controller (UMC)
– External memory controller (EMC)
– Internal direct memory access (IDMA) controller
• Internal peripherals:
– Interrupt controller (INTC)
– Power-down controller (PDC)
– Bandwidth manager (BWM)
• Advanced event triggering (AET)

The C674x megamodule implements a two-level internal cache-based memory architecture with
external memory support. Level 1 memory (L1) is split into separate program memory (L1P
memory) and data memory (L1D memory). L1 memory is accessible to the CPU without stalls.
Level 2 memory (L2) can also be split into L2 RAM (normal addressable on-chip memory) and
L2 cache for caching external memory locations. The internal direct memory access controller
(IDMA) manages DMA among the L1P, L1D, and L2 memories.
The C674x megamodule includes the following internal peripherals:
• DSP interrupt controller (INTC)
• DSP power-down controller (PDC)
• Bandwidth manager (BWM)
• Internal DMA (IDMA) controller [5]

2.2.2 OMAP-L137 Applications Processor

The OMAP-L137 is a low-power applications processor based on an ARM926EJ-S™
and a C674x™ DSP core. It provides significantly lower power consumption than other
members of the TMS320C6000™ platform of DSPs.
The OMAP-L137 enables OEMs and ODMs to quickly bring to market devices featuring
robust operating system support, rich user interfaces, and high processing performance
through the maximum flexibility of a fully integrated mixed-processor solution.
The dual-core architecture of the OMAP-L137 provides benefits of both DSP and
Reduced Instruction Set Computer (RISC) technologies, incorporating a high-performance
TMS320C674x DSP core and an ARM926EJ-S core.
The ARM926EJ-S is a 32-bit RISC processor core that performs 32-bit or 16-bit
instructions and processes 32-bit, 16-bit, or 8-bit data. The core uses pipelining so that all parts
of the processor and memory system can operate continuously.[8]
The ARM core has a coprocessor 15 (CP15), protection module, and Data and program
Memory Management Units (MMUs) with table look-aside buffers. It has separate 16K-byte
instruction and 16K-byte data caches. Both are four-way associative with virtual index virtual
tag (VIVT). The ARM core also has an 8KB RAM (Vector Table) and 64KB ROM.
The OMAP-L137 DSP core uses a two-level cache-based architecture. The Level 1 program
cache (L1P) is a 32KB direct mapped cache and the Level 1 data cache (L1D) is a 32KB 2-way
set-associative cache. The Level 2 program cache (L2P) consists of a 256KB memory space that
is shared between program and data space. L2 also has a 1024KB ROM. L2 memory can be
configured as mapped memory, cache, or combinations of the two. Although the DSP L2 is
accessible by ARM and other hosts in the system, an additional 128KB RAM shared memory is
available for use by other hosts without affecting DSP performance. [7]
The peripheral set includes: a 10/100 Mb/s Ethernet MAC (EMAC) with a Management
Data Input/Output (MDIO) module; two inter-integrated circuit (I2C) Bus interfaces; three
multichannel audio serial ports (McASP) with 16/12/4 serializers and FIFO buffers; two 64-bit
general-purpose timers (one configurable as watchdog); a configurable 16-bit
host port interface (HPI); up to 8 banks of 16 pins of general-purpose input/output (GPIO) with
programmable interrupt/event generation modes, multiplexed with other peripherals; 3 UART
interfaces (one with RTS and CTS); 3 enhanced high-resolution pulse width modulator
(eHRPWM) peripherals; 3 32-bit enhanced capture (eCAP) module peripherals which can
be configured as 3 capture inputs or 3 auxiliary pulse width modulator (APWM) outputs; 2 32-
bit enhanced quadrature pulse (eQEP) peripherals; and 2 external memory interfaces: an
asynchronous and SDRAM external memory interface (EMIFA) for slower memories or
peripherals, and a higher speed memory interface (EMIFB) for SDRAM.

The Ethernet Media Access Controller (EMAC) provides an efficient interface between
the OMAP-L137 and the network. The EMAC supports both 10Base-T and 100Base-TX, or 10
Mbits/second (Mbps) and 100 Mbps in either half- or full-duplex mode. Additionally, a
Management Data Input/Output (MDIO) interface is available for PHY configuration.
The HPI, I2C, SPI, USB1.1 and USB2.0 ports allow the OMAP-L137 to easily control
peripheral devices and/or communicate with host processors. The rich peripheral set provides the
ability to control external peripheral devices and communicate with external processors. For
details on each of the peripherals, see the related sections later in this document and the
associated peripheral reference guides.
The OMAP-L137 has a complete set of development tools for both the ARM and DSP.
These include C compilers, a DSP assembly optimizer to simplify programming and scheduling,
and a Windows™ debugger interface for visibility into source code execution.

Fig. 2.2.3. OMAP-L137 Applications Processor Block Diagram


Features

• Dual-Core SoC
– 375/456-MHz ARM926EJ-S™ RISC MPU
– 375/456-MHz C674x VLIW DSP
• TMS320C674x Fixed/Floating-Point VLIW DSP Core
• Enhanced Direct Memory Access Controller 3 (EDMA3)
• 128K-Byte RAM Shared Memory
• Two External Memory Interfaces
• Three Configurable 16550-Type UART Modules
• LCD Controller
• Two Serial Peripheral Interfaces (SPI)
• Multimedia Card (MMC)/Secure Digital (SD) Interface
• Two Master/Slave Inter-Integrated Circuit (I2C) Modules
• One Host Port Interface (HPI)
• USB 1.1 OHCI (Host) With Integrated PHY (USB1)

Applications
• Industrial Diagnostics
• Military Sonar/Radar
• Medical Measurement
• Professional Audio

ARM Subsystem
The ARM Subsystem includes the following features:
· ARM926EJ-S RISC processor
· ARMv5TEJ (32/16-bit) instruction set
· Little endian
· System Control Co-Processor 15 (CP15)
· MMU
· 16KB Instruction cache
· 16KB Data cache
· Write Buffer
· Embedded Trace Module and Embedded Trace Buffer (ETM/ETB)
· ARM Interrupt controller

The ARM Subsystem integrates the ARM926EJ-S processor. The ARM926EJ-S
processor is a member of the ARM9 family of general-purpose microprocessors. This processor
is targeted at multi-tasking applications where full memory management, high performance, low
die size, and low power are all important. The ARM926EJ-S processor supports the 32-bit ARM
and 16-bit THUMB instruction sets, enabling the user to trade off between high performance and
high code density. Specifically, the ARM926EJ-S processor supports the ARMv5TEJ instruction
set, which includes features for efficient execution of Java bytecodes, providing Java
performance similar to a Just-in-Time (JIT) compiler, but without the associated code overhead.
The ARM926EJ-S processor supports the ARM debug architecture and includes logic to
assist in both hardware and software debug. The ARM926EJ-S processor has a Harvard
architecture and provides a complete high performance subsystem, including:
· ARM926EJ -S integer core
· CP15 system control coprocessor
· Memory Management Unit (MMU)
· Separate instruction and data caches
· Write buffer
· Separate instruction and data (internal RAM) interfaces
· Separate instruction and data AHB bus interfaces
· Embedded Trace Module and Embedded Trace Buffer (ETM/ETB) [7]

By default the ARM has access to most on- and off-chip memory areas, including the DSP
internal memories, EMIFA, EMIFB, and the additional 128K-byte on-chip shared SRAM.
Likewise, almost all of the on-chip peripherals are accessible to the ARM by default.
To improve security and robustness, the OMAP-L137 has extensive memory and
peripheral protection units which can be configured to limit access rights to the various on- and
off-chip resources to specific hosts, including the ARM as well as other master peripherals. This
allows the system tasks to be partitioned between the ARM and DSP as best suits the particular
application, while enhancing the overall robustness of the solution.

DSP Subsystem
The DSP Subsystem includes the following features:
· C674x DSP CPU
· 32KB L1 Program (L1P)/Cache (up to 32KB)
· 32KB L1 Data (L1D)/Cache (up to 32KB)
· 256KB Unified Mapped RAM/Cache (L2)
· 1MB Mask-programmable ROM
· Little endian

Fig. 2.2.4. C674x Megamodule Block Diagram

The C674x Central Processing Unit (CPU) consists of eight functional units, two register
files, and two data paths, as shown in Figure 2.2.4. The two general-purpose register files (A and
B) each contain 32 32-bit registers for a total of 64 registers. The general-purpose registers can
be used for data or as data address pointers. The data types supported include packed 8-bit
data, packed 16-bit data, 32-bit data, 40-bit data, and 64-bit data. Values larger than 32 bits, such
as 40-bit or 64-bit values, are stored in register pairs, with the 32 LSBs of data placed
in an even register and the remaining 8 or 32 MSBs in the next upper register (which is always
an odd-numbered register). The eight functional units (.M1, .L1, .D1, .S1, .M2, .L2, .D2, and .S2)
are each capable of executing one instruction every clock cycle. The .M functional units perform
all multiply operations. The .S and .L units perform a general set of arithmetic, logical, and
branch functions. The .D units primarily load data from memory to the register file and store
results from the register file into memory.
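The even/odd register-pair convention for 40-bit and 64-bit values can be mimicked in a few lines of Python (a behavioral sketch of the data layout only, not of the hardware):

```python
def split_to_pair(value, width):
    """Split a 40- or 64-bit value across an (even, odd) register pair:
    the 32 LSBs go to the even register, the remaining 8 or 32 MSBs to
    the next (odd-numbered) register."""
    assert width in (40, 64)
    even = value & 0xFFFFFFFF
    odd = (value >> 32) & ((1 << (width - 32)) - 1)
    return even, odd

def join_pair(even, odd):
    """Reassemble the original value from the register pair."""
    return (odd << 32) | even
```

For example, a 40-bit value 0xAB12345678 splits into 0x12345678 in the even register and 0xAB in the odd one.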
The C674x CPU combines the performance of the C64x+ core with the floating-point
capabilities of the C67x core. Each C674x .M unit can perform one of the following each clock
cycle: one 32 x 32 bit multiply, one 16 x 32 bit multiply, two 16 x 16 bit multiplies, two 16 x 32
bit multiplies, two 16 x 16 bit multiplies with add/subtract capabilities, four 8 x 8 bit multiplies,
four 8 x 8 bit multiplies with add operations, or four 16 x 16 multiplies with add/subtract
capabilities (including a complex multiply). There is also support for Galois field multiplication
on 8-bit and 32-bit data. Many communications algorithms, such as FFTs and modems, require
complex multiplication. The complex multiply (CMPY) instruction takes four 16-bit inputs and
produces a 32-bit real and a 32-bit imaginary output. There are also complex multiplies with
rounding capability that produce one 32-bit packed output containing 16-bit real and 16-bit
imaginary values. The 32 x 32 bit multiply instructions provide the extended precision necessary
for high-precision algorithms on a variety of signed and unsigned 32-bit data types.
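The arithmetic performed by a CMPY-style complex multiply can be modeled functionally as follows (a Python sketch of the computation only; instruction encoding, result packing, and overflow wrapping in extreme corner cases are not modeled):

```python
def cmpy(ar, ai, br, bi):
    """Functional model of a complex multiply: two 16-bit complex
    inputs (ar + j*ai) and (br + j*bi) produce a 32-bit real part and
    a 32-bit imaginary part."""
    for v in (ar, ai, br, bi):
        assert -32768 <= v <= 32767, "inputs are signed 16-bit"
    real = ar * br - ai * bi
    imag = ar * bi + ai * br
    return real, imag
```

For example, multiplying 3 + 4j by 5 - 2j yields 23 + 14j, with each part held in a full 32-bit result.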
The .L unit (Arithmetic Logic Unit) now incorporates the ability to do parallel add/subtract
operations on a pair of common inputs. Versions of this instruction exist to work on 32-bit data
or on pairs of 16-bit data, performing dual 16-bit adds and subtracts in parallel. There are also
saturated forms of these instructions.[7]
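The saturated parallel add/subtract described above can be modeled behaviorally (a Python sketch; the function names are illustrative, not instruction mnemonics):

```python
def sat16(x):
    """Clamp a result to the signed 16-bit range [-32768, 32767]."""
    return max(-32768, min(32767, x))

def addsub2_sat(a, b):
    """Parallel add and subtract of the same two 16-bit inputs, with
    each result saturated, mimicking the dual 16-bit saturated forms."""
    return sat16(a + b), sat16(a - b)
```

Saturation replaces the wrap-around of ordinary two's-complement arithmetic with clipping, which is usually the preferable behavior for audio and other signal data.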
The C674x core enhances the .S unit in several ways. On the previous cores, dual 16-bit
MIN2 and MAX2 comparisons were only available on the .L units. On the C674x core they are
also available on the .S unit which increases the performance of algorithms that do searching and
sorting. Finally, to increase data packing and unpacking throughput, the .S unit allows sustained
high performance for the quad 8-bit/16-bit and dual 16-bit instructions. Unpack instructions
prepare 8-bit data for parallel 16-bit operations. Pack instructions return parallel results to output
precision including saturation support.
The C674x core uses a two-level cache-based architecture. The Level 1 Program cache
(L1P) is a 32 KB direct-mapped cache and the Level 1 Data cache (L1D) is a 32 KB 2-way
set-associative cache. The Level 2 memory/cache (L2) consists of a 256 KB memory space that
is shared between program and data space.
L2 memory can be configured as mapped memory, cache, or a combination of both.[8]

Conclusion

In this thesis, I described the importance of Texas Instruments' digital signal
processors and their applications.
This recent history is more than a curiosity; it has a tremendous impact on one's ability to
learn and use DSP. Suppose you encounter a DSP problem and turn to textbooks or other
publications to find a solution. What you will typically find is page after page of equations,
obscure mathematical symbols, and unfamiliar terminology. It's a nightmare! Much of the DSP
literature is baffling even to those experienced in the field. It is not that there is anything wrong
with this material; it is simply intended for a very specialized audience. State-of-the-art
researchers need this kind of detailed mathematics to understand the theoretical implications of
the work.
In Chapter 2, I examined the Open Multimedia Application Platform (OMAP), an
application of DSP. These are parts originally intended for use as application processors in
smart phones, with processors hefty enough to run significant operating systems (such as Linux
or Symbian OS), support connectivity to personal computers, and support various audio and
video applications.

BIBLIOGRAPHY

1. David Bell, Greg Wood. Communications Infrastructure and Voice/DSP System -
http://focus.ti.com/lit/an/sprab27a/sprab27a.pdf
2. Alango Technologies and solutions - http://www.alango.com/index.php
3. D. Koening. Digital Signal Processing Fundamentals, National Instruments -
http://www.sss-mag.com/pdf/sigdsp.pdf
4. OMAP-L138 Low-Power Applications Processor - http://focus.ti.com/lit/ds/symlink/omap-
l138.pdf
5. OMAP-L138 Applications Processor System Reference Guide, Literature Number:
SPRUGM7D, April 2010 - http://focus.ti.com/lit/ug/sprugm7d/sprugm7d.pdf
6. Texas Instruments OMAP - http://en.wikipedia.org/wiki/Texas_Instruments_OMAP
7. ARM, the architecture for the digital world -
http://www.arm.com/products/processors/classic/arm9/
8. OMAP-L137 Low-Power Applications Processor, SPRS563D, September 2008, revised
August 2010 - http://focus.ti.com/lit/ds/symlink/omap-l137.pdf
9. Introduction to DSP - DSP processors: characteristics -
http://www.bores.com/courses/intro/chips/6_basics.htm
10. The Scientist & Engineer's Guide to Digital Signal Processing -
http://www.analog.com/en/embedded-processing-dsp/learning-and-
development/content/scientist_engineers_guide/fca.html
11. History of the DSP - http://www.slideshare.net/ratbagradio/history-of-the-dsp-1960s-to-
1980s-by-christopher-pickering-presentation
12. Digital Signal Processing - http://www.cl.cam.ac.uk/teaching/0910/DSP/slides-2up.pdf

Marina Coşeri
str. Soarelui 138 A
Chişinău MD-2011
Tel: +373 691 23116
Email: coseri@mail.ru

EDUCATION Technical University of Moldova (2006-2010),
Faculty of Radio and Telecommunications, specialty
Engineering and Management in Telecommunications
• Introductory internship, S.A. MOLDTELECOM CTI (27
December 2007 - 15 January 2008)
• Production internship, S.A. MOLDTELECOM DMTC,
exchange 22 (24 December 2008 - 12 January 2009)
• Courses: CISCO CCNA (February 2010 - ___________)

"Mihai Eminescu" High School, Chişinău, Moldova (1994-2006)
• member of the debate team
• Francophone studies

PROFESSIONAL EXPERIENCE
• telephone operator, S.A. MOLDTELECOM DMTC (1 July 2010 -
30 September)
• individual-customer sales consultant, S.A.
MOLDTELECOM DMTC (1 October - ___________)
• Career management course (November 2010)

SCIENTIFIC ACTIVITY
• Jubilee technical-scientific conference of staff, doctoral
candidates, and students dedicated to the 40th anniversary
of doctoral studies at U.T.M. (17-18 November 2006).
Presentation topic: "Fibre Optice" (Optical Fibers)

LANGUAGES
• Romanian - native
• Russian - fluent
• French - fluent
• English - intermediate

MISCELLANEOUS
• Computer skills: MS Office (Word, Excel); AutoCad; Internet
• Skills: attentiveness, public speaking, teamwork,
responsibility, creativity
• Driving license: Category B

INTERESTS
• Sports: dancesport, volleyball
• Travel, computers, information technology
• Music, art
