
Lossless EEG Compression Algorithm Based on Semi-Supervised Learning for VLSI Implementation


Abstract:
In this paper, a hardware-oriented lossless EEG compression algorithm is proposed, comprising a two-stage prediction, a voting prediction, and a tri-entropy coding scheme. In the two-stage prediction, 27 conditions and 6 functions decide how the current data are predicted from previous data. The voting prediction then selects, according to the 27 conditions, the function that produces the best Error (the difference between the predicted and current data). Moreover, a tri-entropy coding technique is developed based on the normal distribution: two-stage Huffman coding and Golomb-Rice coding generate the binary code of the Error value. On the CHB-MIT Scalp EEG Database, the novel EEG compression algorithm achieves an average compression rate of 2.37. The proposed hardware-oriented algorithm is suitable for VLSI implementation due to its low complexity.
CHAPTER 1

INTRODUCTION

Electroencephalography (EEG) is a biomedical signal that plays an important role in neurology. The technique records tiny electrical signals from the human brain and can help neurologists diagnose potential functional disorders of the brain, such as epilepsy, stroke, and sleep disorders. Sports science also uses EEG to monitor athletes in order to improve their concentration and performance. Among all available kinds of devices, a portable EEG device is expected to provide comfort and ease of accessibility during the EEG signal acquisition procedure.

Recently, wireless body sensor networks (WBSNs) [1] have been widely used to measure biomedical signals. The WBSN provides a good solution for long-term biomedical measurement. However, the usage time per charge of a wireless device is still limited by battery capacity and data transmission time. Compression is therefore an effective way to reduce the data volume, which decreases the data transmission time and lets the wireless device operate longer. Previous studies [2][3] provided compression techniques that address this limitation of wireless devices. For biomedical signals, any loss of data during processing is intolerable: missing data may cause a wrong diagnosis and a failed treatment. Accordingly, a lossless EEG compression method needs to be developed that reduces data transmission time while preserving data quality.


Thus, a hardware-oriented lossless EEG compression algorithm based on a semi-supervised learning technique was proposed in this study. The compression algorithm achieves a compression rate of 2.37 on the CHB-MIT Scalp EEG Database and is easily implemented in a VLSI architecture.

Energy Consumption Reduction Using Physical Properties of Semiconductors:

One of the commonly used approaches to conserve energy is dynamic voltage-frequency

scaling (DVFS) (Le Sueur and Heiser, 2010). Lowering the operation frequency of a processor

reduces the power consumption but will not lead to a direct reduction of energy consumption

because a given program requires proportionally more time to execute. Energy savings can be

achieved only because semiconductors commonly require a lower voltage supply when operating

at lower frequencies. The relation of active power (ignoring leakage power) to the supply

voltage VDD is

P ∝ VDD² × f,

resulting in a super-linear reduction of dynamic power consumption when the operating frequency and supply voltage are reduced simultaneously.
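As a quick numeric sketch of this relation, the snippet below compares dynamic power at a nominal voltage/frequency point and at a scaled-down point; the effective capacitance and the V/f operating points are illustrative assumptions, not figures from a real processor.

```python
# Dynamic (switching) power of a CMOS circuit: P = C_eff * Vdd^2 * f.
# All values below are illustrative assumptions.

def dynamic_power(c_eff, vdd, freq):
    """Active power, ignoring leakage, following the relation above."""
    return c_eff * vdd ** 2 * freq

C_EFF = 1e-9  # assumed effective switched capacitance in farads

full = dynamic_power(C_EFF, vdd=1.2, freq=2.0e9)    # nominal operating point
scaled = dynamic_power(C_EFF, vdd=0.9, freq=1.0e9)  # lower voltage/frequency pair

# Halving f alone would halve power but double runtime (no net energy saving);
# the accompanying voltage drop is what makes the reduction super-linear.
print(scaled / full)  # ≈ 0.28, well below the 0.5 from frequency scaling alone
```

The ratio falls to roughly 0.28 rather than 0.5 precisely because the voltage enters quadratically.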

A number of additional hardware-based methods are available to reduce the power consumption.

Two methods commonly used in modern semiconductors are clock gating (Wu et al., 2000)

and power gating (Shi et al., 2007). Clock gating disconnects a circuit or a part of it from its

driving clock(s). Because the dynamic power consumption of CMOS circuits depends on the operating frequency, it can be reduced to zero using clock gating. However, because of effects such as leakage currents, static power consumption remains relevant.


The dynamic and static power consumption of a circuit can be reduced to zero using power

gating, which switches off the supply voltage to a circuit or a part of it. Although power gating

has the obvious advantage of significant power reduction, it requires all data stored in the power

gated circuit (e.g., in flip-flops or embedded memories) to be restored after the power is

reapplied. Consequently, this takes significantly longer than reapplying the clock to a clock-gated circuit.

Another method to reduce the energy consumption is to exploit differences in power

consumption in the memory hierarchy of a system. Small static memories, called scratchpad

memories (SPM) (Banakar et al., 2002) or tightly coupled memories (TCM), are significantly

more energy efficient than large dynamic memories. An example for the relation of memory size

to the energy required per access is given in Figure 7.5 (Marwedel et al., 2004). An approach to

reduce the memory energy can rely on the locality of reference principle (Denning, 2005). This

principle describes the phenomenon that a given value or related storage locations are frequently

accessed by programs. Temporal locality refers to the reuse of specific data within a small time

frame whereas spatial locality refers to the use of data objects within storage locations that are

close to each other. Both locality patterns can be exploited to conserve energy by placing the

related data objects into scratchpad memory.
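As a toy illustration of how placing hot data in a scratchpad saves energy, the sketch below compares total access energy when the two most frequently reused data objects live in a small memory versus when every access goes to main memory. The per-access energy values and the access trace are purely illustrative assumptions.

```python
# Toy model: energy saved by serving hot (frequently reused) data from a small
# scratchpad (SPM) instead of main memory. Per-access energies are assumed.

E_SPM = 1.0    # energy per scratchpad access (arbitrary units), assumed
E_DRAM = 10.0  # energy per main-memory access (arbitrary units), assumed

# Access trace exhibiting temporal locality: a few addresses dominate.
trace = [0, 1, 0, 2, 0, 1, 0, 3, 0, 1]

def energy(trace, spm_set):
    """Total access energy if the addresses in spm_set live in the scratchpad."""
    return sum(E_SPM if a in spm_set else E_DRAM for a in trace)

baseline = energy(trace, set())    # everything served from main memory
with_spm = energy(trace, {0, 1})   # two hottest objects placed in the SPM
print(baseline, with_spm)  # prints: 100.0 28.0
```

Eight of the ten accesses hit the scratchpad, which is why the saving is large despite the SPM holding only two objects.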


Power Management in Datacenters:

Power management is one of the key challenges in datacenters. The power issue is one of the

most important considerations for almost every decision-making process in a datacenter. In this

context, the power issue refers to power distribution and delivery challenges in a datacenter,

electrical energy cost due to average power consumption in the IT equipment and the room air

conditioning, and power dissipation constraints due to thermal power budgets for VLSI chips.

Figure 2 depicts a distributed power management architecture composed of server-level power

managers, plus blade enclosure and rack-level, and datacenter-level power provisioners, denoted

as SPMs, EPPs, and DPP, respectively. There is one SPM per server, one EPP per blade

enclosure, and a single DPP for the whole datacenter. This architecture is similar to the four-layer architecture proposed in [48]; the only difference is that instead of one server power manager per server that both minimizes the average power consumption and avoids power budget violations, two separate power managers perform these jobs.

A number of dynamic power provisioning policies have been presented in the literature,

including [48–50], where the authors propose using dynamic (as opposed to static) power provisioning to increase datacenter performance and decrease power consumption. Notice

that the power provisioning problem can be formulated as deciding how many computing

resources can be made active with a given total power budget for the datacenter.
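The provisioning decision just framed can be sketched as a simple budgeted-activation problem; the greedy rule and the per-server peak-power figures below are illustrative assumptions, not a policy from [48–50].

```python
# Sketch of the power-provisioning question posed above: with a fixed total
# power budget, how many servers can be made active? Values are assumed.

def max_active_servers(peak_powers, budget):
    """Greedily activate the lowest-peak-power servers until the budget is hit."""
    active, used = 0, 0.0
    for p in sorted(peak_powers):
        if used + p > budget:
            break
        used += p
        active += 1
    return active

servers = [300, 250, 400, 350, 280]  # per-server peak power in watts, assumed
print(max_active_servers(servers, budget=1000))  # 3 (250 + 280 + 300 = 830 W)
```

Real provisioners exploit the gap between theoretical and actual peak power discussed next, so they can safely admit more servers than this worst-case calculation suggests.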

Fan et al. [49] present the aggregate power usage characteristics of different units (servers, racks,

clusters, and datacenter) in a datacenter for different applications over a long period of time. This
data is analyzed in order to maximize the use of the deployed power capacity in the datacenter

while reducing the risk of any power budget violations. In particular, this reference shows that

there is a large difference between theoretical peak and actual peak power consumptions for

different units. This difference grows as the unit size grows. This shows that the opportunity of

minimizing the power budget under performance constraints (or maximizing the number of

servers that are turned ON under a fixed power budget) increases as one goes higher in the

datacenter hierarchy (e.g., from individual servers to datacenter as a whole). For example, it is

reported that in a real Google datacenter, the ratio of the theoretical peak power consumption to

actual maximum power consumption is 1.05, 1.28, and 1.39 for rack, power distribution unit

(PDU), and cluster, respectively. The authors consider two approaches usually used for power

and energy saving in datacenters, i.e., DVFS and reducing the idle power consumption in servers

and enclosures (for example, by power gating logic and memory). Reported results suggest that

employing the DVFS technique can result in 18% peak power reduction and 23% total energy

reduction in a model datacenter. Moreover, decreasing the idle power consumption of the servers

to 10% of their peak power can result in 30% peak power and 50% energy reduction. Based on

these analyses and actual measurements, the authors present a dynamic power provisioning

policy for datacenters to increase the possibility of better utilization of the available power while

protecting the power distribution hierarchy against overdraws.

Exploring the best way of distributing a total power budget among different servers in a server

farm in order to reach the highest performance level is studied in [51]. Moreover, an approach to

reduce the peak power consumption of servers by dynamic power allocation using workload and

performance feedbacks is presented in [52].


Design of an effective server-level power management is perhaps the most researched power

management problem in the literature. Various dynamic power management (DPM) techniques

that solve versions of this problem have been presented by researchers. These DPM approaches

can be broadly classified into three categories: ad hoc [53], stochastic [54], and learning-based

methods [55].

A server-level power manager can be quite effective in reducing the power consumption of a datacenter. As an example, Elnozahy et al. [56] present independent as well as coordinated

voltage and frequency scaling and turn ON/OFF policies for servers in a datacenter and compare

them against each other from a power-savings perspective. Their results indicate that independent DVFS policies for individual servers result in a 29% power reduction compared to a baseline

system with no DVFS. In contrast, a policy that considers only turning ON/OFF servers results

in 42% lowering of the power consumption. The largest power saving of 60% is reported for a

policy with coordinated DVFS and dynamic server ON/OFF decisions.

DPM techniques typically try to put power-consuming components into idle mode as often as possible to maximize the power saving. Studies on different datacenter workloads [7,49,57] show frequent short idle times in the workload. Because of the short widths of

these idle times, components cannot be switched to their deep sleep modes (which consume

approximately zero power) considering the expected performance penalty of frequent go-to-sleep

and wakeup commands. At the same time, because of energy nonproportionality of current

servers [7], idle server power modes give rise to relatively high power consumption compared to

the sleep mode power consumption. As discussed at length before, VM consolidation is an

answer to this problem. A new solution is however emerging. More precisely, a number of new

architectures have been presented for hardware with very low (approximately zero) idle mode
power consumption (energy-proportional servers) to be able to reduce the average power

consumption in case of short idle times [4,49].

There are many examples of work that describe a combined solution for power and resource management. For example, Wang and Wang [58] present a coordinated control solution

that includes a cluster-level power control loop and a performance control loop for every VM.

These control loops are configured to achieve desired power and performance objectives in the

datacenter. Precisely, the cluster-level power controller monitors the power consumption of the

servers and sets the DVFS state of the servers to reach the desired power consumption. In the same vein, the VM performance controller dynamically manages VM performance by

changing the resource (CPU) allocation policy. Finally, a cluster-level resource coordinator is

introduced whose job is to migrate the VMs in case of performance violation. As another

example, Buyya and Beloglazov [59] propose a management architecture comprising a VM

dispatcher, as well as local and global managers. A local manager migrates a VM from one

server to another in case of SLA violations, low server utilization, high server temperature, or

high amount of communication with another VM in a different server. A global manager

receives information from local managers and issues commands for turning ON/OFF servers,

applying DVFS, or resizing VMs.

This chapter tackles the resource management problem in a cloud computing system. Key

features of our formulation and proposed solution are that we consider heterogeneous servers in

the system and use a two-dimensional model of the resource usage accounting for both

computational and memory bandwidth. We propose that multiple copies of each VM be active at any given time in order to reduce the resource requirement of each copy and hence help to increase the chances for VM consolidation. Finally, an algorithm based on DP and local search is

described. This algorithm determines the number of copies of each VM and the placement of

these copies on servers so as to minimize some total system cost function.

Task definition:

The cores executing the benchmarks have distinct computational phases or execution windows at

runtime. At the beginning of each phase/window, dynamic V/F levels (for fine-grain approach)

are adjusted to meet the optimization goals. To ensure the correct execution of ROI, the

benchmark source codes are instrumented with synchronization routines (such as barriers) to

resolve memory access delays and perform data race recovery for the current phase before

executing the next one. This book chapter leverages such computational phases, separated by barriers, to define the tasks executed on each core. Each phase in the application, which may consist of one or more function invocations, corresponds to a task that represents the core's workload.

Definition 2:

An application consists of a set of tasks T = {τ1…τr}, as shown in Fig. 3, where each task τi,

executed on core ci (1 ≤ i ≤ r), is composed of a set of subtasks τi = {τi,1…τi,p} and whose

execution times may include memory access delays for data exchange among the subtasks

through the shared memory. Here, r and p denote the number of cores and application phases,

respectively. In Fig. 3, during an interval, gray portions show computation periods of cores

executing tasks and black portions show the core's idle periods representing overheads caused by

inter-core synchronizations at the end of each interval. The optimization techniques, discussed in

this book chapter, take advantage of these variations in idle periods to improve energy efficiency
and performance by slowing down cores that execute subtasks with longer idle periods and

speeding up cores with subtasks that have shorter idle periods within any given interval.
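A minimal data model for the task structure of Definition 2 might look as follows; the field names and the `slack` helper are illustrative, not taken from the chapter.

```python
# Minimal data model for Definition 2: an application as tasks of subtasks,
# one task per core, with per-subtask compute (gray) and idle/sync-wait
# (black) durations. Names and values are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Subtask:
    compute_time: float  # gray portion: computation within the phase
    idle_time: float     # black portion: wait at the phase-ending barrier

@dataclass
class Task:
    core_id: int
    subtasks: list = field(default_factory=list)

    def slack(self):
        """Total idle time; a V/F controller can slow this core by roughly this much."""
        return sum(s.idle_time for s in self.subtasks)

t = Task(core_id=0, subtasks=[Subtask(3.0, 1.0), Subtask(2.5, 0.5)])
print(t.slack())  # 1.5
```

The optimization techniques in the chapter effectively compare this per-core slack across tasks: cores with large slack are slowed down, cores with little slack are sped up.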

Inclusion of high-voltage effects in III-Nitride devices:

A significant non-ideality that still persists in GaN-HEMT technology and impacts its circuit-

level performance is the so-called “charge-trapping effect”, which manifests as reduced on-currents and increased Ron and/or as dynamic VT shifts during switching conditions. The origin of charge trapping is attributed to surface and/or bulk donor trap states that are filled during

switching, causing a reduction in ns either (i) in the drain-access region (increasing Ron) and/or (ii)

under the gate (causing VT shifts), with the effect seemingly aggravated by high drain-to-gate

fields, switching frequency, and temperature. Most models account for this effect using empirical trapping modules that can be fitted to specific outcomes, since the effect is heavily process-dependent and varies vastly across technologies.

Mechanism (i) is modeled in the MVSG model using an average charge-trapping module that mimics the observed knee walk-out as an RC filter whose output sets the extent to which RSh in the drain-access transistor is increased during switching. The input source function depends on VDG and T and is shown in Fig. 39B. The function is given by:
Chapter 2

Literature survey:

1. An efficient micro control unit with a reconfigurable filter design for wireless body

sensor networks (WBSNs):

In this paper, a low-cost, low-power and high performance micro control unit (MCU) core is

proposed for wireless body sensor networks (WBSNs). It consists of an asynchronous interface,

a register bank, a reconfigurable filter, a slope-feature forecast, a lossless data encoder, an error

correct coding (ECC) encoder, a UART interface, a power management (PWM), and a multi-

sensor controller. To improve the system performance and expansion abilities, the asynchronous
interface is added for handling signal exchanges between different clock domains. To eliminate

the noise of various bio-signals, the reconfigurable filter is created to provide the functions of

average, binomial, and sharpening filters. The slope-feature forecast and the lossless data encoder are proposed to reduce the data volume of various biomedical signals for transmission. Furthermore, the

ECC encoder is added to improve the reliability of the wireless transmission, and the UART interface is employed so that the proposed design is compatible with wireless devices. For long-term

healthcare monitoring application, a power management technique is developed for reducing the

power consumption of the WBSN system. In addition, the proposed design can be operated with

four different bio-sensors simultaneously. The proposed design was successfully tested on an FPGA verification board. The VLSI architecture of this work contains 7.67 K gate counts and consumes 5.8 mW at a 100 MHz processing rate in a TSMC 0.18 μm CMOS process, or 1.9 mW at 133 MHz in a 0.13 μm CMOS process. Compared with previous techniques, this design achieves higher performance, more functions, more flexibility, and higher compatibility than other microcontroller designs.

2. VLSI Implementation of an Efficient Lossless EEG Compression Design for

Wireless Body Area Network:

Data transmission of electroencephalography (EEG) signals over Wireless Body Area Network

(WBAN) is currently a widely used system that comes with challenges in terms of efficiency and effectiveness. In this study, an effective Very-Large-Scale Integration (VLSI) circuit design of a lossless EEG compression circuit is proposed to increase both the efficiency and effectiveness of EEG signal transmission over WBAN. The proposed design was realized based on a novel

lossless compression algorithm which consists of an adaptive fuzzy predictor, a voting-based

scheme, and a tri-stage entropy encoder. The tri-stage entropy encoder is composed of two-stage Huffman and Golomb-Rice encoders with a static coding table, using basic comparator and

multiplexer components. A pipelining technique was incorporated to enhance the performance of

the proposed design. The proposed design was fabricated using a 0.18 μm CMOS technology

containing 8405 gates with 2.58 mW simulated power consumption under an operating condition

of 100 MHz clock speed. The CHB-MIT Scalp EEG Database was used to test the performance

of the proposed technique in terms of compression rate which yielded an average value of 2.35

for 23 channels. Compared with previously proposed hardware-oriented lossless EEG

compression designs, this work provided a 14.6% increase in compression rate with a 37.3%

reduction in hardware cost while maintaining a low system complexity.

3. A unified active and semi-supervised learning framework for image compression:

We consider the problem of lossy image compression from a machine learning perspective.

Typical image compression algorithms first transform the image from its spatial domain

representation to frequency domain representation using some transform technique, such as

discrete cosine transform and discrete wavelet transform, and then code the transformed values.

Recently, instead of performing a frequency transformation, a machine learning based approach has been proposed which uses the color information from a few representative pixels to learn a

model which predicts color on the rest of the pixels. Selecting the most representative pixels is

essentially an active learning problem, while colorization is a semi-supervised learning problem.

In this paper, we propose a novel active learning algorithm, called graph regularized

experimental design (GRED), which shares the same principle of the semi-supervised learning

algorithm used for colorization. This way, active and semi-supervised learning are unified into a

single framework for pixel selection and colorization. Our experimental results suggest that the
proposed approach achieves higher compression ratio and image quality, while the compression

time is significantly reduced.

4. Fuzzy Feature Extraction for Multichannel EEG Classification:

EEG signals (EEGs) are usually collected by placing multiple electrodes at various positions

along the scalp as multichannel data. Given that many channels are collected for each single trial, the multichannel EEG classification problem can be treated as a multivariate time series

classification problem. For multichannel EEG data to be more accurately classified, we propose

an algorithm, called the fuzzy multichannel EEG classifier (FMCEC). This algorithm can take

into consideration the interaction among different signals collected at different time instants and

locations on the skull when constructing a classifier. The FMCEC first preprocesses raw EEG data by eliminating noise through discretization of the data. It then performs fuzzification of the resulting

discretized data to capture imprecision and vagueness in the data. Given the fuzzified data,

FMCEC then discovers intrachannel patterns within each channel and then interchannel patterns

between different channels of EEGs. The discovered patterns, which are represented as fuzzy

temporal patterns, are then used to characterize and differentiate between different classes of

multichannel EEG data. To evaluate the effectiveness of FMCEC, we tested it with several sets

of real EEG datasets. The results show that the algorithm can be a promising tool for the

classification of multichannel EEG data.

5. Performance comparison of Huffman Coding and Double Huffman Coding:

Huffman coding [11] is one of the most popular techniques for generating prefix-free codes [7, 10]. It is an efficient algorithm in the field of source coding, producing the lowest possible average number of code symbols per source symbol [1], and is among the most widely used lossless compression techniques [2]. However, some limitations arise in Huffman coding [20, 21]. The method produces a codeword of few bits for a symbol with a high probability of occurrence and a large number of bits for a symbol with a low probability of occurrence [3]. In Double Huffman Coding, by contrast, once the codeword of a symbol has been generated it is further compressed on a binary basis, through which a better result can be achieved. In this paper we discuss the techniques of Huffman Coding and Double Huffman Coding and compare their performance.
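A minimal sketch of classic (single) Huffman coding illustrates the property discussed above — frequent symbols receive short codewords, rare symbols long ones. The implementation below is the standard greedy tree construction, not the Double Huffman variant.

```python
# Minimal Huffman encoder: repeatedly merge the two least-frequent subtrees,
# prefixing '0'/'1' to the codes of the merged halves.

import heapq
from collections import Counter

def huffman_codes(text):
    """Return {symbol: bitstring} for a greedy Huffman code over `text`."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreak, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# 'a' (most frequent) gets a 1-bit code; 'b' and 'c' get 2-bit codes.
print(sorted((s, len(c)) for s, c in codes.items()))  # [('a', 1), ('b', 2), ('c', 2)]
```

Because no codeword is a prefix of another, the concatenated bitstream can be decoded unambiguously — the prefix-free property the survey item refers to.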


CHAPTER 3

EEG

The human electroencephalogram (EEG) was discovered by the German psychiatrist, Hans

Berger, in 1929. Its potential applications in epilepsy rapidly became clear when Gibbs and

colleagues in Boston demonstrated 3 per second spike wave discharge in what was then termed

petit mal epilepsy. EEG continues to play a central role in diagnosis and management of patients

with seizure disorders—in conjunction with the now remarkable variety of other diagnostic

techniques developed over the last 30 or so years—because it is a convenient and relatively

inexpensive way to demonstrate the physiological manifestations of abnormal cortical

excitability that underlie epilepsy.

However, the EEG has a number of limitations. Electrical activity recorded by electrodes placed

on the scalp or surface of the brain mostly reflects summation of excitatory and inhibitory

postsynaptic potentials in apical dendrites of pyramidal neurons in the more superficial layers of

the cortex. Quite large areas of cortex—in the order of a few square centimetres—have to be

activated synchronously to generate enough potential for changes to be registered at electrodes

placed on the scalp. Propagation of electrical activity along physiological pathways or through

volume conduction in extracellular spaces may give a misleading impression as to location of the

source of the electrical activity. Cortical generators of the many normal and abnormal cortical

activities recorded in the EEG are still largely unknown. Spatial sampling in routine scalp EEG is

incomplete, as significant amounts of cortex, particularly in basal and mesial areas of the

hemispheres, are not covered by standard electrode placement. Temporal sampling is also
limited, and the relatively short duration of routine interictal EEG recording is one reason why

patients with epilepsy may not show interictal epileptiform discharge (IED) in the first EEG

study.

If inappropriate questions are asked of the EEG, diagnostic errors will occur, and there will be

poor yield of information that could be useful in the management of patients with seizure

disorders. It is crucial to recognise that a normal EEG does not exclude epilepsy, as around 10%

of patients with epilepsy never show epileptiform discharges. Secondly, an abnormal EEG

demonstrating IED does not in itself indicate that an individual has a seizure disorder, as IED are

seen in a small percentage of normal subjects who never develop epilepsy, and IED may also be

found in patients with neurological disorders which are not complicated by epilepsy. Table 1 lists

the areas in epilepsy diagnosis and management for which interictal and ictal EEG are useful,

strongly so in some, but in a more limited way in others.

SPECIFICITY AND SENSITIVITY OF ROUTINE EEG:

Epileptiform activity is specific, but not sensitive, for diagnosis of epilepsy as the cause of a

transient loss of consciousness or other paroxysmal event that is clinically likely to be epilepsy.

EEG has relatively low sensitivity in epilepsy, ranging from 25% to 56%. Specificity is better,

but again variable at 78–98%. These wide ranges can be explained partly by diverse case

selection and differences in clinical requirements for diagnosis of epilepsy in population studies

of EEG specificity and sensitivity. Secondly, correlation between different EEG patterns and

epilepsy varies, and only IED are associated with seizure disorders at a sufficiently high rate to

be of clinical use. Abnormalities of background cerebral rhythms, focal slow activity or regional

attenuation are much less specific than epileptiform activity, although they can indicate localised
structural pathology underlying the seizure disorder, or diffuse cortical dysfunction as in

symptomatic generalised epilepsies. Some types of epileptiform phenomena—3 per second spike

wave discharge, hypsarrhythmia, and generalised photoparoxysmal response—are strongly

correlated with clinical epilepsy, whereas focal sharp waves in centro-temporal or occipital

regions have a moderate association with clinically active epilepsy. Of children with centro-temporal or rolandic EEG discharges, only about 40% have clinically expressed seizures. Spiky

or rhythmic phenomena such as 14 and 6 Hz spikes, phantom spike and wave, rhythmic mid

temporal theta (θ), psychomotor variant and subclinical rhythmic epileptiform discharge in adults

(SREDA), have low or zero predictive value for epilepsy. Misinterpretation of such non-

epileptogenic phenomena, or overinterpretation of non-specific EEG abnormalities and

spiky/paroxysmal variants of normal cerebral rhythms, are a common reason for over-diagnosis

of epilepsy.

How often and in which circumstances do non-epileptic subjects show IED in the EEG? In

healthy adults with no declared history of seizures, the incidence of epileptiform discharge in

routine EEG was 0.5%. A slightly higher incidence of 2–4% is found in healthy children and in

non-epileptic patients referred to hospital EEG clinics. The incidence increases substantially to

10–30% in cerebral pathologies such as tumour, prior head injury, cranial surgery, or congenital

brain injury; particular caution is necessary when evaluating the significance of IED in such

cases, and especially when the clinical history offers little support for a diagnosis of epilepsy.

A number of factors influence whether patients with epilepsy will show IED in the EEG. Children are more likely to do so than older subjects. IED is more likely to be found in some epilepsy

syndromes or seizure types. The location of an epileptogenic zone is relevant: a majority of

patients with temporal lobe epilepsy show IED, whereas epileptic foci in mesial or basal cortical
regions remote from scalp electrodes are less likely to demonstrate spikes, unless additional

recording electrodes are used. Patients with frequent (one per month) seizures are more likely to

have IED than those with rare (one per year) attacks. The timing of EEG recording may be important: investigation within 24 hours of a seizure revealed IED in 51%, compared with 34% in those who had a later EEG. Some patients show discharges mainly in sleep, or there may be circadian

variation as in idiopathic generalised epilepsies. Co-medication may be relevant, particularly

drugs that lower seizure threshold or may themselves induce epileptiform activity.

Improving the yield of interictal EEG:

About 50% of patients with epilepsy show IED in the first EEG test. Yield in adults can be

increased by repeating the routine EEG (up to four recordings), and in all ages by use of sleep

studies. The combination of wake and sleep records gives a yield of 80% in patients with

clinically confirmed epilepsy. Sleep EEG may be achieved by recording natural or drug induced

sleep, using hypnotics which have minimal effect on the EEG, such as chloral or melatonin (the

latter is not currently licensed in the UK). Whether sleep deprivation has additional value is

difficult to establish from reported studies, although there is some evidence that it activates IED

in idiopathic generalised epilepsies, and in practice, most patients achieve sleep reduction rather

than true sleep deprivation.

Standard activation procedures of hyperventilation (up to three minutes) and photic stimulation

(using published protocols) should be included in routine EEG recordings, but it is current good

practice to warn patients of the small risk of seizure induction and obtain consent to these

procedures.
Although potentiation of epileptiform discharge may occur up to 24 hours after partial and

generalised seizures, there is insufficient high quality evidence that interictal EEG within this

period increases the likelihood of obtaining IED.

Prolonged interictal sampling using EEG monitoring increases yield by about 20%, and is now

more widely available through 24 hour ambulatory multichannel digital EEG.

USES OF EEG IN DIAGNOSIS OF EPILEPSY

EEG helps determine seizure type and epilepsy syndrome in patients with epilepsy, and thereby

choice of antiepileptic medication and prediction of prognosis. EEG findings contribute to the

multi-axial diagnosis of epilepsy, in terms of whether the seizure disorder is focal or generalised,

idiopathic or symptomatic, or part of a specific epilepsy syndrome.

Focal and generalised seizure disorders show some overlap of both clinical and electrographic

manifestations, and the entity of unihemispheric epilepsies blurs the boundaries further.

However, the conceptual division of partial and generalised seizures/epilepsy types is still valid

and clinically useful. In practice, the clinician will be reasonably certain about seizure type based

on the account provided by the patient and witness. However, when the history is unclear (unwitnessed “blackouts” or brief impairment of awareness), EEG can help distinguish between a

complex partial seizure with focal IED, and an absence type seizure with generalised IED.

EEG FINDINGS IN EPILEPSY SYNDROMES:

Many of the epilepsy syndromes associated with specific EEG features present in early life or

childhood (table 2). Some syndromes are well accepted; others are controversial or may not be

included in current International League Against Epilepsy (ILAE) classification systems because

of insufficient data. These classifications are work in progress, and will increasingly be informed
by developments in imaging, genetics, and molecular biology. In some individuals, the epilepsy

syndrome may only become apparent over time, necessitating regular electro-clinical appraisal.
CHAPTER 4

Lossless EEG Compression Algorithm

This EEG compression algorithm consists of two major parts, a predictor and an encoder. The predictor part has three steps. First, the raw data go through the first-stage prediction, which performs preliminary processing. Second, a fuzzy decision classifies the data into different cases. Finally, the voting prediction decides the optimal outcome of the predictor part. After receiving the outcome of the predictor, tri-entropy coding converts the predicted value into variable-length codes, as shown in Fig. 1. The details of each section are described below.

A. First-stage Prediction:
There are two outcomes of the first-stage prediction, diff1 and diff2, where diff1 is defined as the difference between X(n) and X(n-1), and diff2 as the difference between X(n-2) and X(n-3). These two values also indicate the slopes of the past signal, and both play an important role in the fuzzy decision section.
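As a minimal sketch (variable names are illustrative; the indexing follows the definitions above):

```python
def first_stage(x, n):
    """Compute the two slope values used by the fuzzy decision.

    x is the list of EEG samples; n is the index of the current sample.
    diff1 and diff2 approximate the recent and older slopes of the signal.
    """
    diff1 = x[n] - x[n - 1]      # slope of the most recent pair
    diff2 = x[n - 2] - x[n - 3]  # slope of the older pair
    return diff1, diff2

# Example: a short segment of integer EEG samples (invented values)
samples = [100, 104, 103, 108, 110]
d1, d2 = first_stage(samples, 4)
```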
B. Prediction function based on machine learning:

The main idea of the prediction is to create a function that uses several previous samples, X(n-1) to X(n-4), and generates an outcome PD1. In this study, the concept of supervised learning [4] was used to minimize the gap between PD1 and X(n), where A(n) is the parameter of the prediction function (1). In order to find the optimal parameter A(n), Error(n) was defined by (2) as the gap between the signal X(n) and PD.

To evaluate the performance, the absolute values of Error(n) were summed up, and the function with the smallest sum can be regarded as the better prediction function, because the compression rate of entropy coding is highly related to data concentration. Repeating these steps several times, six different functions were obtained. Each of them offers a different prediction pattern for the fluctuation of EEG signals. The six prediction functions are shown in equations (3)-(8).
C. Fuzzy Decision:

The fuzzy decision is based on the characteristics of the past three signals, which are classified into several modules. According to the value of X(n-1), the data can be classified into three classes: Low, Medium and High. Fig. 2 illustrates the first fuzzy decision rule. After that, the outcomes of the first-stage prediction, diff1 and diff2, can each be classified into the same three classes: Low, Medium and High. Fig. 3 illustrates the second fuzzy decision rule. According to all of these rules, the result of the first-stage prediction is sent to one of twenty-seven modules, M1 to M27.
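A minimal sketch of the classification, assuming illustrative threshold values (the actual membership bounds are defined by the rules in Figs. 2 and 3):

```python
def level(v, lo, hi):
    # Classify a value into Low / Medium / High using two illustrative
    # thresholds lo and hi (the real bounds come from Figs. 2-3).
    if v < lo:
        return 0  # Low
    if v <= hi:
        return 1  # Medium
    return 2      # High

def module_index(x_prev, diff1, diff2):
    # Combine the three 3-way classifications into one of 27 modules
    # (M1..M27), mirroring the L-L-Low .. H-H-High naming in the text.
    a = level(diff1, -5, 5)
    b = level(diff2, -5, 5)
    c = level(x_prev, 50, 150)
    return 9 * a + 3 * b + c + 1  # index in 1..27
```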
D. Voting Prediction:

In order to adaptively select the best function for the current situation, the first-stage prediction classifies data by the past two slope values, diff1 and diff2. Next, the most recent sample X(n-1) is considered and placed into three different sections, as shown in Fig. 3. After the data are assigned to one of the twenty-seven modules, from L-L-Low to H-H-High, one of the six functions is selected by the second-stage fuzzy decision module. This scheme provides an easy way for the decoding side to recognize which prediction function was selected.

For a better understanding, Fig. 4 shows an example of the voting process. Once data are sent to one of the twenty-seven modules, the system checks which function has the most votes and selects it as the optimal function to produce the outcome PD. If several functions have the same number of votes, the function with the smaller label has higher priority.

After executing all six prediction functions, the function with the smallest absolute error receives a vote, so the decoder can execute this process in reverse and figure out which function was selected. In the example of Fig. 4, function_4 is selected because it has the most votes; afterwards, function_2 receives a vote due to its minimum absolute error.
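The voting mechanism described above can be sketched as follows; the vote counts and error values below are invented for illustration:

```python
def select_function(votes):
    # Pick the function with the most votes in this module; on a tie,
    # the smaller function label wins (as described in the text).
    best = 0
    for i in range(1, len(votes)):
        if votes[i] > votes[best]:
            best = i
    return best

def update_votes(votes, errors):
    # After running all six functions, the one with the smallest
    # absolute error receives a vote, so the decoder can replay the
    # same process and stay in sync with the encoder.
    winner = min(range(len(errors)), key=lambda i: abs(errors[i]))
    votes[winner] += 1
    return winner

# Example in the spirit of Fig. 4: function_4 (index 3) leads the vote,
# then function_2 (index 1) earns a vote for the smallest |error|.
votes = [1, 2, 0, 4, 1, 0]
chosen = select_function(votes)                    # -> function_4
voted = update_votes(votes, [5, -1, 3, 2, 7, 4])   # -> function_2
```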

E. Tri-Entropy Coding:

The tri-entropy encoder includes two layers of Huffman coding and one layer of Golomb-Rice coding. Huffman coding is based on the frequency of occurrence and has been proved to significantly increase the compression rate. Golomb-Rice coding provides a simple way to increase the compression rate by selecting a proper divisor K. Golomb-Rice coding is a special form of Golomb coding in which the divisor can be written as K = 2^n. Meanwhile, due to the features of Golomb-Rice coding, PD2 should be transformed into a non-negative number PD' by equation (3).

Tri-entropy coding combines the advantages of both algorithms: the outcome from the predictor goes through two layers of Huffman coding and one layer of Golomb-Rice coding according to the frequency of occurrence. Fig. 5 illustrates the tri-entropy coding rules.
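A sketch of the Golomb-Rice layer, assuming the PD' transform is the standard even/odd interleaving of signed errors (the exact mapping in equation (3) is not reproduced in the text):

```python
def to_non_negative(e):
    # Assumed PD' mapping: interleave signed prediction errors into
    # non-negative integers: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice_encode(value, k):
    # Rice coding with divisor K = 2^k: the quotient is sent in unary
    # (terminated by a 0), followed by k binary remainder bits.
    q = value >> k
    r = value & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "0{}b".format(k))
    return bits

code = golomb_rice_encode(to_non_negative(-3), 2)
```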

Simulation Results:

To evaluate the performance of the proposed lossless compression algorithm against previous studies, the CHB-MIT Scalp EEG Database was selected as the test dataset. As shown in Table 1, most channels in this database achieve a high compression rate (CR). The average compression rate on the CHB-MIT Scalp EEG Database is 2.37. Table 1 compares the compression rates and bits per sample of the proposed lossless compression algorithm with previous studies. As the results show, the proposed lossless compression algorithm outperforms previous studies [1][2][3].
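As a quick illustration of how such a figure is computed, assuming CR is defined as the original bit count divided by the compressed bit count (the sample counts below are invented):

```python
def compression_ratio(original_bits, compressed_bits):
    # CR = (bits before compression) / (bits after compression);
    # e.g. 16-bit samples reduced to about 6.75 bits/sample gives CR ~ 2.37.
    return original_bits / compressed_bits

cr = compression_ratio(16 * 1000, 6750)
```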

Table 1: Comparison of compression rates and bits per sample in the proposed lossless compression algorithm with previous studies


Table 2 shows the energy consumption of the Bluetooth and Zigbee protocols with and without the proposed lossless compression algorithm. It is clear that after the data are compressed, the power consumption drops significantly.

CHAPTER 6

XILINX Software

Xilinx Tools is a suite of software tools used for the design of digital circuits implemented using a Xilinx Field Programmable Gate Array (FPGA) or Complex Programmable Logic Device (CPLD). The design procedure consists of (a) design entry, (b) synthesis and implementation of the design, (c) functional simulation and (d) testing and verification. Digital designs can be entered in various ways using the above CAD tools: using a schematic entry tool, using a hardware description language (HDL) such as Verilog or VHDL, or a combination of both. In this lab we will only use the design flow that involves the use of Verilog HDL.

The CAD tools enable you to design combinational and sequential circuits starting with
Verilog HDL design specifications. The steps of this design procedure are listed below:

1. Create Verilog design input file(s) using a template-driven editor.
2. Compile and implement the Verilog design file(s).
3. Create the test vectors and simulate the design (functional simulation) without using a PLD (FPGA or CPLD).
4. Assign input/output pins to implement the design on a target device.
5. Download the bitstream to an FPGA or CPLD device.
6. Test the design on the FPGA/CPLD device.

A Verilog input file in the Xilinx software environment consists of the following segments:

Header: module name, list of input and output ports.


Declarations: input and output ports, registers and wires.

Logic Descriptions: equations, state machines and logic functions.

End: endmodule

All your designs for this lab must be specified in the above Verilog input format. Note that the
state diagram segment does not exist for combinational logic designs.
Programmable Logic Device: FPGA

In this lab, digital designs will be implemented on the Basys2 board, which has a Xilinx Spartan-3E XC3S250E FPGA with a CP132 package. This FPGA part belongs to the Spartan family of FPGAs. These devices come in a variety of packages. We will be using devices packaged in a 132-pin package with the following part number: XC3S250E-CP132. This FPGA is a device with about 50K gates. Detailed information on this device is available at the Xilinx website.

Creating a New Project
Xilinx Tools can be started by clicking on the Project Navigator Icon on the Windows
desktop. This should open up the Project Navigator window on your screen. This window
shows (see Figure 1) the last accessed project.
Figure 1: Xilinx Project Navigator window (snapshot from Xilinx ISE software)

Opening a project

Select File->New Project to create a new project. This will bring up a new project window
(Figure 2) on the desktop. Fill up the necessary entries as follows:
Figure 2: New Project Initiation window (snapshot from Xilinx ISE software)

Project Name: Write the name of your new project.

Project Location: The directory where you want to store the new project. (Note: DO NOT specify the project location as a folder on the Desktop or a folder in the Xilinx\bin directory. Your H: drive is the best place to put it. The project location path must NOT have any spaces in it, e.g., C:\Nivash\TA\new lab\sample exercises\o_gate is NOT to be used.)

Leave the top level module type as HDL.

Example: If the project name were “o_gate”, enter “o_gate” as the project name and then
click “Next”.

Clicking on NEXT should bring up the following window:


Figure 3: Device and Design Flow of Project (snapshot from Xilinx ISE software)

For each of the properties given below, click on the ‘value’ area and select from the list
of values that appear.
Device Family: Family of the FPGA/CPLD used. In this laboratory we will be using the Spartan-3E FPGAs.
Device: The number of the actual device. For this lab you may enter XC3S250E (this can be found on the attached prototyping board).
Package: The type of package with the number of pins. The Spartan FPGA used in this lab is packaged in a CP132 package.
Speed Grade: The speed grade is "-4".
Synthesis Tool: XST [VHDL/Verilog].
Simulator: The tool used to simulate and verify the functionality of the design. The ModelSim simulator is integrated in Xilinx ISE; hence choose "Modelsim-XE Verilog" as the simulator, or even the Xilinx ISE Simulator can be used.
Then click on NEXT to save the entries.

All project files such as schematics, netlists, Verilog files, VHDL files, etc., will be stored in
a subdirectory with the project name. A project can only have one top level HDL source file
(or schematic). Modules can be added to the project to create a modular, hierarchical design
(see Section 9).

In order to open an existing project in Xilinx Tools, select File->Open Project to show the
list of projects on the machine. Choose the project you want and click OK.

Clicking on NEXT on the above window brings up the following window:

Figure 4: Create New source window (snapshot from Xilinx ISE


software)

If creating a new source file, Click on the NEW SOURCE.


Creating a Verilog HDL input file for a combinational logic design

In this lab we will enter a design using a structural or RTL description using the Verilog
HDL. You can create a Verilog HDL input file (.v file) using the HDL Editor available in the
Xilinx ISE Tools (or any text editor).

In the previous window, click on the NEW SOURCE

A window pops up as shown in Figure 4. (Note: “Add to project” option is selected by


default. If you do not select it then you will have to add the new source file to the project
manually.)
Figure 5: Creating Verilog-HDL source file (snapshot from Xilinx ISE software)

Select Verilog Module and in the “File Name:” area, enter the name of the Verilog source
file you are going to create. Also make sure that the option Add to project is selected so that
the source need not be added to the project again. Then click on Next to accept the entries.
This pops up the following window (Figure 5).
Figure 6: Define Verilog Source window (snapshot from Xilinx ISE software)

In the Port Name column, enter the names of all input and output pins and specify the Direction
accordingly. A Vector/Bus can be defined by entering appropriate bit numbers in the MSB/LSB
columns. Then click on Next> to get a window showing all the new source information (Figure
6). If any changes are to be made, just click on <Back to go back and make changes. If
everything is acceptable, click on Finish > Next > Next > Finish to continue.
Figure 7: New Project Information window(snapshot from Xilinx ISE software)

Once you click on Finish, the source file will be displayed in the sources window in the
Project Navigator (Figure 1).

If a source has to be removed, just right click on the source file in the Sources in Project window in the Project Navigator and select Remove. Then select Project -> Delete Implementation Data from the Project Navigator menu bar to remove any related files.

Editing the Verilog source file

The source file will now be displayed in the Project Navigator window (Figure 8). The source file window can be used as a text editor to make any necessary changes to the source file. All the input/output pins will be displayed. Save your Verilog program periodically by selecting File->Save from the menu. You can also edit Verilog programs in any text editor and add them to the project directory using "Add Copy Source".

Figure 8: Verilog Source code editor window in the Project Navigator (from Xilinx ISE
software)

Adding Logic in the generated Verilog Source code template:

A brief Verilog tutorial is available in Appendix-A; the language syntax and the construction of logic equations can be referred to there.

The Verilog source code template generated shows the module name, the list of ports and also the declarations (input/output) for each port. Combinational logic code can be added to the Verilog code after the declarations and before the endmodule line. For example, an output z of an OR gate with inputs a and b can be described as:

assign z = a | b;

Remember that the names are case sensitive.

Other constructs for modeling the logic function:

A given logic function can be modeled in many ways in Verilog. Here is another example in which the logic function is implemented as a truth table using a case statement:

module or_gate(a, b, z);
input a;
input b;
output z;

reg z;

always @(a or b)
begin
  case ({a, b})
    2'b00: z = 1'b0;
    2'b01: z = 1'b1;
    2'b10: z = 1'b1;
    2'b11: z = 1'b1;
  endcase
end
endmodule

Suppose we want to describe an OR gate. It can be done using the logic equation as shown in
Figure 9a or using the case statement (describing the truth table) as shown in Figure 9b.
These are just two example constructs to design a logic function. Verilog offers numerous
such constructs to efficiently model designs. A brief tutorial of Verilog is available in
Appendix-A.
Figure 9: OR gate description using assign statement (snapshot from Xilinx
ISE software)
Figure 10: OR gate description using case statement (from Xilinx ISE software)

Synthesis and Implementation of the Design

The design has to be synthesized and implemented before it can be checked for correctness by running functional simulation or downloaded onto the prototyping board. With the top-level Verilog file opened (this can be done by double-clicking that file) in the HDL editor window in the right half of the Project Navigator, and the view of the project being in the Module view, the Implement Design option can be seen in the Processes view. The Design Entry Utilities and Generate Programming File options can also be seen in the Processes view. The former can be used to include user constraints, if any, and the latter will be discussed later.
To synthesize the design, double click on the Synthesize Design option in the Processes
window.

To implement the design, double click the Implement Design option in the Processes window. It will go through steps like Translate, Map and Place & Route. If any of these steps could not be completed, or completed with errors, an X mark is placed in front of it; otherwise a tick mark is placed after each of them to indicate successful completion. If everything is done successfully, a tick mark is placed before the Implement Design option. If there are warnings, a warning mark appears in front of the option. One can look at the warnings or errors in the Console window present at the bottom of the Navigator window. Every time the design file is saved, all these marks disappear, asking for a fresh compilation.

Figure 11: Implementing the Design (snapshot from Xilinx ISE software)
The schematic diagram of the synthesized Verilog code can be viewed by double clicking View RTL Schematic under the Synthesize-XST menu in the Processes window. This is a handy way to debug the code if the output does not meet the specifications on the prototype board.

By double clicking it opens the top level module showing only input(s) and output(s) as shown
below.

Figure 12: Top Level Hierarchy of the design


By double clicking the rectangle, it opens the realized internal logic as shown
below.

Figure 13: Realized logic by the Xilinx ISE for the Verilog code

Functional Simulation of Combinational Designs


Adding the test vectors

To check the functionality of a design, we have to apply test vectors and simulate the circuit. In order to apply test vectors, a test bench file is written. Essentially, it supplies all the inputs to the module designed and checks the outputs of the module. Example: for the 2-input OR gate, the steps to generate the test bench are as follows:

In the Sources window (top left corner) right click on the file that you want to generate the
test bench for and select ‘New Source’
Provide a name for the test bench in the file name text box and select ‘Verilog test fixture’
among the file types in the list on the right side as shown in figure 11.

Figure 14: Adding test vectors to the design (snapshot from Xilinx ISE software)
Click on ‘Next’ to proceed. In the next window select the source file with which you want to
associate the test bench.

Figure 15: Associating a module to a testbench (snapshot from Xilinx ISE software)

Click on Next to proceed. In the next window click on Finish. You will now be provided with a template for your test bench. If it does not open automatically, click the radio button next to Simulation.

You should now be able to view your test bench template. The code generated would be something like this:
module o_gate_tb_v;

// Inputs
reg a;
reg b;

// Outputs
wire z;

// Instantiate the Unit Under Test (UUT)
o_gate uut (
  .a(a),
  .b(b),
  .z(z)
);

initial begin
  // Initialize inputs
  a = 0;
  b = 0;

  // Wait 100 ns for global reset to finish
  #100;

  // Add stimulus here
end

endmodule

The Xilinx tool detects the inputs and outputs of the module that you are going to test and assigns them initial values. In order to test the gate completely we shall provide all the different input combinations. '#100' is the time delay for which the input has to maintain the current value. After 100 units of time have elapsed, the next set of values can be assigned to the inputs. Complete the test bench as shown below:

module o_gate_tb_v;

// Inputs
reg a;
reg b;

// Outputs
wire z;

// Instantiate the Unit Under Test (UUT)
o_gate uut (
  .a(a),
  .b(b),
  .z(z)
);

initial begin
  // Initialize inputs
  a = 0;
  b = 0;
  // Wait 100 ns for global reset to finish
  #100;

  a = 0;
  b = 1;
  #100;

  a = 1;
  b = 0;
  #100;

  a = 1;
  b = 1;
  #100;
end

endmodule

Save your test bench file using the File menu.

Simulating and Viewing the Output Waveforms

Now under the Processes window (making sure that the test bench file in the Sources window is selected), expand the ModelSim Simulator tab by clicking on the + sign next to it. Double click on Simulate Behavioral Model. You will probably receive a compiler error. This is nothing to worry about: answer "No" when asked if you wish to abort simulation. This should cause ModelSim to open. Wait for it to complete execution. If you wish not to receive the compiler error, right click on Simulate Behavioral Model, select Process Properties, and mark the checkbox next to "Ignore Pre-Compiled Library Warning Check".
Figure 16: Simulating the design (snapshot from Xilinx ISE software)

Saving the simulation results

To save the simulation results, go to the waveform window of the ModelSim simulator and click on File -> Print to Postscript, then give the desired filename and location.

Note that by default, the waveform is "zoomed in" to the nanosecond level. Use the zoom controls to display the entire waveform.

Alternatively, a normal print-screen capture of the waveform window can be taken and subsequently stored in Paint.

Figure 17: Behavioral Simulation output Waveform (Snapshot from


ModelSim)

For taking printouts for the lab reports, convert the black background to white in Tools -> Edit
Preferences. Then click Wave Windows -> Wave Background attribute.

Figure 18: Changing Waveform Background in ModelSim

CHAPTER 4

XILINX Software

Xilinx Tools is a suite of software tools used for the design of digital circuits implemented
using Xilinx Field Programmable Gate Array (FPGA) or Complex Programmable Logic
Device (CPLD). The design procedure consists of (a) design entry, (b) synthesis and
implementation of the design, (c) functional simulation and (d) testing and verification. Digital
designs can be entered in various ways using the above CAD tools: using a schematic entry tool,
using a hardware description language (HDL) – Verilog or VHDL or a combination of both. In
this lab we will only use the design flow that involves the use of VerilogHDL.

The CAD tools enable you to design combinational and sequential circuits starting with Verilog
HDL design specifications. The steps of this design procedure are listed below:

1. Create Verilog design input file(s) using template driveneditor.


2. Compile and implement the Verilog designfile(s).
3. Create the test-vectors and simulate the design (functional simulation) without using a
PLD (FPGA orCPLD).
4. Assign input/output pins to implement the design on a targetdevice.
5. Download bitstream to an FPGA or CPLDdevice.
6. Test design on FPGA/CPLDdevice

A Verilog input file in the Xilinx software environment consists of the following segments:

Header: module name, list of input and output ports.


Declarations: input and output ports, registers and wires.

Logic Descriptions: equations, state machines and logic functions.


End: endmodule

All your designs for this lab must be specified in the above Verilog input format. Note that the
state diagram segment does not exist for combinational logic designs.

2. Programmable Logic Device:FPGA

In this lab digital designs will be implemented in the Basys2 board which has a Xilinx Spartan3E
–XC3S250E FPGA with CP132 package. This FPGA part belongs to the Spartan family of
FPGAs. These devices come in a variety of packages. We will be using devices that are
packaged in 132 pin package with the following part number: XC3S250E-CP132. This FPGA is
a device with about 50K gates. Detailed information on this device is available at the Xilinx
website.

3. Creating a NewProject
Xilinx Tools can be started by clicking on the Project Navigator Icon on the Windows desktop.
This should open up the Project Navigator window on your screen. This window shows (see
Figure 1) the last accessed project.
Figure 1: Xilinx Project Navigator window (snapshot from Xilinx ISE software)

3.1 Opening aproject

Select File->New Project to create a new project. This will bring up a new project window
(Figure 2) on the desktop. Fill up the necessary entries as follows:
Figure 2: New Project Initiation window (snapshot from Xilinx ISE software)

ProjectName: Write the name of your newproject

Project Location: The directory where you want to store the new project (Note: DO NOT
specify the project location as a folder on Desktop or a folder in the Xilinx\bin directory.
Your H: drive is the best place to put it. The project location path is NOT to have any spaces
in it eg: C:\Nivash\TA\new lab\sample exercises\o_gate is NOT to be used)

Leave the top level module type as HDL.

Example: If the project name were “o_gate”, enter “o_gate” as the project name and then click
“Next”.

Clicking on NEXT should bring up the following window:


Figure 3: Device and Design Flow of Project (snapshot from Xilinx ISE software)

For each of the properties given below, click on the ‘value’ area and select from the list of
values that appear.
o Device Family: Family of the FPGA/CPLD used. In this laboratory we will be
using the Spartan3EFPGA’s.
o Device: The number of the actual device. For this lab you may enterXC3S250E
(this can be found on the attached prototyping board)
o Package:Thetypeofpackagewiththenumberofpins.TheSpartanFPGAusedin this lab
is packaged in CP132package.
o Speed Grade: The Speed grade is“-4”.
o Synthesis Tool: XST[VHDL/Verilog]
o Simulator: The tool used to simulate and verify the functionality of the design.
Modelsim simulator is integrated in the Xilinx ISE. Hence choose “Modelsim-XE
Verilog” as the simulator or even Xilinx ISE Simulator can beused.
o Then click on NEXT to save theentries.
All project files such as schematics, netlists, Verilog files, VHDL files, etc., will be stored in a
subdirectory with the project name. A project can only have one top level HDL source file (or
schematic). Modules can be added to the project to create a modular, hierarchical design (see
Section 9).

In order to open an existing project in Xilinx Tools, select File->Open Project to show the list
of projects on the machine. Choose the project you want and click OK.

Clicking on NEXT on the above window brings up the following window:

Figure 4: Create New source window (snapshot from Xilinx ISE software)

If creating a new source file, Click on the NEW SOURCE.

3.1 Creating a Verilog HDL input file for a combinational logicdesign

In this lab we will enter a design using a structural or RTL description using the Verilog HDL.
You can create a Verilog HDL input file (.v file) using the HDL Editor available in the Xilinx
ISE Tools (or any text editor).
In the previous window, click on the NEW SOURCE

A window pops up as shown in Figure 4. (Note: “Add to project” option is selected by default. If
you do not select it then you will have to add the new source file to the project manually.)

Figure 5: Creating Verilog-HDL source file (snapshot from Xilinx ISE software)

Select Verilog Module and in the “File Name:” area, enter the name of the Verilog source file
you are going to create. Also make sure that the option Add to project is selected so that the
source need not be added to the project again. Then click on Next to accept the entries. This pops
up the following window (Figure 5).
Figure 6: Define Verilog Source window (snapshot from Xilinx ISE software)

In the Port Name column, enter the names of all input and output pins and specify the Direction
accordingly. A Vector/Bus can be defined by entering appropriate bit numbers in the MSB/LSB
columns. Then click on Next> to get a window showing all the new source information (Figure 6). If
any changes are to be made, just click on <Back to go back and make changes. If everything is
acceptable, click on Finish > Next > Next > Finish tocontinue.
Figure 7: New Project Information window(snapshot from Xilinx ISE software)

Once you click on Finish, the source file will be displayed in the sources window in the
Project Navigator (Figure 1).

If a source has to be removed, just right click on the source file in the Sources in Project
window in the Project Navigator and select Removein that. Then select Project -> Delete
Implementation Data from the Project Navigator menu bar to remove any relatedfiles.

3.2 Editing the Verilog sourcefile

The source file will now be displayed in the Project Navigator window (Figure 8). The source
filewindowcanbeusedasatexteditortomakeanynecessarychangestothesourcefile.All
The input/output pins will be displayed. Save your Verilog program periodically by selecting the
File->Save from the menu. You can also edit Verilog programs in any text editor and add them to the
project directory using “Add Copy Source”.

Figure 8: Verilog Source code editor window in the Project Navigator (from Xilinx ISE
software)

Adding Logic in the generated Verilog Source codetemplate:

A brief Verilog Tutorial is available in Appendix-A. Hence, the language syntax and
construction of logic equations can be referred to Appendix-A.

The Verilog source code template generated shows the module name, the list of ports and
also the declarations (input/output) for each port. Combinational logic code can be added
to the verilog code after the declarations and before the endmodule line.

For example, an output z in an OR gate with inputs a and b can be described as,
assign z = a | b;
Remember that the names are case sensitive.
Other constructs for modeling the logicfunction:

A given logic function can be modeled in many ways in verilog. Here is another example
in which the logic function, is implemented as a truth table using a case statement:

moduleor_gate(a,
b,z); input a;

input
b;
output
z;

reg z;

always @(a
or b) begin

case
({a,b}) 00:
z =1'b0;

01: z =1'b1;

10: z =1'b1;

11: z =1'b1;

endcase

end
en
dmodule
Suppose we want to describe an OR gate. It can be done using the logic equation as shown in
Figure 9a or using the case statement (describing the truth table) as shown in Figure 9b. These
are just two example constructs to design a logic function. Verilog offers numerous such
constructs to efficiently model designs. A brief tutorial of Verilog is available in Appendix-A.

Figure 9: OR gate description using assign statement (snapshot from Xilinx ISE
software)
Figure 10: OR gate description using case statement (from Xilinx ISE software)

4. Synthesis and Implementation of theDesign

The design has to be synthesized and implemented before it can be checked for correctness,
by running functional simulation or downloaded onto the prototyping board. With the top-
level Verilog file opened (can be done by double-clicking that file) in the HDL editor
window in the right half of the Project Navigator, and the view of the project being in the
Module view , the implement design option can be seen in the process view. Design entry
utilities and Generate Programming File options can also be seen in the process view. The
former can be used to include user constraints, if any and the latter will be discussed later.
To synthesize the design, double click on the Synthesize Design option in the Processes
window.

To implement the design, double-click the Implement Design option in the Processes
window. It will go through steps like Translate, Map and Place & Route. If any of these
steps fails or completes with errors, an X mark is placed in front of it; otherwise a tick
mark is placed after each of them to indicate successful completion. If everything is done
successfully, a tick mark will be placed before the Implement Design option. If there are
warnings, a warning mark appears in front of the option. One can look at the warnings or
errors in the Console window present at the bottom of the Navigator window. Every time the
design file is saved, all these marks disappear, asking for a fresh compilation.
Figure 11: Implementing the Design (snapshot from Xilinx ISE software)

The schematic diagram of the synthesized Verilog code can be viewed by double-clicking
View RTL Schematic under the Synthesize-XST menu in the Processes window. This is a
handy way to debug the code if the output does not meet the specifications on the prototype
board.

Double-clicking it opens the top-level module, showing only the input(s) and output(s), as
shown below.
Figure 12: Top Level Hierarchy of the design

Double-clicking the rectangle opens the realized internal logic, as shown below.
Figure 13: Logic realized by the Xilinx ISE for the Verilog code

5. Functional Simulation of Combinational Designs


5.1 Adding the test vectors

To check the functionality of a design, we have to apply test vectors and simulate the
circuit. In order to apply test vectors, a test bench file is written. Essentially, it will supply
all the inputs to the designed module and will check its outputs. Example:
For the 2-input OR gate, the steps to generate the test bench are as follows:
In the Sources window (top left corner), right-click on the file that you want to generate
the test bench for and select 'New Source'.

Provide a name for the test bench in the file name text box and select 'Verilog Test
Fixture' among the file types in the list on the right side, as shown in Figure 14.

Figure 14: Adding test vectors to the design (snapshot from Xilinx ISE software)
Click on ‘Next’ to proceed. In the next window select the source file with which you
want to associate the test bench.

Figure 15: Associating a module to a testbench (snapshot from Xilinx ISE software)

Click on Next to proceed. In the next window click on Finish. You will now be provided
with a template for your test bench. If it does not open automatically, click the radio
button next to Simulation.
You should now be able to view your test bench template. The code generated would be
something like this:
module o_gate_tb_v;

  // Inputs
  reg a;
  reg b;

  // Outputs
  wire z;

  // Instantiate the Unit Under Test (UUT)
  o_gate uut (
    .a(a),
    .b(b),
    .z(z)
  );

  initial begin
    // Initialize Inputs
    a = 0;
    b = 0;

    // Wait 100 ns for global reset to finish
    #100;

    // Add stimulus here

  end

endmodule
The Xilinx tool detects the inputs and outputs of the module that you are going to test and assigns
them initial values. In order to test the gate completely, we shall provide all the different input
combinations. '#100' is the time delay for which the inputs maintain their current values.
After 100 units of time have elapsed, the next set of values can be assigned to the inputs.
Complete the test bench as shown below:

module o_gate_tb_v;

  // Inputs
  reg a;
  reg b;

  // Outputs
  wire z;

  // Instantiate the Unit Under Test (UUT)
  o_gate uut (
    .a(a),
    .b(b),
    .z(z)
  );

  initial begin
    // Initialize Inputs
    a = 0;
    b = 0;

    // Wait 100 ns for global reset to finish
    #100;
    a = 0;
    b = 1;

    // Wait 100 ns
    #100;
    a = 1;
    b = 0;

    // Wait 100 ns
    #100;
    a = 1;
    b = 1;

    // Wait 100 ns
    #100;

  end

endmodule

Save your test bench file using the File menu.
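Beyond visually inspecting waveforms, a test bench can also check the outputs automatically. The following sketch is an optional extension of the generated template, not something ISE produces for you; the instantiated module name o_gate matches the examples above, while the task name check is a hypothetical helper introduced here:

```verilog
// Self-checking test bench for the 2-input OR gate.
module o_gate_tb_check;
  reg  a, b;
  wire z;

  // Instantiate the Unit Under Test (UUT)
  o_gate uut (.a(a), .b(b), .z(z));

  // Apply one input combination, wait 100 time units,
  // then compare z against the expected OR-gate output.
  task check(input va, input vb, input expected);
    begin
      a = va; b = vb;
      #100;
      if (z !== expected)
        $display("FAIL: a=%b b=%b z=%b (expected %b)", a, b, z, expected);
      else
        $display("PASS: a=%b b=%b z=%b", a, b, z);
    end
  endtask

  initial begin
    // Walk the full truth table of the OR gate.
    check(0, 0, 0);
    check(0, 1, 1);
    check(1, 0, 1);
    check(1, 1, 1);
    $finish;
  end
endmodule
```

With this style, a PASS/FAIL line appears in the simulator console for every input combination, so errors are caught even without opening the waveform window.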

5.2 Simulating and Viewing the Output Waveforms

Now, under the Processes window (making sure that the test bench file in the Sources
window is selected), expand the ModelSim Simulator tab by clicking on the + sign next
to it. Double-click on Simulate Behavioral Model. You will probably receive a compiler
error. This is nothing to worry about: answer "No" when asked if you wish to abort
simulation. This should cause ModelSim to open; wait for it to complete execution. If you
wish not to receive the compiler error, right-click on Simulate Behavioral Model, select
Process Properties, and mark the checkbox next to "Ignore Pre-Compiled Library Warning
Check".

Figure 16: Simulating the design (snapshot from Xilinx ISE software)

5.3 Saving the simulation results

To save the simulation results, go to the waveform window of the ModelSim simulator and
click File -> Print to Postscript, then give the desired filename and location.

Note that by default, the waveform is "zoomed in" to the nanosecond level. Use the
zoom controls to display the entire waveform.

Alternatively, a normal print-screen capture of the waveform window can be taken and
subsequently stored in Paint.
Figure 17: Behavioral Simulation output Waveform (Snapshot from
ModelSim)
For taking printouts for the lab reports, convert the black background to white in Tools ->
Edit Preferences. Then click Wave Windows -> Wave Background attribute.

Figure 18: Changing Waveform Background in ModelSim


SIMULATION AND RESULTS
CONCLUSION

A hardware-oriented lossless EEG compression algorithm based on a fuzzy decision, a semi-
supervised learning prediction function, voting prediction and a tri-entropy coding technique is
proposed. The proposed algorithm decreases data storage space and network bandwidth, and
lowers power consumption. On the CHB-MIT Scalp EEG Database, this study achieved a
compression rate of 2.37, which is 0.85% better than [3]. The proposed algorithm was developed
for hardware implementation, making it suitable for realization on chip and for use in a WBSN
system.
