
INTRODUCTION

Recent years have seen a rising interest in wearable sensors, and today several devices are
commercially available for personal health care, fitness, and activity awareness. In addition to
the niche recreational fitness arena catered to by current devices, researchers have also
considered clinical applications of such technologies, in remote health monitoring systems for
long-term recording, management, and clinical access to patients' physiological information.
Based on current technological trends, one can readily imagine a time in the near future when
your routine physical examination is preceded by a two- to three-day period of continuous
physiological monitoring using inexpensive wearable sensors.

Over this interval, the sensors would continuously record signals correlated with your key
physiological parameters and relay the resulting data to a database linked with your health
records. When you show up for your physical examination the doctor has available not only
conventional clinic/lab-test based static measurements of your physiological and metabolic state,
but also the much richer longitudinal record provided by the sensors. Using the available data,
and aided by decision-support systems that also have access to a large corpus of observation data
for other individuals, the doctor can make a much better prognosis for your health and
recommend treatment, early intervention, and life-style choices that are particularly effective in
improving the quality of your health. Such a disruptive technology could have a transformative
impact on global healthcare systems, drastically reducing healthcare costs and improving the
speed and accuracy of diagnoses. Technologically, the vision presented in the preceding paragraph has
been feasible for a few years now. Yet, wearable sensors have, thus far, had little influence on
the current clinical practice of medicine.

In this paper, we focus particularly on the clinical arena and examine the opportunities afforded
by available and upcoming technologies and the challenges that must be addressed in order to
allow integration of these into the practice of medicine. Most proposed frameworks for remote
health monitoring leverage a three-tier architecture: a Wireless Body Area Network (WBAN) of
wearable sensors as the data acquisition unit, a communication and networking tier, and a
service layer. For instance, one proposed system recruits wearable sensors to measure various
physiological parameters such as blood pressure and body temperature. The sensors transmit the
gathered information to a gateway server through a Bluetooth connection. The gateway server
turns the data into an Observation and Measurement file and stores it on a remote server for later
retrieval by clinicians through the Internet. Utilizing similar cloud-based medical data storage,
another health monitoring system lets medical staff access the stored data online through a
content service application. Targeting a specific medical application, WANDA, an end-to-end
remote health monitoring and analytics system, has been presented for the supervision of
patients at high risk of heart failure. In addition to the technology for data
gathering, storage and access, medical data analysis and visualization are critical components of
remote health monitoring systems.

Accurate diagnosis and monitoring of a patient's medical condition rely on analysis of medical
records containing various physiological characteristics over a long period of time. Dealing with
data of high dimensionality in both time and quantity makes the analysis task frustrating and
error prone for clinicians. Although data mining and visualization techniques had previously
been proposed as a solution to this challenge, these methods have only recently gained attention
in remote health monitoring systems. While the advent of electronic remote health monitoring
systems has promised to revolutionize conventional health care methods, integrating the IoT
paradigm into these systems can further increase intelligence, flexibility and interoperability.

A device utilizing the IoT scheme is uniquely addressed and identifiable at any time and
anywhere through the Internet. IoT-based devices in remote health monitoring systems are not
only capable of conventional sensing tasks but can also exchange information with each other
and automatically connect to and exchange information with health institutes through the
Internet, significantly simplifying set-up and administration tasks. As has been exemplified in
the literature, such systems can provide services such as an automatic alarm to the nearest
healthcare institute in the event of a critical incident involving a supervised patient.

MODERN BIOMEDICAL SENSORS

A conceptual model of a biomedical sensor and its important design elements are shown in Fig.
1. This model presents a complete biomedical sensor scheme in which, in addition to a sensing
section of a biomedical sensor, microfluidic, signal processing and packaging units are included.
Simultaneous analysis and design of all these elements are essential for the development of
marketable biomedical sensors. The principle of operation of such a biomedical sensor can be
inferred by following its sensing path. A measurand is introduced to a biomedical sensor via a
sample delivery system or by bringing the sensor to the patient, as with implantable or
indwelling biomedical sensor probes. Next, the measurand passes through a preprocessing
section, such as a semi-permeable membrane, which performs an initial selective screening of
possible interfering factors. After that, the measurand is exposed to the sensing element, a
biologically active substance which is selective to the measurand of interest (i.e., DNA,
antibodies, enzymes, or cellular components). When a measurand interacts with the sensing
element, microscopic physical, chemical, and/or biochemical changes are produced. These
microscopic changes cause the macroscopic physical changes in the sensing element, which are
converted by the physical transducer into an output electric signal. The electric signal is
conditioned, processed, and displayed. The processing can include such important sensor
features as self-calibration, self-diagnostics, and advanced pattern recognition analyses. All these
functional design elements can be enclosed in the sensor package that provides measurement
integrity to the device. In recent years significant progress has been made in all the areas listed
above; however, the most widely researched areas have been the sensing elements and sensing
mechanisms. Below is a short overview of the current state of these areas, with an emphasis on
optical, electrochemical and acoustic biomedical sensing technologies.
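To make the conditioning and processing stage described above more concrete, the following Python fragment is a minimal, purely illustrative sketch: a raw transducer signal is smoothed and then mapped to measurand units. The window size and the linear calibration coefficients are assumptions for illustration, not values from any particular sensor.

```python
from collections import deque

def moving_average(samples, window=5):
    """Simple smoothing filter for the raw transducer output (window size is an assumption)."""
    buf, out = deque(maxlen=window), []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

def calibrate(volts, gain=2.5, offset=-0.1):
    """Map conditioned voltages to measurand units via an assumed linear calibration."""
    return [gain * v + offset for v in volts]

raw = [0.40, 0.42, 0.39, 0.41, 0.95, 0.43, 0.40]   # hypothetical transducer output (volts)
smoothed = moving_average(raw)
readings = calibrate(smoothed)
print(readings)
```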

Fig. 1: Internet of Things interaction between doctors and patients


Internet of Things

The Internet of Things is a novel paradigm shift in the IT arena. The phrase "Internet of Things",
commonly shortened to IoT, is coined from two words: "Internet" and "Things". The Internet is a
global system of interconnected computer networks that use the standard Internet protocol suite
(TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of
millions of private, public, academic, business, and government networks, of local to global
scope, that are linked by a broad array of electronic, wireless and optical networking
technologies [3]. Today more than 100 countries are linked into exchanges of data, news and
opinions through the Internet. According to Internet World Statistics, as of December 31, 2011
there were an estimated 2,267,233,742 Internet users worldwide (accessed on 06/06/2013 from
the URL http://www.webopedia.com/TERM/I/Internet.html). This signifies that 32.7% of the
world's total population is using the Internet. The Internet is even going into space through
Cisco's Internet Routing in Space (IRIS) program in the coming few years (accessed on
10/05/2012: http://www.cisco.com/web/strategy/government/space-routing.html).

Coming to the "Things", these can be any object or person distinguishable in the real world.
Everyday objects include not only the electronic devices we encounter and use daily and
technologically advanced products such as equipment and gadgets, but also "things" that we do
not normally think of as electronic at all, such as food, clothing and furniture; materials, parts
and equipment; merchandise and specialized items; landmarks, monuments and works of art;
and all the miscellany of commerce, culture and sophistication [4]. That means things can be
both living things, such as people, animals (cow, calf, dog, pigeon, rabbit, etc.) and plants
(mango tree, jasmine, banyan and so on), and nonliving things, such as a chair, fridge, tube light,
curtain or plate, or any home appliance or industrial apparatus. So at this point, things are real
objects in the physical or material world.

Definitions

There is no unique definition of the Internet of Things that is accepted by the worldwide
community of users. In fact, many different groups, including academicians, researchers,
practitioners, innovators, developers and corporate people, have defined the term, although its
initial use has been attributed to Kevin Ashton, an expert on digital innovation. What all of the
definitions have in common is the idea that the first version of the Internet was about data
created by people, while the next version is about data created by things. A good working
definition of the Internet of Things is: "An open and comprehensive network of intelligent
objects that have the capacity to auto-organize, share information, data and resources, reacting
and acting in face of situations and changes in the environment." The Internet of Things is
maturing and continues to be the latest, most hyped concept in the IT world.

Over the last decade the term Internet of Things (IoT) has attracted attention by projecting the
vision of a global infrastructure of networked physical objects, enabling anytime, anyplace
connectivity for anything and not only for anyone [4]. The Internet of Things can also be
considered a global network which allows communication between human and human, human
and things, and things and things, that is, anything in the world, by providing a unique identity to
each and every object [5]. IoT describes a world where just about anything can be connected and
can communicate in a more intelligent fashion than ever before.

Most of us think about “being connected” in terms of electronic devices such as servers,
computers, tablets, telephones and smart phones. In what’s called the Internet of Things, sensors
and actuators embedded in physical objects—from roadways to pacemakers—are linked through
wired and wireless networks, often using the same Internet Protocol (IP) that connects the Internet. These
networks churn out huge volumes of data that flow to computers for analysis. When objects can
both sense the environment and communicate, they become tools for understanding complexity
and responding to it swiftly.

What’s revolutionary in all this is that these physical information systems are now beginning to
be deployed, and some of them even work largely without human intervention. The “Internet of
Things” refers to the coding and networking of everyday objects and things to render them
individually machine-readable and traceable on the Internet [6]-[11]. Much existing content in
the Internet of Things has been created through coded RFID tags and IP addresses linked into an
EPC (Electronic Product Code) network [12].

Time Series
Accessed from the URL dated on 24/3/2013: http://postscapes.com/internet-of-things-history.
1999: The term Internet of Things is coined by Kevin Ashton, Executive Director of the Auto-ID
Center at the Massachusetts Institute of Technology (MIT)

1999: Neil Gershenfeld speaks about IoT principles for the first time in his book "When Things
Start to Think"

1999: The MIT Auto-ID Lab is founded by Kevin Ashton, David Brock and Sanjay Sarma; it
helped to develop the Electronic Product Code

2000: LG announces its first Internet refrigerator plans

2002: The Ambient Orb, created by David Rose and others in a spin-off from the MIT Media
Lab, is released into the wild, with NY Times Magazine naming it one of the Ideas of the Year

2003-2004: RFID is deployed on a massive scale by the US Department of Defense in its Savi
program and by Wal-Mart in the commercial world

2005: The UN's International Telecommunication Union (ITU) publishes its first report on the
Internet of Things

2008: Recognition by the EU, and the first European IoT conference is held

2008: A group of companies launched the IPSO Alliance to promote the use of IP in networks of
“Smart Objects” and to enable the Internet of Things

2008: The FCC votes 5-0 to approve opening the use of the 'white space' spectrum

2008-2009: The IoT is born, according to Cisco's Business Solutions Group

2008: US National Intelligence Council listed the IoT as one of the 6 “Disruptive Civil
Technologies” with potential impacts on US interests out to 2025

2010: Chinese Premier Wen Jiabao calls the IoT a key industry for China and announces plans to
make major investments in the Internet of Things

2011: IPv6 public launch - the new protocol allows for
340,282,366,920,938,463,463,374,607,431,768,211,456 (2^128) addresses.

Requirements
For successful implementation of the Internet of Things (IoT), the prerequisites are (a) dynamic
resource demand, (b) real-time needs, (c) exponential growth of demand, (d) availability of
applications, (e) data protection and user privacy, (f) efficient power consumption of
applications, (g) execution of applications near to end users, and (h) access to an open and
interoperable cloud system. According to another author, three components are required for
seamless Internet of Things (IoT) computing: (a) hardware, composed of sensors, actuators, IP
cameras, CCTV and embedded communication hardware; (b) middleware, providing on-demand
storage and computing tools for data analytics using cloud and Big Data analytics; and (c)
presentation.

ITU Architecture

According to the recommendations of the International Telecommunication Union (ITU), the
network architecture of the Internet of Things consists of

(a) The Sensing Layer

(b) The Access Layer

(c) The Network Layer

(d) The Middleware Layer

(e) The Application Layer

These layers are analogous to the Open Systems Interconnection (OSI) reference model in
networking and data communication.

IoT Forum Architecture

The IoT Forum says that the Internet of Things architecture is basically categorized into three
parts: Applications, Processors and Transportation.
Kun Han, Shurong Liu, Dacheng Zhang and Ying Han's (2012) Architecture

The traditional IoT is formed of three layers. The bottom is the perception layer, whose function
is cognizing and collecting information about objects. The middle is the transportation layer,
which consists of OFC, mobile phone networks, fixed telephone networks, broadcasting
networks, and closed IP data networks for each carrier. Finally, the top is the application layer,
where abundant applications run. Typical applications in this layer include smart traffic,
precision agriculture, intelligent logistics, smart industry, environment protection, mining
monitoring, remote nursing, safety defense, smart government, etc.

Technologies

The Internet of Things [15] was initially inspired by members of the RFID community, who
referred to the possibility of discovering information about a tagged object by browsing an
Internet address or database entry that corresponds to a particular RFID or Near Field
Communication [16] tag. In the research paper "Research and application on the smart home
based on component technologies and Internet of Things", the key technologies of IoT listed are
RFID, sensor technology, nanotechnology and intelligent embedded technology. Among them,
RFID is the foundation and networking core of the construction of the Internet of Things [17].
The Internet of Things (IoT) enables users to bring physical objects into the sphere of the cyber
world. This was made possible by different tagging technologies such as NFC, RFID and 2D
barcodes, which allow physical objects to be identified and referred to over the Internet [18].
IoT, which integrates sensor technology and radio frequency technology, is a ubiquitous network
built on the omnipresent hardware resources of the Internet that links Internet content and
objects together. It is also a new wave in the IT industry, arising from the application of
computing, communication networks and global roaming technology. In addition to
sophisticated computer and communication network technologies, it also involves many new
supporting technologies of the Internet of Things, such as information collection technology,
remote communication technology, remote information transmission technology, sea-measures
information intelligence analysis and control technology, etc. [19].

Radio Frequency Identification (RFID)

Radio Frequency Identification (RFID) is a system that transmits the identity of an object or
person wirelessly, in the form of a serial number, using radio waves [20]. The first use of an
RFID-style device occurred around the Second World War in Britain, where it was used for
Identification Friend or Foe (IFF) in 1948. RFID technology was later advanced at the Auto-ID
Center at MIT in 1999. RFID plays an important role in IoT for solving identification issues of
the objects around us in a cost-effective manner [5]. The technology is classified into three
categories based on the method of power supply provision in the tags: active RFID, passive
RFID and semi-passive RFID. The main components of RFID are the tag, reader, antenna,
access controller, software and server. It is reliable, efficient, secure, inexpensive and accurate.
RFID has an extensive range of wireless applications such as distribution, tracing, patient
monitoring, military applications, etc. [21].

Internet Protocol (IP)


Internet Protocol (IP) is the primary network protocol used on the Internet; it was developed in
the 1970s. IP is the principal communications protocol in the Internet protocol suite for relaying
datagrams across network boundaries. Two versions of the Internet Protocol are in use: IPv4 and
IPv6. Each version defines an IP address differently. Because of its prevalence, the generic term
IP address typically still refers to the addresses defined by IPv4. There are five classes of
available IP ranges in IPv4: Class A, Class B, Class C, Class D and Class E, while only A, B,
and C are commonly used. The current protocol provides for about 4.3 billion IPv4 addresses,
while IPv6 will significantly augment the availability to 85,000 trillion addresses [22]. IPv6 is
the 21st century Internet Protocol, supporting around 2^128 addresses.
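As a rough illustration of why IPv6 matters for uniquely addressing IoT devices, the short Python sketch below uses the standard ipaddress module to compare the size of the whole IPv4 space with a single IPv6 /64 subnet; the example prefixes and the device identifier are arbitrary assumptions.

```python
import ipaddress

# The entire IPv4 space versus one documentation-prefix IPv6 subnet (2001:db8::/64).
ipv4_space = ipaddress.ip_network("0.0.0.0/0")
ipv6_subnet = ipaddress.ip_network("2001:db8::/64")

print(f"IPv4 addresses total:      {ipv4_space.num_addresses:,}")   # about 4.3 billion
print(f"Addresses in one /64 IPv6: {ipv6_subnet.num_addresses:,}")  # 2**64

# A hypothetical per-device address derived from a device identifier.
device_id = 0x1A2B
device_addr = ipaddress.ip_address(int(ipaddress.ip_address("2001:db8::")) + device_id)
print("Device address:", device_addr)
```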

Electronic Product Code (EPC)

The Electronic Product Code (EPC) is a 64-bit or 96-bit code electronically recorded on an
RFID tag, intended as an improvement over the barcode system. An EPC code can store
information about the type of EPC, the unique serial number of the product, its specifications,
manufacturer information, etc. EPC was developed by the Auto-ID Center at MIT in 1999. The
EPCglobal organisation [Wikipedia, "EPCglobal", 2010], which is responsible for
standardization of Electronic Product Code (EPC) technology, created the EPCglobal Network
[Wikipedia, "EPCglobal Network", 2010] for sharing RFID information. It has four components,
namely the Object Naming Service (ONS), EPC Discovery Service (EPCDS), EPC Information
Services (EPCIS) and EPC Security Services (EPCSS).
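As a simplified illustration of how an EPC partitions its bits among identifier fields, the sketch below packs a header, manager number, object class and serial number into a 96-bit integer. The field widths follow the common 96-bit general-identifier layout (8 + 28 + 24 + 36 bits), but the values are made up and no real EPC encoding rules (filter values, partitions, check logic) are applied.

```python
def encode_gid96(header: int, manager: int, obj_class: int, serial: int) -> int:
    """Pack four fields into a 96-bit EPC-style integer (illustrative layout: 8+28+24+36 bits)."""
    assert header < 2**8 and manager < 2**28 and obj_class < 2**24 and serial < 2**36
    return (header << 88) | (manager << 60) | (obj_class << 36) | serial

def decode_gid96(epc: int):
    """Recover the four fields from the packed 96-bit value."""
    return (epc >> 88) & 0xFF, (epc >> 60) & 0xFFFFFFF, (epc >> 36) & 0xFFFFFF, epc & 0xFFFFFFFFF

epc = encode_gid96(header=0x35, manager=123456, obj_class=7890, serial=42)  # made-up values
print(f"EPC as hex: {epc:024x}")
print("Decoded fields:", decode_gid96(epc))
```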

Barcode

A barcode is just a different way of encoding numbers and letters, using a combination of bars
and spaces of varying width. Behind Bars [23] serves its original intent of being descriptive but
is not critical. In The Bar Code Book, Palmer (1995) acknowledges that there are alternative
methods of data entry. The Quick Response (QR) Code is the trademark for a type of matrix
barcode first designed for the automotive industry in Japan. Barcodes are optical machine-
readable labels attached to items that record information related to the item. Recently, the QR
Code system has become popular outside the automotive industry due to its fast readability and
greater storage capacity compared to standard barcodes. There are three types of barcodes:
alphanumeric, numeric and two-dimensional. Barcodes are designed to be machine readable.
Usually they are read by laser scanners, but they can also be read using a camera.
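To make the idea of 2-D barcodes concrete, the snippet below generates a QR code image using the third-party Python package qrcode (an assumption: it must be installed, e.g. via pip, together with Pillow); the encoded URL is just an example.

```python
import qrcode  # third-party package: pip install qrcode[pil]

# Encode an arbitrary example string/URL into a QR code image.
img = qrcode.make("https://example.org/patient/12345")
img.save("patient_qr.png")  # PNG readable by any QR scanner or camera
print("QR code written to patient_qr.png")
```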

Wireless Fidelity (Wi-Fi)

Wireless Fidelity (Wi-Fi) is a networking technology that allows computers and other devices to
communicate over a wireless signal. Vic Hayes has been named the father of Wireless Fidelity.
The precursor to Wi-Fi was invented in 1991 by NCR Corporation in Nieuwegein in the
Netherlands. The first wireless products were brought to market under the name WaveLAN,
with speeds of 1 Mbps to 2 Mbps. Today, nearly pervasive Wi-Fi delivers high-speed Wireless
Local Area Network (WLAN) connectivity to millions of offices, homes, and public locations
such as hotels, cafes, and airports. The integration of Wi-Fi into notebooks, handhelds and
Consumer Electronics (CE) devices has accelerated its adoption to the point where it is nearly a
default feature in these devices [24]. The technology covers any type of WLAN product
supporting any of the IEEE 802.11 standards, including dual-band, 802.11a, 802.11b, 802.11g
and 802.11n. Nowadays entire cities are becoming Wi-Fi corridors through wireless access points.

Bluetooth

Bluetooth is an inexpensive, short-range radio technology that eliminates the need for
proprietary cabling between devices such as notebook PCs, handheld PCs, PDAs, cameras, and
printers, with an effective range of 10 to 100 meters. Bluetooth devices generally communicate
at less than 1 Mbps, and Bluetooth uses the IEEE 802.15.1 specification. The project named
"Bluetooth" was started by the Ericsson Mobile Communications company in 1994. It is used for
the creation of Personal Area Networks (PANs). A set of Bluetooth devices sharing a common
channel for communication is called a piconet. A piconet can accommodate 2 to 8 devices at a
time for data sharing, and that data may be text, pictures, video and sound. The Bluetooth
Special Interest Group comprises more than 1000 companies, including Intel, Cisco, HP, Aruba,
Ericsson, IBM, Motorola and Toshiba.

ZigBee

ZigBee is one of the protocols developed to enhance the features of wireless sensor networks.
ZigBee technology was created by the ZigBee Alliance, which was founded in 2001.
Characteristics of ZigBee are low cost, low data rate, relatively short transmission range,
scalability, reliability and a flexible protocol design. It is a low-power wireless network protocol
based on the IEEE 802.15.4 standard [25]. ZigBee has a range of around 100 meters and a
bandwidth of 250 kbps, and the topologies in which it works are star, cluster tree and mesh. It is
widely used in home automation, digital agriculture, industrial controls, medical monitoring and
power systems.

Near Field Communication (NFC)

Near Field Communication (NFC) is a set of short-range wireless technologies operating at
13.56 MHz, typically requiring a distance of 4 cm or less. NFC makes life easier and more
convenient for consumers around the world by making it simpler to perform transactions,
exchange digital content, and connect electronic devices with a touch. It allows intuitive
initialization of wireless networks and is complementary to Bluetooth and 802.11, which have
longer-distance capabilities, whereas NFC operates at distances of up to about 10 cm. It also
works in dirty environments, does not require line of sight, and offers an easy and simple
connection method. NFC was first developed by Philips and Sony. The data exchange rate
nowadays is approximately 424 kbps, and power consumption during data reading is under
15 mA.

Actuators

An actuator is a component that converts energy into motion, which means actuators drive
motion in mechanical systems. It takes hydraulic fluid, electric current or some other source of
power. Actuators can create linear, rotary or oscillatory motion. They typically cover short
distances, up to about 30 feet, and generally communicate at less than 1 Mbps. Actuators are
typically used in manufacturing or industrial applications. There are three types of actuators: (1)
electrical, i.e., AC and DC motors, stepper motors and solenoids; (2) hydraulic, which use
hydraulic fluid to actuate motion; and (3) pneumatic, which use compressed air to actuate
motion. All three types of actuators are very much in use today. Among these, electric actuators
are the most commonly used type. Hydraulic and pneumatic systems allow for increased force
and torque from a smaller motor.

Wireless Sensor Networks (WSN)


A WSN is a wireless network consisting of spatially distributed autonomous devices that use
sensors to cooperatively monitor physical or environmental conditions, such as temperature,
sound, vibration, pressure, motion or pollutants, at different locations (Wikipedia). It is formed
by hundreds or thousands of motes that communicate with each other and pass data along from
one to another. A wireless sensor network is an important element in the IoT paradigm. Sensor
nodes may not have global IDs because of the large amount of overhead and the large number of
sensors. WSN-based IoT has received remarkable attention in many areas, such as the military,
homeland security, healthcare, precision agriculture monitoring, manufacturing, habitat
monitoring, and forest fire and flood detection [26]. Sensors mounted on a patient's body can
monitor the response to medication, so that doctors can measure the effects of the medicines [27].

Artificial Intelligence (AI)

Artificial Intelligence, in the sense of ambient intelligence, refers to electronic environments that
are sensitive and responsive to the presence of people. In an ambient intelligence world, devices
work in concert to support people in carrying out their everyday activities in an easy, natural
way, using the information and intelligence hidden in the network of connected devices. It is
characterized by the following properties: (1) embedded: many networked devices are integrated
into the environment; (2) context aware: these devices can recognize you and your situational
context; (3) personalized: they can be tailored to your needs; (4) adaptive: they can change in
response to you; and (5) anticipatory: they can anticipate your desires without conscious
mediation.

Literature review

In this field, research has so far been carried out separately on big data and on the Internet of
Things with biomedical applications. Chronic conditions such as diabetes and thyroid disorders
require periodic patient visits; they are not fully curable at once, and as they worsen, continuous
counselling and monitoring are required. Having to take an appointment and visit the hospital
every month for such diseases is a difficult issue, and for the sake of the patient's health this
burden needs to be reduced. One approach is to arrange digital health centres between the patient
and the doctor: the patient is checked with digital devices, and if the status of the disease has not
changed, the existing medicines are continued; otherwise, the prescription is changed according
to the status. A major remaining challenge is fixing the patient's appointment with the doctor.

In [10], the authors describe devices that merge with other devices and are connected to the
human body. These communicate with a local network for data storage and processing;
maintaining a software team to handle such an environment is very expensive, and small-scale
hospitals cannot bear this kind of cost.

In [11], the authors developed an interactive doctor-to-patient model based on wireless sensor
networks. However, it also requires a data storage and maintenance unit, and the evaluation
strategies used for analyzing and storing information are quite complicated; for an ordinary
doctor with only basic IT knowledge this is difficult. Research efforts in this direction have
expanded over the previous years, and in the present European R&D programme Horizon 2020,
for instance, the topical needs in "Health, Demographic Change and Wellbeing" play a critical
part in the structure of the Societal Challenges pillar [12].

The most recent initiatives with a comparable orientation, and previously within the framework
of FP7, have a noteworthy focus on the development of this type of ICT-based solutions to
promote self-management of health and disease. In some cases the projects concentrate on
remote monitoring and self-management of illnesses, for example CVD and COPD, offering
solutions that allow patients to avoid the need to go to the hospital for a check-up, among other
advantages [13-14].

Other projects focus on promoting healthy practices in people without health issues, with a more
preventive character [15].

Biomedical informatics usage needs of Big Data technology

The growth in pharmaceuticals, covering all areas from drug development to personalized
diagnostics and therapeutics, reflects the achievements of the most demanding disciplines
involved: molecular biology and data science. Digital data is the key element of biomedical
informatics. In particular, the ongoing discoveries in molecular biology and their direct impact
on the understanding of human diseases will have far-reaching consequences for the whole
healthcare system, influencing diagnosis, prevention, and treatment.

Advances in biomedical engineering will bring about new diagnostics. Pervasive care [16], in
contrast to present hospital-based medicine, intends to deliver health services beyond hospitals
and into people's everyday lives. More significantly, it supports individualization by providing
the means to obtain personal health data that are difficult to obtain inside hospitals. Under the
p-Health model, the gathering of health data needs to start as early as possible, which can be
from birth or even before conception. The types of data need to span different spatial scales of
the human body, from the genetic and molecular level up to the body-system level.

Data of various modalities must be captured by an assortment of acquisition instruments, e.g.,
sensing and imaging devices, and under various circumstances, e.g., during daily activities as
well as sporadic clinical visits. This body of data will eventually help address health issues at
various levels, from the individual to the worldwide. In short, the p-Health model [17] answers
two questions: what decisions ought to be made, and how the decisions ought to be made. The
former relates to preventive, participatory, and preemptive measures, the latter to personalized,
predictive, and pervasive actions.

EXISTING SYSTEM

The recent advancements in technology and the availability of the Internet make it possible to
connect various devices that can communicate with each other and share data. The Internet of
Things (IoT) is a new concept that allows users to connect various sensors and smart devices to
collect real-time data from the environment. However, it has been observed that a comprehensive
platform is still missing in the e-Health and m-Health architectures to use smartphone sensors to
sense and transmit important data related to a patient’s health. In this paper, our contribution is
twofold. Firstly, we critically evaluate the existing literature, which discusses the effective ways
to deploy IoT in the field of medical and smart health care. Secondly, we propose a new
semantic model for patients' e-Health. The proposed model, named 'k-Healthcare', makes use of
four layers: the sensor layer, the network layer, the Internet layer and the services layer. All
layers cooperate with each other effectively and efficiently to provide a platform for accessing
patients' health data using smart phones.

INTRODUCTION
In the new era of communication and technology, the explosive growth of electronic devices,
smart phones and tablets which can communicate over wired or wireless links has made them a
fundamental tool of daily life. The next generation of the connected world is the Internet of
Things (IoT), which connects devices, sensors, appliances, vehicles and other "things". The
things or objects may include radio-frequency identification (RFID) tags, mobile phones,
sensors, actuators and much more. With the help of IoT, we can connect anything, access it from
anywhere and at any time, and efficiently access any service and information about any object.
The aim of IoT is to extend the benefits of the Internet with remote control ability, data sharing,
constant connectivity and so on. Using embedded sensors which are always on and collecting
data, all devices would be tied to local and global networks. The term IoT, often called the
Internet of Everything, was first introduced by Kevin Ashton in 1999, who envisioned a system
where every physical object is connected using the Internet via ubiquitous sensors. The IoT
technology is nowadays used in different fields of life including digital oilfields, home and
building automation, intelligent grids, digital medical treatment, intelligent transportation, etc.
RFIDs use radio frequency tags to identify real objects, and an RFID sensor transfers data
between a reader and an object, which is identified, tracked and categorized.

RFID can use two different types of tags: Active and Passive. The IoT technology can provide a
large amount of data about humans, objects, time and space. Combining current Internet
technology and IoT provides large scope for innovative services based on low-cost sensors and
wireless communication. IPv6 and cloud computing promote the integration of the Internet and
IoT, providing more possibilities for data collection, data processing, port management and
other new services. Every object which connects to IoT
requires a unique address or identification which can be accomplished with the help of IPv6.
There are so many people in the world whose health may suffer because they do not have proper
access to hospitals and health monitoring. Due to the latest technology, small wireless solutions
which are connected to IoT can make it possible to monitor patients remotely instead of visiting
the physical hospital. A variety of sensors which are attached to the body of a patient can be used
to get health data securely, and the collected data can be analyzed (by applying some relevant
algorithms) and sent to the server using different transmission media (3G/4G with base stations
or Wi-Fi which is connected to the Internet). All the medical professionals can access and view
the data and make decisions accordingly to provide services remotely.

With the passage of time and development of society, people recognize that health is the basic
condition for promoting economic development. The existing public health service and its
sustainability have been greatly challenged over time. Worldwide, governments and industry are
investing billions of dollars in the development of IoT computing, and
some of these projects include China’s National IoT Plan by Ministry of Industry and IT,
European Research Cluster on IoT (IERC), Japan’s u-Strategy, UK’s Future Internet Initiatives
and Italian National Project of Netergit. IoT applications in the field of medicine and healthcare
will allow patients to benefit from the best medical assistance, the shortest treatment time, low
medical costs and the most satisfactory service.
With the help of IoT we can easily capture the process of production, anti-counterfeiting and the
tracing of medical equipment delivery. We can also manage medical information with the help
of IoT, including sample identification and medical record identification. We can construct
systems for continuous patient monitoring, remote consultation, care of critically-ill patients and
health care management platforms, using different techniques and equipment which can sense,
capture, measure and transmit information from the body or from things. In June 2013, the U.S.
Department of Veterans Affairs (VA) decided to deploy RFID in American hospitals to improve
care and reduce the costs of treatment.
Combining sensors and microcontrollers to obtain accurate measurements, and monitoring and
analyzing the health status, increases the power of IoT in healthcare. Measurements can include
blood pressure, heart rate, oxygen saturation in blood, glucose levels and body motion. To work
effectively, smart sensors and microcontroller components need several capabilities: low-power
operation, integrated precision-analog capabilities and GUIs. These keep the device footprint
small and extend battery life to make the device usable, allow the sensors to achieve high
accuracy at low cost, improve usability and present the information in a readable manner.
Sensors and other devices thus provide end-to-end connectivity in healthcare.
Most users today use smartphones with built-in sensors. The existing research in IoT, and more
specifically in the field of e-health, does not make use of smartphone sensors to monitor
patients' health. The motivation of this paper is to use existing smartphone sensors to monitor
e-health. In this paper, we propose a novel model named k-Healthcare for IoT. The proposed
model provides a platform for physical sensors, which are connected directly to the patient's
smartphone to obtain data at run time. These data are processed and stored in cloud storage. The
stored data can be accessed later by practitioners and medical staff to observe and monitor
patients' health.
Access Mechanism
To obtain the patient data and process it for further use, different researchers use different
mechanisms. Some of them use RFID, few use 3G and 4G, and some use WLAN.
RFID
Amendola et al. conduct a survey on RFID for body-centric systems, which can obtain
information about the surrounding area of a user's living environment (temperature, humidity
and other gases). The survey covers passive devices which use the UHF band. They discuss
different types of sensors: temperature tags (threshold temperature sensors, continuous
temperature sensors, digital data loggers) and body-centric RFID (wearable RFID tags,
implantable RFID tags). They also work on data processing and human behavior analysis, such
as tracking human motion inside rooms, gesture recognition, and remote monitoring and control
of the overnight living environment. Finally, they point out some issues which are still open and
challenging. RFID has been used by many authors to sense and collect data.
2G/3G/4G
These are mobile communication standards which allow IoT users to access the Internet
wirelessly using different devices, e.g., mobile phones, tablets and other portable electronic
devices. Some proposed models use 3G, and a few use 4G.
WLAN
A WLAN is a computer network that can help connect different IoT devices using a wireless
distribution method within a small geographical area such as a home, office building or lab. The
WLAN is used in some studies to connect the devices and transfer the data to Internet cloud
storage.
Applications
Fang et al. [6] explain the applications of IoT, such as constant real-time monitoring, the anti-
counterfeiting of medical equipment and medication, medical refuse information management,
medical information management, medical emergency management, patient information
management, medication storage management, blood information management, telemedicine
and mobile medical care, and health management. Dongxin and Tao summarize the concept of
IoT, including the structure and implementation of IoT functions. An overview of IoT
architecture is provided in their study, which consists of three layers: the Perceptual Layer, the
Network Layer, and the Application Layer. They note that telemedicine is of two kinds:
interactive and non-interactive consultation. They introduce some advantages of telemedicine
and also propose two applications of IoT in health, i.e., clinical care and remote real-time ECG
monitoring. C.E. Turcu and C.O. Turcu worked on how to integrate multi-agents, RFID and an
IoT platform to provide efficient healthcare with reduced medical errors. They discuss some
systems and projects which have already been successfully deployed (like SAPHIRE, K4CARE,
CMDS, BRIDGE), and give a general overview of IoT in healthcare. Some important details
seem to be missing in their study: they did not show how to sense data, which type of sensor was
used, or which protocol was used to send data.

COMPARISON AND CONTRAST ANALYSIS


Investigating the relevant studies reported in the referenced papers, we notice that some
researchers proposed new architectures and models for IoT, which help to deploy IoT in the field
of medicine and healthcare. It is also noticed that some of the authors follow IEEE and other
standards to implement their proposed IoT models to provide remote monitoring and emergency
aid, while other authors simply explain the applications of IoT in healthcare. We present a
performance comparison and analysis of different IoT structures. We evaluate the proposed IoT
models based on parameters such as provision of emergency aid, technology used, standard
followed, support for multiple devices and artificial intelligence implementation.
A. Emergency Aid:
When using IoT in the field of medicine and healthcare, the focus should be on the data and on
the provision of support in emergencies. The system must generate alarms and inform the
patients and consultants.
B. Technology:
IoT supports different and modern technologies like RFID, WSN, and 3G and 4G networks.
Using these technologies, one can obtain data related to a patient's health and send it to a remote
server for further processing and storage. We can compare different systems and architectures
on the basis of these technologies.
C. Standards:
IoT supports different standards and protocols, e.g., IEEE 802.11/b/g/n, IEEE 802.15.4, IEEE
802.15.6, ZigBee, WBAN, and HL7. Using standards and protocols, we can determine the
distance, accuracy and time a system takes to complete its work.
D. Multi device support:
We can compare different models and systems on the basis of multi-device support. Efficient
systems support many devices such as RFID sensors, body sensors, smart phone sensors, tablets,
and wearable devices. It should be noted that the existing research in IoT mostly focuses on
RFID sensors and external wearable sensors (as shown in Table 1). There is no formal use of
smart phone sensors in the existing literature. Our proposed model particularly makes use of
built-in smartphone sensors to monitor patient health data.
The k-Healthcare model
The k-Healthcare model proposed in this paper for efficient deployment of IoT in the field of
medicine and healthcare consists of four layers.
A. Sensor Layer
The bottom layer of the model is called the sensor layer, which is the heart of the model.
Different sensors lie in this layer, e.g., the RTX-4100, a wireless two-lead EKG, Arduino and
Raspberry Pi boards, blood oxygen sensors, pulse oximetry, and smart phone sensors. RFID
performs object identification automatically by reading the tag which is attached to objects.
Passive RFID is mostly used: it has no power/battery requirement, taking power from the RFID
reader and becoming active to communicate with the reader. The main idea of WSN is to get
data from the environment and pass the data through the network to centralized storage. Modern
smartphones have certain sensors built in by default, e.g., accelerometer, gyroscope, proximity,
barometer, temperature, humidity, gesture, etc., which makes them easier to use (as no external
sensors are needed). In k-Healthcare we use these built-in sensors to get data and send the data
to remote data storage for further processing. The communication between the sensor layer and
the network layer is done using IEEE 802.11/b/g/n, IEEE 802.15.4, IEEE 802.15.6, ZigBee, etc.
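As an illustration of how a reading gathered at the sensor layer might be handed off upward, the sketch below packages a hypothetical smartphone heart-rate/accelerometer sample as JSON and posts it over HTTP using the third-party requests library. The endpoint URL, field names and response code are all assumptions for illustration, not part of the k-Healthcare specification.

```python
import time
import requests  # third-party: pip install requests

# Hypothetical endpoint of the remote storage service (assumption, not defined by k-Healthcare).
API_URL = "https://example.org/api/v1/readings"

def post_reading(patient_id: str, heart_rate: int, accel: tuple) -> bool:
    """Send one sensor sample from the smartphone to the remote data store."""
    payload = {
        "patient_id": patient_id,
        "timestamp": time.time(),
        "heart_rate_bpm": heart_rate,
        "acceleration_g": {"x": accel[0], "y": accel[1], "z": accel[2]},
    }
    resp = requests.post(API_URL, json=payload, timeout=5)
    return resp.status_code == 201  # assume the server replies 201 Created on success

if __name__ == "__main__":
    ok = post_reading("patient-001", heart_rate=72, accel=(0.01, -0.02, 0.98))
    print("stored" if ok else "failed")
```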
B. Network Layer
The network layer plays the key role in communication, connecting the devices to the WAN
using different protocols (TCP/IP), technologies and standards such as 3G, 4G, ADSL, DSLAM,
and routers. The sensor device sends the data to a connected device, e.g., a smart phone or RFID
reader, which is connected to the home gateway or the Internet via Ethernet/wireless. The
gateway device then sends the data to a particular server for further processing and updating of
the databases. This layer also supports different protocols for communication such as IEEE
802.16 for 3G, IEEE 802.16m for 4G, IEEE 802.20, and ITU G.992.1 - ITU G.992.5.
C. Internet Layer
This layer provides the functionality of data storage and management. For this purpose, we use
cloud storage. The cloud storage provides the facility to store data in logical pools. The physical
storage may be one server or multiple servers, typically owned and managed by a hosting
company. The cloud provides different services and algorithms on demand, such as cloud
storage, cloud data store, cloud SQL, BigQuery, RESTful services for iOS, Android and
JavaScript, and machine learning algorithms.
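A minimal sketch of the storage side is shown below, assuming Flask as the web framework (an assumption; the paper does not prescribe one). It accepts the JSON readings posted by the sensor layer and keeps them in an in-memory list standing in for the cloud data store; the route paths mirror the hypothetical client example above.

```python
from flask import Flask, request, jsonify  # third-party: pip install flask

app = Flask(__name__)
READINGS = []  # in-memory stand-in for the cloud data store

@app.route("/api/v1/readings", methods=["POST"])
def store_reading():
    """Accept one JSON sensor reading and append it to the store."""
    reading = request.get_json(force=True)
    READINGS.append(reading)
    return jsonify({"stored": len(READINGS)}), 201

@app.route("/api/v1/readings/<patient_id>", methods=["GET"])
def get_readings(patient_id):
    """Return all stored readings for a given patient (services-layer access)."""
    return jsonify([r for r in READINGS if r.get("patient_id") == patient_id])

if __name__ == "__main__":
    app.run(port=8080)
```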
D. Services Layer
This layer provides direct access to the data for professional medical facilities and stakeholders
such as doctors, emergency centers, hospitals, and medicine supply chains. The doctor can easily
manage patients, view their medication history, and provide remote support in case of
emergency. The patient can also access the data through the provided interface any time,
anywhere. This layer supports different protocols and techniques such as HTTP, HTTPS,
JavaScript, RESTful web services, etc.
Summary:
m-Health and e-Health provide different services remotely, such as prevention and diagnosis of
disease, risk assessment, monitoring of patient health, education and treatment of users. This is
why e-Health and m-Health are being widely accepted in society. The emerging state-of-the-art
tools and technologies of IoT can be really beneficial for e-Health and m-Health. Different
e-Health and m-Health architectures for IoT have been developed which handle emergency
situations efficiently. However, the existing e-Health and m-Health architectures do not use
smart phone sensors to sense and transmit important data related to patients' health. We
proposed a novel framework for e-Health and m-Health which makes use of smart phone sensors
and body sensors to obtain, process and transmit patient health-related data to centralized
storage in the cloud. This stored data can be retrieved by patients and other stakeholders in the
future. Our proposed model, named k-Healthcare, makes use of four layers which work closely
together to provide efficient storing, processing and retrieval of valuable data. We have provided
a comparative analysis of different architectures and applications of IoT which can be used in
e-Health and m-Health. Ongoing work focuses on the actual development and deployment of
k-Healthcare. One direction is the design of a software or smartphone application which will
obtain the data directly from the sensors and process it automatically. Furthermore, we will
investigate the security and privacy issues of k-Healthcare.
Proposed system:

Encryption of data using multiple, independent encryption schemes (“multiple encryption”) has
been suggested in a variety of contexts, and can be used, for example, to protect against partial
key exposure or cryptanalysis, or to enforce threshold access to data. Most prior work on this
subject has focused on the security of multiple encryption against chosen-plaintext attacks, and
has shown constructions secure in this sense based on the chosen-plaintext security of the
component schemes. Subsequent work has sometimes assumed that these solutions are also
secure against chosen-ciphertext attacks when component schemes with stronger security
properties are used. Unfortunately, this intuition is false for all existing multiple encryption
schemes. Here, in addition to formalizing the problem of chosen-ciphertext security for multiple
encryption, we give simple, efficient, and generic constructions of multiple encryption schemes
secure against chosen-ciphertext attacks (based on any component schemes secure against such
attacks) in the standard model. We also give a more efficient construction from any
(hierarchical) identity-based encryption scheme secure against selective identity chosen plaintext
attacks. Finally, we discuss a wide range of applications for our proposed schemes.
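To make the idea of layering independent encryption schemes concrete, the sketch below cascades two unrelated authenticated ciphers from the Python cryptography package (AES-GCM inside, ChaCha20-Poly1305 outside). This is only an illustration of the general "encrypt the ciphertext again under an independent key" idea; it is not the CCA-secure construction proposed above, and the nonce-handling convention is an assumption made for the example.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

# Two independent keys for two independent component schemes.
inner_key = AESGCM.generate_key(bit_length=256)
outer_key = ChaCha20Poly1305.generate_key()

def multi_encrypt(plaintext: bytes) -> bytes:
    """Encrypt with AES-GCM, then encrypt the result again with ChaCha20-Poly1305."""
    n1, n2 = os.urandom(12), os.urandom(12)
    inner_ct = AESGCM(inner_key).encrypt(n1, plaintext, None)
    outer_ct = ChaCha20Poly1305(outer_key).encrypt(n2, n1 + inner_ct, None)
    return n2 + outer_ct  # prepend outer nonce; inner nonce travels inside the outer layer

def multi_decrypt(blob: bytes) -> bytes:
    """Peel the outer layer, then the inner layer."""
    n2, outer_ct = blob[:12], blob[12:]
    inner_blob = ChaCha20Poly1305(outer_key).decrypt(n2, outer_ct, None)
    n1, inner_ct = inner_blob[:12], inner_blob[12:]
    return AESGCM(inner_key).decrypt(n1, inner_ct, None)

record = b"patient vitals: HR=72, SpO2=98"
assert multi_decrypt(multi_encrypt(record)) == record
```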

Motivation for our Work Chosen-ciphertext security (“CCA security”) is as much of a concern in
each of the above settings as it is in the case of standard encryption. One might hope to achieve
CCA security for any of the above settings by simply “plugging in” an appropriate CCA-secure
multiple encryption scheme. However (with one recent exception; see below), we are unaware of
any previous work which considers chosen-ciphertext security for multiple encryption. To be
clear: there has been much work aimed at giving solutions for specific applications using specific
number-theoretic assumptions: for example, in the context of CCA-secure threshold encryption,
broadcast encryption, and key-insulated encryption. However, this type of approach suffers from
at least two drawbacks: first, it does not provide generic solutions, but instead only provides
solutions based on very specific assumptions. Second, the derived solutions are application-
dependent, and must be constantly “re-invented” and modified each time one wants to apply the
techniques to a new domain. Although solutions based on specific assumptions are often more
efficient than generic solutions, it is important to at least be aware that a generic solution exists
so that its efficiency can be directly compared with a solution based on specific assumptions.
Indeed, we argue in Section 6 that for some applications, a generic solution may be roughly as
efficient as (or may offer reasonable efficiency tradeoffs as compared to) the best currently-
known solutions based on specific assumptions. Making the problem even more acute is that
currently-known schemes for multiple encryption are demonstrably insecure against chosen-
ciphertext attacks (this holds even with respect to the weakest definition considered here). Zhang
et al. have also recently noticed this problem, and appear to be the first to have considered
chosen-ciphertext security for multiple encryption. We compare our work to theirs in the
following section.

Multiple encryption algorithm for IoT-based patient care:

Encryption is the art of protecting information by converting it into an unreadable form, the
ciphertext. This information can only be recovered by those who possess the secret key that can
decipher (or decrypt) the message into its original form. Cryptography is a vital part of securing
private data and preventing it from being stolen. In addition to concealing the real information
stored in the data, cryptography serves other critical security requirements for data, including
integrity, non-repudiation, authentication and confidentiality. Today cryptography is not just
limited to protecting sensitive military information, but is one of the critical components of the
security policy of any organization and is considered an industry standard for providing
information trust, security, electronic financial transactions and control of access to resources. In
the Second World War, for instance, cryptography played an imperative role that gave the allied
forces the upper hand and helped them win the war: they were able to break the Enigma cipher
machine which the Germans used to encrypt their secret military communications. Plaintext is
the original data to be transmitted or stored, which is readable and understandable either by a
computer or by a person, whereas the ciphertext is unreadable: neither machine nor human can
make out any meaning from it until it is decrypted. A cryptosystem is a system or product that
provides encryption and decryption. A cryptosystem uses an encryption algorithm, which
determines how simple or complex the encryption process will be. In encryption, a key is a piece
of information which determines the particular conversion of plaintext to ciphertext, or vice
versa during decryption. The larger the key space, the more possible keys can be created. The
strength of an encryption algorithm depends on the length of the key, the secrecy of the key, the
initialization vector, and how they all work together. Depending on the algorithm and the length
of the key, the strength of encryption can be measured. There are two encryption/decryption key
types. In some encryption technologies, when two end points need to communicate with one
another via encryption, they must use the same algorithm and, most of the time, the same key; in
other encryption technologies, they must use different but related keys for encryption and
decryption purposes. Cryptography algorithms are either symmetric algorithms, which use
symmetric keys (also called secret keys), or asymmetric algorithms, which use asymmetric keys
(also called public and private keys).

ADVANCED ENCRYPTION STANDARD

The principal drawback of 3DES (which was recommended in 1999 in Federal Information
Processing Standard FIPS PUB 46-3 as the new standard, with a 168-bit key) is that the
algorithm is relatively sluggish in software. A secondary drawback is the use of a 64-bit block
size. For reasons of both efficiency and security, a larger block size is desirable.

In 1997, the National Institute of Standards and Technology (NIST) issued a call for proposals
for a new Advanced Encryption Standard (AES), which should have security strength equal to or
better than 3DES and significantly improved efficiency. In addition, NIST also specified that
AES must be a symmetric block cipher with a block length of 128 bits and support for key
lengths of 128, 192, and 256 bits.
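For reference, the sketch below encrypts one 128-bit block with AES-128 using the Python cryptography package (an assumed dependency). ECB mode on a single block is used only to expose the bare block cipher for illustration; it should not be used to protect real data.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)              # 128-bit AES key
block = b"exactly16bytes!!"       # one 128-bit plaintext block

# ECB on a single block exposes the raw AES permutation (illustration only).
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = enc.update(block) + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
assert dec.update(ciphertext) + dec.finalize() == block
print(ciphertext.hex())
```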

In the first round of evaluation, 15 proposed algorithms were accepted. A second round
narrowed the field to 5 algorithms. NIST completed its evaluation process and published a final
standard (FIPS PUB 197) in November 2001. NIST selected Rijndael as the proposed AES
algorithm. The two researchers behind AES are Dr. Joan Daemen and Dr. Vincent Rijmen from
Belgium.

AES Evaluation

Security – a minimum key size of 128 bits provides sufficient security

Cost – AES should have high computational efficiency


THE AES CIPHER

A number of AES parameters depend on the key length (Table 5.3). In the description in this
section, we assume a key length of 128 bits.

OVERALL STRUCTURE OF AES

The input to the encryption and decryption algorithms is a single 128-bit block; in FIPS PUB
197, this block is depicted as a square matrix of bytes. This block is copied into the State array,
which is modified at each stage of encryption or decryption. After the final stage, State is copied
to an output matrix.
OVERALL STRUCTURE OF AES (Cont 1)

Similarly, the 128-bit key is depicted as a square matrix of bytes. This key is expanded into an
array of key schedule words; each word is 4 bytes and the total key schedule is 44 words for the
128-bit key (Figure 5.2b). Ordering of bytes within a matrix is by column.

Before delving into details, we can make several comments about overall AES structure:

1. This cipher is not a Feistel structure.


2. The key that is provided as input is expanded into an array of 44 words (32 bits each),
w[i]. Four distinct words (128 bits) serve as a round key for each round; these are indicated
in Fig. 5.1
3. Four different stages are used, 1 of permutation and 3 of substitution:
- Substitute bytes – Uses an S-box to perform a byte-to-byte substitution of the block
- Shift rows – A simple permutation
- Mix columns – A substitution that makes use of arithmetic over GF(2^8)
- Add round key – A simple bitwise XOR of the current block with a portion of the
expanded key
4. The structure is quite simple.
OVERALL STRUCTURE OF AES (Cont 2)

5. Only the Add Round Key stage uses the key. Any other stage is reversible without
knowledge of the key.
6. The Add Round Key is a form of Vernam cipher and by itself would not be formidable.
The other 3 stages together provide confusion, diffusion, and nonlinearity, but by
themselves would provide no security because they do not use the key. We can view the
cipher as alternating operations of XOR encryption (Add Round Key), followed by
scrambling of the block.
7. Each stage is easily reversible
8. Decryption uses the same keys but in the reverse order. Decryption is not identical to
encryption
9. At each horizontal point (e.g., the dashed line) in Figure 5.1, State is the same for both
encryption and decryption
10. The final round of both encryption and decryption consists of only 3 stages; this is a
consequence of the particular structure of AES.
As was mentioned earlier, AES uses arithmetic in the finite field GF(2^8), with the irreducible
polynomial m(x) = x^8 + x^4 + x^3 + x + 1.
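
As a small illustration of this arithmetic, the following sketch (Python, not taken from the
source text) multiplies a field element by x and reduces by m(x) whenever a degree-8 term
appears:

    def xtime(a):
        # multiply a GF(2^8) element by x; reduce by m(x) = x^8 + x^4 + x^3 + x + 1 (0x11B)
        a <<= 1
        if a & 0x100:          # a degree-8 term appeared: subtract (XOR) m(x)
            a ^= 0x11B
        return a & 0xFF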

Substitute Byte Transformation. Forward and Inverse Transformation

The Forward substitute byte transformation, called SubBytes, is a simple table lookup
(Figure 5.4a).


AES defines a 16x16 matrix of byte values, called an S-box (Table 5.4a), that contains a
permutation of all possible 256 8-bit values. Each individual byte of State is mapped into a
new byte in the following way: The leftmost 4 bits are used as a row value and the rightmost
4 bits are used as a column value. These row and column values serve as indexes into the S-
box to select a unique 8-bit output value. For example, the hexadecimal value {95}
references row 9, column 5 of the S-box, which contains the value {2a}.
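
A minimal sketch of this lookup (Python; SBOX is assumed to be a flat 256-entry table, such
as the one constructed at the end of this subsection):

    def sub_byte(b, SBOX):
        row = (b >> 4) & 0x0F      # leftmost 4 bits select the row
        col = b & 0x0F             # rightmost 4 bits select the column
        return SBOX[16 * row + col]

    # e.g. sub_byte(0x95, SBOX) should return 0x2A, matching the {95} -> {2a} example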



The S-box is constructed in the following fashion:

1. Initialize the S-box with the byte values in ascending order row by row. Thus, the value
of the byte at row x, column y is {xy}

2. Map each byte in the S-box to its multiplicative inverse in the finite field GF(2^8); the
value {00} is mapped to itself.
3. Consider that each byte in the S-box consists of 8 bits labeled (b7, b6, b5, b4, b3, b2, b1, b0).
Apply the following transformation to each bit of each byte in the S-box:

b'_i = b_i ⊕ b_(i+4) mod 8 ⊕ b_(i+5) mod 8 ⊕ b_(i+6) mod 8 ⊕ b_(i+7) mod 8 ⊕ c_i        (5.1)

where c_i is the i-th bit of the byte c with the value {63}, that is,
(c7 c6 c5 c4 c3 c2 c1 c0) = (01100011). The prime (') indicates that the variable is to be updated
by the value on the right. The AES standard depicts this transformation in matrix form as
follows:

B0’ 1 0 0 0 1 1 1 1 B0 1
B1’ 1 1 0 0 0 1 1 1 B1 1
B2’ 1 1 1 0 0 0 1 1 B2 0
B3’ = 1 1 1 1 0 0 0 1 x B3 + 0 (5.2)
B4’ 1 1 1 1 1 0 0 0 B4 0
B5’ 0 1 1 1 1 1 0 0 B5 1
B6’ 0 0 1 1 1 1 1 0 B6 1
B7’ 0 0 0 1 1 1 1 1 B7 0
Each element in the product matrix is the bitwise XOR of products of elements of one row and
one column. Further, the final addition shown in (5.2) is a bitwise XOR.
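
Putting steps 1–3 together, the construction can be sketched as follows (Python; a rough
illustration assuming the representation above, not an excerpt from FIPS PUB 197):

    def gf_mul(a, b, mod=0x11B):
        # multiply two bytes in GF(2^8) modulo m(x) = x^8 + x^4 + x^3 + x + 1
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= mod
            b >>= 1
        return r

    def gf_inv(a):
        # multiplicative inverse in GF(2^8); {00} maps to itself
        if a == 0:
            return 0
        for x in range(1, 256):
            if gf_mul(a, x) == 1:
                return x

    def affine(b, c=0x63):
        # bit-level transformation (5.1)
        out = 0
        for i in range(8):
            bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8)) ^
                   (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (c >> i)) & 1
            out |= bit << i
        return out

    SBOX = [affine(gf_inv(x)) for x in range(256)]
    assert SBOX[0x95] == 0x2A     # the {95} -> {2a} example above
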
DATA ENCRYPTION STANDARD

It was adopted in 1977 by the National Bureau of Standards (NBS), now National Institute of
Standards and Technology (NIST), as Federal Information Processing Standard 46 (FIPS PUB
46). In 1971, an IBM team under Horst Feistel's leadership developed the LUCIFER algorithm,
operating on 64-bit blocks with a 128-bit key. Later, an IBM team led by Walter Tuchman and
Carl Meyer revised LUCIFER to make it more resistant to cryptanalysis, but reduced the key
size to 56 bits. In 1973, NBS issued a request for proposals for a national cipher standard. IBM
submitted results of its Tuchman-Meyer project. This was by far the best algorithm proposed and
was adopted in 1977 as Data Encryption Standard. In 1994, NIST reaffirmed DES for federal use
for another 5 years. In 1999, NIST issued a new version of its standard (FIPS PUB 46-3) that
indicated that DES should only be used for legacy systems and that triple DES be used.

DES ENCRYPTION

A 32-bit swap exchanges the left and right 32-bit halves obtained after Round 16, producing the
preoutput. Finally, the preoutput passes through a permutation IP^-1, the inverse of the initial
permutation IP, to produce the 64-bit ciphertext. The right-hand portion of Fig. 3.7 shows the
way in which the 56-bit key is used.
For each of 16 rounds a subkey Ki is produced by the combination of a left circular shift and a
permutation. The permutation function is the same for each round.
INITIAL PERMUTATION AND ITS INVERSE

It operates on the 64-bit input block.

IP
58 50 42 34 26 18 10 2
60 52 44 36 28 20 12 4
62 54 46 38 30 22 14 6
64 56 48 40 32 24 16 8
57 49 41 33 25 17 9 1
59 51 43 35 27 19 11 3
61 53 45 37 29 21 13 5
63 55 47 39 31 23 15 7

IP-1
40 8 48 16 56 24 64 32
39 7 47 15 55 23 63 31
38 6 46 14 54 22 62 30
37 5 45 13 53 21 61 29
36 4 44 12 52 20 60 28
35 3 43 11 51 19 59 27
34 2 42 10 50 18 58 26
33 1 41 9 49 17 57 25
DETAILS OF SINGLE ROUND

The left and right halves of each 64-bit intermediate value are treated as separate 32-bit
quantities, labeled L and R. As in the classic Feistel cipher, the overall process at each round is
summarized as follows:

Li = Ri−1
Ri = Li−1 ⊕ F(Ri−1, Ki)

The round key Ki is 48 bits. The R input is 32 bits. This R input is first expanded to 48 bits by
Expansion/Permutation (E table):

Expansion/Permutation (E table)
32 1 2 3 4 5
4 5 6 7 8 9
8 9 10 11 12 13
12 13 14 15 16 17
16 17 18 19 20 21
20 21 22 23 24 25
24 25 26 27 28 29
28 29 30 31 32 1
The resulting 48 bits are XORed with Ki. This 48 bit result passes through a substitution function
that produces 32-bit output, which is permuted by Permutation function (P):

Permutation function( P )
16 7 20 21 29 12 28 17
1 15 23 26 5 18 31 10
2 8 24 14 32 27 3 9
19 13 30 6 22 11 4 25
The role of the S-boxes is illustrated in the figure.

The substitution consists of a set of 8 S-boxes, each of which accepts 6 bits of input and produces
4 bits of output.

Each row of an S-box defines a general reversible substitution: the middle 4 bits of each 6-bit
input group are replaced by the corresponding 4-bit S-box output, while the 1st and 6th (last)
bits select which of the four rows of that S-box to use.
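
A rough sketch of one round's data path, assuming the E table, the P table, and the eight
S-boxes (each a 4 × 16 table) are available as Python lists with the 1-based bit numbering
used above (an illustration, not the standard's reference code):

    def permute_bits(value, table, width):
        # pick bits of a `width`-bit integer according to a 1-based, MSB-first table
        out = 0
        for pos in table:
            out = (out << 1) | ((value >> (width - pos)) & 1)
        return out

    def feistel_F(R, K, E, P, S):
        # R: 32-bit half, K: 48-bit round key
        x = permute_bits(R, E, 32) ^ K            # expand to 48 bits, mix in the subkey
        y = 0
        for i in range(8):                        # eight S-boxes, 6 bits in, 4 bits out
            chunk = (x >> (42 - 6 * i)) & 0x3F
            row = ((chunk >> 5) << 1) | (chunk & 1)   # outer bits pick the row
            col = (chunk >> 1) & 0x0F                 # middle 4 bits pick the column
            y = (y << 4) | S[i][row][col]
        return permute_bits(y, P, 32)             # final 32-bit permutation P

    def des_round(L, R, K, E, P, S):
        # Li = Ri-1, Ri = Li-1 XOR F(Ri-1, Ki)
        return R, L ^ feistel_F(R, K, E, P, S)
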
KEY GENERATION

The input key has 64 bits, but every 8th bit (bits 8, 16, 24, 32, 40, 48, 56, 64) is not used
further. The remaining 56-bit key is first subjected to the permutation Permuted Choice 1:

Permuted Choice 1 (PC-1)


57 49 41 33 25 17 9
1 58 50 42 34 26 18
10 2 59 51 43 35 27
19 11 3 60 52 44 36
63 55 47 39 31 23 15
7 62 54 46 38 30 22
14 6 61 53 45 37 29
21 13 5 28 20 12 4
The resulting 56-bit key is then treated as two 28-bit quantities, labeled C0 and D0. At each round,
Ci-1 and Di-1 are separately subjected to a circular left shift, or rotation, of 1 or 2 bits, as governed
by the following:

Schedule of Left Shifts


Round number  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16
Bits rotated  1  1  2  2  2  2  2  2  1  2   2   2   2   2   2   1
These shifted values serve as input to the next round. They also serve as input to Permuted
Choice 2, which produces a 48-bit output that serves as input to the function F(Ri−1, Ki).

Permuted Choice 2 (PC-2)


14 17 11 24 1 5 3 28
15 6 21 10 23 19 12 4
26 8 16 7 27 20 13 2
41 52 31 37 47 55 30 40
51 45 33 48 44 49 39 56
34 53 46 42 50 36 29 32
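
The whole subkey-generation procedure can be sketched as follows (Python; PC1 and PC2 are
the tables above and `key` is the 64-bit input key as an integer — an illustration, not the
official reference code):

    SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

    def permute_bits(value, table, width):
        out = 0
        for pos in table:
            out = (out << 1) | ((value >> (width - pos)) & 1)
        return out

    def rotl28(x, n):
        return ((x << n) | (x >> (28 - n))) & 0x0FFFFFFF

    def des_subkeys(key, PC1, PC2):
        k56 = permute_bits(key, PC1, 64)          # PC-1 drops the 8 parity bits
        C, D = (k56 >> 28) & 0x0FFFFFFF, k56 & 0x0FFFFFFF
        subkeys = []
        for s in SHIFTS:                          # one rotation amount per round
            C, D = rotl28(C, s), rotl28(D, s)
            subkeys.append(permute_bits((C << 28) | D, PC2, 56))   # PC-2: 56 -> 48 bits
        return subkeys                            # K1..K16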

DES DECRYPTION
As with any Feistel cipher, decryption uses the same algorithm as encryption, except that the
application of subkeys is reversed.

THE AVALANCHE EFFECT IN DES

A 1-bit change in the plaintext leads to a difference of roughly 34 bits in the ciphertext; a 1-bit
change in the key leads to a difference of roughly 35 bits in the ciphertext.

THE STRENGTH OF DES

DES was shown to be insecure in July 1998, when the Electronic Frontier Foundation (EFF)
announced that it had broken a DES encryption using a special-purpose "DES cracker" machine
built for less than $250,000. The attack took less than 3 days.

The design criteria for the S-boxes were not made public, so there was a concern that
cryptanalysis might be possible for an opponent who knows of weaknesses in the S-boxes. To
date, no such weaknesses have been published.

DES also appears to be resistant to timing attacks, although some avenues remain to be explored.
A timing attack tries to learn about the algorithm or the key by analyzing how long the algorithm
takes to run on different inputs. One such approach yields the Hamming weight (number of bits
equal to 1) of the secret key.

DIFFERENTIAL AND LINEAR CRYPTANALYSIS

The differential cryptanalysis attack was the first published attack capable of breaking DES with
complexity less than 2^55. The scheme can successfully cryptanalyze DES with an effort on the
order of 2^47, requiring 2^47 chosen plaintexts. The idea is to follow the differences between two
plaintexts through the rounds of the DES transformations and to estimate the probability of the
output difference depending on the key used. The first open publication on differential
cryptanalysis appeared in 1990.

Linear cryptanalysis was described in 1993. The idea is to find a linear equation (using XOR
operations) between bits of the plaintext, ciphertext, and key that holds with probability
noticeably different from 0.5.

BLOCK CIPHER DESIGN PRINCIPLES

1. No output bit of any S-box should be too close to a linear function of the input bits.
2. Each row of an S-box should include all 16 possible output combinations
3. If two inputs to an S-box differ in exactly 1 bit, the outputs must differ in at least 2 bits.
4. If 2 inputs to an S-box differ in the 2 middle bits exactly, the outputs must differ in at
least 2 bits.
5. If 2 inputs to an S-box differ in their first 2 bits and are identical in their last 2 bits, the 2
outputs must not be the same
6. For any nonzero 6-bit difference between inputs, no more than 8 of the 32 pairs of inputs
exhibiting that difference may result in the same output difference
These criteria are intended to increase confusion properties.

BLOCK CIPHER DESIGN PRINCIPLES

The criteria for the permutation P are as follows:

1. The 4 output bits from each S-box at round i are distributed so that 2 of them affect
(provide input for) "middle bits" of round (i+1) and the other 2 affect end bits. The 2
middle bits of input to an S-box are not shared by adjacent S-boxes. The end bits are the
2 left-hand bits and the 2 right-hand bits, which are shared with adjacent S-boxes (for
example, bits 1, 2, 3, 4 – the outputs of S1 – affect bits 9 and 17 (end bits) and 23 and 31
(middle bits), respectively).
2. The 4 output bits from each S-box affect 6 different S-boxes on the next round, and no 2
affect the same S-box (for example, bits 1, 2, 3, 4 – the outputs of S1 – affect (S2, S3) via
bit 1, (S4, S5) via bit 2, S6 via bit 3, and S8 via bit 4).
3. For 2 S-boxes j, k, if an output bit from Sj affects a middle bit of Sk on the next round,
then an output bit from Sk cannot affect a middle bit of Sj. This implies that for j = k, an
output bit from Sj must not affect a middle bit of Sj. For example, output bit 3 of S1
affects a middle bit of S6; then the output bits of S6 must not affect the middle bits of S1.
The output bits of S6 are 21, 22, 23, 24. Bit 21 affects bit 4 – an end bit of S1 and S2; bit
22 affects bit 29 – an end bit of S7 and S8; bit 23 affects bit 11 – a middle bit of S3; bit 24
affects bit 19 – a middle bit of S5. So the middle bits of S1 are not affected by the output
bits of S6.
These criteria are intended to increase diffusion properties.
Number of rounds – the greater this number, the more difficult cryptanalysis becomes. If DES
had 15 or fewer rounds, differential cryptanalysis would require less effort than a brute-force
attack.

The function F should be nonlinear and provide an avalanche effect. The bit independence
criterion is also used: output bits j and k should change independently when any single input bit i
is inverted, for all i, j, k.

Size of S-boxes: the larger the size, the more resistant they are to differential and linear
cryptanalysis. S-boxes may be generated randomly or automatically according to mathematical
rules. The contents of the S-boxes may depend on the key.

Key schedule algorithm – no general principles for this have yet been promulgated. Key
schedule (production of subkeys) should guarantee key/ciphertext avalanche criterion and bit
independence criterion.

The IDEA encryption algorithm

 provides high level security not based on keeping the algorithm a secret, but rather upon
ignorance of the secret key
 is fully specified and easily understood
 is available to everybody
 is suitable for use in a wide range of applications
 can be economically implemented in electronic components (VLSI Chip)
 can be used efficiently
 may be exported world wide
 is patent protected to prevent fraud and piracy

Description of IDEA

The block cipher IDEA operates on 64-bit plaintext and ciphertext blocks and is controlled
by a 128-bit key. The fundamental innovation in the design of this algorithm is the use of
operations from three different algebraic groups. The substitution boxes and the associated table
lookups used in the block ciphers available to date have been completely avoided. The algorithm
structure has been chosen such that, with the exception that different key sub-blocks are used, the
encryption process is identical to the decryption process.

Key Generation

The 64-bit plaintext block is partitioned into four 16-bit sub-blocks, since all the algebraic
operations used in the encryption process operate on 16-bit numbers. Another process produces
for each of the encryption rounds, six 16-bit key sub-blocks from the 128-bit key. Since a further
four 16-bit key-sub-blocks are required for the subsequent output transformation, a total of 52 (=
8 x 6 + 4) different 16-bit sub-blocks have to be generated from the 128-bit key.

The key sub-blocks used for the encryption and the decryption in the individual rounds are
shown in Table.

The 52 16-bit key sub-blocks which are generated from the 128-bit key are produced as
follows:
 First, the 128-bit key is partitioned into eight 16-bit sub-blocks which are then directly
used as the first eight key sub-blocks.
 The 128-bit key is then cyclically shifted to the left by 25 positions, after which the
resulting 128-bit block is again partitioned into eight 16-bit sub-blocks to be directly used
as the next eight key sub-blocks.
 The cyclic shift procedure described above is repeated until all of the required 52 16-bit
key sub-blocks have been generated.
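
A compact sketch of this procedure (Python, treating the 128-bit key as an integer; an
illustration rather than a reference implementation):

    def idea_subkeys(key):
        # key: 128-bit integer; returns the 52 16-bit key sub-blocks
        subkeys = []
        k = key
        while len(subkeys) < 52:
            for i in range(8):                 # take the current eight 16-bit sub-blocks
                if len(subkeys) == 52:
                    break
                subkeys.append((k >> (112 - 16 * i)) & 0xFFFF)
            k = ((k << 25) | (k >> (128 - 25))) & ((1 << 128) - 1)   # rotate left by 25 bits
        return subkeys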

Encryption

The functional representation of the encryption process is shown in Figure 1. The process
consists of eight identical encryption steps (known as encryption rounds) followed by an output
transformation. The structure of the first round is shown in detail.
In the first encryption round, the first four 16-bit key sub-blocks are combined with two of
the 16-bit plaintext blocks using addition modulo 2^16, and with the other two plaintext blocks
using multiplication modulo 2^16 + 1. The results are then processed further as shown in Figure 1,
whereby two more 16-bit key sub-blocks enter the calculation and the third algebraic group
operator, the bit-by-bit exclusive OR, is used. At the end of the first encryption round four 16-bit
values are produced which are used as input to the second encryption round in a partially
changed order. The process described above for round one is repeated in each of the subsequent
7 encryption rounds using different 16-bit key sub-blocks for each combination. During the
subsequent output transformation, the four 16-bit values produced at the end of the 8th encryption
round are combined with the last four of the 52 key sub-blocks using addition modulo 2^16 and
multiplication modulo 2^16 + 1 to form the resulting four 16-bit ciphertext blocks.
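
The three group operations on 16-bit sub-blocks can be sketched as follows (Python; note the
IDEA convention that an all-zero sub-block represents 2^16 in the multiplication):

    def idea_add(a, b):
        return (a + b) & 0xFFFF                # addition modulo 2^16

    def idea_mul(a, b):
        # multiplication modulo 2^16 + 1, with 0 interpreted as 2^16
        a = a or 0x10000
        b = b or 0x10000
        return ((a * b) % 0x10001) & 0xFFFF

    def idea_xor(a, b):
        return a ^ b                           # bit-by-bit exclusive OR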

Decryption

The computational process used for decryption of the ciphertext is essentially the same as
that used for encryption of the plaintext. The only difference compared with encryption is that
during decryption, different 16-bit key sub-blocks are generated.
More precisely, each of the 52 16-bit key sub-blocks used for decryption is the inverse of the
key sub-block used during encryption in respect of the applied algebraic group operation.
Additionally, the key sub-blocks must be used in the reverse order during decryption in order to
reverse the encryption process as shown in Table.

Modes of operation

IDEA supports all modes of operation described by NIST in its publication FIPS 81. A
block cipher encrypts and decrypts plaintext in fixed-size blocks (typically 64 or 128 bits). For
plaintext exceeding this fixed size, the simplest approach is to partition the plaintext into blocks
of equal length and encrypt each separately. This method is called Electronic Code Book (ECB)
mode. However, ECB is weak in many settings: identical plaintext blocks encrypt to identical
ciphertext blocks, which leaks patterns, and the problem is worse with small block sizes (for
example, smaller than 40 bits). Because ECB has disadvantages in most applications, other
methods, called modes, have been created: Cipher Block Chaining (CBC), Cipher Feedback
(CFB), and Output Feedback (OFB).
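
The difference between ECB and CBC chaining can be sketched generically (Python;
`enc(key, block)` stands for any 64-bit block cipher such as IDEA, and padding is ignored
for brevity):

    def ecb_encrypt(enc, key, blocks):
        # each block encrypted independently: equal blocks give equal ciphertexts
        return [enc(key, b) for b in blocks]

    def cbc_encrypt(enc, key, blocks, iv):
        # each plaintext block is XORed with the previous ciphertext block first
        out, prev = [], iv
        for b in blocks:
            prev = enc(key, b ^ prev)
            out.append(prev)
        return out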

Weak keys for IDEA

According to Daemen's report [6], large classes of weak keys have been found for the block
cipher IDEA. IDEA has a 128-bit key and encrypts blocks of 64 bits. For a class of 2^23 keys
IDEA exhibits a linear factor. For a certain class of 2^35 keys the cipher has a global characteristic
with probability 1. For another class of 2^51 keys, only two encryptions and the solution of a set
of 16 nonlinear boolean equations in 12 variables are sufficient to test whether the used key
belongs to this class; if it does, its particular value can be calculated efficiently. It is shown that
the problem of weak keys can be eliminated by slightly modifying the key schedule of IDEA.

In [4], two new attacks on a reduced number of rounds of IDEA are presented: a truncated
differential attack and a differential-linear attack. The truncated differential attack finds the secret
key of 3.5 rounds of IDEA in more than 86% of all cases using an estimated number of 2^56
chosen plaintexts and a workload of about 2^67 encryptions of 3.5 rounds of IDEA. With 2^40
chosen plaintexts the attack works for 1% of all keys. The differential-linear attack finds the
secret key of 3 rounds of IDEA. It needs at most 2^29 chosen pairs of plaintext and a workload of
about 2^44 encryptions with 3 rounds of IDEA.

Implementation

Although IDEA involves only simple 16-bit operations, software implementations of this
algorithm still cannot offer the encryption rate required for on-line encryption in high-speed
networks. A software implementation running on a Sun Enterprise E4500 machine with twelve
400 MHz Ultra-Hi processors performs 2.30 × 10^6 encryptions per second, an equivalent
encryption rate of 147.13 Mb/sec, which still cannot serve applications such as encryption for
155 Mb/sec Asynchronous Transfer Mode (ATM) networks.

Hardware implementations offer significant speed improvements over software
implementations by exploiting parallelism among operators. In addition, they are likely to be
cheaper and to have lower power consumption and a smaller footprint than a high-speed software
implementation. The first VLSI implementation of IDEA was developed and verified by
Bonnenberg et al. in 1992 using a 1.5 μm CMOS technology [7]. This implementation had an
encryption rate of 44 Mb/sec. In 1994, VINCI, a 177 Mb/sec VLSI implementation of the IDEA
algorithm in 1.2 μm CMOS technology, was reported by Curiger et al. [5, 11]. A 355 Mb/sec
implementation in 0.8 μm technology was reported in 1995 by Wolter et al. [10]. The fastest
single-chip implementation of which we are aware is a 424 Mb/sec implementation in 0.7 μm
technology by Salomao et al. [9]. A commercial implementation of IDEA, the IDEACrypt
coprocessor developed by Ascom, achieves 300 Mb/sec [2].

A high-performance implementation of IDEA presented by Leong [8] uses a novel bit-serial
architecture to perform multiplication modulo 2^16 + 1; the implementation occupies a minimal
amount of hardware. The bit-serial architecture enabled the algorithm to be deeply pipelined to
achieve a system clock rate of 125 MHz. An implementation on a Xilinx Virtex XCV300-4 was
successfully tested, delivering a throughput of 500 Mb/sec. With an XCV1000-6 device, the
estimated performance is 2.35 Gb/sec, three orders of magnitude faster than a software
implementation on a 450 MHz Intel Pentium II. This design is suitable for applications in online
encryption for high-speed networks.
RC6 algorithm

The RC6 algorithm is a block cipher that was one of the finalists in the Advanced Encryption Standard
(AES) competition (Rivest et al., 1998a); the AES competition, sponsored by the National Institute of
Standards and Technology (NIST), began in 1997 (Nechvatal et al., 2000). The RC6 algorithm evolved
from its predecessor RC5, a simple and “parameterized family of encryption algorithms” (Rivest, 1997).
Since RC6 is an evolution of RC5, evolutionary differences will be noted accordingly. Also, for
consistency in all AES-related documents and RC6 research, whenever RC6 is mentioned without any
trailing parameters, the assumed parameters are the AES required parameters, which will be defined later.

The evolution leading to RC6 has provided a simple cipher yielding numerous evaluations and
adequate security in a small package. After describing the structure of the algorithm, the prominent goal
that stands out is simplicity. Through this simplicity, multiple evaluations have been performed, including
AES-related evaluations, which will be discussed at a high level, due to complexity and number of
articles. The fact that such a small, simple algorithm contended for AES with such high security
requirements is noteworthy.

Description
In 1995, RC5 came about from Ronald Rivest, one of the creators of the RSA algorithm (Rivest et
al., 1998a). RC5 is a symmetric block cipher that relies heavily on data-dependent rotations (Rivest,
1997). RC5 has been the subject of many studies that have expanded the knowledge of how RC5’s
structure contributes to its security (Rivest et al., 1998a). With certain architectural constraints by the
AES competition, RC5 did not appear to be the best fit. However, in 1998, RC5's successor was born: RC6.
Improvements over RC5 include using four w-bit word registers, integer multiplication as an additional
primitive operation, and introducing a quadratic equation into the transformation (Rivest et al., 1998a).

Algorithm Details
RC6, like RC5, consists of three components: a key expansion algorithm, an encryption
algorithm, and a decryption algorithm. The parameterization is shown in the following specification:
RC6-w/r/b, where w is the word size, r is the non-negative number of rounds, and b is the byte size of the
encryption key (Rivest et al., 1998a). RC6 makes use of data-dependent rotations, similar to DES rounds
(Rivest et al., 1998a). RC6 is based on seven primitive operations as shown in Table 1. Normally, there
are only six primitive operations (Rivest et al., 1998a); however, the parallel assignment is primitive and
an essential operation to RC6. The addition, subtraction, and multiplication operations use two’s
complement representations (Rivest, 1997). Integer multiplication is used to increase diffusion per round
and increase the speed of the cipher (Rivest et al., 1998a).

Operation                        Description
a + b                            Integer addition modulo 2^w
a – b                            Integer subtraction modulo 2^w
a ⊕ b                            Bitwise exclusive-or (XOR) of w-bit words
a × b                            Integer multiplication modulo 2^w
a <<< b                          Rotate the w-bit word a to the left by the amount given
                                 by the least significant (log2 w) bits of b
a >>> b                          Rotate the w-bit word a to the right by the amount
                                 given by the least significant (log2 w) bits of b
Enc: (A,B,C,D) = (B,C,D,A)       Parallel assignment of the values on the right to the
Dec: (A,B,C,D) = (D,A,B,C)       registers on the left
Table 1: RC6 Operations (Rivest et al., 1998a)

Diffusion
Diffusion involves propagating bit changes from one block to other blocks. An avalanche effect is
where one small change in the plaintext triggers major changes in the ciphertext. To speed up the
avalanche of change between rounds, a quadratic equation is introduced (Rivest et al., 1998a). By
increasing the rate of diffusion, the rotation amounts are more likely to be "spoiled" sooner by the
changes arising from simple differentials (Rivest et al., 1998a). To achieve the security goals of the
transformation, the following quadratic equation is used twice within each round:

f(x) = x(2x + 1) mod 2^w

The high-order bits of this equation, which depend on all of the bits of x, are used to determine the
rotation amount used (Rivest et al., 1998a). In conjunction with the quadratic equation, the (log2 w)-bit
shift complicates advanced cryptanalytic attacks (Rivest et al., 1998a). Integer multiplication also
contributes by making sure that all of the bits of the rotation amounts are dependent on the bits of another
register (Rivest et al., 1998a).
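
A small sketch of this step with the AES parameter w = 32 (so log2 w = 5); this is an
illustration of the rotation-amount computation, not code from the RC6 submission:

    W, MASK, LGW = 32, 0xFFFFFFFF, 5

    def f(x):
        return (x * (2 * x + 1)) & MASK        # x(2x + 1) mod 2^w

    def rotl(x, n):
        n %= W
        return ((x << n) | (x >> (W - n))) & MASK

    # after the fixed <<< 5 rotation, the high-order bits of f(x) sit in the low
    # positions and later drive the data-dependent rotation of another register
    t = rotl(f(0x12345678), LGW)
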
Key Setup
The key expansion algorithm is used to expand the user-supplied key to fill an expanded array S,
so S resembles an array of t random binary words (Rivest, 1997). The key schedule algorithm for RC6
differs from the RC5 version where more words are derived from the user-supplied key (Rivest et al.,
1998a). The user must supply a key of b bytes, where 0 ≤ b ≤ 255, and from which (2r+4) words are
derived and stored in a round key array S (Rivest et al., 1998a). Zero bytes are appended to give the key
length equal to a "non-zero integral number" of words (Rivest et al., 1998a). The key bytes are then
loaded in little-endian order into an array L of size c; when b = 0, c = 1 and L[0] = 0 (Rivest et al.,
1998a). The (2r+4) derived words are stored in array S for later decryption or encryption (Rivest et
al., 1998a). Figure 1 illustrates the key schedule used in both RC5 and RC6. Pw and Qw are "magic
constants", or word-sized binary constants, where Odd(x) is the least odd integer greater than or
equal to |x| (Rivest, 1997). In Figure 1, the base of natural logarithms and the golden ratio, from
which Pw and Qw are derived, are given as e = 2.718281828459… and φ = 1.618033988749…,
respectively.
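
For w = 32 the magic constants work out to Pw = 0xB7E15163 and Qw = 0x9E3779B9, and the
mixing stage can be sketched roughly as follows (Python; this follows the published RC5/RC6
key-schedule shape and should be read as an illustration, not the reference code):

    def rc6_key_schedule(key, r):
        # key: bytes object of length b; returns the t = 2r + 4 round keys
        MASK, t = 0xFFFFFFFF, 2 * r + 4
        c = max(1, (len(key) + 3) // 4)
        L = [0] * c
        for i, byte in enumerate(key):                 # little-endian load into L
            L[i // 4] |= byte << (8 * (i % 4))
        S = [0xB7E15163]                               # Pw
        for i in range(1, t):
            S.append((S[i - 1] + 0x9E3779B9) & MASK)   # add Qw each step

        def rotl(x, n):
            n %= 32
            return ((x << n) | (x >> (32 - n))) & MASK

        A = B = i = j = 0
        for _ in range(3 * max(t, c)):                 # three interleaved mixing passes
            A = S[i] = rotl((S[i] + A + B) & MASK, 3)
            B = L[j] = rotl((L[j] + A + B) & MASK, A + B)
            i, j = (i + 1) % t, (j + 1) % c
        return S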

Encryption and Decryption


The processes of encryption and decryption are both composed of three stages: pre-whitening, an
inner loop of rounds, and post-whitening. Pre-whitening and post-whitening remove the possibility of the
plaintext revealing part of the input to the first round of encryption and the ciphertext revealing part of the
input to the last round of encryption (Rivest et al., 1998a).

The block encryption process illustrated in Figure 1 uses operations in Table 1. First, the registers
B and D undergo pre-whitening. Next, there are r rounds, which are designated by the “for” loop in
Figure 1. The registers B and D are put through the quadratic equation and rotated (log2 w) bits to
the left, giving the values t and u respectively. Register A is then XORed with t, and register C with
u. The value involving A is left-rotated by u bits and added to round key S[2i]; the value involving
C is left-rotated by t bits and added to round key S[2i + 1]. In the final stage of the round, the
register values are permuted, using parallel assignment, to mix the AB computation with the CD
computation, increasing cryptanalytic complexity (Rivest et al., 1998a). Finally, registers A and C
undergo post-whitening.

Though encryption and decryption are similar in overall structure, the detailed differences bear
discussion. For RC6 decryption, the procedure begins with a pre-whitening step for C and A. The loop
runs in reverse for the number of r rounds. Within the loop, the first task is parallel assignment. From
there, the aforementioned quadratic equation is used on D and B respectively. The resulting value for u,
and t respectively, is left-rotated (log2 w) bits. The round key S[2i + 1] is subtracted from register C value,
the result of which is right-rotated t bits; round key S[2i] is subtracted from register A value, the result of
which is right-rotated u bits. This resulting value involving register C has an exclusive-or operation with
u, A with t respectively. After completing the loop, D and B undergo a post-whitening step.

AES Candidacy
The AES criteria for candidacy were very specific and, in some cases, architecturally limiting.
The requirement for 128-bit block handling pushed RC5 out of competition because of unreliable or
inefficient implementations of 64-bit operations in the target architectures and languages (Rivest et al.,
1998a). To satisfy the AES requirements for the target architectures and languages, RC6 parameters
contained the following values: w = 32, r = 20, and b = [16, 24, 32]; the use of four 32-bit registers
satisfies the 128-bit block handling requirement (Rivest et al., 1998a); the choice of 20 rounds comes
from a linear cryptanalysis in which 16 rounds could be compromised with 2^119 plaintexts (Knudsen &
Meier, 1999). With the AES values, the bit shift is (log2 32), or 5 bits, and the length of the round key
array is (2 × 20) + 4, or 44 keys.

Input: plaintext stored in four w-bit input registers A, B, C, D; 20 rounds; 32-bit round keys
S[0, …, 43]

Output: ciphertext stored in A, B, C, D

Figure 4: RC6 AES Encryption Diagram

Procedure:

B = B + S[0]    // pre-whitening
D = D + S[1]
for i = 1 to 20 do
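
The diagram's procedure is only partially reproduced above; following the prose description of
the round, the full encryption with the AES parameters can be sketched as follows (Python; an
illustrative reconstruction, not the reference code):

    W, R, MASK, LGW = 32, 20, 0xFFFFFFFF, 5

    def rotl(x, n):
        n %= W
        return ((x << n) | (x >> (W - n))) & MASK

    def rc6_encrypt(A, B, C, D, S):
        B = (B + S[0]) & MASK                  # pre-whitening
        D = (D + S[1]) & MASK
        for i in range(1, R + 1):
            t = rotl((B * (2 * B + 1)) & MASK, LGW)    # f(B) <<< 5
            u = rotl((D * (2 * D + 1)) & MASK, LGW)    # f(D) <<< 5
            A = (rotl(A ^ t, u) + S[2 * i]) & MASK
            C = (rotl(C ^ u, t) + S[2 * i + 1]) & MASK
            A, B, C, D = B, C, D, A            # parallel assignment
        A = (A + S[2 * R + 2]) & MASK          # post-whitening
        C = (C + S[2 * R + 3]) & MASK
        return A, B, C, D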

Simplicity
An important goal of RC6 is simplicity. By keeping the cipher structure simple, it becomes
available to a larger group of people for evaluation. The simplistic structure also plays a part in
performance and security.

Research Appeal
When a cipher is simple, it can be analyzed widely by cryptanalysts (Rivest et al., 2000). The
simplicity of RC6 has been quite striking for many researchers (Rivest et al., 1998a). This simplicity
leaves RC6 open to both rudimentary and complex analysis, which permits many people to evaluate the
security of the algorithm (Rivest et al., 1998a). However, prior research exists on RC5, which applies to
RC6. With this existing base of knowledge, RC6 appears to come from a quite mature and heavily studied
cipher (Rivest et al., 1998a).
Performance
Achieving many of the goals, while keeping the cipher simple, was the design objective that
prevailed throughout RC6 development (Rivest et al., 1998a). Considering the simplicity of this cipher,
an estimated assembly language implementation can be obtained for each component (key setup, block
encryption, block decryption) in under 256 bytes (Rivest et al., 1998a). Since the algorithm is quite
simple, this permits a compiler to produce well-optimized code, resulting in good performance without
hand-optimizations (Rivest et al., 2000). When discussing the size of RC6, the actual code length is not
the only metric; memory usage is of particular interest. In terms of the Java Runtime Environment, RC6
used the least amount of memory out of all the AES finalists (Rivest et al., 1998a).

Security
The security of the cipher is augmented by the simple structure. For instance, the rate of diffusion
is increased by several simple steps in the round: integer multiplication, the quadratic equation, and fixed
bit shifting. The data-dependent rotations are improved, because the rotation amounts are determined
from the high-order bits in f(x), which in turn are dependent on the register bits. RC6 security has been
evaluated to possess an “adequate security margin” (Nechvatal et al, 2000); this rating is given with
knowledge of theoretical attacks, which were devised out of the multiple evaluations. The AES-specific
security evaluations provide sufficient breadth and depth to how RC6 security is affected by the
simplicity of the cipher.

AES Security Evaluations

The security evaluations performed against RC6 result from scrutiny during the AES competition.
Though there have been evaluations of RC5, the focus of those evaluations does not negatively affect RC6.
There is a variety of detailed security analyses available for RC6 in the AES competition that is very
informative. This assortment of materials shows that a great deal of thought and scrutiny has gone into the
evolution and assessment of RC6.
Software used:

Introduction

ModelSim is a verification and simulation tool for VHDL, Verilog, SystemVerilog, and
mixed-language designs. This lesson provides a brief conceptual overview of the ModelSim
simulation environment. It is divided into four topics, which you will learn more about in
subsequent lessons.

• Basic simulation flow

• Project flow

• Multiple library flow

• Debugging tools

Basic Simulation Flow

 Creating the Working Library


In ModelSim, all designs are compiled into a library. You typically start a new simulation in
ModelSim by creating a working library called "work," the default destination the compiler uses
for compiled design units.

 Compiling Your Design

After creating the working library, you compile your design units into it. The ModelSim library
format is compatible across all supported platforms. You can simulate your design on any
platform without having to recompile your design.

 Loading the Simulator with Your Design and Running the Simulation

With the design compiled, you load the simulator with your design by invoking the simulator on
a top-level module (Verilog) or a configuration or entity/architecture pair (VHDL). Assuming the
design loads successfully, the simulation time is set to zero, and you enter a run command to
begin simulation (a typical command sequence is sketched after this list).
 Debugging Your Results

If you don’t get the results you expect, you can use ModelSim’s robust debugging environment
to track down the cause of the problem.
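
A minimal command sequence for the basic simulation flow above might look as follows (file
and entity names are placeholders):

    vlib work                        # create the working library
    vcom counter.vhd tb_counter.vhd  # compile VHDL design units into "work"
    vsim work.tb_counter             # load the simulator with the top-level unit
    run 1 us                         # run the simulation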

Project Flow

A project is a collection mechanism for an HDL design under specification or test. Even though
you don't have to use projects in ModelSim, they may ease interaction with the tool and are
useful for organizing files and specifying simulation settings. The following diagram shows
the basic steps for simulating a design within a ModelSim project.
Introduction

The Altera Quartus II design software provides a complete, multiplatform design environment
that easily adapts to your specific design needs. It is a comprehensive environment for system-
on-a-programmable-chip (SOPC) design. The Quartus II software includes solutions for all
phases of FPGA and CPLD design (Figure 1).
Graphical User Interface Design Flow

You can use the Quartus II software graphical user interface (GUI) to perform all stages of the
design flow. Figure 2 shows the Quartus II GUI as it appears when you first start the software.

The Quartus II software includes a modular Compiler. The Compiler includes the following
modules (modules marked with an asterisk are optional during a compilation, depending on your
settings):

■ Analysis & Synthesis

■ Partition Merge*

■ Fitter
■ Assembler*

■ TimeQuest Timing Analyzer*

■ Design Assistant*

■ EDA Netlist Writer*

■ HardCopy® Netlist Writer*

To run all Compiler modules as part of a full compilation, on the Processing menu, click Start
Compilation. You can also run each module individually by pointing to Start on the Processing
menu and then clicking the command for the module you want to start. In addition, you can use
the Tasks window to start Compiler modules individually. The Tasks window also allows you to
change settings or view the report file for the module, or to start other tools related to each stage
in a flow.
REFERENCES

[1] S. Chow and Y. Kong, “On Big-Data Analytics in Biomedical Research,” biometrics Biostat., vol. 6, no.
3, 2015.

[2] E. Benkhelifa, M. Abdel-Maguid, S. Ewenike, and D. Heatley, “The Internet of Things: The eco-system
for sustainable growth,” Computer Systems and Applications (AICCSA), 2014 IEEE/ACS 11th International
Conference on. pp. 836–842, 2014.

[3] D. Niewolny, “How the Internet of Things Is Revolutionizing Healthcare,” 2014.

[4] E. W. T. Ngai, “Internet of Things in healthcare : the case of RFID-enabled asset management Samuel
Fosso Wamba *,” vol. 11, no. 3, pp. 318–335, 2013.

[5] S. Amendola, R. Lodato, S. Manzari, C. Occhiuzzi, and G. Marrocco, “RFID Technology for IoT-Based
Personal Healthcare in Smart Spaces,” vol. 1, no. 2, pp. 144–152, 2014.

[6] A. Saleh, M. Mosa, I. Yoo, and L. Sheets, “A Systematic Review of Healthcare Applications for
Smartphones,” BMC Med. Inform. Decis. Mak., vol. 12, no. 1, p. 1, 2012.

[7] X. Liu, N. Iftikhar, and X. Xie, “Survey of real-time processing systems for big data,” Proc. 18th Int.
Database Eng. Appl. Symp. - IDEAS ’14, no. October 2015, pp. 356–361, 2014.

[8] U. C. Berkeley, “Snapshots in Hadoop Distributed File System,” Csberkeleyedu, 2011.

[9] A. Holzinger, Biomedical Informatics. 2014.

[10] oskar fallman Karllon, “Innovation in healthcare design,” 1995.

[11] F. Nasri and N. Moussa, “Internet of Things : Intelligent system for healthcare Based on WSN and
Android,” 2014.

[12] "Health, Demographic Change and Wellbeing," European Commission. [Online]. Available:
http://ec.europa.eu/programmes/horizon2020/en/h2020-section/healthdemographic-change-and-
wellbeing. [Accessed: 03-Mar-2016].
[13] S. Canale, “The Bravehealth Software Architecture for the Monitoring of Patients Affected by CVD,”
5th Telemed, pp. 29–34, 2013.

[14] F. M. Gerdes, Smaradottir, B., “End to End Infraestructure for Usability Evaluation of eHealth
Applications and Services,” in Scandinavian Conference on Health Informatics, 2014, pp. 21–22.

[15] P. S. Youm S1, “How the Awareness of u-Healthcare Service and Health Conditions Affect Healthy
Lifestyle: An Empirical Analysis Based on a u-Healthcare Service Experience,” Telemed. J. E Heal., vol. 4,
no. 21, pp. 286–295.

[16] J. Woodbridge, B. Mortazavi, and A. A. T. Bui, “Improving biomedical signal search results in big data
case-based reasoning environments,” Pervasive Mob. Comput., 2015.

[17] M. P. Brittes, B. S. Junior, C. Utfpr, U. Tecnológica, C. Utfpr, and U. Tecnológica, "A collaborative
approach to manage and share elderly biomedical information," pp. 1454–1458, 2014.

[18] “Interference Aware Scheduling of Sensors in IoT Enabled Health-Care Monitoring System
Interference Aware Scheduling of Sensors in IoT Enabled Health-care Monitoring System,” in 2014
Fourth international conferance of emerging Applications of Information Technolgy, 2016, no. December
2014, pp. 152–157.
