
A Project Report

On

AUTOMATIC STREET LIGHTS USING PIR SENSOR

For the Partial Fulfillment of the Requirement for the Award of

DIPLOMA
IN
ELECTRONICS & COMMUNICATION ENGINEERING

Submitted by

Jothik (21573-EC-032)
Rohith (21573-EC-030)
Navya (21573-EC-031)
Kalyan (21573-EC-023)

Under the Guidance of


SWEATHA VATSALYA
Assistant Professor

Department of Electronics & Communication Engineering


HOLY MARY INSTITUTE OF TECHNOLOGY-573
II-SHIFT POLYTECHNIC COLLEGE
Bogaram (V), Keesara (M), Medchal (Dist), 501301 (TS)
2023-2024
HOLY MARY INSTITUTE OF TECHNOLOGY-573
II-SHIFT POLYTECHNIC COLLEGE
Bogaram (V), Keesara (M), Medchal (Dist), 501301 (TS)

2023-2024

BONAFIDE CERTIFICATE

This is to certify that the Project Report entitled AUTOMATIC STREET LIGHTS USING PIR SENSOR, being submitted by JOTHIK in partial fulfillment of the requirement for the award of the Diploma in Electronics and Communication Engineering at Holy Mary Institute of Technology, was carried out during the Academic Year 2023-2024. This record is a bonafide piece of work undertaken by him/her under the supervision of the undersigned. The results embodied in this report have not been submitted to any other university for the award of any degree.

INTERNAL GUIDE
SWEATHA VATSALYA
Assistant Professor, DECE

HEAD OF DEPARTMENT
CH PADMASRI
Head of Department, DECE

EXTERNAL EXAMINER
ACKNOWLEDGEMENT

I extend my high regards to our Chairman Dr. Arimanda Vara Prasad Reddy, Secretary Dr. Arimanda Vijaya Sarada Reddy, and Principal Mr. AVS Sai Kumar for providing all facilities in the college, and I am pleased to express heartfelt gratitude to our lab technicians for their cooperation during the project.

I am grateful to Mrs. Ch. Padmasri, HoD of DECE, and Ms. Sweatha Vatsalya, Project Coordinator, for their constant encouragement during the dissertation, and I convey my sincere appreciation for their suggestions and encouragement, without which this work would have remained an unfulfilled dream.


I would like to express my sincere and profound gratitude for the unmatched support and valuable guidance at each and every stage of completing this work successfully.

I am thankful to the Internal Department Committee, who have invested their valuable time to conduct our monthly presentations and provided feedback with many useful suggestions. We thank all the Faculty Members of the Department of Electronics & Communication Engineering for their encouragement and assistance.

Group Members
Jothik (21573-EC-032)
Rohith (21573-EC-030)
Navya (21573-EC-031)
Kalyan (21573-EC-023)
CONTENTS

CONTENTS

LIST OF FIGURES

ABSTRACT

1 Chapter - 1 INTRODUCTION
1.1 Introduction to verification 1
1.2 Literature Survey 4
1.3 Organization of Thesis 5
2 Chapter - 2 INTRODUCTION TO RISC-V
2.1 Introduction to RISC-V processor 6
2.2 RISC-V ISA overview 7
2.3 Instruction length coding 8
2.4 Register structure 9

3 Chapter - 3 RV32I BASE INTEGER INSTRUCTION SET


3.1 Base instruction formats 10
3.2 Immediate Encoding variants 11
3.3 Integer computational instructions 12
3.4 Control transfer instructions 13
3.5 Load and Store instructions 15
3.6 Memory model 16
3.7 Environment call and break points 17
4 Chapter – 4 INTRODUCTION TO VERIFICATION
4.1 Motivation 18
4.2 Verification 18
4.3 The Need of Verification 19
4.4 Traditional Test benches and UVM Verification 21
4.5 Methods of Verification 22
5 Chapter - 5 IMPLEMENTATION
5.1 RISC-V Verification Environment 23
5.2 Working and Execution 24
5.3 Stimulus Generator 25
5.4 UVM Environment 26
5.5 Transaction 27
5.6 Agents 28
5.7 Reference model 29
5.8 Verilog compiler and simulator (VCS) 30

6 Chapter - 6 RESULTS
6.1 Verification of ALU instruction Sequence 32
6.2 Verification of Load and Store dependency 33
6.3 Verification of Conformal Test 34
6.4 Verification of all instructions 35
6.5 Conformal test through microcode 36
7 Chapter - 7 CONCLUSION AND FUTURE SCOPE

REFERENCES 41
LIST OF FIGURES

Figure 1.1 Formal Verification Flow 2

Figure 2.3.1 RISC-V instruction length coding 8

Figure 2.4.1 RISC-V user level Base integer register state 9

Figure 3.1.1 RISC-V base instruction formats 10

Figure 3.2.1 RISC-V base instruction formats showing immediate variants 11

Figure 3.2.2 Types of immediates produced by RISC-V instructions 11

Figure 4.1 Logic Inside and Outside the Cone Of Influence (COI) 21

Figure 4.2 Typical Verification Component Environment 21

Figure 5.1 VC Formal in the Design Flow 23

Figure 5.2 VC Formal in the Verification Flow 24

Figure 5.3 VC Formal Components 24

Figure 5.4 Falsification Trace in VC Formal 28

Figure 5.6 Complex Operators in Design 30

Figure 6.1 VC Formal TaskList Console Window 32

Figure 6.2 Witness Proof Of a Proven Assertion 33

Figure 6.3 Inconclusive Results Due to Resource Constraints 34

Figure 6.4 Computation Of a Formal Core 35

Figure 6.5 Analysis Of Property Progress Report 36

Figure 6.6 Debugging Falsified Assertions 37


ABSTRACT

This project explores the efficiency and benefits of automatic street lights. The system employs sensors, such as motion detectors or ambient light sensors, to intelligently control street lighting. By adjusting brightness levels based on real-time conditions, it enhances energy conservation and reduces operational costs. The integration of smart technologies not only improves overall sustainability but also contributes to increased safety and reduced light pollution in urban areas. This work highlights the potential of automatic street lights in creating more responsive and resource-efficient lighting systems.

Street lights on roads currently waste power: on an empty road the lamps stay on whether or not a vehicle passes. Our aim is therefore to create a prototype model of an automatic street light using a motion sensor.

Group Members:
B. ROHITH 21573-EC-030
M. NAVYA 21573-EC-031
V. JOTHIK 21573-EC-032
M. KALYAN 21573-EC-023
Chapter 1

INTRODUCTION
The end goal of any verification endeavor in a semiconductor firm is to ensure that the chips produced meet their specification exactly and do not deviate from their intended behavior. In any product development cycle, 70% of the time is consumed by verification, at exorbitant cost; it therefore becomes imperative to ensure the integrity of this resource-intensive activity.

Electronic devices are becoming ubiquitous, with increasing complexity, high-end features, shrinking device sizes, and low-power operation. These devices shelter complex subsystems such as multi-core CPUs and GPUs, connectivity blocks for 4G/5G standards and Bluetooth, and a host of other major Intellectual Property (IP) blocks running at GHz clock rates coherently on a compatible software ecosystem. This enormous logic density poses a gigantic verification overhead in terms of hunting bugs and ensuring a reliable product is rolled out to market; otherwise the consequences can be catastrophic.

Verification teams put a great deal of planning into exploring the entire design space, employ crafty verification techniques, and deploy state-of-the-art verification tools and methodologies in their aspiration to roll out bug-free products, but lately lacunae have started emerging as a few bugs escape the conventional approaches. An early instance dates back to 1994, when the Intel Pentium floating-point divide unit returned incorrect answers and Intel had to take a pre-tax charge of $475 million against earnings, ostensibly the total cost associated with replacement of the flawed processors. More recently, the Meltdown and Spectre vulnerabilities created havoc: hardware glitches that cannot be rectified by software patches without performance degradation. The opportunity is now ripe to introspect on current verification practices. Such escapes have a catastrophic effect on firms, with billions of dollars evaporating on stock markets, erosion of trust from stakeholders and clients, and irreparable damage to brand image.

Figure 1.1 shows the formal verification flow for verifying any design-under-test (DUT). The first step is to code checkers and constraints as inputs; these checkers capture the intended design behavior.

Motion Sensor

A passive infrared (PIR) sensor (Fig. 5) is an electronic sensor that measures


infrared (IR) light radiating from objects in its field of view. They’re most
frequently utilized in PIR-based motion detectors. PIR sensors are commonly
utilized in security alarms and automatic lighting applications.

Fig. 5. PIR motion sensor

PIR sensors detect general movement, but don’t give information as to who or
what moved. For that purpose, an imaging IR sensor is required. PIR sensors are
commonly called simply “PIR” or sometimes “PID” for “passive infrared
detector.” The term passive refers to the fact that PIR devices don’t radiate energy
for detection purposes. They work entirely by detecting infrared (radiant heat)
emitted by or reflected from objects.
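This detect-only behaviour is exactly what the street-light prototype needs: switch the lamp on when the PIR output goes high and switch it off after a quiet interval. The fragment below is a minimal sketch of that control loop, assuming a Raspberry Pi with the PIR module's digital output on GPIO 17 and a relay driving the lamp on GPIO 27; the pin numbers and the 30-second hold time are illustrative choices, not taken from the project hardware.

import time
import RPi.GPIO as GPIO

PIR_PIN = 17        # assumed input pin wired to the PIR module's digital output
LAMP_PIN = 27       # assumed output pin driving the street-lamp relay
HOLD_SECONDS = 30   # keep the lamp on this long after the last motion event

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)
GPIO.setup(LAMP_PIN, GPIO.OUT, initial=GPIO.LOW)

last_motion = 0.0
try:
    while True:
        if GPIO.input(PIR_PIN):               # PIR output is high while motion is sensed
            last_motion = time.time()
            GPIO.output(LAMP_PIN, GPIO.HIGH)  # motion detected: lamp on
        elif time.time() - last_motion > HOLD_SECONDS:
            GPIO.output(LAMP_PIN, GPIO.LOW)   # no motion for a while: lamp off
        time.sleep(0.1)                       # poll roughly ten times per second
finally:
    GPIO.cleanup()

The hold time keeps the lamp from flickering off between successive detections of the same passer-by.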


Reading Sensors

In Building Robots with LEGO Mindstorms NXT, 2007

The Passive Infrared Sensor

The passive infrared (PIR) sensor (Figure 4.19) by Techno-Stuff was originally
intended for use with the RCX. However, you can also use it with the NXT using

a converter cable. The PIR sensor detects passive infrared radiation (heat sources).
Remember the movie Predator? Remember how the alien “saw” the soldiers it
was hunting? The PIR sensor “sees” heat much in the same way. You can use this
sensor to build a robot that can seek out heat sources not evident to the human
eye.

Figure 4.19. PIR Sensor


You can configure the PIR sensor as a light sensor in either NXT-G or RobotC.
Readings will indicate detection of an object in front of it. The sensor technology
has a 20-foot range and a 180-degree swath. The infrared technology used in this
sensor is different from TV remote controls, so they will not interfere with it.
If you have this sensor, you can try setting up a small program that plays a sound
when you wave objects in front of it. First try moving something cool to the
touch, such as an unfilled glass or cup—you should not get any results, as it is not
giving off much (if any) infrared radiation. Now try moving your hand in front of
the sensor—you should hear a sound. Your results may vary depending on the
level of infrared being given off by your hand. During tests, the sensor was quite
effective at detecting people walking in front of it from a few feet away, but it did
nothing when it was right in front of a wall or door (as expected).
Can you think of an idea for using this sensor on your robot? How about making a
robot that can follow your pet around? Or how about a robot that can find a
small light source (such as a light bulb) in a maze? Note that you should not use a
candle or open flame for such a test, as that is a fire hazard. Stick to things that are
safe!
A crop-monitoring system using wireless sensor networking

Himadri Nath Saha, ... Chiranmay Sarkar, in AI, Edge and IoT-based Smart
Agriculture, 2022

3.2 Passive infrared sensor (PIR sensor)


A PIR sensor is an electronic sensor used in motion detectors such as
automatically triggered lighting devices and protection systems that measure
devices emitting infrared light in their field of view. Each body with a temperature
above zero releases heat energy, which is in the form of radiation. PIR sensors
detect infrared radiation that is reflected or released from the target instead of
measuring or sensing heat. If the sensor detects an animal, insect, or a person, the
temperature at that point in the sensor’s field of view increases to the body
temperature of the intruder from the ambient temperature and then back
accordingly. The resulting change in the received infrared radiation is translated
by the sensor into a change in the output voltage, which activates detection.
Usually, the PIR sensors detect general movement. They do not offer specifics
and information about what or who moved, but only movements of animals,
people, or some other thing. An active IR sensor can be used to get further details.
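As a rough illustration of that last step, the toy fragment below turns a sequence of entirely made-up output-voltage samples into detection events whenever a sample departs from a slowly tracked baseline by more than a threshold; real PIR modules perform this internally and typically expose only a digital output.

def detect_events(voltages, threshold=0.5):
    """Flag sample indices where the PIR output departs from the ambient baseline."""
    events = []
    baseline = voltages[0]
    for i, v in enumerate(voltages[1:], start=1):
        if abs(v - baseline) > threshold:     # sudden change in received IR
            events.append(i)
        baseline = 0.9 * baseline + 0.1 * v   # slowly track the ambient level
    return events

samples = [1.0, 1.0, 1.1, 2.2, 2.1, 1.0, 1.0]  # a warm body passes around sample 3
print(detect_events(samples))                   # -> [3, 4]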
Internal Threats and Countermeasures
Philip P. Purpura, in Effective Physical Security (Fifth Edition), 2017
Trends
Two types of sensor technologies often are applied to a location to reduce false
alarms, prevent defeat techniques, or fill unique needs. The combination of
microwave and PIR sensors is a popular example of applying dual
technologies (Fig. 8.17). Reporting can be designed so an alarm is signaled when
both sensors detect an intrusion (to reduce false alarms) or when either sensor
detects an intrusion. Sensors are also becoming “smarter” by sending sensor data
to a control panel or computer, distinguishing between humans and animals, and
activating a trouble output if the sensor lens is blocked. Supervised wireless
sensors have become a major advancement because sensors can be placed at the
best location without the expense of running a wire; these sensors are constantly
monitored for integrity of the radio frequency link between the sensor and panel,
status of the battery, and whether the sensor is functioning normally ([27]:
104; [47]: 36–48).
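The two reporting policies described above reduce to a single boolean combination; a minimal sketch follows, in which the function name and inputs are illustrative only.

def alarm(microwave_hit: bool, pir_hit: bool, mode: str = "AND") -> bool:
    """Dual-technology reporting: combine microwave and PIR detections."""
    if mode == "AND":                   # both must trip -> fewer false alarms
        return microwave_hit and pir_hit
    return microwave_hit or pir_hit     # "OR": either sensor trips the alarm

print(alarm(True, False, mode="AND"))   # False: only one technology detected
print(alarm(True, False, mode="OR"))    # True: either detection is enough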

Figure 8.17. Commercial intrusion alarm system.



Domain 3: Security Engineering (Engineering and Management of Security)

Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (Third Edition), 2016

Motion Detectors and Other Perimeter Alarms

Ultrasonic and microwave motion detectors work like “Doppler radar” used to
predict the weather. A wave of energy is sent out, and the “echo” is returned when
it bounces off an object. A motion detector that is 20 feet away from a wall will

consistently receive an echo in the time it takes for the wave to hit the wall and
bounce back to the receiver, for example. The echo will be returned more quickly
when a new object (such as a person walking in range of the sensor) reflects the
wave.
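For the ultrasonic case, the round-trip timing behind that example is easy to work out; the numbers below assume sound travelling at roughly 343 m/s in air, which is an approximate standard value rather than a figure from the text.

SPEED_OF_SOUND = 343.0     # m/s in air, approximate room-temperature value
FEET_TO_METRES = 0.3048

def echo_time(distance_feet: float) -> float:
    """Round-trip time for a pulse to reach an object and bounce back."""
    return 2 * distance_feet * FEET_TO_METRES / SPEED_OF_SOUND

print(f"{echo_time(20):.4f} s")   # wall 20 feet away: about 0.0355 s
print(f"{echo_time(5):.4f} s")    # person 5 feet away: about 0.0089 s, so the echo returns sooner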

A photoelectric motion sensor sends a beam of light across a monitored space to a


photoelectric sensor. The sensor alerts when the light beam is broken.

Ultrasonic, microwave, and infrared motion sensors are active sensors, which
means they actively send energy. A passive sensor can be thought of as a “read-
only” device. An example is a passive infrared (PIR) sensor, which detects
infrared energy created by body heat.

Exam Warning

We often think of technical controls like NIDS (Network Intrusion Detection


Systems) when we hear the term “intrusion.” Motion detectors provide
physical intrusion detection.

It is important to remember that the original intrusions were committed by human


“intruders” (who may have stormed a castle wall). If you see the term “intrusion”
on the exam, be sure to look for the context (human or network-based).

Perimeter alarms include magnetic door and window alarms. They use matched pairs of sensors, one on the wall or frame and one on the window or door. An electrical circuit flows through the sensor pair as long as the door or window is closed; the circuit breaks when either is opened. These are often armed for secured areas, as well as in general areas during off hours such as nights or weekends. Once armed, a central alarm system will alert when any door or window is opened.
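In software terms the monitoring logic is just a check, while armed, for any normally-closed loop that has opened; the sketch below uses made-up sensor names purely for illustration.

def broken_loops(armed: bool, loops: dict) -> list:
    """Return the doors/windows whose normally-closed circuit is no longer intact."""
    if not armed:
        return []
    return [name for name, closed in loops.items() if not closed]

loops = {"front door": True, "back door": False, "kitchen window": True}
print(broken_loops(armed=True, loops=loops))   # ['back door'] -> raise an alert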


Stream Sequence Mining for Human Activity Discovery

Parisa Rashidi, in Plan, Activity, and Intent Recognition, 2014

5.1 Introduction

Remarkable recent advancements in sensor technology and sensor networking are


leading us into a world of ubiquitous computing. Many researchers are now
working on smart environments that can respond to the needs of the residents in a context-aware manner by using sensor technology along with data mining and machine learning techniques [12]. For example, smart homes have proven to
be of great value for monitoring physical and cognitive health through analysis of
daily activities [54]. Besides health monitoring and assisted living, smart homes
can be quite useful for providing more comfort, security, and automation for the
residents. Some of the efforts for realizing the vision of smart homes have been
demonstrated in the real-world testbeds such as CASAS [51], MavHome [14],
DOMUS [7], iDorm [17], House_n [61], and Aware Home [1].

A smart environment typically contains many highly interactive devices, as well


as different types of sensors. The data collected from these sensors is used by
various data mining and machine learning techniques to discover residents’
activity patterns and then to recognize such patterns later [28,31]. Recognizing
activities allows the smart environment to respond in a context-aware way to the
residents’ needs [21,50,70,67].

Activity recognition is one of the most important components of many pervasive


computing systems. Here, we define an activity as a sequence of steps performed
by a human actor to achieve a certain goal. Each step should be measurable by a
sensor state or a combination of sensor states. It should be noted that besides
activities, we can recognize other types of concepts such as situations or goals.
We define a situation as a snapshot of states at a specific time point in a physical
or conceptual environment.

Unlike activities, situations try to model high-level interpretation of phenomena in


an environment. An example of an activity would be pouring hot water into a cup,
but making tea would constitute a situation, according to these definitions. Rather
than trying to figure out the purpose of a user from sensor data, it is also possible
to define goals for users (e.g., each goal is the objective realized by performing an
activity). The boundaries between those concepts are often blurred in practice.
This chapter focuses on the problem of activity recognition.

Data in a smart home are collected from different types of ambient sensors. For
example, a Passive Infrared Sensor (PIR) can be used for detecting motion, and a
contact switch sensor can be used for detecting the open/close status of doors and
cabinets. Some of the most widely used smart home sensors are summarized
in Table 5.1. Most of the ambient sensors, such as the PIR sensor or the contact
switch sensor, provide a signal in the form of on/off activation states, as depicted
in Figure 5.1.

Table 5.1. Ambient Sensors Used in Smart Environments

Sensor             Measurement             Data format
PIR (a)            Motion                  Categorical
Active infrared    Motion/identification   Categorical
RFID (b)           Object information      Categorical
Pressure           Surface pressure        Numeric
Smart tile         Floor pressure          Numeric
Magnetic switch    Open/close              Categorical
Ultrasonic         Motion                  Numeric
Camera             Activity                Image
Microphone         Activity                Sound

Note: The PIR sensor is one of the most popular sensors used by many
researchers for detecting motion. Other options, such as ultrasonic sensors, might
prove more accurate for detecting motion; however, they are typically more
expensive.

a. Passive Infrared Motion Sensor

b. Radio Frequency Identification

Figure 5.1. A stream of sensor events. Note: Each of the M1 through M5 symbols corresponds to one sensor. For example, sensor M1 is activated during the third and seventh time intervals, and M5 is activated during the fifth time interval.

Before using sensor data in machine learning and data mining tasks, data are
usually preprocessed into a higher-level format that is easier to interpret. For
example, the signal levels are converted into categorical values, such as on/off
binary values, in the case of PIR sensors. Table 5.2 shows an example format
typically used in data mining and machine learning tasks.1
Table 5.2. Example Sensor Data

Timestamp              Sensor ID    Label
7/17/2009 09:52:25     M3           Personal Hygiene
7/17/2009 09:56:55     M5           Personal Hygiene
7/17/2009 14:12:20     M4           None

As soon as data are collected from various sensors, they are passed to an activity recognition component. In supervised activity recognition methods, the algorithm is provided with the activity label of sensor event sequences. These labels are assigned by hand by a human annotator during the training phase of the system. Some researchers have also taken advantage of crowdsourcing mechanisms for labeling [72,59]. In Table 5.2, the first and second sensor events are labeled with the Personal Hygiene activity. The ultimate goal is to predict the labels of future, unseen activity patterns by generalizing from the examples provided.
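To make that concrete, the toy example below trains a classifier on events in the Table 5.2 layout, using the sensor ID and the hour of day as features. The two extra training rows and the choice of scikit-learn's decision tree are illustrative assumptions, not part of the chapter.

from datetime import datetime
from sklearn.tree import DecisionTreeClassifier

events = [
    ("7/17/2009 09:52:25", "M3", "Personal Hygiene"),
    ("7/17/2009 09:56:55", "M5", "Personal Hygiene"),
    ("7/17/2009 14:12:20", "M4", "None"),
    ("7/18/2009 09:50:10", "M3", "Personal Hygiene"),  # invented extra training row
    ("7/18/2009 14:30:00", "M4", "None"),              # invented extra training row
]

def features(timestamp: str, sensor_id: str):
    hour = datetime.strptime(timestamp, "%m/%d/%Y %H:%M:%S").hour
    return [int(sensor_id[1:]), hour]                  # e.g. "M3" -> sensor 3

X = [features(t, s) for t, s, _ in events]
y = [label for _, _, label in events]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([features("7/19/2009 09:55:00", "M5")]))  # -> ['Personal Hygiene']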

During the past decade, many supervised activity recognition algorithms have
been proposed [8,61,65,13,31,47,57,39]. In real-world settings, using supervised
methods is not practical because it requires labeled data for training. Manual
labeling of human activity data is time consuming, laborious, and error-prone.
Besides, one usually needs to deploy invasive devices in the environment during
the data-collection phase to obtain reliable annotations. Another option is to ask
the residents to report their activities. Asking residents to report their activities
puts the burden on them, and in the case of the elderly with memory problems
(e.g., dementia), it would be out of the question.

To address the annotation problem, recently a few unsupervised methods have


been proposed for mining human activity data [23,44,51,58,54]. However, none of
these mining approaches take into account the streaming nature of data or the
possibility that patterns might change over time. In a real-world smart
environment, we have to deal with a potentially infinite and unbounded flow of
data. In addition, the discovered activity patterns can change from time to time.
Mining the stream of data over time not only allows us to find new emerging
patterns in the data, but it also allows us to detect changes in the patterns, which
can be beneficial for many applications. For example, a caregiver can look at the
pattern trends and spot any suspicious changes immediately.

Based on these insights, we extend a well-known stream mining method [20] in


order to discover activity pattern sequences over time. Our activity discovery
method allows us to find discontinuous varied-order patterns in streaming,
nontransactional sensor data over time. The details of the model will be discussed
in Section 5.3.2


Exploiting LoRa, edge, and fog computing for traffic monitoring in smart
cities

Tuan Nguyen Gia, ... Tomi Westerlund, in LPWAN Technologies for IoT and
M2M Applications, 2020

16.5 System architecture

We propose an extension of traditional IoT system architectures to leverage the


advantages of LoRa while taking into account its limitations in terms of duty
cycle and data rate. By integrating LoRa with edge and fog computing, we can
benefit from the latter two’s capabilities in data analysis and compression. This
converges to a five-layer architecture that consists of (1) end-devices, (2) smart
edge gateways, (3) fog layer with LoRa access points, (4) cloud servers, and (5)
end-user terminal applications. The proposed architecture is illustrated in Fig. 16–
3.

Figure 16–3. Proposed system architecture.

Proposed four-layer hybrid sensor-edge-fog-cloud system architecture with LoRa


wireless link used as the edge-fog bridge.

16.5.1 Device layer

In the system architecture, the device layer plays the main role in terms of
integration and interaction with the real world. When sensor nodes of the device
layer do not properly function due to, for instance, a hardware failure or

disconnection with edge devices/gateways, the whole system might stop working.
Therefore, it is necessary to design and implement the device layer carefully. The
device layer can consist of sensing nodes and actuating nodes or hybrid nodes
which can both sense data and control some actuators. Depending on the
application scenarios, one or several sensor node types can be used. Sensing nodes
are responsible for collecting different data from a wide variety of sensors. In
a traffic monitoring and management system, data such as the number of cars
passing by a given street or intersection, the approximate number of vehicles in an
area, or the public transportation flow can be acquired via some sensors such
as piezoelectric sensors, inductive loop sensors, magnetic sensor, acoustic sensor,
or passive infrared sensors. Each sensor has its own advantages and limitations.
For instance, passive infrared sensors might not be suitable for highways with
many lanes. In addition to sensing nodes collecting the primary data, sensing
nodes acquiring contextual data should be applied. For example, sensing nodes
can collect contextual environmental data such as temperature, humidity,
CO2 level, and other parameters related to pollution emissions. Even though the
information might not be directly related to traffic management systems, they are
helpful for the traffic system administrators, city planners, and regulation makers.
Actuating nodes can be used to control actuators such as fans, traffic lights, or
water pump systems. For instance, they can turn on a water pumping system to
emit steam to increase humidity level when the air is too dry.

These nodes are often built from a combination of energy-efficient sensors/


actuators, a microcontroller, and wireless communication modules. In summary, sensors and a wireless communication module are connected to a microcontroller via the SPI protocol because SPI is more energy-efficient than other wired communication protocols such as I2C and UART. The microcontroller is energy-efficient, with different sleep modes. The nRF communication protocol is used in the nodes because it is energy-efficient and capable of many-to-many communication.

16.5.2 Edge layer

Edge layer consists of edge devices that are built from several components such as
a powerful microcontroller with high-capacity memory or single-board
computers. Edge devices might be either battery-powered or connected to a power
source, depending on the specific application scenario. Single-board computers
usually need to be connected to a power source as they can only operate for a
period of a few days at most if powered with a battery. In the case of battery-
powered edge devices, these must be energy-efficient. In this case, edge device
can be equipped with nRF communication modules to collect the data transmitted
from sensing nodes. Due to the capability of supporting high data rates (i.e., 250
kbps in theory and 150 kbps in practice), an edge device with a single nRF
module can support more than 120 sensing nodes as each sensing node often
gathers less than 1 kbps data. When the number of sensing nodes increases
dramatically, more nRF modules can be added into an edge device. The collected
data is then processed to eliminate noise and save bandwidth. For instance, the data can be both encrypted and compressed before being sent to fog gateways via LoRa. Lossless and lossy compression algorithms can be used; however, lossy compression is preferred as its compression ratio is high (e.g., around 100 times). Although lossy decompression cannot fully recover
the original data, the decompressed data (i.e., the number of cars passing by a
street, the number of cars waiting at a crossroad, temperature, and humidity) are
still good enough for traffic management systems.
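The "more than 120 sensing nodes" figure follows directly from the quoted rates; a quick back-of-the-envelope check is shown below, in which the 80% utilisation margin is an added assumption, not a number from the chapter.

PRACTICAL_RATE_KBPS = 150.0   # practical throughput per nRF module, as stated above
PER_NODE_KBPS = 1.0           # upper bound on traffic per sensing node, as stated above

print(PRACTICAL_RATE_KBPS / PER_NODE_KBPS)         # 150.0 nodes at the raw practical rate
print(0.8 * PRACTICAL_RATE_KBPS / PER_NODE_KBPS)   # 120.0 nodes with an assumed 80% utilisation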

The selection of microcontroller-based edge devices or gateways based on single-


board computers is made depending on the computational needs of each specific
application. For applications where only standard data compression and
encryption are necessary, a microcontroller-based edge gateway might be
sufficient. In use cases where more intensive data analysis is preferred, the power
of single-board computers might be necessary. This is the case of data processing
based on computationally expensive image processing algorithms or neural
networks for object classification or semantic analysis. However,
lightweight AI models can run in microcontrollers. For instance, a face
recognition algorithm has been developed for Espressif’s ESP32
microcontroller [52]. This is possible, of course, by trading-off frame resolution
(QVGA, 320×240 pixels) and the number of frames per second analyzed.
Nonetheless, widely used machine learning frameworks such as TensorFlow now
offer edge-focused versions (TensorFlow Lite), which are able to run in
microcontrollers or less powerful CPUs [50,53,54].

16.5.3 Fog layer

Fog layer encompasses interconnected fog-assisted gateways that are placed at


particular places and able to communicate with each other and share certain
information. These gateways draw power from a wall socket and they are built with
hardware much more powerful computationally wise than edge devices.
Therefore, fog-assisted gateways are capable of performing heavy computation
and offering advanced services such as distributed storage, data processing,
and push notifications. The detailed information of Fog services is mentioned in
some of our previous works [10–13,55]. Fog-assisted gateways are equipped with
LoRa modules for receiving data from edge devices. In addition, these gateways
are connected to cloud servers via Ethernet or high-capacity wireless links. This
includes mobile networks, such as 4G or 5G, which are useful for mobile or
remote access points.

In the proposed system architecture, LoRa access points, or gateways, are the
central element in the fog layer. These gateways are connected to back-end
servers that, for instance, prioritize which gateway is used for a given downlink
message to a specific end-device. Moreover, for some applications, services
available at fog computing can be essential, such as distributed storage. This is the
case of collaborative SLAM, where local maps are stored at the fog layer so that
they are always available even if the edge gateway to which an end-node is
connected changes [56].

16.5.4 Cloud layer

The cloud layer consists of servers and different services that support the system
and provide a bridge between the IoT devices and end-users for management or
monitoring. Time-series data of sensor state, and the results of the data analysis
and compression performed at the edge and fog layers are stored in cloud servers
where global storage is available. Part of the data analysis can also be performed
at the cloud. However, this should be only data that do not require time-critical or
safety-critical responses, as unexpected increases in communication latency or the
possibility of data loss might develop into unexpected behavior.

Cloud servers are also essential in terms of bridging the IoT platform with end-
users and administrators. Even if the edge and fog layers perform most or all of

the data analysis, servers are necessary to host cloud-based web and mobile
applications and all related data. Typical cloud server providers are Amazon
AWS, Google Cloud, or Digital Ocean. The cloud layer can also be deployed with
private servers.

16.5.5 Terminal layer

The terminal layer consists of any type of connected devices that are used by end-
users for interacting with the IoT platform. These include any devices able to open
or execute native or web applications. Cloud-based applications are the public
side of the IoT platform and allow end-users to monitor and control the system,
analyze historical data, or overview the platform’s state and effectiveness.

In the case of a traffic management system for a smart city, end-users are typically
city administrators, and terminal devices are used to monitor traffic and receive
alerts of traffic accidents, traffic congestion, or other important events.


A survey on wearable sensor modality centred human activity recognition in


health care

Yan Wang, ... Hongnian Yu, in Expert Systems with Applications, 2019

3.2.3 CHAR plus ASHAR plus WSHAR

(Diethe et al., 2017) introduce using Bayesian models to tackle the challenges of
fusion of heterogeneous sensor modalities. The multiple-sensor-modality data,
including environmental data from PIR sensors, accelerometer data, and video
data, are collected in the HealthCare in the Residential Environment SPHERE
house (Diethe, Twomey, & Flach, 2014). The authors summarize that their
proposed approach can identify the modalities for each particular activity and the
features relevant to the activity simultaneously. Also, the results show how the
approach fuses and separates the tasks of activity recognition and location
prediction. (Nakamura et al., 2010) present a collective framework which can
monitor a user's location and vitals (heart rate or blood pressure) by synchronizing
wearable and ambient sensors.

Deep and transfer learning for building occupancy detection: A review and
comparative analysis
Aya Nabil Sayed, ... Faycal Bensaali, in Engineering Applications of Artificial
Intelligence, 2022

3.1 Network of sensors

Presence detection is a critical component in smart buildings to optimize the


overall energy consumption. However, when designing occupancy detection
methods, some challenges arise. One of them is keeping in mind how to protect
the occupants’ privacy while creating such devices (Sardianos et al., 2020b;
Himeur et al., 2020a). An adequate occupancy detection system should be built to
prevent inhabitants or their actions from being identified. As a result, non-
intrusive methods for detecting occupancy are required; otherwise, existing
mechanisms should be improved (Zou et al., 2017; Wang et al., 2021a).

The authors in Abade et al. (2018) presented and evaluated a system for non-
intrusive occupancy detection employing sensors collecting data such as noise,
temperature, carbon dioxide (CO2), and light intensity. A working system was
tested, which included a device to collect and analyze environmental data, as well
as an analysis of data patterns across the obtained data using ML techniques to
estimate human occupancy in interior spaces.

Additionally, the authors in Adeogun et al. (2019), presented the results on


implementing ML methods using sensory data such as temperature, humidity,
pressure, CO2, sound, total volatile organic compounds (TVOC), and
PaPIRMotion, which is based on a PIR sensor. The data was gathered from
an IoT monitoring system to estimate indoor occupancy information. For binary
and multi-class problems, the proposed system could predict room occupancy
with an accuracy of up to 94.6% and 91.5%, respectively. Similarly, the work
done in Li et al. (2019) inferred occupancy by fusing data from a network of
sensors. In Zemouri et al. (2018), environmental data was employed to detect
occupancy. However, the authors claimed they used edge devices in their
implementation. Nevertheless, they relied on a cloud provider to supply the
packaged code to the IoT devices, which may appear contradictory.

The study in Jeon et al. (2018) tackled the problem in a novel manner. They introduced an occupancy detection system using IoT technologies and based on dust concentration change patterns. The extraction technique used by the authors is to create triangle forms, whose characteristics are then utilized to recognize presence in an indoor setting. Data is collected using dust, temperature, and humidity sensors. The system was implemented in a real-life experiment to evaluate its effectiveness. Finally, a qualitative analysis of the experimental outcomes was conducted to compare the system performance against other standard techniques.

Another study performed by the authors in Wu and Wang (2021) concluded that the use of PIR sensors for internal lighting management resulted in a high number of false negatives for stationary occupancy detection, accounting for over 50% of the overall occupancy accuracy. To resolve that issue, they designed a synchronized low-energy electronically chopped PIR (SLEEPIR) sensor that employs a liquid crystal (LC) shutter to reduce the power of the PIR sensor's long-wave infrared output. By incorporating a support vector machine (SVM) classifier, experiments with everyday routines showed an accuracy of 99.12%.
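The cited study's feature set and data are not reproduced here, so the fragment below only illustrates the shape of such a pipeline: an SVM trained on invented window-level features (mean and variance of PIR samples) standing in for SLEEPIR data. None of the numbers relate to the reported 99.12% accuracy.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Each row: [mean, variance] of a window of PIR samples; label 1 = occupied, 0 = empty.
occupied = np.column_stack([rng.normal(0.8, 0.1, 50), rng.normal(0.20, 0.05, 50)])
empty = np.column_stack([rng.normal(0.1, 0.1, 50), rng.normal(0.02, 0.01, 50)])

X = np.vstack([occupied, empty])
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.75, 0.18], [0.05, 0.01]]))   # expected: [1 0]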

In Abedi and Jazizadeh (2019), the use of doppler radar sensors (DRS) along with
infrared thermal array (ITA) sensors demonstrated a high accuracy when
using deep neural networks (DNN) algorithms. The DRS and ITA sensors
achieved occupancy detection accuracy of 98.9% and 99.96%, respectively.

Circuit connection:
Block Diagram:

