
CAMERA SURVEILLANCE USING RASPBERRY PI

Thesis Submitted in Partial Fulfillment of
the Requirements for the Degree of

Bachelor of Technology
in
Electronics and Communication Engineering
By

D. SEETHEND REDDY (Roll No. 13241A0434)

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

GOKARAJU RANGARAJU INSTITUTE OF ENGINEERING AND TECHNOLOGY
(Approved by AICTE, Autonomous under JNTUH)
(NBA Accredited)

HYDERABAD 500 090


2017

Certificate

This is to certify that this project report, entitled "Camera Surveillance Using Raspberry Pi",
by Mr. D. Seethend Reddy (Roll No. 13241A0434), submitted in partial fulfillment of the
requirements for the degree of Bachelor of Technology in Electronics and Communication
Engineering of the Jawaharlal Nehru Technological University, Hyderabad, during the academic
years 2013-17, is a bona fide record of work carried out under our guidance and supervision.
The results embodied in this report have not been submitted to any other University or
Institution for the award of any degree or diploma.

(Guide) (External Examiner) (Head of Department)


Mr. N. Srinivasa Rao                    Dr. T. Jagannadha Swamy
Assistant Professor                     Professor

ACKNOWLEDGEMENT

Numerous people are responsible for the success of my project work, and I would like to take this
opportunity to thank them.
I am extremely grateful to my internal guide, Asst. Prof. N. Srinivasa Rao, for his
encouragement, guidance, and wholehearted support throughout the course of this project.
I express my sincere thanks to Dr. T. Jagannadha Swamy, Head of the ECE Department, for his
support, encouragement, and for providing me with the facilities for the successful completion of
my project work.
I express my sincere gratitude to Dr. Jandhyala N. Murthy, Principal, Gokaraju Rangaraju
Institute of Engineering and Technology, Hyderabad.
I would like to thank all the faculty members and supporting staff of the ECE department for their
kind cooperation and valuable suggestions, which helped me in completing this project
successfully.

D. Seethend Reddy

Table of Contents
PREFACE.................................................................................................6
1.1 Background........................................................................................................................................6
1.2 Motivation.........................................................................................................................................7
1.3 Aim of the project..............................................................................................................................7
1.4 Methodology.....................................................................................................................................7
1.5 Scope of the project..........................................................................................................................8
1.6 Literature Survey..............................................................................................................................8
1.7 Organization of this report..............................................................................................................11
PROJECT OVERVIEW.............................................................................13
RASPBERRY PI......................................................................................17
3.1 Introduction.....................................................................................................................................19
3.2 Features and Benefits......................................................................................................................21
3.3 Detailed description.........................................................................................................................22
3.4 Block description.............................................................................................................................23
RASPBERRY PI CAMERA........................................................................46
4.1 Raspberry Pi Camera Module v2......................................................................................................47
4.2 Functions and Features....................................................................................................................50
4.3 Camera Serial Interface...................................................................................................................51
4.4 Software..........................................................................................................................................51
PIR SENSOR...........................................................................................53
5.1 General Description.........................................................................................................................55
5.2 Theory of Operation........................................................................................................................56
5.3 Lenses..............................................................................................................................................57
5.4 Sensitivity Adjust.............................................................................................................................58
USB Wi-Fi ADAPTER...............................................................................59
6.1 Introduction.....................................................................................................................................59
6.2 Features...........................................................................................................................................60
6.3 Functional Block Diagram................................................................................................................61
6.4 Pin Layout........................................................................................................................................61
6.5 Radio Modes....................................................................................................................................62

TELEGRAM BOT.....................................................................................65
7.1 Bots..................................................................................................................................................65
7.2 Creating a new bot..........................................................................................................................66
7.3 Our Design.......................................................................................................................................68
7.4 Bots Working...................................................................................................................................69
PROJECT IMPLEMENTATION.................................................................70
8.1 Setting up Raspberry Pi....................................................................................................................70
8.2 Design Approach..............................................................................................................................71
RESULTS...............................................................................................74
CONCLUSION AND FUTURE SCOPE........................................................77
10.1 Conclusion.....................................................................................................................................77
10.2 Future Scope..................................................................................................................................77
APPENDIX..............................................................................................78

CHAPTER 1
PREFACE

1.1 Background

Surveillance is the monitoring of the behavior, activities, or other changing information, usually
of people for the purpose of influencing, managing, directing, or protecting them. This can
include observation from a distance by means of electronic equipment (such as closed-circuit
television (CCTV) cameras), or interception of electronically transmitted information (such as
Internet traffic or phone calls); and it can include simple, no- or relatively low-technology
methods such as human intelligence agents and postal interception. The word surveillance comes
from a French phrase for "watching over", and is in contrast to more recent developments such as
sousveillance.

Surveillance is used by governments for intelligence gathering, the prevention of crime,
the protection of a process, person, group or object, or the investigation of crime. It is also used
by criminal organizations to plan and commit crimes such as robbery and kidnapping, by
businesses to gather intelligence, and by private investigators.

Surveillance is often a violation of privacy, and is opposed by various civil liberties
groups and activists. Liberal democracies have laws which restrict domestic government and
private use of surveillance, usually limiting it to circumstances where public safety is at risk.
Authoritarian governments seldom have any domestic restrictions, and international espionage is
common among all types of countries.

Surveillance cameras are video cameras used for the purpose of observing an area. They
are often connected to a recording device or IP network, and may be watched by a security guard
or law enforcement officer. Cameras and recording equipment used to be relatively expensive
and required human personnel to monitor camera footage, but analysis of footage has been made
easier by automated software that organizes digital video footage into a searchable database. The
amount of footage is also drastically reduced by motion sensors which only record when motion
is detected. With cheaper production techniques, surveillance cameras are simple and
inexpensive enough to be used in home security systems, and for everyday surveillance.

1.2 Motivation
The main problem that led me to develop this project is the need to reduce crime against
property using advanced technology. Current technology does not offer instant processing of the
data recorded by a surveillance camera: processing the recording takes time, and special tools are
needed to establish the exact time of a crime. This project measures the relevant parameters at
the right moment using modern, low-power-consumption technology. With present systems, the
recorded data is often lost because criminals use technology to erase it remotely; in this project,
a copy of the data is always available with the user.
Theft may happen at any time, so continuous monitoring of the area would seem the best
option. However, considering the amount of data involved, processing the whole recording is
not practical and can lead to system shutdown or loss of earlier data. This project lets the user
install the system and forget about it: it records very little, and since the data is sent directly to
the user's phone, it stays safe and secure with the user, with no third-party companies involved.

1.3 Aim of the project


The camera-based surveillance system built on a Raspberry Pi can be used as a 24x7
surveillance system that, once installed, does not need restarts or reboots. It works continuously,
with the Raspberry Pi providing enough disk space and processing speed to take photos at the
right time, without delay, whenever the motion sensor is triggered. The system is highly scalable:
more sensors can be added and its range extended. Once a photo is taken, it is sent to the user's
messaging app. This keeps the data secure and ensures a copy survives even if a malfunction
deletes the data or damages the system.

1.4 Methodology
In this project, I have proposed a Raspberry Pi based camera surveillance system because
of its performance, power efficiency, and ease of interfacing with sensors and other modules.
PIR sensors are used, which trigger on any change in the infrared (IR) radiation in the detection
area. The PIR sensor itself has two slots, each made of a special material that is sensitive to IR.
When the sensor is idle, both slots detect the same ambient amount of IR radiated from the room,
walls, or outdoors. When a warm body such as a human or animal passes by, it first intercepts
one half of the PIR sensor, causing a positive differential change between the two halves. When
the warm body leaves the sensing area, the reverse happens and the sensor generates a negative
differential change. These change pulses are what is detected. Using the camera interfaced to the
Raspberry Pi, the system takes a photo and sends it to the user's messaging app over Wi-Fi. The
software is developed in Python and runs on the interpreter already present in the Raspberry Pi
operating system.
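The trigger-and-capture logic described above can be sketched in Python. This is a hypothetical illustration rather than the project's actual code: the sensor output is simulated as a list of 0/1 samples so the logic can run anywhere, and `take_photo` stands in for the real camera call. On the Pi itself, RPi.GPIO's `add_event_detect` with `GPIO.RISING` would deliver the same edge events directly from the sensor pin.

```python
# Simulated sketch of PIR edge detection (illustrative, not the thesis code).

def rising_edges(samples):
    """Return the indices where the PIR output goes LOW -> HIGH,
    i.e. the positive differential pulses that mark a warm body
    entering the sensing area."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] == 0 and samples[i] == 1]

def capture_on_motion(samples, take_photo):
    """Call take_photo() once for every detected motion pulse and
    collect the results (in the real system, saved image files)."""
    return [take_photo(i) for i in rising_edges(samples)]

# Example: idle, motion, idle, motion again
readings = [0, 0, 1, 1, 0, 0, 1, 0]
print(rising_edges(readings))  # -> [2, 6]
```

Only the rising edge matters here: the falling (negative) pulse marks the body leaving the area, so no second photo is taken for it.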

1.5 Scope of the project

A few scopes and guidelines are listed to ensure the project can be accomplished. They
include:
 Designing a system that captures images of a crime, should one occur, when the PIR
sensor is triggered.
 Proper interfacing design between the PIR sensor and the Raspberry Pi.
 Working with SSH.
 Interfacing I2C and Wi-Fi.
 Using the Wi-Fi adapter in monitor mode to sniff the packets of the user's mobile
phone.

1.6 Literature Survey

Several existing studies in the same field offer inspiring ideas for designing and
developing this project. This section reviews recent research on the technology and emphasizes
the role of camera surveillance systems in many applications. Research and findings were
surveyed in order to design and develop a camera surveillance system that suits the aim and
objectives of this project.

[1] The first Home Office study, August 2002
Conducted by Brandon Welsh and David Farrington, this study surveyed 22 studies of
surveillance cameras in both the US and UK for a meta-analysis and found that, as a whole, the
cameras showed no significant impact on crime. Welsh and Farrington’s data showed a very
small impact on crime that was statistically insignificant. In the studies included in their
meta-analysis which did show a reduction in crime, other interventions, such as improved
lighting, fencing, notices about surveillance cameras, and increased security personnel,
confounded the data such that any reduction could not be attributed solely to the cameras
themselves.
The authors (along with additional co-authors) asserted this opinion in a subsequent paper:
“Overall, it might be concluded that surveillance camera reduces crime to a small degree.”

[2] British Home Office Study, 2005


A February 2005 meta-analysis of fourteen sites, commissioned by the Home Office and
conducted by Martin Gill and Angela Spriggs, reached a similar conclusion to that of Welsh and
Farrington. Only one site they studied experienced a statistically significant crime reduction, and
this result was limited by confounds; furthermore, this was the most expensive site. The study
also showed an increase in reported crime in half the sites, which the authors hypothesize was
due to increased awareness of crime. Spriggs and Gill concluded: “It has been shown that the
surveillance camera schemes produced no overall effect on all relevant crime viewed
collectively.”
Within this meta-analysis, however, the authors did find a reduction in vehicle crimes in
half the sites, though it was statistically significant in only two such sites, and a change in
parking regulations had reduced the number of cars at the site and therefore the opportunities for
vehicle crime. Additionally, violent crimes against individuals actually increased in three of four
urban sites, which was similar to crime patterns in the control areas. Gill and Spriggs cautioned:
“The belief that surveillance camera alone can counter complex social problems is unrealistic in
the extreme.” However, they also warned against concluding that surveillance cameras were not
effective based on their study. A later report by the British House of Lords noted that public
opinion is important as a policy consideration and warned against concluding that people feel
safer merely due to the presence of cameras.

[3] San Francisco – UC Berkeley, CITRIS Report, 2008
In the United States, the first major scientific surveillance camera study was conducted based on
San Francisco’s Community Safety Camera (CSC) program. Jennifer King of University of
California (UC) Berkeley Law and her colleagues conducted a six-month study for San
Francisco through the University’s Center for Information Technology Research in the Interest of
Society (CITRIS), releasing a final report on December 17, 2008. The study found very little
impact on violent crime; homicides did decline near the cameras but increased further away (a
displacement effect rather than a reduction). The study did find a statistically significant decrease
in property crime near the cameras. Finally, there was little evidence of an impact on other types
of crime in the vicinity of the cameras.

The report concluded that at least for property crimes, “the system is clearly having an
effect” but noted that the CSC program’s “lack of deterrent effects on violent crime and its
limited usefulness with respect to investigations” limited the program’s benefit. The camera
program was incredibly expensive, especially given its failure to reduce violent crime. Finally,
because the study was short and featured only one city, its results could be a fluke.

[4] Los Angeles – USC Study, 2008


Students at the University of Southern California (USC) School of Policy released a report for
the California Research Bureau in May 2008 on the effects of video surveillance cameras in two
areas of Los Angeles. The study showed no impact on crime based on the Los Angeles Police
Department’s crime statistics before and after the camera installation. While violent crime
declined in both areas this trend merely followed the general pattern also demonstrated in the
control areas. The study found no statistically significant evidence of displacement.

[5] California ACLU Affiliates, 2007


On the opposite coast but in a similar vein, in August 2007 the ACLU affiliates in California
published a report on the growing use of surveillance cameras in that state and these systems’
effect on civil liberties. The report noted the threats the cameras posed to the rights of freedom of
speech, association, and movement, and to privacy. Furthermore, the report stressed that studies
have shown cameras to be ineffective at deterring or solving crimes, and that they did not reduce
fear of crime.

[6] Chicago – ACLU of Illinois, 2011
In February 2011, the ACLU of Illinois published a large-scale report on Chicago’s network of
video surveillance cameras. The ACLU’s Schwartz estimated the city has access to somewhere
between 10,000 and 20,000 publicly and privately-owned cameras though the exact number is
unknown. Like the NYCLU’s report and ACLU’s California report, this report did not
scientifically examine the effectiveness of the surveillance camera system as a tool to fight crime
but rather analyzed the system with respect to civil liberties. The study noted the following risks
inherent in such a surveillance scheme as demonstrated by various surveillance camera systems:
the absence of regulation of many of the cameras’ features, privacy and First Amendment
problems, improper release of video by employees, and racial disparities in targeting. The study
also looked at the exorbitant costs of the cameras, noting that such systems are not particularly
effective for solving crimes, and questioned their effectiveness in deterring crime.

1.7 Organization of this report


This project consists of many modules, so to give a complete description it is divided
into sections for better understanding. Each chapter is summarized below.

Chapter 1: INTRODUCTION gives a complete idea of the project. It contains the
background, aim of the project, methodology, scope of the project, literature survey, and a
diagrammatic representation of the project.

Chapter 2: PROJECT OVERVIEW shows the control flow of the project with a flow
chart and block diagram. It gives the basic idea of the project.

Chapter 3: RASPBERRY PI discusses the Raspberry Pi 3 Model B, an ARM-based,
credit-card-sized single-board computer (SBC) created by the Raspberry Pi Foundation. The
Raspberry Pi runs Raspbian, a Debian-based GNU/Linux operating system, and ports of many
other OSes exist for this SBC. This chapter also discusses the processing of the photo taken
when the motion detector is triggered.

Chapter 4: RASPBERRY PI CAMERA describes the camera module, which can capture
high-definition video as well as still photographs. It explains how photos are taken, the
commands used to take them, and how the camera responds to motion detected by the PIR
sensor.

Chapter 5: PIR SENSOR covers the sensor, which is almost always used to detect
whether a human has moved in or out of its range. The chapter explains the features and working
of the sensor, and how it communicates with the Raspberry Pi.

Chapter 6: USB Wi-Fi ADAPTER discusses the different Wi-Fi working modes and
their transmission methods. It also discusses how packets are exchanged between the access
point and a device using MAC addresses.

Chapter 7: TELEGRAM BOT describes how we made a Telegram messaging bot for our
project requirements, which works fully autonomously.

Chapter 8: PROJECT IMPLEMENTATION explains the schematic of the project and
gives a clear idea of the module connections.

Chapter 9: RESULTS describes the functioning of the PIR sensor, the camera taking
photos, and the photos being sent to the user's Telegram bot.

Chapter 10: CONCLUSION AND FUTURE SCOPE concludes the work presented in
this thesis by exploring the strengths and weaknesses of the work, reviewing the extent to which
the research objectives have been met, and discussing possible directions for future work. The
Appendix contains the source code of the project.

CHAPTER 2

PROJECT OVERVIEW

This chapter shows what components are present in the project and the control flow. It
gives an overall idea of what has been done in the project.

Figure 2.1. Block Diagram of Project

This block diagram consists of:

Hardware:
 Raspberry Pi 3 Model B
 Raspberry Pi Camera v2
 PIR Sensor
 Wi-Fi Adapter
 Wi-Fi Router
 Android Smart Phone

Software:

 Python
 UNIX Commands

The above figure describes the entire project of "Camera Surveillance Using Raspberry
Pi". The block diagram gives an overview of the project and helps in understanding it. Each
block has its own specifications. The diagram shows how the PIR sensor communicates with the
Raspberry Pi and how they are connected to the Wi-Fi router. The Raspberry Pi controls the
sensor, and the data collected from the sensor is used to take a photo when triggered.

This block diagram also explains how data flows from the Raspberry Pi to the mobile
phone. The execution of the project can be explained in five parts. First, the motion sensor is
initialized; it is connected to the Raspberry Pi through the GPIO pins and monitors the area,
triggering on any change it detects. Second, the RPi receives the trigger signal from the sensor,
initializes the camera, and takes a photo, which is saved in the RPi's memory. Third, the Wi-Fi
adapter connects to the access point and searches for the user's device to check whether it is in
range. Fourth, if the user's device is not in range, the photo taken is sent to the user's mobile
phone. Fifth, the user receives the photo through a Telegram bot and can also send commands,
defined beforehand in the code, to receive the corresponding information from the RPi.

The overview also covers the smartphone collecting real-time sensor data over an
Internet connection. The figure below shows the flow chart of Camera Surveillance Using
Raspberry Pi.

Figure 2.2. Flow chart of the Project

The Camera Surveillance System consists of a Raspberry Pi 3 Model B, a Wi-Fi adapter, a
Raspberry Pi Camera v2, and an Android smartphone.
 The sensor monitors the area for any change and, on detecting one, triggers and
signals the RPi.
 The RPi processes the signal from the motion sensor, takes a photo using the RPi
camera, and stores it in memory.
 The Wi-Fi adapter continuously monitors for the user's mobile phone; if it detects the
phone in range, the system disarms.
 If the adapter does not detect the user's phone, the motion was caused by someone
else, and the system arms.
 As soon as the system arms, the photo taken is sent by the RPi to the mobile phone,
which receives it through the Telegram bot.
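The arm/disarm decision in the steps above can be sketched as a small Python function. This is an illustrative sketch, not the project's actual code: the MAC address and the function and action names are hypothetical placeholders, and `macs_in_range` stands in for the addresses the Wi-Fi adapter observes in monitor mode.

```python
# Hypothetical sketch of the arm/disarm logic described in the flow chart.

OWNER_MAC = "aa:bb:cc:dd:ee:ff"  # placeholder for the user's phone MAC

def decide(motion_detected, macs_in_range):
    """Return the action the system should take for one sensor event."""
    if not motion_detected:
        return "idle"
    # MAC comparison is case-insensitive: adapters report either case.
    if OWNER_MAC in (m.lower() for m in macs_in_range):
        return "disarm"       # owner is nearby: motion is not a threat
    return "send_photo"       # unknown person: arm and alert the user

print(decide(True, ["11:22:33:44:55:66"]))   # -> send_photo
print(decide(True, ["AA:BB:CC:DD:EE:FF"]))   # -> disarm
print(decide(False, []))                     # -> idle
```

Keeping this decision in one pure function makes the control flow easy to test without any sensor or Wi-Fi hardware attached.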
In this chapter we have seen the block diagram of the project, the control flow of the
process in the controller, and a brief description of the purpose of each element in the project.
The coming chapters give a complete description of each module.

CHAPTER 3

RASPBERRY PI
The Raspberry Pi is a series of small single-board computers developed in the United
Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in
schools and in developing countries. The original model became far more popular than
anticipated, selling outside of its target market for uses such as robotics. Peripherals (including
keyboards, mice and cases) are not included with the Raspberry Pi.

Several generations of Raspberry Pis have been released. The first generation (Raspberry
Pi 1 Model B) was released in February 2012. It was followed by a simpler and less expensive
Model A. In 2014, the Foundation released a board with an improved design, the Raspberry Pi 1
Model B+. These boards are approximately credit-card sized and represent the standard mainline
form factor. Improved A+ and B+ models were released a year later. A "compute module" was
released in April 2014 for embedded applications, and a Raspberry Pi Zero with a smaller size
and reduced input/output (I/O) and general-purpose input/output (GPIO) capabilities was released
in November 2015. The Raspberry Pi 2, which added more RAM, was released in February 2015.
The Raspberry Pi 3 Model B, released in February 2016, is bundled with on-board Wi-Fi,
Bluetooth, and USB boot capabilities. As of January 2017, the Raspberry Pi 3 Model B is the
newest mainline Raspberry Pi. Raspberry Pi boards are sold at a range of prices. On 28 February
2017, the Raspberry Pi Zero W was launched; it is identical to the Raspberry Pi Zero but has the
Wi-Fi and Bluetooth functionality of the Raspberry Pi 3.

All models feature a Broadcom system on a chip (SoC), which includes an ARM-
compatible central processing unit (CPU) and an on-chip graphics processing unit (GPU, a
VideoCore IV). CPU speed ranges from 700 MHz up to 1.2 GHz for the Pi 3, and on-board
memory ranges from 256 MB to 1 GB of RAM. Secure Digital (SD) cards, in either the SDHC
or MicroSDHC size, are used to store the operating system and program memory. Most boards
have between one and four USB ports, HDMI and composite video output, and a 3.5 mm phone
jack for audio. Lower-level output is provided by a number of GPIO pins which support common
protocols like I²C. The B models have an 8P8C Ethernet port, and the Pi 3 and Pi Zero W have
on-board 802.11n Wi-Fi and Bluetooth.
The Foundation provides Raspbian, a Debian-based Linux distribution, for download, as
well as third-party Ubuntu, Windows 10 IoT Core, RISC OS, and specialized media-center
distributions. It promotes Python and Scratch as the main programming languages, with support
for many other languages.

Hardware
The Raspberry Pi hardware has evolved through several versions that feature variations in
memory capacity and peripheral-device support.

This block diagram depicts Models A, B, A+, and B+. Models A, A+, and the Pi Zero lack
the Ethernet and USB hub components. The Ethernet adapter is internally connected to an
additional USB port. In Models A, A+, and the Pi Zero, the USB port is connected directly to the
system on a chip (SoC). On the Pi 1 Model B+ and later models, the USB/Ethernet chip contains
a five-port USB hub, of which four ports are available, while the Pi 1 Model B only provides
two. On the Pi Zero, the USB port is also connected directly to the SoC, but it uses a micro USB
(OTG) port.

Processor
The Broadcom BCM2835 SoC used in the first generation Raspberry Pi is somewhat
equivalent to the chip used in first generation smartphones (its CPU is an older ARMv6
architecture), which includes a 700 MHz ARM1176JZF-S processor, VideoCore IV graphics
processing unit (GPU), and RAM. It has a level 1 (L1) cache of 16 KB and a level 2 (L2) cache
of 128 KB. The level 2 cache is used primarily by the GPU. The SoC is stacked underneath the
RAM chip, so only its edge is visible.

The Raspberry Pi 2 uses a Broadcom BCM2836 SoC with a 900 MHz 32-bit quad-core
ARM Cortex-A7 processor (as do many current smartphones), with 256 KB shared L2 cache.

The Raspberry Pi 3 uses a Broadcom BCM2837 SoC with a 1.2 GHz 64-bit quad-core
ARM Cortex-A53 processor, with 512 KB shared L2 cache.

Performance
The Raspberry Pi 3, with a quad-core Cortex-A53 processor, is described as 10 times the
performance of a Raspberry Pi 1. This was suggested to be highly dependent upon task threading
and instruction set use. Benchmarks showed the Raspberry Pi 3 to be approximately 80% faster
than the Raspberry Pi 2 in parallelized tasks.

The Raspberry Pi 2 includes a quad-core Cortex-A7 CPU running at 900 MHz and 1 GB of
RAM. It is described as 4–6 times more powerful than its predecessor. The GPU is identical to
the original. In parallelized benchmarks, the Raspberry Pi 2 could be up to 14 times faster than a
Raspberry Pi 1 Model B+.

While operating at 700 MHz by default, the first generation Raspberry Pi provided a real-
world performance roughly equivalent to 0.041 GFLOPS. On the CPU level the performance is
similar to a 300 MHz Pentium II of 1997–99. The GPU provides 1 Gpixel/s or 1.5 Gtexel/s of
graphics processing or 24 GFLOPS of general purpose computing performance. The graphical
capabilities of the Raspberry Pi are roughly equivalent to the performance of the Xbox of 2001.

3.1 Introduction
In this project, the Raspberry Pi 3 Model B is used as the main computing unit for
processing the photo data and sending it to the user's mobile. The Raspberry Pi 3 Model B is the
third-generation Raspberry Pi. This powerful credit-card-sized single-board computer can be
used for many applications and supersedes the original Raspberry Pi Model B+ and Raspberry
Pi 2 Model B. Whilst maintaining the popular board format, the Raspberry Pi 3 Model B brings
a more powerful processor, 10x faster than the first-generation Raspberry Pi. Additionally, it
adds wireless LAN and Bluetooth connectivity, making it an ideal solution for powerful
connected designs. This model comes with on-board Wi-Fi, Bluetooth, an Ethernet port, and
other ports for peripherals. The Raspberry Pi 3 Model B features a quad-core 64-bit ARM
Cortex-A53 clocked at 1.2 GHz, 1 GB of LPDDR2-900 SDRAM, and graphics capabilities
provided by the VideoCore IV GPU.

It has four USB ports for connecting peripherals. It also has CSI and DSI
connectors for interfacing a camera module and an LCD display. The Raspberry Pi 3 is powered
by a +5.1 V micro-USB supply. Typically, the Model B draws between 700 mA and 1000 mA
depending on which peripherals are connected; the Model A can draw as little as 500 mA with no
peripherals attached. The maximum current the Raspberry Pi can supply over USB is 1 A. If you
need to connect a USB device that will take the power requirements above 1 A, you must connect
it to an externally powered USB hub. The power requirements of the Raspberry Pi increase as
you make use of its various interfaces. The GPIO pins can safely draw 50 mA distributed across
all the pins; an individual GPIO pin can only safely draw 16 mA. The HDMI port uses 50 mA, the
camera module requires 250 mA, and keyboards and mice can take from as little as 100 mA to
over 1000 mA. Check the power rating of the devices you plan to connect to the Pi and purchase
a power supply accordingly.
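As a rough worked example, the figures quoted above can be summed into a simple current budget. The base-board draw and the peripheral list below are illustrative assumptions, not measured values:

```python
# Rough current budget for a Raspberry Pi 3 setup, using the nominal
# figures quoted above. Real draw varies by device and workload.
BASE_BOARD_MA = 400        # assumed ballpark idle draw of the board itself
PERIPHERALS_MA = {
    "hdmi": 50,            # HDMI port
    "camera_module": 250,  # Raspberry Pi camera module
    "keyboard": 100,       # can range from ~100 mA to over 1000 mA
    "gpio_total": 50,      # safe aggregate across all GPIO pins
}

def total_current_ma(base=BASE_BOARD_MA, draws=PERIPHERALS_MA):
    """Sum the nominal current draw of the board plus its peripherals."""
    return base + sum(draws.values())

if __name__ == "__main__":
    total = total_current_ma()
    print(f"Estimated draw: {total} mA")
    # The recommended 2.5 A supply leaves headroom; heavier loads
    # belong on an externally powered hub.
    print("Supply OK" if total <= 2500 else "Use a powered USB hub")
```

With these assumed figures the estimate comes to well under the 2.5 A a recommended supply provides.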

Figure 3.1. Raspberry Pi 3 Model B

3.2 Features and Benefits
Specifications:
Processor: Broadcom BCM2837 chipset; 1.2 GHz quad-core ARM Cortex-A53;
802.11 b/g/n wireless LAN and Bluetooth 4.1 (Bluetooth Classic and LE)
GPU: Dual-core VideoCore IV® Multimedia Co-Processor; provides OpenGL ES 2.0,
hardware-accelerated OpenVG, and 1080p30 H.264 high-profile decode; capable of
1 Gpixel/s, 1.5 Gtexel/s or 24 GFLOPS with texture filtering and DMA infrastructure
Memory: 1 GB LPDDR2
Operating System: Boots from a micro SD card, running a version of the Linux
operating system or Windows 10 IoT
Dimensions: 85 × 56 × 17 mm
Power: Micro USB socket, 5 V, 2.5 A

Connectors:
Ethernet: 10/100 BaseT Ethernet socket
Video Output: HDMI (rev 1.3 & 1.4), Composite RCA (PAL and NTSC)
Audio Output: 3.5 mm jack, HDMI
USB: 4 × USB 2.0 connectors
GPIO Connector: 40-pin 2.54 mm (100 mil) expansion header (2×20 strip),
providing 27 GPIO pins as well as +3.3 V, +5 V and GND supply lines
Camera Connector: 15-pin MIPI Camera Serial Interface (CSI-2)
Display Connector: Display Serial Interface (DSI), 15-way flat flex cable
connector with two data lanes and a clock lane
Memory Card Slot: Push/pull micro SDIO

Key Benefits
• Low cost • Consistent board format
• 10x faster processing • Added connectivity

Key Applications
• Low cost PC/tablet/laptop • IoT applications
• Media center • Robotics
• Industrial/Home automation • Server/cloud server
• Print server • Security monitoring
• Web camera • Gaming
• Wireless access point • Environmental sensing/monitoring

3.3 Detailed description


Block diagram

Figure 3.2. Block Diagram of Raspberry Pi 3 Model B

3.4 Block description

3.4.1 Processor
The processor used in the Raspberry Pi 3 Model B is the ARM Cortex-A53. The
Cortex-A53 processor offers a balance between performance and power efficiency.
Cortex-A53 is capable of seamlessly supporting 32-bit and 64-bit instruction sets. It makes use
of a highly efficient 8-stage in-order pipeline enhanced with advanced fetch and data access
techniques for performance. It fits in a power and area footprint suitable for entry-level
smartphones. It also can deliver high aggregate performance in scalable enterprise systems via
high core density, which accounts for its popularity in base station and networking designs.
The Cortex-A53 delivers significantly higher performance than the highly successful Cortex-A7,
in a similar low-cost footprint. Like the Cortex-A7, it is capable of deployment as a standalone
applications processor or in combination with a high-end Cortex-A CPU using ARM
big.LITTLE™ technology. It is less than half the size of the high-end Cortex-A processors, and
between 2 and 3 times more efficient, while still delivering performance equivalent or higher
than the Cortex-A9 processor that powered high-end smartphones just a few years ago. This
blend of efficiency and performance enables affordable smartphone and consumer devices with
substantial compute power in the lowest power and area footprint.
The Cortex-A53 supports the full ARMv8-A architecture. It not only runs 64-bit
applications but also seamlessly and efficiently runs legacy ARM 32-bit applications.
The Cortex-A53 processor is a mid-range, low-power processor that implements
the ARMv8-A architecture. The Cortex-A53 processor has one to four cores, each with an L1
memory system and a single shared L2 cache.

Figure 3.3. ARM Cortex A53 Processor

Features
The Cortex-A53 processor includes the following features:
 Full implementation of the ARMv8-A architecture instruction set with the
architecture options listed in ARM architecture.
 In-order pipeline with symmetric dual-issue of most instructions.
 Harvard Level 1 (L1) memory system with a Memory Management Unit (MMU).
 Level 2 (L2) memory system providing cluster memory coherency, optionally
including an L2 cache.

Interfaces
The Cortex-A53 processor has the following external interfaces:
 Memory interface that implements either an ACE or CHI interface.
 Optional Accelerator Coherency Port (ACP) that implements an AXI slave interface.
 Debug interface that implements an APB slave interface.
 Trace interface that implements an ATB interface.
 CTI.
 Design for Test (DFT).
 Memory Built-In Self-Test (MBIST).
 Q-channel, for power management.

Power management
The Cortex-A53 processor provides mechanisms and support to control both dynamic
and static power dissipation. The individual cores in the Cortex-A53 processor support four main
levels of power management.
 Power domains.
 Power modes.
 Event communication using WFE or SEV.
 Communication to the Power Management Controller.

Memory Management Unit


The Cortex-A53 processor is an ARMv8 compliant processor that supports execution in
both the AArch64 and AArch32 states. In AArch32 state, the ARMv8 address translation system
resembles the ARMv7 address translation system with LPAE and Virtualization Extensions. In
AArch64 state, the ARMv8 address translation system resembles an extension to the Long
Descriptor Format address translation system to support the expanded virtual and physical
address spaces. For more information regarding the address translation formats, see the ARM®
Architecture Reference Manual ARMv8, for ARMv8-A architecture profile. Key differences
between the AArch64 and AArch32 address translation systems are that the AArch64 state
provides:
 A translation granule of 4KB or 64KB. In AArch32, the translation granule is limited
to be 4KB.
 A 16-bit ASID. In AArch32, the ASID is limited to an 8-bit value.

The maximum supported physical address size is 40 bits. You can enable or disable each
stage of the address translation independently.
The MMU controls table walk hardware that accesses translation tables in main memory.
The MMU translates virtual addresses to physical addresses. The MMU provides fine-grained
memory system control through a set of virtual-to-physical address mappings and memory
attributes held in page tables. These are loaded into the Translation Lookaside Buffer (TLB)
when a location is accessed.
The MMU in each core features the following:
 10-entry fully-associative instruction micro TLB.
 10-entry fully-associative data micro TLB.
 4-way set-associative 512-entry unified main TLB.
 4-way set-associative 64-entry walk cache.
 4-way set-associative 64-entry IPA cache.
 The TLB entries include global and application specific identifiers to prevent
context switch TLB flushes.
 Virtual Machine Identifier (VMID) to prevent TLB flushes on virtual machine
switches by the hypervisor.

Performance Monitor Unit


The Cortex-A53 processor includes performance monitors that implement the ARM
PMUv3 architecture. These enable you to gather various statistics on the operation of the
processor and its memory system during runtime. These provide useful information about the
behavior of the processor that you can use when debugging or profiling code.
The PMU provides six counters. Each counter can count any of the events available in the
processor. The absolute counts recorded might vary because of pipeline effects. This has
negligible effect except in cases where the counters are enabled for a very short time.

Key Benefits
 High efficiency CPU for wide range of applications in mobile, DTV, automotive,
networking, and more.
 ARMv8-A architecture at low cost for standalone entry level designs.
 Versatile enough to pair with any ARMv8 core in a big.LITTLE pairing,
including Cortex-A57, Cortex-A72, or even other Cortex-A53 or Cortex-A35
CPU clusters.
 Mature product with high volume shipment.

Cortex-A53 processor functions


Figure 3.4 shows a top-level functional diagram of the Cortex-A53 processor.

Figure 3.4. Cortex A53 Processor Functions

3.4.2 Chipset
The chipset used in Raspberry Pi 3 Model B is BCM2837. BCM2837 contains the
following peripherals which may safely be accessed by the ARM:

 Interrupt Controller
 GPIO
 USB
 DMA controller
 I2C Master
 I2C / SPI Slave
 PWM
 UART0,UART1

Interrupt Controller

There are numerous interrupts which need to be routed. The interrupt routing logic has
the following input signals:

 Core related interrupts


 Core un-related interrupts

Figure 3.5. Interrupt Routing

Core related interrupts

Core related interrupts are interrupts which are bound to one specific core. Most of these
are interrupts generated by that core itself like the four timer interrupt signals. Additionally each
core has four mailboxes assigned to it.

These are the core related interrupts:

 Four timer interrupts (64-bit timer)


 One performance monitor interrupt
 Four Mailbox interrupts

For each of these interrupts you can choose to send it to either the IRQ pin or to the
FIQ pin of one core, or not pass it on at all. The following table shows the truth table:

The mailbox interrupts do not have a separate interrupt enable/disable bit. The routing
bits have to be used for that. Unfortunately this enables or disables all 32 bits of a mailbox. After
a reset all bits are zero so none of the interrupts is enabled.

Core un-related interrupts

Core-unrelated interrupts are interrupts which can be sent to any of the four cores and to
either the interrupt or the fast interrupt of that core. These are the core-unrelated interrupts:

 GPU IRQ (As generated by the ARM Control logic).


 GPU FIQ (As generated by the ARM Control logic)
 Local timer interrupt
 AXI error
 (unused: Fifteen local peripheral interrupts)

For each of these interrupts you can choose to send them to either the IRQ pin or to the
FIQ pin of any of the four cores. The following table shows the truth table:

Note that these interrupts do not have a 'disable' code. They are expected to have an
enable/disable bit at the source where they are generated. After a reset all bits are zero, thus all
interrupts are sent to the IRQ of core 0.

3.4.3 Universal Serial Bus

USB, short for Universal Serial Bus, is an industry standard initially developed in the
mid-1990s that defines the cables, connectors and communications protocols used in a bus for
connection, communication, and power supply between computers and electronic devices. It is
currently developed by the USB Implementers Forum (USB IF).

USB was designed to standardize the connection of computer peripherals (including
keyboards, pointing devices, digital cameras, printers, portable media players, disk drives and
network adapters) to personal computers, both to communicate and to supply electric power. It
has become commonplace on other devices, such as smartphones, PDAs and video game
consoles. USB has effectively replaced a variety of earlier interfaces, such as serial ports and
parallel ports, as well as separate power chargers for portable devices.

USB 2.0 was released in April 2000, adding a higher maximum signaling rate of 480
Mbit/s (High Speed or High Bandwidth), in addition to the USB 1.x Full Speed signaling rate of
12 Mbit/s. Due to bus access constraints, the effective throughput of the High Speed signaling
rate is limited to 280 Mbit/s or 35 MB/s.

The Raspberry Pi 3 Model B is equipped with four USB 2.0 ports. These are connected to
the LAN9514 combo hub/Ethernet chip, which is itself a USB device connected to the single
upstream USB port on BCM2837.

The USB ports enable the attachment of peripherals such as keyboards, mice, webcams
that provide the Pi with additional functionality.

There are some differences between the USB hardware on the Raspberry Pi and the USB
hardware on desktop computers or laptop/tablet devices.

The USB host port inside the Pi is an On-The-Go (OTG) host as the application processor
powering the Pi, BCM2837, was originally intended to be used in the mobile market: i.e. as the
single USB port on a phone for connection to a PC, or to a single device. In essence, the OTG
hardware is simpler than the equivalent hardware on a PC.

OTG in general supports communication to all types of USB device, but to provide an
adequate level of functionality for most of the USB devices that one might plug into a Pi, the
system software has to do more work.

Figure 3.6 USB in Raspberry Pi 3 Model B

3.4.4 GPIO

Here we discuss technical features of the GPIO pins available on BCM2837 in general.
GPIO pins can be configured as either general-purpose input, general-purpose output or as one of
up to 6 special alternate settings, the functions of which are pin-dependent.

There are 3 GPIO banks on BCM2837. Each of the 3 banks has its own VDD input pin.
On Raspberry Pi, all GPIO banks are supplied from 3.3V. Connection of a GPIO to a voltage
higher than 3.3V will likely destroy the GPIO block within the SoC. A selection of pins from
Bank 0 is available on the P1 header on Raspberry Pi.

3.4.5 INTERRUPTS

Each GPIO pin, when configured as a general-purpose input, can be configured as an


interrupt source to the ARM. Several interrupt generation sources are configurable:

 Level-sensitive (high/low)
 Rising/falling edge

 Asynchronous rising/falling edge

Level interrupts maintain the interrupt status until the level has been cleared by system
software (e.g. by servicing the attached peripheral generating the interrupt).

The normal rising/falling edge detection has a small amount of synchronization built into
the detection. At the system clock frequency, the pin is sampled with the criteria for generation
of an interrupt being a stable transition within a 3-cycle window, i.e. a record of "1 0 0" or "0 1
1". Asynchronous detection bypasses this synchronization to enable the detection of very narrow
events.
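The 3-cycle window rule above can be illustrated with a small, hardware-free simulation. This is a sketch of the sampling criterion described in the text, not actual driver code:

```python
def classify_window(samples):
    """Apply the 3-cycle window rule to one triple of pin samples:
    '1 0 0' is a stable falling edge, '0 1 1' a stable rising edge;
    any other pattern generates no interrupt."""
    triple = tuple(samples)
    if triple == (1, 0, 0):
        return "falling"
    if triple == (0, 1, 1):
        return "rising"
    return None

def detect_edges(trace):
    """Slide the window across a sampled pin trace, reporting
    (sample index, edge type) pairs as the synchronizer would."""
    events = []
    for i in range(len(trace) - 2):
        edge = classify_window(trace[i:i + 3])
        if edge is not None:
            events.append((i, edge))
    return events
```

For example, the trace `[1, 1, 0, 0, 1, 1]` yields a falling edge followed by a rising edge, each reported only once the new level has held for two sample clocks.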

These pins are a physical interface between the Pi and the outside world. At the simplest
level, you can think of them as switches that you can turn on or off (input) or that the Pi can turn
on or off (output). Of the 40 pins, 26 are GPIO pins and the others are power or ground pins plus
two EEPROM pins.

3.4.6 GPIO NUMBERING

These are the GPIO pins as the computer sees them. The numbers don't make much sense
to humans; they jump about all over the place, so there is no easy way to remember them. You
will need a printed reference or a reference board that fits over the pins.

Figure 3.7 GPIO in Raspberry Pi 3 Model B

3.4.7 PHYSICAL NUMBERING

The other way to refer to the pins is by simply counting across and down from pin 1 at
the top left (nearest to the SD card). This is 'physical numbering' and it looks like this:

Figure 3.8 GPIO Numbering and their Functions
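Assuming the common RPi.GPIO Python library, the two numbering schemes are selected with GPIO.setmode. The pin mapping and the LED wiring in this sketch are illustrative assumptions:

```python
# A few BCM-to-physical correspondences on the 40-pin header, taken
# from the standard published pinout (shown here for illustration):
BCM_TO_PHYSICAL = {2: 3, 3: 5, 4: 7, 17: 11, 18: 12, 27: 13}

def blink(pin=18, scheme="BCM", count=3, interval=0.5):
    """Blink an LED on the given pin (hypothetical wiring) using either
    numbering scheme. RPi.GPIO is only available on a Raspberry Pi,
    so it is imported inside the function."""
    import time
    import RPi.GPIO as GPIO
    GPIO.setmode(GPIO.BCM if scheme == "BCM" else GPIO.BOARD)
    GPIO.setup(pin, GPIO.OUT)
    try:
        for _ in range(count):
            GPIO.output(pin, GPIO.HIGH)
            time.sleep(interval)
            GPIO.output(pin, GPIO.LOW)
            time.sleep(interval)
    finally:
        GPIO.cleanup()

# blink(18, "BCM") and blink(12, "BOARD") drive the same physical pin.
```

The final comment shows the practical consequence of the two schemes: BCM 18 and physical pin 12 name the same piece of hardware.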

3.4.8 Pulse Code Modulation / I2S

Pulse-code modulation (PCM) is a method used to digitally represent sampled analog


signals. It is the standard form of digital audio in computers, compact discs, digital telephony and
other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled
regularly at uniform intervals, and each sample is quantized to the nearest value within a range of
digital steps.
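The sample-and-quantize step can be sketched in a few lines of Python; the bit depth, sample rate and tone frequency below are arbitrary choices for illustration:

```python
import math

def pcm_quantize(samples, bits=8):
    """Quantize samples in [-1.0, 1.0] to signed integers of the given
    bit depth, as a PCM encoder does (e.g. 127 levels for 8-bit signed)."""
    levels = 2 ** (bits - 1) - 1
    return [round(max(-1.0, min(1.0, s)) * levels) for s in samples]

# Sample one period of a 1 kHz sine at 8 kHz, then quantize it.
rate, freq = 8000, 1000
analog = [math.sin(2 * math.pi * freq * n / rate) for n in range(rate // freq)]
pcm = pcm_quantize(analog, bits=8)
```

Each of the eight samples per period is rounded to the nearest of the available digital steps, which is exactly the regular-interval quantization described above.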

I²S (Inter-IC Sound), pronounced eye-squared-ess, is an electrical serial bus interface


standard used for connecting digital audio devices together. It is used to communicate PCM
audio data between integrated circuits in an electronic device. The I²S bus separates clock and
serial data signals, resulting in a lower jitter than is typical of communications systems that
recover the clock from the data stream.

An ADC measures analog audio amplitudes many times each second, storing those as
numbers in a file. The most common format used for this in computers is pulse code modulation
(PCM). A digital-to-analog converter (DAC), such as the PWM emulation of a 1-bit DAC on the
Raspberry Pi board, samples a PCM audio file and reconstructs the analog waveform according
to the numeric data in the PCM file.

I2S—which is short for Inter-IC Sound, Interchip Sound or IIS—is a type of serial bus
interface standard that connects digital audio devices to one another. As an example, I2S

connects the Raspberry Pi to an external DAC. We could use one of the USB ports for outputting
PCM audio to a DAC, but that can introduce distortion. The best solution is to use the general
purpose input output (GPIO) pins on the Raspberry Pi board. Also, it’s best to use the shortest
path possible. Consequently, external DAC boards for the Raspberry Pi plug directly into the
GPIO pins.

3.4.9 DMA Controller

In computing, channel I/O is a high-performance input/output (I/O) architecture that is


implemented in various forms on a number of computer architectures, especially on mainframe
computers. In the past, channels were generally implemented with custom processors, variously
named channel, peripheral processor, I/O processor, I/O controller, or DMA controller.

Many I/O tasks can be complex and require logic to be applied to the data to convert
formats and other similar duties. In these situations, the simplest solution is to ask the CPU to
handle the logic, but because I/O devices are relatively slow, a CPU could waste time (in
computer perspective) waiting for the data from the device. This situation is called 'I/O bound'.

Channel architecture avoids this problem by using a separate, independent, low-cost


processor. Channel processors are simple, but self-contained, with minimal logic and sufficient
on-board scratchpad memory (working storage) to handle I/O tasks. They are typically not
powerful or flexible enough to be used as a computer on their own and can be construed as a
form of coprocessor.

A CPU sends relatively small channel programs to the controller via the channel to
handle I/O tasks, which the channel and controller can, in many cases, complete without further
intervention from the CPU (exception: those channel programs which utilize 'program controlled
interrupts', PCIs, to facilitate program loading, demand paging and other essential system tasks).

When I/O transfer is complete or an error is detected, the controller communicates with
the CPU through the channel using an interrupt. Since the channel has direct access to the main
memory, it is also often referred to as DMA controller (where DMA stands for direct memory
access), although that term is looser in definition and is often applied to non-programmable
devices as well.

3.4.10 Inter-Integrated Circuit Bus

I²C (Inter-Integrated Circuit), pronounced I-squared-C, is a multi-master, multi-slave,


packet switched, single-ended, serial computer bus invented by Philips Semiconductor (now
NXP Semiconductors). It is typically used for attaching lower-speed peripheral ICs to processors
and microcontrollers in short-distance, intra-board communication. Alternatively I²C is spelled
I2C (pronounced I-two-C) or IIC (pronounced I-I-C).

I²C uses only two bidirectional open-drain lines, Serial Data Line (SDA) and Serial Clock
Line (SCL), pulled up with resistors. Typical voltages used are +5 V or +3.3 V although systems
with other voltages are permitted.

The I²C reference design has a 7-bit or a 10-bit (depending on the device used) address
space. Common I²C bus speeds are the 100 Kbit/s standard mode and the 10 Kbit/s low-speed
mode, but arbitrarily low clock frequencies are also allowed. Recent revisions of I²C can host
more nodes and run at faster speeds (400 Kbit/s Fast mode, 1 Mbit/s Fast mode plus or FM+, and
3.4 Mbit/s High Speed mode). These speeds are more widely used on embedded systems than on
PCs. There are also other features, such as 16-bit addressing.

The I2C (Inter-Integrated Circuit) communications protocol also comes from Philips. I2C is a
communications bus, providing communications between chips on a printed circuit board. One
of its prime uses on the Raspberry Pi board and elsewhere lies in connecting sensors.

I2C is not initialized when the Raspberry Pi first comes out of the box. You have to tell
the Raspberry Pi to use it. You accomplish this under the Raspbian OS (and other operating
systems) with the raspi-config command in the terminal. On the command line, type

sudo raspi-config

Use the down arrow key to select 9 Advanced Options and press the Enter key. On the next
screen, select A7 I2C to toggle the automatic loading of I2C on or off. A reboot is required each
time for the new state to take effect.

Figure 3.9 Raspberry Pi Configuration Setting

As with most interfaces related to the GPIO pins, many of which enable connection to
services on the Broadcom SoC, some programming is required.
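As a minimal sketch of that programming, assuming the python3-smbus package and a hypothetical device address, a single register read looks like this:

```python
def read_i2c_byte(address, register, bus_id=1):
    """Read one byte from a register of an I2C device. The address and
    register here are placeholders for whatever sensor is attached.
    Requires the python3-smbus package and I2C enabled as above."""
    import smbus  # Pi-only; imported lazily so the sketch loads anywhere
    bus = smbus.SMBus(bus_id)  # bus 1 on the Pi 2/3 header pins
    try:
        return bus.read_byte_data(address, register)
    finally:
        bus.close()

# On a Pi with a device at (hypothetical) address 0x48:
#   value = read_i2c_byte(0x48, 0x00)
```

The 7-bit device address comes from the sensor's datasheet and can be confirmed with the i2cdetect utility from the i2c-tools package.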

3.4.11 Serial Peripheral Interface Bus

The Serial Peripheral Interface bus (SPI) is a synchronous serial communication interface
specification used for short distance communication, primarily in embedded systems. The
interface was developed by Motorola in the late eighties and has become a de facto standard.
Typical applications include Secure Digital cards and liquid crystal displays.

SPI devices communicate in full duplex mode using a master-slave architecture with a
single master. The master device originates the frame for reading and writing. Multiple slave
devices are supported through selection with individual slave select (SS) lines.

Sometimes SPI is called a four-wire serial bus, contrasting with three-, two-, and one-
wire serial buses. The SPI may be accurately described as a synchronous serial interface, but it is
different from the Synchronous Serial Interface (SSI) protocol, which is also a four-wire
synchronous serial communication protocol. But SSI Protocol employs differential signaling and
provides only a single simplex communication channel.

The Raspberry Pi is equipped with one SPI bus that has 2 chip selects. The SPI master
driver is disabled by default on Raspbian. To enable it, use raspi-config, or ensure the line
“dtparam=spi=on” isn't commented out in /boot/config.txt, and reboot.
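Once enabled, the bus can be driven from Python with the spidev module; the device numbers and clock speed in this sketch are illustrative:

```python
def spi_transfer(data, bus=0, device=0, speed_hz=500000):
    """Full-duplex transfer on the Pi's SPI bus using the spidev module.
    bus 0 with device 0 or 1 selects /dev/spidev0.0 or /dev/spidev0.1,
    matching the two chip selects mentioned above."""
    import spidev  # Pi-only; imported lazily
    spi = spidev.SpiDev()
    spi.open(bus, device)
    spi.max_speed_hz = speed_hz
    try:
        # xfer2 clocks the bytes out on MOSI and returns what
        # arrived on MISO during the same clock cycles
        return spi.xfer2(list(data))
    finally:
        spi.close()
```

Because SPI is full duplex, the returned list always has the same length as the data sent, whether or not the slave had anything meaningful to say.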

The SPI bus is available on the P1 Header:

3.4.12 Universal Asynchronous Receiver/Transmitters

A universal asynchronous receiver/transmitter (UART), is a computer hardware device


for asynchronous serial communication in which the data format and transmission speeds are
configurable. The electric signaling levels and methods (such as differential signaling, etc.) are
handled by a driver circuit external to the UART.

UARTs are commonly used in conjunction with communication standards such as TIA
(formerly EIA) RS-232, RS-422 or RS-485. A UART is usually an individual (or part of an)
integrated circuit (IC) used for serial communications over a computer or peripheral device serial
port. UARTs are now commonly included in microcontrollers. A dual UART, or DUART,
combines two UARTs into a single chip. Similarly, a quadruple UART or QUART, combines
four UARTs into one package, such as the NXP 28L194. An octal UART or OCTART combines
eight UARTs into one package, such as the Exar XR16L788 or the NXP SCC2698. A related
device, the Universal Synchronous/Asynchronous Receiver/Transmitter (USART) also supports
synchronous operation.

Universal asynchronous receiver/transmitters (UARTs) use a set of registers to accept


and output data. Older UARTs could translate data between parallel and serial formats, but
modern UARTs do not have this capacity. The personal computers of yesteryear used to have
serial ports as a standard feature. The now ancient (in computer years) RS-232 serial format
(which ran these ports) is implemented via a UART. Serial ports such as these can still be found
on various industrial instruments.

The UART works by breaking down bytes of data into their individual bits and sending
those serially (one after the other). At the destination, the receiving UART reassembles the bytes.
The advantage of serial transmission over parallel transmission lies in its cost; just a single wire
is required. The Broadcom SoC on the Raspberry Pi has two UARTs. A common use for UARTs
is in microcontrollers, and the Raspberry Pi excels as a control device. The Raspberry Pi’s
onboard UART comes inside the Broadcom SoC containing the CPU (or CPUs), graphics
processing units (GPUs) and all those other goodies. It is accessed and programmed through
physical pins 8 (TXD, GPIO 14) and 10 (RXD, GPIO 15) on the GPIO header.
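A minimal sketch of driving the on-board UART from Python, assuming the pyserial package is installed and the serial login console has been freed in raspi-config:

```python
def uart_send_and_read(message=b"hello\n", port="/dev/serial0", baud=9600):
    """Write a line out of the Pi's on-board UART and read one line back.
    Requires the pyserial package, and the serial login console must be
    disabled (raspi-config) so the port is free for your own use."""
    import serial  # pip install pyserial
    with serial.Serial(port, baudrate=baud, timeout=2) as uart:
        uart.write(message)
        return uart.readline()  # returns b"" on the 2 s timeout
```

The UART itself handles the byte-to-bit serialization described above; the program only ever sees whole bytes.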

3.4.13 Camera Serial Interface

The Camera Serial Interface (CSI) is a specification of the Mobile Industry Processor
Interface (MIPI) Alliance. It defines an interface between a camera and a host processor. This
interface allows the connection of a camera. Cameras for the Raspberry Pi are available. It is
sometimes a bit irksome to connect the ribbon cable just right, but once things are hooked up
properly you can program the Raspberry Pi to do all sorts of neat stuff with digital photography
and video.

The Raspberry Pi has a Mobile Industry Processor Interface (MIPI) Camera Serial
Interface Type 2 (CSI-2), which facilitates the connection of a small camera to the main
Broadcom BCM2837 processor. This is a camera port providing an electrical bus connection
between the two devices. It is a very simple interface and with a little reverse engineering with
an oscilloscope, it is possible to figure out the pinout.

Figure 3.10 Raspberry Pi CSI Port

The purpose of this interface was to standardize the attachment of cameras modules to
processors for the mobile phone industry. The CSI-2 version of the interface was extremely
popular and used on almost all the mobile phones and devices currently found. With increasing
camera resolution, the bandwidth of data transferring from the camera to the processor also

increases. The CSI-2 specification developed by the MIPI Alliance solves a number of problems
that arise when large amounts of data require transfer to the processor.

MIPI CSI-2 version 1.01 supports up to four data lanes, where each lane has a maximum
of 1 Gbps bandwidth, to provide a total bandwidth of 4 Gbps. In addition, the interface uses the
least number of electrical connections to reduce PCB complexity. The data communication is
one-way, from camera to processor.

The D-PHY specification defines the physical hardware layer interface between a camera
and a processor to facilitate the fast exchange of data. This is a low power, high-speed
specification ideal for mobile and battery operated devices. It is also very scalable and the
interface may have any number of “data lanes” depending upon the throughput requirement of
the camera module.
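Once a camera module is seated in the CSI connector, it can be driven from Python with the picamera library. This is a minimal sketch; the resolution and file name are arbitrary choices:

```python
def capture_still(path="capture.jpg", warmup_s=2):
    """Take a still photo through the CSI camera with the picamera
    library (preinstalled on Raspbian; the camera must be enabled in
    raspi-config)."""
    import time
    from picamera import PiCamera  # Pi-only; imported lazily
    with PiCamera() as camera:
        camera.resolution = (1920, 1080)
        time.sleep(warmup_s)   # give auto-exposure time to settle
        camera.capture(path)
    return path
```

The short warm-up delay before capture matters in practice: without it the sensor's automatic gain and exposure have not settled and the first frame comes out dark.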

3.4.14 Display Serial Interface

The Display Serial Interface (DSI) is a specification by the Mobile Industry


Processor Interface (MIPI) Alliance aimed at reducing the cost of display controllers in a mobile
device. It is commonly targeted at LCD and similar display technologies. It defines a serial bus
and a communication protocol between the host (source of the image data) and the device
(destination of the image data).

At the physical layer, DSI specifies a high-speed differential signaling point-to-point


serial bus. This bus includes one high speed clock lane and one or more data lanes. Each lane is
carried on two wires (due to differential signaling). All lanes travel from the DSI host to the DSI
device, except for the first data lane (lane 0), which is capable of a bus turnaround (BTA)
operation that allows it to reverse transmission direction. When more than one lane is used, they
are used in parallel to transmit data, with each sequential byte in the stream traveling on the next
lane. That is, if 4 lanes are being used, 4 bits are transmitted simultaneously, one on each lane.
The link operates in either low power (LP) mode or high speed (HS) mode. In low power mode,
the high speed clock is disabled and signal clocking information is embedded in the data. In this
mode, the data rate is insufficient to drive a display, but is usable for sending configuration
information and commands.

Figure 3.11 Raspberry Pi DSI Port

This interface enables you to connect small displays to the Raspberry Pi board. This
makes the Raspberry Pi, along with a battery power source, truly portable. The Raspberry Pi
connector S2 is a display serial interface (DSI) for connecting a liquid crystal display (LCD)
panel using a 15-pin ribbon cable. The mobile industry processor interface (MIPI) inside the
Broadcom BCM2837 IC feeds graphics data directly to the display panel through this connector.
The S2 connector provides a fast high-resolution display interface dedicated for the purposes of
sending video data directly from the GPU to a compatible display.

3.4.15 High-Definition Multimedia Interface

HDMI (High-Definition Multimedia Interface) is a proprietary audio/video interface for


transmitting uncompressed video data and compressed or uncompressed digital audio data from
an HDMI-compliant source device, such as a display controller, to a compatible computer
monitor, video projector, digital television, or digital audio device. HDMI is a digital
replacement for analog video standards.

HDMI implements the EIA/CEA-861 standards, which define video formats and
waveforms, transport of compressed, uncompressed, and LPCM audio, auxiliary data, and

implementations of the VESA EDID. CEA-861 signals carried by HDMI are electrically
compatible with the CEA-861 signals used by the digital visual interface (DVI). No signal
conversion is necessary, nor is there a loss of video quality when a DVI-to-HDMI adapter is
used. The CEC (Consumer Electronics Control) capability allows HDMI devices to control each
other when necessary and allows the user to operate multiple devices with one handheld remote
control device.

Several versions of HDMI have been developed and deployed since initial release of the
technology but all use the same cable and connector. Other than improved audio and video
capacity, performance, resolution and color spaces, newer versions have optional advanced
features such as 3D, Ethernet data connection, and CEC (Consumer Electronics Control)
extensions.

Figure 3.12 Raspberry Pi HDMI Port

The best solution involves HDMI, and here are two of the advantages of using HDMI
output:

 HDMI allows the transfer of video and audio from an HDMI-compliant display
controller to compatible computer monitors, projectors, digital TVs or digital
audio devices.
 HDMI’s higher quality provides a marked advantage over composite video. This
also provides a display that’s much easier on the eyes and provides higher
resolution instead of composite video’s noisy and sometimes distorted video
and/or audio.

3.4.16 Status LED

The Raspberry Pi 3 Model B has two status LEDs of different colours at the top of the
board, one green and one red; both are used for troubleshooting.

On boot, both LEDs blink once and then only the red LED continues to glow. If the green
LED glows continuously, it indicates a problem with the SD card. If the red LED blinks on boot,
there is an issue with the power supply.

Both LEDs are labelled on the board: the red LED is PWR and the green LED is ACT.

Figure 3.13 Status LED on Raspberry Pi 3 Mode B

3.4.17 Ethernet

Ethernet is a family of computer networking technologies commonly used in local area
networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was
commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since
been refined to support higher bit rates and longer link distances. Over time, Ethernet has largely
replaced competing wired LAN technologies such as token ring, FDDI and ARCNET.

Since its commercial release, Ethernet has retained a good degree of backward
compatibility. Features such as the 48-bit MAC address and Ethernet frame format have
influenced other networking protocols. The primary alternative for some uses of contemporary
LANs is Wi-Fi, a wireless protocol standardized as IEEE 802.11.

There are two ways to achieve network connectivity with the Raspberry Pi. The first is a
wired connection that uses the Ethernet socket on the Raspberry Pi (excluding the Raspberry Pi
Zero, which does not have an Ethernet socket). Figure 3.15 shows the socket, which accepts a
standard network cable plug. The Ethernet port on the Raspberry Pi supports connections of 100
megabits per second (Mbit/s).

Figure 3.15 Ethernet Port

The second way of connecting to the network involves the USB ports. You can use a
wireless USB dongle (a dongle is a plug-in device) or a USB-to-Ethernet adapter. The USB
wireless device allows easy connection to Wi-Fi networks in the area, and the USB-to-Ethernet
effects a physical connection by providing a socket for a standard Ethernet cable.

CHAPTER 4

RASPBERRY PI CAMERA
One of the main objectives of the project is to take a photo and send it to the user's mobile
phone. For this purpose we require a camera module that can take high-quality pictures with a
fast shutter rate, because the photo is of no use if it is taken minutes after the motion detector is
triggered. The camera used in this project is therefore the Raspberry Pi Camera v2, the second
version of the Raspberry Pi camera modules. When all the modules of the project, including the
camera, are connected to the RPI 3 Model B, the board monitors the PIR sensor for any motion
in its range. If the PIR sensor detects any motion, it sends a trigger signal to the RPI 3 Model B.
This signal is then processed and a photo is taken using the RPI camera. Everything that happens
after motion is detected is controlled by software.
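The control flow just described can be sketched as a simple polling loop. This is a sketch rather than the project's actual code: read_pir, capture_photo and send_photo are hypothetical callables standing in for the GPIO, camera and messaging layers, so the logic can be exercised without the hardware.

```python
import time

def surveillance_loop(read_pir, capture_photo, send_photo,
                      max_events=None, poll_interval=0.2):
    """Poll the PIR sensor; on each trigger, capture and send a photo.

    read_pir, capture_photo and send_photo are caller-supplied functions
    (hypothetical interface), so the loop can be tested without hardware.
    Returns the number of motion events handled.
    """
    events = 0
    while max_events is None or events < max_events:
        if read_pir():                # PIR output went high
            photo = capture_photo()   # e.g. via the Pi camera
            send_photo(photo)         # e.g. via a Telegram bot
            events += 1
        time.sleep(poll_interval)     # avoid busy-waiting
    return events
```

On the real device, read_pir would poll the PIR's GPIO pin, capture_photo would drive the Pi camera, and send_photo would post the file to the user's phone.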

A camera module is an image sensor integrated with a lens, control electronics,
and an interface such as CSI, Ethernet or plain raw low-voltage differential signalling. As
discussed earlier, the RPI 3 Model B has a special port designed for interfacing camera modules.
The Camera Serial Interface (CSI) is a specification of the Mobile Industry Processor Interface
(MIPI) Alliance. The camera consists of a small circuit board, which connects to the Raspberry
Pi's CSI bus connector via a flexible ribbon cable.

The Raspberry Pi camera module can be used to take high-definition video, as well as
stills photographs. It’s easy to use for beginners, but has plenty to offer advanced users if you’re
looking to expand your knowledge. There are lots of examples online of people using it for time-
lapse, slow-motion and other video cleverness. You can also use the libraries we bundle with the
camera to create effects.

If you’re interested in the nitty-gritty, you’ll want to know that the original (v1) module has a
five-megapixel fixed-focus camera that supports 1080p30, 720p60 and VGA90 video modes, as
well as stills capture. It attaches via a 15 cm ribbon cable to the CSI port on the Raspberry Pi.

The camera works with all models of the Raspberry Pi. It can be accessed through
the MMAL and V4L APIs, and there are numerous third-party libraries built for it, including the
Picamera Python library.
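As an illustration of how the Picamera library is typically used for a still capture, consider the sketch below. take_still assumes the picamera package and the camera hardware are present; only the filename helper runs anywhere.

```python
import datetime

def timestamped_filename(prefix="capture", ext="jpg"):
    """Build a name like capture_2017-04-21_153012.jpg for each shot."""
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
    return "{0}_{1}.{2}".format(prefix, stamp, ext)

def take_still(path=None, resolution=(3280, 2464)):
    """Capture one still image; runs only on a Pi with the camera enabled."""
    from picamera import PiCamera   # imported lazily: needs Pi hardware
    import time
    path = path or timestamped_filename()
    camera = PiCamera()
    try:
        camera.resolution = resolution  # full IMX219 resolution
        time.sleep(2)                   # let the sensor settle exposure/gain
        camera.capture(path)
    finally:
        camera.close()
    return path
```

The two-second pause before capture follows the Picamera documentation's advice to let automatic gain and exposure settle.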

4.1 Raspberry Pi Camera Module v2
The camera module used in our project is the RPI Camera Module v2. The specific reasons
for choosing this module are its Sony image sensor and high-resolution photos: the v2
Camera Module has a Sony IMX219 8-megapixel sensor. As discussed above, it uses a 15-pin
ribbon cable to connect to the Camera Serial Interface port on the Raspberry Pi
board.

Figure 4.1 Raspberry Pi Camera Module v2

Hardware Specification

Weight : 3g

Still resolution : 8 Megapixels

Video modes : 1080p30, 720p60 and 640 × 480p60/90

Sensor : Sony IMX219

Pixel size : 1.12 µm x 1.12 µm

Optical size : 1/4"

Focal length : 3.04 mm

Horizontal field of view : 62.2 degrees

Vertical field of view : 48.8 degrees

Block Diagram

Figure 4.2 Block Diagram of Raspberry Pi Camera Module v2

The block diagram of the Raspberry Pi Camera Module v2 is shown in Fig. 4.2. It clearly
shows the connection of the 15-pin CSI bus to the Raspberry Pi module. The interface uses an
I2C bus for control signalling between the port and the camera module; pins 13 (SCL) and
14 (SDA) of the connector carry this connection to the RPI board.

Image Sensor

An image sensor is a solid-state device, the part of the camera's hardware that captures
light and converts what you see through a viewfinder or LCD monitor into an image. The image
sensor used in Raspberry Pi Camera Module v2 is Sony IMX219. This is a diagonal 4.60mm
CMOS active pixel type image sensor with a square pixel array and 8.08M effective pixels.
CMOS sensors are much less expensive to manufacture than CCD sensors. Both CCD (charge-
coupled device) and CMOS (complementary metal-oxide semiconductor) image sensors start at
the same point -- they have to convert light into electrons. This chip operates with three power
supplies, analogue 2.8V, digital 1.2V, and IF 1.8V, and has low power consumption. High
sensitivity, low dark current, and no smear are achieved through the adoption of R, G and B
primary color pigment mosaic filters. This chip features an electronic shutter with variable
charge-storage time.

Figure 4.3 Block Diagram of Image Sensor

4.2 Functions and Features
 Back-illuminated CMOS image sensor (Exmor R).
 2-wire serial communication circuit on chip.
 CSI2 serial data output.
 Timing generator, H and V driver circuits on chip.
 CDS/PGA on chip.
 10-bit A/D converter on chip.
 Automatic optical black clamp circuit on chip.
 PLL on chip.
 High sensitivity, Low dark current, no smear.
 Variable-speed shutter function.
 Excellent anti-blooming characteristics.
 R, G, B primary color pigment mosaic filters on chip.
 Max. 30 frames/s in all-pixel scan mode
 Pixel rate: 280 Mpixel/s
 180 frames/s @ 720p with 2x2 analog binning, 60 frames/s @ 1080p with V-crop
 Data rate: max. 755 Mbps/lane (@ 4 lanes), 912 Mbps/lane (@ 2 lanes)

Device Structure

Image size : Diagonal 4.60mm

Total Number of Pixels : 3296(H) x 2512(V) approx. 8.28M pixels

Number of Effective Pixels : 3296(H) x 2480(V) approx. 8.17M pixels

Number of Active Pixels : 3280(H) x 2464(V) approx. 8.08M pixels

Chip Size : 5.095mm (H) x 4.930mm (V) (w/ scribe)

Unit Cell Size : 1.12µm (H) x 1.12µm (V)

Substrate Material : Silicon

4.3 Camera Serial Interface
The Camera Serial Interface (CSI) is a specification of the Mobile Industry Processor
Interface (MIPI) Alliance. It defines an interface between a camera and a host processor. There
have been three generations of this specification:

CSI-1 was the original standard MIPI interface for cameras. It emerged as an architecture to
define the interface between a camera and a host processor.

CSI-2: the MIPI CSI-2 v1.0 specification was released in 2005. It uses either D-PHY or C-PHY
(both standards are set by the MIPI Alliance) as the physical layer.

CSI-3 is a next generation interface specification based on the UniPort-M. It was released in
2012.

4.4 Software
Although the camera module is manufactured by the Raspberry Pi Foundation, it cannot be
used out of the box; some steps have to be followed to configure the camera module with the
board. The camera has been supported in Raspbian, the preferred operating system for the
Raspberry Pi, since shortly after the module's release.

The first step is to get the latest Raspberry Pi firmware, which supports the camera. You
can do that from a console by running:

sudo apt-get update

sudo apt-get upgrade

In the second step we enable the camera from the Raspberry Pi configuration
program by running:

sudo raspi-config

In the third step we choose "camera" in the program, select "Enable support for Raspberry Pi
camera", and reboot when prompted by the raspi-config program. The camera will be enabled
on subsequent boots of the Raspberry Pi.

Figure 4.4 raspi-config options

Several applications should now be available for the camera: the raspistill program
captures still images, raspivid captures videos, and raspiyuv takes uncompressed YUV-format
images. These are command-line programs. They accept a number of options, which are
documented if you run the commands without options.
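For instance, a still capture with raspistill can also be scripted by building its command line, as sketched below. The -o, -w, -h and -t flags are raspistill's output, width, height and timeout options; the default values here are illustrative.

```python
import subprocess

def raspistill_command(output, width=3280, height=2464, timeout_ms=2000):
    """Build the argument list for a raspistill still capture."""
    return ["raspistill",
            "-o", output,            # output file name
            "-w", str(width),        # image width in pixels
            "-h", str(height),       # image height in pixels
            "-t", str(timeout_ms)]   # time before capture, in milliseconds

def capture(output="still.jpg"):
    """Run raspistill (only works on a Raspberry Pi with a camera enabled)."""
    return subprocess.call(raspistill_command(output))
```

Separating command construction from execution keeps the capture settings easy to test and adjust.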

CHAPTER 5
PIR SENSOR
A passive infrared sensor (PIR sensor) is an electronic sensor that measures infrared (IR)
light radiating from objects in its field of view. They are most often used in PIR-based motion
detectors. All objects with a temperature above absolute zero emit heat energy in the form of
radiation. Usually this radiation isn't visible to the human eye because it radiates at infrared
wavelengths, but it can be detected by electronic devices designed for such a purpose.

The term passive in this instance refers to the fact that PIR devices do not generate or
radiate any energy for detection purposes. They work entirely by detecting the energy given off
by other objects. PIR sensors don't detect or measure "heat"; instead they detect the infrared
radiation emitted or reflected from an object.

PIR sensors allow you to sense motion; they are almost always used to detect whether a human
has moved in or out of the sensor's range. They are small, inexpensive, low-power, easy to use
and don't wear out. For that reason they are commonly found in appliances and gadgets used in
homes or businesses. PIRs are basically made of a pyroelectric sensor which can detect levels of
infrared radiation. Everything emits some low level radiation, and the hotter something is, the
more radiation is emitted. The sensor in a motion detector is actually split in two halves. The
reason for that is that we are looking to detect motion (change) not average IR levels. The two
halves are wired up so that they cancel each other out. If one half sees more or less IR radiation
than the other, the output will swing high or low.

Along with the pyroelectric sensor is a bunch of supporting circuitry, resistors and
capacitors. It seems that most small hobbyist sensors use the BISS0001 ("Micro Power PIR
Motion Detector IC"), undoubtedly a very inexpensive chip. This chip takes the output of the
sensor and does some minor processing on it to emit a digital output pulse from the analog
sensor.

Figure 5.1 PIR Sensor

Newer PIRs have more adjustable settings and a 3-pin header installed. For
many basic projects or products that need to detect when a person has left or entered the area, or
has approached, PIR sensors are great. They are low power and low cost, pretty rugged, have a
wide lens range, and are easy to interface with.

Figure 5.2 PIR with Time and Sensor Adjust

5.1 General Description
Though the PIR sensor looks like a single component, it has several chips embedded inside it.
In this part we discuss them.

BISS0001

It is a micro power PIR motion detector IC.

Features

 Low power CMOS technology (ideal for battery operated PIR devices)
 CMOS high input impedance operational amplifiers
 Bi-directional level detector / Excellent noise immunity
 Built-in Power up disable & output pulse control logic
 Dual mode : retriggerable & non-retriggerable

Figure 5.3 BISS0001 IC in PIR Sensor

RE 200B
The RE 200B is a passive infrared sensor designed to pick up heat radiation of
wave lengths in a band around 10 microns. It contains two active elements configured as
balanced differential series opposed type. This results in good compensation of
environmental temperature and excellent sensitivity for small changes of a spatial
temperature pattern. Thermal signals far below one microwatt are enough to trigger a
usable output voltage change. If the active elements of the PIR sensor are exposed to a
change in the surrounding temperature field, electrical charges are separated within the
sensor elements. The voltage across the sensors controls a J-FET source follower

impedance converter and thus modulates the output current of the PIR detector. The
spectral sensitivity of the sensor is controlled by the optical transfer characteristics of the
window in the case and has been optimized to pick up radiation of the human body.

Figure 5.4 RE 200B in PIR Sensor

5.2 Theory of Operation


PIR sensors are more complicated than many of the other sensors explained in these
tutorials (like photocells, FSRs and tilt switches) because there are multiple variables that affect
the sensor's input and output. To begin explaining how a basic sensor works, we'll use the
diagram in Figure 5.5.

The PIR sensor itself has two slots in it, each slot is made of a special material that is
sensitive to IR. The lens used here is not really doing much and so we see that the two slots can
'see' out past some distance (basically the sensitivity of the sensor). When the sensor is idle, both
slots detect the same amount of IR, the ambient amount radiated from the room or walls or
outdoors. When a warm body like a human or animal passes by, it first intercepts one half of the
PIR sensor, which causes a positive differential change between the two halves. When the warm
body leaves the sensing area, the reverse happens, whereby the sensor generates a negative
differential change. These change pulses are what is detected.

Figure 5.5 Operation of PIR sensor

5.3 Lenses
PIR sensors are rather generic and for the most part vary only in price and sensitivity.
Most of the real magic happens with the optics. Without a lens, the detection area is just two
narrow rectangles, but usually we'd like a detection area that is much larger. To achieve that, we
use a simple lens such as those found in a camera: it condenses a large area (such as a landscape)
into a small one (on film or a CCD sensor). For reasons that will be apparent soon, we would like
the PIR lenses to be small, thin and mouldable from cheap plastic, even though this may add
distortion. For this reason the lenses are actually Fresnel lenses.

Figure 5.6 Lenses in PIR Sensor

5.4 Sensitivity Adjust
We can adjust this if the PIR is too sensitive or not sensitive enough; turning the potentiometer
clockwise makes it more sensitive.

Changing Pulse Time and Timeout Length

There are two 'timeouts' associated with the PIR sensor. One is the "Tx" timeout: how
long the LED is lit after it detects movement - this is easy to adjust on Adafruit PIR's because
there's a potentiometer. The second is the "Ti" timeout which is how long the LED is guaranteed
to be off when there is no movement. This one is not easily changed but if you're handy with a
soldering iron it is within reason.

On Adafruit PIR sensors, there's a little trim potentiometer labeled TIME. This is a 1
megaohm adjustable resistor which is added to a 10K series resistor, and C6 is 0.01 µF, so

Tx = 24576 x (10K + Rtime) x 0.01 µF

If the Rtime potentiometer is turned all the way down counter-clockwise (to 0 ohms), then

Tx = 24576 x (10K) x 0.01 µF = 2.5 seconds (approx.)

If the Rtime potentiometer is turned all the way up clockwise to 1 megaohm, then

Tx = 24576 x (1010K) x 0.01 µF = 250 seconds (approx.)

If Rtime is in the middle, that'd be about 120 seconds (two minutes) so you can tweak it as
necessary. For example if you want motion from someone to turn on a fan for a minimum of 1
minute, set the Rtime potentiometer to about 1/4 the way around.
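The timeout formula can be checked numerically with a few lines of Python, using the component values quoted above:

```python
def pir_tx_seconds(rtime_ohms, series_ohms=10000, c6_farads=0.01e-6):
    """Tx = 24576 x (R_series + R_time) x C6, in seconds."""
    return 24576 * (series_ohms + rtime_ohms) * c6_farads

print(pir_tx_seconds(0))        # 0 ohms (fully counter-clockwise): 2.4576 s
print(pir_tx_seconds(1000000))  # 1 megaohm (fully clockwise): 248.2176 s
```

These agree with the approximate 2.5 s and 250 s figures above.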

CHAPTER 6
USB Wi-Fi ADAPTER
Wireless internet, commonly known as Wi-Fi, requires a wireless router, which transmits
signals through the air to devices equipped with wireless network cards or adapters. Using Wi-Fi,
computer users can connect to the Internet from practically any room and can even connect to the
Internet on the go using Wi-Fi hotspots. There are dual and single band routers, with the former
providing much faster Internet. To connect to a wireless network, a computer or other device
must have wireless adapters, whether internal or external. The most commonly used external
options are USB Wi-Fi adapters.

The USB adapter is the most important part of the Wi-Fi setup. As stated before, the USB
adapter is a radio tuned to the 2.4 GHz frequency band that Wi-Fi works on. As a rule of thumb,
the more power the radio has, the more area it covers. For example, every laptop these days has
an internal Wi-Fi radio; the only problem is that it is usually too weak to even pick up signals
from across the street. These built-in radios usually work within the range of 40 mW (milliwatts)
to 100 mW. To gain more reception, the cheapest solution is to
purchase an external Wi-Fi radio most commonly called a USB Adapter. The external radio
connects through your USB port on the PC or laptop and works together with software to “see”
more signals by covering a larger area. While in operation nothing changes from your normal
steps of connecting to a signal.

6.1 Introduction
The RT5370 is a highly integrated MAC/BBP and 2.4 GHz RF/PA/LNA single chip supporting
a 150 Mbps PHY rate. It fully complies with IEEE 802.11n and IEEE 802.11b/g, offers
feature-rich wireless connectivity to high standards, and delivers reliable, cost-effective
throughput over an extended distance. An optimized RF architecture and baseband algorithms
provide superb performance and low power consumption. The intelligent MAC design deploys a
highly efficient DMA engine and hardware data-processing accelerators without overloading the
host processor. The RT5370 is designed to support standards-based features in the areas of
security, quality of service and international regulation, giving end users the greatest
performance anytime in any circumstance.

Figure 6.1 RT5370 USB Wi-Fi Adapter

6.2 Features
 CMOS Technology with PA, LNA, RF, Baseband, and MAC Integrated.
 1T1R Mode with 150Mbps PHY Rate for Both Transmit and Receive
 Legacy and High Throughput Modes
 20MHz/40MHz Bandwidth
 Reverse Direction Grant Data Flow and Frame Aggregation
 WEP 64/128, WPA, WPA2,TKIP, AES, WAPI
 QoS-WMM, WMM-PS
 WPS,PIN,PBC
 Multiple BSSID Support
 USB 2.0
 Cisco CCX Support
 Bluetooth Co-existence
 Low Power with Advanced Power Management
 Operating Systems - Windows XP 32/64, 2000, Windows 7, Vista 32/64, Linux,
Macintosh.

6.3 Functional Block Diagram

Figure 6.2 Block Diagram of RT5370

The figure above describes the functioning of the RT5370 USB Wi-Fi adapter. The USB bus
connects to a USB port on the device that needs Wi-Fi access. The RF receiver and transmitter
are used for receiving and transmitting data on the network, and the system control block tunes
the radio frequency according to the router.

6.4 Pin Layout

Figure 6.3 PIN Diagram of RT5370

6.5 Radio Modes
Wi-Fi cards can be operated in one of these modes:

• Master mode
• Managed mode
• Ad-hoc mode
• Monitor mode
• Promiscuous mode
• Infrastructure mode

Master Mode

Master mode (also called AP or infrastructure mode) is used to create a service that looks
like a traditional access point. The wireless card creates a network with a specified name
(called the SSID) and channel, and offers network services on it. Wireless cards in master mode
can only communicate with cards that are associated with it in managed mode.

Managed Mode

Managed mode is sometimes also referred to as client mode. Wireless cards in managed
mode will join a network created by a master, and will automatically change their channel to
match it. Clients using a given access point are said to be associated with it. Managed mode
cards do not communicate with each other directly, and will only communicate with an
associated master.

Ad-hoc Mode

Ad-hoc mode creates a multipoint-to-multipoint network when there is no master or AP
available. In ad-hoc mode, each wireless card communicates directly with its neighbors. Nodes
must be in range of each other to communicate, and must agree on a network name and
channel. The set-up formed by the stations is called the independent basic service set, or IBSS
for short. An IBSS is a wireless network which has at least two stations and uses no access
point. The IBSS therefore forms a temporary network which lets people in the same room
exchange data. It is identified by an SSID, just like an ESS in infrastructure mode.

Figure 6.4 Ad-hoc Mode of Wi-Fi

In an ad hoc network, the range of the independent BSS is determined by each station's
range. That means that if two of the stations on the network are outside each other's range, they
will not be able to communicate, even if they can "see" other stations. Unlike infrastructure
mode, ad hoc mode has no distribution system that can send data frames from one station to
another. An IBSS, then, is by definition a restricted wireless network.

Monitor Mode

Monitor mode is used by some tools (such as Kismet) to passively listen to all radio
traffic on a given channel. This is useful for analyzing problems on a wireless link or observing
spectrum usage in the local area. Monitor mode is not used for normal communications.

Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a
wireless network interface controller (WNIC) to monitor all traffic received from the wireless
network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode
allows packets to be captured without having to associate with an access point or ad hoc
network first. Monitor mode only applies to wireless networks, while promiscuous mode can be
used on both wired and wireless networks.

Promiscuous mode

In computer networking, promiscuous mode is a mode for a wired network interface
controller (NIC) or wireless network interface controller (WNIC) that causes the controller to
pass all traffic it receives to the central processing unit (CPU) rather than passing only the
frames that the controller is intended to receive. This mode is normally used for packet sniffing
that takes place on a router or on a computer connected to a hub (instead of a switch) or one
being part of a WLAN. Interfaces are placed into promiscuous mode by software bridges often
used with hardware virtualization.

Infrastructure mode

In infrastructure mode, each station computer (STA for short) connects to an access point
via a wireless link. The set-up formed by the access point and the stations located within its
coverage area is called the basic service set, or BSS for short; it forms one cell. Each BSS
is identified by a BSSID, a 6-byte (48-bit) identifier. In infrastructure mode, the BSSID
corresponds to the access point's MAC address.

Figure 6.5 Infrastructure Mode of Wi-Fi

It is possible to link several access points together (or more precisely several BSS's)
using a connection called a distribution system (DS for short) in order to form an extended
service set or ESS. The distribution system can also be a wired network, a cable between two
access points or even a wireless network.

CHAPTER 7

TELEGRAM BOT
Telegram is a free cloud-based instant messaging service. Telegram clients exist for both
mobile (Android, iOS, Windows Phone, Ubuntu Touch) and desktop systems (Windows,
macOS, Linux). Users can send messages and exchange photos, videos, stickers, audio, and files
of any type. Telegram also provides optional end-to-end-encrypted messaging.

Telegram's default messages are cloud-based and can be accessed on any of the user's
connected devices. Users can share photos, videos, audio messages and other files (up to 1.5
gigabytes in size). Users can send messages to other users individually or to groups of up to 5,000
members. Sent messages can be edited and deleted on both sides within 48 hours after they have
been sent, which gives users the ability to correct typos and retract messages that were sent by
mistake. The transmission of messages to Telegram Messenger LLP's servers is encrypted with
the service's MTProto protocol.

7.1 Bots
In June 2015, Telegram launched a platform for third-party developers to create bots.
Bots are Telegram accounts operated by programs. They can respond to messages or mentions,
can be invited into groups and can be integrated into other programs. Dutch website Tweakers
reported that an invited bot can potentially read all group messages when the bot controller
changes the access settings silently at a later point in time. Telegram pointed out that it
considered implementing a feature that would announce such a status change within the relevant
group. There are also inline bots, which can be used from any chat screen. To activate an inline
bot, the user types the bot's username and a query in the message field. The bot then offers its
content, from which the user can choose an item and send it within the chat.

Bots are simply Telegram accounts operated by software – not people – and they'll often
have AI features. They can do anything – teach, play, search, broadcast, remind, connect,
integrate with other services, or even pass commands to the Internet of Things. Today’s 3.0
update to the Telegram apps makes interacting with bots super-easy. In most cases you won’t
even have to type anything, because bots will provide you with a set of custom buttons. Bots can
now provide you with custom keyboards for specialized tasks.

Bots are third-party applications that run inside Telegram. Users can interact with bots by
sending messages, commands and inline requests. You control your bots using HTTPS requests
to the Telegram Bot API.
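Such an HTTPS call can be sketched as below. This assumes the third-party requests package; the URL scheme follows the Bot API convention https://api.telegram.org/bot&lt;token&gt;/&lt;method&gt;, while the function names themselves are illustrative.

```python
def bot_method_url(token, method):
    """Telegram Bot API endpoint for a given method, e.g. sendPhoto."""
    return "https://api.telegram.org/bot{0}/{1}".format(token, method)

def send_photo(token, chat_id, photo_path, caption=None):
    """POST a captured image to a chat via the Bot API sendPhoto method."""
    import requests  # third-party: pip install requests
    data = {"chat_id": chat_id}
    if caption:
        data["caption"] = caption
    with open(photo_path, "rb") as photo:
        return requests.post(bot_method_url(token, "sendPhoto"),
                             data=data, files={"photo": photo})
```

In this project, such a call is what finally delivers the captured photo to the user's phone.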

7.2 Creating a new bot


Use the /newbot command to create a new bot. The BotFather will ask you for a name
and username, then generate an authorization token for your new bot. The name of your bot is
displayed in contact details and elsewhere. BotFather is the one bot to rule them all. It will help
you create new bots and change settings for existing ones.

The Username is a short name, to be used in mentions and telegram.me links. Usernames
are 5-32 characters long and are case insensitive, but may only include Latin characters,
numbers, and underscores. Your bot's username must end in ‘bot’, e.g. ‘tetris_bot’ or ‘TetrisBot’.
The token is a string that is required to authorize the bot and send requests to
the Bot API.

Generating an authorization token

If your existing token is compromised or you lost it for some reason, use the /token
command to generate a new one.

BotFather commands

The remaining commands are pretty self-explanatory:

 /mybots — returns a list of your bots with handy controls to edit their settings
 /mygames — does the same for your games

Edit bots

 /setname – change your bot's name.


 /setdescription — change the bot's description, a short text of up to 512 characters,
describing your bot. Users will see this text at the beginning of the conversation with the
bot, titled ‘What can this bot do?’

 /setabouttext — change the bot's about info, an even shorter text of up to 120 characters.
Users will see this text on the bot's profile page. When they share your bot with someone,
this text is sent together with the link.
 /setuserpic — change the bot‘s profile pictures. It’s always nice to put a face to a name.
 /setcommands — change the list of commands supported by your bot. Users will see
these commands as suggestions when they type / in the chat with your bot. Each
command has a name (must start with a slash ‘/’, alphanumeric plus underscores, no more
than 32 characters, case-insensitive), parameters, and a text description. Users will see the
list of commands whenever they type ‘/’ in a conversation with your bot.
 /deletebot — delete your bot and free its username.

Edit settings

 /setinline — toggle inline mode for your bot.


 /setinlinegeo - request location data to provide location-based inline results.
 /setjoingroups — toggle whether your bot can be added to groups or not. Any bot must be
able to process private messages, but if your bot was not designed to work in groups, you
can disable this.
 /setprivacy — set which messages your bot will receive when added to a group. With
privacy mode disabled, the bot will receive all messages. We recommend leaving privacy
mode enabled.

Manage games

 /newgame — create a new game.


 /listgames — get a list of your games.
 /editgame — edit a game.
 /deletegame — delete an existing game.

7.3 Our Design
In our design we choose our bot name as MyBot and username as @rpi3b_bot. In the program
we gave some commands which respond with their respective message or image formats. The
commands are ‘/status’,’/photo’,’/gif'’,’/video’,’/enable’,’/disable’
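Internally, such commands can be routed with a simple dispatch table. The sketch below shows only the routing logic; the handler return strings are placeholders, since the real handlers reply with photos or videos through the bot.

```python
def handle_command(text, handlers, default="Unknown command"):
    """Route a message like '/photo' to the matching handler function."""
    parts = text.split() if text else []
    handler = handlers.get(parts[0]) if parts else None
    return handler() if handler else default

# Example wiring for some of the commands used in this design
# (placeholder replies instead of real camera/bot actions):
handlers = {
    "/status": lambda: "Surveillance is armed",
    "/enable": lambda: "Motion detection enabled",
    "/disable": lambda: "Motion detection disabled",
}
```

A dispatch table keeps each command's behaviour in one small function, so adding a new command means adding one dictionary entry.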

Figure 7.1 Telegram Bot Design

7.4 Bots Working
At the core, Telegram Bots are special accounts that do not require an additional phone
number to set up. Users can interact with bots in two ways:

 Send messages and commands to bots by opening a chat with them or by adding them to
groups. This is useful for chat bots or news bots like the official TechCrunch and Forbes
bots.
 Send requests directly from the input field by typing the bot's @username and a query.
This allows sending content from inline bots directly into any chat, group or channel.

Messages, commands and requests sent by users are passed to the software running on your
server. Telegram's intermediary server handles all encryption and communication with the
Telegram API for you; you communicate with this server via a simple HTTPS interface that
offers a simplified version of the Telegram API, called the Bot API.

CHAPTER 8

PROJECT IMPLEMENTATION
Our project implementation involves a series of steps that have to be performed in order
to get correct results. As discussed in an earlier chapter, the Raspberry Pi board that acts as the
main unit in this project needs an operating system, but the manufacturer does not provide a
pre-installed OS.

8.1 Setting up Raspberry Pi


One of the main objectives of the project is a low-cost and efficient design, so we set up
the Raspberry Pi in headless mode, which means it does not have to be connected to a monitor,
keyboard or mouse. Remote access is implemented using Secure Shell (SSH), a cryptographic
network protocol for operating network services securely over an unsecured network. First, we
need to install an OS (we chose Raspbian) onto the board. This process requires some external
software used to connect to the Raspberry Pi remotely. The steps for the above process are:

 We need an SD card (Class 4 or higher) to act as a bootable disk for the Raspberry Pi.
 The SD card should contain only the boot files of the Raspbian OS, so it should first be
formatted completely using the external tool "SDFormatter".
 Once it is formatted and completely empty, copy the Raspbian OS boot files onto the SD
card using the external tool "Win32DiskImager", which writes the boot image to the card.
 Since we access the board in headless mode, we create a file named "ssh" alongside the
boot files on the SD card, which makes the board allow remote access.
 Now we connect the board to the Wi-Fi router using the Ethernet port (for the first time).
 The router assigns the board a private IP address, which we can use to access the board
remotely with the external tool "VNC Viewer".
 Then power up the board using a 5 V micro-USB adapter.
 As with other OS installations, it takes some time to install the OS onto the board.
 When the installation is finished, it asks you to log in; for the very first time the username
is "pi" and the password is "raspberry".

 Once inside you have to set the SSH option enable in Raspberry Pi Configuration or we
can use “sudo raspi-config” command in the shell.
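
On a Linux machine the same preparation can be scripted. The sketch below is illustrative only: the image name and device path are placeholders you must adjust, and the destructive write step is shown as a comment. Its main point is the empty “ssh” file that enables headless access on first boot.

```shell
# Placeholder paths -- adjust to your system before running.
# (1) Write the Raspbian image to the SD card (DESTRUCTIVE, shown as a comment):
#     sudo dd if=raspbian.img of=/dev/sdX bs=4M status=progress && sync
# (2) Enable headless SSH: create an empty file named "ssh" in the boot partition.
BOOT=/tmp/demo-boot   # stands in for the mounted boot partition of the SD card
mkdir -p "$BOOT"
touch "$BOOT/ssh"     # Raspbian enables the SSH server on first boot if this file exists
ls "$BOOT"
```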

8.2 Design Approach


The project implementation can be explained in five parts. Each part describes the interfacing between the components used in the project. The schematic diagram of the Raspberry Pi board provides the internal information needed for interfacing the components.

Figure 8.1 Raspberry Pi 3 Model B Schematics

First, the main component of the project is the Raspberry Pi camera, which takes the photos. It is connected through the Camera Serial Interface (CSI) bus to the CSI port on the board. Once the camera is connected, we have to ensure that camera module interfacing is enabled in the Raspberry Pi configuration settings; if it is not, enable it and then reboot the Raspberry Pi before using the camera. To check whether the camera is connected to the board, type the command “vcgencmd get_camera” in the shell. If a camera is connected, the shell shows “supported=1 detected=1”.
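
The one-line reply from “vcgencmd get_camera” can also be checked programmatically. A small sketch (the helper name is ours, not part of any library) that parses the reply into a dictionary:

```python
def parse_vcgencmd_camera(output):
    # Turn "supported=1 detected=1" into {'supported': 1, 'detected': 1}
    return {key: int(value)
            for key, value in (pair.split("=") for pair in output.split())}

status = parse_vcgencmd_camera("supported=1 detected=1")
print(status["detected"])  # 1 means the camera module was found
```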

Second, the next component to connect is the PIR sensor. The sensor has a three-pin connector: Power, Data, and Ground. These pins are connected to the corresponding GPIO pins of the Raspberry Pi. To check that it is connected and working correctly, we run a small program that displays a message whenever motion is detected.
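
Such a PIR test program can look like the following sketch. The pin number matches the pir_pin=14 value used later in the configuration file; the hardware part is guarded so the message-formatting logic can also run off the Pi.

```python
PIR_PIN = 14  # BCM pin number; matches pir_pin in rpi-security.conf

def motion_message(channel):
    # The message printed whenever the PIR data pin goes high
    return "Motion detected on GPIO pin %d" % channel

if __name__ == "__main__":
    try:
        # Hardware part: works only on a Raspberry Pi with RPi.GPIO installed.
        import RPi.GPIO as GPIO
        import time
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(PIR_PIN, GPIO.IN)
        GPIO.add_event_detect(PIR_PIN, GPIO.RISING,
                              callback=lambda ch: print(motion_message(ch)))
        while True:          # wait for interrupts until Ctrl-C
            time.sleep(1)
    except (ImportError, RuntimeError):
        print("RPi.GPIO unavailable; run this test on the Pi itself.")
    except KeyboardInterrupt:
        pass
```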

Third, the last component to connect is the USB Wi-Fi adapter. Although the Raspberry Pi has onboard Wi-Fi, it does not meet our requirement: we need a Wi-Fi adapter that can sniff the packets on the Wi-Fi router's network. We use an RT5370-based adapter, which supports monitor mode. This is required to check whether the user's mobile phone is connected to the router and within range.

Fourth, once everything is connected, we set the Wi-Fi adapter to monitor mode using the following shell commands:

iw phy phy0 interface add mon0 type monitor

ifconfig mon0 up

The commands above put the Wi-Fi adapter into monitor mode and bring it up. Now we place the code and its configuration file in their respective folders and start the program with the shell command:

rpi-security.py -d

Fifth, after starting, the program runs continuously until we stop it manually. It continuously monitors the Wi-Fi router network to check for the presence of the user's mobile phone. If the phone is not in range, the system arms itself and starts detecting motion in its area. If any motion is detected, the PIR sensor sends a signal to the Raspberry Pi board. The system then checks for the user's mobile one last time before confirming. If it still does not find the phone, the system has the camera take a photo, saves it to the SD card, and at the same time sends a copy of the file to the Telegram bot.
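
The arm/disarm decision in this fifth step reduces to comparing the time since the phone's last packet against a timeout. A simplified pure-Python sketch of that logic (the function name and the 20-second grace period mirror the packet_timeout handling in the appendix code, but this exact helper is ours):

```python
def next_state(last_packet_time, now, packet_timeout=100, grace=20):
    # Arm only after the phone has been silent for packet_timeout + grace
    # seconds; in between, the real program re-checks with an ARP ping.
    silent_for = now - last_packet_time
    if silent_for > packet_timeout + grace:
        return "armed"
    elif silent_for > packet_timeout:
        return "recheck"   # real code calls arp_ping_macs(...) here
    return "disarmed"

print(next_state(last_packet_time=0, now=50))    # phone seen recently
print(next_state(last_packet_time=0, now=110))   # silent, verify once more
print(next_state(last_packet_time=0, now=200))   # silent too long: arm
```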

As discussed earlier, the Telegram bot is used to receive photos and send commands to the system. We can send commands that trigger the bot and receive a reply from the system in return:

 /status : Sends a status report.
 /enable : Enables the service after it has been disabled.
 /disable : Disables the service until re-enabled.
 /photo : Sends a photo taken at that instant.
 /gif : Sends a GIF taken at that instant.

Once the program starts running, it can only be stopped manually; threads in the program run continuously and monitor the tasks assigned to them.
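
These monitoring threads are daemon threads: they end automatically when the main program exits. A minimal standalone illustration of the idea (not the project's actual thread functions):

```python
import threading
import queue

results = queue.Queue()

def monitor_task(q):
    # Stand-in for a monitoring loop; the real project's threads run forever.
    q.put("monitor thread running")

t = threading.Thread(target=monitor_task, args=(results,), name="monitor")
t.daemon = True   # daemon threads die automatically when the main thread stops
t.start()
t.join(timeout=2)

message = results.get(timeout=2)
print(message)
```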

CHAPTER 9

RESULTS
In this chapter, we discuss the results of Camera Surveillance using Raspberry Pi. Once everything is connected as described in the last chapter, we start the program, which monitors the motion-detection signals from the PIR sensor, takes a photo, and sends it to the Telegram bot.

The program for this project includes threads that run in the background without interrupting execution of the main process; these are called daemon threads. The program is run as a system service, so after installing or changing its service definition we reload the service manager with the system control command

sudo systemctl daemon-reload

This provides automatic arming and disarming of the system based on the user's mobile phone range. To achieve this, we monitor the router every few seconds for traffic from the user's mobile phone on the network. This requires the USB Wi-Fi adapter to work in the special mode discussed above, called monitor mode, which allows the adapter to watch for the user's mobile phone and arm the system if the phone's traffic disappears from the router. The USB Wi-Fi adapter is configured into monitor mode with the commands:

iw phy phy0 interface add mon0 type monitor

ifconfig mon0 up

The last step is to run the program with the Python command below, which starts it directly. The program has a debug mode in which caught errors are printed, which comes in handy when the program suddenly quits without any prior alert.

rpi-security.py -d

The figure shows how the commands are given in the command prompt of the Raspberry Pi. It also shows the log the system produces as it arms, disarms, monitors for the mobile phone, detects motion, and sends messages to the Telegram bot.

Figure 9.1 Command Prompt Log of Program

Figure 9.2 Message to Telegram Bot

We can also send the pre-defined commands from the Telegram bot and get replies from the system. Some of the commands discussed earlier are shown here.

Figure 9.3 Commands Sent through Telegram Bot

Figure 9.4 Commands Received from Telegram Bot

CHAPTER 10

CONCLUSION AND FUTURE SCOPE


10.1 Conclusion
In this project, we introduced a simple and efficient way of monitoring an area. The design uses low-cost, reliable components that arm and disarm depending on the user's presence. A Raspberry Pi board makes decisions according to the data provided by the sensor and takes the necessary action. We also included a debug mode, which gives us information about errors when they occur. The system depends on the internet to send images, taken when motion is detected, to the user's mobile phone, and is well suited to monitoring small houses and offices. If the system or the mobile is unable to access the internet, the system continues to work normally, saves a copy of the image taken, and sends it once internet access is restored. This can also be considered a backup of the images in case of any damage to the mobile. Taking advantage of the Android system, which can run applications like Telegram, we can receive the images on our phone over the internet irrespective of our location. The system can also be upgraded by adding more PIR sensors so that a larger area can be monitored.

10.2 Future Scope


There are several things that could be added to the project to make it more robust in the
future. Here is a list of a few of the possibilities.

 Increasing the number of sensors will expand the monitoring area of the system for better detection of any motion.
 Face detection can be added so that the system sends the names of detected persons along with their photos.
 The system can be connected to the main door lock so that the door opens automatically, or on a command from the bot, once the detected person is identified.
 A daily update of status and photos can be sent to the email address provided in the program.

APPENDIX
PROJECT CODE
/****************************************************************************/
rpi-security.conf
/****************************************************************************/

# Pin connected to the PIR sensor
pir_pin=14
# MAC addresses to monitor for
mac_addresses=<mac_address>
# The Telegram bot token.
telegram_bot_token=<token>
# The wireless interface in monitor mode
network_interface=mon0
# Debug mode. Setting this to true will send more verbose logging to syslog
debug_mode=false
# Time to wait since last packet detected before arming, in seconds
packet_timeout=100
# camera_mode can be 'photo' or 'gif'
camera_mode=photo
# Path to save captured images or videos
camera_save_path=/var/tmp
# Flip image vertically
camera_vflip=true
# Flip image horizontally
camera_hflip=true
# Image size in pixels for photos
camera_image_size=1024x768
# Number of photos to take or number of GIF frames x3 when motion is detected.
camera_capture_length=1

/****************************************************************************/
rpi-security.py
/****************************************************************************/
import os
import argparse
import logging
import logging.handlers
from ConfigParser import SafeConfigParser
import RPi.GPIO as GPIO
from datetime import datetime, timedelta
import sys
import time
import signal
import yaml

def parse_arguments():
    p = argparse.ArgumentParser(description='A simple security system to run on a Raspberry Pi.')
    p.add_argument('-c', '--config_file', help='Path to config file.', default='/etc/rpi-security.conf')
    p.add_argument('-d', '--debug', help='To enable debug output to stdout', action='store_true', default=False)
    return p.parse_args()

def get_network_address(interface_name):
    from netaddr import IPNetwork
    from netifaces import ifaddresses
    interface_details = ifaddresses(interface_name)
    my_network = IPNetwork('%s/%s' % (interface_details[2][0]['addr'], interface_details[2][0]['netmask']))
    network_address = my_network.cidr
    logger.debug('Calculated network: %s' % network_address)
    return str(network_address)

def get_interface_mac_addr(network_interface):
    # Read the interface MAC address from sysfs; returns False if unavailable.
    result = False
    try:
        f = open('/sys/class/net/%s/address' % network_interface, 'r')
        result = f.read().strip()
        f.close()
    except IOError:
        pass
    return result
def parse_config_file(config_file):
    def str2bool(v):
        return v.lower() in ("yes", "true", "t", "1")
    default_config = {
        'camera_save_path': '/var/tmp',
        'network_interface': 'mon0',
        'packet_timeout': '700',
        'debug_mode': 'False',
        'pir_pin': '14',
        'camera_vflip': 'False',
        'camera_hflip': 'False',
        'camera_image_size': '1024x768',
        'camera_mode': 'video',
        'camera_capture_length': '3'
    }
    cfg = SafeConfigParser(defaults=default_config)
    cfg.read(config_file)
    dict_config = dict(cfg.items('main'))
    dict_config['debug_mode'] = str2bool(dict_config['debug_mode'])
    dict_config['camera_vflip'] = str2bool(dict_config['camera_vflip'])
    dict_config['camera_hflip'] = str2bool(dict_config['camera_hflip'])
    dict_config['pir_pin'] = int(dict_config['pir_pin'])
    dict_config['camera_image_size'] = tuple([int(x) for x in dict_config['camera_image_size'].split('x')])
    dict_config['camera_capture_length'] = int(dict_config['camera_capture_length'])
    dict_config['camera_mode'] = dict_config['camera_mode'].lower()
    dict_config['packet_timeout'] = int(dict_config['packet_timeout'])
    if ',' in dict_config['mac_addresses']:
        dict_config['mac_addresses'] = dict_config['mac_addresses'].lower().split(',')
    else:
        dict_config['mac_addresses'] = [dict_config['mac_addresses'].lower()]
    return dict_config
def take_photo(output_file):
    if args.debug:
        GPIO.output(32, True)
        time.sleep(0.25)
        GPIO.output(32, False)
    try:
        camera.capture(output_file)
    except Exception as e:
        logger.error('Failed to take photo: %s' % e)

def take_gif(output_file, length, temp_directory):
    temp_jpeg_path = temp_directory + "/rpi-security-" + datetime.now().strftime("%Y-%m-%d-%H%M%S") + 'gif-part'
    jpeg_files = ['%s-%s.jpg' % (temp_jpeg_path, i) for i in range(length * 3)]
    try:
        for jpeg in jpeg_files:
            camera.capture(jpeg, resize=(800, 600))
        im = Image.open(jpeg_files[0])
        jpeg_files_no_first_frame = [x for x in jpeg_files if x != jpeg_files[0]]
        ims = [Image.open(i) for i in jpeg_files_no_first_frame]
        im.save(output_file, append_images=ims, save_all=True, loop=0, duration=200)
        im.close()
        for imfile in ims:
            imfile.close()
        for jpeg in jpeg_files:
            os.remove(jpeg)
    except Exception as e:
        logger.error('Failed to create GIF: %s' % e)

def archive_photo(photo_path):
    # command = 'cp %(source)s %(destination)s' % {"source": "/var/tmp/blah", "destination": "s3/blah/blah"}
    pass
def telegram_send_message(message):
    if 'telegram_chat_id' not in state:
        return False
    try:
        bot.sendMessage(chat_id=state['telegram_chat_id'], parse_mode='Markdown', text=message, timeout=10)
    except Exception as e:
        logger.error('Telegram message failed to send: %s' % e)
        return False
    return True

def telegram_send_file(file_path):
    if 'telegram_chat_id' not in state:
        logger.error('Telegram failed to send file %s because Telegram chat_id is not set. Send a message to the Telegram bot' % file_path)
        return False
    filename, file_extension = os.path.splitext(file_path)
    try:
        if file_extension == '.mp4':
            bot.sendVideo(chat_id=state['telegram_chat_id'], video=open(file_path, 'rb'), timeout=30)
        elif file_extension == '.gif':
            bot.sendDocument(chat_id=state['telegram_chat_id'], document=open(file_path, 'rb'), timeout=30)
        elif file_extension == '.jpeg':
            bot.sendPhoto(chat_id=state['telegram_chat_id'], photo=open(file_path, 'rb'), timeout=10)
        else:
            logger.error('Unknown file not sent: %s' % file_path)
            return False
    except Exception as e:
        logger.error('Telegram failed to send file %s: %s' % (file_path, e))
        return False
    return True
def process_photos(network_address, mac_addresses):
    logger.info("thread running")
    while True:
        if len(captured_from_camera) > 0:
            if alarm_state['current_state'] == 'armed':
                arp_ping_macs(mac_addresses, network_address, repeat=3)
                for photo in list(captured_from_camera):
                    if alarm_state['current_state'] != 'armed':
                        break
                    logger.debug('Processing the photo: %s' % photo)
                    alarm_state['alarm_triggered'] = True
                    if telegram_send_file(photo):
                        archive_photo(photo)
                        captured_from_camera.remove(photo)
            else:
                for photo in list(captured_from_camera):
                    # Delete the photo file
                    captured_from_camera.remove(photo)
        time.sleep(5)
def update_alarm_state(new_alarm_state):
    if new_alarm_state != alarm_state['current_state']:
        alarm_state['previous_state'] = alarm_state['current_state']
        alarm_state['current_state'] = new_alarm_state
        alarm_state['last_state_change'] = time.time()
        telegram_send_message('rpi-security: *%s*' % alarm_state['current_state'])

def monitor_alarm_state(packet_timeout, network_address, mac_addresses):
    logger.info("thread running")
    while True:
        time.sleep(3)
        now = time.time()
        if alarm_state['current_state'] != 'disabled':
            if now - alarm_state['last_packet'] > packet_timeout + 20:
                update_alarm_state('armed')
            elif now - alarm_state['last_packet'] > packet_timeout:
                arp_ping_macs(mac_addresses, network_address)
            else:
                update_alarm_state('disarmed')
def telegram_bot(token, camera_save_path, camera_capture_length, camera_mode):
    def prepare_status(alarm_state_dict):
        def readable_delta(then, now=time.time()):
            td = timedelta(seconds=now - then)
            days, hours, minutes = td.days, td.seconds // 3600, td.seconds // 60 % 60
            text = '%s minutes' % minutes
            if hours > 0:
                text = '%s hours and ' % hours + text
            if days > 0:
                text = '%s days, ' % days + text
            return text
        return '*rpi-security status*\nCurrent state: _%s_\nLast state: _%s_\nLast change: _%s ago_\nUptime: _%s_\nLast MAC detected: _%s %s ago_\nAlarm triggered: _%s_' % (
            alarm_state_dict['current_state'],
            alarm_state_dict['previous_state'],
            readable_delta(alarm_state_dict['last_state_change']),
            readable_delta(alarm_state_dict['start_time']),
            alarm_state_dict['last_packet_mac'],
            readable_delta(alarm_state_dict['last_packet']),
            alarm_state_dict['alarm_triggered']
        )
    def save_chat_id(bot, update):
        if 'telegram_chat_id' not in state:
            state['telegram_chat_id'] = update.message.chat_id
    def debug(bot, update):
        # Logs every incoming bot message; registered below in group=2.
        logger.debug('Received Telegram message: %s' % update.message.text)
    def check_chat_id(update):
        if update.message.chat_id != state['telegram_chat_id']:
            return False
        else:
            return True
    def help(bot, update):
        if check_chat_id(update):
            bot.sendMessage(update.message.chat_id, parse_mode='Markdown', text='/status: Request status\n/disable: Disable alarm\n/enable: Enable alarm\n/photo: Take a photo\n/gif: Take a gif\n', timeout=10)
    def status(bot, update):
        if check_chat_id(update):
            bot.sendMessage(update.message.chat_id, parse_mode='Markdown', text=prepare_status(alarm_state), timeout=10)
    def disable(bot, update):
        if check_chat_id(update):
            update_alarm_state('disabled')
    def enable(bot, update):
        if check_chat_id(update):
            update_alarm_state('disarmed')
    def photo(bot, update):
        if check_chat_id(update):
            file_path = camera_save_path + "/rpi-security-" + datetime.now().strftime("%Y-%m-%d-%H%M%S") + '.jpeg'
            take_photo(file_path)
            telegram_send_file(file_path)
    def gif(bot, update):
        if check_chat_id(update):
            file_path = camera_save_path + "/rpi-security-" + datetime.now().strftime("%Y-%m-%d-%H%M%S") + '.gif'
            take_gif(file_path, camera_capture_length, camera_save_path)
            telegram_send_file(file_path)
    def error(bot, update, error):
        logger.error('Update "%s" caused error "%s"' % (update, error))
    updater = Updater(token)
    dp = updater.dispatcher
    dp.add_handler(RegexHandler('.*', save_chat_id), group=1)
    dp.add_handler(RegexHandler('.*', debug), group=2)
    dp.add_handler(CommandHandler("help", help))
    dp.add_handler(CommandHandler("status", status))
    dp.add_handler(CommandHandler("disable", disable))
    dp.add_handler(CommandHandler("enable", enable))
    dp.add_handler(CommandHandler("photo", photo))
    dp.add_handler(CommandHandler("gif", gif))
    dp.add_error_handler(error)
    updater.start_polling(timeout=10)
def motion_detected(channel):
    current_state = alarm_state['current_state']
    if current_state == 'armed':
        logger.info('Motion detected')
        file_prefix = config['camera_save_path'] + "/rpi-security-" + datetime.now().strftime("%Y-%m-%d-%H%M%S")
        if config['camera_mode'] == 'gif':
            camera_output_file = "%s.gif" % file_prefix
            take_gif(camera_output_file, config['camera_capture_length'], config['camera_save_path'])
            captured_from_camera.append(camera_output_file)
        elif config['camera_mode'] == 'photo':
            for i in range(0, config['camera_capture_length'], 1):
                camera_output_file = "%s-%s.jpeg" % (file_prefix, i)
                take_photo(camera_output_file)
                captured_from_camera.append(camera_output_file)

def exit_cleanup():
    GPIO.cleanup()
    if 'camera' in vars():
        camera.close()

def exit_clean(signal=None, frame=None):
    logger.info("rpi-security stopping...")
    exit_cleanup()
    sys.exit(0)

def exit_error(message):
    logger.critical(message)
    exit_cleanup()
    try:
        current_thread().getName()
    except NameError:
        sys.exit(1)
    else:
        os._exit(1)
if __name__ == "__main__":
    GPIO.setwarnings(False)
    # Parse arguments and configuration, set up logging
    args = parse_arguments()
    config = parse_config_file(args.config_file)
    logger = setup_logging(debug_mode=config['debug_mode'], log_to_stdout=args.debug)
    state = read_state_file(args.state_file)
    sys.excepthook = exception_handler
    captured_from_camera = []
    # Some initial checks before proceeding
    if check_monitor_mode(config['network_interface']):
        config['network_interface_mac'] = get_interface_mac_addr(config['network_interface'])
        # Hard coded interface name here. Need a better solution...
        config['network_address'] = get_network_address('wlan0')
    else:
        exit_error('Interface %s does not exist, is not in monitor mode, is not up or MAC address unknown.' % config['network_interface'])
    if not os.geteuid() == 0:
        exit_error('%s must be run as root' % sys.argv[0])
    # Now begin importing slow modules and setting up camera, Telegram and threads
    import picamera
    logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
    from scapy.all import srp, Ether, ARP
    from scapy.all import conf as scapy_conf
    scapy_conf.promisc = 0
    scapy_conf.sniff_promisc = 0
    import telegram
    from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, RegexHandler
    from threading import Thread, current_thread
    from PIL import Image
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(32, GPIO.OUT, initial=False)
    try:
        camera = picamera.PiCamera()
        camera.resolution = config['camera_image_size']
        camera.vflip = config['camera_vflip']
        camera.hflip = config['camera_hflip']
        camera.led = False
    except Exception as e:
        exit_error('Camera module failed to initialise with error %s' % e)
    try:
        bot = telegram.Bot(token=config['telegram_bot_token'])
    except Exception as e:
        exit_error('Failed to connect to Telegram with error: %s' % e)
    # Set the initial alarm_state dictionary
    alarm_state = {
        'start_time': time.time(),
        'current_state': 'disarmed',
        'previous_state': 'stopped',
        'last_state_change': time.time(),
        'last_packet': time.time(),
        'last_packet_mac': None,
        'alarm_triggered': False
    }
    # Start the threads
    telegram_bot_thread = Thread(name='telegram_bot', target=telegram_bot, kwargs={'token': config['telegram_bot_token'], 'camera_save_path': config['camera_save_path'], 'camera_capture_length': config['camera_capture_length'], 'camera_mode': config['camera_mode']})
    telegram_bot_thread.daemon = True
    telegram_bot_thread.start()
    monitor_alarm_state_thread = Thread(name='monitor_alarm_state', target=monitor_alarm_state, kwargs={'packet_timeout': config['packet_timeout'], 'network_address': config['network_address'], 'mac_addresses': config['mac_addresses']})
    monitor_alarm_state_thread.daemon = True
    monitor_alarm_state_thread.start()
    capture_packets_thread = Thread(name='capture_packets', target=capture_packets, kwargs={'network_interface': config['network_interface'], 'network_interface_mac': config['network_interface_mac'], 'mac_addresses': config['mac_addresses']})
    capture_packets_thread.daemon = True
    capture_packets_thread.start()
    process_photos_thread = Thread(name='process_photos', target=process_photos, kwargs={'network_address': config['network_address'], 'mac_addresses': config['mac_addresses']})
    process_photos_thread.daemon = True
    process_photos_thread.start()
    signal.signal(signal.SIGTERM, exit_clean)
    time.sleep(2)
    try:
        GPIO.setup(config['pir_pin'], GPIO.IN)
        GPIO.add_event_detect(config['pir_pin'], GPIO.RISING, callback=motion_detected)
        logger.info("rpi-security running")
        telegram_send_message('rpi-security running')
        while 1:
            time.sleep(100)
    except KeyboardInterrupt:
        exit_clean()
