
CHAPTER 1

INTRODUCTION
In recent years there has been increasing interest in wearable health monitoring
devices, both in research and industry. These devices are particularly important to the world’s
increasingly aging population, whose health has to be assessed regularly or monitored
continuously. For example, a third or more of the 78 million baby boomers and 34 million of
their parents may be at risk for the development of devastating diseases including
cardiovascular disease, stroke and cancer. Experts predict that presymptomatic testing could
save millions of lives and dollars in the coming decades. The implications and potential of these wearable health monitoring technologies are therefore significant.
Chatbots are a new generation of computer programs that carry out intelligent human conversation. Every chatbot typically has three parts: typed or spoken input from the user in natural language, typed or spoken output from the chatbot, and the process of passing the input through the program so that an understandable output is produced. This whole process is repeated until the end of the conversation is reached. The Patient chatbot is a system designed for patients, where they can ask any doctor-related question such as upcoming events, contact details, the latest trending course, and so on. Even if the patient does not frame the sentence properly, the system understands the query and answers accordingly. The user does not need to follow any specific format to ask questions. NLP (Natural Language Processing) concepts are used, which are concerned with programming computers so that natural language is processed and used to produce output for the user. The purpose of a chatbot system is to simulate a human conversation; the chatbot architecture integrates a computational algorithm and a language model to emulate informal chat communication between a human user and a computer using natural language.
OBJECTIVE:

 The main objective of this project is to protect heart patients from critical situations. Through sensor detection, the user can easily track the patient's health performance.
 The system enrols the user's details, such as the patient's heart rate range, diabetes details, and so on.
 Based on these details, critical situations for the patient can be predicted early.

CHAPTER 2
SYSTEM STUDY
2. PROBLEM DEFINITION
Abnormal measurements must be excluded to avoid unnecessary intervention by healthcare professionals. With continuous monitoring, the amount of physiological data collected from monitored patients becomes large and intractable. Real-time processing using a lightweight algorithm is required to detect abnormal values and to distinguish between a patient's health degradation and faulty measurements.

2.1 Existing System
In our hospital, only the manual way of asking queries to the appropriate staff exists, which is inconvenient for patients since they cannot clarify their doubts at the time they need to. Retrieval-based models (the easier kind) use a repository of predefined responses and some kind of heuristic to pick an appropriate response based on the input and context. The heuristic could be as simple as a rule-based expression match, or as complex as an ensemble of machine learning classifiers. These systems don't generate any new text; they just pick a response from a fixed set. Retrieval-based methods don't make grammatical mistakes. However, they may be unable to handle unseen cases for which no appropriate predefined response exists. For the same reason, these models can't refer back to contextual entity information such as names mentioned earlier in the conversation.
Disadvantages
 It only responds to predefined keywords.

 Staff details are difficult to update, and the process is time consuming.

2.2 Proposed System
This chatbot automates the existing manual responding system, thereby making the existing system simpler. The Heart Patient Monitoring project is built using artificial intelligence algorithms that analyse users' queries and understand users' messages. The system is a web application which provides answers to the patients' queries. The patient just has to query through the bot, which is used for chatting. Patients can chat using any format; there is no specific format the user has to follow. The system uses built-in artificial intelligence to answer the query, and the answers are appropriate to what the user asks. The user can query any heart-function-related activity through the system and does not have to go to the hospital in person to consult a doctor. The system analyses the question and then answers the user, as if the query were answered by a person. With the help of artificial intelligence, the system answers the queries asked by the patients through an effective graphical user interface, which gives the impression that a real person is talking to the user. The user just has to register with the system and then log in. After logging in, the user can access the various helping pages, which contain the bot through which the user can chat by asking queries related to symptoms of diseases. The user can ask about disease-related matters online with the help of this web application, for example doctor-related disease symptoms such as heartbeat rate, blood pressure level, and so on.
Advantages:
• Accurate feedback to the user at any time
• Able to detect a heart attack
• Efficient alert systems
Naïve Bayes algorithm:
It is a classification technique based on Bayes' Theorem with an assumption of
independence among predictors. In simple terms, a Naive Bayes classifier assumes that the
presence of a particular feature in a class is unrelated to the presence of any other feature.
In machine learning we are often interested in selecting the best hypothesis (h) given data (d). In a classification problem, our hypothesis (h) may be the class to assign to a new data instance (d). One of the easiest ways of selecting the most probable hypothesis is to use the data that we have together with our prior knowledge about the problem. Bayes' Theorem provides a way to calculate the probability of a hypothesis given our prior knowledge.

Bayes' Theorem is stated as:

P(h|d) = (P(d|h) * P(h)) / P(d)

Where
 P(h|d) is the probability of hypothesis h given the data d. This is called the posterior
probability.
 P(d|h) is the probability of data d given that the hypothesis h was true.
 P(h) is the probability of hypothesis h being true (regardless of the data). This is called the
prior probability of h.
 P(d) is the probability of the data (regardless of the hypothesis).
You can see that we are interested in calculating the posterior probability P(h|d) from the prior probability P(h) together with P(d) and P(d|h).
After calculating the posterior probability for a number of different hypotheses, you
can select the hypothesis with the highest probability. This is the maximum probable
hypothesis and may formally be called the maximum a posteriori (MAP) hypothesis.
This can be written as:
MAP(h) = max(P(h|d))
or
MAP(h) = max((P(d|h) * P(h)) / P(d))
or
MAP(h) = max(P(d|h) * P(h))
The P(d) is a normalizing term which allows us to calculate the probability. We can
drop it when we are interested in the most probable hypothesis as it is constant and only used
to normalize.

Back to classification: if we have an equal number of instances in each class in our training data, then the probability of each class (e.g. P(h)) will be equal. Again, this would be a constant term in our equation and we could drop it, so that we end up with:
MAP(h) = max(P(d|h))
This is a useful exercise, because when reading up further on Naive Bayes you may
see all of these forms of the theorem.
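To make the MAP calculation concrete, here is a minimal sketch in Python; the class counts and the likelihood values P(d|h) are invented purely for illustration.

# Minimal sketch of the MAP calculation (illustrative counts only).

# Hypothetical training counts for two classes.
counts = {"at_risk": 30, "healthy": 70}          # P(h) numerators
likelihoods = {"at_risk": 0.8, "healthy": 0.1}   # assumed P(d|h) for one observation d

total = sum(counts.values())

# Score each hypothesis by P(d|h) * P(h); P(d) is dropped because it is
# the same for every hypothesis and only normalizes the result.
scores = {h: likelihoods[h] * (counts[h] / total) for h in counts}

map_hypothesis = max(scores, key=scores.get)
print(scores)           # {'at_risk': 0.24, 'healthy': 0.07}
print(map_hypothesis)   # 'at_risk'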
Naive Bayes Classifier:
Naive Bayes is a classification algorithm for binary (two-class) and multi-class
classification problems. The technique is easiest to understand when described using binary
or categorical input values.
It is called naive Bayes (or idiot Bayes) because the calculation of the probabilities for each hypothesis is simplified to make it tractable. Rather than attempting to calculate the value of each attribute combination P(d1, d2, d3|h), the attributes are assumed to be conditionally independent given the target value and calculated as P(d1|h) * P(d2|h) and so on.
This is a very strong assumption that is most unlikely in real data, i.e. that the
attributes do not interact. Nevertheless, the approach performs surprisingly well on data
where this assumption does not hold.
Representation Used By Naive Bayes Models
The representation for naive Bayes is probabilities.
A list of probabilities is stored to file for a learned naive Bayes model.
This includes:

 Class Probabilities: The probabilities of each class in the training dataset.


 Conditional Probabilities: The conditional probabilities of each input value given
each class value.
Learn a Naive Bayes Model From Data
Learning a naive Bayes model from your training data is fast.

Training is fast because only the probability of each class and the probability of each
class given different input (x) values need to be calculated. No coefficients need to be fitted
by optimization procedures.

Calculating Class Probabilities
The class probabilities are simply the frequency of instances that belong to each class
divided by the total number of instances.
For example in a binary classification the probability of an instance belonging to class
1 would be calculated as:
P(class=1) = count(class=1) / (count(class=0) + count(class=1))
In the simplest case each class would have the probability of 0.5 or 50% for a binary
classification problem with the same number of instances in each class.
Calculating Conditional Probabilities
The conditional probabilities are the frequency of each attribute value for a given
class value divided by the frequency of instances with that class value.

(Figure: taxonomy of machine learning. Supervised learning covers statistical and probabilistic classification, including the Naïve Bayes classifier, as well as regression; unsupervised learning covers clustering and association rule discovery.)
For example, if a "weather" attribute had the values "sunny" and "rainy" and the class
attribute had the class values "go-out" and "stay-home", then the conditional probabilities of
each weather value for each class value could be calculated as:
 P(weather=sunny|class=go-out) = count(instances with weather=sunny and class=go-
out) / count(instances with class=go-out)
 P(weather=sunny|class=stay-home) = count(instances with weather=sunny and
class=stay-home) / count(instances with class=stay-home)
 P(weather=rainy|class=go-out) = count(instances with weather=rainy and class=go-out) /
count(instances with class=go-out)
 P(weather=rainy|class=stay-home) = count(instances with weather=rainy and class=stay-
home) / count(instances with class=stay-home)
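The counting rules above can be expressed directly in code. The following minimal sketch computes the class probabilities and the conditional probabilities from a tiny, invented list of (weather, class) instances; the data is illustrative only.

# Minimal sketch: class and conditional probabilities from counts (toy data).
from collections import Counter

data = [("sunny", "go-out"), ("sunny", "go-out"), ("rainy", "go-out"),
        ("rainy", "stay-home"), ("sunny", "stay-home"), ("rainy", "stay-home")]

class_counts = Counter(label for _, label in data)
total = len(data)

# Class probabilities: frequency of each class divided by the total number of instances.
class_probs = {c: n / total for c, n in class_counts.items()}

# Conditional probabilities: frequency of each weather value within a class
# divided by the frequency of that class.
cond_counts = Counter((weather, label) for weather, label in data)
cond_probs = {k: v / class_counts[k[1]] for k, v in cond_counts.items()}

print(class_probs)   # {'go-out': 0.5, 'stay-home': 0.5}
print(cond_probs)    # e.g. P(weather=sunny | class=go-out) = 2/3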
Make Predictions With a Naive Bayes Model
Given a naive Bayes model, you can make predictions for new data using Bayes
theorem.
MAP(h) = max(P(d|h) * P(h))
Using our example above, if we had a new instance with the weather of sunny, we can
calculate:
go-out = P(weather=sunny|class=go-out) * P(class=go-out)
stay-home = P(weather=sunny|class=stay-home) * P(class=stay-home)
We can choose the class that has the largest calculated value. We can turn these
values into probabilities by normalizing them as follows:
P(go-out|weather=sunny) = go-out / (go-out + stay-home)
P(stay-home|weather=sunny) = stay-home / (go-out + stay-home)
If we had more input variables we could extend the above example. For example,
pretend we have a "car" attribute with the values "working" and "broken". We can multiply
this probability into the equation.
For example below is the calculation for the “go-out” class label with the addition of
the car input variable set to “working”:
go-out = P(weather=sunny|class=go-out) * P(car=working|class=go-out) * P(class=go-out)
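Continuing the same illustration, the sketch below scores each class by multiplying its conditional probabilities and prior, picks the largest score, and normalizes the scores into probabilities; the probability values for the "car" attribute are assumed purely for illustration.

# Minimal sketch: naive Bayes prediction for weather=sunny, car=working
# (the conditional probabilities below are assumed for illustration).
priors = {"go-out": 0.5, "stay-home": 0.5}
p_weather_sunny = {"go-out": 2 / 3, "stay-home": 1 / 3}
p_car_working = {"go-out": 0.8, "stay-home": 0.5}

scores = {c: p_weather_sunny[c] * p_car_working[c] * priors[c] for c in priors}
prediction = max(scores, key=scores.get)

# Normalize the scores into probabilities.
total = sum(scores.values())
posteriors = {c: s / total for c, s in scores.items()}

print(prediction)   # 'go-out'
print(posteriors)   # P(go-out | sunny, working) vs P(stay-home | sunny, working)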

Bayes algorithm working:
Naive Bayes is a machine learning algorithm for classification problems. It is based
on Bayes' probability theorem. It is primarily used for text classification which involves high
dimensional training data sets.
It is fast to build models and make predictions with the Naive Bayes algorithm.

(Figure: workflow of the algorithm. Training data is fed to the machine learning algorithm to build a classifier, which then produces predictions for new data.)

The main objective of this research is to develop a prototype Health Care Prediction System using Naive Bayes. The system can discover and extract hidden knowledge associated with diseases (heart attack, cancer and diabetes) from a historical heart disease database. As described above, Naive Bayes is a classification technique based on Bayes' Theorem with an assumption of independence among predictors: the presence of a particular feature in a class is assumed to be unrelated to the presence of any other feature, and Bayes' Theorem is used to calculate the probability of each hypothesis (class) given the data and our prior knowledge.

Advantages of the Bayes algorithm:

 Very simple, easy to implement and fast.
 If the NB conditional independence assumption holds, it will converge more quickly than discriminative models like logistic regression.
 Even if the NB assumption doesn't hold, it works well in practice.
 Needs less training data.
 Highly scalable: it scales linearly with the number of predictors and data points.
 Can be used for both binary and multi-class classification problems.
 Can make probabilistic predictions.
 Handles continuous and discrete data.
 Not sensitive to irrelevant features.
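For comparison, here is a minimal sketch of training and querying a Naive Bayes classifier with scikit-learn's GaussianNB. scikit-learn is not part of this project's stated software stack, and the feature values (heart rate, blood pressure, sugar level) and labels below are invented purely for illustration.

# Minimal sketch using scikit-learn (assumed to be installed); data is illustrative.
from sklearn.naive_bayes import GaussianNB

# Each row: [heart rate, systolic BP, sugar level]; label 1 = at risk, 0 = normal.
X = [[72, 118, 95], [130, 150, 160], [65, 110, 90], [140, 160, 200]]
y = [0, 1, 0, 1]

model = GaussianNB()
model.fit(X, y)

new_patient = [[125, 145, 150]]
print(model.predict(new_patient))        # predicted class label
print(model.predict_proba(new_patient))  # class probabilities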

CHAPTER 3
SYSTEM SPECIFICATION

3.1 HARDWARE REQUIREMENTS
 Processor : Intel processor 3.0 GHz
 RAM : 2GB
 Hard disk : 500 GB
 Compact Disk : 650 Mb
 Keyboard : Standard keyboard
 Mouse : Logitech mouse
 Monitor : 15 inch color monitor

3.2 SOFTWARE REQUIREMENTS


Component          Option 1 (PHP stack)           Option 2 (ASP.NET stack)        Option 3 (Python stack)
Front End          PHP 5                          ASP.NET code-behind (C#)        Python 2.4
Back End           MySQL                          SQL Server 2008                 NoSQL / MySQL 5.7 / Oracle / MongoDB
Operating System   Windows OS (XP, 2007, 2008)    Windows XP, 7, 8, 8.1           Windows, Linux, or Mac
Server             WAMP Server 2.5                IIS                             Apache HTTP Server or Nginx
System type        32-bit or 64-bit OS            32-bit or 64-bit OS             32-bit or 64-bit OS
IDE                Macromedia Dreamweaver 8.0     Microsoft Visual Studio 2010    NetBeans / Eclipse
DLL                Depends upon the title         Depends upon the title          Depends upon the title

3.3 SOFTWARE DESCRIPTION
PYTHON
Python is an interpreted, object-oriented, high-level programming language with
dynamic semantics. Its high-level built in data structures, combined with dynamic typing and
dynamic binding, make it very attractive for Rapid Application Development, as well as for
use as a scripting or glue language to connect existing components together. Python's simple,
easy to learn syntax emphasizes readability and therefore reduces the cost of program
maintenance. Python supports modules and packages, which encourages program modularity
and code reuse. The Python interpreter and the extensive standard library are available in
source or binary form without charge for all major platforms, and can be freely distributed.
Often, programmers fall in love with Python because of the increased productivity it
provides. Since there is no compilation step, the edit-test-debug cycle is incredibly fast.
Debugging Python programs is easy: a bug or bad input will never cause a segmentation
fault. Instead, when the interpreter discovers an error, it raises an exception. When the
program doesn't catch the exception, the interpreter prints a stack trace. A source level
debugger allows inspection of local and global variables, evaluation of arbitrary expressions,
setting breakpoints, stepping through the code a line at a time, and so on. The debugger is
written in Python itself, testifying to Python's introspective power. On the other hand, often
the quickest way to debug a program is to add a few print statements to the source: the fast
edit-test-debug cycle makes this simple approach very effective.
Comparing Python to Other Languages
Python is often compared to other interpreted languages such as Java, JavaScript,
Perl, Tcl, or Smalltalk. Comparisons to C++, Common Lisp and Scheme can also be
enlightening. In this section I will briefly compare Python to each of these languages. These
comparisons concentrate on language issues only. In practice, the choice of a programming
language is often dictated by other real-world constraints such as cost, availability, training,
and prior investment, or even emotional attachment. Since these aspects are highly variable, it
seems a waste of time to consider them much for this comparison.
Java:
Python programs are generally expected to run slower than Java programs, but they
also take much less time to develop. Python programs are typically 3-5 times shorter than
equivalent Java programs. This difference can be attributed to Python's built-in high-level
data types and its dynamic typing. For example, a Python programmer wastes no time
declaring the types of arguments or variables, and Python's powerful polymorphic list and
dictionary types, for which rich syntactic support is built straight into the language, find a use
in almost every Python program. Because of the run-time typing, Python's run time must
work harder than Java's. For example, when evaluating the expression a+b, it must first
inspect the objects a and b to find out their type, which is not known at compile time. It then
invokes the appropriate addition operation, which may be an overloaded user-defined
method. Java, on the other hand, can perform an efficient integer or floating point addition,
but requires variable declarations for a and b, and does not allow overloading of the +
operator for instances of user-defined classes.
For these reasons, Python is much better suited as a "glue" language, while Java is
better characterized as a low-level implementation language. In fact, the two together make
an excellent combination. Components can be developed in Java and combined to form
applications in Python; Python can also be used to prototype components until their design
can be "hardened" in a Java implementation. To support this type of development, a Python
implementation written in Java is under development, which allows calling Python code from
Java and vice versa. In this implementation, Python source code is translated to Java bytecode
(with help from a run-time library to support Python's dynamic semantics).
Javascript:
Python's "object-based" subset is roughly equivalent to JavaScript. Like JavaScript
(and unlike Java), Python supports a programming style that uses simple functions and
variables without engaging in class definitions. However, for JavaScript, that's all there is.
Python, on the other hand, supports writing much larger programs and better code reuse
through a true object-oriented programming style, where classes and inheritance play an
important role.
Perl:
Python and Perl come from a similar background (Unix scripting, which both have
long outgrown), and sport many similar features, but have a different philosophy. Perl
emphasizes support for common application-oriented tasks, e.g. by having built-in regular
expressions, file scanning and report generating features. Python emphasizes support for
common programming methodologies such as data structure design and object-oriented
programming, and encourages programmers to write readable (and thus maintainable) code
by providing an elegant but not overly cryptic notation. As a consequence, Python comes
close to Perl but rarely beats it in its original application domain; however Python has an
applicability well beyond Perl's niche.
Tcl:
Like Python, Tcl is usable as an application extension language, as well as a stand-
alone programming language. However, Tcl, which traditionally stores all data as strings, is
weak on data structures, and executes typical code much slower than Python. Tcl also lacks
features needed for writing large programs, such as modular namespaces. Thus, while a
"typical" large application using Tcl usually contains Tcl extensions written in C or C++ that
are specific to that application, an equivalent Python application can often be written in "pure
Python". Of course, pure Python development is much quicker than having to write and
debug a C or C++ component. It has been said that Tcl's one redeeming quality is the Tk
toolkit. Python has adopted an interface to Tk as its standard GUI component library.
Tcl 8.0 addresses the speed issue by providing a bytecode compiler with limited data
type support, and adds namespaces. However, it is still a much more cumbersome
programming language.
Smalltalk:
Perhaps the biggest difference between Python and Smalltalk is Python's more
"mainstream" syntax, which gives it a leg up on programmer training. Like Smalltalk, Python
has dynamic typing and binding, and everything in Python is an object. However, Python
distinguishes built-in object types from user-defined classes, and currently doesn't allow
inheritance from built-in types. Smalltalk's standard library of collection data types is more
refined, while Python's library has more facilities for dealing with Internet and WWW
realities such as email, HTML and FTP.
Python has a different philosophy regarding the development environment and
distribution of code. Where Smalltalk traditionally has a monolithic "system image" which
comprises both the environment and the user's program, Python stores both standard modules
and user modules in individual files which can easily be rearranged or distributed outside the
system. One consequence is that there is more than one option for attaching a Graphical User
Interface (GUI) to a Python program, since the GUI is not built into the system.
C++:
Almost everything said for Java also applies for C++, just more so: where Python
code is typically 3-5 times shorter than equivalent Java code, it is often 5-10 times shorter
than equivalent C++ code! Anecdotal evidence suggests that one Python programmer can

13
finish in two months what two C++ programmers can't complete in a year. Python shines as a
glue language, used to combine components written in C++.
Common Lisp and Scheme:
These languages are close to Python in their dynamic semantics, but so different in
their approach to syntax that a comparison becomes almost a religious argument: is Lisp's
lack of syntax an advantage or a disadvantage? It should be noted that Python has
introspective capabilities similar to those of Lisp, and Python programs can construct and
execute program fragments on the fly. Usually, real-world properties are decisive: Common
Lisp is big (in every sense), and the Scheme world is fragmented between many incompatible
versions, where Python has a single, free, compact implementation.
Execute Python Syntax:
As we learned in the previous page, Python syntax can be executed by writing directly
in the Command Line:
>>> print("Hello, World!")
Hello, World!
Or by creating a python file on the server, using the .py file extension, and running it
in the Command Line:
C:\Users\Your Name>python myfile.py
Python Indentations:
Where in other programming languages the indentation in code is for readability only,
in Python the indentation is very important.
Python uses indentation to indicate a block of code.
Example
if 5 > 2:
  print("Five is greater than two!")
MySQL :
MySQL is the world's most used open source relational database management system (RDBMS) as of 2008, running as a server and providing multi-user access to a number of databases.

The MySQL development project has made its source code available under the terms
of the GNU General Public License, as well as under a variety of proprietary agreements.
MySQL was owned and sponsored by a single for-profit firm, the Swedish company MySQL
AB, now owned by Oracle Corporation.

MySQL is a popular choice of database for use in web applications, and is a central
component of the widely used LAMP open source web application software stack—LAMP is
an acronym for "Linux, Apache, MySQL, Perl/PHP/Python." Free-software-open source
projects that require a full-featured database management system often use MySQL.
For commercial use, several paid editions are available, and offer additional
functionality. Applications which use MySQL databases include: TYPO3, Joomla,
WordPress, phpBB, MyBB, Drupal and other software built on the LAMP software stack.
MySQL is also used in many high-profile, large-scale World Wide Web products, including Wikipedia, Google (though not for searches), Facebook, Twitter, Flickr, Nokia.com, and YouTube.

Interfaces:
MySQL is primarily an RDBMS and ships with no GUI tools to administer MySQL
databases or manage data contained within the databases. Users may use the included
command line tools, or use MySQL "front-ends", desktop software and web applications that
create and manage MySQL databases, build database structures, back up data, inspect status,
and work with data records. The official set of MySQL front-end tools, MySQL Workbench
is actively developed by Oracle, and is freely available for use.
Graphical:
The official MySQL Workbench is a free integrated environment developed by
MySQL AB, that enables users to graphically administer MySQL databases and visually
design database structures. MySQL Workbench replaces the previous package of software,
MySQL GUI Tools. Similar to other third-party packages, but still considered the
authoritative MySQL frontend, MySQL Workbench lets users manage database design &
modeling, SQL development (replacing MySQL Query Browser) and Database
administration (replacing MySQL Administrator).
MySQL Workbench is available in two editions: the regular free and open source Community Edition, which may be downloaded from the MySQL website, and the proprietary Standard Edition, which extends and improves the feature set of the Community Edition.
Command line:
MySQL ships with some command line tools. Third-parties have also developed tools to
manage a MySQL server, some listed below.

 Maatkit - a cross-platform toolkit for MySQL, PostgreSQL and Memcached, developed in Perl. Maatkit can be used to prove that replication is working correctly, fix corrupted data, automate repetitive tasks, and speed up servers. Maatkit is included with several GNU/Linux distributions such as CentOS and Debian, and packages are available for Fedora and Ubuntu as well. As of late 2011, Maatkit is no longer developed, but Percona has continued development under the Percona Toolkit brand.
Programming:
MySQL works on many different system platforms, including AIX, BSDi, FreeBSD,
HP-UX, eComStation, i5/OS, IRIX, Linux, Mac OS X, Microsoft Windows, NetBSD, Novell
NetWare, OpenBSD, OpenSolaris, OS/2 Warp, QNX, Solaris, Symbian, SunOS, SCO
OpenServer, SCO UnixWare, Sanos and Tru64. A port of MySQL to OpenVMS also exists.

MySQL is written in C and C++. Its SQL parser is written in yacc, and a home-
brewed lexical analyzer. Many programming languages with language-specific APIs
include libraries for accessing MySQL databases. These include MySQL Connector/Net for
integration with Microsoft's Visual Studio (languages such as C# and VB are most commonly
used) and the JDBC driver for Java. In addition, an ODBC interface called MyODBC allows
additional programming languages that support the ODBC interface to communicate with a
MySQL database, such as ASP or ColdFusion. The HTSQL - URL-based query method also
ships with a MySQL adapter, allowing direct interaction between a MySQL database and any
web client via structured URLs.
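As a small illustration of accessing MySQL from Python (the approach used in this project's sample code), the sketch below connects with the mysql.connector library and runs a simple parameterized query against the register table described later; the connection parameters and schema are assumptions that must be adapted to the actual database.

# Minimal sketch of querying MySQL from Python (placeholder credentials; schema assumed).
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root",
                               passwd="", database="heartbeat")
cursor = conn.cursor()
cursor.execute("SELECT Name, Contact FROM register WHERE Id = %s", (1,))
for row in cursor.fetchall():
    print(row)
conn.close()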
Features:
As of April 2009, MySQL offered MySQL 5.1 in two different variants: the open source
MySQL Community Server and the commercial Enterprise Server. MySQL 5.5 is offered
under the same licences. They have a common code base and include the following features:

 A broad subset of ANSI SQL 99, as well as extensions


 Cross-platform support
 Stored procedures
 Triggers
 Cursors
 Updatable Views
 Information schema

 Strict mode (ensures MySQL does not truncate or otherwise modify data to conform
to an underlying data type, when an incompatible value is inserted into that type)
 X/Open XA distributed transaction processing (DTP) support; two-phase commit as
part of this, using Oracle's InnoDB engine
 Independent storage engines (MyISAM for read speed, InnoDB for transactions and
referential integrity, MySQL Archive for storing historical data in little space)
 Transactions with the InnoDB, and Cluster storage engines; savepoints with InnoDB
 SSL support
 Query caching
 Sub-SELECTs (i.e. nested SELECTs)
 Replication support (i.e. Master-Master Replication & Master-Slave Replication) with
one master per slave, many slaves per master, no automatic support for multiple
masters per slave.
 Full-text indexing and searching using MyISAM engine
 Embedded database library
 Unicode support (however prior to 5.5.3 UTF-8 and UCS-2 encoded strings are
limited to the BMP, in 5.5.3 and later use utf8mb4 for full unicode support)
 ACID compliance when using transaction capable storage engines (InnoDB and
Cluster)
 Partitioned tables with pruning of partitions in the optimiser
 Shared-nothing clustering through MySQL Cluster
 Hot backup (via mysqlhotcopy) under certain conditions
 Multiple storage engines, allowing one to choose the one that is most effective for
each table in the application (in MySQL 5.0, storage engines must be compiled in; in
MySQL 5.1, storage engines can be dynamically loaded at run time):

 Native storage engines (MyISAM, Falcon, Merge, Memory (heap), Federated,


Archive, CSV, Blackhole, Cluster, EXAMPLE, Maria, and InnoDB, which
was made the default as of 5.5)
 Partner-developed storage engines (solidDB, NitroEDB, ScaleDB, TokuDB,
Infobright (formerly Brighthouse), Kickfire, XtraDB, IBM DB2). InnoDB
used to be a partner-developed storage engine, but with recent acquisitions,
Oracle now owns both MySQL core and InnoDB.

 Community-developed storage engines (memcache engine, httpd, PBXT,
Revision Engine)
 Custom storage engines

 Commit grouping, gathering multiple transactions from multiple connections together


to increase the number of commits per second. (PostgreSQL has an advanced form of
this functionality)

The developers release monthly versions of the MySQL Server. The sources can be
obtained from MySQL's website or from MySQL's Bazaar repository, both under the GPL
license.

CHAPTER 4
SYSTEM ANALYSIS
4.1 FEASIBILITY STUDY:
Depending on the results of the initial investigation, the survey is now expanded to a more detailed feasibility study. A "FEASIBILITY STUDY" is a test of the system proposal according to its workability, impact on the organization, ability to meet needs and effective use of resources. It focuses on these major questions:
 What are the user's demonstrable needs and how does a candidate system meet them?
 What resources are available for the given candidate system?
 What are the likely impacts of the candidate system on the organization?
 Is the problem worth solving?

During the feasibility analysis for this project, the following primary areas of interest were considered. This is done by investigating the current situation and generating ideas about the new system.
4.2 Technical feasibility:
This is a study of resource availability that may affect the ability to achieve an acceptable system. The evaluation determines whether the technology needed for the proposed system is available or not.
 Can the work for the project be done with the current equipment, existing software technology and available personnel?
 Can the system be upgraded if developed?
 If new technology is needed, what is the likelihood that it can be developed?
4.3 Economical feasibility:
Economic justification is generally the "bottom line" consideration for most systems. Economic justification covers a broad range of concerns, including cost-benefit analysis. Here we weigh the cost and the benefits associated with the candidate system, and if it suits the basic purpose of the organization, i.e. profit making, the project moves to the analysis and design phase. The financial and economic questions raised during the preliminary investigation are verified to estimate the following:
• The cost to conduct a full system investigation.
• The cost of hardware and software for the class of application being considered.
• The benefits in the form of reduced cost.
• The proposed system will give detailed information; as a result, performance is improved, which in turn may be expected to provide increased profits.
• This feasibility check determines whether the system can be developed with the available funds. The Hospital Management System does not require an enormous amount of money to be developed. This can be done economically if planned judiciously, so it is economically feasible. The cost of the project depends upon the number of man-hours required.
4.4 Operational Feasibility:
This is mainly related to human organizational and political aspects. The points to be considered are:
 What changes will be brought in with the system?
 What organizational structures are disturbed?
 What new skills will be required? Do the existing staff members have these skills? If not, can they be trained in due course of time?

The system is operationally feasible as it is very easy for the end users to operate. It only requires basic familiarity with the Windows platform.
4.5 Schedule feasibility:
Time evaluation is the most important consideration in the development of a project. The time schedule required for the development of this project is very important, since more development time affects machine time and cost and causes delays in the development of other systems. A reliable Hospital Management System can be developed in a reasonable amount of time.

CHAPTER 5
SYSTEM DESIGN

System design is the phase that bridges the gap between problem domain and the
existing system in a manageable way. This phase focuses on the solution domain, i.e. “how
to implement?”. It is the phase where the SRS document is converted into a format that can
be implemented and decides how the system will operate.

In this phase, the complex activity of system development is divided into several smaller
sub-activities, which coordinate with each other to achieve the main objective of system
development.

(Figure: system design activities. These include identifying design goals, system decomposition, identification of concurrency, hardware allocation, data management, global resource handling, software control, and boundary conditions.)

5.1 Inputs to System Design
System design takes the following inputs −

 Statement of work
 Requirement determination plan
 Current situation analysis
 Proposed system requirements including a conceptual data model, modified DFDs,
and Metadata (data about data).

5.2 Output Design:

The physical design of the database specifies the physical configuration of the database on the storage media. This includes detailed specification of data elements, data types, index options, and other parameters residing in the DBMS data dictionary. It is the detailed design of the system, including the modules and the database's hardware and software specification.
Computer output is the most important and direct source of information to the user. Efficient, intelligent output design improves the system's relationship with the user and helps in decision making. A major form of output is the hard copy from the printer. The output devices to consider depend on factors such as compatibility of the devices with the system, response time requirements, expected print quality and the number of copies needed.
Output design for this software includes:
 Articles which are unwanted but have not yet reached their expiry dates are displayed and manually removed by the user.
 A form is provided to filter and search the various newsgroup topics.
 A form is provided to connect to the various news sites.

5.3 System Architecture:

(Figure: system architecture. In the training phase and for live patient queries, each patient query passes through pre-processing, clustering, and feature extraction; the trained data is then matched against the extracted features to produce the optimal result.)

5.4 Modules description


There are four main modules:
 User details
 Symptoms Identification
 Health Analysis
 Suggestion given
User details:
In this module, users register their details through the web application. The registration phase collects the name, age, height, weight and so on. Based on this registration information, the doctor knows the details about the users. The user registers details such as user name, patient details, contact details and pressure details.

(Figure: the user interface provides registration and login.)
Symptoms identification:
This module is used to create the heart-related questions, such as: What is your heart beat rate? What is your blood pressure level? What is your sugar level? and so on.

(Figure: the user's input is passed to the chatbot.)


Heart rate analysis:
In this module, the given inputs are analysed against a predefined dataset. The system then provides the appropriate solution for the relevant data.
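A minimal sketch of how this analysis can be expressed in code is given below; the thresholds (70 and 120 bpm) follow the sample code in the appendix, and the returned labels are illustrative.

# Minimal sketch: classify a heart rate reading against predefined ranges
# (thresholds follow the sample code later in this report; adjust as needed).
def analyse_heart_rate(bpm):
    if bpm < 70:
        return "low"      # below the predefined lower bound
    elif bpm > 120:
        return "high"     # above the predefined upper bound
    return "normal"

print(analyse_heart_rate(65))    # low
print(analyse_heart_rate(95))    # normal
print(analyse_heart_rate(130))   # high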

Suggestion Given:
In this module, the system checks the patient's heart range and sends a message to the patient with a first aid treatment suggestion.

(Figure: the user's search query is compared against the stored query and content data for similarity prediction.)

5.5 DATABASE DESIGN

Data Dictionary

A DBMS component that stores metadata. The data dictionary contains the data definition
and its characteristics and entity relationships. This may include the names and descriptions
of the various tables and fields within the database.

Field Name          Datatype       Description
Id                  int(11)        Specifies the record identifier
Uname               varchar(30)    Specifies the username
Pass                int(11)        Specifies the password
Status              varchar(20)    Specifies the status
Rtime               varchar(20)    Specifies the rtime
Name                varchar(20)    Specifies the name
Contact             varchar(20)    Specifies the contact number
Email               varchar(20)    Specifies the email address
secret_key          varchar(20)    Specifies the secret key
Bp value            int(3)         Specifies the patient's BP value
Heart rate value    int(3)         Specifies the patient's heart rate value
Sugar value         int(3)         Specifies the patient's sugar value

Table name: admin

Field        Type           Null    Default
username     varchar(30)    Yes     NULL
password     varchar(30)    Yes     NULL

Table name: register

Field        Type           Null    Default
Id           int(11)        Yes     NULL
Name         varchar(30)    Yes     NULL
Contact      bigint(20)     Yes     NULL
Email        varchar(30)    Yes     NULL
secret_key   varchar(30)    Yes     NULL
Uname        varchar(30)    Yes     NULL
Pass         varchar(30)    Yes     NULL

Table name: patient

Field              Type      Null    Default
Bp value           int(3)    Yes     NULL
Heart rate value   int(3)    Yes     NULL
Sugar value        int(3)    Yes     NULL
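To make the table definitions above concrete, here is a minimal sketch that creates the register and patient tables from Python with mysql.connector. The column types follow the tables above (column names containing spaces are written with underscores), and the connection parameters mirror the sample code in the appendix; an actual deployment may differ.

# Minimal sketch: create the register and patient tables (types follow the tables above).
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root",
                               passwd="", database="heartbeat")
cur = conn.cursor()

cur.execute("""CREATE TABLE IF NOT EXISTS register (
    Id INT(11), Name VARCHAR(30), Contact BIGINT(20), Email VARCHAR(30),
    secret_key VARCHAR(30), Uname VARCHAR(30), Pass VARCHAR(30))""")

cur.execute("""CREATE TABLE IF NOT EXISTS patient (
    bp_value INT(3), heart_rate_value INT(3), sugar_value INT(3))""")

conn.commit()
conn.close()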

Table relationship
Normalization is the process of structuring a relational database schema such that most ambiguity is removed. The stages of normalization are referred to as normal forms and progress from the least restrictive (first normal form) through the most restrictive (fifth normal form); generally, most database designers do not attempt to implement anything higher than third normal form or Boyce-Codd Normal Form.
Types of Normal Form
First Normal Form
A relation is said to be in First Normal Form (1NF) if each attribute of the relation is atomic. More simply, to be in 1NF, each column must contain only a single value and each row must contain the same columns.
Second Normal Form
For the Second Normal Form, a relation must first fulfil the requirements to be in First Normal Form. Additionally, each non-key attribute in the relation must be functionally dependent upon the primary key.
Database Design:
Database design is the process of producing a detailed data model of a database. This logical data model contains all the needed logical and physical design choices and physical storage parameters needed to generate a design in a data definition language, which can then be used to create a database. A fully attributed data model contains detailed attributes for each entity.

The term database design can be used to describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views; in an object database the entities and relationships map directly to object classes and named relationships. However, the term database design could also be used to apply to the overall process of design, not just the base data structures, but also the forms and queries used as part of the overall database application within the Database Management System (DBMS).
Design process
The process of doing database design generally consists of a number of steps which will be carried out by the database designer. Not all of these steps will be necessary in all cases. Usually the designer must:
 Determine the data to be stored in the database
 Determine the relationships between the different elements
 Superimpose a logical structure upon the data on the basis of these relationships.
Within the relational model, the final step can generally be broken down into two further steps: determine the grouping of information within the system (generally determining what the basic objects are about which information is being stored), and then determine the relationships between these groups of information, or objects. This step is not necessary with an object database.
A tree structure of data may enforce a hierarchical organization with a parent-child relationship table; an object database will simply use a one-to-many relationship between instances of an object class. It also introduces the concept of a hierarchical relationship between object classes, termed inheritance.
Determining data to be stored:
In the majority of cases, the person doing the design of a database is a person with expertise in the area of database design, rather than expertise in the domain from which the data to be stored is drawn, e.g. financial information, biological information, etc. Therefore the data to be stored in the database must be determined in cooperation with a person who does have expertise in that domain and who is aware of what data must be stored within the system.
This process is one which is generally considered part of requirements analysis, and it requires skill on the part of the database designer to elicit the needed information from those with the domain knowledge. This is because those with the necessary domain knowledge frequently cannot express clearly what their system requirements for the database are, as they are unaccustomed to thinking in terms of the discrete data elements which must be stored. The data to be stored can be determined by the requirement specification.

Conceptual Schema
Main article: Conceptual Schema
Once a database designer is aware of the data which is to be stored within the database, they must then determine how the various pieces of that data relate to one another. When performing this step, the designer is generally looking for the dependencies in the data, where one piece of information is dependent upon another, i.e. when one piece of information changes, the other will also change. For example, in a list of names and addresses, assuming the normal situation where two people can have the same address but one person cannot have two addresses, the name is dependent upon the address: if the address is different, then the associated name is different too. However, the inverse is not necessarily true, i.e. when the name changes the address may be the same.
(NOTE: A common misconception is that the relational model is so called because of the stating of relationships between data elements therein. This is not true. The relational model is so named because it is based upon the mathematical structures known as relations.)
Logical Data:
Once the relationships and dependencies amongst the various pieces of information have been determined, it is possible to arrange them into a logical structure which can then be mapped into the storage objects supported by the database management system. In the case of relational databases, the storage objects are tables which store data in rows and columns.
Each table may represent an implementation of either a logical object or a relationship joining one or more instances of one or more logical objects. Relationships between tables may be stored as links connecting child tables with parents. Since complex logical relationships are themselves tables, they will probably have links to more than one parent.
In an object database, the storage objects correspond directly to the objects used by the object-oriented programming language used to write the application that will manage and access the data. The relationships may be defined as attributes of the object classes involved or as methods that operate on the object classes.

CHAPTER 6
TESTING
Testing is the process of checking that a software system meets its specifications and fulfils its intended purpose. It may also be referred to as software quality control. The testing activity is used to identify and fix errors. In this proposed system, some errors could be found; these errors are viewed and then analysed using verification and validation processes. After evaluating the errors, they are fixed.

• Unit Test.
• System Test
• Integration Test
• Functional Test
• Performance Test
• Beta Test
• Acceptance Test.
6.1 Unit Testing:
Unit testing is a software development process in which the smallest testable parts of
an application, called units, are individually and independently scrutinized for proper
operation. Unit testing is often automated but it can also be done manually. This testing mode
is a component of Extreme Programming (XP), a pragmatic method of software development
that takes a meticulous approach to building a product by means of continual testing and
revision.
In this project, each and every module is tested separately: whether the user receives the transaction, withdrawal and deposit results correctly, and whether the admin can update or modify the user details correctly. Each and every module is checked by the developer.
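As an illustration of module-level testing, here is a minimal unit test sketch for the heart rate threshold logic; the analyse_heart_rate helper is a hypothetical function (matching the thresholds used in the sample code), not part of the project's actual source.

# Minimal unit test sketch (analyse_heart_rate is a hypothetical helper).
import unittest

def analyse_heart_rate(bpm):
    if bpm < 70:
        return "low"
    elif bpm > 120:
        return "high"
    return "normal"

class TestHeartRateAnalysis(unittest.TestCase):
    def test_low(self):
        self.assertEqual(analyse_heart_rate(60), "low")

    def test_normal(self):
        self.assertEqual(analyse_heart_rate(80), "normal")

    def test_high(self):
        self.assertEqual(analyse_heart_rate(130), "high")

if __name__ == "__main__":
    unittest.main()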

6.2 Integration Testing:

Integration testing is used to verify the combining of the software modules.
Integration testing addresses the issues associated with the dual problems of verification and
program construction. System testing is used to verify whether the developed system meets
the requirements.

The purpose of integration testing is to verify functional, performance, and


reliability requirements placed on major design items. These "design items", i.e. assemblages
(or groups of units), are exercised through their interfaces using black box testing, success
and error cases being simulated via appropriate parameter and data inputs. Simulated usage of
shared data areas and inter-process communication is tested and individual subsystems are
exercised through their input interface. Tests are constructed to test whether all the
components within assemblages interact correctly, for example across procedure calls or
process activations, and this is done after testing individual modules, i.e. unit testing. The
overall idea is a "building block" approach, in which verified assemblages are added to a
verified base which is then used to support the integration testing of further assemblages.
Some different types of integration testing are big bang, top-down, and bottom-up. Other
Integration Patterns are: Collaboration Integration, Backbone Integration, Layer Integration,
Client/Server Integration, Distributed Services Integration and High-frequency Integration.
6.3 System testing
It is a critical aspect of Software Quality Assurance and represents the ultimate review
of specification, design and coding. Testing is a process of executing a program with the
intent of finding an error. A good test is one that has a probability of finding an as yet
undiscovered error. The purpose of testing is to identify and correct bugs in the developed
system.
The developer checks whether the program runs successfully on all the operating systems.
6.4 Validation Testing:
This is the process of using the new software for the developed system in a live environment, i.e. new software inside the organization, in order to find out the errors. The validation phase reveals the failures and bugs in the developed system and brings to light the practical difficulties the system faces when operated in the true environment.
Validation testing was applied to each and every form in the project. For example, in the login form, only a valid user is allowed to view the website.
6.5 Verification Testing:
Verification testing checks whether the software is being built according to its specifications and design, i.e. whether each stage of development correctly implements what was specified. It complements the validation testing described above, which checks the behaviour of the finished system in a live environment.
CHAPTER 7
IMPLEMENTATION AND MAINTENANCE
Software maintenance
Instructions for the developer
1. Save all the programs that make up the web application in a folder named ‘Novel-
Duplicate-Page’ and place the folder in C:\wamp\www directory in a system
connected to a local network. This system is the Web server for the web
application. Note the IP address of the server
2. Start up WAMP Server

TO RUN THE WEB APPLICATION:


1. Open a web browser in any system connected to the local network to which the
server is connected.
2. If the server's IP address is 192.168.1.121 or its hostname is localhost, then type the
server's address in the browser's address bar.
3. The browser opens a web page containing the home page of the application, with
which the user can proceed.

CHAPTER 8
Conclusion
The proposed system would be a stepping stone towards having in place an intelligent query handling program. An intelligent question answering system has been developed using the Naïve Bayesian concept. The system is capable of answering the patient's queries in an interactive way using the chat agent. Although there is still scope for improvement, the system performs fairly well in identifying syntactically similar questions, and to a certain extent semantics is also considered. Also, because we make use of a filtering process, the search space is reduced and so the system becomes more efficient algorithmically.

APPENDICES
Data flow diagram
A two-dimensional diagram that explains how data is processed and transferred in a
system. The graphical depiction identifies each source of data and how it interacts with other
data sources to reach a common output. Individuals seeking to draft a data flow diagram must
identify external inputs and outputs, determine how the inputs and outputs relate to each
other, and explain with graphics how these connections relate and what they result in. This
type of diagram helps business development and design teams visualize how data is
processed and identify or improve certain aspects.

Data flow Symbols:

Description of symbols:
 An entity: a source of data or a destination for data.
 A process or task that is performed by the system.
 A data store: a place where data is held between processes.
 A data flow.

LEVEL 0:

LEVEL 1:

LEVEL 2:

Sample Coding

HTML TEMPLATE CODING:
<link rel="stylesheet" href="/static/style.css" type="text/css">
<form action="/login" method="POST">
<div>
<h1 align="center" style="color:#FFFFFF">Heart Patient Monitoring System</h1>
</div>
<div class="login">
<div class="login-screen">

<div class="app-title">
<h1>Login</h1>
</div>
<div class="login-form">
<div class="control-group">
<input name="text"placeholder="User name"><br><br>
<input type="password" name="text1"placeholder="Password"><br><br>
<input type="submit" value="LOGIN"><br><br>
{{ error}}
</div>
</div>
</div>
</div>
</form>
{% if text %}
user: {{ text }}
{% endif %}

<link rel="stylesheet" href="/static/style.css" type="text/css">


<form action="/check" method="POST">
<div class="login">
<div class="login-screen">
<div class="app-title">
<h1>USER DETAIL</h1>
</div>
<div class="login-form">
<div class="control-group">
<input name="text"placeholder="User Name"><br><br>
<input name="text1"placeholder="HEART BEAT VALUE"><br><br>
<input name="text2"placeholder="BP VALUE"><br><br>
<input name="text3"placeholder="SUGUR LEVEL"><br><br>

<input type="submit" value="CHECK"><br><br>
</div>
</div>
</div>
</div>
</form>

<link rel="stylesheet" href="/static/style.css" type="text/css">


<form action="/check" method="POST">
<div class="login">
<div class="login-screen">
<div class="app-title">
<h1>USER DETAIL</h1>
</div>
<div class="login-form">
Patient name : {{ text }}<br><br>
Heart beat Level : {{ text1 }}<br><br>
BP Level : {{ text2 }}<br><br>
Sugar Level : {{ text3 }}<br><br>
{{m}}<br><br>
{{m1}}<br><br>
{{m2}}<br><br>
</div>
</div>
</div>
</div>
</form>
<div style="background-color:white">
<h1>TAKE THIS MEDICINE:</h1>{{md}}<br><br>
{{mm}}<br><br>
{{mmm}}<br><br>

</div>

PYTHON CODING:

from flask import Flask, request, render_template, redirect, url_for
import mysql.connector

app = Flask(__name__)
mydb = mysql.connector.connect(host="localhost", user="root", passwd="",
                               database="heartbeat")
t = ""

@app.route("/")
def hello():
    return render_template('helth.html')

@app.route('/login', methods=['GET', 'POST'])
def login():
    error = None
    if request.method == 'POST':
        mycursor = mydb.cursor()
        mycursor.execute("SELECT * FROM register WHERE name = %s", (request.form['text'],))
        myresult = mycursor.fetchall()
        if len(myresult) > 0:
            for row in myresult:
                if request.form['text'] != row[1] or request.form['text1'] != row[2]:
                    error = 'Invalid Credentials. Please try again.'
                    return render_template('helth.html', error=error)
                else:
                    return render_template('health.html', t=request.form['text'])
        else:
            error = 'no matched data.'
            return render_template('helth.html', error=error)

@app.route("/register", methods=['POST'])
def register():
    return render_template('register.html')

@app.route("/check", methods=['POST'])
def check():

if request.method == 'POST':

hb=request.form['text1']
bp=request.form['text2']
sl=request.form['text3']

if (int(hb) < 70):


mycursor = mydb.cursor()
mycursor.execute("SELECT * FROM heartrate WHERE Hrate LIKE '%5%'")
myresult = mycursor.fetchall()
return
render_template('check.html',text1=request.form['text1'],text2=request.form['text2'],text3=req
uest.form['text3'],m=myresult)

elif(int(hb) > 120):


mycursor = mydb.cursor()
mycursor.execute("SELECT * FROM heartrate WHERE Hrate LIKE '%1%'")
myresult = mycursor.fetchall()
return
render_template('check.html',text1=request.form['text1'],text2=request.form['text2'],text3=req
uest.form['text3'],m=myresult)

40
else:
myresult = "heart beat is normal"
return
render_template('check.html',text1=request.form['text1'],text2=request.form['text2'],text3=req
uest.form['text3'],mm=myresult)

if (int(bp) < 70):


mycursor = mydb.cursor()
mycursor.execute("SELECT * FROM bloodpresure WHERE Bprate LIKE '%6%'")
myresult1 = mycursor.fetchall()
return
render_template('check.html',text1=request.form['text1'],text2=request.form['text2'],text3=req
uest.form['text3'],m1=myresult1)

elif(int(bp) > 120):


mycursor = mydb.cursor()
mycursor.execute("SELECT * FROM bloodpresure WHERE Bprate LIKE '%1%'")
myresult1 = mycursor.fetchall()
return
render_template('check.html',text1=request.form['text1'],text2=request.form['text2'],text3=req
uest.form['text3'],m1=myresult1)

else:
myresult1 = "BP is normal"
return
render_template('check.html',text1=request.form['text1'],text2=request.form['text2'],text3=req
uest.form['text3'],mm1=myresult1)

if (int(sl) < 70):

41
mycursor = mydb.cursor()
mycursor.execute("SELECT * FROM sugar WHERE srate LIKE '%4%'")
myresult2 = mycursor.fetchall()
return
render_template('check.html',text1=request.form['text1'],text2=request.form['text2'],text3=req
uest.form['text3'],m2=myresult2)

elif(int(sl) > 140):


mycursor = mydb.cursor()
mycursor.execute("SELECT * FROM sugar WHERE srate LIKE '%1%'")
myresult2 = mycursor.fetchall()
return
render_template('check.html',text1=request.form['text1'],text2=request.form['text2'],text3=req
uest.form['text3'],m2=myresult2)

else:
myresult2 = "Sugur is normal"
return
render_template('check.html',text1=request.form['text1'],text2=request.form['text2'],text3=req
uest.form['text3'],mm2=myresult2)

##@app.route('/list')
##def list():
##    mycursor = mydb.cursor()
##    mycursor.execute("SELECT * FROM register")
##    myresult = mycursor.fetchall()
##    return render_template("list.html", rows=myresult)

@app.route("/register/save", methods=['GET', 'POST'])
def register_save():

    if request.method == 'POST':
        try:
            name = request.form['text']
            pasw = request.form['text1']
            repasw = request.form['text2']
            mid = request.form['text3']
            mno = request.form['text4']
            add = request.form['text5']
            if (name == "" or pasw == "" or repasw == "" or mid == "" or mno == "" or add == ""):
                msg = "please enter the valid content"
                return render_template("/register.html", msg=msg)
            else:
                mycursor = mydb.cursor()
                mycursor.execute("INSERT INTO register (name, pass, repass, mid, mbno, adds) "
                                 "VALUES (%s, %s, %s, %s, %s, %s)",
                                 (name, pasw, repasw, mid, mno, add))
                mydb.commit()
                msg = "Successfully added"

        except:
            mydb.rollback()
            msg = "Could not be added"

        finally:
            return render_template("/register.html", msg=msg)

    ## return render_template('echo1.html', text=request.form['text'], text1=request.form['text1'],
    ##                        text2=request.form['text2'], text3=request.form['text3'],
    ##                        test4=request.form['text4'], text5=request.form['text5'])
    ## mydb.close()
    ## return redirect(url_for('success', name))

if __name__ == "__main__":
    app.run(debug=True)

Screen Layout
NEW USER REGISTRATION

LOGIN FORM

PATIENT DETAILS FORM

OUTPUT:

NORMAL


