
Forecasting Electricity Commodity Consumption and Pricing for

Smart Building using Deep Learning Techniques

Abstract
Forecasting energy consumption in Smart Buildings (SB) and using the extracted
information to plan and operate power generation are crucial elements of Smart Grid (SG)
energy management. Building energy consumption is one of the main targets of
energy-efficiency programs worldwide. Moreover, a major share of the energy
consumed in buildings is wasted through over-utilization of appliances such as
exhaust fans and Heating, Ventilation, and Air Conditioning (HVAC) systems, ineffective
control of thermal comfort, and poorly optimized start-up times and sequencing of
electrical equipment. Predicting electrical loads and scheduling generation resources to
match demand enables the utility to mitigate energy generation costs. Different
methodologies have been employed to predict energy consumption at different levels of
distribution and transmission systems. In this project, a novel hybrid deep learning model is
proposed to predict energy consumption in smart buildings. The proposed framework consists
of two stages, namely, data cleaning, and model building. The data cleaning phase applies
pre-processing techniques to the raw data and adds additional features of lag values. In the
model-building phase, the hybrid model is trained on the processed data. The hybrid deep
learning (DL) model stacks fully connected layers and unidirectional Long Short-Term
Memory (LSTM) layers on top of bi-directional LSTMs. The proposed model is
designed to capture the temporal dependencies of energy consumption on dependent features
and to be effective in terms of computational complexity, training time, and forecasting
accuracy.
CHAPTER-1

INTRODUCTION

Electricity is a vital commodity for human life. It is used in houses for heating, cooking,
ironing, and other essentials of daily living. Industries use it to power plants and production
lines, and offices depend on it for daily operations such as running computers and air
conditioning. It is generated from many sources, including thermal, coal, nuclear, and hydro
plants. A sufficient supply of electricity helps propel a country's economic growth and
stability, which in turn drives economic activity and growth in Gross Domestic Product.
In recent years, energy consumption has become a major problem because of its economic
and financial impact. In Saudi Arabia, many factors drive increasing energy consumption in
non-residential buildings. In the CoC, several problems cause increased consumption.
Extreme weather is one of the main ones: rising summer temperatures increase consumption
through heavy use of air conditioning, while in winter the cold increases consumption
through heavy use of heaters, depending on their type, size, operating time, and wattage
consumed per hour. The number of devices also places a large load on the system; as the
number grows, consumption inevitably grows with it. The CoC hosts many devices and
servers, and their growing number means growing consumption. On workdays there are
many lectures, so consumption is far higher than on weekends, and during practical
examination periods device usage rises further. The many cafeterias in the college add to
consumption through the refrigerators, coffee makers, and microwaves in each one. Device
quality also matters: old-fashioned or weather-degraded equipment of low quality consumes
more, so the lower the quality of a device, the greater its consumption.
Smart Meters

The term “smart meter” initially referred to the functionality of measuring the electricity used
and/or generated and the ability to remotely control the supply and cut it off when necessary.
Early systems were called automated meter reading (AMR): they used one-way
communication and were capable of automated monthly reads, one-way outage (or “last
gasp”) reporting, tamper detection, and simple load profiling. Over time, AMR capability was
extended to short-interval (hourly or less) data capture, on-demand reads, and linking into
and reading other commodities. A major upgrade of functionality occurred with the
integration of two-way communication technology, known as advanced metering
infrastructure (AMI). The upgrade added service switching, time-based rates, remote
programming, power-quality measurement, and a dashboard-type user interface for real-time
usage monitoring. Although the term smart meter came into use only after the SG initiatives,
meter features and functionality have clearly evolved from the manually read meters of the
past to AMI meters with dashboard interfaces and two-way communication capability.

The smart meter is the measurement and information-capture device and is, in many
instances, connected to a communication device called the smart meter gateway to establish a
secure energy-information network. The gateway can receive and communicate real-time
information from the supplier, act as a point of control for appliances, start and stop the
energy supply, and so on. It can also have a user interface called the “in-home display”
(IHD), which shows energy consumption, cost, and tariffs with real-time updates. The smart meter can be
connected to the smart meter gateway which in turn communicates with different appliances
(washing machine, refrigerator, etc), local generation as well as heating, ventilation, and air
conditioning (HVAC). The measurements and information captured by the smart meter are
displayed via the IHD. The smart meter could directly communicate the consumption
information with the utility, but the gateway communicates with the next-level gateway in the
SG infrastructure to pass information for aggregation, demand–response activities, as well as
to utilities.
Stakeholders
A simple classification is consumers, electricity companies (utilities), and the environment. In
some literature, a further level of granularity has been added to the electricity-company class
by expanding it into the metering company (distributor), the utility company, and the supplier
(retailer).
However, a more comprehensive classification would be those by NIST, where the
functionality of the whole energy usage cycle is defined to include bulk generation,
transmission, distribution, customers, service providers, operations, and markets.

The stakeholder involvement and benefits are summarized in Table I under each main
stakeholder group. For operations and markets, the information acquired from smart meters
will be vital for planning operations, responding to market demands, and anticipating changes
and disruptions to reduce risks to secure energy supply.
Smart-Metering Process
Although there are smart meters of varying technology and design, there is a common overall
process for data collection, communication, and analysis leading to decision support. The
smart metering process in fact could be thought of as part of the activation and functioning of
the SG. The smart meter gathers data locally and transfers via a local area network (LAN) to
a data collection point. There are two key categories of data collected as mentioned earlier.
Usage or consumption data refers to the actual electricity usage measured in kilowatt hours
(kWh), and this is read and transmitted in regular intervals. Depending on the particular
deployment and region, the frequency of data collection can vary from 1 h to every 15 min.
The issues faced with data collection and the implications and techniques for making use of
this data are further discussed in detail in the following sections. In terms of the processing of
data, some data processing could be carried out at the local collection points, but in most
cases, the data are transferred to the utilities’ central collection center via a wide area network
(WAN). The data collected at the utility are used for a number of business purposes such as
billing, network and service monitoring, profiling, prediction and planning.
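The interval data described above can be aggregated once it reaches a collection point. As a minimal sketch (the data layout here is illustrative, not taken from any particular utility), the following sums 15-minute kWh readings into hourly totals:

```python
from collections import defaultdict
from datetime import datetime

def hourly_totals(readings):
    """Aggregate 15-minute interval kWh readings into hourly totals.

    `readings` is an iterable of (ISO-8601 timestamp string, kWh) pairs.
    Returns a dict mapping each hour (a datetime truncated to the hour)
    to the summed consumption for that hour.
    """
    totals = defaultdict(float)
    for ts, kwh in readings:
        t = datetime.fromisoformat(ts)
        hour = t.replace(minute=0, second=0, microsecond=0)
        totals[hour] += kwh
    return dict(totals)

readings = [
    ("2016-01-11T00:00:00", 0.12),
    ("2016-01-11T00:15:00", 0.10),
    ("2016-01-11T00:30:00", 0.15),
    ("2016-01-11T00:45:00", 0.13),
    ("2016-01-11T01:00:00", 0.20),
]
print(hourly_totals(readings))
```

The same aggregation could be pushed down to the local collection point or performed centrally; only where it runs changes, not the arithmetic.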
Electric load forecasting: advantages and challenges
Load forecasting is the prediction of the electrical power required to meet short-term,
medium-term, or long-term demand. Forecasting helps utility companies operate and manage
the supply to their customers.
Electrical load forecasting is an important process that can increase the efficiency and
revenues for the electrical generating and distribution companies. It helps them to plan on
their capacity and operations in order to reliably supply all consumers with the required
energy.

Advantages of load forecasting

1. Enables the utility company to plan well since they have an understanding of the future
consumption or load demand.
2. Minimizes risks for the utility company. Understanding future long-term load
helps the company plan and make economically viable decisions regarding future
generation and transmission investments.
3. Helps to determine the required resources such as fuels required to operate the
generating plants as well as other resources that are needed to ensure uninterrupted and
yet economical generation and distribution of the power to the consumers. This is
important for short-, medium-, and long-term planning.

4. The load forecasting helps in planning the future in terms of the size, location and type
of the future generating plant. By determining areas or regions with high or growing
demand, the utilities will most likely generate the power near the load. This minimizes
the transmission and distribution infrastructures as well as the associated losses.
5. Helps in deciding and planning for maintenance of the power systems. By
understanding the demand, the utility can know when to carry out the maintenance and
ensure that it has the minimum impact on the consumers. For example, they may
decide to do maintenance on residential areas during the day when most people are at
work and demand is very low.
6. Maximum utilization of power generating plants. The forecasting avoids under
generation or over generation.
Challenges in load forecasting

1. Forecasting is based on expected conditions such as weather. Unfortunately, the
weather is sometimes unpredictable, and the forecast may thus miss when the actual
weather differs from what was expected.
2. Different regions may experience different weather conditions, which affect
electricity demand differently. This can hurt revenues, especially if the utility
generates more, possibly using expensive methods such as fossil-fuel generators, to
meet an expected high demand and actual consumption turns out to be much lower
than what was generated.
3. Most experienced utility forecasters use manual methods that rely on a
thorough understanding of a wide range of contributing factors, upcoming events, or
a particular dataset. Relying on manual forecasting is not sustainable given the
increasing number and complexity of forecasts. Utilities must therefore look for
technologies that give accurate results and eliminate the problems that arise when
experienced forecasters retire or leave employment.
4. Usage behavior varies between consumers with different types of meters, especially
between smart and traditional meters, and across different tariffs. The utility must
understand this and develop a separate forecast model for each metering system,
then aggregate them for the final forecast value; otherwise the forecast will be
inaccurate.
5. Difficulties getting accurate data on consumption behavior due to changes in factors
such as pricing and the corresponding demand based on such a price change.
6. The load forecasting task is difficult due to the complex nature of loads, which may
vary with the seasons, and the total consumption for two similar seasons may still differ.
7. It is sometimes difficult to accurately fit the numerous complex factors that affect
demand for electricity into the forecasting models. In addition, it may not be easy to
obtain an accurate demand forecast based on parameters such as change in
temperature, humidity, and other factors that influence consumption.
8. The utility may suffer losses if they do not understand and decide on an acceptable
margin of error in short term load forecasting.

Types of load forecasting:

Load forecasting is mainly done over three time horizons, depending on the utility
company's requirements:
1. Long-Term Load Forecasting (LTLF): mainly for system planning; typically covers a
period of 10 to 20 years.
2. Medium-Term Load Forecasting (MTLF): mainly for scheduling fuel supplies and
maintenance; usually covers a few weeks.
3. Short-Term Load Forecasting (STLF): for the day-to-day operation and scheduling of
the power system; usually from an hour to a day.

Methods of load forecasting: In the modern era, many computer tools are available for load
forecasting, from conventional methods to a range of soft-computing techniques. The
approaches for LTLF and MTLF are nearly the same, since the load variation over both
periods can be approximated similarly; important methods include Trend Analysis, End-Use
Analysis, Econometrics, Neural Networks, and Multiple Linear Regression. For STLF, soft
techniques have overtaken conventional methods because of their ease of use and better
results. One such technique is the Similar Day Lookup Approach, which is somewhat similar
to the conventional method: it searches historical data for a day with the same characteristics
(weather, humidity, day of the week, holidays or other special days) as the day under
consideration. The historical data can then be used in a linear combination or regression to
make a trend analysis.
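The similar-day lookup can be sketched as a nearest-neighbour search over day features. The feature names, weights, and records below are purely illustrative, not from the report:

```python
def most_similar_day(history, target, weights):
    """Return the historical day whose features best match the target day.

    `history` maps a date string to a feature dict (e.g. temperature,
    humidity, day_of_week); `target` is the feature dict for the day being
    forecast; `weights` scales each feature's contribution. Similarity is a
    simple weighted squared difference.
    """
    def distance(feats):
        return sum(weights[k] * (feats[k] - target[k]) ** 2 for k in weights)
    return min(history, key=lambda day: distance(history[day]))

history = {
    "2016-03-01": {"temperature": 18.0, "humidity": 60.0, "day_of_week": 1},
    "2016-03-08": {"temperature": 25.0, "humidity": 40.0, "day_of_week": 1},
    "2016-03-05": {"temperature": 19.0, "humidity": 58.0, "day_of_week": 5},
}
target = {"temperature": 18.5, "humidity": 59.0, "day_of_week": 1}
weights = {"temperature": 1.0, "humidity": 0.1, "day_of_week": 5.0}
print(most_similar_day(history, target, weights))  # closest weather on the same weekday
```

The load curve of the returned day (or a regression over the top few matches) would then serve as the forecast for the target day.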
Pricing Intelligence

Smart meters enable utilities to set up dynamic tariff structures that improve efficiency in
electricity markets by better representing the costs of producing and delivering electricity at
different times. Consumers can benefit from these if they choose more flexible tariff
arrangements that better reflect their electricity cost. To determine the price of electricity
as well as the tariffs, retailers consider payments to distributors for services, the wholesale
electricity cost, retail service costs, and the costs of any regulatory requirements. Smart
meter data enable deeper analysis of the dynamics of supply and demand, resulting in better
forecasting of needs and enabling pricing intelligence. Smart meter data can also be used to
analyze and plan different rate structures without discriminating by demographics such as
low income.
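A time-varying tariff of the kind described above can be illustrated with a two-tier time-of-use bill. The rates and peak window here are invented for the example, not actual tariff figures:

```python
def time_of_use_bill(hourly_kwh, peak_hours, peak_rate, offpeak_rate):
    """Compute a bill under a two-tier time-of-use tariff.

    `hourly_kwh` maps hour-of-day (0-23) to consumption; hours in
    `peak_hours` are charged at `peak_rate`, the rest at `offpeak_rate`.
    """
    return sum(
        kwh * (peak_rate if hour in peak_hours else offpeak_rate)
        for hour, kwh in hourly_kwh.items()
    )

usage = {13: 2.0, 14: 1.5, 22: 1.0}            # kWh per hour of day
bill = time_of_use_bill(usage, peak_hours=range(12, 18),
                        peak_rate=0.30, offpeak_rate=0.10)
print(round(bill, 2))  # 2.0*0.30 + 1.5*0.30 + 1.0*0.10 = 1.15
```

The forecasting model's hourly demand predictions feed directly into such a calculation, which is what makes accurate forecasts valuable for pricing intelligence.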
Deep Learning for Demand Response and Smart Grids
Demand response (DR) models use consumers' load-consumption data to maintain grid
sustainability by shedding load during peak demand and managing consumption patterns
efficiently. Designing an effective DR system therefore depends on extracting precise,
up-to-date load-consumption patterns. To devise an efficient DR system, an effective
prediction technique must be implemented, so that the system can recommend suitable
solutions reflecting consumption behavior by predicting forthcoming consumer usage
patterns.
Hence, in this work, a hybrid model (HSBUFC) is developed based on the stacking of bi-
directional and uni-directional LSTMs followed by fully connected dense layers to achieve
the forecasts with high accuracy and low error percentages as compared to the mentioned
models.

1.1 Problem Identified


In recent years, energy consumption has become one of the major problems because of its
impact on the economic and financial spheres. About 66% of the social-media conversation
around the July 2021 billing cycle took place on Twitter. Apart from satirical memes, most
tweets on the topic follow a similar pattern: a user reports a sudden hike in their electricity
bill, along with a picture of their white meter card or online account summary showing a
multifold increase over the previous or average bill amount.
While a few users suspect that readings were not recorded accurately, others question
whether a meter assessment was carried out at all. In addition to concerns about assessment,
most consumers highlight that key information about billing calculations was left out of
their bill alert.

1.2 Scope of the project


The proposed model is expected to contribute to efficient power-system
planning by accurately forecasting monthly residential electricity usage, which accounts for
the majority of total electricity usage. The ability to forecast monthly electricity demand with
high accuracy can enable power-system personnel and market players to make sustainable
power-system planning decisions to ensure efficient production and utilization of resources.
This accurate deep learning-based forecasting model could also be applied to other problems
associated with analysing time series and optimizing energy processes, energy conservation,
and sustainable use of energy resources.

1.3 Objective
 This project proposes an ensemble of models for producing forecasts of home
electricity consumption data, considering total consumption, consumption by
schedulable equipment, and consumption by non-schedulable devices, to be employed
in the hybrid deep learning model.
 This project aims to propose a model that can accurately forecast monthly residential
electricity demand using a hybrid deep learning (DL) model that stacks fully
connected layers and unidirectional Long Short-Term Memory (LSTM) layers on top
of bi-directional LSTMs.

CHAPTER-2

PROBLEM DEFINITION

Electricity is a vital commodity for human life, used in houses, industries, and offices alike.
In recent years, energy consumption has become one of the major problems because of its
impact on the economic and financial spheres. Have you noticed that even if you spend a
month or two away from home, you will still receive a sizeable electricity bill? Even though
appliances are not "on", keeping them plugged into their outlets can cause high electricity
bills: it helps to think of these appliances as being on "standby" until they are turned on, since
most electronic devices continue to draw electricity from the grid even when powered down
or not in use. In addition, if a house is closed, the authority's meter reader may record
incorrect units. The solution to these problems is forecasting energy consumption and
pricing for the smart building. In the proposed process flow, the admin compares, analyzes,
and predicts energy consumption using dataset values. Users register their details and enter
their home-appliance details; they pay their bill online and can see the energy consumption
of each appliance, so they can reduce their consumption, and they receive an alert message
once they have consumed 50 percent of their energy. The problem of the authority's monthly
meter reading is also avoided, because the consumption details are available online.

2.1 Modules List

1. Forecaster Web App


2. Smart Metered Dataset Annotation
3. K-means Clustering
4. Fast Fourier Transform Feature Extraction
5. HSBUFC Classification
6. Performance Analysis
7. User Interface

2.2 Modules Description


1. Forecaster Web App
Forecaster Web App incorporates the overall systems that measure, collect and analyse
electricity usage. FWA system extends beyond Advanced Meter Reading (AMR) Technology
by allowing two-way communications between the utility provider’s system and the meter.
This enables demand-response actions or remote service barring or disconnects.

2. Smart Metered Dataset Annotation


2.1. Training Dataset
The dataset contains electrical consumption data for a single residential customer the data
were collected for a period of five months ranging from 11 January 2016 to 27 May 2016.
The study was conducted on one-hour resolution data. For all three architectures, the first
three years were used as the training data and the last year was used as testing data. For all
the architectures, the electricity consumption for the next 60 hours was predicted.
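A chronological split of the kind described above (never shuffled, so the model is always evaluated on data that comes after what it was trained on) can be sketched as follows; the 25% test fraction is an illustrative choice, while the 60-hour horizon comes from the report:

```python
def chronological_split(series, test_fraction=0.25, horizon=60):
    """Split an hourly consumption series chronologically.

    The earliest portion becomes the training set and the remainder the
    test set. `horizon` is the number of future hours to forecast (60 in
    this project); the test set must be at least that long.
    """
    split = int(len(series) * (1 - test_fraction))
    train, test = series[:split], series[split:]
    assert len(test) >= horizon, "test set shorter than forecast horizon"
    return train, test

series = list(range(400))            # stand-in for hourly kWh readings
train, test = chronological_split(series)
print(len(train), len(test))  # 300 100
```

A random train/test split would leak future information into training, which is why the split is strictly by time.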
[Figure: training-data flow — Register → Upload Previously Collected Data Set → EB Admin Login → Storage]

2.2. Testing Data


To build and evaluate the proposed hybrid model, a real-world dataset of appliance energy
consumption is used: smart meter data from each home appliance.

[Figure: testing-data flow — IoT → Convert to digital and CSV format → Meter Data from Home Cloud → Storage]

2.3. Pre-processing
In this module, missing data are replaced by the mean or median of the entire column. The
consumption-related attributes are taken as feature variables, and NULL values as well as
redundant values are removed from the dataset.
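The mean/median imputation above, together with the lag features mentioned in the abstract, can be sketched in a few lines. The lag choices (previous hour and same hour a day earlier) are a common convention for hourly data and are assumptions here, not values from the report:

```python
import statistics

def impute_and_lag(values, lags=(1, 24), use_median=False):
    """Fill missing readings (None) with the column mean or median, then
    build lag features: for each time step t, the consumption at t-1 and
    t-24 (previous hour, same hour a day earlier).

    Returns (cleaned list, list of lag-feature dicts).
    """
    present = [v for v in values if v is not None]
    fill = statistics.median(present) if use_median else statistics.mean(present)
    cleaned = [fill if v is None else v for v in values]
    rows = [{f"lag_{k}": cleaned[t - k] for k in lags}
            for t in range(max(lags), len(cleaned))]
    return cleaned, rows

values = [1.0, None, 3.0, 2.0]
cleaned, rows = impute_and_lag(values, lags=(1, 2))
print(cleaned)  # [1.0, 2.0, 3.0, 2.0] -- the missing value becomes the mean 2.0
```

In practice this would be done with pandas (`fillna` plus `shift`), but the logic is the same.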

[Figure: pre-processing flow — Data Set or Meter Data → removal of null, redundant, and missing values → previous EB consumption]

3. K-means Clustering
The popularity of K-means clustering can be attributed to its simplicity and generally
satisfactory performance. K-means is also implemented in many software solutions, both
proprietary and open source, making it an easy choice for fast clustering. The greedy design
approach of the K-means algorithm can create suboptimum solutions by unfortunate initial
starting conditions and converge in local optima; a problem that can be alleviated by
rerunning the algorithm several times.
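The restart strategy described above can be shown with a small self-contained 1-D K-means (in practice scikit-learn's `KMeans` with its `n_init` parameter does the same thing); the sample points are invented appliance-load values:

```python
import random

def kmeans(points, k, restarts=5, iters=50, seed=0):
    """Plain 1-D K-means with several random restarts, keeping the run
    with the lowest within-cluster sum of squares, to reduce the chance
    of converging to a poor local optimum."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        centers = rng.sample(points, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                clusters[min(range(k), key=lambda i: (p - centers[i]) ** 2)].append(p)
            centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        inertia = sum(min((p - c) ** 2 for c in centers) for p in points)
        if best is None or inertia < best[0]:
            best = (inertia, sorted(centers))
    return best[1]

points = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]   # e.g. off-peak vs peak appliance loads
print(kmeans(points, k=2))
```

Each restart draws fresh initial centers, so an unlucky initialization on one run is discarded in favour of the best of the five.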

[Figure: clustering flow — Preprocessed Data → K-means Clustering → appliance clusters]

4. Fast Fourier Transform Feature Extraction


Smart meter data can be regarded as signals; as such, it can be advantageous to apply
techniques that leverage time-series information such as periodicity or autocorrelation. Here
the Fast Fourier Transform (FFT), a frequency-domain analysis technique for signals, is
applied, though several other techniques exist for analysing time series.
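The point of the FFT step is that dominant frequency bins expose periodicity (e.g. the daily cycle) in a consumption signal. A minimal sketch using a plain DFT for self-containment (in practice `numpy.fft.rfft` would be used):

```python
import cmath
import math

def fft_features(signal, n_components=3):
    """Return the magnitudes of the strongest non-DC frequency bins of a
    signal, computed with a direct discrete Fourier transform. The top
    bins capture periodic structure such as the daily consumption cycle."""
    n = len(signal)
    mags = []
    for k in range(1, n // 2 + 1):        # skip the DC component (k = 0)
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        mags.append((abs(coeff), k))
    mags.sort(reverse=True)
    return mags[:n_components]

# a synthetic signal with a period of 8 samples
signal = [math.sin(2 * math.pi * t / 8) for t in range(32)]
top = fft_features(signal)
print(top[0][1])  # dominant bin k = 4, i.e. period 32/4 = 8 samples
```

The magnitudes of the top bins (here `n_components = 3` of them) become the features passed on to the forecasting model.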

[Figure: feature-extraction flow — Clustered Data (time series) → FFT Feature Extraction]

5. HSBUFC Classification
HSBUFC model consists of three types of layers: 1) Bidirectional LSTM layer, 2) Stacked
Uni-directional LSTM layers, and 3) Fully connected layers/dense layers. As discussed in the
earlier section, bi-directional LSTMs make use of both forward and backward dependencies.
The temporal long-term dependencies of the energy consumption values are extracted during
the feature learning process in two directions by the initial layer of bi-directional LSTM.
Next, uni-directional LSTM layers, which are efficient at modeling forward dependencies,
are employed on top; they receive the outputs of the lower layer and learn from the
comprehensive, complex features extracted there.
[Figure: classification flow — Extracted Data → HSBUFC → Forecasting and Pricing]

6. Prediction
In this module, the Euclidean distance (or Euclidean metric) is used: the "ordinary" distance
between two points that one would measure with a ruler, given by the Pythagorean formula.
Using this formula as the distance, Euclidean space (or indeed any inner-product space)
becomes a metric space. The associated norm is called the Euclidean norm; older literature
refers to the metric as the Pythagorean metric.

The Euclidean distance between points p and q is the length of the line segment connecting
them. In Cartesian coordinates, if p = (p1, p2, ..., pn) and q = (q1, q2, ..., qn) are two points
in Euclidean n-space, then the distance from p to q (or from q to p) is

    d(p, q) = sqrt((q1 - p1)^2 + (q2 - p2)^2 + ... + (qn - pn)^2)
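The formula above translates directly into code:

```python
import math

def euclidean(p, q):
    """Euclidean distance between two n-dimensional points, per the
    Pythagorean formula: sqrt of the summed squared coordinate differences."""
    assert len(p) == len(q), "points must have the same dimension"
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

print(euclidean((1.0, 2.0), (4.0, 6.0)))  # sqrt(3^2 + 4^2) = 5.0
```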
[Figure: prediction flow — classified results from the trained and testing datasets → Prediction → Load Forecasting and Dynamic Pricing]

7. Performance Analysis
In this module, a positive outcome correctly classified as positive is counted as a true
positive (TP); if it is incorrectly classified as negative, it is counted as a false negative (FN).
If the actual outcome is negative and it is correctly classified as negative, it is counted as a
true negative (TN); if it is incorrectly classified as positive, it is counted as a false
positive (FP).

    Accuracy (%)    = (TP + TN) / (TP + TN + FP + FN) x 100

    Sensitivity (%) = TP / (TP + FN) x 100

    Specificity (%) = TN / (TN + FP) x 100
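The three formulas compute directly from the confusion-matrix counts; the counts below are invented for illustration:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity (all in %) from
    confusion-matrix counts, matching the formulas above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    sensitivity = tp / (tp + fn) * 100
    specificity = tn / (tn + fp) * 100
    return accuracy, sensitivity, specificity

print(confusion_metrics(tp=40, tn=45, fp=5, fn=10))  # (85.0, 80.0, 90.0)
```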

[Figure: performance-analysis flow — trained data file and test data → Euclidean prediction → statistical result analysis and visualization (accuracy, sensitivity, specificity)]

7. User Interface
 Consumer Login
In this module, the consumer registers and logs in with their consumer number to view
the energy consumption, load forecasting, and dynamic pricing results.

CHAPTER-3
SYSTEM ANALYSIS

3.1 Existing System


The amount of electricity consumed is measured, and billing is done according to the tariff
of the place concerned. The major component responsible for bill generation is the electricity
meter. Various meters have been in use since electricity entered day-to-day life.
 Mechanical meters
Mechanical meters are the most basic and conventional form of meter. Their working can be
compared to that of an electrical motor: two coiled wheels run in opposite directions and
drive a mechanical counter, through which the units of electricity consumed are measured.
These are the simplest type of meter and cannot provide any information other than the unit
count.
 Simple Digital meters
Digital meters were introduced to replace the mechanical metering system. A sensor is used
to measure current, and they are usually used for everyday monitoring of electricity. These
meters usually have no built-in transmitting device; a communication port on the front is
used to read the meter.
Disadvantages
 The additional cost to train personnel, develop equipment, and implement new
processes for data storage
 Managing public reaction and feedback concerning new meters
 Making a long-term financial commitment to new hardware/software
 Ensuring the security and privacy of metering data

3.2 Proposed System


In this project, a hybrid model (HSBUFC) is developed based on the
stacking of bi-directional and uni-directional LSTMs followed by fully connected dense
layers to achieve the forecasts with high accuracy and low error percentages as compared to
the mentioned models.

PROPOSED MODEL ARCHITECTURE


The framework of the proposed hybrid stacked bi-directional uni-directional LSTM with
fully connected dense layers (HSBUFC) model is illustrated in Figure 5. HSBUFC model
consists of three types of layers: 1) Bidirectional LSTM layer, 2) Stacked Uni-directional
LSTM layers, and 3) Fully connected layers/dense layers. As discussed in the earlier section,
bi-directional LSTMs make use of both forward and backward dependencies. The temporal
long-term dependencies of the energy consumption values are extracted during the feature
learning process in two directions by the initial layer of bi-directional LSTM. Next,
uni-directional LSTM layers, which are efficient at modeling forward dependencies, are
employed on top; they receive the outputs of the lower layer and learn from the
comprehensive, complex features extracted there.
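The three-layer-type stack described above can be sketched in Keras (assumed available here; the report specifies Python but not the DL framework). The layer widths are illustrative choices, not values from the report; the 60-step output matches the 60-hour forecast horizon:

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import LSTM, Bidirectional, Dense

def build_hsbufc(timesteps, n_features, horizon=60):
    """A sketch of the HSBUFC stack: one bi-directional LSTM layer to learn
    forward and backward dependencies, stacked uni-directional LSTM layers
    on top, then fully connected dense layers producing one output per
    forecast hour."""
    model = Sequential([
        Input(shape=(timesteps, n_features)),
        Bidirectional(LSTM(64, return_sequences=True)),  # forward + backward deps
        LSTM(64, return_sequences=True),                 # stacked uni-directional LSTMs
        LSTM(32),
        Dense(32, activation="relu"),                    # fully connected dense layers
        Dense(horizon),                                  # 60-hour forecast
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_hsbufc(timesteps=24, n_features=1)
model.summary()
```

Training would then fit this model on windows of the pre-processed, lag-augmented consumption data against the next 60 hourly values.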

Advantages
 The proposed model is expected to contribute to efficient power-system planning by
accurately forecasting monthly residential electricity usage, which accounts for the
majority of total electricity usage.
 The ability to forecast monthly electricity demand with high accuracy can enable
power-system personnel and market players to make sustainable power-system
planning decisions to ensure efficient production and utilization of resources.
 This accurate deep learning-based forecasting model could also be applied to other
problems associated with analysing time series and optimizing energy processes,
energy conservation, and sustainable use of energy resources.

CHAPTER -4

SYSTEM REQUIREMENTS
4.1 Recommended System Requirements
 Processor: Intel® Core™ i5-4300M at 2.60 GHz (1 socket, 2 cores, 2 threads per
core), 8 GB of DRAM
 Disk space: 320 GB
 Operating systems: Windows® 10, macOS, and Linux
Minimum System Requirements
 Processor: Intel Atom® processor or Intel® Core™ i3 processor
 Disk space: 1 GB
 Operating systems: Windows 7 or later, macOS, and Linux
Hardware
 Laptop or PC
4.2 Software Requirements
 Server side : Python 3.7.4 (64-bit or 32-bit)
 Client side : HTML, CSS, Bootstrap
 Framework : Flask 1.1.1
 Back end : MySQL 5
 Server : WampServer 2i
 OS : Windows 10 (64-bit) or Ubuntu 18.04 LTS “Bionic Beaver”

CHAPTER-5

SOFTWARE DESCRIPTION

Python 3.7.4
Python is a general-purpose interpreted, interactive, object-oriented, high-level
programming language. It was created by Guido van Rossum in the late 1980s and first
released in 1991. Python source code is available under an open-source licence (the Python
Software Foundation License).
Python is a high-level, interpreted, interactive and object-oriented scripting language. Python
is designed to be highly readable. It uses English keywords frequently whereas other
languages use punctuation, and it has fewer syntactical constructions than other languages.
Python is valuable for students and working professionals who aim to become great software
engineers, especially in the web development domain. Some of its key advantages are:
Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to
compile your program before executing it. This is similar to PERL and PHP.
Python is Interactive − You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
Python is Object-Oriented − Python supports Object-Oriented style or technique of
programming that encapsulates code within objects.
Python is a Beginner's Language − Python is a great language for beginner-level
programmers and supports the development of a wide range of applications, from simple text
processing to web browsers and games.
The Python Package Index (PyPI) hosts thousands of third-party modules for Python. Both
Python's standard library and the community-contributed modules allow for endless
possibilities.
The most basic use case for Python is as a scripting and automation language. Python isn’t
just a replacement for shell scripts or batch files; it is also used to automate interactions with
web browsers or application GUIs or to do system provisioning and configuration in tools
such as Ansible and Salt. But scripting and automation represent only the tip of the iceberg
with Python.
General application programming with Python
You can create both command-line and cross-platform GUI applications with Python and
deploy them as self-contained executables. Python doesn’t have the native ability to generate
a standalone binary from a script, but third-party packages like cx_Freeze and PyInstaller can
be used to accomplish that.
Data science and machine learning with Python
Sophisticated data analysis has become one of the fastest-moving areas of IT and one of
Python’s star use cases. The vast majority of the libraries used for data science or machine
learning have Python interfaces, making the language the most popular high-level command
interface for machine learning libraries and other numerical algorithms.
Web services and RESTful APIs in Python
Python’s native libraries and third-party web frameworks provide fast and convenient ways to
create everything from simple REST APIs in a few lines of code to full-blown, data-driven
sites. Python’s latest versions have strong support for asynchronous operations, letting sites
handle tens of thousands of requests per second with the right libraries.
Metaprogramming and code generation in Python
In Python, everything in the language is an object, including Python modules and libraries
themselves. This lets Python work as a highly efficient code generator, making it possible to
write applications that manipulate their own functions and have the kind of extensibility that
would be difficult or impossible to pull off in other languages.
Python can also be used to drive code-generation systems, such as LLVM, to efficiently
create code in other languages.
“Glue code” in Python
Python is often described as a “glue language,” meaning it can let disparate code (typically
libraries with C language interfaces) interoperate. Its use in data science and machine
learning is in this vein, but that’s just one incarnation of the general idea. If you have
applications or program domains that you would like to hitch up but that cannot talk to each
other directly, you can use Python to connect them.
Python 2 vs. Python 3
Python is available in two versions, which are different enough to trip up many new users.
Python 2.x, the older “legacy” branch, will continue to be supported (that is, receive official
updates) through 2020, and it might persist unofficially after that. Python 3.x, the current and
future incarnation of the language, has many useful and important features not found in
Python 2.x, such as new syntax features (e.g., the “walrus operator”), better concurrency
controls, and a more efficient interpreter.
Python 3 adoption was slowed for the longest time by the relative lack of third-party library
support. Many Python libraries supported only Python 2, making it difficult to switch. But
over the last couple of years, the number of libraries supporting only Python 2 has
dwindled; all of the most popular libraries are now compatible with both Python 2 and
Python 3. Today, Python 3 is the best choice for new projects; there is no reason to pick
Python 2 unless you have no choice. If you are stuck with Python 2, you have various
strategies at your disposal.
Python’s libraries
The success of Python rests on a rich ecosystem of first- and third-party software. Python
benefits from both a strong standard library and a generous assortment of easily obtained and
readily used libraries from third-party developers. Python has been enriched by decades of
expansion and contribution.
Python’s standard library provides modules for common programming tasks—math, string
handling, file and directory access, networking, asynchronous operations, threading,
multiprocess management, and so on. But it also includes modules that manage common,
high-level programming tasks needed by modern applications: reading and writing structured
file formats like JSON and XML, manipulating compressed files, working with internet
protocols and data formats (webpages, URLs, email). Almost any external code that exposes
a C-compatible foreign function interface can be accessed with Python’s ctypes module.
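For instance, a C library function can be called directly through ctypes; this sketch loads the C standard library, so it assumes a Unix-like platform where that library can be located:

```python
import ctypes
import ctypes.util

# locate and load the C standard library (name resolution is platform-dependent)
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# declare the return type, then call the C function as if it were Python
libc.abs.restype = ctypes.c_int
print(libc.abs(-7))  # prints 7
```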
The default Python distribution also provides a rudimentary, but useful, cross-platform GUI
library via Tkinter, and an embedded copy of the SQLite 3 database.
The thousands of third-party libraries, available through the Python Package Index (PyPI),
constitute the strongest showcase for Python’s popularity and versatility.
For example:
The BeautifulSoup library provides an all-in-one toolbox for scraping HTML—even tricky,
broken HTML—and extracting data from it.
Requests makes working with HTTP requests at scale painless and simple.
Frameworks like Flask and Django allow rapid development of web services that encompass
both simple and advanced use cases.
Multiple cloud services can be managed through Python’s object model using Apache
Libcloud.
NumPy, Pandas, and Matplotlib accelerate math and statistical operations, and make it easy
to create visualizations of data.
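As a small illustration tied to this project, Pandas can aggregate meter readings and build the kind of lag feature added in the report's data-cleaning stage; the half-hourly values below are made up:

```python
import numpy as np
import pandas as pd

# hypothetical half-hourly energy readings (kWh) for one day
idx = pd.date_range("2020-01-01", periods=48, freq="30min")
readings = pd.Series(np.full(48, 0.5), index=idx, name="kwh")

# resample to hourly consumption and add a one-hour lag feature
hourly = readings.resample("60min").sum()
features = pd.DataFrame({"kwh": hourly, "kwh_lag1": hourly.shift(1)})
print(features.head(3))
```

The first lag value is NaN (there is no earlier hour), which is why the data-cleaning stage must handle such rows before training.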
Python’s compromises
Like C#, Java, and Go, Python has garbage-collected memory management, meaning the
programmer doesn’t have to implement code to track and release objects. Normally, garbage
collection happens automatically in the background, but if that poses a performance problem,
you can trigger it manually or disable it entirely, or declare whole regions of objects exempt
from garbage collection as a performance enhancement.
An important aspect of Python is its dynamism. Everything in the language, including
functions and modules themselves, is handled as an object. This comes at the expense of
speed (more on that later), but makes it far easier to write high-level code. Developers can
perform complex object manipulations with only a few instructions, and even treat parts of an
application as abstractions that can be altered if needed.
Python’s use of significant whitespace has been cited as both one of Python’s best and worst
attributes. The indentation on the second line below isn’t just for readability; it is part of
Python’s syntax. Python interpreters will reject programs that don’t use proper indentation to
indicate control flow.
with open('myfile.txt') as my_file:
    file_lines = [x.rstrip('\n') for x in my_file]
Syntactical white space might cause noses to wrinkle, and some people do reject Python for
this reason. But strict indentation rules are far less obtrusive in practice than they might seem
in theory, even with the most minimal of code editors, and the result is code that is cleaner
and more readable.
Another potential turnoff, especially for those coming from languages like C or Java, is how
Python handles variable typing. By default, Python uses dynamic or “duck” typing—great for
quick coding, but potentially problematic in large code bases. That said, Python has recently
added support for optional compile-time type hinting, so projects that might benefit from
static typing can use it.
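Type hints are plain annotations that external tools such as mypy can check; Python itself does not enforce them at runtime. A tiny example using a made-up billing helper:

```python
def bill_amount(units_kwh: float, rate_per_kwh: float = 5.0) -> float:
    """Compute a simple electricity bill; the annotations are optional hints."""
    return units_kwh * rate_per_kwh

# hints are recorded for tooling via __annotations__, but not enforced
print(bill_amount(120))                       # 600.0
print(bill_amount.__annotations__["return"])  # <class 'float'>
```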
What is MySQL? – An Introduction To Database Management Systems

Database management is essential when you have huge amounts of data around you.
MySQL is one of the most popular relational databases for storing and handling data. This
section covers the following topics:

 What are Data & Database?


 Database Management System & Types of DBMS
 Structured Query Language (SQL)
 MySQL & its features
 MySQL Data Types

What are Data & Database?


Suppose a company needs to store the names of hundreds of employees working in
the company in such a way that all the employees can be individually identified. Then, the
company collects the data of all those employees. Now, when I say data, I mean that the
company collects distinct pieces of information about an object. So, that object could be a
real-world entity such as people, or any object such as a mouse, laptop etc.

Database Management System & Types of DBMS


A Database Management System (DBMS) is a software application that interacts
with the user, applications and the database itself to capture and analyze data. The data stored
in the database can be modified, retrieved and deleted, and can be of any type like strings,
numbers, images etc.

Types of DBMS

There are mainly 4 types of DBMS, which are Hierarchical, Relational, Network, and
Object-Oriented DBMS.

 Hierarchical DBMS:  As the name suggests, this type of DBMS has a style of
predecessor-successor type of relationship. So, it has a structure similar to that of a
tree, wherein the nodes represent records and the branches of the tree represent fields.
 Relational DBMS (RDBMS): This type of DBMS uses a structure that allows the
users to identify and access data in relation to another piece of data in the database.
 Network DBMS: This type of DBMS supports many to many relations wherein
multiple member records can be linked.
 Object-oriented DBMS: This type of DBMS uses small individual software called
objects. Each object contains a piece of data, and the instructions for the actions to be
done with the data.

Structured Query Language (SQL)


SQL is the core of a relational database which is used for accessing and managing the
database. By using SQL, you can add, update or delete rows of data, retrieve subsets of
information, modify databases and perform many actions. The different subsets of SQL are as
follows:

 DDL (Data Definition Language) – It allows you to perform various operations on the
database such as CREATE, ALTER and DROP objects.
 DML (Data Manipulation Language) – It allows you to access and manipulate data. It
helps you to insert, update, delete and retrieve data from the database.
 DCL (Data Control Language) – It allows you to control access to the database.
Example – Grant or Revoke access permissions.
 TCL (Transaction Control Language) – It allows you to deal with the transaction of
the database. Example – Commit, Rollback, Savepoint, Set Transaction.
Using MySQL
Of course, there’s not a lot of point to being able to change HTML output dynamically
unless you also have a means to track the changes that users make as they use your website.
In the early days of the Web, many sites used “flat” text files to store data such as usernames
and passwords. But this approach could cause problems if the file wasn’t correctly locked
against corruption from multiple simultaneous accesses. Also, a flat file can get only so big
before it becomes unwieldy to manage—not to mention the difficulty of trying to merge files
and perform complex searches in any kind of reasonable time. That’s where relational
databases with structured querying become essential. And MySQL, being free to use and
installed on vast numbers of Internet web servers, rises superbly to the occasion. It is a robust
and exceptionally fast database management system that uses English-like commands. The
highest level of MySQL structure is a database, within which you can have one or more
tables that contain your data. For example, let’s suppose you are working on a table called
users, within which you have created columns for surname, firstname, and
email, and you now wish to add another user. One command that you might use to do this is:
INSERT INTO users VALUES('Smith', 'John', 'jsmith@mysite.com'); Of course, as
mentioned earlier, you will have issued other commands to create the database and table and
to set up all the correct fields, but the INSERT command here shows how simple it can be to
add new data to a database. The INSERT command is an example of SQL (which stands for
Structured Query Language), a language designed in the early 1970s and reminiscent of one
of the oldest programming languages, COBOL. It is well suited, however, to database
queries, which is why it is still in use after all this time. It’s equally easy to look up data.
Let’s assume that you have an email address for a user and you need to look up that person’s
name. To do this, you could issue a MySQL query such as: SELECT surname, firstname
FROM users WHERE email='jsmith@mysite.com'; MySQL will then return Smith, John and
any other pairs of names that may be associated with that email address in the database. As
you’d expect, there’s quite a bit more that you can do with MySQL than just simple INSERT
and SELECT commands. For example, you can join multiple tables according to various
criteria, ask for results in a variety of different orders, make partial matches when you know
only part of the string that you are searching for, return only the nth result, and a lot more.
Using PHP, you can make all these calls directly to MySQL without having to run the
MySQL program yourself or use its command-line interface. This means you can save the
results in arrays for processing and perform multiple lookups, each dependent on the results
returned from earlier ones, to drill right down to the item of data you need. For even more
power, as you’ll see later, there are additional functions built right into MySQL that you can
call up for common operations and extra speed.
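The same INSERT and SELECT flow can be driven from Python through any DB-API 2.0 connector. This sketch uses the standard-library sqlite3 module as a stand-in for a MySQL driver; the query pattern is the same, though MySQL connectors use %s placeholders instead of ?:

```python
import sqlite3

# in-memory database standing in for a MySQL server
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (surname TEXT, firstname TEXT, email TEXT)")

# parameterized INSERT, equivalent to the raw SQL shown above
cur.execute("INSERT INTO users VALUES (?, ?, ?)",
            ("Smith", "John", "jsmith@mysite.com"))

# look up a name by email address
cur.execute("SELECT surname, firstname FROM users WHERE email = ?",
            ("jsmith@mysite.com",))
row = cur.fetchone()
print(row)  # ('Smith', 'John')
conn.close()
```

Parameterized placeholders, rather than string concatenation, are what prevent SQL injection when the values come from a web form.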
The Apache Web Server
In addition to PHP, MySQL, JavaScript, and CSS, there’s actually a fifth hero in the dynamic
Web: the web server. In the case of this book, that means the Apache web server. We’ve
discussed a little of what a web server does during the HTTP server/client exchange, but it
actually does much more behind the scenes. For example, Apache doesn’t serve up just
HTML files—it handles a wide range of files, from images and Flash files to MP3 audio files,
RSS (Really Simple Syndication) feeds, and more. Each element a web client encounters in
an HTML page is also requested from the server, which then serves it up. But these objects
don’t have to be static files, such as GIF images. They can all be generated by programs such
as PHP scripts. That’s right: PHP can even create images and other files for you, either on the
fly or in advance to serve up later. To do this, you normally have modules either precompiled
into Apache or PHP or called up at runtime. One such module is the GD library (short for
Graphics Draw), which PHP uses to create and handle graphics.
Apache also supports a huge range of modules of its own. In addition to the PHP module, the
most important for your purposes as a web programmer are the modules that handle security.
Other examples are the Rewrite module, which enables the web server to handle a varying
range of URL types and rewrite them to its own internal requirements, and the Proxy module,
which you can use to serve up often-requested pages from a cache to ease the load on the
server. Later in the book, you’ll see how to actually use some of these modules to enhance
the features provided by the core technologies we cover.
About Open Source
Whether or not being open source is the reason these technologies are so popular has often
been debated, but PHP, MySQL, and Apache are the three most commonly used tools in their
categories. What
can be said, though, is that being open source means that they have been developed in the
community by teams of programmers writing the features they themselves want and need,
with the original code available for all to see and change. Bugs can be found and security
breaches can be prevented before they happen. There’s another benefit: all these programs are
free to use. There’s no worrying about having to purchase additional licenses if you have to
scale up your website and add more servers. And you don’t need to check the budget before
deciding whether to upgrade to the latest versions of these products.
What Is a WAMP, MAMP, or LAMP?
WAMP, MAMP, and LAMP are abbreviations for “Windows, Apache, MySQL, and PHP,”
“Mac, Apache, MySQL, and PHP,” and “Linux, Apache, MySQL, and PHP,”
respectively. These abbreviations describe a fully functioning setup used for
developing dynamic Internet web pages. WAMPs, MAMPs, and LAMPs come in the form of
a package that binds the bundled programs together so that you don’t have to install and set
them up separately. This means you can simply download and install a single program and
follow a few easy prompts to get your web development server up and running in the quickest
time with the minimum hassle. During installation, several default settings are created for
you. The security configurations of such an installation will not be as tight as on a production
web server, because it is optimized for local use. For these reasons, you should never install
such a setup as a production server. However, for developing and testing websites and
applications, one of these installations should be entirely sufficient.
Using an IDE
As good as dedicated program editors can be for your programming productivity, their utility
pales into insignificance when compared to Integrated Developing Environments (IDEs),
which offer many additional features such as in-editor debugging and program testing, as
well as function descriptions and much more.
Web Framework
Web Application Framework or simply Web Framework represents a collection of libraries
and modules that enables a web application developer to write applications without having to
bother about low-level details such as protocols, thread management etc.
Flask
Flask is a web framework. This means Flask provides you with tools, libraries and
technologies that allow you to build a web application. This web application can be some
web pages, a blog, a wiki, or go as big as a web-based calendar application or a commercial
website.
Flask is often referred to as a micro framework. It aims to keep the core of an application
simple yet extensible. Flask does not have a built-in abstraction layer for database handling,
nor does it have form validation support. Instead, Flask supports extensions that add such
functionality to the application. Although Flask is rather young compared to
most Python frameworks, it holds great promise and has already gained popularity among
Python web developers. Let’s take a closer look at Flask, the so-called “micro” framework
for Python.
Flask was designed to be easy to use and extend. The idea behind Flask is to build a solid
foundation for web applications of different complexity. From then on you are free to plug in
any extensions you think you need. Also you are free to build your own modules. Flask is
great for all kinds of projects. It's especially good for prototyping.
Flask belongs to the micro-framework category. Micro-frameworks normally have few or
no dependencies on external libraries. This has pros and cons: the framework is light and
there are few dependencies to update and watch for security bugs, but sometimes you will
have to do more work yourself or grow the list of dependencies by adding plugins. In the
case of Flask, its dependencies are:
 WSGI
Web Server Gateway Interface (WSGI) has been adopted as a standard for Python web
application development. WSGI is a specification for a universal interface between the web
server and the web applications.
 Werkzeug
It is a WSGI toolkit, which implements requests, response objects, and other utility functions.
This enables building a web framework on top of it. The Flask framework uses Werkzeug as
one of its bases.
 Jinja2
Jinja2 is a popular templating engine for Python. A web templating system combines a
template with a certain data source to render dynamic web pages.
 built-in development server and fast debugger
 integrated support for unit testing
 RESTful request dispatching
 Jinja2 templating
 support for secure cookies (client side sessions)
 WSGI 1.0 compliant
 Unicode based
Flask also gives you much more control over the development stage of your project. It
follows the principle of minimalism and lets you decide how you will build your
application.

 Flask has a lightweight and modular design, so it is easy to transform it into the web
framework you need with a few extensions, without weighing it down
 ORM-agnostic: you can plug in your favourite ORM e.g. SQLAlchemy.
 Basic foundation API is nicely shaped and coherent.
 Flask documentation is comprehensive, full of examples and well structured. You
can even try out some sample applications to really get a feel for Flask.
 It is super easy to deploy Flask in production (Flask is 100% WSGI 1.0
compliant).
 HTTP request handling functionality.
 High Flexibility.
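A minimal Flask application for this project might expose the bill status as a JSON route; the route name and payload below are illustrative assumptions, not the project's actual endpoints:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/bill-status")
def bill_status():
    # placeholder payload; a real view would query MySQL for the consumer
    return jsonify(units_kwh=118.4, amount_due=592.0, alert_50_percent=False)

# exercise the route with Flask's built-in test client
# (in production the app would be served by a WSGI server instead)
response = app.test_client().get("/bill-status")
print(response.status_code, response.get_json())
```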

CHAPTER-6

SYSTEM ARCHITECTURE
[Figure: system architecture. Data flows from preprocessing through feature extraction,
clustering, and classification to load forecasting and dynamic pricing, with the system admin
and the consumer as the actors.]
DATA FLOW DIAGRAMS

A data-flow diagram (DFD) is a way of representing the flow of data through a process or a
system. The DFD also provides information about the outputs and inputs of each entity and
of the process itself. A data-flow diagram has no control flow: there are no decision rules
and no loops. The visual representation makes it a good communication tool between the
user and the system designer. The structure of a DFD allows starting from a broad overview
and expanding it into a hierarchy of detailed diagrams. DFDs are often used, among other
reasons, to determine physical system construction requirements.
Data flow symbols:
 An entity: a source of data or a destination for data.
 A process or task that is performed by the system.
 A data store: a place where data is held between processes.
 A data flow.
LEVEL 0
A level 0 data flow diagram (DFD), also known as a context diagram, shows a
data system as a whole and emphasizes the way it interacts with external entities.

LEVEL 1:
A level 1 DFD breaks the single process of the context diagram down into the
main sub-processes of the system and shows the data flows between them, giving a more
detailed view of how the system works at a glance.
LEVEL 2:

A level 2 data flow diagram (DFD) offers a more detailed look at the processes that
make up an information system than a level 1 DFD does. It can be used to plan or record
the specific makeup of a system.

UML Diagrams

Use-Case Diagram
[Figure: use-case diagram. The user registers, logs in, checks the billing status, and pays the
bill; the admin extracts information, compares the processed data, and analyzes and predicts
the data; the EB provides the billing status and sends alert messages.]
Class Diagram
[Figure: class diagram. Admin (compare the dataset values, analyze the values, predict the
data, produce the final result, add user details, login), User (register their details, login,
check the bill status, pay the bill), EB (compare the user data and dataset values, analyze the
values, predict the energy consumption, give the billing status, send alert messages to the
user), and Database (store the data, access the data).]

Activity Diagram
[Figure: activity diagram. Register, login, add user details, compare the current and dataset
values, analyze and predict the data, produce the billing status, check the EB bill status, pay
the amount, and send the alert message.]

Sequence Diagram
[Figure: sequence diagram between Admin, EB, and User. The admin registers, logs in,
compares and analyzes the values, predicts the values, and adds the user details; the user
registers, logs in, and adds their home appliance details; the EB compares, analyzes and
predicts to provide the billing status; the user checks the bill status and pays the EB bill
amount; an alert message is sent at 50 percent usage.]

Collaboration Diagram
[Figure: collaboration diagram.]
E-R Diagrams
[Figure: entity-relationship diagram.]
CHAPTER-7

TESTING

7.1 TESTING PROCESS

The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, sub-assemblies, assemblies and/or a finished product it is the process of
exercising software with the intent of ensuring that the Software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test. Each test type addresses a specific testing requirement.

The system captures information related to energy demand, payments, and so on, from
different levels of the organization, with the aim of capturing it as close to the source as
possible. The system consists of two login pages, namely the administrator page and the
consumer page. The system process is described as follows:
 After connecting online, the application starts by displaying the homepage. Thereafter
the username and password are requested (for the consumer or the administrator). If it
is a new user, then the user will have to register by filling in all the registration details.
 If the password and user id details are valid, then the consumer gets access to the main
page. On the main page, the consumer can: a. view his/her bill status; b. pay the bill
online.
The user starts by selecting the appliances he/she uses on a daily basis. The wattage of
each of the appliances is provided. If a specific appliance is not listed, then the user can select
the ‘others’ option and insert the wattage of that specific appliance. The calculator then
computes the total usage of all the appliances selected by the user, and it also issues an
alert message when 50 percent of the allowance is consumed. The consumption calculator
uses this information to calculate the total units consumed in kWh.
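The calculator described above can be sketched as follows; the appliance wattages, hours, and the monthly allowance are illustrative assumptions:

```python
# hypothetical daily-usage entries: (appliance, wattage in W, hours per day)
appliances = [
    ("fan", 75, 8),
    ("hvac", 1500, 4),
    ("others", 200, 2),   # user-supplied wattage for an unlisted appliance
]

def monthly_units_kwh(entries, days=30):
    """Total consumption in kWh: watts x hours / 1000, summed over a month."""
    return sum(watts * hours for _, watts, hours in entries) * days / 1000

def usage_alert(units, allowance_kwh=400):
    """Return an alert message once 50 percent of the allowance is used."""
    if units >= 0.5 * allowance_kwh:
        return "Alert: 50 percent of your usage allowance has been consumed"
    return None

units = monthly_units_kwh(appliances)
print(units, usage_alert(units))
```

With these sample entries the month totals 210 kWh, which crosses the 200 kWh (50 percent) threshold and triggers the alert.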

Results and Discussion


The actual and predicted values obtained using the proposed model for appliance energy use
on a random day in the testing dataset show the high accuracy and low error of the proposed
hybrid deep learning prediction model. Similarly, the real and predicted values obtained
using the proposed forecasting model for global active power on a subset of records show
high accuracy. The proposed model delivers near-perfect accuracy at both spikes and
troughs in the illustration.

Conclusion
Improving the accuracy of energy consumption forecasting at the level of buildings will
hugely impact the generation and scheduling of energy resources and efficient utilization of
renewable energy resources. This project proposed a novel hybrid deep learning model that
amplifies the merits of uni-directional LSTMs, bidirectional LSTMs, and stacking of RNNs
on energy consumption forecasting accuracy. Bidirectional LSTMs are used to recognize the
underlying energy consumption patterns in both directions and forecast energy consumption
values with high accuracy. The obtained accuracy is irrespective of the high uncertainty and
stochasticity of individual household load demand. Superior accuracy performance was
demonstrated using two real energy consumption datasets in individual residential smart
buildings. Dropout regularization and early stopping were employed to prevent overfitting in
the proposed hybrid deep learning model. The proposed model has been evaluated against the
widely employed hybrid models including CNN-LSTM, ConvLSTM, the LSTM encoder-
decoder model, other variants of stacked LSTM models, and our proposed ensemble
AREM model from previous work. The superior performance of our proposed model in both
case studies and in multi-step forecasting corroborates the merits of employing bidirectional
training. The proposed model is not limited to short-term energy forecasting.

Future Scope
To sum up, Flask is one of the most polished and feature-rich micro frameworks available.
Still young, Flask has a thriving community, first-class extensions, and an elegant API.
Flask comes with all the benefits of fast templates, strong WSGI features, thorough unit
testability at the web application and library level, and extensive documentation. So next time
you are starting a new project where you need some good features and a vast number of
extensions, definitely check out Flask.
