
MODULE 3

What is Modeling?
 Modeling is the process of producing a model; a model is a representation of the construction and
working of some system of interest. A model is similar to but simpler than the system it represents.
 One purpose of a model is to enable the analyst to predict the effect of changes to the system.
 A model should be a close approximation to the real system and incorporate most of its salient features.
On the other hand, it should not be so complex that it is impossible to understand and experiment with
it.
 A good model is a judicious tradeoff between realism and simplicity. Simulation practitioners
recommend increasing the complexity of a model iteratively.
 An important issue in modeling is model validity. Model validation techniques include simulating the
model under known input conditions and comparing model output with system output.
Classification of Models:

 Generally, a model intended for a simulation study is a mathematical model developed with the help of
simulation software.
 Mathematical model classifications include deterministic (input and output variables are
fixed values) or stochastic (at least one of the input or output variables is probabilistic); static (time is
not taken into account) or dynamic (time-varying interactions among variables are taken into account).
 Typically, simulation models are stochastic and dynamic.
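The deterministic/stochastic and static/dynamic distinctions can be sketched in a few lines of Python. This is a minimal illustration, not a full simulation study: the single-server utilization formula and the parameter values are example assumptions.

```python
import random

def deterministic_model(arrival_rate, service_rate):
    # Static, deterministic: fixed inputs always give the same output.
    # Single-server utilization rho = lambda / mu.
    return arrival_rate / service_rate

def stochastic_model(arrival_rate, service_rate, n=10_000, seed=1):
    # Dynamic, stochastic: exponentially distributed inter-arrival and
    # service times make the output a random variable that unfolds over time.
    rng = random.Random(seed)
    clock = busy = 0.0
    for _ in range(n):
        clock += rng.expovariate(arrival_rate)   # next inter-arrival time
        busy += rng.expovariate(service_rate)    # work brought by the arrival
    return min(busy / clock, 1.0)                # estimated utilization

print(deterministic_model(4, 5))   # always exactly 0.8
print(stochastic_model(4, 5))      # close to 0.8, varies with the seed
```

The stochastic version returns a slightly different estimate for each seed, which is exactly why simulation output requires statistical analysis.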
What is Simulation?
 A simulation of a system is the operation of a model of the system. The model can be reconfigured and
experimented with.
 Often, doing this with the real system is impossible, too expensive or impractical.
 The operation of the model can be studied, and hence, properties concerning the behavior of the actual
system or its subsystem can be inferred.
 In its broadest sense, simulation is a tool to evaluate the performance of a system, existing or proposed,
under different configurations of interest and over long periods of real time.
 Simulation is used before an existing system is altered or a new system is built, to reduce the chances
of failure to meet specifications, to eliminate unforeseen bottlenecks, to prevent under- or
over-utilization of resources, and to optimize system performance.
 For instance, simulation can be used to answer questions like:
 What is the best design for a new telecommunications network?
 What are the associated resource requirements?
 How will a telecommunication network perform when the traffic load increases by 50%?
 How will a new routing algorithm affect its performance? Which network protocol optimizes
Network performance?
 What will be the impact of a link failure?
Simulation Study:

 The iterative nature of the process is indicated by the system under study becoming the altered system
which then becomes the system under study and the cycle repeats.
 In a simulation study, human decision making is required at all stages, namely, model development,
experiment design, output analysis, conclusion formulation, and making decisions to alter the system
under study.
 The only stage where human intervention is not required is the running of the simulations, which most
simulation software packages perform efficiently.

Schematic of a Simulation Study:

Developing a Simulation Model:


 The steps involved in developing a simulation model, designing a simulation experiment, and
performing simulation analysis are:
 Step 1. Identify the problem: Enumerate problems with an existing system. Produce
requirements for a proposed system.
 Step 2. Formulate the problem: Select the bounds of the system, the problem or a part thereof,
to be studied. Define overall objective of the study and a few specific issues to be addressed.
 Step 3. Collect and process real system data: Collect data on system specifications (e.g.,
bandwidth for a communication network), input variables, as well as performance of the existing
system. Identify sources of randomness in the system, i.e., the stochastic input variables.
 Step 4. Formulate and develop a model: Develop schematics and network diagrams of the
system (How do entities flow through the system?). Translate these conceptual models to
simulation software acceptable form. Verify that the simulation model executes as intended.
 Step 5. Validate the model: Compare the model's performance under known conditions with the
performance of the real system. Perform statistical inference tests and get the model examined
by system experts.
 Step 6. Document model for future use: Document the objectives, assumptions and input variables
of the simulation model in detail, so that the reasons for changes in the performance measures
can be observed and identified.
 Step 7. Select appropriate experimental design: Select a performance measure, a few input
variables that are likely to influence it, and the levels of each input variable.
 Step 8. Establish experimental conditions for runs: Address the question of obtaining accurate
information and the most information from each run. Determine if the system is stationary
(performance measure does not change over time) or non-stationary (performance measure
changes over time).
 Step 9. Perform simulation runs: Perform runs according to steps 7-8 above.
 Step 10. Interpret and present results: Compute numerical estimates (e.g., mean, confidence
intervals) of the desired performance measure for each configuration of interest. To obtain
confidence intervals for the mean of auto-correlated data, the technique of batch means can
be used.
 Step 11. Recommend further course of action: This may include further experiments to increase
the precision and reduce the bias of estimators, to perform sensitivity analyses, etc.
 Simulation models consist of the following components: system entities, input variables, performance
measures, and functional relationships.
 For example, in a simulation model of an M/M/1 queue, the server and the queue are system entities,
arrival rate and service rate are input variables, mean wait time and maximum queue length are
performance measures, and 'time in system = wait time + service time' is an example of a functional
relationship.
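The M/M/1 example can be sketched as a short simulation using Lindley's recurrence for waiting times. The arrival and service rates below are arbitrary example values, and the closed-form result Wq = λ/(μ(μ − λ)) is used only as a sanity check on the output.

```python
import random

def simulate_mm1(arrival_rate, service_rate, n_customers=100_000, seed=42):
    """Estimate the mean wait time in queue for an M/M/1 system using
    Lindley's recurrence: W(i+1) = max(0, W(i) + S(i) - A(i+1))."""
    rng = random.Random(seed)
    wait = total_wait = 0.0
    for _ in range(n_customers):
        total_wait += wait
        service = rng.expovariate(service_rate)        # S(i)
        interarrival = rng.expovariate(arrival_rate)   # A(i+1)
        wait = max(0.0, wait + service - interarrival)
    return total_wait / n_customers

lam, mu = 3.0, 5.0                    # example input variables
estimate = simulate_mm1(lam, mu)      # performance measure: mean wait
theory = lam / (mu * (mu - lam))      # closed-form Wq = 0.3, sanity check
print(f"simulated mean wait {estimate:.3f} vs theoretical {theory:.3f}")
```

Comparing the simulated estimate with the known analytic result is exactly the kind of validation under known input conditions recommended in Step 5.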
Problems where Simulation Modelling & Analysis is used:
In general, whenever there is a need to model and analyze randomness in a system, simulation is the tool of
choice. More specifically, situations in which simulation modeling and analysis is used include the following:
 It is impossible or extremely expensive to observe certain processes in the real world, e.g., next year's
cancer statistics, performance of the next space shuttle, and the effect of Internet advertising on a
company's sales.
 Problems in which a mathematical model can be formulated but analytic solutions are either impossible
(e.g., job shop scheduling problem, high order difference equations) or too complicated (e.g., complex
systems like the stock market, and large scale queuing models).
 It is impossible or extremely expensive to validate the mathematical model describing the system, e.g.,
due to insufficient data.
Selection of Simulation Software:

 Although a simulation model can be built using general purpose programming languages which are
familiar to the analyst, available over a wide variety of platforms, and less expensive, most simulation
studies today are implemented using a simulation package.
 The advantages are reduced programming requirements, natural framework for simulation modeling,
conceptual guidance, automated gathering of statistics, and graphic symbolism for communication,
animation and increasingly, flexibility to change the model.
 There are hundreds of simulation products on the market, many with price tags of $15,000 or more.
 Naturally, the question of how to select the best simulation software for an application arises. Metrics
for evaluation include modeling flexibility, ease of use, modeling structure (hierarchical v/s flat; object-
oriented v/s nested), code reusability, graphic user interface, animation, dynamic business graphics,
hardware and software requirements, statistical capabilities, output reports and graphical plots,
customer support, and documentation.
Simulation Packages – Types:
 The two types of simulation packages are:
 Simulation languages
 Application-oriented simulators (Table 2)
 Simulation languages offer more flexibility than the application-oriented simulators.
 On the other hand, languages require varying amounts of programming expertise.
 Application-oriented simulators are easier to learn and have modeling constructs closely related to the
application.
 Most simulation packages incorporate animation, which is excellent for communication and can be used
to debug the simulation program; a "correct looking" animation, however, is not a guarantee of a valid
model. More importantly, animation is not a substitute for output analysis.

Benefits of Simulation Modelling and Analysis:

According to practitioners, simulation modeling and analysis is one of the most frequently used operations
research techniques. When used judiciously, simulation modeling and analysis makes it possible to:

 Obtain a better understanding of the system by developing a mathematical model of a system of
interest, and observing the system's operation in detail over long periods of time.
 Test hypotheses about the system for feasibility.
 Compress time to observe certain phenomena over long periods or expand time to observe a complex
phenomenon in detail.
 Study the effects of certain informational, organizational, environmental and policy changes on the
operation of a system by altering the system's model; this can be done without disrupting the real system
and significantly reduces the risk of experimenting with the real system.
 Experiment with new or unknown situations about which only weak information is available.
 Identify the "driving" variables - ones that performance measures are most sensitive to - and the
inter-relationships among them.
 Identify bottlenecks in the flow of entities (material, people, etc.) or information.
 Use multiple performance metrics for analyzing system configurations.
 Employ a systems approach to problem solving.
 Develop well designed and robust systems and reduce system development time.
Pitfalls to Guard against in Simulation:
Simulation can be a time-consuming and complex exercise, from modeling through output analysis, one that
necessitates the involvement of resident experts and decision makers in the entire process. Following is a
checklist of pitfalls to guard against:
 Unclear objective
 Using simulation when an analytic solution is appropriate
 Invalid model
 Simulation model that is too complex or too simple
 Erroneous assumptions
 Undocumented assumptions (This is extremely important and it is strongly suggested that assumptions
made at each stage of the simulation modeling and analysis exercise be documented thoroughly)
 Using the wrong input probability distribution
 Replacing a distribution (stochastic) by its mean (deterministic)
 Using the wrong performance measure
 Bugs in the simulation program
 Using standard statistical formulas that assume independence in simulation output analysis
 Initial bias in output data
 Poor schedule and budget planning
 Poor communication among the personnel involved in the simulation study
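The "replacing a distribution by its mean" pitfall can be demonstrated concretely. In this minimal sketch (parameter values are illustrative), a means-only model of a single-server queue predicts no waiting at all, while the stochastic model of the same system shows substantial delay.

```python
import random

def mean_wait(interarrival, service, n=50_000):
    """Mean queueing delay via Lindley's recurrence; interarrival and
    service are zero-argument callables returning the next sample."""
    wait = total = 0.0
    for _ in range(n):
        total += wait
        wait = max(0.0, wait + service() - interarrival())
    return total / n

rng = random.Random(0)
lam, mu = 4.0, 5.0

# Deterministic model: every time replaced by its mean -> no queue ever forms.
deterministic = mean_wait(lambda: 1 / lam, lambda: 1 / mu)

# Stochastic model: exponential variability -> customers do wait.
stochastic = mean_wait(lambda: rng.expovariate(lam),
                       lambda: rng.expovariate(mu))

print(f"means only: {deterministic:.3f}   stochastic: {stochastic:.3f}")
```

Since the mean service time (0.2) is below the mean inter-arrival time (0.25), the deterministic model reports zero delay, badly underestimating the congestion the stochastic model reveals.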

Functional Model:
 The functional model is an abstract framework for understanding the main functionality groups of the
IoT architecture environment and their relationships.
 Longitudinal Functionality Groups: Application, IoT Business Process Management, Virtual Entity, IoT
Service, Service Organisation, Communication, Device
 Transversal Functionality Groups: Management, Security

 The functional model contains seven longitudinal functionality groups complemented by two transversal
functionality groups.
 These transversal groups provide functionalities that are required by the longitudinal groups.
 The policies governing the transversal groups apply not only to the groups themselves, but also
pertain to the longitudinal groups.
 For example: for a security policy to be effective, it must ensure that there is no functionality provided
by a component that would circumvent the policy and provide an unauthorized access.
 The functional model is a layered model that shows the main communication flows between the FGs. Since
the transversal FGs (Management & Security) interface with most of the other FGs, their relationships are
not explicitly depicted.
Longitudinal Functionality Groups:
 IoT Business Process Management: The IoT Business Process Management Functionality Group (BPM
FG) relates to the integration of traditional business process management systems, as they are common
in the enterprise world, with the IoT.
 Virtual Entity & IoT Service: These FGs provide functions for interacting with the IoT system on the
basis of Virtual Entities and the IoT services associated with them.
 Service Organisation: The central functional group that acts as a communication hub between several
other functional groups.

 Communication: The Communication FG (CFG) ensures reliable communication and flow control. The CFG
enables bridging among different networks, allowing devices to act as a network entry point implementing
forwarding, filtering, connection tracking and packet aggregation functions. All these functionalities are
supported by an error detection and correction infrastructure implemented by this FG.
Transversal Functionality Groups:

 Management: The Management Functionality Group (Management FG) is responsible for the
composition and tracking of actions that involve one or more other FGs. Examples: turning the entire
IoT system into a sleep mode during an energy-harvesting cycle; tracking the state of other FGs.
 Security: The Security Functionality Group (Security FG) is responsible for ensuring the security and
privacy of the IoT system. The Security FG is also in charge of protecting the user's private parameters.
IoT Design Methodology Steps:

IoT Design Methodology Steps for Home Automation System:


Purpose & Requirements Specification: To identify the system purpose, behavior and requirements.
 Purpose: A home automation system that allows controlling the lights in a home, remotely, using a
web application.
 Behavior: The home automation system should have auto and manual modes. In auto mode, the system
measures the light level in the room and switches on the light when it gets dark. In manual mode, the
system provides the option of manually and remotely switching on/off the light.
 The requirements include data collection requirements, data analysis requirements, system
management requirements, data privacy and security requirements, and user interface requirements.
 System Management Requirement: The system should provide remote monitoring and control
functions.
 Data Analysis Requirement: The system should perform local analysis of the data.
 Application Deployment Requirement: The application should be deployed locally on the device,
but should be accessible remotely.
 Security Requirement: The system should have basic user authentication capability.

Process Model Specification: It formally describes the use cases of the IoT system based on the purpose and
requirement specifications.
Domain Model Specification:

 The domain model describes the main concepts, entities and objects in the domain of the IoT system to
be designed.
 It defines the attributes of the objects and relationships between objects.
 It provides an abstract representation of the concepts, objects and entities in the IoT domain,
independent of any specific technology or platform.
 The entities, objects and concepts include:
 Physical entity
 Virtual entity
 Device
 Resource
 Service
Information Model Specification: The Information Model defines the structure of all the information in the IoT
system and adds more details to the Virtual Entities by defining their attributes and relations. But it does not
describe the specifics of how the information is represented or stored.

Service Specifications: Service specifications define the services in the IoT system: service types, service
inputs/output, service endpoints, and service schedules.
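As an illustration, a service specification for the home automation example might be captured as a simple structure like the following. The service name, endpoint path, and schedule value are hypothetical choices for this sketch, not values prescribed by the methodology.

```python
# Hypothetical service specification for the home automation system's
# mode service; every concrete value below is an illustrative assumption.
mode_service = {
    "name": "mode-service",                  # sets/retrieves auto or manual mode
    "type": "REST",                          # service type
    "inputs": {"mode": ["auto", "manual"]},  # service input and allowed values
    "output": {"mode": "current mode"},      # service output
    "endpoint": "/api/mode",                 # service endpoint (assumed path)
    "schedule": "on-demand",                 # invoked on user request
}

print(mode_service["name"], mode_service["endpoint"])
```

A real system would define one such specification per service (e.g., a state service for the light and a controller service for the auto mode), each with its own type, inputs/outputs, endpoint and schedule.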
IoT Level Specification: Highlights the deployment level for the IoT system.

Functional View Specification: It defines the functions of the IoT systems grouped into various functional
groups. The functional groups included in a functional view are: Device, Communication, Services, Management,
Security and Application.

Operational View Specification: In this step, various options pertaining to the IoT system deployment and
operation are defined, such as, service hosting options, storage options, device options, application hosting
options, etc.
Device and Component Integration: The devices and components used here are Raspberry Pi minicomputer,
LDR sensor, Relay switch actuator.

Application Development: Auto – controls the light appliance automatically based on the lighting conditions in
the room. Light – When Auto mode is off, it is used for manually controlling the light appliance; when auto mode
is on, it reflects the current state of the light appliance.
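The auto/manual behavior described above can be sketched as a small decision function. The darkness threshold is an assumed example value; on real hardware this decision would read the LDR sensor and drive the relay actuator via the Raspberry Pi's GPIO pins.

```python
def light_state(mode, auto_on_threshold, light_level, manual_switch):
    """Decide whether the light appliance should be on.

    In auto mode the light turns on when the measured light level falls
    below a darkness threshold; in manual mode the remote switch state
    is used directly. Threshold and level units are illustrative.
    """
    if mode == "auto":
        return light_level < auto_on_threshold
    return bool(manual_switch)

# Auto mode, dark room (level 120 below threshold 300): light switches on.
print(light_state("auto", 300, 120, manual_switch=False))    # True
# Manual mode: the remote switch state alone decides.
print(light_state("manual", 300, 120, manual_switch=False))  # False
```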

Stages of Data Life Cycle:

The data life cycle is the sequence of stages that a particular unit of data goes through from its initial generation
or capture to its eventual archival and/or deletion at the end of its useful life. The various phases of a typical
data lifecycle are illustrated in the figures below:
 Creation: Data can be created in different forms like PDF, image, Word document or SQL database data,
and in any organization data is created in one of three ways:
 Data Acquisition: acquiring already existing data which has been produced outside the
organization.
 Data Entry: manual entry of new data by personnel within the organization.
 Data Capture: capture of data generated by devices used in various processes in the organization.
 Storage: Once data has been created, it needs to be stored and protected, with the appropriate level of
security. A robust backup and recovery process should also be implemented to ensure retention of the
data during the lifecycle.
 Usage: During the usage phase of the data lifecycle, data is used to support activities in the organization.
Data can be viewed, processed, modified and saved. An audit trail should be maintained for all critical
data to ensure that all modifications to data are fully traceable. Data may also be made available to share
with others outside the organization.
 Archival: Data archival is the copying of data and its removal from all active production environments;
the archived copy is kept in case it is needed again, but no maintenance or general usage occurs. If
necessary, the data can be restored to an environment where it can be used.
 Destruction: Data destruction or purging is the removal of every copy of a data item from an organization
and is typically done from an archive storage location. The challenge of this phase is to ensure that the
data has been properly destroyed. It is important to ensure before destroying data that the data items
have exceeded their required regulatory retention period.
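A minimal sketch of the retention check mentioned above: before purging, verify that the item has exceeded its retention period. The retention period here is an assumed policy input; real policies vary by jurisdiction and data type.

```python
from datetime import date, timedelta

def safe_to_destroy(created_on, retention_days, today=None):
    """Return True only if the data item has exceeded its required
    regulatory retention period (retention_days is an assumed policy
    value for illustration)."""
    today = today or date.today()
    return today > created_on + timedelta(days=retention_days)

# A record created in 2015 under a 7-year retention policy: safe to purge.
print(safe_to_destroy(date(2015, 1, 1), 7 * 365, today=date(2023, 6, 1)))  # True
# A record from 2022 is still within its retention period: must be kept.
print(safe_to_destroy(date(2022, 1, 1), 7 * 365, today=date(2023, 6, 1)))  # False
```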
Data Lifecycle & its Analysis:

Fundamental steps of a data analytics project plan (AI, Machine Learning and Big Data):
These seven data science steps ensure realization of business value from each unique project and mitigate the
risk of error.
 Define your goal: Understand the business or activity of the data project. Identify clear objectives for
what you want to do with the data.
 Get your data: Once the objectives are identified, collect and aggregate the data from different sources.
Ways to collect the data:
 Query a database (using tools such as MySQL to retrieve and process the data)
 Scrape from the websites
 Connect to Web APIs
 Obtain data directly from files (e.g., downloading from Kaggle, or existing corporate data
stored in CSV files)
 Explore and clean the data:
 Clean the data: Process of organizing and tidying up the data, removing what is no longer needed,
replacing what is missing and standardizing the format of the collected data. Steps involved:
 Convert the data from one format to another and consolidate everything into one
standardized format
 Filter the data
 Extract and replace values (values may be missing, or may appear to be non-values;
this is the time to replace them accordingly)
 Split, merge and extract data (for example, a place-of-origin field may contain both “City” and
“State”; depending on the requirements, the data can be either merged or split)
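The cleaning steps above (standardizing the format, replacing missing values, splitting a combined field) can be sketched on a single record. The field names and the 'unknown' placeholder are illustrative assumptions.

```python
def clean_record(raw):
    """Tidy one record: standardize format, replace missing values, and
    split a combined place-of-origin field into city and state.
    Field names and the 'unknown' placeholder are illustrative."""
    # Replace empty or whitespace-only string values with a placeholder.
    record = {k: (v.strip() if v and v.strip() else "unknown")
              for k, v in raw.items()}
    # Split "City, State" into two separate fields.
    if "," in record.get("origin", ""):
        city, state = [p.strip() for p in record["origin"].split(",", 1)]
        record["city"], record["state"] = city, state
    record["name"] = record["name"].title()   # standardize the name format
    return record

raw = {"name": "jane DOE", "origin": "Austin, Texas", "age": ""}
print(clean_record(raw))
# {'name': 'Jane Doe', 'origin': 'Austin, Texas', 'age': 'unknown',
#  'city': 'Austin', 'state': 'Texas'}
```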
 Explore the data: Once the data is ready to be used, and before processing using AI and Machine
Learning, the data needs to be examined:
 Figure out the business question and transform it into a data science question
 Inspect the data and its properties
 Compute descriptive statistics to extract features and test significant variables. Testing
significant variables is often done with correlation (e.g., exploring the risk of someone
getting high blood pressure in relation to their height and weight)
 Enrich the Dataset: Process of enhancing, refining, and improving raw data.
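The descriptive-statistics step above can be sketched with a Pearson correlation used to test a candidate variable. The weight and blood-pressure numbers are made up purely for illustration.

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient, used here to test whether a
    candidate input variable is significantly related to the outcome."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Illustrative (made-up) data: weight in kg vs. systolic blood pressure.
weight = [60, 65, 72, 80, 88, 95]
pressure = [112, 115, 121, 128, 135, 142]

print(f"mean weight = {mean(weight):.1f} kg")          # descriptive statistic
print(f"correlation = {pearson(weight, pressure):.3f}") # near 1: strong link
```

A correlation near +1 or −1 flags the variable as worth keeping as a feature; values near 0 suggest it contributes little to the prediction.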
 Build helpful visualizations:
 Visualization is the best way to explore and communicate findings and is the next phase of the
data analytics project.
 Graphs are another way to enrich the dataset and develop more interesting features.
 Data visualization helps to identify significant patterns and trends in the data. A better picture
of the data can be obtained through simple charts like line charts or bar charts, helping further
to understand the importance of the data.
 Get predictive:
 Machine learning algorithms help in getting insights and predicting future trends
 Reduce the dimensionality of the data set and select the features that contribute most to the
prediction of results.
 Interpret models and data. The predictive power of a model lies in its ability to generalize, i.e.,
the ability to generalize to unseen future data.
 Interpreting data refers to the presentation of data to a non-technical layperson. This results in
answering the business questions stated in the objective, together with the actionable insights
found through the data science process.
 Actionable insight is a key outcome of the data science process, i.e., predictive analytics and later
on prescriptive analytics (i.e., to repeat a positive result, or prevent a negative outcome).
 Iterate: To keep the data useful and accurate, constantly re-evaluate it, retrain the models, and
develop new features.
Reuse existing IoT Solutions & Reusability Plan:
Reusing existing components enables development of new solutions in less time and with less effort. One of the
main functionalities that can be reused relates to the Device Management layer and black-box reuse. Steps
involved:
 Identify the reusable asset
 Integrate the reusable asset: Ways to integrate the reusable asset into the new system:
 Black-box reuse: The assets are integrated in the new system simply as they are.
 White-box reuse: New functionalities are implemented on the existing components for
integrating them.
In IoT development, due to the large scale of operations when building applications it is important to
know from the beginning the reuse strategy for integrating components based on the types of
functionalities that can be reused.
 Evaluate the reusable asset: It is necessary to ensure that the quality of the reusable components does
not compromise the overall quality of the application and that they do not deteriorate the quality of the
new system. Reusable components should be:
 Extendible - to handle the variability of heterogeneous devices and technologies
 Flexible – to easily adapt to changes caused by the external environment and implement new
requirements
 Reusable - to allow further reuse in future IoT applications, saving time and effort
 Functional - to offer several functionalities through APIs.
 Understandable and Effective

Reusable Components: Cloud data storage, cloud service, sensors, controllers, IoT platform, computing devices.
MODULE 4
Requirements to Develop an IoT Project:
 Cloud computing: Cloud computing enables storage and processing of unstructured and structured data
into real-time information.
 Access: Another IoT requirement is its accessibility from anywhere and anytime.
 Security: Security is an important factor that forms a part of IoT requirements since confidential and
sensitive information is exchanged across the businesses.
 User experience: The more seamless the User Experience (UX), the greater the use of IoT systems.
 Smart machines: Smart machines form the basic components or the starting point from which all
connected things can be derived.
 Asset management: Managing assets through cloud-based services eases the functioning and
maintenance of IoT systems.
 Big Data analytics: Analysis of big data provides intelligent information, an ideal requirement for
industrial purposes.
A business requirement describes the resources required to meet the purpose, collected at a high level (exact
technical details are not required): for example, the need for a sensing medium or a wireless medium, without
specifying the exact sensor or wireless protocol to be used. Business requirements are collected by management;
exact technical details are provided by software and hardware engineers.
Approaches to gather Business Requirements:
Top 5 user requirements of IoT edge platforms:
1) Pick a platform with extensive protocol support for data ingestion: (Here, platform indicates an
embedded system, both software and hardware. Choose an appropriate platform for data ingestion.
ZeroMQ, for example, is an asynchronous messaging protocol that can transfer the same data to
multiple devices simultaneously.)
 To seamlessly bring data from devices into the edge platform, enterprises should choose leading
IoT platforms that support an extensive mix of protocols for data ingestion.
 The list of protocols for industrial-minded edge platforms generally includes brownfield
deployment staples such as OPC-UA, BACnet and Modbus as well as more current ones such
as ZeroMQ, Zigbee, BLE and Thread.
 Equally as important, the platform must be modular in its support for protocols, allowing
customization of existing and development of new means of asset communications.
2) Ensure the platform has robust capability for offline functionality:
 To ensure that the edge platform works when connectivity is down or limited, enterprises should
choose leading IoT edge platforms that provide capabilities in four functional areas.
 First, edge systems need to offer data normalization to successfully clean noisy sensor data.
 Second, these systems must offer storage to support intermittent (intermittent means irregular
connectivity; during outages there is no need to store all the data in local memory, since based on
conditions it is enough to store only the critical data with a timestamp, while the data can still be
monitored continuously), unreliable or limited connectivity between the edge and the cloud.
 Third, an edge system needs a flexible event processing engine at the edge making it possible to
generate insight from machine data when connectivity is constrained.
 Fourth, an IoT edge-enabled platform should integrate with systems including ERP, MES,
inventory management and supply chain management to help ensure business continuity and
access to real-time machine data.
 Double pagination: data is first stored in a local pipeline and then transferred to the remote side,
so that data loss can be identified and data comparison performed.
 E.g., a website's offline function: pages are stored in cache memory. The first visit takes time to
load; after that, thanks to the cache, pages load fast.
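The offline-buffering idea in the list above can be sketched as follows. The criticality threshold and the callback-style `send` interface are assumptions chosen for this illustration.

```python
from collections import deque

class EdgeBuffer:
    """Sketch of offline storage at the edge: when connectivity is down,
    only critical readings are kept locally with a timestamp; on reconnect
    the backlog is flushed so the cloud side can detect gaps and compare
    data. The criticality threshold is an assumed policy value."""

    def __init__(self, critical_threshold):
        self.critical_threshold = critical_threshold
        self.backlog = deque()

    def ingest(self, timestamp, value, connected, send):
        if connected:
            send((timestamp, value))                   # normal path to cloud
        elif value >= self.critical_threshold:
            self.backlog.append((timestamp, value))    # store critical data only

    def flush(self, send):
        while self.backlog:                            # connectivity restored
            send(self.backlog.popleft())

sent = []
buf = EdgeBuffer(critical_threshold=80)
buf.ingest(1, 75, connected=False, send=sent.append)  # dropped: not critical
buf.ingest(2, 91, connected=False, send=sent.append)  # buffered locally
buf.flush(send=sent.append)                           # reconnect: backlog flushed
print(sent)  # [(2, 91)]
```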
3) Make sure the platform provides cloud-based orchestration to support device lifecycle management:
 To make sure that the edge platform offers highly secure device management, enterprises should
select IoT platforms that offer cloud-based orchestration (orchestration here covers the device
lifecycle after development, e.g., how functionality is improved from version to version) for
provisioning (provisioning means providing or adding features such as access, authentication,
security and firewall settings), monitoring and updating of connected assets.
 Issues should be fixed and then monitored to check whether they recur.
 Leading IoT platforms provide factory provisioning capabilities for IoT devices.
 These API-based interactions allow a device to be preloaded with certificates, keys, edge
applications and an initial configuration before it is shipped to the customer.
 In addition, platforms should monitor the device using a stream of machine and operational data
that can be selectively synced with cloud instances.
 Finally, an IoT platform should push updates over-the-air to edge applications, the platform itself,
gateway OSs, device drivers and devices connected to a gateway.
4) The platform needs a hardware-agnostic scalable architecture (agnostic means interoperable, i.e., a
better fit across applications):
 Since there are tens of thousands of device types in the world, enterprises should select IoT
platforms that are capable of running on a wide range of gateways and specialized devices.
 These platforms should employ the same software stack at the edge and in the cloud, allowing
a seamless allocation of resources.
 Platforms should support IoT hardware powered by chips that use ARM-, x86-, and MIPS-based
architectures.
 Using containerization technologies and native cross-compilation, the platforms offer a
hardware-agnostic approach that makes it possible to deploy the same set of functionalities
across a varied set of IoT hardware without modifications.
 Cross compilation is different from native cross compilation.
 Examples of cross compilation: CPython, Jython.
 Example of native cross compilation: merging assembly code with embedded C code; the
embedded C code is first converted to hex code and then merged with the existing hex code.
5) Comprehensive analytics and visualization tools make a big difference:
 Enterprises should choose IoT platforms that offer out-of-the-box capabilities to aggregate data,
run common statistical analyses and visualize data.
 These platforms should make it easy to integrate leading analytics toolsets and use them to
supplement or replace built-in functionality. Different IoT platform users will require different
analyses and visualization capabilities.
 For example, a plant manager and a machine worker will want to access interactive dashboards
that deliver useful information and relevant controls for each of their respective roles.
 Having flexibility in analytics and visualization capabilities will be essential for enterprises as they
develop IoT solutions for their multiple business units and operations teams.
 Enterprises worldwide are using IoT to increase security, improve productivity, provide higher
levels of service and reduce maintenance costs.
Defining Problem Statements:
What is a Problem Statement?
 The issue (problem), stated clearly and with enough contextual detail to establish why it is
important.
 The method of solving the problem, often stated as a claim.
 The designer should understand the problem well before defining a potential solution.
 The problem definition should be a living document that can be revisited and updated whenever
necessary.
 Problem statement is a statement of a current issue or problem that requires timely action to improve
the situation.
 This statement concisely explains the barrier the current problem places between a functional process
and/or product and the current (problematic) state of affairs.
 This statement is completely objective, focusing only on the facts of the problem and leaving out any
subjective opinions.
 To make this easier, it's recommended that you ask who, what, when, where and why to create the
structure for your problem statement. This will also make it easier to create and read, and makes the
problem at hand more comprehensible and therefore solvable.
 The problem statement, in addition to defining a pressing issue, is a lead-in to a proposal of a timely,
effective solution.
Questions to ask (that help define a problem statement):
 What problem are we trying to solve?
 How do we know this is a real problem?
 Why is it important to solve?
 Who are our users?
 How will we know if we’ve solved the problem?
Defining Problem Statements:
 Start with “How might we…”, or “What can we do to…” type of questions.
 Frame according to specific users (User-centered approach)
 The 5 ‘W’s — Who, What, Where, When and Why
 Asking a lot of “why’s” (on both failures and successes) helps you dive deeper into the problem
Why is a Problem Statement important?
 A problem statement is a communication tool. Problem statements are important to businesses,
individuals and other entities to develop projects focused on improvement.
 Whether the problem pertains to badly needed road work or the logistics of an island construction
project, a clear, concise problem statement is typically used by a project's team to help define and
understand the problem and develop possible solutions.
 These statements also provide important information that is crucial in decision-making in relation to
these projects or processes.
 Problem statements have multiple purposes:
 The problem statement has other purposes, too. One is to identify and explain the problem in a
concise but detailed way to give the reader a comprehensive view of what's going on. This
includes identifying who the problem impacts, what the impacts are, where the problem occurs
and why and when it needs to be fixed.
 Another purpose of the problem statement is to clarify what the expected outcomes are.
Establishing what the desired situation would look like helps provide an overarching idea about
the project. The proposed solution and scope and goals of the solution are made clear through
this statement.
 Problem statements help guide projects:
 The problem statement provides a guide for navigating the project once it begins.
 It is continually referenced throughout the duration of the project to help the team remain
focused and on track.
 Near the completion of the project, this statement is again referred to in order to verify the
solution has been implemented as stated and that it does indeed solve the initial problem.
 This can help in making sure that proper steps are being taken to prevent the same problem from
happening again in the future.
 Bear in mind that the problem statement does not attempt to define the solution, nor does it
outline the methods of arriving at the solution.
 The problem statement is a statement that initiates the process by recognizing the problem.
Benefits of Problem Statement:
 Identify and explain the problem in a concise but detailed way.
 The problem statement provides a guide for navigating the project once it begins.
 It is continually referenced throughout the duration of the project to help the team remain focused and
on track.
 Near the completion of the project, this statement is again referred to in order to verify the solution has
been implemented as stated and that it does indeed solve the initial problem.
 The proposed solution and scope and goals of the solution are made clear through this statement.
 The problem statement is a statement that initiates the process by recognizing the problem. It is a tool
to gain support and approval of the project from management and stakeholders.
How to write a Problem Statement – As a Document: (IN DETAIL – LINK IN WHATSAPP)
A problem statement is a tool used to gain support and approval of the project from management and
stakeholders. As such, it must be accurate and clearly written. There are a few key elements to keep in mind
when crafting a problem statement that can have a positive impact on the outcome of the project.
 Describe how things should work
 Explain the problem and state why it matters
 Explain your problem's financial costs
 Back up your claims
 Propose a solution
 Explain the benefits of your proposed solution(s)
 Conclude by summarizing the problem and solution
Other methods for defining problem statements: Affinity Diagrams, Empathy Mapping
Affinity Diagram:
 The Affinity Diagram is a method which can help you gather large amounts of data and organise them
into groups or themes.
 Affinity: a friendly discussion, like a group activity. Divide a problem into small modules and think
through the possible risks/issues by diving deep into each. Write the issues on sticky notes and paste up
the small modules. Colour indicates the level of risk (high, medium or low); dots indicate the number of
votes. Based on that, higher priority is given to the problem statement.
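The colour-and-dot prioritisation of an affinity diagram can be expressed as a simple sort. This is a sketch under invented data: the sticky-note issues, risk colours and vote counts are all hypothetical.

```python
# Illustrative sketch of affinity-diagram prioritisation: each sticky note
# carries a risk colour (high/medium/low) and a count of dot votes, and the
# highest-risk, most-voted issues rise to the top. All notes are invented.
RISK_WEIGHT = {"high": 3, "medium": 2, "low": 1}

notes = [
    {"issue": "Gateway firmware outdated", "risk": "high",   "votes": 5},
    {"issue": "Sensor battery drain",      "risk": "medium", "votes": 7},
    {"issue": "Dashboard colour scheme",   "risk": "low",    "votes": 2},
    {"issue": "Unencrypted MQTT traffic",  "risk": "high",   "votes": 8},
]

# Sort by risk weight first, then by number of votes, both descending.
prioritised = sorted(
    notes,
    key=lambda n: (RISK_WEIGHT[n["risk"]], n["votes"]),
    reverse=True,
)

for rank, note in enumerate(prioritised, start=1):
    print(rank, note["issue"])
```

Sorting on the (risk, votes) tuple means colour always outranks vote count, matching the note above that colour indicates risk level and dots indicate voting.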
Empathy Mapping:
 An empathy map is a collaborative visualization used to articulate what we know about a particular type
of user.
 Empathy mapping: empathy means humanity. It is like an interview process. Major brand owners have
their own companies for doing empathy mapping. E.g., Mercedes-Benz owns companies in China, India
and Germany for doing empathy mapping.
 Direct/virtual interviews happen in service centres, with interview questions of a good standard. From
the customers' answers, they improve the product's features or sometimes remove unwanted features.
Empathy mapping acts as an interface between the product owner and customers. Empathy mapping
can be done before or after the product launch. The user giving the interview is called a persona.
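An empathy map is often organised into says/thinks/does/feels quadrants for each persona. The sketch below shows one way to capture that as data; the persona, quotes and quadrant entries are entirely hypothetical.

```python
# Illustrative data sketch of an empathy map for one persona, organised into
# the common says / thinks / does / feels quadrants. The persona and all
# quotes are hypothetical.
empathy_map = {
    "persona": "Fleet maintenance manager",
    "says":   ["I need alerts before a machine fails."],
    "thinks": ["The current dashboard hides the data I care about."],
    "does":   ["Checks sensor readings manually every morning."],
    "feels":  ["Frustrated by unplanned downtime."],
}

# Each quadrant can feed directly into decisions about which product
# features to improve or remove.
for quadrant in ("says", "thinks", "does", "feels"):
    print(quadrant.upper(), "->", empathy_map[quadrant][0])
```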
Business Requirements for Use Case Development:
10 steps to a successful business case for IoT (IN DETAIL & 4 IoT BUSINESS CASE STRATEGIES WITH EXAMPLES
– LINK IN WHATSAPP)
 Recognize the need for a business case
 Start on the shop floor (organisation)
 Identify meaningful data
 Employ predictive analytics
 Track your products and assets
 Create new revenue models
 Move from drawing board to reality
 Choose the right IoT platforms and partners
 Build a proof of concept
 Rollout at scale
Assets for Development of IoT Solutions:
Asset Tracking (MORE ABOUT THIS – LINKS IN WHATSAPP)
 Asset tracking is the process of tracking a physical asset (which can be human or equipment) within a
manufacturing facility to identify its location accurately and utilize it to the fullest.
 Nevertheless, effective asset management is something most manufacturers consider challenging due
to a lack of a digital, centralized place to track and monitor their asset utilization.
Smart Technologies behind Asset Tracking:
 Barcodes
 Radio Frequency Identification (RFID)
 NFC (Near Field Communication)
 GPS (Global Positioning System)
 Bluetooth Low Energy (BLE)
 Internet of Things (IoT)
4 Key Features the Best Asset Tracking Systems Include:
 Asset tracking analytics
 Asset tracking reporting
 Asset tracking alerts
 Asset depreciation tracking
IoT-Enabled Asset Tracking: Track assets in real-time throughout the manufacturing factory
 How does the IoT enable mobile asset tracking?
 Monitor equipment even in locations where humans can’t intervene
 Using IoT in Human Asset Management
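Real-time asset tracking often reduces to checking each location update against a zone and raising alerts for assets outside it. The sketch below assumes a simple rectangular facility zone; the asset IDs, coordinates and zone bounds are hypothetical, and a real system would receive updates from RFID, BLE or GPS tags rather than a fixed list.

```python
# Hedged sketch of an IoT asset-tracking alert: each location update is
# checked against a rectangular facility zone, and assets reported outside
# it raise an alert. Asset IDs and coordinates are invented.
ZONE = {"x_min": 0.0, "x_max": 100.0, "y_min": 0.0, "y_max": 50.0}

def in_zone(x: float, y: float) -> bool:
    """Return True if the point (x, y) lies inside the facility zone."""
    return (ZONE["x_min"] <= x <= ZONE["x_max"]
            and ZONE["y_min"] <= y <= ZONE["y_max"])

# Simulated location updates: (asset_id, x, y).
updates = [
    ("forklift-7", 42.0, 18.5),
    ("pallet-103", 120.3, 12.0),   # outside the zone -> should alert
    ("crane-2", 99.9, 49.0),
]

alerts = [asset for asset, x, y in updates if not in_zone(x, y)]
print("alerts:", alerts)
```

The same check works regardless of which positioning technology (barcode scan points, RFID readers, BLE beacons or GPS) supplies the coordinates.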
MODULE 5
Value Engineering (VE):
 VE is an organized/systematic approach directed at analyzing the function of systems, equipment,
facilities, services, and supplies for the purpose of achieving their essential functions at the lowest life-
cycle cost consistent with required performance, reliability, quality, and safety.
 The implementation of the VE process on a problem typically increases performance, reliability, quality,
safety, durability, effectiveness, or other desirable characteristics.
 Value (V) = performance (f)/cost (c)
 VE isn’t a cost-cutting technique; instead, it is a systematic functional approach to maximize the
performance.
 Cost here refers to the product lifecycle cost (not on initial and other costs). (Cost analysis techniques
are different from VE.)
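The V = performance / cost ratio can be used to compare design alternatives on life-cycle cost. The sketch below is a minimal illustration; the sensor names, performance scores and cost figures are invented, and in practice both quantities would come from measured requirements and life-cycle cost analysis.

```python
# Minimal sketch of the value-engineering ratio V = performance / cost,
# comparing hypothetical design alternatives by life-cycle cost.
# All figures are invented for illustration.
alternatives = [
    {"name": "Sensor A", "performance": 80.0, "lifecycle_cost": 40.0},
    {"name": "Sensor B", "performance": 90.0, "lifecycle_cost": 60.0},
    {"name": "Sensor C", "performance": 70.0, "lifecycle_cost": 25.0},
]

# Compute the value ratio for each alternative.
for alt in alternatives:
    alt["value"] = alt["performance"] / alt["lifecycle_cost"]

# The best-value option maximises performance per unit of life-cycle cost --
# note it is not necessarily the highest-performance or lowest-cost option.
best = max(alternatives, key=lambda a: a["value"])
print("best value:", best["name"], round(best["value"], 2))
```

This illustrates why VE is not simply cost-cutting: Sensor B has the highest performance and Sensor C the lowest cost, but the ratio, not either number alone, decides the outcome.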
Why VE?
 Scarcity of materials was the starting point during World War 2.
 Now, effective utilization of materials and other key resources makes it better resource management.
 Ex: temperature and humidity sensors (H/W), the number of gateways for a smart home (H/W). Is this
applicable to IoT and its S/W too?
Phases of VE:
IoT Reference Framework:
How can VE be applied to S/W Models?
 Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop
and test high-quality software.
 The SDLC aims to produce high-quality software that meets or exceeds customer expectations and
reaches completion within time and cost estimates.