
SDA ASSIGNMENT NO 1

SUBMITTED BY: Mohsin Aslam


SECTION: BCSE 6B
REGISTRATION NO: F171BCSE186
Q1. Explain the components of the analysis model and their different functions

 The analysis model operates as a link between the 'system description' and the 'design model'.
 In the analysis model, the information, functions, and behavior of the system are defined, and these are translated into the architecture, interface, and component-level designs during 'design modeling'.

Elements of the analysis model

1. Scenario-based elements

 These elements represent the system from the user's point of view.
 Scenario-based elements include the use case diagram and user stories.
2. Class-based elements
 The objects of this type of element are manipulated by the system.
 They define the objects, their attributes, and their relationships.
 Collaboration occurs between the classes.
 Class-based elements include the class diagram and the collaboration diagram.
3. Behavioral elements
 Behavioral elements represent the state of the system and how it is changed by external events.
 Behavioral elements include the sequence diagram and the state diagram.
4. Flow-oriented elements
 As information flows through a computer-based system, it gets transformed.
 Flow-oriented elements show how data objects are transformed as they flow between the various system functions.
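As a hedged sketch (the Customer and Order names are hypothetical, not from the text), the class-based elements above can be expressed in code: objects with attributes, relationships between them, and collaboration between classes:

```python
# Hypothetical sketch of class-based analysis elements: each class
# lists its attributes, and references between objects capture the
# relationships a class diagram would show.

class Customer:
    """An object with attributes, as a class-based element defines."""
    def __init__(self, name: str, email: str):
        self.name = name
        self.email = email
        self.orders = []          # relationship: a Customer "places" Orders

class Order:
    def __init__(self, order_id: int, customer: Customer):
        self.order_id = order_id
        self.customer = customer  # relationship back to the owning Customer
        customer.orders.append(self)

# Collaboration between the classes: a Customer object and an
# Order object cooperate through their relationship.
alice = Customer("Alice", "alice@example.com")
order = Order(1, alice)
print(len(alice.orders))  # → 1
```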

Analysis Rules of Thumb

The following rules of thumb should be followed while creating the analysis model:


 The model should focus on requirements in the business domain. The level of abstraction must be high; there is no need to give implementation details.
 Every element of the model should help in understanding the software requirements and focus on the information, function, and behavior of the system.
 Consideration of infrastructure and other nonfunctional models is delayed until design. For example, a database may be required for a system, but the classes, functions, and behavior of the database are not needed during analysis; considering them too early delays the design.
 Minimum coupling is required throughout the system. The degree of interconnection between modules is known as 'coupling'.
 The analysis model should give value to all the stakeholders related to the model.
 The model should be as simple as possible, because a simple model helps everyone understand the requirements easily.
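The "minimum coupling" rule can be illustrated with a hedged sketch (the module names are hypothetical): the second class depends only on one narrow interface of the first, not on its internals:

```python
# Loosely coupled modules: ReportPrinter depends only on the
# published fetch_rows() interface, never on Database internals,
# so either side can change without breaking the other.

class Database:
    def __init__(self):
        self._rows = [("a", 1), ("b", 2)]   # internal representation

    def fetch_rows(self):
        """Narrow, published interface; the only coupling point."""
        return list(self._rows)

class ReportPrinter:
    def __init__(self, source):
        self.source = source                # any object with fetch_rows()

    def render(self):
        return ", ".join(f"{k}={v}" for k, v in self.source.fetch_rows())

print(ReportPrinter(Database()).render())  # → a=1, b=2
```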

Concepts of data modeling

 Analysis modeling starts with data modeling.

 The software engineer defines all the data objects that are processed within the system, and the relationships between the data objects are identified.
Data objects
 The data object is a representation of composite information.
 Composite information means that an object has a number of different properties or attributes.
For example, height is a single value, so it is not a valid data object; but 'dimensions', which contains the height, the width, and the depth, can be defined as a data object.
Data Attributes
Each data object has a set of attributes.
The attributes of a data object do the following:
 Name an instance of the data object.
 Describe the instance.
 Make reference to an instance in another table.
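The height/dimensions example above can be sketched as code; this is an illustrative sketch, not notation from the text:

```python
from dataclasses import dataclass

# 'height' alone is a single value, so it is not a data object.
# 'Dimensions' groups height, width, and depth, so it qualifies
# as a data object with a set of attributes.

@dataclass
class Dimensions:
    height: float   # attribute: describes one property of the instance
    width: float
    depth: float

box = Dimensions(height=2.0, width=1.5, depth=0.5)
print(box.height)   # one attribute of the composite data object
```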

SDA ASSIGNMENT NO 2
Q1. Explain the 10 design principles of software design.

1. The design process should not suffer from ‘tunnel vision’, i.e., limited vision. A good designer considers alternative approaches rather than focusing on only one portion of the problem.
2. The design should be traceable to the analysis model, so the designer should keep a record of how design elements map back to the requirements.
3. The design should not ‘reinvent the wheel’. Existing design patterns and components should be reused rather than recreated from scratch.
4. The requirements of both psychology and software engineering must be fulfilled. A psychologist will take users’ feedback while a software engineer will focus on the design. When we work in other domains, we have little knowledge about them, so we take help from domain experts to minimize the distance between the two areas.
5. The design should be consistent and integrated. Our design should not be unstable, and its different areas should fit together as a whole.
6. The design should be structured to accommodate change: if the designer needs to change something, the design should be able to absorb that change.
7. The design should be capable of degrading gently, even when some abnormal behavior occurs.
8. Design and coding are two different activities. Design is the description of the logic used in solving the problem; coding is the implementation of that design in a specific language.
9. The design’s quality should be assessed while it is being created, not after its creation.
10. The design should have minimal conceptual errors. It must be ensured that major conceptual errors of design, such as ambiguity and inconsistency, are addressed before dealing with the syntactical errors present in the design model.
SDA ASSIGNMENT NO 3

Q.1 Describe the three main factors influencing software design


Answer:
• Firmness: A program should not have any bugs that inhibit its function.
• Commodity: A program should be suitable for the purposes for which it was
intended.
• Delight: The experience of using the program should be a pleasurable one.

Q.2 Explain the four factors which affect the software design model
Answer:
1. Data/Class Design
2. Architectural Design
3. Interface Design
4. Component-Level Design
Data/Class Design:
• The data/class design transforms class models into design class realizations and the
requisite data structures required to implement the software.
• The objects and relationships defined in the Class-Responsibility-Collaborator (CRC)
diagram and the detailed data content depicted by class attributes and other notation
provide the basis for the data design action.
Architectural Design:
• The architectural design defines the relationship between major structural elements of the
software, the architectural styles and design patterns that can be used to achieve the
requirements defined for the system, and the constraints that affect the way in which
architecture can be implemented.
• The architectural design representation—the framework of a computer-based system—is
derived from the requirements model.
Interface Design:
• The interface design describes how the software communicates with systems that
interoperate with it, and with humans who use it.
• An interface implies a flow of information (e.g., data and/or control) and a specific type
of behavior.
Therefore, usage scenarios and behavioral models provide much of the information required for
interface design.

Component-Level Design:
• The component-level design transforms structural elements of the software architecture
into a procedural description of software components.
• Information obtained from the class-based models, flow models, and behavioral models
serve as the basis for component design.

Q.3 Explain the quality guidelines needed to architect software


Answer:
• A design should exhibit an architecture that (1) has been created using recognizable
architectural styles or patterns, (2) is composed of components that exhibit good design
characteristics, and (3) can be implemented in an evolutionary fashion.
• For smaller systems, design can sometimes be developed linearly.
• A design should be modular; that is, the software should be logically partitioned into
elements or subsystems.
• A design should contain distinct representations of data, architecture, interfaces, and
components.
• A design should lead to data structures that are appropriate for the classes to be
implemented and are drawn from recognizable data patterns.
• A design should lead to components that exhibit independent functional characteristics.
• A design should lead to interfaces that reduce the complexity of connections between
components and with the external environment.
• A design should be derived using a repeatable method that is driven by information
obtained during software requirements analysis.
• A design should be represented using a notation that effectively communicates its
meaning.

Q.4 What design principles are followed for constructing software?


Answer:
• The design process should not suffer from tunnel vision.
• The design should be traceable to the analysis model.
• The design should not reinvent the wheel.
• The design should “minimize the intellectual distance” between the software and the
problem as it exists in the real world.
• The design should exhibit uniformity and integration.
• The design should be structured to accommodate change.
• The design should be structured to degrade gently, even when aberrant data, events, or
operating conditions are encountered.
• Design is not coding, coding is not design.
• The design should be assessed for quality as it is being created, not after the fact.
• The design should be reviewed to minimize conceptual (semantic) errors.

Q.5 Explain the “Open the door” phenomenon in terms of stepwise software design refinement
Answer:
An example of a procedural abstraction would be the word open for a door. Open implies a long
sequence of procedural steps (e.g., walk to the door, reach out and grasp knob, turn knob and
pull door, step away from moving door, etc.).
A data abstraction is a named collection of data that describes a data object.
In the context of the procedural abstraction open, we can define a data abstraction called door.
Like any data object, the data abstraction for door would encompass a set of attributes that describe the door (e.g., door type, swing direction, opening mechanism, weight, dimensions).
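The open/door example can be sketched as code: open_door is a procedural abstraction that names a long sequence of steps, and Door is a data abstraction bundling the attributes listed above (the attribute values are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Door:
    # Data abstraction: a named collection of attributes describing a door.
    door_type: str
    swing_direction: str
    opening_mechanism: str
    weight_kg: float

def open_door(door: Door) -> list:
    # Procedural abstraction: 'open' names a long sequence of steps,
    # some of which depend on the door's attributes.
    return [
        "walk to the door",
        f"reach out and grasp the {door.opening_mechanism}",
        f"turn and {'pull' if door.swing_direction == 'inward' else 'push'}",
        "step away from the moving door",
    ]

front = Door("panel", "inward", "knob", 25.0)
for step in open_door(front):
    print(step)
```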
SDA ASSIGNMENT NO 4

Q.1 Explain the system’s decomposition stages into subsystems

System Decomposition
System decomposition begins by decomposing the system into cohesive, well-defined subsystems. Subsystems are then decomposed into cohesive, well-defined components, and components are then decomposed into cohesive, well-defined sub-components.
In fact, there is no important distinction between system, sub-system, component, and sub-component, so the above process can be reduced to a simpler iterative one: repeatedly decompose each element into cohesive, well-defined parts until the parts are simple enough to implement.
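Since there is no important distinction between system, subsystem, and component, the iterative decomposition can be sketched with a uniform composite structure (the element names are hypothetical):

```python
# Each element is either a leaf or is decomposed into cohesive,
# well-defined parts; the same structure applies at every level.

class Element:
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []   # empty list: not decomposed further

    def leaves(self):
        """Walk the decomposition down to its simplest elements."""
        if not self.parts:
            return [self.name]
        result = []
        for part in self.parts:
            result.extend(part.leaves())
        return result

system = Element("billing system", [
    Element("invoicing subsystem", [
        Element("tax component"),
        Element("pdf component"),
    ]),
    Element("payments subsystem"),
])
print(system.leaves())  # → ['tax component', 'pdf component', 'payments subsystem']
```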

Q.2 Name and explain the eight design issues in software design.
A number of key issues must be dealt with when designing software. Some are quality concerns that all software must address—for example, performance, security, reliability, usability, etc. Another important issue is how to decompose, organize, and package software components. This is so fundamental that all design approaches address it in one way or another (see Software Design Principles, Software Design Strategies and Methods). In contrast, other issues “deal with some aspect of software’s behavior that is not in the application domain, but which addresses some of the supporting domains”. Such issues, which often crosscut the system’s functionality, have been referred to as aspects, which “tend not to be units of software’s functional decomposition, but rather to be properties that affect the performance or semantics of the components in systemic ways”. A number of these key, crosscutting issues are discussed in the following sections (presented in alphabetical order).
2.1 Concurrency
Design for concurrency is concerned with decomposing software into processes, tasks, and
threads and dealing with related issues of efficiency, atomicity, synchronization, and scheduling.
2.2 Control and Handling of Events
This design issue is concerned with how to organize data and control flow as well as how to
handle reactive and temporal events through various mechanisms such as implicit invocation
and call-backs.
2.3 Data Persistence
This design issue is concerned with how to handle long-lived data.
2.4 Distribution of Components
This design issue is concerned with how to distribute the software across the hardware
(including computer hardware and network hardware), how the components communicate, and
how middleware can be used to deal with heterogeneous software.
2.5 Error and Exception Handling and Fault Tolerance
This design issue is concerned with how to prevent, tolerate, and process errors and deal with
exceptional conditions.
2.6 Interaction and Presentation
This design issue is concerned with how to structure and organize interactions with users as
well as the presentation of information (for example, separation of presentation and business
logic using the Model-View-Controller approach). Note that this topic does not specify user
interface details, which is the task of user interface design (see topic 4, User Interface Design).
2.7 Security
Design for security is concerned with how to prevent unauthorized disclosure, creation,
change, deletion, or denial of access to information and other resources. It is also
concerned with how to tolerate security-related attacks or violations by limiting damage,
continuing service, speeding repair and recovery, and failing and recovering securely.
Access control is a fundamental concept of security, and one should also ensure the
proper use of cryptology.

Q.3 Explain the system issues and their relationship with the system design

A process as complex as product software development comes with its own set of challenges – challenges that you might encounter every day, and that need to be addressed almost immediately to reduce the impact they have on your end product.

So, we’ve identified the biggest challenges for software product companies, but what can you do to overcome them?

Challenge 1: Project Infrastructure


Problem: An unestablished project environment is always a common challenge in terms of its impact on project delivery. If the environment is not available, there is no way you can deliver your project on time and under budget.

Solution: To ensure efficient project development, test and pre-production environments should be made available during the development, testing, and user acceptance testing (UAT) phases. Invest in a solid IT infrastructure upfront to create a better software development environment.

Challenge 2: Development Expectations and Outcome


Problem: A major reason for the complexity of software projects is the constant changing of requirements. Not surprisingly, 33% of the respondents to the Stack Overflow Developer Survey consider building products with unspecific requirements their biggest challenge. Requirements gathering is a lot more than a handful of business consultants coming up with their ideal product – it is understanding fully what a project will deliver.

Solution: To ensure that the product outcomes align with expectations and requirements, a solid process and line of communication need to be established. Remember the following best practices:

 Define and agree on the scope of the project
 Don’t assume end-user needs and requirements
 Communicate the needs and expectations between the development and ideation teams
 Involve users from the start of existing product refurbishment
 Consider UX from the start of new product development
 Create a clear, concise and thorough requirements document and confirm your understanding of the requirements
 Create a prototype to confirm and/or refine final agreed-upon requirements

Challenge 3: Quality Assurance


Problem: Skipping code reviews or suppressing errors are shortcuts that developers use to save time and meet deadlines.

Solution: Following a formal quality assurance process is imperative for a successful launch. If you witness developers trying to cut corners in the development process, discourage it immediately. Encourage them to use best code development practices to meet the requirements sooner and more efficiently.

Challenge 4: Undefined Quality Standards

Problem: Defect identification is inevitable during functionality testing, even if the product has been through thorough unit testing during the development phase.

Solution: When you come up with the test approach, scenarios, conditions, cases, and scripts, make sure your test plan covers all the requirements to be delivered by planning several cycles of testing.

Challenge 5: Adapting to the Latest Market Trends

Problem: Catering to the latest technology requirements such as mobile-first, mobile-only, or desktop-first is often challenging. If you don’t have resources with hands-on experience in the latest and trending technologies, it is sure to impact your time to market.
Solution: Make sure your resources constantly polish their skills to remain
relevant. This means staying up-to-date on market trends and
exploring insights into the new technology and trends that are out there.

Challenge 6: Design Influences


Problem: Product designs are under constant influence from stakeholders,
the development organization, and other internal and external factors.
Managing these influences is essential for maximizing the quality of
systems and their related influence on future business opportunities. The
increase of easily accessible, simple applications has resulted in user
expectations growing exponentially.

Solution: Make sure you streamline your design and offer a consistent experience across devices, operating systems, and form factors.

Challenge 7: System & Application Integration


Problem: There are thousands of different technologies, systems, and applications available for businesses. Integrating third-party or other custom applications, such as your ERP systems, website, or inventory management database, adds substantial complexity to your project. And the bigger challenge with integration is that such issues remain hidden throughout the development process and surface only at the end, leading to extra costs, delays, lowered quality, and sometimes even failure of the project.

Solution: To conform your software solution to the external constraints of other systems, you should:

 Get a clear understanding of end-user requirements
 Implement an enterprise-wide framework for the platform structure of the application
 Discover and research new technologies
 Design and develop new solutions
 Test and evaluate ideas to ensure optimum integration
 Pay extra attention to research and development, testing, and prototyping
 Test, test, and test again before deploying the solution

Challenge 8: Project Management


Problem: Very often multi-tasking might give you more trouble than
expected. Resources cannot focus on a single task or module if their
manager bombards them with tasks.
“To be successful in project management you absolutely have to be an
excellent planner,” says Ryan Chan, founder and CEO of UpKeep
Maintenance Management.

Solution: One obvious way to be an excellent planner is to leverage project management tools like Project Pro in O365 to keep projects, resources, and teams organized and on track. Stay on track, meet all deadlines, work seamlessly across applications, and efficiently and effortlessly manage your projects. Keep task allocation sequential rather than parallel and encourage resources to give their best in whatever they do.

Challenge 9: Test Environment Duplication


Problem: Testing a software system in a controlled environment is difficult
since the user is not immersed in a completely realistic working situation.
It’s impractical to gauge how a user will really use the application in
different situations on a regular basis until it’s deployed. However, with
software applications for both B2B and B2C segments becoming more and
more diversified than in the past, controlled testing is not sufficient.

Solution: Testing the software, application, or product in a separate, real-life test environment is critical to your software’s success. This will let you see what works well and what works poorly in real-life use rather than in a vacuum.

Challenge 10: Security Infrastructure


Problem: Security breaches are on the rise; a recent study estimates that
96% of all web applications contain at least one serious vulnerability. How
do you cope with evolving security threats? How do you keep each layer of
your software or application secure?

Solution: Security is not just the responsibility of the software engineer but also of all the stakeholders involved, including management, project managers, business analysts, quality assurance managers, technical architects, and application developers. If you want to keep your infrastructure and company safe, remember the following best practices:

 Look beyond technology to improve the security of your software
 Develop software using high-level programming languages with built-in security features
 Require security assurance activities such as penetration testing and code review
 Perform essential core activities to produce secure applications and systems

Q.4 Illustrate the client, end user, developer relationship as stakeholders in a software design process.

Client: The client is the person or organization that commissions the software, pays for it, and specifies what the system must do.
End User: The end user is the person who will use your application to get benefit from it and, in return, becomes a source of income for you.
Developer: Developers are the people who will maintain your product and check regularly whether anything is required or changes are needed in your system to make it better and more user-friendly for the clients.

Q.5. What are the six general trade-off issues in a software design process?

Conceptual design involves a series of trade-off decisions among significant parameters - such as operating speeds, memory size, power, and I/O bandwidth - to obtain a compromise design which best meets the performance requirements. Both the uncertainty in these requirements and the important trade-off factors should be ascertained. The factors which can be used to evaluate the design trade-offs (usually on a qualitative basis) include:

 Reliability
 Expandability
 Programmability
 Maintainability
 Compatibility
 Adaptability
 Availability
 Development Status and Cost
Q.6 Map a network packet communication example in terms
of software-modular based communication.
Network mapping is the study of the physical connectivity of networks, e.g., the Internet. Network mapping discovers the devices on the network and their connectivity. It is not to be confused with network discovery or network enumeration, which discovers devices on the network and their characteristics, such as operating system, open ports, listening network services, etc. The field of automated network mapping has taken on greater importance as networks become more dynamic and complex in nature.
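To map packet communication onto module-based communication, a hedged sketch (all names hypothetical): each module exposes a receive() interface, and a 'packet' (header fields plus payload) is passed between modules through a router the way a message crosses module boundaries:

```python
from dataclasses import dataclass

# A 'packet' is just structured data; modules communicate by passing
# it through a well-defined interface instead of sharing internals.

@dataclass
class Packet:
    source: str
    destination: str
    payload: str

class Module:
    def __init__(self, name):
        self.name = name
        self.inbox = []
    def receive(self, packet: Packet):
        self.inbox.append(packet)   # the module's only entry point

class Router:
    """Delivers packets to the module named in the destination field."""
    def __init__(self):
        self.modules = {}
    def register(self, module: Module):
        self.modules[module.name] = module
    def send(self, packet: Packet):
        self.modules[packet.destination].receive(packet)

router = Router()
a, b = Module("A"), Module("B")
router.register(a)
router.register(b)
router.send(Packet("A", "B", "hello"))
print(b.inbox[0].payload)  # → hello
```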

Q.7 Explain the following terms in details


Sub-routine: Subroutines may be defined within a program, or a set of subroutines may be packaged together in a library. Libraries of subroutines may be used by multiple programs, and most languages provide some built-in library functions. The C language has a very large set of functions in the C standard library, and all of them are available to any program that has been linked with it. Even assembly programs can make use of this library; linking is done automatically when the program source is assembled. All that the programmer needs to know is the name of the function and how to pass parameters.
Co-routines: A coroutine is a general control structure in which flow control is cooperatively passed between two different routines without returning. Coroutines are computer program components that generalize subroutines for non-preemptive multitasking by allowing execution to be suspended and resumed.

Coroutines are well-suited for implementing familiar program components such as cooperative tasks, exceptions, event loops, iterators, and infinite lists.

Why Coroutines Are Required

To read a file and parse it into meaningful data, one can either process it step by step, line by line, or load the entire content into memory, which is not recommended for large inputs, e.g., in text editors like Microsoft Word. Coroutines are needed when the strict stack discipline of subroutines must be abandoned, and also when we want things to run concurrently, i.e., non-preemptive multitasking.

How Coroutines Work

Because coroutines are rescheduled only at specific points in the program and do not execute concurrently, programs using coroutines can avoid locking entirely. This is also considered a benefit of event-driven or asynchronous programming.

Coroutines are similar to threads, but one main difference is that threads are typically preemptively scheduled while coroutines are not: threads can be rescheduled at any instant and can execute concurrently, while coroutines are rescheduled only at specific points and do not execute concurrently.
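The difference described above can be sketched with Python generators, which suspend and resume only at explicit yield points, i.e., cooperative rather than preemptive scheduling:

```python
# Each coroutine yields control voluntarily; a round-robin scheduler
# resumes it at those explicit points. Nothing runs concurrently,
# so no locks are needed.

def worker(name, steps, log):
    for i in range(steps):
        log.append(f"{name}{i}")
        yield              # explicit point where control is given up

def run(coroutines):
    queue = list(coroutines)
    while queue:
        coro = queue.pop(0)
        try:
            next(coro)     # resume until the coroutine's next yield
            queue.append(coro)
        except StopIteration:
            pass           # coroutine finished; drop it

log = []
run([worker("A", 2, log), worker("B", 3, log)])
print(log)  # → ['A0', 'B0', 'A1', 'B1', 'B2']
```

Note how A and B interleave deterministically at the yield points, unlike preemptively scheduled threads whose interleaving is unpredictable.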

Use Cases of Coroutines

o State Machines
o Actor Model
o Generators
o Communicating Sequential Processes
o Reverse Communication
State Machines: It is useful to implement state machines within a single subroutine, where the state is determined by the current entry or exit point of the procedure. This results in more readable code compared to the use of goto.

Actor Model: Coroutines are very useful for implementing the actor model of concurrency. Each actor has its own procedures, but they give up control to the central scheduler, which executes them sequentially.

Generators: Coroutines are useful for implementing generators, which are useful for streams, particularly input/output, and for traversal of data structures.

Communicating Sequential Processes: Coroutines are useful for implementing communicating sequential processes, where each sub-process is a coroutine. Channel input/output and blocking operations cause a coroutine to yield, and a scheduler unblocks it on completion events.

Reverse Communication: Coroutines are useful for implementing reverse communication, which is commonly used in mathematical software, where a procedure needs the calling process to make a computation.

Comparison with Subroutines and Threads

Subroutines are special cases of coroutines. When a subroutine is called, execution begins at the start, and once the subroutine exits, it is finished; an instance of a subroutine only returns once and does not hold state between invocations.

Coroutines, in this respect, are very similar to threads. However, coroutines are cooperatively multitasked while threads are preemptively multitasked. This means that coroutines provide concurrency, but not parallelism.

Benefits of Coroutines

1. Useful in asynchronous programming.
2. Support functional programming techniques.
3. Useful where support for true parallelism is poor.
4. Provide cooperative (non-preemptive) scheduling.
5. Keep the system’s utilization high.
6. Require fewer resources than threads.
7. Make resource locking less necessary.
8. Increase locality of reference.

Event-based style: What is event-based architecture? Event-based architecture is an architectural style that uses the production and consumption of events to control behaviour. When a specific component has finished running, instead of directly calling a function of another component to trigger additional actions, it produces and propagates an event via an event bus to other components, which then react by performing some actions.
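The event-based style described above can be sketched with a minimal in-process event bus (all names hypothetical): components subscribe to event types, and a producer publishes events instead of calling the consumers directly:

```python
# Minimal event bus: producers publish named events; subscribers
# react without the producer knowing who they are.

class EventBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        for handler in self.handlers.get(event_type, []):
            handler(payload)

bus = EventBus()
received = []

# Two independent components react to the same event.
bus.subscribe("order_placed", lambda order: received.append(f"bill {order}"))
bus.subscribe("order_placed", lambda order: received.append(f"ship {order}"))

# The producer only publishes; it never calls billing or shipping directly.
bus.publish("order_placed", "#42")
print(received)  # → ['bill #42', 'ship #42']
```

The design choice here is decoupling: the publisher can be changed or tested without any knowledge of its subscribers.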
SDA ASSIGNMENT NO 5
Q: Make a Detailed note on the following points.
1. System types (Personnel systems, Embedded systems, Distributed
systems)
2. Thin Client model
3. Fat Client model
4. 3-tier middleware architecture.

Ans:
1. System Types:

 Embedded Systems:
An embedded system is a combination of computer software and hardware that is either fixed in capability or programmable. An embedded system can be an independent system, or it can be a part of a larger system. It is mostly designed for a specific function or functions within a larger system. For example, a fire alarm is a common embedded system that can sense only smoke.

Example of Embedded Systems

Laser Printer

Laser printers use embedded systems to manage various aspects of printing. Apart from performing the main task of printing, the system has to take user inputs, manage communication with the computer system, handle faults, and sense the paper left in the tray, etc.

Here, the main task of the microprocessor is to understand the text and control the printing head so that it discharges ink where it is needed.

To perform this, it needs to decode the different files given to it and understand the fonts and graphics. This consumes substantial CPU time, and the processor must also take user inputs, control motors, etc.
 Distributed Systems:
A distributed system, also known as distributed computing, is a system with multiple
components located on different machines that communicate and coordinate actions in
order to appear as a single coherent system to the end-user.

The machines that are a part of a distributed system may be computers, physical
servers, virtual machines, containers, or any other node that can connect to the network,
have local memory, and communicate by passing messages.

There are two general ways that distributed systems function:

1. Each machine works toward a common goal and the end-user views results as one
cohesive unit.
2. Each machine has its own end-user and the distributed system facilitates sharing
resources or communication services.

Although distributed systems can sometimes be obscure, they usually have three primary
characteristics: all components run concurrently, there is no global clock, and all
components fail independently of each other.

Benefits and challenges of distributed systems

There are three reasons that teams generally decide to implement distributed systems:

 Horizontal Scalability—Since computing happens independently on each node, it is easy and generally inexpensive to add additional nodes and functionality as necessary.
 Reliability—Most distributed systems are fault-tolerant as they can be made up of
hundreds of nodes that work together. The system generally doesn’t experience
any disruptions if a single machine fails.
 Performance—Distributed systems are extremely efficient because work loads
can be broken up and sent to multiple machines.

However, distributed systems are not without challenges. Complex architectural design,
construction, and debugging processes that are required to create an effective distributed
system can be overwhelming.

Three more challenges you may encounter include:

 Scheduling—A distributed system has to decide which jobs need to run, when
they should run, and where they should run. Schedulers ultimately have
limitations, leading to underutilized hardware and unpredictable runtimes.
 Latency—The more widely your system is distributed, the more latency you can
experience with communications. This often leads to teams making tradeoffs
between availability, consistency, and latency.
 Observability—Gathering, processing, presenting, and monitoring hardware
usage metrics for large clusters is a significant challenge.

How a Distributed System Works

Hardware and software architectures are used to maintain a distributed system. Everything must be interconnected—CPUs via the network and processes via the communication system.

Types of distributed systems

Distributed systems generally fall into one of four different basic architecture models:

1. Client-server—Clients contact the server for data, then format it and display it to
the end-user. The end-user can also make a change from the client-side and
commit it back to the server to make it permanent.
2. Three-tier—Information about the client is stored in a middle tier rather than on
the client to simplify application deployment. This architecture model is most
common for web applications.
3. n-tier—Generally used when an application or server needs to forward requests to
additional enterprise services on the network.
4. Peer-to-peer—There are no additional machines used to provide services or
manage resources. Responsibilities are uniformly distributed among machines in
the system, known as peers, which can serve as either client or server.

Example of a Distributed System

Distributed systems have endless use cases, a few being electronic banking systems,
massive multiplayer online games, and sensor networks.

StackPath utilizes a particularly large distributed system to power its content
delivery network service. Every one of our points of presence (PoPs) has nodes
that form a worldwide distributed system. And to provide top-notch content
delivery, StackPath stores the most recently and frequently requested content
in edge locations closest to the location where it is being used.

 Personnel systems:
Human Resources Services, Inc. designs entire personnel systems for cities and
towns. In these projects we carefully consider the specific needs of the
municipality and examine all aspects of personnel/human resource management.
Areas such as recruitment and selection, promotion, training and professional
development, pay and classification, EEO/affirmative action, labor relations,
benefits administration, record-keeping, workers' compensation, civil service,
disciplinary procedures, and staffing needs are studied in depth when designing
the personnel/human resource system. HRS also considers the municipality's form
of government, its unique organizational characteristics, and any pertinent
statutory requirements related to personnel and/or human resource management.
HRS will typically conduct an overview assessment of the organization's current
personnel/human resource operations and make recommendations as to how it
should strengthen its systems. The analysis includes a review of the
personnel/human resource department (or operations) as it currently exists;
employee relations; a checklist audit of core HR functional areas; potential
areas for outsourcing and/or co-sourcing; the HR needs of the municipality as a
whole; market analysis; and recommended job descriptions and proposed
organizational structure.
Our solutions take into account the unique and custom needs of our municipal clients.  In
summary the consulting services can include all or some of the following technical
assistance areas:

 Effectiveness Assessment—An analysis of the municipality's HR processes
(e.g., recruitment, compensation, succession planning, performance management
system, etc.) to determine their alignment with the municipality's daily
operations and human capital strategies.
 Organization Structure Review—An analysis of the HR organizational structure
to determine its ability to deliver the HR processes and programs that support
the municipality's operations and human capital strategies.
 Development Planning—HRS can assist with the development and planning of the
personnel/HR operations and facilitate the achievement of the HR mission
through ongoing organizational, site-based, and individual professional
development services.
2. Thin Client Model:
A thin client is a computer that runs from resources stored on a central server
instead of a localized hard drive. Thin clients work by connecting remotely to
a server-based computing environment where most applications, sensitive data,
and memory are stored.

What are the benefits of a thin client?

Thin clients have a number of benefits, including:

 Reduced cost
 Increased security
 More efficient manageability
 Scalability
Thin client deployment is more cost effective than deploying regular PCs. Because so
much is centralized at the server-side, thin client computing can reduce IT support and
licensing costs.

Security can be improved through employing thin clients because the thin client itself is
restricted by the server. Thin clients cannot run unauthorized software, and data can’t be
copied or saved anywhere except for the server. System monitoring and management is
easier based on the centralized server location.

Thin clients can also be simpler to manage, since upgrades, security policies, and more
can be managed in the data center instead of on the endpoint machines. This leads to less
downtime, increasing productivity among IT staff as well as endpoint machine users.

In what ways can thin clients be used?


There are three ways a thin client can be used: shared services, desktop
virtualization, or browser-based computing.

With shared terminal services, all users at thin client stations share a server-based
operating system and applications. Users of a shared services thin client are limited to
simple tasks on their machine like creating folders, as well as running IT-approved
applications.

Desktop virtualization, or UI processing, means that each desktop lives in a
virtual machine, which is partitioned off from other virtual machines on the
server. The operating system and applications are not shared resources, but
they still physically live on a remote server. These virtualized resources can
be accessed from any device that is able to connect to the server.
A browser-based approach to using thin clients means that an ordinary device
connected to the internet carries out its application functions within a web browser
instead of on a remote server. Data processing is done on the thin client machine, but
software and data are retrieved from the network.

3. Fat Client Model:


A fat client (sometimes called a thick client) is a networked computer with
most resources installed locally, rather than distributed over a network as is
the case with a thin client. Most PCs (personal computers), for example, are
fat clients because they have their own hard drives, DVD drives, software
applications, and so on.

Fat clients are generally preferred by network users because they are highly
customizable and give the user more control over which programs are installed
and over the system configuration. On the other hand, thin clients are more
easily managed, are easier to protect from security risks, and offer lower
maintenance and licensing costs.

A system that has some components and software installed but also uses resources
distributed over a network is sometimes known as a rich client.

A fat client is often built with expensive hardware with many moving parts and should
not be placed in a hostile environment. Otherwise, the fat client may not function
optimally.

An example of a fat client is a computer that handles the majority of a complex drawing’s
editing with sophisticated, locally stored software. The system designer determines
editing or viewing access to this software.

A fat client has several advantages, including the following:

 Fewer server requirements because it does most of the application processing


 More offline work because a server connection is often not required
 Multimedia-rich application processing, such as video gaming facilitation, because
there are no increased server bandwidth requirements
 Runs more applications because many fat clients require that an operating system
reside on a local computer
 Easy network connection at no extra cost because many users have fast local PCs
 Higher server capacity because each fat client handles more processing, allowing
the server to serve more clients

4. 3-tier Middleware Architecture:


A 3-tier architecture is a type of software architecture which is composed of three “tiers”
or “layers” of logical computing. They are often used in applications as a specific type of
client-server system. 3-tier architectures provide many benefits for production and
development environments by modularizing the user interface, business logic, and data
storage layers. Doing so gives greater flexibility to development teams by allowing them
to update a specific part of an application independently of the other parts. This added
flexibility can improve overall time-to-market and decrease development cycle times by
giving development teams the ability to replace or upgrade independent tiers without
affecting the other parts of the system.

For example, the user interface of a web application could be redeveloped or
modernized without affecting the underlying business and data access logic.
This architectural system is often ideal for embedding and integrating 3rd party software
into an existing application. This integration flexibility also makes it ideal for embedding
analytics software into pre-existing applications and is often used by embedded
analytics vendors for this reason. 3-tier architectures are often used in cloud or on-
premises based applications as well as in software-as-a-service (SaaS) applications.

What Do the 3 Tiers Mean?

 Presentation Tier- The presentation tier is the front end layer in the 3-tier system
and consists of the user interface. This user interface is often a graphical one
accessible through a web browser or web-based application and which displays
content and information useful to an end user. This tier is often built on web
technologies such as HTML5, JavaScript, CSS, or through other popular web
development frameworks, and communicates with other layers through API calls.
 Application Tier- The application tier contains the functional business logic
which drives an application’s core capabilities. It’s often written in Java, .NET, C#,
Python, C++, etc.
 Data Tier- The data tier comprises the database/data storage system and data
access layer. Examples of such systems are MySQL, Oracle, PostgreSQL, Microsoft
SQL Server, MongoDB, etc. Data is accessed by the application layer via API calls.

Example of a 3-tier architecture: JReport.


The typical structure for a 3-tier architecture deployment would have the presentation tier
deployed to a desktop, laptop, tablet or mobile device either via a web browser or a web-
based application utilizing a web server. The underlying application tier is usually hosted
on one or more application servers, but can also be hosted in the cloud, or on a dedicated
workstation depending on the complexity and processing power needed by the
application. And the data layer would normally comprise one or more relational
databases, big data sources, or other types of database systems hosted either
on-premises or in the cloud.

A simple example of a 3-tier architecture in action would be logging into a media account
such as Netflix and watching a video. You start by logging in either via the web or via a
mobile application. Once you’ve logged in you might access a specific video through the
Netflix interface, which is the presentation tier used by you as an end user.
Once you’ve selected a video, that information is passed on to the application
tier, which queries the data tier to retrieve the information (in this case a
video) and passes it back up to the presentation tier. This happens every time
you access a video from most media sites.

Benefits of Using a 3-Layer Architecture:

There are many benefits to using a 3-layer architecture including speed of development,
scalability, performance, and availability.  As mentioned, modularizing different tiers of
an application gives development teams the ability to develop and enhance a product with
greater speed than developing a singular code base because a specific layer can be
upgraded with minimal impact on the other layers. It can also help improve
development efficiency by allowing teams to focus on their core competencies.
Many development teams have separate developers who specialize in front-end,
server back-end, and data back-end development; by modularizing these parts of
an application you no longer have to rely on full-stack developers and can
better utilize the specialties of each team.

Scalability is another great advantage of a 3-layer architecture. By separating
out the different layers you can scale each independently depending on the need
at any given
time. For example, if you are receiving many web requests but not many requests which
affect your application layer, you can scale your web servers without touching your
application servers. Similarly, if you are receiving many large application requests from
only a handful of web users, you can scale out your application and data layers to meet
those requests without touching your web servers. This allows you to load balance each
layer independently, improving overall performance with minimal resources.
Additionally, the independence created from modularizing the different tiers gives you
many deployment options. For example, you may choose to have your web servers
hosted in a public or private cloud while your application and data layers may be hosted
onsite. Or you may have your application and data layers hosted in the cloud while your
web servers may be locally hosted, or any combination thereof.

By having disparate layers you can also increase reliability and availability by hosting
different parts of your application on different servers and utilizing cached results. With a
full stack system you have to worry about a server going down and greatly affecting
performance throughout your entire system, but with a 3-layer application, the increased
independence created when physically separating different parts of an application
minimizes performance issues when a server goes down.
SDA ASSIGNMENT NO 6

SUBMITTED BY: Mohsin Aslam


SECTION: BCSE 6B
REGISTRATION NO: F171BCSE186
Make a report on the following interface patterns available for
1. Complete user interface
2. Page layout
3. Forms and input
4. Tables
5. Direct data manipulation
6. Navigation
7. Searching
8. Page Elements
9. E-Commerce

Ans:

1. Complete User Interface:

The user interface (UI) is the point of human-computer interaction and communication in
a device. This can include display screens, keyboards, a mouse and the appearance of
a desktop. It is also the way through which a user interacts with an application or
a website. The growing dependence of many businesses on web applications and mobile
applications has led many companies to place increased priority on UI in an effort to
improve the user's overall experience.

Types of user interfaces:

The various types of user interfaces include:

 Graphical user interface (GUI)

 Command line interface (CLI)

 Menu-driven user interface

 Touch user interface

 Voice user interface (VUI)

 Form-based user interface


 Natural language user interface

Examples of user interfaces:

Some examples of user interfaces include:

 Computer mouse

 Remote control

 Virtual reality

 ATMs

 Speedometer

 The old iPod click wheel

2. Page Layout:
Page layout refers to the arrangement of text, images, and other objects on a page. The
term was initially used in desktop publishing (DTP), but is now commonly used to
describe the layout of webpages as well. Page layout techniques are used to customize the
appearance of magazines, newspapers, books, websites, and other types of publications.

The page layout of a printed or electronic document encompasses all elements of the
page. This includes the page margins, text blocks, images, object padding, and any grids
or templates used to define positions of objects on the page. Page layout applications,
such as Adobe InDesign and QuarkXpress, allow page designers to modify all of these
elements for a printed publication. Web development programs, such as Adobe
Dreamweaver and Microsoft Expression Studio allow Web developers to create similar
page layouts designed specifically for the Web.

3. Forms and Inputs:


The INPUT element defines a form control for the user to enter input.
While INPUT is most useful within a FORM, HTML 4 allows INPUT in
any block-level or inline element other than BUTTON. However, old
browsers such as Netscape 4.x will not display any INPUT elements outside
of a FORM.

When a form is submitted, the current value of each INPUT element within the
FORM is sent to the server as name/value pairs. The INPUT element's NAME
attribute provides the name used. The value sent depends on the type of form
control and on the user's input.

The type of form control defined by INPUT is given by the TYPE attribute. The
default TYPE is text, which provides a single-line text input field. The VALUE
attribute specifies the initial value for the text field. The SIZE and
MAXLENGTH attributes suggest the number of characters and maximum number of
characters, respectively, of the text field.

While the MAXLENGTH attribute can be an effective guide to the user, authors
should not depend on the enforcement of a maximum number of characters by the
client. A user could copy the HTML document, remove the MAXLENGTH attribute,
and submit the form. Thus authors of form handlers should ensure that any
necessary input length checks are repeated on the server side.
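Repeating those length checks on the server side can be sketched in a few lines of Python. This is a hedged illustration, not any particular framework's validation API; the field names and limits mirror the username/password example in this section, and `validate_form` is an invented helper.

```python
# Server-side re-validation of form input lengths: the client-side
# MAXLENGTH attribute is only a hint and can be stripped by the user.
LIMITS = {"username": 8, "pw": 12}   # field name -> maximum length

def validate_form(form):
    """Return a list of error messages; an empty list means the input is OK."""
    errors = []
    for field, max_len in LIMITS.items():
        value = form.get(field, "")
        if not value:
            errors.append(f"{field}: required")
        elif len(value) > max_len:
            errors.append(f"{field}: longer than {max_len} characters")
    return errors

print(validate_form({"username": "alice", "pw": "secret"}))   # []
print(validate_form({"username": "a_very_long_name", "pw": ""}))
# ['username: longer than 8 characters', 'pw: required']
```

The same pattern applies to any constraint the HTML merely suggests (required fields, value ranges, allowed characters): the server must re-check it before trusting the submission.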

The password input type is a variation on the text type. The only difference is
that the input characters are masked, typically by a series of asterisks, to
protect sensitive information from onlookers. Note, however, that the actual
value is transmitted to the server as clear text, so password inputs do not
provide sufficient security for credit card numbers or other highly sensitive
information.

The following example uses text and password fields with the LABEL element to
bind text labels to the INPUT elements:

<P><LABEL ACCESSKEY=U>User name: <INPUT TYPE=text NAME=username SIZE=8 MAXLENGTH=8></LABEL></P>
<P><LABEL ACCESSKEY=P>Password: <INPUT TYPE=password NAME=pw SIZE=12 MAXLENGTH=12></LABEL></P>

The radio and checkbox input types provide switches that can be turned on and
off by the user. The two types differ in that radio buttons are grouped (by
specifying the same NAME attribute on each INPUT) so that only one radio button
in a group can be selected at any time. Checkboxes can be checked without
changing the state of other checkboxes with the same NAME.
The VALUE attribute, required for radio buttons and checkboxes, gives the
value of the control when it is checked. The boolean CHECKED attribute
specifies that the control is initially checked.

4. Tables:

A table is a named relational database data set that is organized by rows and columns.
The relational table is a fundamental relational database concept because tables are the
primary form of data storage.
Columns form the table’s structure, and rows form the content. Tables allow restrictions
for columns (i.e., allowed column data type) but not rows. Every database table must
have a unique name. Most relational databases have naming restrictions. For
example, the name may not contain spaces or be a reserved keyword such as
TABLE or SYSTEM.

Relational tables store data in columns and rows. When creating a table, columns must be
defined, but columns may be added or deleted after table creation. During this time,
column data restrictions may or may not be defined. For example, when creating a
CUSTOMER_MASTER table for storing customer information, definitions may be
added, e.g., a DATE_OF_BIRTH column accepting dates only or a
CUSTOMER_NAME column that may not be null (blank).
Table rows are the table’s actual data elements. In the CUSTOMER_MASTER table, the
rows hold each customer record. Thus, a row consists of a data element within each table
column. If a row value is not entered, the value is termed “null,” which does not have the
same meaning as a zero or space.
Tables also have other table relationships that are defined by special columns, and the
most prominent are primary and foreign keys. For example, the CUSTOMER_MASTER
table has a CUSTOMER_ID column that is used to uniquely identify each table
customer. If another table needs to refer to a certain customer, a corresponding column
(also known as a foreign key) that references the CUSTOMER_MASTER table’s
customer id may be inserted. Other tables do not need to store additional customer details
that are already stored in the CUSTOMER_MASTER table.
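The CUSTOMER_MASTER example above can be sketched with the sqlite3 module from Python's standard library. The ORDERS table, its columns, and the sample data are hypothetical additions for illustration; only CUSTOMER_MASTER and its columns come from the text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.executescript("""
    CREATE TABLE CUSTOMER_MASTER (
        CUSTOMER_ID   INTEGER PRIMARY KEY,  -- uniquely identifies a customer
        CUSTOMER_NAME TEXT NOT NULL,        -- may not be null (blank)
        DATE_OF_BIRTH DATE                  -- accepts dates
    );
    -- Hypothetical table that refers back to a customer via a foreign key.
    CREATE TABLE ORDERS (
        ORDER_ID    INTEGER PRIMARY KEY,
        CUSTOMER_ID INTEGER REFERENCES CUSTOMER_MASTER(CUSTOMER_ID)
    );
""")
conn.execute("INSERT INTO CUSTOMER_MASTER VALUES (1, 'Jane Doe', '1990-05-01')")
conn.execute("INSERT INTO ORDERS VALUES (100, 1)")

# ORDERS need not repeat customer details; a join retrieves them on demand.
row = conn.execute("""
    SELECT o.ORDER_ID, c.CUSTOMER_NAME
    FROM ORDERS o JOIN CUSTOMER_MASTER c ON o.CUSTOMER_ID = c.CUSTOMER_ID
""").fetchone()
print(row)   # (100, 'Jane Doe')
```

This shows the key ideas in miniature: column restrictions (NOT NULL), a primary key, and a foreign key that lets another table reference a customer without duplicating its data.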

5. Direct Data Manipulation:


In computer science, direct manipulation is a human–computer interaction style which
involves continuous representation of objects of interest and rapid, reversible, and
incremental actions and feedback.[1] As opposed to other interaction styles, for example,
the command language, the intention of direct manipulation is to allow a user to
manipulate objects presented to them, using actions that correspond at least loosely to
manipulation of physical objects. An example of direct manipulation is resizing
a graphical shape, such as a rectangle, by dragging its corners or edges with a mouse.
Having real-world metaphors for objects and actions can make it easier for a user to learn
and use an interface (some might say that the interface is more natural or intuitive), and
rapid, incremental feedback allows a user to make fewer errors and complete tasks in less
time, because they can see the results of an action before completing the action, thus
evaluating the output and compensating for mistakes.

6. Navigation:

Navigation design is the discipline of creating, analyzing and implementing ways for
users to navigate through a website or app.

Navigation plays an integral role in how users interact with and use your products. It is
how your user can get from point A to point B and even point C in the least frustrating
way possible.

To make these delightful interactions, designers employ a combination of design
patterns including links, labels and other UI elements. These patterns provide
relevant information and make interacting with products easier.

Good navigation design can:

 Enhance a user’s understanding

 Give them confidence using your product

 Provide credibility to a product

The best kind of navigation design is one which promotes usability. Poor navigation will
result in fewer users for your product and this is why navigation design is central to user
experience design.
Navigation design is complex and there are many design patterns to choose from when
optimizing the user experience. A design pattern is a general, reusable solution to a
problem.

No one pattern is necessarily better than the other. Each pattern that you use in your
product will have to be carefully considered and tested before implementation.

This ensures that the navigation pattern you have chosen is right for your product but
more importantly that it is right for your users.

Ideally, you want to approach navigation from a user-centered design perspective.

7. Searching:
A search box is a combination of input field and submit button. One may think that the
search box doesn’t need much design; after all, it’s just two simple elements.
But since the search box is one of the most frequently used design elements on
content-heavy websites, its usability is critical.
When dealing with a user interface with clear sections or levels, allowing users to refine
their searches according to these specific regions can help to reduce the number of
irrelevant items or options they must consider, saving them much time in the process. As
you can see from the example below, the user is able to select one of three different
search refinement categories: “This Mac,” “IDF Course – UI Design Patterns,” and
“Shared.”
Searching for a file on your computer may take a long time, due to the large number of
documents you will have collected over the years. Refining your search to a folder in
which the file is most likely located, however, saves a lot of time. In this case, the search
is refined to the folder “IDF Course – UI Design Patterns.”
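Scope-refined search like the folder example above can be sketched in a few lines of Python. The folder contents are made up for illustration, and `search` is an invented helper; only the folder names echo the example in the text.

```python
# A toy filesystem: folder name -> list of file names.
FOLDERS = {
    "This Mac": ["budget.xlsx", "patterns.pdf", "notes.txt"],
    "IDF Course - UI Design Patterns": ["patterns.pdf", "lesson2.pdf"],
    "Shared": ["holiday.jpg"],
}

def search(term, scope=None):
    """Search every folder, or only `scope` if one is given."""
    folders = [scope] if scope else FOLDERS.keys()
    return [(folder, name)
            for folder in folders
            for name in FOLDERS[folder]
            if term in name]

print(search("patterns"))   # matches in two folders
print(search("patterns", scope="IDF Course - UI Design Patterns"))   # one match
```

Narrowing the scope shrinks the set of items that must be scanned, which is exactly why refining a search to the most likely folder or section saves the user time.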

8. Page Elements:
User interface (UI) elements are the parts we use to build apps or websites. They add
interactivity to a user interface, providing touch points for the user as they navigate their
way around; think buttons, scrollbars, menu items and checkboxes.

As a user interface (UI) designer, you’ll use UI elements to create a visual language and
ensure consistency across your product—making it user-friendly and easy to navigate
without too much thought on the user’s part.

In this guide, we’ll explore some of the most common user interface elements,
considering when and why you might use them.

User interface elements usually fall into one of the following four categories:

1. Input Controls
2. Navigation Components
3. Informational Components
4. Containers

Input controls allow users to input information into the system. If you need your users to
tell you what country they are in, for example, you’ll use an input control to let them do
so.

Navigational components help users move around a product or website. Common
navigational components include tab bars on an iOS device and a hamburger menu
on an Android.

Informational components share information with users.

Containers hold related content together.

9. E-Commerce:
E-commerce (electronic commerce) is the activity of electronically buying or
selling products on online services or over the Internet. Electronic commerce draws on
technologies such as mobile commerce, electronic funds transfer, supply chain
management, Internet marketing, online transaction processing, electronic data
interchange (EDI), inventory management systems, and automated data
collection systems. E-commerce is in turn driven by the technological advances of
the semiconductor industry, and is the largest sector of the electronics industry.
Modern electronic commerce typically uses the World Wide Web for at least one part of
the transaction's life cycle although it may also use other technologies such as e-mail.
Typical e-commerce transactions include the purchase of online books (such as Amazon)
and music purchases (music download in the form of digital distribution such as iTunes
Store), and to a lesser extent, customized/personalized online liquor
store inventory services.[1] There are three areas of e-commerce: online
retailing, electronic markets, and online auctions. E-commerce is supported by electronic
business.[2]
E-commerce businesses may also employ some or all of the following:

 Online shopping for retail sales direct to consumers via Web sites and mobile
apps, and conversational commerce via live chat, chatbots, and voice assistants[3]
 Providing or participating in online marketplaces, which process third-
party business-to-consumer (B2C) or consumer-to-consumer (C2C) sales
 Business-to-business (B2B) buying and selling;
 Gathering and using demographic data through web contacts and social media
 Business-to-business (B2B) electronic data interchange
 Marketing to prospective and established customers by e-mail or fax (for example,
with newsletters)
 Engaging in pretail for launching new products and services
 Online financial exchanges for currency exchanges or trading purposes
SDA ASSIGNMENT NO 7

SUBMITTED BY: Mohsin Aslam


SECTION: BCSE 6B
REGISTRATION NO: F171BCSE186
Name and give a short detail of the seven mistakes in daily stand-up
meetings:

Commonly known as daily scrum or morning rollcall, the practice of stand-up meeting prevails
in agile software development that focuses on collaboration between team members to overcome
challenges and achieve goals. It is one of the many methodologies used in agile software
development to identify issues and develop an effective action plan. Moreover, it helps a team
self-organize and work as a team by improving communication.
In spite of its prevalence in the corporate world, it is surprising to see so many
organizations fail to achieve the real purpose of the stand-up meeting. Why? You
will get the answer in this post:
1. Not Standing During the Meeting
This is the only rule of a stand-up meeting you cannot break. Still, some senior
members of a team take the privilege of sitting down, which only induces others
to follow. Remember that the purpose of a stand-up meeting is to give a quick
overview of the issues in a project. By sitting down on a chair, you lose the
urge to keep it short and brief, which kills the spirit of this type of meeting.
2. Micromanaging the Team
A stand-up meeting is not about micromanaging your subordinates or asking them
for the nitty-gritty details of their work. Rather, it reinforces team
collaboration by identifying issues and unifying a strategy. By asking your
members questions like “what are your daily work targets?” or “what is your
work criterion?” you only disrespect their valuable time.
3. Choosing a Wrong Location
Your choice of location plays an important role in the success of your stand-up
meeting. Conducting a stand-up meeting in an open-air space will only distract
the attention of your members: such a place allows distractions and commotions
that make your attendees lose concentration. To get the desired results from
your meeting, choose a room or big hall where your teammates can collaborate
with each other without any external interference.
4. Failure to Make Rules
Due to the nature of a stand-up meeting, it is important to have certain rules
and regulations. Make rules for your stand-up meeting and share them with your
team members. Make things clear, for example switching off cell phones and no
chit-chat, so that you can get the most out of their 10-15 minutes.
5. Being Late to the Meeting
A stand-up meeting is too short to come late to. As a scrum master, it is your
responsibility to make sure that everyone comes on time. Due to the brevity of
a stand-up meeting, you need to make it clear to each of your team members to
show up on time. Impose a penalty on latecomers.
6. Not Keeping Focus on the Agenda
Remember that the idea of a stand-up meeting is to recognize the challenges of
a project and find solutions. So, you need to keep your focus on identifying
the issues and developing an action plan. By not paying attention to these core
issues, you will lose track of your project and impede its progress.
7. Only the Scrum Master Speaks While Others Listen
Often, a stand-up meeting is led by a scrum master who happens to be a project
manager or team leader. Unlike a regular team meeting, a stand-up meeting is
timed to 10-15 minutes to discuss the issues of a project, so the role of the
leader should be no more than directing the flow of conversation. When you
become the only voice in a stand-up meeting, you deprive others of voicing
their concerns, which only kills its spirit.
Remember that a stand-up meeting is about developing an action plan to overcome
challenges in a project. By avoiding the above-mentioned mistakes, you can
ensure collaboration in your team and make your project a success.

1. Name and explain the six attributes of a good user story.


Ans:

Mike Cohn specifies six fundamental attributes of a good user story in his
book User Stories Applied. These are (1) independent, (2) negotiable, (3) valuable to
users or customers/purchasers, (4) estimable, (5) small, and (6) testable.
For the sake of planning and prioritization, stories should not be dependent on one
another. Therefore, individual stories should be as atomic as possible to allow maximum
flexibility of scheduling and development. However, situations often arise where stories
have inherent dependencies, such as payment processing, where the first story will incur
the overhead of building the supporting infrastructure, which will then reduce the
size/complexity of the remaining stories that take advantage of it. The problem, of course,
is that this forces the team to attack the prioritization of dependent stories while the
stories are still being defined, so that the individual stories can be properly estimated.
One solution to this is to roll all the dependent stories into a single story. This is
applicable if the resulting story is still relatively small. However, if collapsing the stories
results in something large and complex, another strategy is to temporarily treat the
infrastructure development as a separate story for estimating purposes, and annotate,
during story prioritization, that this story should be rolled into the first story that requires
the infrastructure to be built. The resulting combined story should then be re-estimated,
and overall priority adjusted accordingly.
The second attribute is that stories are negotiable. By definition, user stories are placeholders for discussion and progressive elaboration. Thus, stories should be defined, at any given time, only to the level needed to suit the purposes of estimating and prioritization with respect to the applicable planning horizon. For release planning, this is at the relatively high level needed to provide a “good enough” estimate as to size and complexity. Later, during iteration planning, more detail will be added, as needed, to provide a “good enough” estimate of tasks (and duration) needed to plan the iteration. And, during daily planning, even more detail may be added in the form of diagrams and other artifacts that a developer may need to actually develop it. However, at every step, the attribute of negotiability is maintained, as acquired domain knowledge, overall construction, and user feedback work to refine desired functionality. Note, also, that this may trigger splitting of a story and/or re-estimating.
The third attribute is value to users or customers/purchasers. In his definition of this attribute, Mike Cohn makes the distinction between those who use the software and those who purchase the software. For example, users don’t generally care which platform a software package runs on, as long as it runs on theirs. However, a purchaser may have a requirement that the software support specific versions of Internet Explorer (or Firefox, etc.), or some other standardized platform. These would be captured as stories for estimation and prioritization, along with stories for specific functionality requested by the users.
What about developers, you may ask? Isn’t it possible that they have stories – often related to specific architectures and technical errata – that should be defined and placed on the backlog?
The answer to this is usually no. While these may be of benefit to the programmers, they don’t necessarily demonstrate measurable benefit to the users and/or purchasers of the product. Therefore, when these stories arise, they should be worded with respect to the value they provide to the users/purchasers, such as “business rules should be processed in ten seconds or less” as opposed to “the system should use an n-tier architecture to improve performance.” This allows the stories to be worded in a way such that they can be prioritized in relation to the other, user/purchaser stories on the backlog, and allows the developers to define solutions to specific problems, instead of general solutions that may be harder to quantify (and justify) with respect to business value.
The fourth attribute is that stories be estimatable. Obviously, this is an important attribute, since estimating story size is a central part of the planning and prioritization process. However, Mike Cohn points to three fundamental issues that may impede story estimation: lack of domain knowledge, lack of technical knowledge, and story size.
If the development team is having a difficult time estimating a story because it doesn’t fully understand the feature, then this is a signal that the user(s) who asked for the story need to provide more information so the team can make an informed estimate. If the story can’t be estimated because of unfamiliar (or non-existent) technology, then the answer is to either acquire people who understand the technology, train team members on the technology, or conduct a small research project (or prototype) to gain enough understanding so as to provide a reasonable estimate. If a story is too large, then it should be split in accordance with the guidelines provided next.
The next attribute of a good story is size; specifically, smaller is generally better. This is not to say that a story can’t be too small, because it can. But, to a point, smaller is usually better because smaller stories tend to be easier to estimate, and with less variability than larger stories.
When it comes to story size, Mike Cohn identifies two basic determinants: compound stories, such as epics, that contain additional sub-stories, and complex stories that do not easily lend themselves to splitting into smaller stories. Splitting compound stories is generally relatively easy: you just split the story into its constituent parts (minding dependencies, as previously noted). Complex stories, however, can be difficult to split because they may contain sub-stories that are tightly interrelated, or some other issue that makes splitting difficult.
One common way of splitting complex stories is called “slicing the cake.” By this method, the story is sliced by identifying the minimum amount of acceptable functionality that cuts across all layers of the underlying technology or process, then splitting everything else out into a separate story as “additional functionality.” This way, the users can be provided with something that works, albeit in a reduced configuration, with the option to add the rest of the functionality as priority determines. Another way of splitting complex stories is to create a separate story called a “proof of concept.” This is generally useful when working with new or little understood technology, and allows a prototype to be produced that provides an example of a working solution. Once the proof of concept has been completed, knowledge gained can be used to provide a better estimate for the original story.
The final attribute of a good story is that it is testable. Non-testable stories usually manifest themselves as vague requirements, such as “the user must have an enjoyable experience,” or something equally non-quantifiable. Stories such as this should be discarded, or rewritten in quantifiable terms. For example, “the user must have an enjoyable experience” can be rewritten as “the application must score at 80% or above on the provided user survey.” This provides a distinct, testable metric which allows the story to be quantified and prioritized.

15. Explain Model, View and Controller along with their functional responsibilities. What is the purpose of a Model View Controller? Illustrate the example of the Smalltalk-80™ system in terms of MVC.

Model
The central component of the pattern. It is the application's dynamic data structure, independent
of the user interface.[5] It directly manages the data, logic and rules of the application.
View
Any representation of information such as a chart, diagram or table. Multiple views of the same
information are possible, such as a bar chart for management and a tabular view for accountants.
Controller
Accepts input and converts it to commands for the model or view.
In addition to dividing the application into these components, the model–view–controller design defines the interactions between them.
• The model is responsible for managing the data of the application. It receives user input from the controller.
• The view is the presentation of the model in a particular format.
• The controller responds to the user input and performs interactions on the data model objects. The controller receives the input, optionally validates it, and then passes the input to the model.
As with other software patterns, MVC expresses the "core of the solution" to a problem while
allowing it to be adapted for each system. Particular MVC designs can vary significantly from
the traditional description here.
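The division of responsibilities can be sketched in a few lines of Python. This is an illustrative toy, not the actual Smalltalk-80 class library: the model notifies its registered views when it changes (echoing, in spirit, Smalltalk-80's dependency/update mechanism), and the controller converts raw input into a model command.

```python
class CounterModel:                      # Model: data, logic and rules
    def __init__(self):
        self.value = 0
        self._observers = []

    def attach(self, view):              # views register for change notices
        self._observers.append(view)

    def increment(self, amount):
        if amount < 0:                   # the model enforces its own rules
            raise ValueError("amount must be non-negative")
        self.value += amount
        for view in self._observers:
            view.render(self)            # notify dependent views of the change

class TextView:                          # View: one presentation of the model
    def __init__(self):
        self.last_output = None

    def render(self, model):
        self.last_output = f"Count: {model.value}"

class CounterController:                 # Controller: input -> model commands
    def __init__(self, model):
        self.model = model

    def handle_input(self, text):
        self.model.increment(int(text))  # convert/validate, then delegate

model = CounterModel()
view = TextView()
model.attach(view)
controller = CounterController(model)
controller.handle_input("3")
print(view.last_output)  # -> Count: 3
```

Because the view only reads the model, a second view (say, a bar chart) could be attached to the same model without touching the controller, which is the point of the separation.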
Service
A layer called a service sometimes sits between the controller and the model. It fetches data from the model and lets the controller use the fetched data. This layer separates data storage (model), data fetching (service) and data manipulation (controller). Since this layer is not part of the original MVC concept, it is optional, but it can be useful for code management and reusability.
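A sketch of such a service layer (all class names are illustrative assumptions): the controller asks the service for data, and only the service talks to the model.

```python
class UserModel:                          # data storage
    def __init__(self):
        self._rows = {1: "alice", 2: "bob"}

    def find(self, user_id):
        return self._rows.get(user_id)

class UserService:                        # data fetching
    def __init__(self, model):
        self._model = model

    def get_username(self, user_id):
        name = self._model.find(user_id)
        if name is None:                  # the service owns fetch-time checks
            raise KeyError(f"no user {user_id}")
        return name

class UserController:                     # data manipulation / coordination
    def __init__(self, service):
        self._service = service

    def show(self, user_id):
        return f"User: {self._service.get_username(user_id)}"

user_controller = UserController(UserService(UserModel()))
print(user_controller.show(1))  # -> User: alice
```

Swapping the storage (e.g. a database-backed model) would then require no change to the controller, only to the model behind the service.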
History
One of the seminal insights in the early development of graphical user interfaces, MVC became
one of the first approaches to describe and implement software constructs in terms of
their responsibilities.[10]
Trygve Reenskaug introduced MVC into Smalltalk-79 while visiting the Xerox Palo Alto
Research Center (PARC) in the 1970s. In the 1980s, Jim Althoff and others implemented a
version of MVC for the Smalltalk-80 class library. Only later did a 1988 article in The Journal
of Object Technology (JOT) express MVC as a general concept.[13]
The MVC pattern has subsequently evolved,[14] giving rise to variants such as hierarchical
model–view–controller (HMVC), model–view–adapter (MVA), model–view–presenter (MVP), 
model–view–viewmodel (MVVM), and others that adapted MVC to different contexts.
The use of the MVC pattern in web applications exploded in popularity after the introduction
of NeXT's WebObjects in 1996, which was originally written in Objective-C (that borrowed
heavily from Smalltalk) and helped enforce MVC principles. Later, the MVC pattern became
popular with Java developers when WebObjects was ported to Java. Later frameworks for Java,
such as Spring (released in October 2002), continued the strong bond between Java and MVC.
The introduction of the frameworks Django (July 2005, for Python) and Rails (December 2005,
for Ruby), both of which had a strong emphasis on rapid deployment, increased MVC's
popularity outside the traditional enterprise environment in which it has long been popular.
MVC web frameworks now hold large market-shares relative to non-MVC web toolkits.
Use in web applications
Although originally developed for desktop computing, MVC has been widely adopted as a
design for World Wide Web applications in major programming languages. Several web
frameworks have been created that enforce the pattern. These software frameworks vary in their
interpretations, mainly in the way that the MVC responsibilities are divided between the client
and server.[15]
Some web MVC frameworks take a thin client approach that places almost the entire model,
view and controller logic on the server. This is reflected in frameworks such
as Django, Rails and ASP.NET MVC. In this approach, the client sends either hyperlink requests
or form submissions to the controller and then receives a complete and updated web page (or
other document) from the view; the model exists entirely on the server. Other frameworks such
as AngularJS, EmberJS, JavaScriptMVC and Backbone allow the MVC components to execute partly on the client (also see Ajax).
Goals of MVC
Simultaneous development
Because MVC decouples the various components of an application, developers are able to work
in parallel on different components without affecting or blocking one another. For example, a
team might divide their developers between the front-end and the back-end. The back-end
developers can design the structure of the data and how the user interacts with it without
requiring the user interface to be completed. Conversely, the front-end developers are able to
design and test the layout of the application prior to the data structure being available.
Code reuse
The same (or similar) view for one application can be refactored for another application with
different data because the view is simply handling how the data is being displayed to the user.
Unfortunately this does not work when that code is also useful for handling user input. For
example, DOM code (including the application's custom abstractions to it) is useful for both
graphics display and user input. (Note that, despite the name Document Object Model, the DOM
is actually not an MVC model, because it is the application's interface to the user).
To address these problems, MVC (and patterns like it) are often combined with a component
architecture that provides a set of UI elements. Each UI element is a single higher-
level component that combines the 3 required MVC components into a single package. By
creating these higher-level components that are independent of each other, developers are able to
reuse components quickly and easily in other applications.
Advantages and disadvantages
Advantages
• Simultaneous development – Multiple developers can work simultaneously on the model, controller and views.
• High cohesion – MVC enables logical grouping of related actions on a controller together. The views for a specific model are also grouped together.
• Loose coupling – The very nature of the MVC framework is such that there is low coupling among models, views and controllers.
• Ease of modification – Because of the separation of responsibilities, future development or modification is easier.
• Multiple views for a model – Models can have multiple views.
• Testability – With the clearer separation of concerns, each part can be better tested independently (e.g. exercising the model without having to stub the view).
Disadvantages
The disadvantages of MVC can be generally categorized as overhead for incorrectly factored software.
• Code navigability – Framework navigation can be complex because it introduces new layers of indirection and requires users to adapt to the decomposition criteria of MVC.
• Multi-artifact consistency – Decomposing a feature into three artifacts causes scattering, requiring developers to maintain the consistency of multiple representations at once.
• Undermined by inevitable clustering – Applications tend to have heavy interaction between what the user sees and what the user uses. Therefore each feature's computation and state tends to get clustered into one of the three program parts, erasing the purported advantages of MVC.
• Excessive boilerplate – Because the application's computation and state are typically clustered into one of the three parts, the other parts degenerate into either boilerplate shims or code-behind[16] that exists only to satisfy the MVC pattern.
• Pronounced learning curve – Knowledge of multiple technologies becomes the norm. Developers using MVC need to be skilled in multiple technologies.
• Lack of incremental benefit – UI applications are already factored into components that achieve code reuse and independence via the component architecture, leaving no incremental benefit to MVC.

16. Explain in brief the N-Tier Client Server Pattern of software communication.
Also explain the five quality attributes and related issues in N-Tier client server
model.
Great products are often built on multi-tier architecture – or n-tier architecture, as it’s often
called. At Stackify, we love to talk about the many tools, resources, and concepts that can help
you build better. So in this post, we’ll discuss n-tier architecture, how it works, and what you
need to know to build better products using multi-tier architecture.
Definition of N-Tier Architecture
N-tier architecture is also called multi-tier architecture because the software is engineered to
have the processing, data management, and presentation functions physically and logically
separated.  That means that these different functions are hosted on several machines or clusters,
ensuring that services are provided without resources being shared and, as such, these services
are delivered at top capacity. The “N” in the name n-tier architecture refers to any number from
1.
Not only does your software gain from being able to get services at the best possible rate, but it’s
also easier to manage. This is because when you work on one section, the changes you make will
not affect the other functions.  And if there is a problem, you can easily pinpoint where it
originates.
A More In-Depth Look at N-Tier Architecture
N-tier architecture usually involves dividing an application into three different tiers:
1. the logic tier,
2. the presentation tier, and
3. the data tier.
The separate physical location of these tiers is what differentiates n-tier architecture from the
model-view-controller framework that only separates presentation, logic, and data tiers in
concept. N-tier architecture also differs from MVC framework in that the former has a middle
layer or a logic tier, which facilitates all communications between the different tiers. When you
use the MVC framework, the interaction that happens is triangular; instead of going through the
logic tier, it is the control layer that accesses the model and view layers, while the model layer
accesses the view layer. Additionally, the control layer makes a model using the requirements
and then pushes that model into the view layer.
This is not to say that you can only use either the MVC framework or the n-tier architecture. A lot of software brings these two approaches together. For instance, you can use the n-tier architecture as the overall architecture and use the MVC framework in the presentation tier.
What Are the Benefits of N-Tier Architecture?
There are several benefits to using n-tier architecture for your software. These are scalability, ease of management, flexibility, and security.
• Secure: You can secure each of the three tiers separately using different methods.
• Easy to manage: You can manage each tier separately, adding or modifying each tier without affecting the other tiers.
• Scalable: If you need to add more resources, you can do it per tier, without affecting the other tiers.
• Flexible: Apart from isolated scalability, you can also expand each tier in any manner that your requirements dictate.
In short, with n-tier architecture, you can adopt new technologies and add more components without having to rewrite the entire application or redesign your whole software, making it easier to scale and maintain. Meanwhile, in terms of security, you can keep sensitive or confidential information in the logic tier, away from the presentation tier, thus making it more secure.
Other benefits include:
• More efficient development. N-tier architecture is very friendly for development, as different teams may work on each tier. This way, you can be sure the design and presentation professionals work on the presentation tier and the database experts work on the data tier.
• Easy to add new features. If you want to introduce a new feature, you can add it to the appropriate tier without affecting the other tiers.
• Easy to reuse. Because the application is divided into independent tiers, you can easily reuse each tier for other software projects. For instance, if you want to use the same program, but for a different data set, you can just replicate the logic and presentation tiers and then create a new data tier.
How It Works and Examples of N-Tier Architecture
When it comes to n-tier architecture, a three-tier architecture is fairly common. In this setup, you
have the presentation or GUI tier, the data layer, and the application logic tier.
The application logic tier. The application logic tier is where all the “thinking” happens, and it
knows what is allowed by your application and what is possible, and it makes other decisions. 
This logic tier is also the one that writes and reads data into the data tier.
The data tier. The data tier is where all the data used in your application are stored.  You can
securely store data on this tier, do transactions, and even search through volumes and volumes of
data in a matter of seconds.
The presentation tier. The presentation tier is the user interface.  This is what the software user
sees and interacts with. This is where they enter the needed information.  This tier also acts as a
go-between for the data tier and the user, passing on the user’s different actions to the logic tier.
Just imagine surfing on your favorite website. The presentation tier is the Web application that
you see. It is shown on a Web browser you access from your computer, and it has the CSS,
JavaScript, and HTML codes that allow you to make sense of the Web application. If you need
to log in, the presentation tier will show you boxes for username, password, and the submit
button.  After filling out and then submitting the form, all that will be passed on to the logic tier.
The logic tier will have the JSP, Java Servlets, Ruby, PHP and other programs. The logic tier
would be run on a Web server.  And in this example, the data tier would be some sort of
database, such as a MySQL, NoSQL, or PostgreSQL database. All of these are run on a separate
database server. Rich Internet applications and mobile apps also follow the same three-tier
architecture. 
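The login walkthrough above can be sketched with the three tiers collapsed into one process for illustration; in a real n-tier deployment each tier would run on its own server, and the presentation tier would be HTML/JavaScript in a browser rather than a Python function.

```python
import sqlite3

# --- Data tier: stores and retrieves data ---
def init_db():
    db = sqlite3.connect(":memory:")     # stand-in for a database server
    db.execute("CREATE TABLE users (name TEXT, password TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 'secret')")
    return db

# --- Logic tier: knows what is allowed; reads and writes the data tier ---
def check_login(db, name, password):
    row = db.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchone()
    return row is not None

# --- Presentation tier: collects the form input, shows the result ---
def login_page(db, form):
    ok = check_login(db, form["username"], form["password"])
    return "Welcome!" if ok else "Invalid credentials"

db = init_db()
print(login_page(db, {"username": "alice", "password": "secret"}))  # Welcome!
print(login_page(db, {"username": "alice", "password": "wrong"}))   # Invalid credentials
```

Note that the presentation tier never touches the database: every request flows through the logic tier, which is what lets each tier be secured, scaled, and replaced independently.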
And there are n-tier architecture models that have more than three tiers. Examples are applications that have these tiers:
• Services – such as print, directory, or database services.
• Business domain – the tier that would host Java, DCOM, CORBA, and other application server objects.
• Presentation tier.
• Client tier – or the thin clients.
One good instance is when you have an enterprise service-oriented architecture.  The enterprise
service bus or ESB would be there as a separate tier to facilitate the communication of the basic
service tier and the business domain tier.
Building applications out of tiers or layers offers a broad solution that developers generally find
easy to understand. It promises a generic approach that can be applied to every use case and it
fits neatly on a single PowerPoint slide. The problem is that it is too rigid a model to address the
more flexible demands of larger, more distributed systems.
The evolving data challenge
Tiered architecture originally emerged as a means of scaling from client-server applications to
internet-based solutions that could support hundreds of thousands of users. By placing a load-
balanced presentation tier on top of processing logic you were able to handle peak load more
effectively, provide a higher degree of resilience, re-use some code and make changes more
quickly.
It worked, hence the fact that it has become so popular. However, it’s too inflexible to be an
effective means of scaling for more modern low-latency applications where the data volumes are
exponentially higher.
Tiered architecture is based on the fallacy that design can somehow be separated from deployment. This just does not work out in practice, as a design based on layers says nothing about how processing should be distributed. Every request tends to follow the same route on its way to and from the database. The interfaces between these layers tend to be fairly chatty, with data being passed around in small chunks. This does not lend itself to remote invocation, so layered applications often come unstuck when you try to distribute processing.
The end result is applications that are oriented around a centralised database server. Processing tends to be very inefficient, particularly if your tiers are running in separate environments. If you look at the actual work going on, you may find that the majority of processing involves remote calls and data transformations rather than serving up business functionality.
The inflexibility of a generic solution
Tiered architecture presents a single abstract solution that tends to be applied in every use case.
This is too much of a generalisation as a generic solution will struggle to adapt to different
scaling and processing requirements. There will be times when all those layers feel like overkill
while complex, long-running operations may require more involved infrastructure to manage.
Dividing a system into rigid tiers tends to undermine flexibility. For example, a tiered design may dictate that validation always happens in the middle tier, when there is nothing wrong with deploying the same validation logic in both the presentation and middle tiers in simpler cases. More data-intensive logic may even be better situated closer to the data store. The point is that a solution should meet specific processing needs rather than conforming to an arbitrary abstraction.
A single processing route is likely to be too inflexible for most complex systems. You may want
to partition your data and processes to make it easier to optimise specific areas separately. Data
could also be brought closer to the presentation tier through caching mechanisms to reduce the
distance that requests have to travel. None of this can be achieved easily through rigid tiers that
cut across all your data and processes.
Defending the boundaries
Perhaps my biggest concern with tiered architecture is around the separation of concerns. This is often an issue with layered or tiered systems, but it does take a while to manifest.
The generic nature of components in a tiered application can make it difficult to define and defend clear abstractions. Tiers or layers tend to be demarcated by their technical role rather than business functionality, which makes it easy for logic to bleed between components. Over time, small functional changes will be introduced into each layer by time-pressured developers needing somewhere convenient to add fixes.
After a while it becomes impossible to tell where things are going wrong, and minor feature requests necessitate code changes in every layer. De-coupling is never achieved, and this becomes particularly acute once you start trying to add new applications. Anti-pattern clichés such as the “big ball of mud” and “shotgun surgery” become everyday realities.

17. Explain the concepts of components, connectors, data, and topology provided
in Lunar Lander Architectural style case study.
Design elements:
• Components: objects (data and associated operations)
• Connectors: method invocations
• Data: arguments passed to methods
• Topology: can vary arbitrarily; data and interfaces can be shared through inheritance
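A minimal sketch of these design elements (class names and the physics constants are illustrative assumptions, not from the case study): components are objects, connectors are the method invocations between them, and data travels as method arguments.

```python
class LanderState:                 # component: encapsulated data + operations
    def __init__(self, altitude, fuel, velocity):
        self.altitude = altitude
        self.fuel = fuel
        self.velocity = velocity

    def update(self, burn_rate):   # operation on the component's own data
        self.fuel = max(0.0, self.fuel - burn_rate)
        self.velocity += 1.6 - burn_rate        # toy gravity vs. thrust
        self.altitude = max(0.0, self.altitude - self.velocity)

class GameLoop:                    # component that uses another component
    def __init__(self, state):
        self.state = state

    def step(self, burn_rate):     # data (burn_rate) passed as an argument
        self.state.update(burn_rate)           # connector: method invocation
        return self.state.altitude

loop = GameLoop(LanderState(altitude=100.0, fuel=50.0, velocity=0.0))
print(loop.step(burn_rate=1.6))    # thrust exactly cancels gravity -> 100.0
```

The topology is whatever the object references make it: GameLoop could equally call a subclass of LanderState, since data and interfaces are shared through inheritance.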
