All-SDA-Assig - Mohsin Aslam F171BCSE186 BCSE 6B
The analysis model operates as a link between the 'system description' and the 'design
model'. In the analysis model, the information, functions, and behavior of the system are
defined, and these are translated into the architecture, interface, and component-level
designs during 'design modeling'.
SDA ASSIGNMENT NO 2
The following rules of thumb must be followed while creating the analysis model:
1. ‘Tunnel vision’ means limited vision. The designer should not be limited to seeing
only one portion of the whole system; alternative approaches should also be considered.
2. The design should be traceable to the analysis model: the programmer should keep a
record of how each design element maps back to the analysis model.
3. ‘Reinvent the wheel’ means recreating something that already exists. The design
should not reinvent the wheel: well-known patterns and existing components should be
reused, so that design time is spent on genuinely new ideas.
4. The design should minimize the intellectual distance between the software and the
problem as it exists in the real world. When we work in another domain, we have little
knowledge about it, so we take help from domain experts to minimize the distance
between the two areas.
5. The design should exhibit uniformity and integration. Our design should be consistent
in style, and its different parts should fit together coherently.
6. The design should be structured to accommodate change. If the designer needs to
change something in the design, the design should be able to absorb that change.
7. The design should be capable of degrading gracefully, even when abnormal data,
events, or operating conditions occur.
8. Design and coding are two different activities. Design is the description of the logic
used in solving the problem; coding is the implementation of that design in a specific
programming language.
9. The design’s quality should be assessed as it is created, not after its creation.
10. The design should have minimal conceptual errors. It must be ensured that
major conceptual errors of the design, such as ambiguity and inconsistency, are
addressed before dealing with the syntactical errors present in the design model.
SDA ASSIGNMENT NO 3
Q.2 Explain the four factors which affect the software design model
Answer:
1. Data/Class Design
2. Architectural Design
3. Interface Design
4. Component-Level Design
Data/Class Design:
• The data/class design transforms class models into design class realizations and the
data structures required to implement the software.
• The objects and relationships defined in the Class-Responsibility-Collaborator (CRC)
diagram and the detailed data content depicted by class attributes and other notation
provide the basis for the data design action.
Architectural Design:
• The architectural design defines the relationship between major structural elements of the
software, the architectural styles and design patterns that can be used to achieve the
requirements defined for the system, and the constraints that affect the way in which
architecture can be implemented.
• The architectural design representation—the framework of a computer-based system—is
derived from the requirements model.
Interface Design:
• The interface design describes how the software communicates with systems that
interoperate with it, and with humans who use it.
• An interface implies a flow of information (e.g., data and/or control) and a specific type
of behavior.
Therefore, usage scenarios and behavioral models provide much of the information required for
interface design.
Component-Level Design:
• The component-level design transforms structural elements of the software architecture
into a procedural description of software components.
• Information obtained from the class-based models, flow models, and behavioral models
serves as the basis for component design.
• A design should lead to interfaces that reduce the complexity of connections between
components and with the external environment.
• A design should be derived using a repeatable method that is driven by information
obtained during software requirements analysis.
Q.5 Explain the “Open the door” phenomenon in terms of stepwise software
design refinement
Answer:
An example of a procedural abstraction would be the word open for a door. Open implies a long
sequence of procedural steps (e.g., walk to the door, reach out and grasp knob, turn knob and
pull door, step away from moving door, etc.).
A data abstraction is a named collection of data that describes a data object.
In the context of the procedural abstraction open, we can define a data abstraction called door.
Like any data object, the data abstraction for door would encompass
a set of attributes that describe the door (e.g., door type, swing direction, opening mechanism,
weight, dimensions).
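The two abstractions can be sketched together in code; a minimal, hedged illustration in Python (the Door class, its attribute names, and open_door are hypothetical, chosen only to mirror the attributes listed above):

```python
from dataclasses import dataclass

@dataclass
class Door:
    """Data abstraction: a named collection of attributes describing a door."""
    door_type: str
    swing_direction: str
    opening_mechanism: str
    weight_kg: float
    is_open: bool = False

def open_door(door: Door) -> Door:
    """Procedural abstraction: 'open' hides the long sequence of steps
    (walk to door, grasp knob, turn knob, pull door, step away).
    Each step could itself be refined further in stepwise refinement."""
    door.is_open = True
    return door

front = open_door(Door("panel", "inward", "knob", 25.0))
```

The caller only says "open"; the detailed steps stay hidden inside the procedural abstraction, which is exactly what stepwise refinement exposes one level at a time.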
SDA ASSIGNMENT NO 4
SUBMITTED BY: Mohsin Aslam
SECTION: BCSE 6B
REGISTRATION NO: F171BCSE186
System Decomposition
System decomposition begins by decomposing the system into cohesive, well-defined
subsystems. Subsystems are then decomposed into cohesive, well-defined components,
and components are then decomposed into cohesive, well-defined sub-components.
In fact, there is no important distinction between system, subsystem, component, and
sub-component, so the above process can be reduced to a simpler iterative process:
repeatedly decompose each part into cohesive, well-defined smaller parts until the parts
are simple enough to implement directly.
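The iterative decomposition can be sketched as one recursive routine; a minimal illustration (the part names and the rule that a part with no sub-parts is "simple enough" are assumptions):

```python
def decompose(part: str, children: dict) -> list:
    """Recursively decompose a part into its leaf sub-parts.
    `children` maps each part to its well-defined sub-parts; a part
    with no entry is considered simple enough to implement directly."""
    subs = children.get(part, [])
    if not subs:                       # base case: atomic part
        return [part]
    leaves = []
    for sub in subs:                   # the same step at every level
        leaves.extend(decompose(sub, children))
    return leaves

# Hypothetical system structure.
structure = {
    "system": ["subsystem-A", "subsystem-B"],
    "subsystem-A": ["component-1", "component-2"],
    "component-2": ["sub-comp-x"],
}
parts = decompose("system", structure)
# parts == ["component-1", "sub-comp-x", "subsystem-B"]
```

Note that the same function handles system, subsystem, and component levels, reflecting the point that there is no important distinction between them.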
Q.3 Explain the system issues and their relationship with the
system design
End User: The end user is basically the person who will use your
application to get benefit from it, and who in return becomes a source of
income to you.
Developer: Developers are the people who will maintain your
product and will regularly check whether anything is required, or whether
changes need to be made in your system to make it better and more
user-friendly for the clients.
Q.5. What are the six general trade-off issues in a software design
process?
Reliability
Expandability
Programmability
Maintainability
Compatibility
Adaptability
Availability
Development Status and Cost
Q.6 Map a network packet communication example in terms
of software-modular based communication.
Network mapping is the study of the physical connectivity of networks, e.g., the Internet.
Network mapping discovers the devices on the network and their connectivity. It is not to be
confused with network discovery or network enumeration, which discovers devices on the
network and their characteristics (operating system, open ports, listening network
services, etc.). The field of automated network mapping has taken on greater importance as
networks become more dynamic and complex in nature.
In order to read a file and parse it into some meaningful data, one can either
read it step by step, one line at a time, or load the entire content into memory;
the latter is not recommended for very large inputs, e.g., in text editors like
Microsoft Word. Coroutines are needed when we want to throw away the strict stack
discipline of subroutine calls completely, and also when we want to do things
concurrently, i.e., non-preemptive multitasking.
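The two reading strategies can be sketched as follows; a hedged illustration (the file contents and the integer-per-line parsing step are hypothetical), with the streaming version written as a generator, Python's coroutine-like construct:

```python
import os
import tempfile

def parse_whole(path: str) -> list:
    """Load the entire file into memory, then parse. Simple, but the
    whole content must fit in RAM; risky for very large files."""
    with open(path) as f:
        return [int(line) for line in f.read().splitlines()]

def parse_streaming(path: str):
    """Read and parse one line at a time. Memory use stays constant;
    this generator yields each parsed value lazily, resuming where it
    left off, much like a coroutine."""
    with open(path) as f:
        for line in f:
            yield int(line)

# Hypothetical input: a small temporary file, one integer per line.
with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write("1\n2\n3\n")
    path = tmp.name
same = parse_whole(path) == list(parse_streaming(path)) == [1, 2, 3]
os.remove(path)
```

Both produce the same result for small inputs; the streaming version is the one that scales.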
Coroutines are almost similar to threads, but with one main difference: threads are
typically preemptively scheduled while coroutines are not. Threads can be rescheduled
at any instant and can execute concurrently, while coroutines are rescheduled only at
specific points and do not execute concurrently.
o State Machines
o Actor Model
o Generators
o Communicating Sequential Processes
o Reverse Communication
State Machines: It is useful to implement state machines within a single
subroutine, where the state is determined by the current entry or exit point
of the procedure. This results in more readable code compared to the use of
goto.
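A coroutine-style state machine can be sketched with a Python generator, where the machine's state is simply the point at which the generator is suspended (a hedged illustration; the traffic-light states are hypothetical):

```python
def traffic_light():
    """Each `yield` is an entry/exit point of the subroutine; the
    machine's state is where the generator is currently suspended,
    so no explicit state variable or goto is needed."""
    while True:
        yield "green"   # state 1
        yield "yellow"  # state 2
        yield "red"     # state 3

light = traffic_light()
states = [next(light) for _ in range(4)]  # cycles back to green
```

Each call to next() resumes the procedure at the point after the last yield, which is exactly the "state determined by the current entry or exit point" described above.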
Benefits of Coroutines
Ans:
1. System Types:
Embedded Systems:
An embedded system is a combination of computer hardware and software which is
either fixed in capability or programmable. An embedded system can be either an
independent system, or it can be a part of a larger system. It is mostly designed for a
specific function or functions within a larger system. For example, a fire alarm is a
common example of an embedded system: it can sense only smoke.
Laser Printer
Laser printers use embedded systems to manage various aspects of printing.
Apart from performing the main task of printing, the printer has to take user inputs,
manage communication with the computer system, handle faults, and sense the paper
left in the tray, etc.
Here, the main task of the microprocessor is to understand the text and control the
printing head in such a way that it discharges ink where it is needed.
To perform this, it needs to decode the different files given to it and understand the
fonts and graphics. It consumes substantial CPU time to process the data, and it also
has to take user inputs, control motors, etc.
Distributed Systems:
A distributed system, also known as distributed computing, is a system with multiple
components located on different machines that communicate and coordinate actions in
order to appear as a single coherent system to the end-user.
The machines that are a part of a distributed system may be computers, physical
servers, virtual machines, containers, or any other node that can connect to the network,
have local memory, and communicate by passing messages.
1. Each machine works toward a common goal and the end-user views results as one
cohesive unit.
2. Each machine has its own end-user and the distributed system facilitates sharing
resources or communication services.
Although distributed systems can sometimes be obscure, they usually have three primary
characteristics: all components run concurrently, there is no global clock, and all
components fail independently of each other.
Teams generally decide to implement distributed systems for reasons such as horizontal
scalability, reliability, and performance. However, distributed systems are not without
challenges. The complex architectural design, construction, and debugging processes
that are required to create an effective distributed system can be overwhelming.
Scheduling—A distributed system has to decide which jobs need to run, when
they should run, and where they should run. Schedulers ultimately have
limitations, leading to underutilized hardware and unpredictable runtimes.
Latency—The more widely your system is distributed, the more latency you can
experience with communications. This often leads to teams making tradeoffs
between availability, consistency, and latency.
Observability—Gathering, processing, presenting, and monitoring hardware
usage metrics for large clusters is a significant challenge.
Distributed systems generally fall into one of four different basic architecture models:
1. Client-server—Clients contact the server for data, then format it and display it to
the end-user. The end-user can also make a change from the client-side and
commit it back to the server to make it permanent.
2. Three-tier—Information about the client is stored in a middle tier rather than on
the client to simplify application deployment. This architecture model is most
common for web applications.
3. n-tier—Generally used when an application or server needs to forward requests to
additional enterprise services on the network.
4. Peer-to-peer—There are no additional machines used to provide services or
manage resources. Responsibilities are uniformly distributed among machines in
the system, known as peers, which can serve as either client or server.
Distributed systems have endless use cases, a few being electronic banking systems,
massive multiplayer online games, and sensor networks.
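The client-server model and message passing described above can be sketched as an in-process simulation; a hedged illustration (the queue-based transport and request names are hypothetical stand-ins for a real network):

```python
import queue
import threading

def server(requests: queue.Queue, responses: queue.Queue, data: dict):
    """The server owns the data; clients never touch it directly,
    only via messages on the request/response queues."""
    while True:
        key = requests.get()
        if key is None:          # shutdown message
            break
        responses.put(data.get(key, "NOT FOUND"))

def client_fetch(requests: queue.Queue, responses: queue.Queue, key):
    """A client sends a request message and blocks for the response."""
    requests.put(key)
    return responses.get()

# Hypothetical usage: one server node, one client.
req, resp = queue.Queue(), queue.Queue()
node = threading.Thread(target=server, args=(req, resp, {"user/42": "Mohsin"}))
node.start()
answer = client_fetch(req, resp, "user/42")   # "Mohsin"
req.put(None)                                 # stop the server
node.join()
```

The client formats and displays whatever the server returns; the model of the data exists entirely on the server side, as in the client-server architecture described above.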
Personnel systems:
Human Resources Services, Inc. designs entire personnel systems for cities and towns. In
these projects we carefully consider the specific needs of the municipality and examine
all aspects of personnel/human resource management. This would include areas such as
recruitment and selection, promotion, training and professional development, pay and
classification, EEO/affirmative action, labor relations, benefits administration, record-
keeping, worker’s compensation, civil service, disciplinary procedures, and staffing
needs which are studied in depth when designing the personnel/human resource system.
HRS also considers the municipality's form of government, its unique organizational
characteristics, and any pertinent statutory requirements as it relates to personnel and/or
human resource management.
HRS will typically conduct an overview assessment of the organization's current
personnel/human resource operations, and make necessary recommendations as to how it
should strengthen its systems. The analysis includes a review of the personnel/human
resource department (or operations) as it currently exists; employee relations; a checklist
audit of core HR functional areas; potential areas for outsourcing and/or co-sourcing; the
HR needs of the municipality as a whole; market analysis, and recommended job
descriptions and proposed organizational structure.
Our solutions take into account the unique and custom needs of our municipal clients,
and the consulting services can include all or some of these technical assistance areas.
Thin-client deployment offers several benefits:
Reduced cost
Increased security
More efficient manageability
Scalability
Thin client deployment is more cost effective than deploying regular PCs. Because so
much is centralized at the server-side, thin client computing can reduce IT support and
licensing costs.
Security can be improved through employing thin clients because the thin client itself is
restricted by the server. Thin clients cannot run unauthorized software, and data can’t be
copied or saved anywhere except for the server. System monitoring and management is
easier based on the centralized server location.
Thin clients can also be simpler to manage, since upgrades, security policies, and more
can be managed in the data center instead of on the endpoint machines. This leads to less
downtime, increasing productivity among IT staff as well as endpoint machine users.
With shared terminal services, all users at thin client stations share a server-based
operating system and applications. Users of a shared services thin client are limited to
simple tasks on their machine like creating folders, as well as running IT-approved
applications.
Fat clients are almost unanimously preferred by network users because they are very
customizable and the user has more control over what programs are installed and specific
system configuration. On the other hand, thin clients are more easily managed, are easier
to protect from security risks, and offer lower maintenance and licensing costs.
A system that has some components and software installed but also uses resources
distributed over a network is sometimes known as a rich client.
A fat client is often built with expensive hardware with many moving parts and should
not be placed in a hostile environment. Otherwise, the fat client may not function
optimally.
An example of a fat client is a computer that handles the majority of a complex drawing’s
editing with sophisticated, locally stored software. The system designer determines
editing or viewing access to this software.
In a 3-tier architecture, each tier can be modified independently. For example, the user
interface of a web application could be redeveloped or modernized
without affecting the underlying functional business and data access logic underneath.
This architectural system is often ideal for embedding and integrating 3rd party software
into an existing application. This integration flexibility also makes it ideal for embedding
analytics software into pre-existing applications and is often used by embedded
analytics vendors for this reason. 3-tier architectures are often used in cloud or on-
premises based applications as well as in software-as-a-service (SaaS) applications.
Presentation Tier- The presentation tier is the front end layer in the 3-tier system
and consists of the user interface. This user interface is often a graphical one
accessible through a web browser or web-based application and which displays
content and information useful to an end user. This tier is often built on web
technologies such as HTML5, JavaScript, CSS, or through other popular web
development frameworks, and communicates with other layers through API calls.
Application Tier- The application tier contains the functional business logic
which drives an application’s core capabilities. It’s often written in Java, .NET, C#,
Python, C++, etc.
Data Tier- The data tier comprises the database/data storage system and data
access layer. Examples of such systems are MySQL, Oracle, PostgreSQL, Microsoft
SQL Server, MongoDB, etc. Data is accessed by the application layer via API calls.
A simple example of a 3-tier architecture in action would be logging into a media account
such as Netflix and watching a video. You start by logging in either via the web or via a
mobile application. Once you’ve logged in you might access a specific video through the
Netflix interface which is the presentation tier used by you as an end user. Once you’ve
selected a video that information is passed on to the application tier which will query the
data tier to call the information or in this case a video back up to the presentation tier.
This happens every time you access a video from most media sites.
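The Netflix-style flow above can be sketched as three separate modules where each tier talks only to the one below it (a hedged illustration; the catalog contents and function names are hypothetical):

```python
# Data tier: owns storage and raw data access (a dict stands in for a
# real database such as MySQL or PostgreSQL).
_CATALOG = {"v1": "Stranger Things", "v2": "The Crown"}

def data_get_video(video_id: str):
    return _CATALOG.get(video_id)

# Application tier: business logic; the only layer that queries
# the data tier.
def app_fetch_video(video_id: str) -> str:
    title = data_get_video(video_id)
    if title is None:
        return "ERROR: video not found"
    return title

# Presentation tier: formats results for the end user; talks only to
# the application tier, never to the data tier directly.
def ui_play(video_id: str) -> str:
    return f"Now playing: {app_fetch_video(video_id)}"
```

Because the tiers only touch through these narrow calls, the presentation layer could be rewritten for mobile without changing the application or data layers, which is the modularity benefit discussed below.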
There are many benefits to using a 3-layer architecture including speed of development,
scalability, performance, and availability. As mentioned, modularizing different tiers of
an application gives development teams the ability to develop and enhance a product with
greater speed than developing a singular code base because a specific layer can be
upgraded with minimal impact on the other layers. It can also help improve development
efficiency by allowing teams to focus on their core competencies. Many development
teams have separate developers who specialize in front-end, server back-end, and data
back-end development; by modularizing these parts of an application you no longer have
to rely on full-stack developers and can better utilize the specialties of each team.
By having disparate layers you can also increase reliability and availability by hosting
different parts of your application on different servers and utilizing cached results. With a
full stack system you have to worry about a server going down and greatly affecting
performance throughout your entire system, but with a 3-layer application, the increased
independence created when physically separating different parts of an application
minimizes performance issues when a server goes down.
SDA ASSIGNMENT NO 6
Ans:
The user interface (UI) is the point of human-computer interaction and communication in
a device. This can include display screens, keyboards, a mouse and the appearance of
a desktop. It is also the way through which a user interacts with an application or
a website. The growing dependence of many businesses on web applications and mobile
applications has led many companies to place increased priority on UI in an effort to
improve the user's overall experience.
Computer mouse
Remote control
Virtual reality
ATMs
Speedometer
2. Page Layout:
Page layout refers to the arrangement of text, images, and other objects on a page. The
term was initially used in desktop publishing (DTP), but is now commonly used to
describe the layout of webpages as well. Page layout techniques are used to customize the
appearance of magazines, newspapers, books, websites, and other types of publications.
The page layout of a printed or electronic document encompasses all elements of the
page. This includes the page margins, text blocks, images, object padding, and any grids
or templates used to define positions of objects on the page. Page layout applications,
such as Adobe InDesign and QuarkXpress, allow page designers to modify all of these
elements for a printed publication. Web development programs, such as Adobe
Dreamweaver and Microsoft Expression Studio, allow Web developers to create similar
page layouts designed specifically for the Web.
4. Tables:
A table is a named relational database data set that is organized by rows and columns.
The relational table is a fundamental relational database concept because tables are the
primary form of data storage.
Columns form the table’s structure, and rows form the content. Tables allow restrictions
for columns (i.e., allowed column data type) but not rows. Every database table must
have a unique name. Most relational databases have naming restrictions; for example, the
name may not contain spaces or be a reserved keyword such as TABLE or SYSTEM.
Relational tables store data in columns and rows. When creating a table, columns must be
defined, but columns may be added or deleted after table creation. During this time,
column data restrictions may or may not be defined. For example, when creating a
CUSTOMER_MASTER table for storing customer information, definitions may be
added, e.g., a DATE_OF_BIRTH column accepting dates only or a
CUSTOMER_NAME column that may not be null (blank).
Table rows are the table’s actual data elements. In the CUSTOMER_MASTER table, the
rows hold each customer record. Thus, a row consists of a data element within each table
column. If a row value is not entered, the value is termed “null,” which does not have the
same meaning as a zero or space.
Tables also have other table relationships that are defined by special columns, and the
most prominent are primary and foreign keys. For example, the CUSTOMER_MASTER
table has a CUSTOMER_ID column that is used to uniquely identify each table
customer. If another table needs to refer to a certain customer, a corresponding column
(also known as a foreign key) that references the CUSTOMER_MASTER table’s
customer id may be inserted. Other tables do not need to store additional customer details
that are already stored in the CUSTOMER_MASTER table.
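The CUSTOMER_MASTER example can be sketched with SQLite via Python's standard sqlite3 module (a hedged illustration; the column names follow the text, but the exact constraints and sample data are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CUSTOMER_MASTER (
        CUSTOMER_ID   INTEGER PRIMARY KEY,  -- uniquely identifies each customer
        CUSTOMER_NAME TEXT NOT NULL,        -- may not be null (blank)
        DATE_OF_BIRTH DATE                  -- intended for dates only
    );
    CREATE TABLE ORDERS (
        ORDER_ID    INTEGER PRIMARY KEY,
        CUSTOMER_ID INTEGER REFERENCES CUSTOMER_MASTER(CUSTOMER_ID)
                    -- foreign key: no need to duplicate customer details
    );
""")
conn.execute("INSERT INTO CUSTOMER_MASTER VALUES (1, 'Mohsin Aslam', '1999-01-01')")
conn.execute("INSERT INTO ORDERS VALUES (100, 1)")

# A join recovers customer details through the foreign key.
row = conn.execute("""
    SELECT c.CUSTOMER_NAME FROM ORDERS o
    JOIN CUSTOMER_MASTER c ON c.CUSTOMER_ID = o.CUSTOMER_ID
    WHERE o.ORDER_ID = 100
""").fetchone()
```

The ORDERS table stores only the CUSTOMER_ID; the join pulls the rest of the customer's details from CUSTOMER_MASTER on demand.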
6. Navigation:
Navigation design is the discipline of creating, analyzing and implementing ways for
users to navigate through a website or app.
Navigation plays an integral role in how users interact with and use your products. It is
how your user can get from point A to point B and even point C in the least frustrating
way possible.
The best kind of navigation design is one which promotes usability. Poor navigation will
result in fewer users for your product and this is why navigation design is central to user
experience design.
Navigation design is complex and there are many design patterns to choose from when
optimizing the user experience. A design pattern is a general, reusable solution to a
problem.
No one pattern is necessarily better than the other. Each pattern that you use in your
product will have to be carefully considered and tested before implementation.
This ensures that the navigation pattern you have chosen is right for your product but
more importantly that it is right for your users.
7. Searching:
A search box is a combination of an input field and a submit button. One may think that the
search box doesn’t need a design; after all, it’s just two simple elements. But since the
search box is one of the most frequently used design elements on content-heavy websites,
its usability is critical.
When dealing with a user interface with clear sections or levels, allowing users to refine
their searches according to these specific regions can help to reduce the number of
irrelevant items or options they must consider, saving them much time in the process. As
you can see from the example below, the user is able to select one of three different
search refinement categories: “This Mac,” “IDF Course – UI Design Patterns,” and
“Shared.”
Searching for a file on your computer may take a long time, due to the large number of
documents you will have collected over the years. Refining your search to a folder in
which the file is most likely located, however, saves a lot of time. In this case, the search
is refined to the folder “IDF Course – UI Design Patterns.”
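Scoped search can be sketched as a filter applied before matching; a minimal illustration (the file list and scope strings are hypothetical):

```python
def search(files: list, query: str, scope: str = "") -> list:
    """Return paths containing `query`, optionally refined to a scope
    (a folder prefix). Refining shrinks the candidate set first, so
    fewer irrelevant items need to be considered."""
    candidates = [f for f in files if f.startswith(scope)]
    return [f for f in candidates if query.lower() in f.lower()]

files = [
    "IDF Course - UI Design Patterns/search-box.md",
    "IDF Course - UI Design Patterns/navigation.md",
    "Shared/search-tips.txt",
]
everywhere = search(files, "search")  # matches in both folders
scoped = search(files, "search", "IDF Course - UI Design Patterns")
```

Restricting the scope before matching is what makes the refined search faster and its results less noisy.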
8. Page Elements:
User interface (UI) elements are the parts we use to build apps or websites. They add
interactivity to a user interface, providing touch points for the user as they navigate their
way around; think buttons, scrollbars, menu items and checkboxes.
As a user interface (UI) designer, you’ll use UI elements to create a visual language and
ensure consistency across your product—making it user-friendly and easy to navigate
without too much thought on the user’s part.
In this guide, we’ll explore some of the most common user interface elements,
considering when and why you might use them.
User interface elements usually fall into one of the following four categories:
1. Input Controls
2. Navigation Components
3. Informational Components
4. Containers
Input controls allow users to input information into the system. If you need your users to
tell you what country they are in, for example, you’ll use an input control to let them do
so.
9. E-Commerce:
E-commerce (electronic commerce) is the activity of electronically buying or selling
of products on online services or over the Internet. Electronic commerce draws on
technologies such as mobile commerce, electronic funds transfer, supply chain
management, Internet marketing, online transaction processing, electronic data
interchange (EDI), inventory management systems, and automated data
collection systems. E-commerce is in turn driven by the technological advances of
the semiconductor industry, and is the largest sector of the electronics industry.
Modern electronic commerce typically uses the World Wide Web for at least one part of
the transaction's life cycle although it may also use other technologies such as e-mail.
Typical e-commerce transactions include the purchase of online books (such as Amazon)
and music purchases (music download in the form of digital distribution such as iTunes
Store), and to a lesser extent, customized/personalized online liquor
store inventory services.[1] There are three areas of e-commerce: online
retailing, electronic markets, and online auctions. E-commerce is supported by electronic
business.[2]
E-commerce businesses may also employ some or all of the following:
Commonly known as the daily scrum or morning roll call, the practice of the stand-up meeting
prevails in agile software development, which focuses on collaboration between team members
to overcome challenges and achieve goals. It is one of the many methodologies used in agile
software development to identify issues and develop an effective action plan. Moreover, it
helps a team self-organize and work together by improving communication.
In spite of its prevalence in the corporate world, it comes as a surprise that so many
organizations do not achieve any real purpose from their stand-up meetings. Why? You will
get the answer in this post:
1. Not Standing During the Meeting This is the only rule of a stand-up meeting you cannot break.
Still, some senior members of a team take the privilege of sitting down, which only induces
others to follow. Remember that the purpose of a stand-up meeting is to give a quick overview
of the issues in a project. By sitting down on a chair, you lose the urge to keep it short and
brief, which kills the spirit of this type of meeting.
2. Micromanaging the Team A stand-up meeting is not about micromanaging your
subordinates or asking them for the nitty-gritty details of their work.
Rather, it reinforces team collaboration by identifying issues and unifying a strategy. By asking
your members questions like “what are your daily work targets?” or “what is your work
criterion?” you only disrespect their valuable time.
3. Choosing a Wrong Location Your choice of location plays an important role in the success
of your stand-up meeting.
Conducting a stand-up meeting in an open-air space will only distract your members' attention.
Such a place allows distractions and commotions that make your attendees lose concentration.
To get the desired results from your meeting, you need to choose a room or big hall where your
teammates can collaborate with each other without any external interference.
4. Failure to Make Rules Due to the nature of a stand-up meeting, it is important to have
certain rules and regulations.
Make rules for your stand-up meeting and share them with your team members. Make them clear
about things, for example switching off cell phones and no chit-chat, so that you can get the
most out of their 10-15 minutes.
5. Being Late to the Meeting A stand-up meeting is too short to arrive late to.
As a scrum master, it is your responsibility to make sure that everyone comes on time.
Due to the brevity of a stand-up meeting, you need to make it clear to each of your team
members that they must show up on time. Impose a penalty for latecomers.
6. Not Keeping Focus on the Agenda Remember that the idea of a stand-up meeting is to
recognize the challenges of a project and find solutions.
So, you need to keep your focus on identifying the issues and developing an action plan. By not
paying attention to these core issues, you will lose track of your project and impede its progress.
7. Only the Scrum Master Speaks While Others Listen Often, a stand-up meeting is led by
the scrum master, who happens to be a project manager or team leader.
Unlike a team meet, a stand-up meeting is timed to 10-15 minutes to discuss issues of a project.
So, the role of a leader should not be more than directing flow of conversation. However, when
you become the only voice in a stand-up meeting, then you deprive others of voicing their
concerns which only kills its spirit.
Remember that a stand-up meeting is about developing an action plan to overcome challenges in
a project. By avoiding the above-mentioned mistakes, you can ensure collaboration in your team
and make your project a success.
15. Explain Model, View and Controller along with their functional
responsibilities. What is the purpose of a Model View Controller? Illustrate the
example of the Smalltalk-80™ system in terms of MVC.
Model
The central component of the pattern. It is the application's dynamic data structure, independent
of the user interface.[5] It directly manages the data, logic and rules of the application.
View
Any representation of information such as a chart, diagram or table. Multiple views of the same
information are possible, such as a bar chart for management and a tabular view for accountants.
Controller
Accepts input and converts it to commands for the model or view.
In addition to dividing the application into these components, the model–view–controller design
defines the interactions between them.
The model is responsible for managing the data of the application. It receives user input
from the controller.
The view means presentation of the model in a particular format.
The controller responds to the user input and performs interactions on the data model
objects. The controller receives the input, optionally validates it and then passes the input
to the model.
As with other software patterns, MVC expresses the "core of the solution" to a problem while
allowing it to be adapted for each system. Particular MVC designs can vary significantly from
the traditional description here.
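The division of responsibilities can be sketched in a minimal, framework-free form (a hedged illustration: a hypothetical counter application, not the actual Smalltalk-80 classes):

```python
class Model:
    """Manages the data, logic and rules of the application."""
    def __init__(self):
        self.count = 0
        self.observers = []
    def increment(self):
        self.count += 1
        for view in self.observers:   # notify views of the change
            view.render(self)

class View:
    """A representation of the model's information."""
    def __init__(self):
        self.last_output = ""
    def render(self, model):
        self.last_output = f"Count: {model.count}"

class Controller:
    """Accepts input and converts it to commands for the model."""
    def __init__(self, model):
        self.model = model
    def handle(self, user_input):
        if user_input == "click":     # validate, then pass to the model
            self.model.increment()

model, view = Model(), View()
model.observers.append(view)
controller = Controller(model)
controller.handle("click")
```

The model knows nothing about how it is displayed, so a second View (e.g., a bar chart alongside a table) could be attached to the same observer list without touching Model or Controller.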
Service
Between the controller and the model there sometimes sits a layer called a service. It fetches
data from the model and lets the controller use the fetched data. This layer allows you to
separate data storage (model), data fetching (service) and data manipulation (controller).
Since this layer is not part of the original MVC concept, it is optional, but it can be useful
for code management and reusability purposes.
History
One of the seminal insights in the early development of graphical user interfaces, MVC became
one of the first approaches to describe and implement software constructs in terms of
their responsibilities.[10]
Trygve Reenskaug introduced MVC into Smalltalk-79 while visiting the Xerox Palo Alto
Research Center (PARC) in the 1970s. In the 1980s, Jim Althoff and others implemented a
version of MVC for the Smalltalk-80 class library. Only later did a 1988 article in The Journal
of Object Technology (JOT) express MVC as a general concept.[13]
The MVC pattern has subsequently evolved,[14] giving rise to variants such as hierarchical
model–view–controller (HMVC), model–view–adapter (MVA), model–view–presenter (MVP),
model–view–viewmodel (MVVM), and others that adapted MVC to different contexts.
The use of the MVC pattern in web applications exploded in popularity after the introduction
of NeXT's WebObjects in 1996, which was originally written in Objective-C (that borrowed
heavily from Smalltalk) and helped enforce MVC principles. Later, the MVC pattern became
popular with Java developers when WebObjects was ported to Java. Later frameworks for Java,
such as Spring (released in October 2002), continued the strong bond between Java and MVC.
The introduction of the frameworks Django (July 2005, for Python) and Rails (December 2005,
for Ruby), both of which had a strong emphasis on rapid deployment, increased MVC's
popularity outside the traditional enterprise environment in which it has long been popular.
MVC web frameworks now hold large market shares relative to non-MVC web toolkits.
Use in web applications
Although originally developed for desktop computing, MVC has been widely adopted as a
design for World Wide Web applications in major programming languages. Several web
frameworks have been created that enforce the pattern. These software frameworks vary in their
interpretations, mainly in the way that the MVC responsibilities are divided between the client
and server.[15]
Some web MVC frameworks take a thin client approach that places almost the entire model,
view and controller logic on the server. This is reflected in frameworks such
as Django, Rails and ASP.NET MVC. In this approach, the client sends either hyperlink requests
or form submissions to the controller and then receives a complete and updated web page (or
other document) from the view; the model exists entirely on the server. Other frameworks such
as AngularJS, EmberJS, JavaScriptMVC and Backbone allow the MVC components to execute
partly on the client (also see Ajax).
Goals of MVC
Simultaneous development
Because MVC decouples the various components of an application, developers are able to work
in parallel on different components without affecting or blocking one another. For example, a
team might divide their developers between the front-end and the back-end. The back-end
developers can design the structure of the data and how the user interacts with it without
requiring the user interface to be completed. Conversely, the front-end developers are able to
design and test the layout of the application prior to the data structure being available.
Code reuse
The same (or similar) view for one application can be refactored for another application with
different data because the view is simply handling how the data is being displayed to the user.
Unfortunately this does not work when that code is also useful for handling user input. For
example, DOM code (including the application's custom abstractions to it) is useful for both
graphics display and user input. (Note that, despite the name Document Object Model, the DOM
is actually not an MVC model, because it is the application's interface to the user).
To address these problems, MVC (and patterns like it) are often combined with a component
architecture that provides a set of UI elements. Each UI element is a single higher-
level component that combines the 3 required MVC components into a single package. By
creating these higher-level components that are independent of each other, developers are able to
reuse components quickly and easily in other applications.
Advantages and disadvantages
Advantages
Simultaneous development – Multiple developers can work simultaneously on the model,
controller and views.
High cohesion – MVC enables logical grouping of related actions on a controller
together. The views for a specific model are also grouped together.
Loose coupling – The very nature of the MVC framework is such that there is low
coupling among models, views and controllers.
Ease of modification – Because of the separation of responsibilities, future development
or modification is easier.
Multiple views for a model – Models can have multiple views.
Testability – With the clearer separation of concerns, each part can be better tested
independently (e.g. exercising the model without having to stub the view).
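The "multiple views for a model" and "testability" points can be illustrated with a toy model rendered by two independent views, matching the earlier remark that the same information can appear as a bar chart for management and a table for accountants (all names here are invented for the example):

```python
class TemperatureModel:
    """One model holding the data; it knows nothing about views."""
    def __init__(self, celsius):
        self.celsius = celsius


class BarChartView:
    """A crude text 'bar chart' of the same model."""
    def render(self, model):
        return "#" * int(model.celsius)


class TableView:
    """A tabular view of the same model."""
    def render(self, model):
        return "| celsius | {} |".format(model.celsius)


reading = TemperatureModel(5)
print(BarChartView().render(reading))  # prints "#####"
print(TableView().render(reading))     # prints "| celsius | 5 |"
```

Testability follows from the same separation: `TemperatureModel` can be constructed and checked in a unit test with no view present at all.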
Disadvantages
The disadvantages of MVC can generally be categorized as overhead for incorrectly factored
software.
Code navigability – The framework navigation can be complex because it introduces new
layers of indirection and requires users to adapt to the decomposition criteria of MVC.
Multi-artifact consistency – Decomposing a feature into three artifacts causes scattering,
requiring developers to maintain the consistency of multiple representations at once.
Undermined by inevitable clustering – Applications tend to have heavy interaction
between what the user sees and what the user uses. Therefore, each feature's computation
and state tends to get clustered into one of the three program parts, erasing the purported
advantages of MVC.
Excessive boilerplate – Due to the application computation and state being typically
clustered into one of the 3 parts, the other parts degenerate into either boilerplate shims
or code-behind[16] that exists only to satisfy the MVC pattern.
Pronounced learning curve – Knowledge of multiple technologies becomes the norm;
developers using MVC need to be skilled in several technologies at once.
Lack of incremental benefit – UI applications are already factored into components, and
already achieve code reuse and independence via the component architecture, leaving no
incremental benefit to MVC.
16. Explain in brief the N-Tier Client Server Pattern of software communication.
Also explain the five quality attributes and related issues in N-Tier client server
model.
Great products are often built on multi-tier architecture – or n-tier architecture, as it’s often
called. At Stackify, we love to talk about the many tools, resources, and concepts that can help
you build better. So in this post, we’ll discuss n-tier architecture, how it works, and what you
need to know to build better products using multi-tier architecture.
Definition of N-Tier Architecture
N-tier architecture is also called multi-tier architecture because the software is engineered to
have the processing, data management, and presentation functions physically and logically
separated. That means that these different functions are hosted on several machines or clusters,
ensuring that services are provided without resources being shared and, as such, these services
are delivered at top capacity. The “N” in n-tier architecture stands for the number of
separate tiers in the architecture, which can be any number from 1 upwards.
Not only does your software gain from being able to get services at the best possible rate, but it’s
also easier to manage. This is because when you work on one section, the changes you make will
not affect the other functions. And if there is a problem, you can easily pinpoint where it
originates.
A More In-Depth Look at N-Tier Architecture
N-tier architecture typically involves dividing an application into three different tiers:
1. the logic tier,
2. the presentation tier, and
3. the data tier.
The separate physical location of these tiers is what differentiates n-tier architecture from the
model-view-controller framework that only separates presentation, logic, and data tiers in
concept. N-tier architecture also differs from MVC framework in that the former has a middle
layer or a logic tier, which facilitates all communications between the different tiers. When you
use the MVC framework, the interaction that happens is triangular; instead of going through the
logic tier, it is the control layer that accesses the model and view layers, while the model layer
accesses the view layer. Additionally, the control layer makes a model using the requirements
and then pushes that model into the view layer.
This is not to say that you must choose between the MVC framework and the n-tier
architecture. A lot of software brings the two together. For instance, you can use the
n-tier architecture as the overall architecture and apply the MVC framework within the
presentation tier.
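The linear routing described above, where every request passes through the logic tier, can be sketched as plain function calls (the tier functions and the `trace` list are invented for the example; in a real deployment each tier would sit on its own machine, with network calls in place of function calls):

```python
trace = []

# Data tier: would normally live on a separate database server
def data_tier(key):
    trace.append("data")
    return {"greeting": "hello"}[key]

# Logic tier: the mandatory middle layer every request passes through
def logic_tier(key):
    trace.append("logic")
    return data_tier(key).upper()

# Presentation tier: never talks to the data tier directly
def presentation_tier(key):
    trace.append("presentation")
    return logic_tier(key)

result = presentation_tier("greeting")
print(result)  # prints "HELLO"
print(trace)   # prints ['presentation', 'logic', 'data']
```

In the triangular MVC interaction, by contrast, the controller would be free to reach the model and the view directly rather than funnelling everything through one middle layer.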
What Are the Benefits of N-Tier Architecture?
There are several benefits to using n-tier architecture for your software. These are scalability,
ease of management, flexibility, and security.
Secure: You can secure each of the three tiers separately using different methods.
Easy to manage: You can manage each tier separately, adding or modifying each tier
without affecting the other tiers.
Scalable: If you need to add more resources, you can do it per tier, without affecting the
other tiers.
Flexible: Apart from isolated scalability, you can also expand each tier in any manner
that your requirements dictate.
In short, with n-tier architecture, you can adopt new technologies and add more components
without having to rewrite the entire application or redesign your whole software, thus making
it easier to scale or maintain. Meanwhile, in terms of security, you can store sensitive or
confidential information in the logic tier, keeping it away from the presentation tier, thus making
it more secure.
Other benefits include:
More efficient development. N-tier architecture is very friendly for development, as
different teams may work on each tier. This way, you can be sure the design and
presentation professionals work on the presentation tier and the database experts work on
the data tier.
Easy to add new features. If you want to introduce a new feature, you can add it to the
appropriate tier without affecting the other tiers.
Easy to reuse. Because the application is divided into independent tiers, you can easily
reuse each tier for other software projects. For instance, if you want to use the same
program, but for a different data set, you can just replicate the logic and presentation tiers
and then create a new data tier.
How It Works and Examples of N-Tier Architecture
When it comes to n-tier architecture, a three-tier architecture is fairly common. In this setup, you
have the presentation or GUI tier, the data layer, and the application logic tier.
The application logic tier. The application logic tier is where all the “thinking” happens, and it
knows what is allowed by your application and what is possible, and it makes other decisions.
This logic tier is also the one that writes and reads data into the data tier.
The data tier. The data tier is where all the data used in your application are stored. You can
securely store data on this tier, do transactions, and even search through volumes and volumes of
data in a matter of seconds.
The presentation tier. The presentation tier is the user interface. This is what the software user
sees and interacts with. This is where they enter the needed information. This tier also acts as a
go-between for the data tier and the user, passing on the user’s different actions to the logic tier.
Just imagine surfing on your favorite website. The presentation tier is the Web application that
you see. It is shown on a Web browser you access from your computer, and it has the CSS,
JavaScript, and HTML codes that allow you to make sense of the Web application. If you need
to log in, the presentation tier will show you boxes for username, password, and the submit
button. After filling out and then submitting the form, all that will be passed on to the logic tier.
The logic tier will have the JSP, Java Servlets, Ruby, PHP and other programs. The logic tier
would be run on a Web server. And in this example, the data tier would be some sort of
database, such as a MySQL, NoSQL, or PostgreSQL database. All of these are run on a separate
database server. Rich Internet applications and mobile apps also follow the same three-tier
architecture.
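The login example above can be sketched in Python, with the same caveat that a single process here stands in for three separate machines, plain function calls stand in for network hops, and the credentials are invented:

```python
# Data tier: stand-in for a database server
USERS = {"alice": "secret"}

def data_lookup(username):
    return USERS.get(username)

# Logic tier: would run on the web/application server
def authenticate(username, password):
    stored = data_lookup(username)
    return stored is not None and stored == password

# Presentation tier: receives the submitted login form
def handle_login(form):
    if authenticate(form["username"], form["password"]):
        return "Welcome!"
    return "Login failed"

print(handle_login({"username": "alice", "password": "secret"}))  # prints "Welcome!"
print(handle_login({"username": "alice", "password": "wrong"}))   # prints "Login failed"
```

The presentation tier never reads `USERS` directly; everything it learns about the data comes back through the logic tier, which is exactly the separation the three-tier model enforces.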
And there are n-tier architecture models that have more than three tiers. Examples are
applications that have these tiers:
Services – such as print, directory, or database services.
Business domain – the tier that would host Java, DCOM, CORBA, and other application
server objects.
Presentation tier.
Client tier – or the thin clients.
One good instance is when you have an enterprise service-oriented architecture. The enterprise
service bus or ESB would be there as a separate tier to facilitate the communication of the basic
service tier and the business domain tier.
Building applications out of tiers or layers offers a broad solution that developers generally find
easy to understand. It promises a generic approach that can be applied to every use case and it
fits neatly on a single PowerPoint slide. The problem is that it is too rigid a model to address the
more flexible demands of larger, more distributed systems.
The evolving data challenge
Tiered architecture originally emerged as a means of scaling from client-server applications to
internet-based solutions that could support hundreds of thousands of users. By placing a load-
balanced presentation tier on top of processing logic you were able to handle peak load more
effectively, provide a higher degree of resilience, re-use some code and make changes more
quickly.
It worked, which is why it became so popular. However, it’s too inflexible to be an
effective means of scaling for more modern low-latency applications where the data volumes are
exponentially higher.
Tiered architecture is based on the fallacy that design can somehow be separated from
deployment. This just does not work out in practice as a design based on layers says nothing
about how processing should be distributed. Every request tends to follow the same route on its
way to and from the database. The interfaces between these layers tend to be fairly chatty, with
data being passed around in small chunks. This does not lend itself to remote invocation,
so layered applications often come unstuck when you try to distribute processing.
The end result is applications that are orientated around a centralised database server.
Processing tends to be very inefficient, particularly if your tiers are running in separate
environments. If you look at the actual work going on you may find that the majority of
processing involves remote calls and data transformations rather than serving up business
functionality.
The inflexibility of a generic solution
Tiered architecture presents a single abstract solution that tends to be applied in every use case.
This is too much of a generalisation as a generic solution will struggle to adapt to different
scaling and processing requirements. There will be times when all those layers feel like overkill
while complex, long-running operations may require more involved infrastructure to manage.
Dividing a system into rigid tiers tends to undermine flexibility. For example, a tiered design
may dictate that validation always happens in the middle tier, when there’s nothing wrong
with deploying the same validation logic in both the presentation and middle tiers in simpler
cases. More data-intensive logic may even be better situated closer to the data store. The point is
that a solution should meet specific processing needs rather than conforming to an arbitrary
abstraction.
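A minimal sketch of that idea, with an invented `is_valid_quantity` rule deployed in both the presentation and middle tiers rather than confined to one:

```python
# A single validation rule, deployed in more than one tier
def is_valid_quantity(value):
    return isinstance(value, int) and 0 < value <= 100

# Middle tier: must re-validate, because clients can be bypassed
def middle_tier_submit(value):
    if not is_valid_quantity(value):
        raise ValueError("invalid quantity")
    return "stored " + str(value)

# Presentation tier: rejects bad input early, saving a round trip
def presentation_submit(value):
    if not is_valid_quantity(value):
        return "error: please fix the form"
    return middle_tier_submit(value)

print(presentation_submit(5))    # prints "stored 5"
print(presentation_submit(500))  # prints "error: please fix the form"
```

Running the check in the presentation tier improves responsiveness, while keeping it in the middle tier preserves correctness; a design that rigidly assigns validation to one tier forfeits one of those benefits.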
A single processing route is likely to be too inflexible for most complex systems. You may want
to partition your data and processes to make it easier to optimise specific areas separately. Data
could also be brought closer to the presentation tier through caching mechanisms to reduce the
distance that requests have to travel. None of this can be achieved easily through rigid tiers that
cut across all your data and processes.
Defending the boundaries
Perhaps my biggest concern with tiered architecture is the separation of concerns. This is
often an issue with layered or tiered systems, but it does take a while to manifest.
The generic nature of components in a tiered application can make it difficult to define and
defend clear abstractions. Tiers or layers tend to be demarcated by their technical role rather
than business functionality, which can make it easy for logic to bleed between components. Over time
small functional changes will be introduced into each layer by time-pressured developers
needing somewhere convenient to add fixes.
After a while it becomes impossible to tell where things are going wrong and minor feature
requests necessitate code changes in every layer. De-coupling is never achieved and this
becomes particularly acute once you start trying to add new applications in. Anti-pattern clichés
such as the “big ball of mud” and “shotgun surgery” become everyday realities.
17. Explain the concepts of components, connectors, data, and topology provided
in Lunar Lander Architectural style case study.
Design elements:
• Components: objects (data and associated operations)
• Connectors: method invocations
• Data: arguments passed to methods
• Topology: can vary arbitrarily; data and interfaces can be shared through inheritance
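A toy sketch of these design elements in the object-oriented style: the object is the component (data plus associated operations), calling a method is the connector, and the argument is the data. The `Lander` class and its physics constants are invented for illustration, not taken from the case study:

```python
class Lander:
    """Component: an object bundling data (state) with operations."""
    def __init__(self, altitude=100.0, fuel=50.0, velocity=0.0):
        self.altitude = altitude
        self.fuel = fuel
        self.velocity = velocity

    def burn(self, amount):
        """Connector: a method invocation; `amount` is the data
        passed as an argument. (Thrust constant is made up.)"""
        used = min(amount, self.fuel)
        self.fuel -= used
        self.velocity -= used * 0.5
        return used

    def tick(self):
        """Advance one time step. (Gravity constant is made up.)"""
        self.velocity += 1.6
        self.altitude -= self.velocity


lander = Lander()
lander.burn(10)  # fuel 50 -> 40, velocity 0 -> -5.0
lander.tick()    # velocity -5.0 -> -3.4, altitude 100 -> 103.4
```

The topology can vary arbitrarily: another object, say a `Display` subclass inheriting from `Lander`, could share its data and interface through inheritance, as the last bullet notes.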