
A virtualization architecture is a conceptual model of a virtual infrastructure that is most frequently applied in cloud computing. Virtualization itself is the process of creating and delivering a virtual rather than a physical version of something. This could be a desktop, an operating system (OS), a server, a storage device or network resources.
The architecture clearly specifies the arrangement and
interrelationships among the particular components in
the virtual environment. In a virtualization
architecture, specialized software is used to create a
virtual version of a computing resource. This eliminates
the need to re-create an actual version of that
resource. A logical name is assigned to the resource
and a pointer is provided to that resource on demand.
As a result, multiple OSes and applications can run on
the same machine and multiple users (or
organizations) can share a single physical instance of a
resource or application at the same time.
The virtualization architecture is a visual depiction or
model of virtualization. It maps out and describes the
various virtual elements in the ecosystem, including
the following:
• application virtual services
• infrastructure virtual services
• virtual OS
• hypervisor
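To make the "logical name plus pointer" idea above concrete, here is a toy sketch of how a virtualization layer might map logical resource names to underlying physical resources. The class, names and mappings are illustrative assumptions, not a real hypervisor API.

```python
# Toy sketch of the "logical name + pointer" idea behind virtualization.
# Illustration only; not a real hypervisor or virtualization API.

class VirtualResourcePool:
    def __init__(self):
        self._resources = {}  # logical name -> underlying physical resource

    def register(self, logical_name, physical_resource):
        # The virtualization layer assigns a logical name to a resource.
        self._resources[logical_name] = physical_resource

    def acquire(self, logical_name):
        # A pointer (reference) to the resource is provided on demand,
        # so multiple users can share the same physical instance.
        return self._resources[logical_name]

pool = VirtualResourcePool()
pool.register("vm-disk-01", "/dev/sda3")           # hypothetical mapping
pool.register("tenant-a-db", "db-server-17:5432")  # hypothetical mapping
print(pool.acquire("vm-disk-01"))
```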

Architecture Visualization is the graphical representation of an architecture model or an architecture view of a model.
This can be done on paper or in print (physical) or on a computer screen (digital). A video, animation or maquette can also serve as a form of Architecture Visualization.
Usage
Where Architecture is the total concept, an Architecture Visualization shows the way the concepts work when applied to a structure. Architecture Visualizations let us see the difference between the theoretical principles of concepts and the practical application of those principles in an organization.
Four Main Types of 2D Architecture Visualization Graphics
In general, there are four types of 2D architecture visualization graphics:
• Sketch - such as informal design sketches
• Drawing - such as informal principle detail drawings
• Diagram - such as formal process diagrams and application diagrams
• Photographic image - such as architecture photos and artist impressions
Remote Procedure Call is a technique for building
distributed systems. Basically, it allows a program on
one machine to call a subroutine on another machine
without knowing that it is remote. RPC is not a
transport protocol: rather, it is a method of using
existing communications features in a transparent way.
This transparency is one of the great strengths of RPC
as a tool. Because the application software does not
contain any communication code, it is independent of:
• the particular communications hardware and protocols used
• the operating system used
• the calling sequence needed to use the underlying communications software
This means that application software can be designed
and written before these choices have even been
made. Because it takes care of any data reformatting
needed, RPC also provides transparency to byte
ordering and differences in data representation (real
number formats, etc.). RPC is not a new technique.
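To make the idea concrete, here is a minimal sketch of an RPC exchange using Python's standard-library xmlrpc modules. The function name, port and host are illustrative choices, not something from the text; the point is that the client-side call looks like an ordinary local call.

```python
# rpc_server.py - expose a local subroutine so remote callers can invoke it.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    # Ordinary local function; the RPC layer makes it callable remotely.
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")
server.serve_forever()
```

```python
# rpc_client.py - the remote call reads like a normal function call;
# marshalling, byte ordering and transport are handled by the RPC layer.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))  # prints 5
```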
MACH architecture is a set of technology principles behind new, best-of-breed technology platforms. The acronym stands for Microservices-based, API-first, Cloud-native, and Headless:
• Microservices: Individual pieces of business functionality that are independently developed, deployed and managed.
• API-first: All functionality is exposed through an API, making it possible to tie together two or more applications or services.
• Cloud-native SaaS: Software-as-a-Service that leverages the full capabilities of the cloud, beyond storage and hosting, including elastic scaling of highly available resources. Functionality is updated automatically, eliminating the need for upgrade management.

• Headless: The front-end user experience is completely decoupled from the back-end logic, allowing for complete design freedom in creating the user interface and for connecting to other channels and devices (i.e. existing applications, IoT, AR, vending machines).
While it's a relatively new term in the industry, MACH is quickly gaining popularity for how it helps businesses. MACH technologies support a composable enterprise, meaning every component is pluggable, scalable, replaceable, and can be continuously improved. MACH architecture gives businesses the freedom to choose from the best tools on the market, and to maintain a structure that makes it easy to add, replace, or remove those tools in the future.
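As a small sketch of the API-first, headless idea from the list above, the snippet below shows a front end that only talks to the back end over an HTTP/JSON API, so any channel (web, mobile, kiosk, IoT) could reuse the same call. The endpoint URL and response fields are hypothetical placeholders, not a real service.

```python
# Headless / API-first sketch: the UI layer only consumes an HTTP + JSON API,
# never shared templates or databases. URL and fields are hypothetical.
import json
import urllib.request

API_URL = "https://api.example.com/v1/products/42"  # hypothetical endpoint

def fetch_product(url: str) -> dict:
    # Any front end (web, mobile app, kiosk, IoT device) can reuse this call.
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    product = fetch_product(API_URL)
    # The front end decides how to render; the back end only supplies data.
    print(product.get("name"), product.get("price"))
```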

What are the benefits of MACH architecture?

Moving from monolithic or suite-based technology to MACH architecture gives you the freedom to choose from the best tools on the market today, and provides a structure that makes it easy to add, replace, or remove technologies in the future. Put simply, MACH architecture allows you to break the replatform cycle once and for all.
In addition to avoiding another instance of being handcuffed by outdated technology and the inability to innovate and evolve, here are four more benefits of MACH architecture:
• Improved speed with less risk
• Execute a best-of-breed strategy
• Say goodbye to upgrades
• Seamless customizations and innovation
NFC stands for Near Field Communication. It enables
short range communication between compatible
devices. At least one transmitting device and another
receiving device is needed to transmit the signal. Many
devices can use the NFC standard and are considered
either passive or active.
So NFC devices can be classified into 2 types:
Passive NFC devices –
These include tags, and other small transmitters which
can send information to other NFC devices without the
need for a power source of their own. These devices
don’t really process any information sent from other
sources, and can not connect to other passive
components.
Active NFC devices –
These devices are able to do both things, i.e. send and receive data. They can communicate with each other as well as with passive devices. The mode most commonly used in smartphones is the peer-to-peer mode. It allows the exchange of various pieces of information between two devices. In this mode both devices switch between active when sending data and passive when receiving.

The second mode, i.e. read/write mode, is a one-way data transmission. The active device, possibly your smartphone, links up with another device in order to read information from it. NFC advertisement tags use this mode.

The third mode of operation is card emulation. The NFC device can function as a smart or contactless credit card and make payments or tap into public transport systems.
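The toy Python model below is only meant to make the passive/active distinction and the peer-to-peer and read/write modes explicit; it is not a real NFC stack or driver API, and the class names and payloads are invented for illustration.

```python
# Toy model of NFC roles and modes; illustrative sketch only, not a real NFC stack.
from dataclasses import dataclass

@dataclass
class PassiveTag:
    payload: str  # powered by the reader's field; it only stores data

    def respond(self) -> str:
        return self.payload

class ActiveDevice:
    def __init__(self, name: str):
        self.name = name

    def read_tag(self, tag: PassiveTag) -> str:
        # Read/write mode: one-way transfer from the tag to the active device.
        return tag.respond()

    def peer_to_peer(self, other: "ActiveDevice", message: str) -> str:
        # Peer-to-peer mode: each side alternates between sending (active)
        # and receiving (passive).
        print(f"{self.name} -> {other.name}: {message}")
        return f"ack from {other.name}"

phone = ActiveDevice("phone")
poster_tag = PassiveTag("https://example.com/ad")  # hypothetical tag content
print(phone.read_tag(poster_tag))
print(phone.peer_to_peer(ActiveDevice("terminal"), "pairing info"))
```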
Consistency Models
1) Strict Consistency model
With strict consistency, all writes are visible immediately to all processes: if a process performs a write operation, it should be visible at all replicas immediately.
2) Weak Consistency model
It is not necessary to show the changes in memory made by every write operation to other processes immediately. The results of write operations can be combined and sent to processes when they need them.
3) Release Consistency model
All changes made to memory by a process are migrated to the other nodes when it releases a synchronization variable, and all changes made to memory by other processes are migrated from the other nodes to the process when it acquires one.
4) Causal Consistency model
All processes see potentially related (causally related) operations in the same order. A memory operation is causally related to another memory operation if the second operation might have been affected by the first.
Code migration is the movement of programming code
from one system to another. There are three distinct
levels of code migration with increasing complexity,
cost and risk. Simple migration involves the movement from one version of a language to a newer version. A second, more
complicated level of migration involves moving to a
different programming language. Migrating to an
entirely new platform or operating system is the most
complex type of migration. The first type of code
migration is a simple movement from one version of a
language to a newer, but syntactically different
version. This is the easiest of migration routes as the
basic structure and much of the programming
constructs usually do not change. In many cases, the old code would actually work, but the program can often be improved by retooling the code to use the new and improved routines or modularization offered by the newer version of the language. Therefore migrating the code would lead to
more efficiency in execution. The second level of code
migration would be migrating to a completely different
programming language. This could be caused by
porting to a new software system or implementing a
different relational database management system
(RDBMS). This type of migration often requires that
programmers learn an entirely new language, or new
programmers be brought in to assist with the
migration. In this case, the entire program must be
rewritten from the ground up. Even though most of
the constructs are likely to exist in both languages, the
precise syntax is usually completely different. The most
complex example of code migration is migrating to an
entirely new platform and/or operating system (OS).
This not only changes the programming language, but
also the machine code behind the language. While
most modern programming languages shield the
programmer from this low level code, knowledge of
the OS and how it operates is essential to producing
code that is efficient and executes as expected.
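As a small illustration of the first, syntax-level kind of migration, here is how a line of Python 2 code would be retooled for Python 3. The example is an assumption of mine for illustration, not taken from the text above.

```python
# Example of level-one code migration: same language, newer version.
# In Python 2 the old code read:
#     print "total: %d" % total       # print statement, %-formatting
# The migrated Python 3 version keeps the structure but updates the syntax:
total = 7
print(f"total: {total}")  # print() is now a function; f-strings replace %-formatting
```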

A distributed system contains multiple nodes that are physically
separate but linked together using
the network. All the nodes in this
system communicate with each other
and handle processes in tandem.
Each of these nodes contains a small
part of the distributed operating
system software.
[Diagram: multiple nodes linked together over a network to form a distributed system]

Types of Distributed
Systems
The nodes in the distributed systems
can be arranged in the form of
client/server systems or peer to peer
systems. Details about these are as
follows −

Client/Server Systems
In client server systems, the client
requests a resource and the server
provides that resource. A server may
serve multiple clients at the same
time while a client is in contact with
only one server. Both the client and
server usually communicate via a
computer network and so they are a
part of distributed systems.
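Below is a minimal sketch of the client/server pattern using Python's standard socket module: the client requests a resource and the server provides it over the network. The port number and message format are arbitrary choices for illustration.

```python
# Minimal client/server sketch with Python's standard-library sockets.
import socket
import threading

ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("localhost", 9009))   # arbitrary port for the sketch
        srv.listen()
        ready.set()                     # tell the client the server is listening
        conn, _ = srv.accept()          # serve a single client in this sketch
        with conn:
            request = conn.recv(1024)                 # client requests a resource
            conn.sendall(b"resource for " + request)  # server provides it

threading.Thread(target=server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("localhost", 9009))
    cli.sendall(b"client-1")
    print(cli.recv(1024).decode())      # "resource for client-1"
```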

Peer to Peer Systems


Peer to peer systems contain nodes that are equal participants in data sharing. All the tasks are equally divided between all the nodes. The nodes interact with each other as required and share resources.
This is done with the help of a
network.

Advantages of
Distributed Systems
Some advantages of Distributed
Systems are as follows −

 All the nodes in the distributed system are connected to each other. So nodes can easily share data with other nodes.
 More nodes can easily be added to the
distributed system i.e. it can be scaled
as required.
 Failure of one node does not lead to
the failure of the entire distributed
system. Other nodes can still
communicate with each other.
 Resources like printers can be shared
with multiple nodes rather than being
restricted to just one.
Disadvantages of
Distributed Systems
Some disadvantages of Distributed
Systems are as follows −

 It is difficult to provide adequate security in distributed systems because the nodes as well as the connections need to be secured.
 Some messages and data can be lost in
the network while moving from one
node to another.
 The database connected to the
distributed systems is quite
complicated and difficult to handle as
compared to a single user system.
 Overloading may occur in the network
if all the nodes of the distributed
system try to send data at once.
