
NAME-B SHRUTI

REG NO-17BCE1299

VIRTUALIZATION

DIGITAL ASSIGNMENT-1

1. Discuss what is JIT (Just in Time) compiler. How it is used to realize JVM in Java environment. Will it aid realizing virtualization in system?

JIT stands for Just In Time. The JIT compiler is a program that converts Java bytecode into processor-level instructions. It runs after the program has begun, compiling the bytecode while the program is running into a faster, native processor instruction set. After you have finished writing a Java program, the Java compiler compiles the source code into bytecode; the JIT compiler then converts that bytecode into processor-level instructions, effectively acting as a second compiler. The JIT compiler runs concurrently with the execution of the program, compiling the bytecode into platform-specific executable code that is executed immediately. Once the code has been recompiled by the JIT compiler, it runs relatively quickly on the system. The JIT compiler is an integral component of the Java Virtual Machine, along with the garbage collector, and, as the name suggests, performs just-in-time compilation. It can compile Java code directly into machine language, which can greatly improve the performance of a Java application. JIT is used to realize the JVM in the Java environment as follows:

JVM (“Java Virtual Machine”) has two meanings:

It is an abstract instruction set designed to run Java Programs. This instruction set
defines a relatively straightforward push-down stack machine to which Java source
programs are compiled; these instructions are stored in “.class” files.
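For instance, here is a minimal sketch of this stack-machine style. Compiling the class below with javac and disassembling it with `javap -c Adder` shows bytecode that pushes both operands onto the operand stack (iload_0, iload_1), adds them (iadd), and returns the top of the stack (ireturn).

```java
// A minimal sketch of code whose compiled bytecode illustrates the
// JVM's push-down stack machine: operands are pushed onto an operand
// stack, combined there, and the result is returned from the stack.
public class Adder {
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
```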

It is a program that runs on a real computer and can execute the abstract JVM instructions. Such a JVM includes an “interpreter” for the individual JVM instructions, but also all the supporting machinery required to execute Java code: arithmetic, function calls, storage allocation, garbage collection, thread scheduling, class file loaders, file I/O and other access to the local operating system as needed to run complex Java applications.

Older JVMs literally interpreted the JVM instructions one by one at runtime. This was done because it is easy to implement. But like any interpreter, such run-time interpretation produces program execution times that are typically an order of magnitude slower than native compiled machine code.

A “JIT compiler” (“JITter”) is a feature of most modern JVMs that compiles chunks of JVM instructions to native machine code as new, or previously encountered, chunks of JVM code are met during execution. One could use an offline Java-to-native-code compiler (some actually exist) for traditional batch compilation, but that isn’t the way the Java world went.
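As a rough illustration of this chunk-by-chunk compilation (a sketch assuming a HotSpot-style JVM; the exact invocation thresholds are implementation-specific), a method that is called many times becomes “hot” and gets compiled to native code. Running the program with `java -XX:+PrintCompilation HotMethodDemo` prints a line when that compilation happens.

```java
// HotMethodDemo: hotLoop is interpreted at first; after enough
// invocations a JIT-enabled JVM compiles it to native code.
public class HotMethodDemo {

    // A simple arithmetic method that becomes "hot" when called often.
    static long hotLoop(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long result = 0;
        for (int i = 0; i < 20_000; i++) { // make the method hot
            result = hotLoop(1_000);
        }
        System.out.println(result); // sum of 0..999 = 499500
    }
}
```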

JITters have one advantage over batch compilers: they can compile code that matches what actually happens at runtime, and can thus optimize the dynamic properties of Java code, such as the dynamic dispatch that occurs in overridden Java method calls. They have a disadvantage: they do not see the program at large scale, and generally cannot perform the global optimizations that good batch compilers do well. The tradeoff, in practice, is that JIT-compiled Java programs run pretty well, but not as fast as C or C++ programs compiled by traditional methods.
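To make the dynamic-dispatch point concrete, here is a hypothetical sketch (Shape, Circle and totalArea are illustrative names, not from any library): at the virtual call site below, a JIT can observe at run time that only Circle objects ever arrive and speculatively inline Circle.area(), which a batch compiler cannot safely do without whole-program analysis.

```java
// Sketch of a virtual call site a JIT can devirtualize. If run-time
// profiling shows s.area() only ever dispatches to Circle.area(),
// the JIT can inline it behind a cheap type check.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    final double r;
    Circle(double r) { this.r = r; }
    @Override
    double area() { return Math.PI * r * r; }
}

public class DispatchDemo {
    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area(); // virtual call; monomorphic here at run time
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalArea(new Shape[] { new Circle(1.0), new Circle(2.0) }));
    }
}
```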

The Just-In-Time compiler is one of the integral parts of the Java Runtime Environment. It is mainly responsible for performance optimization of Java-based applications at run time or execution time. In general, the main goal of the compiler is to increase the performance of an application for both the end user and the application developer.

Deep Dive into JIT in Java

 Bytecode is the key to Java’s WORA (Write Once, Run Anywhere) environment. The speed of a Java application depends on the way the bytecode gets converted to native machine code: the bytecode can be interpreted, compiled to native code, or directly executed on the processor. If the bytecode is interpreted, however, it directly affects the speed of the application.
 In order to speed up performance, the JIT compiler communicates with the JVM at execution time to compile bytecode sequences into native machine code. With the JIT compiler, the native code is executed by the hardware far more efficiently than bytecode by the JVM interpreter, so there is a large gain in execution speed.
 When the JIT compiler compiles a series of bytecode, it also performs optimizations such as data-flow analysis, translation from stack operations to register operations, and elimination of common subexpressions. This makes Java very efficient when it comes to execution and performance.
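The subexpression elimination mentioned above can be sketched in source-level terms. Note this is illustrative only: the JIT performs the transformation on bytecode and machine code, not on Java source; the pair of methods below just spells out the idea.

```java
// Illustration of common subexpression elimination. In before(),
// (a * b) is written twice; an optimizing JIT computes it once and
// reuses the value, effectively producing what after() spells out.
public class CseDemo {
    static int before(int a, int b) {
        return (a * b) + (a * b); // redundant subexpression
    }

    static int after(int a, int b) {
        int t = a * b;            // computed once and reused
        return t + t;
    }

    public static void main(String[] args) {
        System.out.println(before(3, 4) == after(3, 4)); // prints true
    }
}
```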

Now that you know the fundamentals of JIT Compiler, let’s move further and
understand its working.


Working of JIT Compiler in Java

The JIT compiler speeds up the performance of Java applications at run time. As Java is object-oriented, a program comprises classes and objects, and it is compiled to a bytecode that is platform independent and executed by the JVM across diverse architectures.

Work Flow:

The diagram below depicts how compilation actually takes place in the Java Runtime Environment.

1. When you code a Java program, the JRE uses the javac compiler to compile the high-level source code to bytecode. The JVM then loads the bytecode at run time and converts it into machine-level binary code for execution using the interpreter.
2. As mentioned above, interpretation of Java bytecode reduces performance compared to a native application. That is where the JIT compiler boosts performance, by compiling the bytecode into native machine code “just in time” to run.
3. The JIT compiler is activated, and enabled by default, when a method is invoked in Java. Once a method has been compiled, the Java Virtual Machine invokes the compiled code of the method directly instead of interpreting it. This reduces memory usage and processor time, which speeds up the performance of the Java application.

So, that’s how it works. Now let’s dive deeper into this article and understand the
security aspects of JIT Compiler in Java.

Security Aspects of JIT in Java

The JIT compiler compiles bytecode into machine code directly in memory: it feeds the machine code straight into memory and executes it, without storing the machine code on disk before invoking the class file and executing it. For this to work, the memory must be marked as executable. For security reasons, it should be marked executable only after the code has been written into it, and it should then be marked read-only, since memory that is both writable and executable is a security hole.

Now, let’s move further and look at the pros and cons of the Just-In-Time compiler in Java.

Pros and Cons of JIT in Java

Pros:

1. The Java code that you wrote years before will run faster even today, and that improves the performance of Java programs.
2. Native images also execute faster, as they do not have start-up activities and require less memory.
3. Code that is just-in-time compiled has some significant advantages over compiled unmanaged code:
   1. Reduced memory usage – only those methods that are actually used get compiled.
   2. Good locality of reference – code that is used together will often be in the same page of memory, preventing expensive page faults.
   3. Cross-assembly inlining – methods from other DLLs, including the .NET Framework, can be inlined into your own application, which can be a significant saving.
4. There is also a benefit of hardware-specific optimisations, but in practice there are only a few actual optimisations for specific platforms. However, it is becoming increasingly possible to target multiple platforms (e.g. Any CPU) with the same piece of code, and it is likely we will see more aggressive platform-specific optimisations in the future.
5. Most code optimisations in .NET do not take place in the language compiler (the transformation from C#/VB.NET to IL); rather, they occur on the fly in the JIT compiler.
Cons:

1. It increases the complexity of Java programs.
2. Programs with little code do not benefit from just-in-time compilation.

2. Discuss in detail about process VM and system VM with 4 commercially available products in each class.

Virtual machines divide into two broad categories: the system VM (also known as a hardware virtual machine) and the process VM (also known as an application virtual machine). The categorization is based on their usage and level of correspondence to the associated physical machine. A system VM simulates the complete system hardware stack and supports the execution of a complete operating system. A process VM, on the other hand, adds a layer over an operating system that simulates the programming environment for the execution of an individual process. Virtual machines are used to share out and assign appropriate system resources to software (which could be multiple operating systems or an application), and that software is limited to the resources provided by the VM. The Virtual Machine Monitor (also known as the hypervisor) is the actual software layer that provides the virtualization. Hypervisors are of two types, depending on their association with the underlying hardware: a hypervisor that takes direct control of the underlying hardware is known as a native or bare-metal hypervisor, while a hosted hypervisor is a distinct software layer that runs within an operating system and hence has an indirect association with the underlying hardware. The system VM abstracts an Instruction Set Architecture (ISA), which may differ slightly from that of the real hardware platform. The main advantages of a system VM include consolidation (it allows multiple operating systems to coexist on a single computer system with strong isolation from each other), application provisioning, maintenance, high availability and disaster recovery. Besides these, from a development perspective it allows sandboxing, faster reboot and better debugging access.

The application or process VM allows the normal execution of an application within the underlying operating system, supporting a single process. We can create multiple instances of a process VM to allow the execution of multiple applications associated with multiple processes. A process VM is created when its process begins and destroyed when the process terminates. The main purpose of a process VM is to provide platform independence (in terms of the programming environment), meaning it allows an application to execute in the same manner on any underlying hardware and software platform. In contrast to the system VM (where a low-level abstraction of the ISA is provided), the process VM abstracts a high-level programming language. A process VM is implemented using an interpreter; however, performance comparable to compiler-based programming languages is achieved through just-in-time compilation.

Two of the most popular examples of process VMs are the Java Virtual Machine (JVM) and the Common Language Runtime (CLR), used to virtualize the Java programming language and the .NET Framework programming environment respectively.
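As a small illustration of the platform independence a process VM provides, the same compiled .class file below reports whichever operating system and architecture it happens to be running on, with no recompilation:

```java
// PlatformDemo: one .class file, many platforms. The JVM (a process
// VM) supplies these system properties for whatever host it runs on.
public class PlatformDemo {
    public static void main(String[] args) {
        System.out.println("OS:   " + System.getProperty("os.name"));
        System.out.println("Arch: " + System.getProperty("os.arch"));
        System.out.println("Java: " + System.getProperty("java.version"));
    }
}
```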

HYPER-V

Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a
software version of a computer, called a virtual machine. Each virtual machine acts
like a complete computer, running an operating system and programs. When you
need computing resources, virtual machines give you more flexibility, help save
time and money, and are a more efficient way to use hardware than just running
one operating system on physical hardware.

Hyper-V runs each virtual machine in its own isolated space, which means you can
run more than one virtual machine on the same hardware at the same time. You
might want to do this to avoid problems such as a crash affecting the other
workloads, or to give different people, groups or services access to different
systems.

Hyper-V can help you:

 Establish or expand a private cloud environment. Provide more flexible, on-demand IT services by moving to or expanding your use of shared resources and adjust utilization as demand changes.
 Use your hardware more effectively. Consolidate servers and workloads
onto fewer, more powerful physical computers to use less power and physical
space.
 Improve business continuity. Minimize the impact of both scheduled and
unscheduled downtime of your workloads.
 Establish or expand a virtual desktop infrastructure (VDI). Using a centralized desktop strategy with VDI can help you increase business agility and data security, as well as simplify regulatory compliance and manage desktop operating systems and applications. Deploy Hyper-V and Remote Desktop Virtualization Host (RD Virtualization Host) on the same server to make personal virtual desktops or virtual desktop pools available to your users.
 Make development and test more efficient. Reproduce different computing
environments without having to buy or maintain all the hardware you'd need
if you only used physical systems.

Hyper-V has required parts that work together so you can create and run
virtual machines. Together, these parts are called the virtualization platform.
They're installed as a set when you install the Hyper-V role. The required
parts include Windows hypervisor, Hyper-V Virtual Machine Management
Service, the virtualization WMI provider, the virtual machine bus (VMbus),
virtualization service provider (VSP) and virtual infrastructure driver (VID).

Hyper-V offers many features. This is an overview, grouped by what the features
provide or help you do.

Computing environment - A Hyper-V virtual machine includes the same basic parts as a physical computer, such as memory, processor, storage, and networking. All these parts have features and options that you can configure in different ways to meet different needs. Storage and networking can each be considered categories of their own, because of the many ways you can configure them.

Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates
copies of virtual machines, intended to be stored in another physical location, so
you can restore the virtual machine from the copy. For backup, Hyper-V offers two
types. One uses saved states and the other uses Volume Shadow Copy Service
(VSS) so you can make application-consistent backups for programs that support
VSS.

Optimization - Each supported guest operating system has a customized set of services and drivers, called integration services, that make it easier to use the operating system in a Hyper-V virtual machine.

Portability - Features such as live migration, storage migration, and import/export make it easier to move or distribute a virtual machine.

Remote connectivity - Hyper-V includes Virtual Machine Connection, a remote connection tool for use with both Windows and Linux. Unlike Remote Desktop, this tool gives you console access, so you can see what's happening in the guest even when the operating system isn't booted yet.

Security - Secure boot and shielded virtual machines help protect against malware
and other unauthorized access to a virtual machine and its data.

KVM

Kernel-based Virtual Machine (KVM) is an open source virtualization technology built into Linux®. Specifically, KVM lets you turn Linux into a hypervisor that allows a host machine to run multiple, isolated virtual environments called guests or virtual machines (VMs).

KVM is part of Linux. If you’ve got Linux 2.6.20 or newer, you’ve got KVM.
KVM was first announced in 2006 and merged into the mainline Linux kernel
version a year later. Because KVM is part of existing Linux code, it immediately
benefits from every new Linux feature, fix, and advancement without additional
engineering.

KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need
some operating system-level components—such as a memory manager, process
scheduler, input/output (I/O) stack, device drivers, security manager, a network
stack, and more—to run VMs. KVM has all these components because it’s part of
the Linux kernel. Every VM is implemented as a regular Linux process, scheduled
by the standard Linux scheduler, with dedicated virtual hardware like a network
card, graphics adapter, CPU(s), memory, and disks.

KVM FEATURES

KVM is part of Linux. Linux is part of KVM. Everything Linux has, KVM has too.
But there are specific features that make KVM an enterprise’s preferred
hypervisor.

Security

KVM uses a combination of security-enhanced Linux (SELinux) and secure virtualization (sVirt) for enhanced VM security and isolation. SELinux establishes security boundaries around VMs. sVirt extends SELinux’s capabilities, allowing Mandatory Access Control (MAC) security to be applied to guest VMs and preventing manual labeling errors.
Storage

KVM is able to use any storage supported by Linux, including some local disks
and network-attached storage (NAS). Multipath I/O may be used to improve
storage and provide redundancy. KVM also supports shared file systems so VM
images may be shared by multiple hosts. Disk images support thin provisioning,
allocating storage on demand rather than all up front.

Hardware support

KVM can use a wide variety of certified Linux-supported hardware platforms.


Because hardware vendors regularly contribute to kernel development, the latest
hardware features are often rapidly adopted in the Linux kernel.
Memory management

KVM inherits the memory management features of Linux, including non-uniform memory access and kernel same-page merging. The memory of a VM can be swapped, backed by large volumes for better performance, and shared or backed by a disk file.

Live migration

KVM supports live migration, which is the ability to move a running VM between
physical hosts with no service interruption. The VM remains powered on, network
connections remain active, and applications continue to run while the VM is
relocated. KVM also saves a VM's current state so it can be stored and resumed
later.

Performance and scalability



KVM inherits the performance of Linux, scaling to match demand as the number of guest machines and requests increases. KVM allows the most demanding application workloads to be virtualized and is the basis for many enterprise virtualization setups, such as datacenters and private clouds.

Scheduling and resource control

In the KVM model, a VM is a Linux process, scheduled and managed by the kernel. The Linux scheduler allows fine-grained control of the resources allocated to a Linux process and guarantees a quality of service for a particular process. In KVM, this includes the completely fair scheduler, control groups, network namespaces, and real-time extensions.

Lower latency and higher prioritization

The Linux kernel features real-time extensions that allow VM-based apps to run at
lower latency with better prioritization (compared to bare metal). The kernel also
divides processes that require long computing times into smaller components,
which are then scheduled and processed accordingly.

vSphere

vSphere, the virtualization platform of VMware, is a set of products that not only
includes virtualization, but also management and interface layers.
It provides a number of key components including infrastructure services (vCompute,
vStorage, and vNetwork), application services, vCenter Server, vSphere Client, etc.

Features of VMware vSphere:


 vCenter Server: A centralized management tool used to configure, provision and
manage virtual IT environments.
 vSphere Client: vSphere 6.7 has the final version of the Flash-based vSphere Web Client. Newer workflows in the updated vSphere Client release include vSphere Update Manager, Content Library, vSAN, storage policies, host profiles, the VMware vSphere Distributed Switch™ topology diagram and licensing.
 vSphere SDKs: Provides interfaces for third-party solutions to access vSphere.
 VM File System: Cluster file system for VMs.
 Virtual SMP: Enables a single VM to use multiple physical processors at a time.
 vMotion: Enables live migration with transaction integrity.
 Storage vMotion: Enables VM file migration from one place to another without service interruption.
 High Availability: If one server fails, the VM is shifted to another server with spare capacity to enable business continuity.
 Distributed Resource Scheduler (DRS): Assigns and balances compute
automatically across hardware resources available for VMs.
 Fault Tolerance: Generates a copy of the primary VM to ensure its continuous availability.

 Distributed Switch (VDS): Spans multiple ESXi hosts and enables considerable
reduction of network maintenance activities.

XENSERVER

XenServer is an industry-leading open source virtualization platform. A value leader in the virtualization space, XenServer is an open source platform for cloud, server and desktop virtualization infrastructures. Organizations of any size can install XenServer in less than 10 minutes to virtualize even the most demanding workloads and automate management processes, thereby increasing IT flexibility and agility and lowering TCO. With a rich set of management and automation capabilities, a simple and affordable pricing model and optimizations for virtual desktops and cloud computing, XenServer is designed to optimize datacenters and clouds today and in the future.
Key XenServer benefits:
 Cloud-proven virtualization that is used by the world’s largest clouds, directly integrates with Citrix CloudPlatform and Apache™ CloudStack™, and is built on an open and resilient cloud architecture.
 Open source, community-driven virtualization from a strong community of users, ecosystem partners and industry contributors that accelerates innovation, feature richness and third-party integration.
 Value without compromise from a cost-effective, enterprise-ready, cloud-proven platform that is trusted to power the largest clouds and run mission-critical applications and large-scale desktop virtualization deployments.
 Virtualize any infrastructure, including clouds, servers and desktops, with a proven, high-performance platform.

Features of Citrix XenServer:


 Site Recovery
 Host Failure Protection
 Multi-server management
 Dynamic Memory Control
 Active Directory Integration
 Role Based Administration and Control (RBAC)
 Mixed Resource Pools with CPU Masking
 Distributed Virtual Switch Controller
 In Memory read caching
 Live VM migration & Storage XenMotion
