REG NO-17BCE1299
VIRTUALIZATION
DIGITAL ASSIGNMENT-1
JIT stands for Just-In-Time. The JIT compiler is a program that converts Java bytecode into processor-level instructions. It runs after the program has begun, compiling the bytecode, while the program is running, into a faster native instruction set. After you finish writing a Java program, the Java compiler compiles the source code into bytecode; the JIT compiler then converts that bytecode into processor-level instructions, acting as a second compiler. The JIT compiler runs concurrently with the execution of the program: it compiles the bytecode into platform-specific executable code that is executed immediately. Once the code has been recompiled by the JIT compiler, it runs comparatively quickly on the system. The JIT compiler is an integral component of the Java Virtual Machine, alongside the garbage collector, and, as the name suggests, performs just-in-time compilation. It can compile Java bytecode directly into machine language, which can greatly improve the performance of a Java application. In the Java environment, the term JVM refers to two things:
1. An abstract instruction set designed to run Java programs. This instruction set defines a relatively straightforward push-down stack machine to which Java source programs are compiled; these instructions are stored in ".class" files.
2. A program that runs on a real computer and can execute the abstract JVM instructions. Such a JVM includes an "interpreter" for the individual JVM instructions, but also all the supporting machinery required to execute Java code: arithmetic, function calls, storage allocation, garbage collection, thread scheduling, class-file loading, file I/O, and other access to the local operating system as needed to run complex Java applications.
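As a small illustration of the stack-machine instruction set described above (the class and method names here are hypothetical, chosen only for this sketch), the bytecode stored in a ".class" file can be inspected with the JDK's javap tool:

```java
// Adder.java - a minimal class whose compiled bytecode illustrates the
// push-down stack machine described above. (Names are illustrative.)
public class Adder {
    // javac compiles this method to bytecode roughly like:
    //   iload_1   // push first int argument onto the operand stack
    //   iload_2   // push second int argument
    //   iadd      // pop both, push their sum
    //   ireturn   // return the top of the stack
    // (viewable with: javap -c Adder)
    public int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(new Adder().add(2, 3)); // prints 5
    }
}
```

It is this stack-oriented bytecode, not the Java source, that the interpreter and the JIT compiler consume.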
NAME-B SHRUTI
REG NO-17BCE1299
Older JVMs literally interpreted the JVM instructions one by one at runtime. This is easy to implement, which is why it was done first. But like any interpreter, such run-time interpretation produces program execution times that are typically an order of magnitude slower than native compiled machine code.
JIT compilers have one advantage over batch (ahead-of-time) compilers: they can observe what actually happens at runtime, and can thus optimize the dynamic properties of Java code, such as the dynamic dispatch that occurs when calling overridden Java methods. They also have a disadvantage: they do not see the program at large scale, and generally cannot perform the global optimizations that good batch compilers do well. In practice, the trade-off means that JIT-compiled Java programs run quite well, but not as fast as C or C++ programs compiled by traditional methods.
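A sketch of the kind of dynamic dispatch a JIT compiler can exploit (all class and method names below are hypothetical): when profiling shows a virtual call site only ever sees one receiver type, the JIT can devirtualize and inline the call.

```java
// Dispatch.java - a virtual call site that a JIT compiler can
// devirtualize once profiling shows only Circle is ever used.
// (Names are illustrative, not from the assignment text.)
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

public class Dispatch {
    // s.area() is resolved at runtime (dynamic dispatch). If profiling
    // shows s is always a Circle, the JIT can replace the virtual call
    // with a direct, inlined call guarded by a cheap type check.
    static double totalArea(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1), new Circle(2) };
        System.out.println(totalArea(shapes));
    }
}
```

A batch compiler, lacking runtime profiles, would usually have to keep the fully general virtual call.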
The Just-In-Time compiler is one of the integral parts of the Java Runtime Environment. It is mainly responsible for performance optimization of Java-based applications at run time. In general, the compiler's main goal is to increase the performance of an application for both the end user and the application developer.
Bytecode is the key to Java's WORA (write once, run anywhere) environment. The speed of a Java application depends on how the bytecode is converted to native machine code: it can be interpreted, compiled to native code, or executed directly on the processor. If the bytecode is only interpreted, the speed of the application suffers. To speed up performance, the JIT compiler communicates with the JVM at execution time to compile bytecode sequences into native machine code, which the processor can then execute directly.
Now that you know the fundamentals of JIT Compiler, let’s move further and
understand its working.
The JIT compiler speeds up the performance of Java applications at run time. Since Java is object-oriented, a program consists of classes and objects; these compile to bytecode that is platform independent and is executed by the JVM across diverse architectures.
Work Flow:
The diagram below depicts how compilation actually takes place in the Java Runtime Environment.
1. When you write a Java program, the JRE uses the javac compiler to compile the high-level source code to bytecode. The JVM then loads the bytecode at run time and converts it into machine-level binary code for execution using the interpreter.
2. As mentioned above, interpretation of Java bytecode reduces performance compared to a native application. That is where the JIT compiler boosts performance, by compiling the bytecode into native machine code "just in time" to run.
3. The JIT compiler is activated (and enabled by default) when a method is invoked in Java. Once a method has been compiled, the Java Virtual Machine invokes the compiled code of the method directly instead of interpreting it, which reduces the processor time required and speeds up execution.
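The steps above can be observed in practice. A rough sketch (the class name is hypothetical; -XX:+PrintCompilation is a standard HotSpot flag that logs each method as it is compiled):

```java
// JitWarmup.java - repeatedly invoking a "hot" method lets the JVM's JIT
// compiler replace the interpreted version with compiled native code.
// Run with:  java -XX:+PrintCompilation JitWarmup
// to watch HotSpot log methods (including sumTo) as they are compiled.
public class JitWarmup {
    // A small, pure method that becomes "hot" after many invocations.
    static long sumTo(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        // Early calls are interpreted; after enough invocations the JIT
        // compiles sumTo to native code and later calls run much faster.
        long result = 0;
        for (int i = 0; i < 20_000; i++) {
            result = sumTo(1_000);
        }
        System.out.println(result); // 1 + 2 + ... + 1000 = 500500
    }
}
```

The exact invocation count that triggers compilation is JVM-dependent (tunable on HotSpot via -XX:CompileThreshold).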
So, that’s how it works. Now let’s dive deeper into this article and understand the
security aspects of JIT Compiler in Java.
The JIT compiler compiles bytecode into machine code directly in memory: the compiler writes the machine code straight into memory and executes it there, rather than storing it on disk before invoking it. For this to work, the memory must be marked as executable; for security reasons, this should be done only after the code has been fully written to memory.
Now, let's move further and look at the pros and cons of the Just-In-Time compiler in Java.
Pros:
1. Java code you wrote years ago still runs faster today, as JIT compilers keep improving the performance of Java programs.
2. Native images also execute faster, as they skip start-up activities and require less memory.
3. Code that is just-in-time compiled has significant advantages over compiled unmanaged code.
4. Most code optimisations in .NET do not take place in the language compiler (the transformation from C#/VB.NET to IL); rather, they occur on the fly in the JIT compiler.
Cons:
Virtual machines divide into two broad categories: system VMs (also known as hardware virtual machines) and process VMs (also known as application virtual machines). The categorization is based on their usage and their level of correspondence to the associated physical machine. A system VM simulates the complete system hardware stack and supports the execution of a complete operating system. A process VM, on the other hand, adds a layer over an operating system that simulates the programming environment for the execution of an individual process. Virtual machines are used to share and allocate appropriate system resources to the software they host (which could be multiple operating systems or an application), and that software is limited to the resources the VM provides. The Virtual Machine Monitor (also known as the hypervisor) is the actual software layer that provides the virtualization. Hypervisors are of two types, depending on their association with the underlying hardware: a hypervisor that takes direct control of the underlying hardware is known as a native or bare-metal hypervisor, while a hosted hypervisor is a distinct software layer that runs within an operating system and hence has only an indirect association with the underlying hardware. A system VM abstracts an Instruction Set Architecture (ISA), which may differ slightly from that of the real hardware platform. The main advantages of system VMs include consolidation (multiple operating systems can coexist on a single computer system with strong isolation from each other), application provisioning, maintenance, high availability, and disaster recovery. From a development perspective, they also allow sandboxing, faster reboots, and better debugging access.
An application or process VM supports a single process, allowing normal execution of that application within the underlying operating system. Multiple instances of a process VM can be created to run multiple applications in multiple processes. A process VM is created when its process begins and destroyed when the process terminates. The main purpose of a process VM is platform independence (in terms of the programming environment): it allows an application to execute in the same manner on any underlying hardware and software platform. In contrast to a system VM (which provides a low-level abstraction of the ISA), a process VM abstracts a high-level programming language. A process VM is typically implemented with an interpreter; performance comparable to compiled programming languages is achieved through just-in-time compilation.
Two of the most popular examples of process VMs are the Java Virtual Machine (JVM) and the Common Language Runtime (CLR), used to virtualize the Java programming environment and the .NET Framework programming environment respectively.
HYPER-V
Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a
software version of a computer, called a virtual machine. Each virtual machine acts
like a complete computer, running an operating system and programs. When you
need computing resources, virtual machines give you more flexibility, help save
time and money, and are a more efficient way to use hardware than just running
one operating system on physical hardware.
Hyper-V runs each virtual machine in its own isolated space, which means you can
run more than one virtual machine on the same hardware at the same time. You
might want to do this to avoid problems such as a crash affecting the other
workloads, or to give different people, groups or services access to different
systems.
Hyper-V has required parts that work together so you can create and run
virtual machines. Together, these parts are called the virtualization platform.
They're installed as a set when you install the Hyper-V role. The required
parts include Windows hypervisor, Hyper-V Virtual Machine Management
Service, the virtualization WMI provider, the virtual machine bus (VMbus),
virtualization service provider (VSP) and virtual infrastructure driver (VID).
Hyper-V offers many features. This is an overview, grouped by what the features
provide or help you do.
Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates
copies of virtual machines, intended to be stored in another physical location, so
you can restore the virtual machine from the copy. For backup, Hyper-V offers two
types. One uses saved states and the other uses Volume Shadow Copy Service
(VSS) so you can make application-consistent backups for programs that support
VSS.
Security - Secure boot and shielded virtual machines help protect against malware
and other unauthorized access to a virtual machine and its data.
KVM
KVM is part of Linux. If you’ve got Linux 2.6.20 or newer, you’ve got KVM.
KVM was first announced in 2006 and merged into the mainline Linux kernel
version a year later. Because KVM is part of existing Linux code, it immediately
benefits from every new Linux feature, fix, and advancement without additional
engineering.
KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need
some operating system-level components—such as a memory manager, process
scheduler, input/output (I/O) stack, device drivers, security manager, a network
stack, and more—to run VMs. KVM has all these components because it’s part of
the Linux kernel. Every VM is implemented as a regular Linux process, scheduled
by the standard Linux scheduler, with dedicated virtual hardware like a network
card, graphics adapter, CPU(s), memory, and disks.
KVM FEATURES
KVM is part of Linux. Linux is part of KVM. Everything Linux has, KVM has too.
But there are specific features that make KVM an enterprise’s preferred
hypervisor.
Storage
KVM is able to use any storage supported by Linux, including some local disks
and network-attached storage (NAS). Multipath I/O may be used to improve
storage and provide redundancy. KVM also supports shared file systems so VM
images may be shared by multiple hosts. Disk images support thin provisioning,
allocating storage on demand rather than all up front.
Hardware support
Live migration
KVM supports live migration, which is the ability to move a running VM between
physical hosts with no service interruption. The VM remains powered on, network
connections remain active, and applications continue to run while the VM is
relocated. KVM also saves a VM's current state so it can be stored and resumed
later.
KVM inherits the performance of Linux, scaling to match demand as the number of guest machines and requests increases. KVM allows the most demanding application workloads to be virtualized and is the basis for many enterprise virtualization setups, such as datacenters and private clouds.
The Linux kernel features real-time extensions that allow VM-based apps to run at
lower latency with better prioritization (compared to bare metal). The kernel also
divides processes that require long computing times into smaller components,
which are then scheduled and processed accordingly.
vSphere
vSphere, the virtualization platform of VMware, is a set of products that includes not only virtualization but also management and interface layers.
It provides a number of key components including infrastructure services (vCompute,
vStorage, and vNetwork), application services, vCenter Server, vSphere Client, etc.
Distributed Switch (VDS): Spans multiple ESXi hosts and enables considerable
reduction of network maintenance activities.
XENSERVER