Q1 & Q3 What is the Aneka container? Explain the three types of services that are
hosted inside the Aneka container.
1. Fabric Services
2. Foundation Services
3. Application Services
1. Fabric services:
Fabric Services define the lowest level of the software stack representing the
Aneka Container. They provide access to the resource-provisioning subsystem
and to the monitoring facilities implemented in Aneka.
2. Foundation services:
Together with the Fabric Services, Foundation Services constitute the core of the
Aneka middleware. Foundation Services are related to the logical management of
the distributed system built on top of the infrastructure and provide supporting
services for the execution of distributed applications.
3. Application services:
Application Services manage the execution of applications and constitute a
layer that differentiates according to the specific programming model used for
developing distributed applications on top of Aneka.
Q2 Explain in detail public cloud deployment of Aneka cloud.
(NOT COMPLETE)
Q7 Describe the major differences between an Aneka thread and a local thread with
a neat diagram.
3 Thread synchronization
The .NET base class libraries provide advanced facilities to support thread
synchronization by means of monitors, semaphores, reader-writer locks, and
basic synchronization constructs at the language level. Aneka provides
minimal support for thread synchronization, limited to the implementation
of the join operation for the thread abstraction.
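The join operation mentioned above can be illustrated with a short sketch. Aneka itself is .NET-based; the Python snippet below (with a hypothetical `compute` worker) only demonstrates the concept of joining on threads, i.e., blocking until a thread has finished before using its result.

```python
import threading

def compute(results, index, n):
    # Simulate an independent computation (sum of squares below n).
    results[index] = sum(i * i for i in range(n))

# Spawn worker threads and wait for all of them with join().
results = [0, 0]
threads = [
    threading.Thread(target=compute, args=(results, i, n))
    for i, n in enumerate((10, 100))
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # block until the thread terminates
print(results)
```

After both joins return, the main thread can safely read the results written by the workers.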
4 Thread priorities
The System.Threading.Thread class supports thread priorities, where the
scheduling priority can be selected from one of the values of the
ThreadPriority enumeration: Highest, AboveNormal, Normal, BelowNormal,
or Lowest.
5 Type serialization
Serialization is the process of converting an object into a stream of bytes
in order to store the object or transmit it to memory, a database, or a file.
Its main purpose is to save the state of an object so that it can be recreated
when needed. The reverse process is called deserialization.
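The round trip of serialization and deserialization can be sketched as follows. Aneka relies on .NET serialization; this Python version uses the standard `pickle` module and a hypothetical `Point` class purely to show the object → bytes → object cycle.

```python
import pickle

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(3, 4)
data = pickle.dumps(p)          # serialization: object -> byte stream
restored = pickle.loads(data)   # deserialization: byte stream -> object
print(restored.x, restored.y)
```

The restored object has the same state as the original, which is exactly what is needed to move an object's state to a remote node.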
Local threads all execute within the same address space and share memory.
Aneka threads are distributed and execute on remote computing nodes, which
implies that the object code related to the method to be executed has to be
transferred to the remote node; this is why the types used by Aneka threads
must be serializable.
Q8. List and explain popular frameworks for task computing.
1. Condor
Condor is the most widely used and long-lived middleware for
managing clusters, idle workstations, and collections of clusters.
Condor supports the features of batch-queuing systems along with the
capability to checkpoint jobs and manage overloaded nodes.
It provides a powerful job resource-matching mechanism, which
schedules jobs only on resources that have
the appropriate runtime environment.
Condor can handle both serial and parallel jobs on a wide variety of
resources.
2. Globus Toolkit
The Globus Toolkit is a collection of technologies that enable grid
computing.
It provides a comprehensive set of tools for sharing computing power,
databases, and other services across corporate, institutional, and
geographic boundaries.
The toolkit features software services, libraries, and tools for resource
monitoring, discovery, and management as well as security and file
management.
3. Sun Grid Engine (SGE)
Sun Grid Engine (SGE), now Oracle Grid Engine, is middleware for
workload and distributed resource management.
Initially developed to support the execution of jobs on clusters, SGE
integrated additional capabilities and now is able to manage
heterogeneous resources and constitutes middleware for grid computing.
It supports the execution of parallel, serial, interactive, and parametric
jobs.
4. BOINC
Berkeley Open Infrastructure for Network Computing (BOINC) is a
framework for volunteer and grid computing. It allows us to turn desktop
machines into volunteer computing nodes that are leveraged to run
jobs when such machines become inactive.
BOINC supports job checkpointing and duplication.
BOINC is composed of two main components: the BOINC server and the
BOINC client.
The BOINC server is the central node that keeps track of all the available
resources and schedules jobs.
The BOINC client is the software component deployed on desktop machines,
which creates the BOINC execution environment for job submission and
execution.
5. Nimrod/G
Nimrod/G is a tool for automated modeling and execution of parameter sweep
applications over global computational grids.
It provides a simple declarative parametric modeling language for
expressing parametric experiments.
It uses novel resource management and scheduling algorithms based on
economic principles.
It supports deadline- and budget-constrained scheduling of applications
on distributed grid resources to minimize the execution cost and at the
same time deliver results in a timely manner.
Q9 Explain features provided by Aneka for execution of parameter sweep
applications.
1 Domain decomposition
Domain decomposition is the process of identifying patterns of functionally
repetitive, but independent, computation on data.
This is the most common type of decomposition in the case of throughput
computing, and it relates to the identification of repetitive calculations
required for solving a problem.
The master-slave model is a quite common organization for these scenarios:
● The system is divided into two major code segments.
● One code segment contains the decomposition and coordination logic.
● Another code segment contains the repetitive computation to
perform.
● A master thread executes the first code segment.
● As a result of the master thread execution, as many slave threads as
needed are created to execute the repetitive computation.
● The collection of the results from each of the slave threads and an
eventual composition of the final result are performed by the master
thread.
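The master-slave organization described in the bullets above can be sketched in a few lines. This is a minimal Python illustration, not Aneka code; the `master` and `slave` function names and the sum-of-squares workload are assumptions made for the example.

```python
import threading

def slave(chunk, results, index):
    # Repetitive, independent computation on one portion of the data.
    results[index] = sum(x * x for x in chunk)

def master(data, n_slaves):
    # Decomposition: split the domain into independent chunks.
    chunks = [data[i::n_slaves] for i in range(n_slaves)]
    results = [0] * n_slaves
    # Create as many slave threads as needed for the repetitive computation.
    threads = [
        threading.Thread(target=slave, args=(chunks[i], results, i))
        for i in range(n_slaves)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Composition: the master collects and combines the partial results.
    return sum(results)

print(master(list(range(10)), 3))  # sum of squares of 0..9
```

The master thread holds the decomposition and coordination logic, while each slave thread runs the same computation on its own chunk of the domain.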
Q5 Describe the features of Aneka management tools in terms of infrastructure,
platform and applications.
Q6 Explain the different techniques for parallel computation.
1 Domain decomposition
Domain decomposition is the process of identifying patterns of functionally
repetitive, but independent, computation on data.
This is the most common type of decomposition in the case of throughput
computing, and it relates to the identification of repetitive calculations
required for solving a problem.
The master-slave model is a quite common organization for these scenarios:
● The system is divided into two major code segments.
● One code segment contains the decomposition and coordination logic.
● Another code segment contains the repetitive computation to perform.
● A master thread executes the first code segment.
● As a result of the master thread execution, as many slave threads as
needed are created to execute the repetitive computation.
● The collection of the results from each of the slave threads and an
eventual composition of the final result are performed by the master
thread.
2 Functional decomposition