
JobTracker: It is the master process that coordinates MapReduce jobs. It manages the lifecycle of jobs and schedules tasks on the cluster.
The working of JobTracker is listed below:
The JobTracker receives a job from a client application. The submitted job package includes the executable files, other related files, and the InputSplits required to execute the job (a minimal client-side submission sketch follows this list).
After accepting the job, the JobTracker places it on the job queue. One map task is created for each input split, and the JobTracker creates the number of reduce tasks specified in the job configuration.
The JobTracker's scheduler is responsible for assigning a task from one of the ongoing jobs to a TaskTracker.
In parallel, it manages and controls each step of job processing.
Once a task is scheduled, the assigned TaskTracker supervises its execution and coordinates with the JobTracker for management operations such as cleanup and terminating tasks in case of failure.
When all of a job's tasks are complete, it runs the job cleanup task.
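
As a concrete illustration of the submission path described above, below is a minimal client-side sketch using Hadoop's classic org.apache.hadoop.mapred API (the API used in the JobTracker era). The input and output paths come from the command line, and the bundled identity mapper and reducer are used so the sketch stays self-contained; it is an illustrative example, not the JobTracker's internal code.

// Minimal job-submission sketch using the classic (JobTracker-era) MapReduce API.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class SubmitJobExample {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SubmitJobExample.class);
        conf.setJobName("identity-copy");

        // The job jar, configuration, and the input splits computed from these
        // paths form the job package that the client submits to the JobTracker.
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        // One map task is created per input split; the number of reduce tasks
        // comes from the job configuration, as described in the list above.
        conf.setNumReduceTasks(2);

        // Submit the job to the JobTracker and wait for it to complete.
        JobClient.runJob(conf);
    }
}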
TaskTracker: It is the slave process; at least one TaskTracker runs on every worker node of the cluster. TaskTracker daemons are responsible for running the tasks assigned to them by the JobTracker.
The working of TaskTracker is listed below:
On startup, each TaskTracker daemon registers with the JobTracker.
These daemons run the tasks assigned by the JobTracker.
A separate process on the worker node is used to run each task.
Along with creating and managing this process, the TaskTracker reports its current status to the JobTracker through periodic status heartbeats (see the sketch after this list).
It can kill the task process on failure, but only when the JobTracker requests it.
The TaskTracker also provides services to the tasks, such as the shuffle service, which serves map outputs to the reduce tasks that request them.
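
As a rough, hypothetical sketch of the heartbeat loop described above, a TaskTracker might report status and receive directives as shown below. The class and method names here are illustrative stand-ins, not Hadoop's actual internal interfaces.

// Hypothetical, simplified sketch of the TaskTracker heartbeat loop.
import java.util.List;

interface JobTrackerProtocol {                       // illustrative stand-in
    HeartbeatResponse heartbeat(TaskTrackerStatus status) throws Exception;
}

record TaskTrackerStatus(String trackerName, List<String> runningTaskStatuses) {}
record HeartbeatResponse(List<String> tasksToLaunch, List<String> tasksToKill) {}

class TaskTrackerLoop {
    private final JobTrackerProtocol jobTracker;
    private final String trackerName;

    TaskTrackerLoop(JobTrackerProtocol jobTracker, String trackerName) {
        this.jobTracker = jobTracker;
        this.trackerName = trackerName;
    }

    void run(List<String> runningTaskStatuses) throws Exception {
        while (true) {
            // Report current task statuses to the JobTracker ("status heartbeat").
            HeartbeatResponse response =
                jobTracker.heartbeat(new TaskTrackerStatus(trackerName, runningTaskStatuses));

            // Launch any new tasks the JobTracker's scheduler assigned,
            // each in a separate child process on this worker node.
            for (String task : response.tasksToLaunch()) {
                launchInChildProcess(task);
            }

            // Kill tasks only when the JobTracker asks for it (e.g., on failure).
            for (String task : response.tasksToKill()) {
                killChildProcess(task);
            }

            Thread.sleep(3_000);                     // heartbeat interval (illustrative)
        }
    }

    private void launchInChildProcess(String task) { /* spawn a child process for the task */ }
    private void killChildProcess(String task)     { /* terminate the task's process */ }
}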
