
Ambari makes Hadoop management simpler by providing a consistent, secure platform for operational control. Ambari provides an intuitive Web UI as well as a robust REST API, which is particularly useful for automating cluster operations (a sketch of such automation follows the list below). With Ambari, Hadoop operators get the following core benefits:

● Simplified Installation, Configuration and Management. Easily and efficiently create, manage and monitor clusters at scale. Takes the guesswork out of configuration with Smart Configs and Cluster Recommendations. Enables repeatable, automated cluster creation with Ambari Blueprints.
● Centralized Security Setup. Reduces the complexity of administering and configuring cluster security across the entire platform. Helps automate the setup and configuration of advanced cluster security capabilities such as Kerberos and Apache Ranger.
● Full Visibility into Cluster Health. Ensure your cluster is healthy and available with a holistic approach to monitoring. Configures predefined alerts, based on operational best practices, for cluster monitoring. Captures and visualizes critical operational metrics using Grafana for analysis and troubleshooting. Integrates with Hortonworks SmartSense for proactive issue prevention and resolution.
● Highly Extensible and Customizable. Fit Hadoop seamlessly into your enterprise
environment. Highly extensible with Ambari Stacks for bringing custom services
under management, and with Ambari Views for customizing the Ambari Web UI.
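As a hedged illustration of the REST-driven automation mentioned above, the sketch below registers a minimal Ambari Blueprint, reads a service's state, and lists the cluster's alert instances. The server host, cluster name, credentials, and blueprint contents are assumptions for illustration; Ambari's REST API listens on port 8080 by default and requires the X-Requested-By header on write requests.

```python
import requests

# Assumed values for illustration; replace with your environment's details.
AMBARI = "http://ambari-server.example.com:8080/api/v1"
AUTH = ("admin", "admin")                  # default credentials; change in production
HEADERS = {"X-Requested-By": "ambari"}     # Ambari requires this header on writes

# Register a minimal, hypothetical blueprint for repeatable cluster creation.
blueprint = {
    "Blueprints": {"stack_name": "HDP", "stack_version": "2.6"},
    "host_groups": [{
        "name": "master",
        "cardinality": "1",
        "components": [{"name": "NAMENODE"}, {"name": "DATANODE"}],
    }],
}
resp = requests.post(f"{AMBARI}/blueprints/single-node",
                     json=blueprint, auth=AUTH, headers=HEADERS)
resp.raise_for_status()

# Read the current state of a service (HDFS here) on an existing cluster.
hdfs = requests.get(f"{AMBARI}/clusters/mycluster/services/HDFS",
                    auth=AUTH, headers=HEADERS).json()
print(hdfs["ServiceInfo"]["state"])        # e.g. "STARTED"

# List the cluster's current alert instances (predefined alerts included).
alerts = requests.get(f"{AMBARI}/clusters/mycluster/alerts", auth=AUTH).json()
print(len(alerts.get("items", [])), "alert instances")
```

The same endpoints can be driven from any HTTP client; the Python requests library is used here only to keep the sketch self-contained.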

Ambari also provides a highly interactive dashboard that allows administrators to visualize the progress and status of every application running on the Hadoop cluster.

Its flexible and scalable user interface allows a range of tools, such as Pig, MapReduce, and Hive, to be installed on the cluster and administered in a user-friendly fashion. One of the key features of this technology is the Ambari Metrics System (AMS), which collects, aggregates, and serves Hadoop and system metrics in Ambari-managed clusters.

AMS has four components:

● Metrics Monitors run on each host in the cluster, collect system-level metrics, and publish them to the Metrics Collector.
● Hadoop Sinks plug in to Hadoop components to publish Hadoop metrics to the Metrics Collector.
● The Metrics Collector is a daemon that runs on a specific host in the cluster and receives data from the registered publishers, the Monitors and the Sinks.
● Grafana is a daemon that runs on a specific host in the cluster and serves prebuilt dashboards for visualizing metrics collected by the Metrics Collector.
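To make this pipeline concrete, here is a minimal sketch that reads a metric back from the Metrics Collector's timeline endpoint, which listens on port 6188 by default. The collector host, worker host name, and metric name are assumptions for illustration; system metrics published by the Monitors are reported under the appId HOST.

```python
import time
import requests

# Assumed collector host for illustration; AMS's default HTTP port is 6188.
COLLECTOR = "http://metrics-collector.example.com:6188"

end = int(time.time() * 1000)             # AMS timestamps are epoch milliseconds
start = end - 60 * 60 * 1000              # look back over the last hour

# Query the timeline endpoint for a host-level metric published by a Monitor.
params = {
    "metricNames": "cpu_user",            # assumed metric name
    "hostname": "worker1.example.com",    # assumed worker host
    "appId": "HOST",                      # system metrics report under appId HOST
    "startTime": start,
    "endTime": end,
}
resp = requests.get(f"{COLLECTOR}/ws/v1/timeline/metrics", params=params)
resp.raise_for_status()

for metric in resp.json().get("metrics", []):
    # Each entry maps epoch-millisecond timestamps to metric values.
    points = metric.get("metrics", {})
    print(metric["metricname"], f"{len(points)} datapoints")
```

This is the same data that Grafana's prebuilt dashboards render; querying the collector directly is mainly useful for custom tooling or troubleshooting.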
