
Senior Big Data Infrastructure Engineer
Job Description:
Main Purpose:
We are looking for a Senior Big Data Infrastructure Engineer to manage large-scale Hadoop cluster
environments, including cluster setup, upgrades, performance tuning, monitoring, and alerting.
Requirements:
• 4+ years of recent experience in Big Data infrastructure engineering.
• Bachelor’s degree or higher in Computer Science or a related field.

Responsibilities:
1) Installing, configuring, and upgrading Cloudera/Hortonworks platforms.
2) Enhancing and tuning the performance of various Hadoop components: HDFS, YARN, Hive,
HBase, Spark, Impala, Kerberos, etc.
3) Performing production monitoring and support for Big Data infrastructure and applications
using Zabbix and Grafana.
4) Resolving ongoing operational issues with Hadoop clusters.
5) Installing and configuring messaging systems, such as Kafka or RabbitMQ.
6) Installing and configuring Apache applications such as Airflow, Superset, etc.
7) Collaborating with the development team and performing deployments and updates of the system.
8) Developing security integrations and implementing new security solutions.
9) Establishing Continuous Integration and Continuous Deployment (CI/CD) pipelines for applications
using tools such as Jenkins, Docker, and Ansible.
Technical Skills:
1) Advanced experience with Hadoop ecosystem architecture and components.
2) Advanced knowledge of Hadoop cluster management and high availability.
3) Experience with Kubernetes (K8s) cluster management and high availability is a plus.
4) Experience securing the Hadoop stack with Sentry, Ranger, LDAP, and Kerberos KDC.
5) Advanced knowledge of Linux and network administration.
6) Demonstrated experience with several of these technologies: Jenkins, Ansible,
Terraform, Docker, Kubernetes orchestration, and similar tools.
7) Proficiency in scripting languages such as Python and Shell.
8) Advanced experience with query engines such as Presto (Trino) is a plus.
