abdulkhayuum@gmail.com
+91-9865139663
Objective
In pursuit of a challenging assignment that would facilitate the full utilization and application of my
broad skills and expertise in making a positive difference to the organization
Technical Expertise
• Over 8 years of experience in the IT industry, covering various aspects of software development and Hadoop
& DevOps administration
• Strong experience with Hadoop distributions such as Hortonworks.
• Hadoop Administrator Certified
• In-depth understanding of Hadoop architecture and its various components, such as HDFS, YARN, Hive, Kerberos,
ZooKeeper, Kafka, Ranger, and HBase.
• Worked on automating incident creation in the ServiceNow/AskNow ticketing tool whenever a Hadoop
service fails
• Hands-on experience with version control tools such as Git, GitHub, and Bitbucket.
• Expertise in DevOps tools such as Git, Maven, SonarQube, Nexus/JFrog Artifactory, Jenkins, Docker, and
Kubernetes, along with shell scripting.
• Extensively worked with Jenkins for continuous integration and end-to-end automation of all builds and
deployments; experienced with its monitoring and notification functionality for reporting build success or failure.
• Good understanding of Amazon Web Services (AWS): creating EC2 instances, S3, Auto Scaling, ELB, IAM, and VPC,
and configuring all necessary services.
• Experience working with release and deployment in Java/J2EE and web application environments.
• Hands-on experience installing Ansible and configuring multiple hosts for automated deployment.
• Experience working with the Docker Swarm and Kubernetes container orchestration tools to deploy and maintain
containers/pods
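The incident-automation experience above can be sketched as follows; a minimal illustration assuming a ServiceNow instance reachable via its standard Table API (`/api/now/table/incident`), with the instance URL, hostnames, and field values as hypothetical placeholders rather than the actual production setup:

```python
import json
import urllib.request

SNOW_URL = "https://example.service-now.com"  # placeholder instance URL

def build_incident_payload(service, host, state):
    """Map a failed Hadoop service to a ServiceNow incident record."""
    return {
        "short_description": f"Hadoop service {service} is {state} on {host}",
        "urgency": "2",        # example field values; real mappings depend
        "category": "hadoop",  # on how the ServiceNow instance is configured
    }

def create_incident(payload):
    """Build the POST request for the ServiceNow Table API (not sent here)."""
    req = urllib.request.Request(
        f"{SNOW_URL}/api/now/table/incident",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # authentication omitted; urllib.request.urlopen(req) would send it
    return req

payload = build_incident_payload("HDFS NameNode", "nn01.example.com", "CRITICAL")
print(json.dumps(payload, indent=2))
```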
Professional Experience
Technical Skills
Big Data Technologies HDP services: HDFS, YARN, Hive, ZooKeeper, Kafka,
Kerberos, Sqoop
Version Control System Tools Git, GitHub
Operating Systems Windows, Linux
Configuration Management Ansible
NoSQL Database HBase
Scripting Shell Scripting, Python
Application Server Apache Tomcat & JBoss
Continuous Integration Tools Jenkins
Containerization Tools Docker, Docker Swarm, and Kubernetes
Bug Tracking Tools JIRA, AskNow
Artifactory Repos Nexus and JFrog Artifactory
Projects
Visa’s Machine Learning Platform (MLP) is a fully managed, multi-tenant, scalable platform for building and supporting
offline machine learning models. MLP offers the latest data science tools, seamless integration, and a secure environment,
delivered as Platform-as-a-Service (PaaS).
Responsibilities:
• Built and maintained Docker container clusters managed by Kubernetes on GCP (Google Cloud Platform), using
Linux, Git, and Docker. Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.
• Designed and implemented a continuous build-test-deployment (CI/CD) system with multiple component pipelines
using Jenkins to support weekly releases and out-of-cycle releases based on business needs
• Involved in developing a test environment on Docker containers and configuring the containers using
Kubernetes.
• Monitored microservices using Grafana & Prometheus to perform root-cause analysis of failed services
• Enhanced and supported existing microservices
• Responsible for onboarding new users to the Kubernetes cluster
• Provided end-to-end support to ensure applications run well in the Kubernetes cluster
• Tracked CPU/RAM utilization of all pods
• Monitored Grafana to identify Kubernetes system metrics
• Responded to incidents and wrote blameless RCAs/postmortems
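The pod CPU/RAM tracking described above can be illustrated with a small parser; a sketch that assumes the plain-text output format of `kubectl top pods` (pod name, CPU in millicores, memory in MiB), with the pod names below as hypothetical examples rather than real workloads:

```python
def parse_top_pods(text):
    """Parse `kubectl top pods` output into {pod: (cpu_millicores, mem_mib)}."""
    usage = {}
    for line in text.strip().splitlines()[1:]:  # skip the header row
        name, cpu, mem = line.split()
        usage[name] = (int(cpu.rstrip("m")), int(mem.rstrip("Mi")))
    return usage

# Sample output in the shape `kubectl top pods` produces:
sample = """\
NAME            CPU(cores)   MEMORY(bytes)
ml-worker-0     250m         512Mi
ml-worker-1     1200m        2048Mi
"""
for pod, (cpu, mem) in parse_top_pods(sample).items():
    print(f"{pod}: {cpu}m CPU, {mem}Mi RAM")
```

In practice the same numbers would usually come from Prometheus queries feeding Grafana dashboards; parsing `kubectl top` is just the simplest way to show the shape of the data.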
Delivered the technology solution for a domestic processing system in India that is compliant with Reserve Bank of India
(RBI) regulations.
Responsibilities:
Description
This project integrated ServiceNow, the ticketing portal used at Dell, with Ambari, which provides UI-based
interaction with the Hortonworks distribution.
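The Ambari side of such an integration could look roughly like the following; a sketch assuming Ambari's alerts endpoint (`/api/v1/clusters/<cluster>/alerts`) returns its usual JSON `items` list, with the alert names below as illustrative placeholders:

```python
def critical_alerts(ambari_response):
    """Pick out CRITICAL alerts from an Ambari /alerts JSON response."""
    return [
        item["Alert"]["definition_name"]
        for item in ambari_response.get("items", [])
        if item["Alert"]["state"] == "CRITICAL"
    ]

# Example response trimmed to just the fields used above:
sample = {
    "items": [
        {"Alert": {"definition_name": "namenode_process", "state": "CRITICAL"}},
        {"Alert": {"definition_name": "datanode_health", "state": "OK"}},
    ]
}
# each CRITICAL alert would become a ServiceNow ticket
print(critical_alerts(sample))
```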
Passport Details