
K. Dixit
Professional Summary:

 8+ years of IT experience in Continuous Integration and Continuous Deployment processes, with a strong
background in Linux/Unix administration, build and release management, and cloud implementation.
 Experience in infrastructure development and operations involving AWS Cloud platforms: EC2, EBS, S3, VPC,
RDS, SES, ELB, Auto Scaling, CloudFront, CloudFormation, ElastiCache, CloudWatch, SNS, AWS Import /
Export.
 Worked with Amazon IAM console to create custom users and groups.
 Worked on various version control repositories like GitHub, SVN.
 Well versed in managing servers on the Amazon Web Services (AWS) platform using Chef & Puppet configuration
management.
 Built Puppet manifests and modules to automate provisioning of different types of server instances (a brief command sketch follows this summary).
 Good experience implementing Puppet to manage infrastructure as code, adding multiple nodes to the enterprise
Puppet master and managing all Puppet agents.
 Extensively worked on Jenkins for continuous integration and for end-to-end automation of all builds and deployments.
 Helped the QA team integrate automated tests with builds and verify build success through Jenkins.
 Experience with the build tools Ant and Maven, writing build.xml and pom.xml files respectively.
 Experience in Administration/Maintenance of Source Control Management Systems such as Git. Created tags and
branches, fixed merge issues and administered software repositories.
 Knowledge of Git branching/tagging, creating new and managing existing repositories, and the Git revision control
system.
 Good knowledge of setting up Chef Infra, bootstrapping nodes, creating and uploading recipes, and node
convergence in Chef.
 Worked in the agile development team to deliver end-to-end continuous integration/continuous delivery product in
an open source environment using tools like Chef & Jenkins.
 Experience with Docker and Vagrant for different infrastructure setup and testing of code.
 Created virtual images similar to the production environment using Docker.
 Have ample experience in load balancing and monitoring with Nagios.
 Exposed to all aspects of the Software Development Life Cycle (SDLC), such as analysis, planning,
development, testing, implementation, and post-production analysis of projects.
 Strong ability to troubleshoot issues arising during builds, deployments, and production support, and to
document the build and release process.
 Good interpersonal skills and a team-oriented attitude; takes initiative and is proactive in solving problems and
providing the best solutions.
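Illustrative sketch of the Puppet provisioning workflow referenced above (manifest path and module name are hypothetical examples, not from a specific engagement):

    #!/bin/bash
    # Validate and dry-run a Puppet manifest before rolling it out.
    puppet parser validate manifests/site.pp
    puppet apply --noop manifests/site.pp

    # Install a supporting module from the Puppet Forge on the master.
    puppet module install puppetlabs-apache

    # Trigger a test run on an agent node so it converges against the master.
    puppet agent --test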

Technical Skills:

Programming Languages & Scripting: C, C++, Python, Ruby, Shell

Operating Systems: Windows, UNIX, Linux, Red Hat Linux 3.x/4.x/5.x/6.x
Version Control Tools: Git, GitHub, SVN (Subversion)
Database Technologies: MySQL, SQL Server
Deployment Tools: Ansible, Chef, Puppet, and Docker
Cloud Technologies: Amazon Web Services, Azure, OpenStack
Methodologies: Agile/Scrum, Waterfall
Web/Application Servers: WebLogic, WebSphere, Apache Tomcat
Bug Tracking & Monitoring Tools: JIRA, Bugzilla, Nagios

Client: DuPont, Wilmington, DE Feb’2018 – Present


DevOps/AWS Engineer:
Responsibilities:

 Served as an advocate for cloud and automation from day one; developed Shell/Python scripts for automation.
 Involved in AWS EC2/VPC/S3/SQS/SNS automation through Terraform, Ansible, Python, and Bash scripts (see the Terraform sketch at the end of this section).
Adopted new features as they were released by Amazon, including ELB & EBS.
 Supported and developed tools for integration, automated testing, and release management.
 Used GitHub for code version management and GitHub pull requests for code review & change review.
 Recommended tools and implementation approaches to engineering teams for cloud-based services; performed QA
on cloud storage applications and their updates, and developed
build & deployment processes for pre-production environments.
 Designed roles and groups using AWS Identity and Access Management (IAM); maintained user accounts, RDS,
Route 53, VPC, RDB, DynamoDB, SES, SQS & SNS services in AWS.
 Implemented & maintained monitoring & alerting of production and corporate servers using CloudWatch.
 Deployed all production systems to EC2 and used S3 & Hadoop for data processing.
 Followed the Agile methodology; used Maven to develop builds and Ant as a build tool.
 Used Groovy and Spring Boot to collect data from users and packaged the data as JSON distributed to 43
applications.
 Expertise in the AWS IaaS stack, including VPC, ELB, Security Groups, EBS, AMI, CloudWatch, CloudFront
& Direct Connect.
 Worked on Docker container snapshots, attaching to a running container, removing images, managing
directory structures and managing containers.
 Experience in backup, monitoring, HA & DR solutions in the Amazon Cloud.
 Experience with AWS cloud computing: created AWS instances, deployed Linux and
Ubuntu in the AWS environment, and migrated applications onto AWS.
 Worked with the AWS Cloud platform and its features, including EC2, VPC, EBS, AMI, SNS, RDS, CloudWatch,
CloudTrail, CloudFormation, AWS Config, Auto Scaling, CloudFront, IAM, and S3.
 Designed and built solutions for Amazon AWS EC2/VPC and virtualization on VMware server infrastructure.
 Assigned roles, managed users and groups, and assigned policies using AWS Identity and Access Management (IAM).
 Developed AWS CloudFormation templates and set up Auto Scaling for EC2 instances.
 Worked on Puppet extensively for deployment of AWS EC2 instances, creating custom scripts and managing
changes through Puppet master server on its clients.
 Experience working on Docker hub, creating Docker images and handling multiple images primarily for middleware
installations and domain configurations.
 Utilized Kubernetes and Docker for the runtime environment for the CI/CD system to build, test, and Deploy.
 Implemented microservices using the Pivotal Cloud Foundry platform built upon Amazon Web Services; developed a
Cloud Foundry prototype and deployed an application for testing and evaluation.
 Configured and maintained Jenkins to implement the CI process and integrated the tool with Ant and Maven to
schedule the builds.
 Experience working on several Docker components like Docker Engine, Hub, Machine, Compose and Docker
Registry.
 Developed Agile processes using Groovy and JUnit with continuous integration tools.
 Performed process automation and scheduled recurring jobs using cron (a sample crontab sketch follows the environment list below).
 Worked on Kubernetes and Docker containerization technologies to build and deploy services as images to a cloud
environment, and integrated the process into the build pipeline.
 Set up and installed the Puppet workstation and Puppet server, and bootstrapped the Puppet clients.
 Developed modules, manifests, resources, and run lists; managed the Puppet client nodes and uploaded modules to
the Puppet server from local Git repositories.
 Deployed Apache/Tomcat applications using Puppet.
 Worked with various DevOps tools: SVN and Git for version/source control, Jenkins and Maven for build
management, Nagios for monitoring, and Splunk for log management.
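Illustrative sketch of the Terraform-driven AWS automation noted above (directory layout, tag key/value, and plan file name are hypothetical):

    #!/bin/bash
    set -euo pipefail

    # Initialize providers/modules, review the plan, then apply it.
    cd infrastructure/dev
    terraform init
    terraform plan -out=tfplan
    terraform apply tfplan

    # Confirm the tagged EC2 instances came up.
    aws ec2 describe-instances \
      --filters "Name=tag:Environment,Values=dev" "Name=instance-state-name,Values=running" \
      --query 'Reservations[].Instances[].InstanceId' \
      --output text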

Environment: Jenkins master, AWS, Puppet, GIT, SVN, JUnit, ANT, Maven, Shell Scripts, Nagios, Apache
Tomcat.
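Sample crontab entries for the cron-based scheduling mentioned above (script paths and bucket name are hypothetical):

    # m h dom mon dow  command
    # Nightly cleanup of application logs at 01:30.
    30 1 * * *  /opt/scripts/cleanup_logs.sh >> /var/log/cleanup_logs.log 2>&1
    # Sync build artifacts to S3 every hour on the hour.
    0 * * * *   aws s3 sync /data/artifacts s3://example-artifact-bucket/artifacts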

Client: Amazon, Seattle, WA Dec’2016 – Jan’2018


DevOps Engineer:
Responsibilities:
 Worked on building Puppet Enterprise modules using the Puppet DSL to automate infrastructure provisioning and
configuration across environments; managed profiles for various technology stacks in Puppet.
 Designed & implemented Subversion & Git metadata such as elements, labels, attributes, triggers & hyperlinks.
 Implemented & maintained branching & build/release strategies utilizing SVN/Git and provided project support.
 Designed & maintained the Subversion/Git repositories, views, and access control strategies.
 Used the Continuous Integration (CI) tool Bamboo to automate build and release management processes
 Configured email & messaging notifications, managed users & permissions, and system settings using Bamboo.
 Worked on the configuration management tool Ansible; created & modified playbooks targeting Rackspace (see the Ansible run sketch at the end of this section).
 Wrote CloudFormation templates and deployed AWS resources with them.
 Used Amazon S3 & managed related policies; utilized S3 bucket & Glacier for storage & backup on AWS
 Utilized Puppet for configuration management of hosted instances within AWS; configured and networked the
Virtual Private Cloud (VPC).
 Used CloudWatch Logs to move application logs to S3 and created alarms based on applications’ exceptions.
 Developed a Cloud Foundry prototype and deployed an application for testing and evaluation.
 Created Groovy Services for extracting data from Jira for an internal project management system. 
 Worked with Docker and Vagrant for infrastructure setup and testing of code.
 Evaluated Pivotal Cloud Foundry (PCF) Platform as a Service (PaaS) in a private cloud and Amazon Web Services
(AWS). Performed deployment of the code to the Pivotal Cloud Foundry (PCF) using Jenkins. 
 Setup the entire process for automated DEV, QA, and production deployments with Ansible and Jenkins.
 Created services in Groovy utilizing the JIRA SOAP API to automate and manage the release schedule of projects.
 Set up IIS servers and administered & configured IIS on Windows Server 2008.
 Configured security and system settings in Jenkins and deployed a Docker cluster to Kubernetes.
 Involved in environment provisioning solutions using Docker, Vagrant, Red Hat Satellite.
 Collaborated in the automation of AWS infrastructure via Terraform and Jenkins, with software and services configuration
via Chef cookbooks. Experienced in setting up the SaltStack server/workstation and bootstrapping nodes.
 Worked in a big data (Hadoop) environment with exposure to Hive, Spark, Cassandra, SQL, and ETL
components. Handled migration of 2500+ applications with 3000+ databases and implemented SQL web replication.
 Built the DNS system in EC2 and managed all DNS-related tasks; configured applications using Chef.
 Maintained all development tools & infrastructure and ensured availability for the 24/7 development cycle.
 Created Chef-driven configuration of user accounts and installed packages with Chef to manage node attributes.
 Wrote Chef recipes for various applications and deployed them in AWS using Terraform (see the Chef sketch after the environment list below).
 Used Ant & Maven as build tools on Java projects to produce build artifacts from the source code and deployed them to
the application server (WebLogic); managed jobs on the Hadoop cluster.
 Installed & configured a Cassandra cluster & CQL and upgraded the cluster to the latest releases.
 Documented project's software release procedures; developed & distributed release notes for the scheduled
release
 Used the continuous integration (CI) tool AnthillPro to automate the daily processes. 
 Created & maintained technical documents to launch Hadoop Clusters and execute Hive queries & Pig Scripts.
 Integrated Sensu Monitoring tool to send notifications to Slack and Email using plugins & custom scripts; integrated
Jenkins to do an auto build when the code is pushed to Git.
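Illustrative sketch of how the Ansible playbook runs above were typically driven (inventory path, playbook name, and host group are hypothetical):

    #!/bin/bash
    # Check connectivity to the target hosts first.
    ansible all -i inventory/dev -m ping

    # Dry-run the playbook against the web group, then execute it for real.
    ansible-playbook -i inventory/dev site.yml --limit web --check
    ansible-playbook -i inventory/dev site.yml --limit web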

Environment: Subversion, Git, Chef, SaltStack, AnthillPro, Ansible, Jenkins, Docker, Cassandra, Java/J2EE,
Ant, Maven, JIRA, Ruby, Linux, XML, Windows XP, Bamboo, Sensu, Windows Server 2003, WebLogic,
MySQL, Perl scripts, shell scripts.
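Illustrative sketch of the Chef workflow mentioned above (cookbook name, node name, and IP address are hypothetical): uploading a cookbook and bootstrapping a freshly provisioned node into its run-list.

    #!/bin/bash
    # Upload the cookbook from the local chef-repo to the Chef server.
    knife cookbook upload base

    # Bootstrap a new EC2 instance and assign its run-list.
    knife bootstrap 10.0.1.15 -x ec2-user --sudo -N web01 -r 'recipe[base]'

    # Verify the node registered and converged.
    knife node list
    knife node show web01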

Client: GAP, Columbus, Ohio July 2016 – Nov’2016


Build and Release Engineer/DevOps:
Responsibilities:
 Developed builds using Ant and Maven as build tools and used CI tools to kick off builds and move
them from one environment to another.
 Designed and developed shell scripts.
 Installed and configured Git and communicated with the repositories in GitHub.
 Responsible for design and maintenance of the Subversion Repositories and the access control strategies.
 Used the version control system Git to access the repositories and coordinated it with CI tools.
 Integrated Maven with Git to manage and deploy project-related tags.
 Performed necessary day to day Subversion/GIT support for different projects.
 Used the Continuous Integration tools such as Jenkins for automating the build processes.
 Installed and Configured the Nexus repository manager for sharing the artifacts within the company.
 Configured Jenkins with plugins and created jobs.
 Implemented Puppet modules to automate the configuration of a broad range of services.
 Developed Puppet modules to automate deployment, configuration, and lifecycle management of key clusters.
 Wrote puppet manifests for deploying, configuring, and managing components.
 Used the Maven dependency management system to deploy snapshot and release artifacts to Nexus and share artifacts
across projects (see the Maven/Nexus release sketch at the end of this section). Demonstrated an understanding of Azure architecture from a networking/Network Security Groups
standpoint; implemented and deployed Azure offerings, including both IaaS and PaaS.
 Architected and developed an Azure resource deployment tool to automate the management of cloud resources
and applications. Created a guide for migrating an existing application from on-premises to Azure.
 Designed, architected & built Chef as a configuration management tool, Jenkins for continuous integration, and
the Sensu monitoring tool to replace Nagios and monitor the health of critical applications & servers.
 Hands-on experience with Azure PaaS, IaaS, and SaaS. Deployed Java applications to web application servers.
 Assisted the end-to-end release process from the planning of release content through actual release deployment to
production.
 Created and maintained thousands of virtual machines, including buildpack deploys on Cloud Foundry, using Docker
and expert-level Unix skills. Integrated Maven/Nexus, Jenkins, UrbanCode Deploy with Patterns/Release, Git,
Confluence, Jira, and Cloud Foundry.
 Deployed Java/J2EE applications on to the Apache Tomcat server and configured it to host the websites.
 Coordinated with software development teams and QA teams. Performed clean builds according to scheduled
releases.
 Verified whether the methods used to create and recreate software builds are reliable and repeatable.
 Deployed the build artifacts into environments like QA, UAT according to the build lifecycle.
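Illustrative sketch of the Maven/Nexus and Git release steps above (version number and tag name are hypothetical; publishing to Nexus assumes a distributionManagement section in the project's pom.xml):

    #!/bin/bash
    # Build and publish snapshot/release artifacts to the Nexus repository.
    mvn clean deploy -DskipTests

    # Tag the release in Git and push the tag so CI can pick it up.
    git tag -a v1.4.2 -m "Release 1.4.2"
    git push origin v1.4.2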

Environment: Ant, Maven, Apache & Tomcat, shell & Perl scripting, Subversion, Azure, Jenkins, Hudson, Windows
2000/XP, Linux, Git, GitHub, Puppet.

Client: Rovi Corp, Bangalore, India Feb’2014 - Dec’2014


Linux Administrator:
Responsibilities:
 Installation and configuration of Red Hat Linux, Solaris, Fedora, and CentOS on new server builds as well as during
the upgrade situations.
 Log management tasks such as monitoring and cleaning old log files (see the maintenance script sketch at the end of this section).
 Created user roles & groups for securing the resources using local operating System authentication.
 Experienced in tasks like managing User Accounts and Groups, Managing Disks and File-systems.
 Generated system audit reports covering the number of logins, successes & failures, and running cron jobs.
 Remotely copied files using SFTP, FTP, SCP, WinSCP & FileZilla.
 Experience in writing bash scripts for job automation.
 Monitoring & troubleshooting of any data center outages. Day-to-day administration on Sun Solaris which includes
Installation, upgrade & loading patches & packages.
 Managed system installation, troubleshooting, maintenance, and performance tuning; managed storage resources and
network configuration to fit application and database requirements.
 Responsible for modifying and optimizing backup schedules and developing shell scripts for it.
 Performed regular installation of patches using RPM and YUM.
 Set up intranet web servers with Apache, multi-site management with name-based virtual hosting, & access
control with HTTP authentication.
 Installed and configured Samba for heterogeneous platforms.
 Implemented the file sharing on the network by configuring NFS on the system to share essential resources.
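Illustrative sketch of the routine log-management and patching tasks above (paths and retention periods are hypothetical):

    #!/bin/bash
    # Compress application logs older than 7 days, delete anything older than 30 days.
    find /var/log/app -name "*.log" -mtime +7 -exec gzip {} \;
    find /var/log/app -name "*.log.gz" -mtime +30 -delete

    # Apply vendor patches via YUM and record the most recently installed packages.
    yum -y update
    rpm -qa --last | head -n 20 > /root/patch-report-$(date +%F).txt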

Environment: Red Hat Linux, Centos, Solaris, Nagios, Jira, Tomcat, shell scripts, Windows, Putty, Storage,
VPN.

Client: Lycamobile, London, UK Oct’2011 – Dec’2013


System Admin:
Responsibilities:

 Performed daily system monitoring, verifying the integrity and availability of all hardware, server resources, systems,
and key processes; reviewed system and application logs; and verified completion of scheduled jobs such as backups.
 Reviewed and released system updates.
 Installed new / rebuilt existing servers and configured hardware, phones, network, services, settings, directories,
storage, etc. in accordance with standards and project/operational requirements.
 Established and maintained user accounts, assigned file permissions, and established password and
account policies.
 Proficient in troubleshooting system problems.
 Monitored client disk and general disk-space usage; performed system performance monitoring and tuning.
 Maintained and troubleshot network connectivity.
 Performed ongoing performance tuning, hardware upgrades, and resource optimization as required; configured CPU,
memory, and disk partitions as required.
 Supported the virtual infrastructure.
 Performed daily backup operations, ensuring all required file systems and system data were successfully backed up to
the appropriate media (see the disk and backup check sketch below).
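Illustrative sketch of the disk-space and backup checks described above (the 85% threshold and backup path are hypothetical):

    #!/bin/bash
    # Flag any filesystem that exceeds 85% usage.
    df -hP | awk 'NR>1 && $5+0 > 85 {print "WARNING: " $6 " is at " $5}'

    # Confirm last night's backup archive exists and is non-empty.
    BACKUP=/backups/$(date +%F).tar.gz
    [ -s "$BACKUP" ] && echo "Backup OK: $BACKUP" || echo "Backup MISSING: $BACKUP"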
EDUCATION:

Bachelor of Engineering (Electronics and Communication), India March 2005 – April 2009

Master’s in Mobile Telecommunication, UK Jan 2010 – July 2011

Master’s in Electrical Engineering, USA Jan 2015 – April 2016
