DEPLOYMENT GUIDE

Hazelcast IMDG on AWS: Best Practices for Deployment

This paper is targeted at cloud architects and developers who are moving a Hazelcast IMDG® application from a physical environment to a virtualized cloud environment, namely Amazon Elastic Compute Cloud (EC2). EC2 is a central part of Amazon.com's cloud computing platform, Amazon Web Services (AWS). The focus of this paper is to highlight best practices for running Hazelcast IMDG applications on EC2.

Hazelcast IMDG is a high-performance application and depends on consistent network behavior, so follow this guide to maximize performance on the EC2 platform. Since multicast is not supported on EC2, Hazelcast instead leverages EC2 platform metadata to enable cluster auto discovery. This paper:

* Provides an overview of the terminology and high-level architecture of Amazon Web Services (AWS)
* Delivers guidance on a smooth deployment of Hazelcast on EC2
* Explains some of the high availability options on EC2

AWS TERMINOLOGY
Amazon EC2

Amazon EC2 stands for Elastic Compute Cloud and is the core of Amazon's cloud-based services. It provides
resizable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your
need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon
EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage
storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in
popularity, reducing your need to forecast traffic.

Regions and Availability Zones

Amazon EC2 is hosted in multiple locations worldwide, each of which is composed of Regions and Availability Zones. Each Region is a separate, independent geographic area and has multiple, isolated locations known as Availability Zones. Although these zones are isolated, they are connected through low-latency links.

Using Amazon EC2 enables resources (instances and data) to be placed in multiple locations. There is no resource replication between regions unless it is explicitly configured; this topic is discussed in the WAN Replication for EC2 Regions section. Note that some AWS resources might not be available in all Regions and Availability Zones, so before deploying any applications, verify that the resources can be created in the desired location. Amazon EC2 resources are either global, tied to a Region, or tied to an Availability Zone.

Since Amazon EC2 offers multiple regions, you can start Amazon EC2 instances in any location that meets your requirements. For example, you may want to start instances in an Asia Pacific region to be closer to your Far East customers. Below is a table listing the regions that provide support for Amazon EC2.

REGION CODE       REGION NAME
ap-northeast-1    Asia Pacific (Tokyo)
ap-southeast-1    Asia Pacific (Singapore)
ap-southeast-2    Asia Pacific (Sydney)
eu-west-1         EU (Ireland)
eu-central-1      EU (Frankfurt)
sa-east-1         South America (Sao Paulo)
us-east-1         US East (Northern Virginia)
us-west-1         US West (Northern California)
us-west-2         US West (Oregon)

For more information about the regions and endpoints for Amazon EC2, see Regions and Endpoints in the
Amazon Web Services General Reference. For more information about endpoints and protocols in AWS
GovCloud (US), see AWS GovCloud (US) Endpoints in the AWS GovCloud (US) User Guide.

DEPLOYMENT BEST PRACTICES FOR HAZELCAST IMDG ON AWS


The sections below provide best practices and recommendations for deploying Hazelcast on AWS.

Which EC2 Instance Type to Choose

Hazelcast IMDG is an in-memory data grid that distributes data and computation across nodes connected by a network, which makes it very sensitive to the network and its characteristics. Below are some recommendations on which instance types to choose:

1. Not all EC2 instance types are the same in terms of network performance. It is highly recommended to use Enhanced Networking, i.e. instances that have 10 Gigabit or High network performance, for Hazelcast deployments.
2. EC2 uses two types of virtualization:
   a. Paravirtual (PV)
   b. Hardware virtual machine (HVM)
   PV instances are known to have higher overhead for system calls than HVM, and system calls are a major resource-consuming aspect of Hazelcast's synchronous operations. The performance difference can be substantial, therefore it is strongly recommended to use HVM instances over PV where possible.

   Note: To be able to use HVM instances, you must pick a supported instance type and choose an HVM-enabled AMI. All of the instance types listed in the Instance Types table support HVM.

3. All Hazelcast cluster members should run on VMs hosted in the same placement group. Keeping the physical hardware that hosts the VMs close together within the same placement group has a significant impact on performance (it can easily lead to a 50% performance increase). The cluster placement strategy puts instances in the same low-latency group (an example AWS CLI command is shown after this list).
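
A cluster placement group can be created ahead of time and then referenced when launching the instances. The following is a minimal sketch using the AWS CLI; the group name, AMI ID, instance type, and count are placeholders, not values from this guide:

# Create a cluster (low-latency) placement group for the Hazelcast members
aws ec2 create-placement-group --group-name hazelcast-cluster-pg --strategy cluster

# Launch the member instances into that placement group (AMI, type, and count are examples)
aws ec2 run-instances --image-id ami-12345678 --instance-type r5.xlarge --count 3 \
    --placement GroupName=hazelcast-cluster-pg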

Selecting EC2 Instance Type

Hazelcast is an in-memory data grid that distributes data and computation across the members connected by a network, making Hazelcast very sensitive to the network. Not all EC2 instance types are the same in terms of network performance. It is recommended that you choose instances that have High or 10 Gigabit+ network performance for Hazelcast deployments. Currently AWS offers several network performance tiers, such as Up to 10 Gigabit, 10 Gigabit, 25 Gigabit, and 100 Gigabit, which correspond to dedicated bandwidths of 1.5 Gbps, 7 Gbps, 14 Gbps, and 14 Gbps respectively. Given that Hazelcast is a TCP socket based solution, it is important to allocate an appropriate amount of bandwidth based on your needs. We recommend using the Hazelcast Simulator for performing load tests to help create a capacity plan. For further assistance please see https://hazelcast.com/services.

You can check the latest instance types at: https://aws.amazon.com/ec2/instance-types/

We have found the Memory Optimized instances to be best suited for Hazelcast. Amazon EC2 R5 instances are the next generation of memory optimized instances for the Amazon Elastic Compute Cloud. R5 instances are well suited for memory intensive applications such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications. Additionally, R5d instances have local storage, offering up to 3.6TB of NVMe-based SSDs.

Configuring the Security

By default, Hazelcast uses port 5701. It is recommended to create a Hazelcast-specific security group (e.g. sg-hazelcast) and add to it an inbound rule that allows port 5701 from sg-hazelcast itself. Follow the instructions below:

* Open the Amazon EC2 console.
* Click Security Groups in the left menu.
* Click Create Security Group, enter a name (e.g. sg-hazelcast) and a description for the security group, and click Yes, Create.
* On the Security Groups page, select the security group sg-hazelcast in the right pane.
* In the detail area below the security group list, select the Inbound tab.
* Select Custom TCP rule in the field Create a new rule. Type 5701 into the field Port range and sg-hazelcast into Source.
* Click Add Rule.
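
The same setup can be scripted with the AWS CLI. The sketch below assumes the default VPC (where security groups can be referenced by name); in a non-default VPC you would use group IDs instead:

# Create a Hazelcast-specific security group (the name sg-hazelcast matches the example above)
aws ec2 create-security-group --group-name sg-hazelcast --description "Hazelcast cluster members"

# Allow members of sg-hazelcast to reach each other on Hazelcast's default port 5701
aws ec2 authorize-security-group-ingress --group-name sg-hazelcast \
    --protocol tcp --port 5701 --source-group sg-hazelcast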

Configuring EC2 Auto Discovery

Hazelcast supports a wide variety of discovery mechanisms, including TCP, Multicast, GCP Cloud, Apache jclouds, Azure Cloud, ZooKeeper, etcd, PCF, OpenShift, Heroku, Kubernetes, and AWS Cloud. To configure discovery using TCP/IP, you need the IP addresses upfront, and this is not always possible. To solve this problem, Hazelcast supports EC2 auto discovery, which is a layer on top of TCP/IP discovery. EC2 auto discovery uses the AWS API to get the IP addresses of possible Hazelcast nodes and feeds those IP addresses to TCP/IP discovery. This way the discovery process becomes dynamic and it eliminates the need to know the IP addresses upfront. To limit the IP addresses to Hazelcast-related nodes only, EC2 discovery supports filtering based on security group and/or tags. You can find more about AWS Cloud Discovery at https://github.com/hazelcast/hazelcast-aws.

Note: EC2 Auto Discovery requires the use of user keys. It’s recommended that you create a new IAM user
with limited permissions to support this function. See IAM Best Practices for more information on how to
manage IAM users in your account.
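
The EC2 discovery mechanism only needs to describe instances, so a minimal policy for such an IAM user (or for an IAM role such as the ec2-describe-role used in the sample below) could look like the following sketch:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}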


A simple cluster configuration should be as follows:

* If you have a dependency on hazelcast-all.jar, there is no need to do anything. If you are not using hazelcast-all.jar but depend on hazelcast.jar and the other module jars that you need, then you should add hazelcast-aws.jar too.
* Disable Multicast and TCP/IP in the join section of the cluster configuration XML file.
* Enable AWS and provide the iam-role in the join section.
* Set the correct region. If it is not set, the default region us-east-1 will be used.
* As an important recommendation, you can add the security group you specified in the EC2 management console (see the Configuring the Security section) to narrow the Hazelcast nodes down to members of this security group. This is done by specifying the security group name in the configuration XML file (using the security-group-name tag; in the sample below it is given as sg-hazelcast).
* You can also add any available tags to narrow down to Hazelcast nodes only, using the tag-key and tag-value tags.
* If the cluster that you are trying to form is relatively big (more than 20 nodes), you may need to increase the connection timeout (specified by the conn-timeout-seconds attribute). Otherwise, it is possible that not all members join the cluster. Setting this timeout on the tcp-ip element may look confusing since we are using AWS, but remember that AWS discovery uses TCP/IP discovery underneath.

A sample configuration is shown below:

<join>
    <tcp-ip conn-timeout-seconds="20" enabled="false">
    </tcp-ip>
    <multicast enabled="false">
        ......
    </multicast>
    <aws enabled="true">
        <iam-role>ec2-describe-role</iam-role>
        <region>us-east-1</region>
        <host-header>ec2.amazonaws.com</host-header>
        <security-group-name>sg-hazelcast</security-group-name>
        <tag-key>type</tag-key>
        <tag-value>hz-nodes</tag-value>
    </aws>
</join>

Node Failure Detection: Ping

Hazelcast nodes send heartbeats to each other every second to determine whether each node is alive. If a node does not respond to heartbeats for 5 minutes, it is considered down. Now let's say a problem has occurred in the network (e.g. a node is terminated in the EC2 management console, a switch has broken, etc.) and a node, which is not in a garbage collection (GC) pause, cannot respond to heartbeat messages. How do you know whether that node is up or down? The answer is by using ICMP requests (pings).

Amazon EC2 instances do not respond to pings by default. To learn whether a node is up or down, pings should be enabled on EC2 through your security group, and ICMP must also be enabled in your Hazelcast configuration.
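
The heartbeat behavior itself is controlled by Hazelcast system properties. As an illustration only, the values below simply restate the interval and timeout described above; check your Hazelcast version for its actual defaults:

<properties>
    <!-- Interval between heartbeat messages, in seconds -->
    <property name="hazelcast.heartbeat.interval.seconds">1</property>
    <!-- Time without heartbeats after which a member is considered down, in seconds -->
    <property name="hazelcast.max.no.heartbeat.seconds">300</property>
</properties>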

To enable ICMP on EC2:

* After logging into AWS, from the management console click on "Security Groups" and select the related group.


* Click the "Inbound" tab and create a new rule with the following attributes:
  – Select "Custom ICMP Rule" in the field Create a new rule.
  – As the rule type, select Echo Request.
  – Into the Source field, type 0.0.0.0/0 to allow all nodes to ping your node, or type your own IP addresses.
* Or, you can use the EC2 command line tool and type the commands below.
  – To allow all nodes to ping yours:
    ec2-authorize default -P icmp -t -1:-1 -s 0.0.0.0/0
  – To allow specific nodes to ping yours:
    ec2-authorize default -P icmp -t -1:-1 -s <IP addresses>

To enable ICMP on Hazelcast:

In your Hazelcast configuration, set the property hazelcast.icmp.enabled to true (which is false by default):

<properties>
<property name="hazelcast.icmp.enabled">true</property>
</properties>

Now Hazelcast nodes can ping each other. If they get a ping response from a node that does not answer heartbeats, that node is assumed to be in a GC pause or suffering from some anomaly such as an overloaded CPU. In that case the nodes keep waiting for heartbeat responses, and if there is still no heartbeat for 5 minutes, the node is considered down.

On the other hand, if the nodes cannot get a response to their pings from a node, that node is considered down. This way a failure can be detected very quickly.

Network Latency

Since data is sent and received very frequently in Hazelcast applications, network latency becomes a crucial issue. In general, moving groups of servers around in the hope of better performance is not required. You will have a good experience if you follow the tenets in this guide, as well as the additional recommendations around using HVM (SR-IOV) and placement groups. If you are not meeting performance expectations for network latency, the recommended course of action is to engage support.

Following are some tips for building the EC2 infrastructure for Hazelcast IMDG:

* Create a cluster only within a region. We do not recommend deploying a single cluster that spans multiple regions.
* If a Hazelcast application is hosted on Amazon EC2 instances in multiple EC2 regions, latency can be reduced by serving end users' requests from the EC2 region that has the lowest network latency. Changes in network connectivity and routing result in changes in the latency between hosts on the Internet. Amazon has a web service (Route 53) that lets cloud architects use DNS to route end-user requests to the EC2 region that gives the fastest response. This latency-based routing is based on latency measurements performed over a period of time. Please have a look at http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/HowDoesRoute53Work.html.
* If necessary, simply move the deployment to another region. The tool located at http://www.cloudping.info/ gives instant estimates of the latency from your location. By using it frequently, you can determine the regions that have the lowest latency for you.
* Also, the test provided at http://cloudharmony.com/speedtest allows you to test not only the network latency but also the download/upload speeds.


Debugging

When needed, Hazelcast IMDG can log the events for the instances that exist in a region. To see what has happened or to trace the activities while deploying, change the log level in your logging mechanism to FINEST or DEBUG. After this change, the generated log shows whether instances were accepted or rejected, and the reason for rejection of the rejected instances. Note that changing the log level to one of these levels may affect the performance of the cluster.
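
As an illustration only, if you rely on the JDK logging backend (hazelcast.logging.type set to jdk), a logging.properties file along the following lines raises the Hazelcast log level; property names differ for other logging frameworks such as log4j or logback:

# logging.properties - a sketch assuming java.util.logging
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = FINEST
com.hazelcast.level = FINEST

Start the member with -Djava.util.logging.config.file=logging.properties and -Dhazelcast.logging.type=jdk to pick these settings up.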

HIGH AVAILABILITY ACROSS ZONES


Hazelcast IMDG is highly available because it stores backups of the data.

Let us explain this with an example that does not use partition groups:

Assume that you have 6,000 entries to be stored on 6 nodes, and the backup count is 1 (meaning every entry in a distributed map has 1 backup in the cluster). In this case, each node stores around 1,000 entries. To be highly available, Hazelcast will by default back up each node's 1,000 entries on the other 5 nodes, storing approximately 200 of them on each. So each node holds 1,000 entries as primary and another 1,000 entries, made up of 200 entries from each of the other nodes, as backup, as shown in the illustration below:

[Illustration: a Hazelcast IMDG cluster of 6 nodes; each node holds 1,000 primary entries and 1,000 backup entries composed of 200 entries from each of the other five nodes.]

If a node crashes, Hazelcast IMDG will automatically recover the lost data from the backups. But if two nodes are lost at the same time in this configuration, the cluster will lose some data (roughly 400 entries in this example). Putting this in EC2 terms: in an EC2 environment, it is quite possible for an entire EC2 zone to go down, taking all of your nodes in that zone with it. To avoid losing data in that case, Hazelcast supports partition groups, with which you create a cluster within the same region that spans multiple zones.

Continuing the example above, assume that you have a 6-node cluster in us-east-1, where three nodes are in us-east-1a and the other three are in us-east-1b. What you would like is for one zone to hold the backups of the other zone. That is, when you store 6,000 entries, each node still holds 1,000, but the backups of those 1,000 entries reside on the three nodes in the other zone. This way, if one of the zones goes down entirely, Hazelcast will not lose any data even though the backup count is 1. See the illustration below:


[Illustration: the same 6-node cluster split across availability zones us-east-1a (Nodes 1-3) and us-east-1b (Nodes 4-6); each node holds 1,000 primary entries, and the backup entries of each zone reside on the nodes of the other zone.]

Now, to have one zone hold the backups of the other, you need to create member (node) groups under the partition-group tag. For the above example, you create a group of 3 members for each zone, and the two groups will be the backups of each other. To create the member groups, you simply list the IP addresses of the nodes under the member-group tag. Below is a sample configuration:

<hazelcast>
    <partition-group enabled="true" group-type="CUSTOM">
        <member-group>
            <interface>12.12.12.1</interface>
            <interface>12.12.12.2</interface>
            <interface>12.12.12.3</interface>
        </member-group>
        <member-group>
            <interface>12.12.13.*</interface>
        </member-group>
    </partition-group>
</hazelcast>

As you can see in the above sample, wildcards can be used for IP addresses (e.g. 12.12.13.* stands for the IP
addresses ranging between 12.12.13.0 and 12.12.13.255).

Now you have 2 member groups: the first one includes the nodes in, say, us-east-1a, and the second one includes the nodes in us-east-1b. Hazelcast groups the primary partitions of the nodes in each member group (zone) and distributes the backups of the nodes in one zone to the nodes in the other zone (instead of distributing the backups to all other nodes).


ZONE AWARE PARTITION GROUPS


You can use ZONE_AWARE configuration with the Hazelcast AWS Discovery Service plugin.

As a discovery service, this plugin puts zone information into the Hazelcast member attributes map during the discovery process. When ZONE_AWARE is configured as the partition group type, Hazelcast creates the partition groups from the member attribute entries that contain zone information. That means backups are created in the other zones, and each zone is treated as one partition group.

When using the ZONE_AWARE partition grouping, a Hazelcast cluster spanning multiple AZs should have
an equal number of members in each AZ. Otherwise, it will result in uneven partition distribution among the
members.

Zone Aware is enabled with the following configuration:

<partition-group enabled="true" group-type="ZONE_AWARE" />
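
For example, combined with the AWS discovery settings shown earlier, a zone-aware member configuration could look like the sketch below; the exact elements available depend on your Hazelcast and hazelcast-aws plugin versions:

<hazelcast>
    <network>
        <join>
            <multicast enabled="false"/>
            <tcp-ip enabled="false"/>
            <aws enabled="true">
                <iam-role>ec2-describe-role</iam-role>
                <region>us-east-1</region>
                <security-group-name>sg-hazelcast</security-group-name>
            </aws>
        </join>
    </network>
    <partition-group enabled="true" group-type="ZONE_AWARE"/>
</hazelcast>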

CONFIGURING AUTO SCALING


The recommended solution is to use Auto Scaling Lifecycle Hooks with Amazon SQS and a custom Lifecycle Hook Listener script. Note, however, that if your cluster is small and predictable, you can try the alternative solution mentioned in the conclusion.

AWS Auto Scaling Architecture

[Diagram: an AWS Auto Scaling group of EC2 instances, each running a Hazelcast member and a lifecycle_hook_listener.sh script. The Instance Launch and Instance Terminate lifecycle hooks add messages to an AWS SQS (Simple Queue Service) queue, and the listener script on each instance receives those messages.]

Setting it up requires the following steps (an example of the lifecycle hook commands is sketched after this list):

* Create an AWS SQS queue.
* Create an Amazon Machine Image (AMI) which includes Hazelcast and the Lifecycle Hook Listener script.
* Create an Auto Scaling Group (with a Scaling Policy to increase/decrease by 1 instance, or by the number of instances within a partition group; see Zone Aware Partition Groups for further information).
* Create the Lifecycle Hooks ("Instance Launch" and "Instance Terminate").
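
As a sketch only, the queue and the two lifecycle hooks could be created with the AWS CLI as follows; the queue name, Auto Scaling group name, account ID, and role ARN are placeholders:

# Create the SQS queue that will receive lifecycle notifications
aws sqs create-queue --queue-name hazelcast-lifecycle-queue

# Attach launch and terminate lifecycle hooks to the Auto Scaling group
aws autoscaling put-lifecycle-hook \
    --lifecycle-hook-name hazelcast-launch-hook \
    --auto-scaling-group-name hazelcast-asg \
    --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
    --notification-target-arn arn:aws:sqs:us-east-1:123456789012:hazelcast-lifecycle-queue \
    --role-arn arn:aws:iam::123456789012:role/autoscaling-sqs-role

aws autoscaling put-lifecycle-hook \
    --lifecycle-hook-name hazelcast-terminate-hook \
    --auto-scaling-group-name hazelcast-asg \
    --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
    --notification-target-arn arn:aws:sqs:us-east-1:123456789012:hazelcast-lifecycle-queue \
    --role-arn arn:aws:iam::123456789012:role/autoscaling-sqs-role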

HAZELCAST MEMBER CLI


https://github.com/hazelcast/hazelcast-member-tool

The command line tool hazelcast-member is packaged for yum (Red Hat), apt (Debian/Ubuntu) and brew (macOS).

'hazelcast-member start'

It has many command line options for adding config files, and it has a nice process management feature, much like Docker, where the processes have names. You can bind to specific interfaces, set Java memory, etc.

Try 'hazelcast-member status'

It also has tab completion, which is handy for commands such as stop.

To enable tab completion, add the following to your profile:

if which hazelcast-member > /dev/null; then eval "$(hazelcast-member init -)"; fi

Bash completion has been installed to:

/usr/local/etc/bash_completion.d

Mac Install:

$ brew tap hazelcast/homebrew-hazelcast
$ brew install hazelcast-member

Debian/Ubuntu Install:

Resolving .deb packages


Using the command line, import Bintray’s GPG key:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61

Add the following to your /etc/apt/sources.list system config file and update the lists of packages:

echo "deb https://dl.bintray.com/hazelcast/deb stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update

Installing hazelcast-member

Using the command line, execute the following command:

sudo apt-get install hazelcast-member

RedHat Install:

Resolving RPM packages


This is what you need to do to resolve RPM artifacts from Bintray:

1. Run the following to get a generated .repo file:

   wget https://bintray.com/hazelcast/rpm/rpm -O bintray-hazelcast-rpm.repo

   Or, copy this text into a bintray-hazelcast-rpm.repo file on your Linux machine:

   #bintraybintray-hazelcast-rpm - packages by hazelcast from Bintray
   [bintraybintray-hazelcast-rpm]
   name=bintray-hazelcast-rpm
   baseurl=https://dl.bintray.com/hazelcast/rpm
   gpgcheck=0
   repo_gpgcheck=0
   enabled=1

2. From the command line:

   sudo mv bintray-hazelcast-rpm.repo /etc/yum.repos.d/
Installing hazelcast-member

Using the command line, execute the following command:

yum install hazelcast-member

AWS ELASTIC CONTAINER SERVICE FOR KUBERNETES (EKS)


Cross Region and Cross Zone Deployment

[Diagram: two deployments, one in region US-WEST-2 spanning availability zones 2a, 2b and 2c, and one in region US-EAST-1 spanning availability zones 1a, 1b and 1c.]


Here is a list of tools and resources needed for a deployment of Hazelcast using Kubernetes on AWS with EKS:

* AWS command line tool (CLI) - https://docs.aws.amazon.com/cli/latest/userguide/installing.html
* AWS IAM Authenticator for Kubernetes - https://github.com/kubernetes-sigs/aws-iam-authenticator
* Hazelcast Deployment on Kubernetes - https://github.com/hazelcast/charts
* eksctl - a CLI for Amazon EKS - https://eksctl.io/

Minimum required versions:

* Kubernetes version 1.10
* Helm Client & Server 2.9.1
* Tiller 2.9.1
* AWS CLI version 1.16.18
* Python version Python 3, or Python 2.7.9
* AWS-IAM-AUTHENTICATOR (same as CLI)

Once you have the AWS CLI installed and configured for your credentials/region, check the version:

$ aws --version
aws-cli/1.16.49 Python/2.7.10 Darwin/18.0.0 botocore/1.12.39

Check your Kubernetes, tiller, and nodes:

$ kubectl get all --namespace kube-system


NAME READY STATUS RESTARTS AGE
pod/aws-node-ksz74 1/1 Running 0 3h
pod/aws-node-l6flf 1/1 Running 1 3h
pod/kube-dns-7cc87d595-4q9vt 3/3 Running 0 4h
pod/kube-proxy-6lg88 1/1 Running 0 3h
pod/kube-proxy-ll89n 1/1 Running 0 3h
pod/tiller-deploy-5c688d5f9b-mf7rb 1/1 Running 0 2h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP 4h
service/tiller-deploy ClusterIP 10.100.175.241 <none> 44134/TCP 2h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/aws-node 2 2 2 2 2 <none> 4h
daemonset.apps/kube-proxy 2 2 2 2 2 <none> 4h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-dns 1 1 1 1 4h
deployment.apps/tiller-deploy 1 1 1 1 2h
NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-dns-7cc87d595 1 1 1 4h
replicaset.apps/tiller-deploy-5c688d5f9b 1 1 1 2h
replicaset.apps/tiller-deploy-f9b8476d 0 0 0 2h   <-- this is a broken tiller deployment with no replicas (0 desired, 0 current, 0 available)

Tiller may need to be fixed with the following commands:

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'


With Kubernetes, Helm, Tiller, the AWS CLI, aws-iam-authenticator, and eksctl all installed, you can then execute the one-line create cluster command:

$ eksctl create cluster --name=hazelcast-imdg --nodes=4 --node-type=m5.xlarge

Example output from the create cluster command:

$ eksctl create cluster
2018-11-08T12:41:27-05:00 [ℹ] using region us-east-1
2018-11-08T12:41:27-05:00 [ℹ] setting availability zones to [us-east-1a us-east-1b us-east-1e]
2018-11-08T12:41:27-05:00 [ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.96.0/19
2018-11-08T12:41:27-05:00 [ℹ] subnets for us-east-1b - public:192.168.32.0/19 private:192.168.128.0/19
2018-11-08T12:41:27-05:00 [ℹ] subnets for us-east-1e - public:192.168.64.0/19 private:192.168.160.0/19
2018-11-08T12:41:28-05:00 [ℹ] using "ami-0440e4f6b9713faf6" for nodes
2018-11-08T12:41:28-05:00 [ℹ] creating EKS cluster "ferocious-mushroom-1541698887" in "us-east-1" region
2018-11-08T12:41:28-05:00 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2018-11-08T12:41:28-05:00 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --name=ferocious-mushroom-1541698887'
2018-11-08T12:41:28-05:00 [ℹ] creating cluster stack "eksctl-ferocious-mushroom-1541698887-cluster"
2018-11-08T12:51:25-05:00 [ℹ] creating nodegroup stack "eksctl-ferocious-mushroom-1541698887-nodegroup-0"
2018-11-08T12:54:51-05:00 [✔] all EKS cluster resource for "ferocious-mushroom-1541698887" had been created
2018-11-08T12:54:51-05:00 [✔] saved kubeconfig as "/Users/terrywalters/.kube/config"
2018-11-08T12:54:51-05:00 [ℹ] the cluster has 0 nodes
2018-11-08T12:54:51-05:00 [ℹ] waiting for at least 2 nodes to become ready
2018-11-08T12:55:23-05:00 [ℹ] the cluster has 2 nodes
2018-11-08T12:55:23-05:00 [ℹ] node "ip-192-168-25-67.ec2.internal" is ready
2018-11-08T12:55:23-05:00 [ℹ] node "ip-192-168-63-184.ec2.internal" is ready
2018-11-08T12:55:25-05:00 [ℹ] kubectl command should work with "/Users/terrywalters/.kube/config", try 'kubectl get nodes'
2018-11-08T12:55:25-05:00 [✔] EKS cluster "ferocious-mushroom-1541698887" in "us-east-1" region is ready

Deploy Hazelcast with Management Center

$ helm install --set hazelcast.licenseKey=ENTERPRISE_KEY_HERE --name my-release hazelcast/hazelcast-enterprise
… output omitted ...
** Hazelcast cluster is being deployed! **


By changing the aws configure region setting to a different region (check AWS EKS for available regions), the same deployment was repeated in both us-east-1 and us-west-2.

For each region you will find a VPC with numerous subnets that span the availability zones (us-east-1a, us-east-1b, ...).

350 Cambridge Ave, Suite 100, Palo Alto, CA 94306 USA
Email: sales@hazelcast.com  Phone: +1 (650) 521-5453
Visit us at www.hazelcast.com

Hazelcast, and the Hazelcast, Hazelcast Jet and Hazelcast IMDG logos are trademarks of Hazelcast, Inc. All other trademarks used herein are the property of their respective owners. ©2019 Hazelcast, Inc. All rights reserved.
