
5th August 2018

GNS3 v2 with Google Compute Engine

[https://res.cloudinary.com/binarynature/image/upload/v1503009271/gns3-google-cloud-header_mwm46q.png]
I can't believe it's been almost four years since I wrote the original GNS3 with Google Compute Engine
[https://binarynature.blogspot.com/2014/12/gns3-with-google-compute-engine.html] post. Both GNS3 and Google Compute
Engine have evolved quite a bit from then to now. What hasn't changed is our need for more horsepower to run our
ever-expanding labs.

What's new and improved?


Updated for GNS3 v2.x

The original post covered GNS3 version 1.x. Refer to the following for information on changes and new features:
What’s new in GNS3 version 2.0? [https://docs.gns3.com/1jtdTQAcKa7JmQTNH2LoxQmOYalts7O0urmZ9CNnoEpU/index.html]
What’s new in GNS3 version 2.1? [https://docs.gns3.com/1_xhTp3fAzIKfE6ujQ-P1_CCaKuhgbQblOT7uITr91qI/index.html]
What’s new in GNS3 version 2.2? [https://docs.google.com/document/d/1auCG_fHgJrG73iwvQuvONsacknnIYfmayeYRcpt70sE/preview]

KVM nested virtualization for Google Compute Engine

Google Compute Engine now supports nested virtualization [https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances]. This feature is HUGE. We were previously limited to Dynamips, IOL, VPCS, Docker, and QEMU (lacking hardware-assisted virtualization) for running GNS3 devices within the confines of a GCE VM instance. QEMU with KVM [https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine] was the missing piece. Not anymore. Resource-hungry network virtual appliances like the PAN VM [https://docs.gns3.com/appliances/pan-vm-fw.html], Cisco Nexus 9000v [https://docs.gns3.com/appliances/cisco-nxosv9k.html], and even the monstrous Cisco IOS XRv 9000 [https://docs.gns3.com/appliances/cisco-iosxrv9k.html] can now be run in our GNS3 projects with Google Compute Engine.

Deployment streamlined with Ansible

Let's face it: the original post required far too many manual steps. Now, after setting a few variables, a single Ansible playbook run automates over 70 tasks in less than 10 minutes.
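The demo recording linked below captures a full run. Its final frame, querying the new server's REST API across the WireGuard tunnel, confirms the gns3server compute is up and reachable: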
❯ curl -s http://172.16.253.1:3080/v2/computes | jq '.'
[
  {
    "capabilities": {
      "node_types": [
        "cloud",
        "ethernet_hub",
        "ethernet_switch",
        "nat",
        "vpcs",
        "virtualbox",
        "dynamips",
        "frame_relay_switch",
        "atm_switch",
        "qemu",
        "vmware",
        "traceng",
        "docker",
        "iou"
      ],
      "platform": "linux",
      "version": "2.2.5"
    },
    "compute_id": "local",
    "connected": true,
    "cpu_usage_percent": 0.2,
    "host": "127.0.0.1",
    "last_error": null,
    "memory_usage_percent": 4.6,
    "name": "gns3server",
    "port": 3080,
    "protocol": "http",
    "user": null
  }
]


[https://asciinema.org/a/312158?speed=4]

Simple and secure WireGuard VPN connection to remote gns3server

WireGuard is a simple yet fast and modern VPN that utilizes state-of-the-art cryptography. WireGuard aims to be as
easy to configure and deploy as SSH. A VPN connection is made by exchanging public keys – precisely like
exchanging SSH keys – and WireGuard transparently handles the rest.

[https://res.cloudinary.com/binarynature/image/upload/v1583864230/wg-gcp-gns3server_iawxr8.png]
Dynamic DNS (DDNS)
Our VM instance uses an ephemeral external IP address. An ephemeral external IP address remains attached to an
instance only until it's stopped and restarted, or the instance is terminated. When a stopped instance is started again, a
new ephemeral external IP address is assigned to the instance.

A static external IP address would be preferable, but Google charges for a reserved address that is not in use, which includes any time the VM instance is stopped. Instead of modifying the WireGuard client configuration every time the instance is started, we link the ephemeral external IP address to Dynamic DNS
[https://www.cloudflare.com/learning/dns/glossary/dynamic-dns] .
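For context, keeping the Duck DNS record in sync takes only a single HTTPS request to the Duck DNS update API; leaving the ip parameter empty tells Duck DNS to use the source address of the request. The deployment handles this for us, but a minimal sketch of the call (using the domain and token values you'll create in step 01) looks like:

$ curl -s "https://www.duckdns.org/update?domains=<subdomain>&token=<token>&ip="
OK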

Steps
01. Create a domain with Duck DNS.
Navigate your web browser to Duck DNS [https://www.duckdns.org] , and use one of the OAuth providers to create an
account. We need two values from Duck DNS: the token and domain. A token is automatically generated, but you
need to add a domain (subdomain). The domain is the unique identifier for our specific VM instance on the public
Internet.

02. Create a WireGuard tunnel for the remote GNS3 server connection.
a. Download and install WireGuard [https://www.wireguard.com/install/] .
b. Open the WireGuard application.
c. Press the Ctrl + N key combination to add an empty tunnel.
d. Enter gns3server in the Name field.
e. Add the following content starting at the line below the PrivateKey attribute:

Address = 172.16.253.2/24

[Peer]
PublicKey =
AllowedIPs = 172.16.253.0/24
Endpoint = <subdomain>.duckdns.org:51820
PersistentKeepalive = 25

Your Endpoint value is the Duck DNS domain you just created with the WireGuard port number appended. Don't save
yet because we still require the public key value from the peer. We will come back to complete the configuration in a
later step.
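Two notes on this configuration: AllowedIPs = 172.16.253.0/24 makes this a split tunnel, so only traffic destined for the 172.16.253.0/24 point-to-point network (our GNS3 client-to-server traffic) is routed over the VPN, while the rest of your traffic is unaffected. PersistentKeepalive = 25 sends a keepalive packet every 25 seconds, which keeps the NAT or firewall mapping open so the tunnel remains reachable.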

Take note of your Public key value right below the Name property. An Ansible variable (wireguard_peer_pubkey) will
require this in a later step.
[https://res.cloudinary.com/binarynature/image/upload/v1583958664/wg-gns3server-new-tunnel-pre_sckwls.png]
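By the way, the WireGuard app generates the key pair for you when you add an empty tunnel. If you're working on a platform without the GUI, the same key pair can be created with the wg command-line utility; a minimal sketch (file names are arbitrary):

$ wg genkey | tee privatekey | wg pubkey > publickey
$ cat publickey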
03. Create a new GCP project.
a. Go to the Manage resources [https://console.cloud.google.com/cloud-resource-manager] page.
b. Click the CREATE PROJECT button.
c. Enter a value for Project Name.
d. Click the CREATE button.

New Customers: Google offers a generous free trial [https://cloud.google.com/free/docs/frequently-asked-questions] .

04. Enable the Google Compute Engine API.


The Google Compute Engine API [https://cloud.google.com/compute/docs/reference/rest/v1/] can be enabled by simply
navigating to the VM Instances page.

a. Go to the VM Instances [https://console.cloud.google.com/compute/instances] page.


b. Wait for the "Compute Engine is getting ready. This may take a minute or more." message to clear.

NOTE: You may need to click the bell icon (Notifications) in the upper-right corner to display the current status if a couple of minutes have passed without a page refresh.

05. Activate the Google Cloud Shell.


The next several steps utilize the Google Cloud Shell [https://cloud.google.com/shell] command-line interface.

[https://res.cloudinary.com/binarynature/image/upload/v1584924250/gcloud-shell-icon_txmarq.png]
06. Install Ansible.
The Google Cloud Shell instance provides a good selection of tools and utilities, but Ansible is not one of them. This
omission is simple to resolve.

Install Ansible with the Python package manager.

$ pip3 install --user --upgrade ansible

07. Prepend the PATH variable with the $HOME/.local/bin directory.


We need to instruct the bash shell to search for the Ansible executables in the $HOME/.local/bin directory. The
directory is not included in the PATH variable by default.

Open the shell .profile file with the nano text editor.

$ nano $HOME/.profile

Add the last three lines shown below (the $HOME/.local/bin block) to the end of the file; everything above them is the existing content of .profile.

# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.

# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022

# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi

if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi

Press control + o to save the file, the enter key to confirm, and then control + x to exit the nano text editor.

Verify the configuration change.


$ tail -n 3 $HOME/.profile
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi
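(If you'd rather not restart the shell, running source $HOME/.profile applies the change to the current session. The next two steps accomplish the same thing by reopening Cloud Shell.)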

08. Exit the Google Cloud Shell.


The PATH variable modification won't take effect until we reopen the Google Cloud Shell.

$ exit

09. Reactivate the Google Cloud Shell.

[https://res.cloudinary.com/binarynature/image/upload/v1584924250/gcloud-shell-icon_txmarq.png]
10. Verify the PATH variable is updated for the Ansible binaries.

$ ansible --version
ansible 2.9.6
  config file = None
  configured module search path = ['/home/marc/.ansible/plugins/modules', '/usr/share/ansib
  ansible python module location = /home/marc/.local/lib/python3.7/site-packages/ansible
  executable location = /home/marc/.local/bin/ansible
  python version = 3.7.3 (default, Mar 10 2020, 02:33:39) [GCC 6.3.0 20170516]

11. Create an Ansible playbooks directory and change to it.

$ mkdir -p $HOME/playbooks && cd $_

12. Clone the gcp-gns3server repository from GitHub and change to the new directory.

$ git clone https://github.com/mweisel/gcp-gns3server.git


$ cd gcp-gns3server

13. Set variables for the deployment.


a. Click the Launch Editor icon on the Google Cloud Shell toolbar. It splits the web browser window with the text editor
at the top and the terminal at the bottom.

[https://res.cloudinary.com/binarynature/image/upload/v1583867282/gcloud-shell-toolbar-txt-editor_ocneiw.png]
b. Open the vars.yml file (playbooks > gcp-gns3server > vars.yml) in the editor. It's a YAML file, so I highly recommend you check out the YAML - Quick Guide [https://www.tutorialspoint.com/yaml/yaml_quick_guide.htm] if you're not familiar with the syntax.

c. At the top of the file, we have the PROJECT AND ZONE section. The gcp_project_id variable value is your Project
ID.
$ gcloud config get-value project
Your active configuration is: [cloudshell-8080]
my-gns3-project-0042

The gcp_zone variable value depends on your location and on whether the zone supports the Intel Skylake or Haswell CPU platform. Skylake or Haswell is a requirement for GCE nested virtualization.
Use GCP ping [http://www.gcping.com] to find the nearest region/zone [https://cloud.google.com/compute/docs/regions-zones].
Does the nearest region/zone support either Skylake or Haswell [https://cloud.google.com/compute/docs/regions-zones/#available]?

For example, I am located in the Pacific Northwest (PNW) of the United States, so my nearest region is us-west1 in
The Dalles, Oregon. I have a choice of us-west1-a, us-west1-b, or us-west1-c for zones in the region. Does the zone I
select support either Intel Skylake or Haswell?

$ gcloud compute zones describe us-west1-b --format="value(availableCpuPlatforms)"


Intel Skylake;Intel Broadwell;Intel Haswell;Intel Ivy Bridge;Intel Sandy Bridge

Looks good, so I enter these values with the text editor.

---
### PROJECT and ZONE ###

# https://console.cloud.google.com/iam-admin/settings/project
# `gcloud config get-value project`
gcp_project_id: my-gns3-project-0042
# https://cloud.google.com/compute/docs/regions-zones
# `gcloud compute zones list`
gcp_zone: us-west1-b

d. The STORAGE section pertains to the persistent disk [https://cloud.google.com/persistent-disk] properties. The defaults
provide us with 30 GB HDD storage, which may be sufficient for most use cases. These parameters also qualify for the
GCP Free Tier [https://cloud.google.com/free] .

You also have the option to increase the storage size and/or change the storage type to solid-state drive (SSD)
[https://en.wikipedia.org/wiki/Solid-state_drive] with the pd-ssd value. Additional charges will apply. See the Disk pricing
[https://cloud.google.com/compute/pricing#disk] section of the Google Compute Engine Pricing
[https://cloud.google.com/compute/pricing] page. For example, the following creates a persistent disk with a size of 64 GB
and type of SSD:

### STORAGE ###

gcp_disk_size: 64
gcp_disk_type: pd-ssd

e. In the COMPUTE section, the gcp_vm_type variable defines the machine type [https://cloud.google.com/compute/docs/machine-types]. The machine type we select mainly comes down to the type(s) of virtual devices and the number of them we plan to run within our VM instance. The Cisco VIRL Resource Calculator [https://learningnetworkstore.cisco.com/virlfaq/calculator] gives a reasonable estimate when using Cisco images. Your budget also comes into play. Again, use the Google Compute Engine Pricing [https://cloud.google.com/compute/pricing] page as a reference point.

NOTE: The n1-standard-2 machine type should be adequate for labs consisting of 4 to 8 Cisco IOSv/IOSvL2 devices.

### COMPUTE ###

# https://cloud.google.com/compute/vm-instance-pricing
gcp_vm_type: n1-standard-2
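As a rough sanity check on that recommendation: an n1-standard-2 instance provides 2 vCPUs and 7.5 GB of memory, and a Cisco IOSv node is typically allocated about 512 MB of RAM in GNS3, so eight IOSv devices consume roughly 4 GB and still leave headroom for the host OS and the GNS3 server itself. Plan on a larger machine type if you intend to run heavier appliances such as the Nexus 9000v or IOS XRv 9000.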

f. Next up is the GNS3 section. The gns3_version variable value needs to match the GNS3 client version installed on
your local computer.

[https://res.cloudinary.com/binarynature/image/upload/v1583869467/gns3-client-version_jfdeuk.png]

### GNS3 ###

# https://github.com/GNS3/gns3-server/releases
gns3_version: 2.2.5

g. The Duck DNS variables take the values from the first step. For example, my domain is binarynature and my token
is ce2f4de5-3e0f-4149-8bcc-7a75466955d5. FYI, that token is no longer valid ;)

### DUCKDNS ###

# https://www.duckdns.org
ddns_domain: binarynature
ddns_token: ce2f4de5-3e0f-4149-8bcc-7a75466955d5

h. Finally, let's wrap this up with the WIREGUARD section. Remember when I stated you should take note of the public
key value in the second step? This is where you enter it.

### WIREGUARD ###


wireguard_peer_pubkey: 1ciOzqdTvKu2hpJ89q4L3MgivQ+NtxWicf9xajbPQHc=

The vars.yml file is now complete. The Cloud Shell text editor should auto-save by default, but press the Ctrl + S key combination to be sure.
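Putting it all together, a completed vars.yml (using the example values from this post; yours will differ) looks something like this:

---
### PROJECT and ZONE ###
gcp_project_id: my-gns3-project-0042
gcp_zone: us-west1-b

### STORAGE ###
gcp_disk_size: 64
gcp_disk_type: pd-ssd

### COMPUTE ###
gcp_vm_type: n1-standard-2

### GNS3 ###
gns3_version: 2.2.5

### DUCKDNS ###
ddns_domain: binarynature
ddns_token: ce2f4de5-3e0f-4149-8bcc-7a75466955d5

### WIREGUARD ###
wireguard_peer_pubkey: 1ciOzqdTvKu2hpJ89q4L3MgivQ+NtxWicf9xajbPQHc=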

14. Create a Google Cloud service account and key file.


We need a method for authentication and authorization to the Google Cloud APIs with Ansible. The Ansible playbook
execution performs the following operations:
Add a Google Cloud service account with the Cloud IAM Editor role.
Create the associated service account key file.

$ ansible-playbook gcp_service_account_create.yml
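The playbook handles this for you, but for reference, the equivalent manual steps with gcloud would look roughly like the following (the service account name and key file path are illustrative only, not necessarily what the playbook uses):

$ gcloud iam service-accounts create ansible --display-name "ansible"
$ gcloud projects add-iam-policy-binding my-gns3-project-0042 \
    --member "serviceAccount:ansible@my-gns3-project-0042.iam.gserviceaccount.com" \
    --role "roles/editor"
$ gcloud iam service-accounts keys create $HOME/gcp-key.json \
    --iam-account ansible@my-gns3-project-0042.iam.gserviceaccount.com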

15. Provision Google Cloud resources and deploy the GNS3 server.
We're now ready to run the Ansible playbook to provision our Google Cloud resources and deploy the GNS3 server.

$ ansible-playbook site.yml

16. Complete the client WireGuard configuration and establish the VPN connection.
With the conclusion of the deployment, we can now retrieve the WireGuard public key from our peer (gns3server) to
complete the WireGuard VPN configuration on our local computer.

a. From the Cloud Shell terminal, copy the public_key value ...

...
RUNNING HANDLER [wireguard : syncconf wireguard] ******************************************
changed: [35.199.187.220]

RUNNING HANDLER [gns3 : restart gns3] *****************************************************
changed: [35.199.187.220]

TASK [print wireguard public key] *********************************************************
ok: [35.199.187.220] => {
    "public_key": "VknjKPWU3mJK6HlippeJ/LaeOH0uHOoA/lTCwzrKbTo="
}

PLAY RECAP ********************************************************************************
35.199.187.220             : ok=78   changed=68   unreachable=0   failed=0   skipped=3
localhost                  : ok=7    changed=3    unreachable=0   failed=0   skipped=1

Playbook run took 0 days, 0 hours, 6 minutes, 14 seconds

and paste it on the right side of the = operator for the PublicKey variable name.
[https://res.cloudinary.com/binarynature/image/upload/v1583958677/wg-gns3server-new-tunnel-post_scegug.png]
b. Click the Save button.
c. At the WireGuard Tunnels list window, click the Activate button for the gns3server entry.
[https://res.cloudinary.com/binarynature/image/upload/v1583958693/wg-gns3server-activate_browjy.png]
17. Bind the local GNS3 client to the remote GNS3 server.
a. Open the GNS3 client application.
b. In the Setup Wizard [https://docs.gns3.com/1yL-p0vPROWPTkQqkEzL2IaDu7iYW-PUzpFamnksHH98/index.html#h.cbe0x5ipb0vm]
window, select Run appliances on a remote server (advanced usage) for server type.
c. Click the Next button.
d. Enter the following values for the Host and Port properties:
Host: 172.16.253.1
Port: 3080 TCP
e. Click the Next button.
f. Click the Finish button.
[https://res.cloudinary.com/binarynature/image/upload/v1583454245/gns3-gui-remote-srv-cfg_hyerh8.png]
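If the wizard can't reach the server, a quick check from your local terminal is to query the GNS3 REST API over the tunnel (assuming the WireGuard session is active); it should return a small JSON document containing the server version:

$ curl -s http://172.16.253.1:3080/v2/version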
18. All set.
Most of the backend work is complete, so I now hand you off to the following resources to help you further configure
your GNS3 environment:
GNS3 Documentation [https://docs.gns3.com]
GNS3 Fundamentals (Official Course) Part 1 [http://academy.gns3.com/p/gns3-fundamentals-official-course]
GNS3 Fundamentals (Official Course) Part 2 [http://academy.gns3.com/p/gns3-fundamentals-official-course2]
Store and retrieve GNS3 images with Google Cloud Storage [https://binarynature.blogspot.com/2018/08/store-retrieve-gns3-images-with-google-cloud-storage.html]

GNS3 with Remote Server Workflow


01. Start the VM instance with one of the following:
a. The Google Cloud Shell [https://cloud.google.com/shell] (or the Google Cloud SDK [https://cloud.google.com/sdk/docs], if installed locally):

$ gcloud compute instances list


NAME        ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
gns3server  us-west1-b  n1-standard-2               10.138.0.2                TERMINATED

$ gcloud compute instances start gns3server --zone us-west1-b --quiet


Starting instance(s) gns3server...done.
Updated [https://www.googleapis.com/compute/v1/projects/my-gns3-project-0042/zones/us-west1

b. The Google Cloud Console [https://cloud.google.com/cloud-console] :


[https://res.cloudinary.com/binarynature/image/upload/v1533076908/start-gns3server-console_wrx9sw.png]
c. The Google Cloud Console mobile app [https://cloud.google.com/console-app]

02. Activate the WireGuard VPN session.
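On Windows and macOS, that's the Activate button in the WireGuard app. On a Linux client, the equivalent (assuming the tunnel configuration is saved as /etc/wireguard/gns3server.conf) is:

$ sudo wg-quick up gns3server
$ sudo wg show gns3server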

03. Open the GNS3 client application.

04. Start the node(s) within GNS3.

05. Happy Labbing!

06. Save the configuration at the node-level (e.g., copy run start, commit, etc.).

07. Stop the node(s) within GNS3.

08. Exit the GNS3 client application.

09. Deactivate the WireGuard VPN session.

10. Stop the VM instance with one of the following:


a. The Google Cloud Shell [https://cloud.google.com/shell] (or the Google Cloud SDK [https://cloud.google.com/sdk/docs], if installed locally):

$ gcloud compute instances list


NAME        ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP     EXTERNAL_IP     STATUS
gns3server  us-west1-b  n1-standard-2               10.138.0.2      35.199.147.244  RUNNING

$ gcloud compute instances stop gns3server --zone us-west1-b --quiet


Stopping instance(s) gns3server...done.
Updated [https://www.googleapis.com/compute/v1/projects/my-gns3-project-0042/zones/us-west1

b. The Google Cloud Console [https://cloud.google.com/cloud-console]


c. The Google Cloud Console mobile app [https://cloud.google.com/console-app] :
[https://res.cloudinary.com/binarynature/image/upload/v1533091945/stop-gns3server-ios_ssjvth.png]

Related
Project on GitHub [https://github.com/mweisel/gcp-gns3server]
SSH local port forwarding with remote GNS3 server [https://binarynature.blogspot.com/2019/06/ssh-local-port-forwarding-remote-gns3-server.html]

Posted 5th August 2018 by Marc Weisel

Labels: Ansible, Cisco, Cloud, GNS3, Juniper, KVM, Linux, macOS, SSH, Windows
