Pexip Infinity Oracle Deployment Guide V34.a
Infrastructure
Deployment Guide
Software Version 34
March 2024
Pexip Infinity and Oracle Cloud Infrastructure Deployment Guide
Contents
Introduction
Deployment guidelines
Deployment options
Recommended instance types and call capacity guidelines
Security and SSH keys
Site-to-Site VPN for private/hybrid cloud deployments
IP addressing
Introduction
The Oracle Cloud Infrastructure (OCI) service provides scalable computing capacity in the Oracle Cloud. Using Oracle Cloud eliminates
your need to invest in hardware up front, so you can deploy Pexip Infinity even faster.
Pexip Infinity deployment in Oracle Cloud Infrastructure is a technical preview feature.
You can use Oracle Cloud to launch as many or as few virtual servers as you need, and use those virtual servers to host a Pexip
Infinity Management Node and as many Conferencing Nodes as required for your Pexip Infinity platform.
Pexip publishes disk images for the Pexip Infinity Management Node and Conferencing Nodes. These images may be used to launch
instances of each node type as required.
Deployment guidelines
This section summarizes the Oracle Cloud deployment options and limitations, and provides guidance on our recommended Oracle
Cloud instance types, security groups and IP addressing options.
This flowchart provides an overview of the basic steps involved in deploying the Pexip Infinity platform on Oracle Cloud:
Deployment options
There are three main deployment options for your Pexip Infinity platform when using the Oracle Cloud platform:
- Private cloud: all nodes are deployed within Oracle Cloud. Private addressing is used for all nodes and connectivity is achieved by configuring a VPN tunnel between the corporate network and Oracle Cloud. As all nodes are private, this is equivalent to an on-premises deployment which is only available to users internal to the organization.
- Public cloud: all nodes are deployed within Oracle Cloud. All nodes have a private address but, in addition, public IP addresses are allocated to each node. The nodes' private addresses are only used for inter-node communications. Each node's public address is then configured on the relevant node as a static NAT address. Access to the nodes is permitted from the public internet, or a restricted subset of networks, as required. Any systems or endpoints that will send signaling and media traffic to those Pexip Infinity nodes must send that traffic to the public address of those nodes. If you have internal systems or endpoints communicating with those nodes, you must ensure that your local network allows such routing.
- Hybrid cloud: the Management Node, and optionally some Conferencing Nodes, are deployed in the corporate network. A VPN tunnel is created between the corporate network and Oracle Cloud. Additional Conferencing Nodes are deployed in Oracle Cloud and are managed from the on-premises Management Node. The Oracle Cloud-hosted Conferencing Nodes can be either internally-facing, privately-addressed (private cloud) nodes; or externally-facing, publicly-addressed (public cloud) nodes; or a combination of private and public nodes (where the private nodes are in a different Pexip Infinity system location to the public nodes).
All of the Pexip nodes that you deploy in the cloud are completely dedicated to running the Pexip Infinity platform — you maintain full data ownership and control of those nodes.
Note that 1 OCPU gives 2 vCPUs, either because the hypervisor has available physical cores, or because of Hyper-Threading on a single
physical core.
We have observed the following performance guidelines for HD call capacity per Transcoding Conferencing Node:
- VM.Standard2 (Intel 8167M @ 2.0GHz)
  - 4 OCPUs: 12 HD
  - 8 OCPUs: 23 HD
  - 24 OCPUs: 63 HD
- VM.Optimized3.Flex (Intel Xeon Gold 6354 CPU @ 3.0GHz)
  - 4 OCPUs: 23 HD
  - 8 OCPUs: 44 HD
  - 18 OCPUs: 82 HD
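As a rough sizing aid, the guideline figures above can be turned into a simple calculation. This is a hypothetical helper, not a Pexip tool: the per-node capacities are the observed values listed above, and actual capacity depends on codecs, resolutions and conference layouts.

```python
import math

# Observed HD calls per Transcoding Conferencing Node, keyed by (shape, OCPUs).
# Values are the guideline figures listed above.
HD_CAPACITY = {
    ("VM.Standard2", 4): 12,
    ("VM.Standard2", 8): 23,
    ("VM.Standard2", 24): 63,
    ("VM.Optimized3.Flex", 4): 23,
    ("VM.Optimized3.Flex", 8): 44,
    ("VM.Optimized3.Flex", 18): 82,
}

def vcpus(ocpus):
    # 1 OCPU exposes 2 vCPUs (a physical core, or two Hyper-Threaded siblings).
    return ocpus * 2

def nodes_needed(shape, ocpus, hd_calls):
    """Transcoding nodes of the given size needed for a target HD call load."""
    return math.ceil(hd_calls / HD_CAPACITY[(shape, ocpus)])

print(nodes_needed("VM.Optimized3.Flex", 8, 100))  # 3 nodes for 100 HD calls
```

For example, 100 concurrent HD calls on 8-OCPU VM.Optimized3.Flex nodes (44 HD each) would need three transcoding nodes.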
- If you are using a Linux or Mac SSH client to access your instance, you must use the chmod command to make sure that your private key file on your local client (SSH private keys are never uploaded) is not publicly viewable. For example, if the name of your private key file is my-key-pair.pem, use the following command: chmod 400 /path/my-key-pair.pem
IP addressing
All Oracle VM instances are allocated a Private IP address. You can optionally also assign a static (reserved) Public IP address to a VM
instance. You should assign a public address to all nodes in a public cloud deployment, and to any externally-facing nodes in a hybrid
deployment, that you want to be accessible to conference participants located in the public internet.
Each Pexip Infinity node must always be configured with the private IP address associated with its instance, as this address is used for all internal
communication between nodes. To associate an instance's public IP address with the node, configure that public IP address as the
node's Static NAT address (via Platform > Conferencing Nodes).
The private IP address should be used as the Conferencing Node address by users and systems connecting to conferences from the
corporate network (via the site-to-site VPN) in a private or hybrid cloud deployment. When an instance has been assigned a public IP
address and that address is configured on the Conferencing Node as its Static NAT address, all conference participants must use that
external address to access conferencing services on that node.
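The addressing rules above can be sketched as a small helper. This is illustrative only — the function and the network values are hypothetical, assuming the corporate network reaches the nodes' private addresses over the site-to-site VPN:

```python
import ipaddress

def dial_address(node_private_ip, node_static_nat, participant_ip, corporate_cidrs):
    """Return the address a participant should use to reach a Conferencing Node.

    If a static NAT (public) address is configured on the node, all conference
    participants must use that external address. Otherwise only participants on
    the corporate network (reachable via the site-to-site VPN) can connect,
    using the node's private address.
    """
    if node_static_nat:
        return node_static_nat
    participant = ipaddress.ip_address(participant_ip)
    if any(participant in ipaddress.ip_network(c) for c in corporate_cidrs):
        return node_private_ip
    raise ValueError("node is private-only and the participant is external")

# Private cloud: an internal participant dials the node's private address.
print(dial_address("10.0.1.10", None, "10.1.2.3", ["10.0.0.0/8"]))  # 10.0.1.10
```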
For more information on setting up your Oracle Cloud environment, see https://docs.oracle.com/en-us/iaas/Content/GSG/Concepts/settinguptenancy.htm and https://docs.oracle.com/en-us/iaas/Content/Compute/Tasks/launchinginstance.htm.
Creating a compartment
A compartment is a logical container for the collection of resources used for your Pexip Infinity deployment. A compartment is not
specific to a region.
You create a compartment via Identity & Security > Compartments.
We recommend creating a separate compartment for each Pexip Infinity deployment (e.g. production system, lab system). In this example, a "PexDocs" compartment has been created within the root compartment.
Source IP Protocol Source Port Range Destination Port Range Type and Code
10.0.0.0/8 ESP
Where 0.0.0.0/0 implies any source/destination, <management station IP address/subnet> should be restricted to a single IP address
or subnet for SSH access only, and 10.0.0.0/8 is for the internal communication between the Pexip nodes via their private IP addresses.
There is no need to modify the default Egress rules.
Your ingress rules should look similar to these:
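The intent of these ingress rules can be sketched as follows. This illustrates only the matching logic, not OCI's actual security-list evaluation; the management station address and the port numbers shown are placeholders:

```python
import ipaddress

MGMT_STATION = "198.51.100.7/32"  # placeholder management station address

# Illustrative rule set: (source CIDR, protocol, destination port or None).
INGRESS_RULES = [
    ("0.0.0.0/0", "tcp", 443),     # e.g. HTTPS from any source
    (MGMT_STATION, "tcp", 22),     # SSH restricted to the management station
    ("10.0.0.0/8", "esp", None),   # IPsec ESP between Pexip nodes' private IPs
]

def is_allowed(src_ip, proto, port):
    src = ipaddress.ip_address(src_ip)
    return any(
        src in ipaddress.ip_network(cidr) and proto == rule_proto and port == rule_port
        for cidr, rule_proto, rule_port in INGRESS_RULES
    )

print(is_allowed("10.0.1.5", "esp", None))   # True: inter-node ESP traffic
print(is_allowed("203.0.113.9", "tcp", 22))  # False: SSH from an unknown host
```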
1. Download to your local PC the generic OVAs for the Pexip Infinity Management Node and Conferencing Nodes.
a. Go to https://dl.pexip.com/infinity/index.html and then select the appropriate directory for your software version.
b. Download the generic Management Node OVA (Pexip_Infinity_v<version>_generic_pxMgr_<build>.ova).
c. Download the generic Conferencing Node OVA (Pexip_Infinity_v<version>_generic_ConfNode_<build>.ova).
2. Create a bucket in the Oracle Object Storage (Storage > Buckets). A bucket is a logical container used by Object Storage for storing
your files.
You can use the default bucket properties.
In this example, "Pexip-images" was created:
3. Select the bucket and then Upload the two generic OVA files that you downloaded in step 1 into the bucket.
f. Repeat the process for the second OVA file. You can start the second import while the first import is still in progress (they are
asynchronous operations).
This process takes 20-25 minutes per image and results in the two custom images you need for installation.
Deploying the VM
To deploy a Management Node on an Oracle Cloud VM:
1. Prepare a Management Node disk image as described in Obtaining and preparing disk images for Oracle Cloud deployments.
2. From the Oracle Console, go to Compute > Instances.
3. Select Create Instance.
4. Enter a Name, for example "pexmgr".
5. Ensure the correct Compartment is selected.
6. In the Image and shape section:
a. Change the Image to the previously created Custom Image for the Management Node. (Change the Image source to Custom
images to see the options.)
b. Select an appropriate shape for the Management Node. We recommend: VM.Standard3.Flex (or VM.Standard2 if 3.Flex isn't
available) with 2 OCPUs or larger.
See Recommended instance types and call capacity guidelines for more information, and see Custom images and instance
Note that the keypair used for the console connection itself is separate from the keypair assigned to the instance.
When using this method:
- Use a key type of SSH-2 RSA and a key size of 2048 bits, and use a .ppk key file.
- After creating the Console Connection, use the Copy Serial Console Connection option to connect to the instance via PowerShell (remember to edit the key file names and paths as appropriate).
- After the instance username is displayed you may need to press Enter to activate the console and bring up the pexipmcumgr login: prompt.
- If you experience "Network error: Connection refused" errors, it could be because your version of plink and your .ppk file are not compatible — if you are using PuTTYgen to create or convert the key, you can switch between version 2 and version 3 .ppk files. Alternatively, ensure that you have the latest versions of PuTTYgen and plink installed.
IP address, Network mask, Gateway: Accept the defaults for the IP address, Network mask and Gateway settings.
Hostname, Domain suffix: Enter your required Hostname and Domain suffix for the Management Node.
DNS servers: Configure one or more DNS servers. You must override the default values if it is a private deployment.
NTP servers: Configure one or more NTP servers. You must override the default values if it is a private deployment.
Web administration username, Password: Set the Web administration username and password.
Send deployment and usage statistics to Pexip: Select whether or not to Send deployment and usage statistics to Pexip.
The DNS and NTP servers at the default addresses are only accessible if your instance has a public IP address.
The installation wizard will fail if the NTP server address cannot be resolved or reached.
The installation begins and the Management Node restarts using the values you have configured.
When the Management Node has restarted the console displays a login prompt:
<hostname> login:
At this point you can close the console. All further configuration should now be done (if you encounter SSL connection errors when
using the Administrator interface, wait a few seconds to allow the relevant services to start before trying again).
Next steps
After you have run the installation wizard, you must perform some preliminary configuration before you can deploy a Conferencing
Node. See Initial platform configuration — Oracle Cloud for more information.
1. Open a web browser and type in the IP address or DNS name that you assigned to the Management Node using the installation
wizard (you may need to wait a minute or so after installation is complete before you can access the Administrator interface).
2. Until you have uploaded appropriate TLS certificates to the Management Node, your browser may present you with a warning that
the website's security certificate is not trusted. You should proceed, but upload appropriate TLS certificates to the Management
Node (and Conferencing Nodes, when they have been created) as soon as possible.
The Pexip Infinity Conferencing Platform login page will appear.
3. Log in using the web administration username and password you set using the installation wizard.
You are now ready to begin configuring the Pexip Infinity platform and deploying Conferencing Nodes.
As a first step, we strongly recommend that you configure at least 2 additional NTP servers or NTP server pools to ensure that log
entries from all nodes are properly synchronized.
It may take some time for any configuration changes to take effect across the Conferencing Nodes. In typical deployments,
configuration replication is performed approximately once per minute. However, in very large deployments (more than 60
Conferencing Nodes), configuration replication intervals are extended, and it may take longer for configuration changes to be applied
to all Conferencing Nodes (the administrator log shows when each node has been updated).
Brief details of how to perform the initial configuration are given below. For complete information on how to configure your Pexip
Infinity solution, see the Pexip Infinity technical documentation website at docs.pexip.com.
1. Enable DNS (System > DNS Servers): Pexip Infinity uses DNS to resolve the hostnames of external system components including NTP servers, syslog servers, SNMP servers and web proxies. It is also used for call routing purposes — SIP proxies, gatekeepers, external call control and conferencing systems and so on. The address of at least one DNS server must be added to your system.
You will already have configured at least one DNS server when running the install wizard, but you can now change it or add more DNS servers.
2. Enable NTP (System > NTP Servers): Pexip Infinity uses NTP servers to obtain accurate system time. This is necessary to ensure correct operation, including configuration replication and log timestamps.
We strongly recommend that you configure at least three distinct NTP servers or NTP server pools on all your host servers and the Management Node itself. This ensures that log entries from all nodes are properly synchronized.
You will already have configured at least one NTP server when running the install wizard, but you can now change it or add more NTP servers.
3. Add licenses (Platform > Licenses): You must install a system license with sufficient concurrent call capacity for your environment before you can place calls to Pexip Infinity services.
4. Add a system location (Platform > Locations): These are labels that allow you to group together Conferencing Nodes that are in the same datacenter. You must have at least one location configured before you can deploy a Conferencing Node.
5. Upload TLS certificates (Certificates > TLS Certificates): You must install TLS certificates on the Management Node and — when you deploy them — each Conferencing Node. TLS certificates are used by these systems to verify their identity to clients connecting to them.
All nodes are deployed with self-signed certificates, but we strongly recommend they are replaced with ones signed by either an external CA or a trusted internal CA.
6. Add Virtual Meeting Rooms (Services > Virtual Meeting Rooms): Conferences take place in Virtual Meeting Rooms and Virtual Auditoriums. VMR configuration includes any PINs required to access the conference. You must deploy at least one Conferencing Node before you can call into a conference.
7. Add an alias for the Virtual Meeting Room (done while adding the Virtual Meeting Room): A Virtual Meeting Room or Virtual Auditorium can have more than one alias. Conference participants can access a Virtual Meeting Room or Virtual Auditorium by dialing any one of its aliases.
Next step
You are now ready to deploy a Conferencing Node. See Deploying a Conferencing Node in Oracle Cloud for more information.
Deploying the VM
To deploy a Conferencing Node on an Oracle Cloud VM:
1. Prepare a Conferencing Node disk image as described in Obtaining and preparing disk images for Oracle Cloud deployments.
Note that if you have upgraded your Pexip Infinity software, you need a Conferencing Node disk image for the software version
you are currently running.
2. From the Oracle Console, go to Compute > Instances.
3. Select Create Instance.
4. Enter a Name, for example "pexconf01".
5. Ensure the correct Compartment is selected.
6. In the Image and shape section:
a. Change the Image to the previously created Custom Image for the Conferencing Node. (Change the Image source to Custom
images to see the options.)
b. Select an appropriate shape for the Conferencing Node. We recommend: VM.Standard3.Flex (or VM.Standard2 if 3.Flex isn't
available) with a minimum of 2 OCPUs
See Recommended instance types and call capacity guidelines for more information, and see Custom images and instance
Name: Enter the name to use when referring to this Conferencing Node in the Pexip Infinity Administrator interface.
Description: An optional field where you can provide more information about the Conferencing Node.
Hostname, Domain: Enter the hostname and domain to assign to this Conferencing Node. Each Conferencing Node and Management Node must have a unique hostname.
The Hostname and Domain together make up the Conferencing Node's DNS name or FQDN. We recommend that you assign valid DNS names to all your Conferencing Nodes.
IPv4 address: Enter the IP address to assign to this Conferencing Node when it is created.
Network mask: Enter the IP network mask to assign to this Conferencing Node.
Note that IPv4 address and Network mask apply to the eth0 interface.
Gateway IPv4 address: Enter the IP address of the default gateway to assign to this Conferencing Node. This is the first host address in the CIDR of the instance's subnet.
Note that the Gateway IPv4 address is not directly associated with a network interface; it is assigned to whichever interface's subnet contains it. Thus, if the gateway address lies in the subnet in which eth0 lives, then the gateway will be assigned to eth0, and likewise for eth1.
Secondary interface IPv4 address: Leave this option blank as dual network interfaces are not supported on Conferencing Nodes deployed in public cloud services.
Secondary interface network mask: Leave this option blank as dual network interfaces are not supported on Conferencing Nodes deployed in public cloud services.
Note that Secondary interface IPv4 address and Secondary interface network mask apply to the eth1 interface.
System location: Select the physical location of this Conferencing Node. A system location should not contain a mixture of proxying nodes and transcoding nodes.
If the system location does not already exist, you can create a new one here by clicking to the right of the field. This will open up a new window showing the Add System Location page.
Configured FQDN: A unique identity for this Conferencing Node, used in signaling SIP TLS Contact addresses.
TLS certificate: The TLS certificate to use on this node. This must be a certificate that contains the above Configured FQDN. Each certificate is shown in the format <subject name> (<issuer>).
IPv6 address: The IPv6 address for this Conferencing Node. Each Conferencing Node must have a unique IPv6 address.
If this is left blank, the Conferencing Node listens for IPv6 Router Advertisements to obtain a gateway address.
IPv4 static NAT address: Configure the Conferencing Node's static NAT address, if you have assigned a public/external IP address to the instance.
Static routes: From the list of Available Static routes, select the routes to assign to the node, and then use the right arrow to move the selected routes into the Chosen Static routes list.
Enable distributed database: This should usually be enabled (checked) for all Conferencing Nodes that are expected to be "always on", and disabled (unchecked) for nodes that are expected to only be powered on some of the time (e.g. nodes that are likely to only be operational during peak times).
Enable SSH: Determines whether this node can be accessed over SSH.
Use Global SSH setting: SSH access to this node is determined by the global Enable SSH setting (Platform > Global Settings > Connectivity > Enable SSH).
Off: this node cannot be accessed over SSH, regardless of the global Enable SSH setting.
On: this node can be accessed over SSH, regardless of the global Enable SSH setting.
SSH authorized keys: You can optionally assign one or more SSH authorized keys to use for SSH access.
From the list of Available SSH authorized keys, select the keys to assign to the node, and then use the right arrow to move the selected keys into the Chosen SSH authorized keys list.
Note that in cloud environments, this list does not include any of the SSH keys configured within that cloud service.
Use SSH authorized keys from cloud service: When a node is deployed in a cloud environment, you can continue to use the SSH keys configured within the cloud service where available, in addition to any of your own assigned keys (as configured in the field above). If you disable this option you can only use your own assigned keys.
Default: enabled.
3. Select Save.
4. You are now asked to complete the following field:
SSH password: Enter the password to use when logging in to this Conferencing Node's Linux operating system over SSH. The username is always admin.
Logging in to the operating system is required only when changing passwords or for diagnostic purposes, and should generally be done under the guidance of your Pexip authorized support representative. In particular, do not change any configuration using SSH — all changes should be made using the Pexip Infinity Administrator interface.
5. Select Download.
A message appears at the top of the page: "The Conferencing Node image will download shortly or click on the following link".
After a short while, a file with the name pexip-<hostname>.<domain>.xml is generated and downloaded.
Note that the generated file is only available for your current session so you should download it immediately.
6. Browse to https://<conferencing-node-ip>:8443/ and use the form provided to upload the configuration file to the Conferencing
Node VM.
If you cannot access the Conferencing Node, check that you have allowed the appropriate source addresses in your security list for
management traffic. In public deployments and where there is no virtual private network, you need to use the public address of
the node.
The Conferencing Node will apply the configuration and reboot. After rebooting, it will connect to the Management Node in the
usual way.
You can close the browser window used to upload the file.
After deploying a new Conferencing Node, it takes approximately 5 minutes before the node is available for conference hosting and for
its status to be updated on the Management Node. Until it becomes available, the Management Node reports the status of the
Conferencing Node as having a last contacted and last updated date of "Never". "Connectivity lost between nodes" alarms relating to
that node may also appear temporarily.
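The options table above notes that the Gateway IPv4 address is the first host address in the CIDR of the instance's subnet. With Python's standard ipaddress module you can compute it directly (the subnet value below is illustrative):

```python
import ipaddress

def default_gateway(subnet_cidr):
    """First host address in the subnet (OCI's default gateway convention)."""
    return str(ipaddress.ip_network(subnet_cidr)[1])

print(default_gateway("10.0.1.0/24"))  # 10.0.1.1
```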
4. Go to Remote Peering Connections Attachments and select Create a Remote Peering Connection.
Give it a name, for example "Frankfurt Peering Connection" (based on the region we are peering with) and create the connection.
5. Select the Remote Peering Connection and take note of the Remote Peering OCID.
6. Switch to the Region you are peering with, in this case Germany Central (Frankfurt).
That region should already have its own VCN set up.
7. Repeat the process as described above to set up a DRG, and then a peering connection from that region back to the first region:
a. Create a Dynamic Routing Gateway in that region, e.g. "Frankfurt Peering".
b. Create a Virtual Cloud Network Attachment (e.g. "Frankfurt VCN attachment") and attach the region's VCN to the DRG.
c. Create a Remote Peering Connections Attachment for the region you are peering with (e.g. "Zurich Peering Connection").
8. At this stage, the peering status shows as "New (not peered)" (it is also not peered in the other region).
10. The Peer status changes to "Pending", and then to "Peered" on both sides of the Peering Connection.
f. Switch to the other Region (Frankfurt in this case) and repeat the process from the other side, where the destination CIDR
12. Repeat this process for all of your other regions, setting up Remote Peering Connections and route tables between each region.
1. Put the Conferencing Node into maintenance mode via the Pexip Infinity Administrator interface on the Management Node:
a. Go to Platform > Conferencing Nodes.
b. Select the Conferencing Node(s).
c. From the Action menu at the top left of the screen, select Enable maintenance mode and then select Go.
While maintenance mode is enabled, this Conferencing Node will not accept any new conference instances.
d. Wait until any existing conferences on that Conferencing Node have finished. To check, go to Status > Live View.
2. Stop the Conferencing Node instance on Oracle:
a. From the Oracle cloud console, select Compute > Instances to see the status of all of your instances.
b. Select the instance you want to shut down.
c. At the top of the Instance Details page, select Stop.
After reinstating a Conferencing Node, it takes approximately 5 minutes for the node to reboot and be available for conference
hosting, and for its last contacted status to be updated on the Management Node.
1. If you have not already done so, put the Conferencing Node into maintenance mode via the Pexip Infinity Administrator interface
on the Management Node:
a. Go to Platform > Conferencing Nodes.
b. Select the Conferencing Node(s).
c. From the Action menu at the top left of the screen, select Enable maintenance mode and then select Go.
While maintenance mode is enabled, this Conferencing Node will not accept any new conference instances.
d. Wait until any existing conferences on that Conferencing Node have finished. To check, go to Status > Live View.
2. Delete the Conferencing Node from the Management Node:
a. Go to Platform > Conferencing Nodes and select the Conferencing Node.
b. Select the check box next to the node you want to delete, and then from the Action drop-down menu, select Delete selected
Conferencing Nodes and then select Go.
3. Terminate the Conferencing Node instance on Oracle:
a. From the Oracle cloud console, go to Compute > Instances.
b. Select the instance you want to permanently remove.
c. From the More Actions drop-down, select Terminate to remove the instance.
Note that the instance will terminate immediately, but it may take several hours for the terminated instance to be removed from
the list of instances shown in the console.