V40
Installation Guide
Guardicore Centra Installation Guide
Intro 332
Preconditions: 332
Managing AWS Access: 332
EC2 IAM Role: 332
Guardicore Delegate Access: 332
AWS Policy definition: 333
Starting AWS Orchestration Configuration: 334
Configuring AWS Authentication: 334
Configuring EC2 IAM Role Authentication: 335
Configuring Guardicore Delegate Access Authentication: 336
Configuring Customer Credentials Authentication: 336
Creating an AWS IAM role: 336
Orchestration Information Appears On the Assets Page: 337
4.1.3 Azure Orchestration 338
Intro 338
How to Configure Azure Orchestration 338
Configure a read-only user in the Azure account 338
Add permissions to application user 338
Configure Azure orchestration in the Centra management 338
Important notes 339
4.1.4 GCP Orchestration 340
Intro 340
Configuring GCP Orchestration 340
Step 1: Set Up a Read Only Service Account in GCP 340
Step 2: Add GCP Orchestration to Centra 341
4.1.5 OCI Orchestration 343
Intro 343
Configuring OCI Orchestration 344
Step 1 - In OCI, create an orchestration user for Centra 344
Step 2 - In Centra, configure the OCI orchestration 345
4.1.6 OpenStack Orchestration 347
Setting Up OpenStack Orchestration 347
Step 1: Configure a read-only user on the OpenStack platform 347
Revision Procedure:
This guide is updated and maintained by the Professional Services team, so all updates are performed by them.
Required updates can be inserted into the guide as comments or suggestions and will be approved by the administrator of the installation guide.
All updates are recorded in the section above for historical reference.
To insert an instruction, guide, or content, first contact the PS team so that it is placed in the proper location and follows the common format.
1 General Information
Guardicore Centra Security Platform
The Guardicore Centra Security Platform is a comprehensive data center and cloud security
solution that provides a single console for managing segmentation, access control, and security
policies throughout your entire environment. Centra makes visualizing and securing on-premises
and cloud workloads fast and simple. It creates human-readable views of your complete
infrastructure – from the data center to the cloud – with fast and intuitive workflows for
segmentation policy creation.
Note: Disaster Recovery (DR) setup is not covered in this guide, although it is mentioned in the
Scaling Architecture section for general understanding. Consult Guardicore support for
setting up DR.
This manual is intended for IT professionals and System Administrators who are familiar with their
infrastructure management systems, be it VMware, Hyper-V, AWS, or Azure.
Knowledge of Windows and Linux operating systems and their administration is required.
Chapter 2 is dedicated to Centra installation instructions and covers all platforms on which the
Centra solution is currently supported, such as VMware and Azure.
The correct way to deploy Centra is to follow the instructions in Chapter 2 that are relevant to
your environment, followed by the agent deployment instructions in Chapter 3 or the agent
installation instructions in the UI.
Agents
Agents are deployed on servers in your network and are capable of sending information that
reveals the source and destination of flows, rerouting suspicious flows to the Deception server
(honeypot), and enforcing security policies.
Aggregators
Virtual machines called Aggregators collect and process the data gathered from Agents, and
communicate with the Guardicore Management Server.
Collectors
Virtual machines called Collectors gather information on network flows in environments
where Agents cannot be installed.
Management Server
A Management Server receives, analyzes, enriches, and manages the collected data.
Deception Server
A Deception Server manages a farm of different-flavored honeypot instances (Windows and
Linux). Failed connections are redirected to a fully interactive honeypot, limiting the attacker’s
interactions to that instance. Following session recording and automatic analysis of its content,
complete incident information is reported to Management.
Prepare Site
Step Description
1 Optional: Set up a Guardicore Network
● Network used by Guardicore components for internal communication.
● Should enable communication between the Management server, Aggregators,
Collectors, Deception server, and Agents.
● Make sure there is communication across the covered hosts. See the Connectivity
Requirements Diagram for details.
Download the appropriate image files from the Guardicore customer portal:
[Table: image files required per deployment type - On-Premises, On-Premises with Distributed Management, and SaaS]
Note: For VMware deployments, the ESX Collector component is installed using the same
OVA as the Aggregator (the Guardicore Aggregation Server OVA).
After downloading, provision VMs using the image files. For the On-Premises with Distributed
Management Cluster, see the table in the section above, Requirements for Distributed
Management Cluster, for the list of servers that you will need for the Management Cluster.
Component Requirements
Management Server (All-in-One)
CPU: 8 vCPUs
Storage: 530 GB
Aggregator
CPU: 4 vCPUs
Deception Server
CPU: 8 vCPUs
Connectivity: The Deception Server should have a single interface in the Guardicore network*, used for communication with the Management Server and ESX Collectors.
ESX Collector
CPU: 2 vCPUs
SPAN or IP-Flow Collectors
For resource and connectivity requirements, refer to the specific install guides for these components.
*Guardicore Network: Used for communication with the Management Server, Deception Server,
Collectors, and Aggregators.
Note: Requirements for Aggregators, Collectors, and Deception Server are as detailed in the
On-Premises Requirements Section above. To design a Distributed Management Cluster that is
customized for your system, consult with Guardicore Professional Services/Customer Success on
which nodes are needed and how many of each type. See the Scaling Architecture section for
further understanding of the required VMs for a clustered deployment.
Management Prerequisites

Control Node
CPU: 8 vCPUs
Memory: 32 GB RAM
Storage: 30 GB for the root file system; 500 GB under the /storage mount
Deploy the server in Thick Provisioning

Worker Node
CPU: 8 vCPUs
Memory: 32 GB RAM
Storage: 30 GB for the root file system; 200 GB under the /storage mount
Deploy the server in Thick Provisioning

Elastic Node
CPU: 8 vCPUs
Memory: 32 GB RAM
Storage: 30 GB for the root file system; 1 TB under the /storage mount
Deploy the server in Thick Provisioning

MongoDB Node
CPU: 8 vCPUs
Memory: 32 GB RAM
Storage: 30 GB for the root file system; 1 TB under the /storage mount
Deploy the server in Thick Provisioning

RabbitMQ Node
CPU: 8 vCPUs
Memory: 32 GB RAM
Storage: 30 GB for the root file system; 200 GB under the /storage mount
Deploy the server in Thick Provisioning

InfluxDB Node
CPU: 8 vCPUs
Memory: 32 GB RAM
Storage: 30 GB for the root file system; 200 GB under the /storage mount
Deploy the server in Thick Provisioning
The above diagram indicates the connectivity requirements for a single Management Server
configuration. For a distributed Management cluster, there are more requirements for
communication within the cluster; contact Guardicore support for internal communication
requirements for the Management cluster.
This section includes subsections for both On-Premises Configuration, and Distributed
Management Cluster configuration.
Note: Obtain IPs for On-prem components: Management Server, Deception Server, Collector(s), and
Aggregator(s) prior to commencing the install process as these IP addresses will be used during the
installation process.
Note: After the initial deployment of the Centra management cluster, consult the Professional Services
team for the latest “Service Pack” version, obtain the needed files, and install the package according to
Appendix B.
1. Make sure the Management server VM was provisioned using the appropriate OVA, and
that the compute specs and networking are set as required (see the section on Enabling
Optional Services below before starting).
User : admin
Password : GCAdmin123
Note:
● After the root user’s password is set in step 4 below, the `admin` user will be disabled.
● After system boot, the installation wizard waits for the docker service to be ready before
starting. This may take up to 5 minutes, during which you might not be able to log in (see the check below).
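If you cannot log in right after boot, you can check from the console whether the docker service has finished starting. A minimal check, assuming systemd is available on the appliance (the images appear to be Ubuntu-based):

systemctl is-active docker   # prints "active" once the docker service is ready and the wizard can start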
5. Type a new root user password for the machine and click OK. The password should consist
of at least 6 characters and contain both upper and lowercase letters and numbers, but no
punctuation marks or other symbols. You will be asked to enter your password selection
twice.
8. Select Static and click OK to set the Guardicore Network interface manually as in the
following example:
A similar wizard will display for each connected interface. Repeat interface configuration for all
connected interfaces.
Note: Having more than two network interfaces for the management is not supported.
Note: Steps 9-10 will only appear if you have more than one network interface for the
Management server.
9. Select the interface matching the Guardicore Internal Network, used for connectivity with
other Guardicore Centra components:
10. Select the interface matching the External Network - used for users’ connectivity to UI /
REST API / SSH.
11. Define the IP addresses that should be allowed to connect to the Management Server over
SSH (port 22). To allow all, add 0.0.0.0/0
Note: Guardicore strongly recommends that you configure this now and click Yes.
Explanation: Select YES to reset the iptables INPUT chain configuration, unless you have already
set local rules manually before running the wizard and Guardicore has confirmed that you do not
need to reset the INPUT chain. You may have set other rules manually if additional appliances run
on this machine. If not, and the sole function of this machine is running the GC appliance, you can
safely click YES and let the automation define the iptables rules for you.
Choosing YES will clear all rules in the INPUT chain. To skip, choose NO. This can be
configured after the completion of the wizard, on the machine itself (see the example below).
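If you choose NO, or need to adjust the allowed SSH sources later, equivalent rules can be added manually on the machine after the wizard completes. A minimal sketch using standard iptables commands (the CIDR 10.0.0.0/24 is a placeholder; replace it with the networks you want to allow):

iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/24 -j ACCEPT    # allow SSH from a trusted subnet
iptables -L INPUT -n --line-numbers                             # review the resulting INPUT chain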
12. Set a Guardicore Secure Communication Token_ID (password). This will be used by Centra
components to authenticate against the Management Server during installation.
16. You must click Yes to enable the Segmentation Policy Enforcement feature. Selecting No is
no longer supported. The following is displayed:
17. Type a name for the environment. The name will be used during the integration with
Guardicore’s health monitoring system and should be later coordinated with a Guardicore
representative.
Note: Selecting an environment name is optional. You can skip setting a name by leaving the text
empty.
20. Type the Management IP in the Guardicore Internal Network and click OK:
21. Click Yes to continue or No to edit your configuration. After clicking Yes, the following is
displayed.
22. Click OK to start the installation. Installation execution can take up to 30 minutes:
After the installation is complete, you can log in to Centra's UI using the user admin and the
password you chose.
Note: It is possible to replace the UI certificate with your own (customer) certificate. To do so,
create a support ticket; you will be emailed as soon as the request is received.
To install Centra efficiently in a Distributed Management configuration, follow these steps:
1. Make sure you have provisioned the required machines as detailed in the section
Requirements for Distributed Management Cluster.
2. Create an IP-plan so that each member of the cluster has an IP you can assign during the
installation process.
The objective of this step is to provision all required VMs from *.OVA templates and connect the
VMs to the network, so all subsequent steps can be done remotely over SSH sessions.
2. Deploy each of the Management Distributed Nodes from the Distributed nodes OVA.
3. Turn on all the deployed Management Cluster VMs, and login with the following
credentials:
User : root
Password : GuardR00t111
4. Make sure the time on each node is correct and synced with the Control node. You can
achieve this either by manually setting the time on each node or by ticking the
“Synchronize guest time with host” box in the VM options of the machine, under Settings
(in the vSphere Client). Failing to accomplish this step on all nodes will result in a failed
installation.
5. On each machine, configure the network interfaces according to the deployment IP-plan.
Using “ifconfig -a”, identify which MAC address is assigned to each logical interface,
comparing those with vSphere settings to identify the interfaces that should be configured
according to the IP-plan.
Make sure the network interfaces are up and running by performing:
ifconfig eth0/1.. up
and confirming with ifconfig again (see the example below).
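A minimal sketch of this check (eth0 is a placeholder interface name; match it to the MAC addresses identified above):

ifconfig -a          # list all interfaces with their MAC addresses
ifconfig eth0 up     # bring the selected interface up
ifconfig eth0        # confirm that the interface now shows UP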
rm /etc/netplan/*
######################################################################
## Examples:
#network:
#  ethernets:
#    ens160:
#      addresses: []
#      dhcp4: true
#      dhcp-identifier: mac
#  version: 2

network:
  ethernets:
    ens160:
      addresses: [192.168.1.1/24]
      gateway4: 192.168.1.254
      dhcp4: no
      nameservers:
        addresses: [8.8.8.8]
  version: 2

#network:
#  ethernets:
#    ens160:
#      addresses: [192.168.1.1/24]
#      gateway4: 192.168.1.254
#      dhcp4: no
#      nameservers:
#        addresses: [8.8.8.8]
#    ens192:
#      addresses: []
#      dhcp4: true
#      dhcp-identifier: mac
#  version: 2
######################################################################
Note: The nameservers configuration is optional. If multiple DNS servers are needed, separate
them with commas (see the example below).
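For example, a nameservers entry listing two DNS servers (the addresses shown are public resolvers used purely for illustration) would look like this inside the configuration above:

      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]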
9. Connect to each instance remotely using SSH. To do this, follow the instructions in the next
section (Preconfiguration of Management Nodes). After completion, you will be ready to
run the setup wizard from the Control.
C. Preconfigure the Management Cluster Nodes
In this step you configure hostnames for the Management cluster nodes, reset the root password,
and sync SSH keys from the Control. You will then be ready to run the setup wizard.
Configure Hostnames
On each of the Management cluster node instances (excluding the Control), configure a
meaningful hostname. A suggested naming scheme is provided here, although you may want to use
alternative hostnames that comply with the company policy instead.
Configure as follows:
1. Run the following, replacing <HOSTNAME> with the new hostname:
hostnamectl set-hostname <HOSTNAME>
NOTE: If you get “Failed to create bus connection: No such file or directory” then simply
reboot, log back in, and then retry.
3. Edit the line containing the loopback IP address in the file /etc/hosts (a combined example follows this step).
Replace
127.0.1.1 gc-management-node
with
127.0.1.1 <NEW_HOSTNAME>
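A minimal sketch of this hostname change, assuming a hypothetical node named gc-mgmt-worker-1:

hostnamectl set-hostname gc-mgmt-worker-1                        # set the new hostname
sed -i 's/^127.0.1.1.*/127.0.1.1 gc-mgmt-worker-1/' /etc/hosts   # update the loopback entry accordingly
hostnamectl                                                      # verify the change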
2. Allow passwordless SSH login from the Control node to all the other nodes, by running the
following command on the Control node for each node:
ssh-copy-id <Node IP>
mgmt-setup
Click OK and select a new root user password for the machine. You will be asked to enter your
password selection twice:
3. Click Yes to set the root password and disable the default “admin” user:
7. A similar wizard will display per each connected interface. Repeat interface configuration
for all connected interfaces.
Note: The default gateway should be set on only one interface, usually the external interface.
8. Select the interface matching the Guardicore Internal Network, used for connectivity with
other Guardicore Centra components.
9. Select the interface matching the External Network, used for users' connectivity to UI /
REST API / SSH:
10. Define the IP addresses that should be allowed to connect to the Management Server over
SSH (port 22). To allow all, add 0.0.0.0/0:
Note: Guardicore strongly recommends that you configure this now and click Yes.
Explanation: Select YES to reset the iptables INPUT chain configuration, unless you have already
set local rules manually before running the wizard and Guardicore has confirmed that you do not
need to reset the INPUT chain. You may have set other rules manually if additional appliances run
on this machine. If not, and the sole function of this machine is running the GC appliance, you can
safely click YES and let the automation define the iptables rules for you.
Choosing YES will clear all rules in the INPUT chain. To skip, choose NO. This can be
configured after the completion of the wizard, on the machine itself.
12. Set a Guardicore Secure Communication password. This secret will be used by Centra
components to authenticate against the Management Server during installation. Use only
alphanumeric characters for passwords.
14. Click Yes to enable the Guardicore Reputation Service. Note that this setting can be
changed later from the UI.
15. Entering the environment name is optional. You can skip it by leaving the text empty. If
entered, the name is used during system health monitoring integration and should later be
communicated to a Guardicore representative.
18. Enter Management Worker nodes IP addresses. Make sure to also include the
management Control IP in the Workers list.
Note: In the next steps (19-22), you configure the IP of each dedicated external node. The
controller node’s IP should only be included in the list if the controller node is planned to
take one of these roles.
19. Enter MongoDB node IP address. If the deployment requires more than one MongoDB
node, enter only the IP of the 1st MongoDB node, and configure a RabbitMQ Redundancy
Cluster or a MongoDB HA Cluster after this step is complete.
21. Enter the InfluxDB node/nodes IP address(es). Note: if the InfluxDB node will run on the
Management Control node, specify the Management Control's IP. Leaving this screen
empty will break the install process.
22. Enter RabbitMQ node IP address. If the deployment requires more than one RabbitMQ
node, enter only the IP of the 1st RabbitMQ node, and see Configuring RabbitMQ
Redundancy Cluster for adding an additional node after this step is complete.
23. Enter the Postgres Daily Flows node IP address. If there is no external node for the Postgres
service, fill in the Management Control node's IP address. Leaving this screen empty will
break the install process.
24. Click OK to start the installation (ignore the “Setup completed” message). Installation
execution can take up to 60 minutes.
Note:
In case the flow is interrupted with the following error:
Upgrade failed on state “START_CLUSTER_INFRA”, check
“/var/log/guardicore/upgrade_service.log”
28. Validate that the UI is accessible by browsing to the Control node's external interface IP
over port 443. Note: There is an option to replace the UI certificate with your own (customer)
certificate; create a support ticket and you will be emailed as soon as the request is received.
gc-cluster-cli health
Setup
Execute the following command on Control to configure a standby RabbitMQ node:
gc-cluster-cli add_node --node_type rabbitmq --node_address <Standby_RabbitMQ_IP>
Validation
This command should perform the following actions:
Note: Be aware that in case of a failover process, existing unprocessed messages in the queue
will be lost.
The primary MongoDB server is the "Control" node. It is the only MongoDB node that is allowed to
write data, and all writes go through it. Whenever a replica set is installed, an election process is
held to elect the primary MongoDB server. This also happens if the current primary dies.
The secondary servers replicate data from the primary node. Secondary servers are not allowed to
write. A secondary server can become the primary node via an election process (assuming its
configuration allows it).
When there is a failover (i.e. when the primary node fails and cannot communicate with the rest of
the cluster's secondary nodes for more than 10 seconds, by default), a new primary node is
elected.
1. Make sure that all MongoDB replica set nodes are available via ssh and will not require
a password. You can use the following command to do this:
ssh-copy-id <node_address>
2. Make sure that iptables allows TCP traffic on port 27017 between these nodes (see the example below).
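A hedged sketch of such a rule, run on each replica set node (the source address is a placeholder for the other nodes' IPs or CIDR):

iptables -A INPUT -p tcp --dport 27017 -s <other_node_IP_or_CIDR> -j ACCEPT   # allow MongoDB replica set traffic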
gc-mongodb-cluster-cli init
Note: When running init, the app group is stopped to avoid CPU spikes due to the
interruption of MongoDB service.
Note: If the initialization of the MongoDB replica set fails following the previous command,
use the following workaround:
export IMAGE_TAG=$(python -OO /var/lib/guardicore/management/docker_deploy/scripts/cluster/startup/helpers/get_image_tag.pyo -i gc-service)
exit
2. Verify the MongoDB replica set was successfully initialized using the following
command:
gc-mongodb-cluster-cli health
At this point there should be a single server with status Primary whose health is ok.
Once the replica-set has been initialized, it is ready to accept new members.
1. To add a node to both the replica set and the Management cluster, use the following:
The --add_to_cluster flag first adds the new machine as a management cluster node. This does
what gc-cluster-cli add_node does, but specifically for the mongodb node type. Once the node is
part of the management cluster, it is added as a member of the initialized MongoDB replica set.
IMPORTANT:
● There is NO need to add the MongoDB replica set nodes via gc-cluster-cli.
● If the nodes were already added via gc-cluster-cli, skip passing the
--add_to_cluster flag when running the above.
2. To verify the new node successfully joined the existing MongoDB replica use the
following command:
gc-mongodb-cluster-cli health
■ At this point there should be a single server with status Primary whose health is ok.
■ In addition, you should have a single server with status Secondary whose health is ok
(the example above may show more nodes).
This process can be repeated for each node you want to add to the replica set.
■ It is critical for high availability to have a replica set with at least 3 nodes.
■ The number of nodes must be odd for the primary election to work (there is an arbiter
option which is not yet supported).
Once the process of forming the MongoDB replica set is complete and all desired nodes are part
of the replica set, issue the following command:
gc-cluster-cli cluster-restart
User: admin
Password: GCAdmin123
Note: After the root user’s password is set in step 4 below, the `admin` user will be disabled.
4. Type a new root user password for the machine and click OK. The following is displayed:
6. Choose Static to set the Guardicore Network interface manually. The following appears:
7. If there is more than one network interface for the Deception server, the following prompt
will appear:
8. Select the interface that is considered the “Attacks Interface” (the interface on which
the Deception server receives attack events) and click OK.
9. Enter the IP address of the Management Server in the Guardicore network and click OK.
The following is displayed:
10. In the Tunnel port screen accept the default 443 port by clicking OK. The following is
displayed:
11. Enter the Secure Communications password as set in the installation of the Management
Server and click OK. The following appears:
12. Define the IP addresses that should be allowed to connect to the Deception Server over
SSH (port 22). To allow all, add 0.0.0.0/0
● ESX collectors
● SPAN collectors
● IP Flow collectors
Collectors and Aggregators are deployed from the same OVA and are represented as components
in the same installation screens. Instructions for deployment of each type of collector are provided
in the following subsections.
Note: For ESX and SPAN collectors, a properly configured and working vSphere Orchestration is
a prerequisite for the Collector to function fully. Without a configured vSphere Orchestration,
the Collector will provide Reveal functionality, but will not provide Deception functionality.
Prerequisites:
Note: Prior to Collector installation, a virtual SPAN port group should be created on each of the
host's vSwitches that are to be monitored. See Appendix B: Create a SPAN Network Port
instructions.
Note: After the root user’s password is set in step 4, the `admin` user will be disabled.
5. Click OK.
7. Select the SPAN interfaces (multiple vSwitches on the same host can be monitored with a
single ESX Collector):
11. Enter the IP address of the Management Server in the Guardicore network:
12. Enter the Secure Communications password as set in the installation of the Management
Server:
13. In Advanced Settings, configure any setting you wish to change/use. Otherwise select
Continue.
a. Under Hostname, write the hostname you wish to use for the Collector:
b. If you are using an external PKI with a SCEP endpoint, you can use the collector as
a SCEP proxy. Enter the SCEP server address to enable this configuration:
c. Under Cluster Roles, select the roles you wish the Collector to take:
14. Define the IP addresses that should be allowed to connect to the ESX Collector over SSH
(port 22). To allow all, add 0.0.0.0/0
16. Choose the Cluster ID for the collector. Note: Starting from v36, when installing a collector in
an environment where the Aggregator does not have the Legacy Deception feature, the cluster ID
for the collector must be different from the Aggregator’s cluster ID. The orchestration must then
be set to this cluster.
17. If DRS is enabled on the VMware cluster, the Collector VM should be fixed to its host. See
the full instructions in Appendix B.
The Collector Deployment tool (GuarDeployer) allows mass deployment of ESX Collectors in
VMware environments. The tool works with a single Datacenter at a time but can be run multiple
times to cover additional Datacenters.
1. Tool setup
3. Deployment
Prerequisites:
This step sets up the tool and helps the user create a full configuration which could be executed at
a later stage.
1. Deploy a machine from the template GC-TEMPLATE. This machine should be connected to
networks that allow connectivity to:
User: admin
Password: admin
Step 3: Deployment
After filling in the fields, click Submit and Run to start the deployment.
The new components will be deployed from the GC-TEMPLATE and you'll start to see new
machines deployed and reconfigured in the vSphere Tasks screen.
● A SPAN port connection to the host's vSwitch, which is connected to a physical port on a
physical switch / NPB / TAP to which the physical servers to be monitored are
connected. This interface does not require an IP.
● An interface in the GuardiCore network, used for communication with the Management
and the Deception Server (if Deception is activated). This interface should be assigned
a static IP.
4. Select a network interface as the “Output Interface” for the Span interface:
8. In Advanced Settings, configure any setting you wish to change/use and click OK.
Otherwise, select Continue:
9. Under hostname, write the hostname you wish to use for the Aggregator:
Role Description
16. On the Override Configuration dialog box, click Show Advanced Options and, in the list of
Advanced Options, select port mirror cloud driver. In the right pane, click protected-cidr:
13. In the Protected-Cidr screen, add the IPs of the subnets that should be covered by the SPAN
Collector and click OK.
Deployment of an IP Flow Collector is required when there is a need to ingest traffic in the
following formats:
Connectivity
The IP Flow Collector should have the following interfaces:
IPFIX Notes:
● If you intend to ingest IPFIX data either from an F-5 integration or directly from the
switches, mark the IPFIX option. Ingesting IPFIX data both from switches and via F-5
integration at the same time is not supported.
● If IPFIX data is going to be ingested directly from switches and not via F-5
integration, set the following configuration on the Collector VM after completing the
install wizard (a consolidated example appears after these notes):
○ In /etc/guardicore/mitigation.conf
○ Set:
■ [ipflow-machine-tracker]
■ F5-support = False
○ Restart the flow collector service, either via the UI or the CLI.
○ If there is a time skew between the switch time and the collector server
time (for example, when the time is not synchronized with a time
server or there is a time zone difference), set the following configuration,
with the time offset in minutes:
■ [ipflow-machine-tracker]
■ Time-offset = <time in minutes>
○ Restart the flow collector service, either via the UI or the CLI.
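A consolidated sketch of the resulting section in /etc/guardicore/mitigation.conf (the 120-minute offset is a placeholder; include Time-offset only if a time skew actually exists):

[ipflow-machine-tracker]
F5-support = False
Time-offset = 120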
4. Select port for each flow format (one for each format):
9. Enter the Secure Communications password as set in the installation of the Management
Server:
11. If the collector is going to be used for F-5 integration, follow this section. It is
relevant if you intend to configure a webhook for a collector in the Centra F5
orchestration. If so, do the following for the relevant collector:
a. Under Advanced Settings, select Set Aggregator Cluster Roles:
Note: Configuring DRS rules for ESX Collectors is required only if vSphere DRS is in use
How to create DRS rules to fix ESX Collectors to hosts using the vSphere Client
5. Click Add under Virtual Machine DRS Groups to create a Virtual Machine DRS Group. Add
the Collector VM. Name it indicatively (recommended -
“guardicore-vm-group-for-COLLECTOR-X”).
6. Click the Rule tab, give the new rule an indicative name (recommended -
“guardicore-rule-for-HOSTNAME”).
7. From the Type drop-down menu, click Virtual Machines to Hosts.
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005508)
How to create DRS rules to fix ESX Collectors to hosts using the vSphere Web Client
2. Click Add and create a new Host DRS Group. Add the ESX host of the Collector. Name it
indicatively (recommended - “guardicore-host-group-HOSTNAME”).
3. Click Add and create a new VM DRS Group. Add the Collector VM. Name it indicatively
(recommended - “guardicore-vm-group-for-COLLECTOR-X”).
4. Click the Rule tab, give the new rule an indicative name (recommended -
“guardicore-rule-for-HOSTNAME”).
5. From the Type drop-down menu, click Virtual Machines to Hosts.
6. Under Vm Group, select the newly created VM group.
7. Select Must run on hosts in group.
8. Under Host Group, select the newly created Cluster Host Group and click OK.
Create a SPAN Network Port for the ESX Collector using the vSphere Client
1. Open the host configuration tab and go to Networking > Add Networking.
2. Create a new network on each vSwitch with the name GC-SPAN and set its VLAN ID to All (4095).
(Tip: If there are multiple switches on the host, add the switch name to avoid name conflicts.)
3. Go to the GC-SPAN Properties and, under the Security tab, set all the policy exceptions to
“Accept”.
Create a SPAN Network for the ESX Collector through the vSphere Web Client
1. In the vSphere Web Client, under the Hosts and Clusters view, select the ESX host and go to
Networking > Add host networking.
2. Create a new Virtual Machine port group named GC-SPAN on each vSwitch, and set its VLAN ID
to All (4095).
Tip: If there are multiple switches on the host, add the switch name to avoid name conflicts.
3. Select each newly created portgroup, and click Edit Settings. Under the Security tab set all the
policy exceptions to “Accept”.
1. In the vSphere Web Client, under the Networking tab, select the dvSwitch and create a new
Distributed Port Group. Set the name to “GC-SPAN”, the VLAN type to VLAN Trunking, and the trunk
range to “0-4094”.
2. Select each newly created dvPortgroup, and click Edit Settings. Under the Security tab set
all the policy exceptions to “Accept”.
To create a SPAN session on an n1kv switch, run the following commands on the n1kv switch console:
1. Create a port profile that will be used for the mirrored traffic (the SPAN “destination”):
configure terminal
port-profile type vethernet gc-span
switchport mode trunk
switchport trunk allowed vlan all
vmware port-group
no shutdown
state enabled
end
2. Set up a monitor session; you must define which VLAN IDs are mirrored. If you reach the
limit (32 VLANs per monitor session), you may need to set up more than one monitor session.
configure t
monitor session 1
# Here you should specify the vlans of the port profile you use in your
network.
# In this example, we used vlan 1,2 .. 32
source vlan 1-32 both
destination port-profile gc-span
no shutdown
end
3. Create another port profile that will be used for injecting packets back into the network
configure terminal
port-profile type vethernet gc-span-inject
switchport mode trunk
switchport trunk allowed vlan all
vmware port-group
no shutdown
state enabled
end
In case an alternative solution such as GSLB is used, create the FQDN alias record in the GSLB.
Note: If the Aggregator was installed using the wizard prior to configuration of the FQDN, it is
possible to add the FQDN of the Aggregator’s cluster after the installation process. Refer to the
guide for configuring the Aggregator FQDN on an installed Aggregator.
Deploy Aggregators
3. If a “Mega Aggregator” is needed, make sure the VM is provisioned with 32 GB of
RAM and 12 vCPUs.
If not, stop the VM and change the resource allocation via “Edit settings”.
The Mega Aggregator is used to handle ~2000 Agents, with deception disabled. Note
this when assigning responsibilities to the Aggregator in the installation wizard.
4. Turn on the Aggregator VM and open the console.
User: admin
Password: GCAdmin123
Note: After the root user’s password is set in step 4, the `admin` user will be disabled.
9. Select the features you want the Aggregator to activate on its associated Agents:
Deception Agents Server: Select this option to turn on deception capabilities for Agents on guest servers that are not protected by ESX Collectors.
Enforcement Agents Server: Select this option only if you want to turn on policy enforcement capabilities.
Detection Agent Server: Select this option to enable file integrity monitoring (FIM) capabilities.
Agents Load Balancer: Select this option to allow distribution of the Agent load to other Aggregators in the cluster.
Legacy Deception: Select this option to support deception for agents from versions prior to v36. Unmarking this option will cause old agents not to redirect traffic to the deception server, letting the Aggregator handle ~250 agents. Marking this option will turn on support for redirecting deception traffic for old agents (prior to v36), but will limit the number of agents handled to ~100.
Note: If there is a need to change the configuration and add support for this feature, run the setup of the aggregator again (aggr-setup) and mark the feature as enabled under Administration > Aggregator > Features > Legacy Deception.
11. Choose Static to set the Guardicore Network and the Agent-facing interfaces manually.
A static IP should be reserved in the customer’s network.
13. A similar wizard will be shown for each connected interface. Repeat interface
configuration until all connected interfaces are set.
14. Enter the IP address of the Management Server in the Guardicore network:
15. Enter the Secure Communications password as set in the installation of the Management
Server:
16. Define the IP addresses that should be allowed to connect to the Aggregator over SSH
(port 22). To allow all, add 0.0.0.0/0
17. In Advanced Settings, configure any setting you wish to change/use and click OK.
Otherwise, select Continue:
18. Under hostname, write the hostname you wish to use for the Aggregator:
Role Description
22. If the Aggregator has a NAT facing the Agents, enter its IP address here:
25. At some point during the installation, the Cluster ID box appears:
26. Choose the Cluster ID for the Aggregator. If you do not have multiple
Collectors/Aggregator clusters, choose ‘default’.
NOTE: The design of how Collectors/Aggregators are assigned to clusters should be done
in consultation with Guardicore Professional Services/Customer Success.
Mega Aggregators
Add the following at the end:
[enforcement_worker]
max_worker_number = 6
This section includes the computing resource requirements for a successful installation,
instructions for preparing for the installation and required networking for the deployment.
The following installation guide for AWS deployment covers an All-in-One (AIO) Management application.
Note: After the initial deployment of the Centra management cluster, consult the Professional Services
team for the latest “Service Pack” version, obtain the needed files, and install the package according to
Appendix B.
Verify:
1. Administrative access to the AWS account.
2. The Amazon Machine Images (AMIs) of the Management and Aggregator components are shared
by GuardiCore with the customer’s AWS account(s), in the appropriate region(s).
3. For setting up a VPC, configuring subnets, and other AWS configurations needed for the
success of this process, refer to this chapter. Walk through every step in this chapter
to make sure connectivity is configured correctly.
4. The AWS component appliances require root access, which can be obtained in two ways:
a. SSH to the aggregator with the root user and password. In this case, contact
Guardicore to receive the AMI’s root password.
b. SSH with the ubuntu user and key, and switch to sudo after login. The ubuntu user
does have sudo permissions; however, this option currently does not work.
1. Creating a VPC (an isolated portion of the cloud for AWS objects):
a. Go to Services > VPC > Your VPCs > Create VPC
c. Add the CIDR you want your isolated environment to be in (e.g. 192.168.0.0/16).
You will later create subnets within this VPC in order to create different private
networks.
d. Add tags
e. Create VPC
c. Add the IPv4 CIDR that you want your subnet to have (e.g. 192.168.1.0/24)
e. Attach.
c. Create inbound rules for the security group. Best practice here is:
i. Create a rule to allow SSH into the network from your known IP address.
ii. Create a rule that allows internal traffic: TCP in the same CIDR block as
your subnet.
a. Left menu > Route Tables > Pick the correct one (with the correct VPC) > Routes >
Edit routes
b. Add 0.0.0.0/0, which lets instances communicate freely outside of the VPC; under
Target, add your Internet Gateway (see the CLI example below).
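The console steps above can also be scripted with the AWS CLI. A minimal sketch, assuming the example CIDRs used above; the angle-bracket placeholders must be replaced with the IDs returned by the earlier commands:

aws ec2 create-vpc --cidr-block 192.168.0.0/16                                     # create the VPC
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 192.168.1.0/24                # create a subnet in it
aws ec2 create-internet-gateway                                                    # create an Internet Gateway
aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>   # attach it to the VPC
aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>   # default route via the IGW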
A. Log in to your AWS web console and select the destination region for the Management
deployment.
a. Choose AMI. Select “My AMIs – AMIs shared with me” and locate the AMI shared by
Guardicore. You can find it by searching “Guardicore” or by using the AMI-ID (all your AMIs
are listed in EC2 Dashboard → Images → AMIs → Private)
i. Network: choose the VPC that will be used to communicate with the GC
components, usually known as GC-net. Subnet: choose the subnet within the VPC
from which the Management server will communicate with the other components.
ii. If you want to access the Management from outside of your network,
choose Enable.
iii. Otherwise, if all the communication is conducted inside the VPC, choose
Disable. You can also assign a public IP later on during the process.
iv. Under Network Interfaces, a single interface should be connected, configured with
a single IP (which can be Auto-assigned).
This is your GC-net. Either assign an IP for the management instance or click auto-
assign.
d. Add storage. The required storage parameters are 40 GB for the root partition and 500 GB for the
storage partition.
e. Add tags. Add a Name (example value: GC_Management_X). Additional tags can be added
according to the customer's conventions.
f. Attach Security Groups (see Appendix 4 for details). The SG should allow the following:
i. Inbound: all TCP ports from my IP (can be adjusted later on, for SSH etc.).
g. Review the settings and launch the instance. To allow connection to the instance, you
will be requested to choose an existing key pair in the account or to create a new key pair.
Refer to the appendix for details on creating a Key Pair.
B. Right click on the address and associate it with the Aggregator instance.
b. If your key.pem resides under /mnt/c/... (for example, when using WSL), applying chmod 400
will not change the permissions as expected.
To change the permissions, move your key file to the home directory (in root)
and apply the change there; otherwise you will get a
Permissions 0555 for key.pem are too open
message (see the example below).
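A minimal sketch of this fix (the source path and instance address are placeholders for illustration):

cp /mnt/c/Users/<user>/Downloads/key.pem ~/        # move the key out of the mounted Windows filesystem
chmod 400 ~/key.pem                                # restrict permissions so ssh accepts the key
ssh -i ~/key.pem ubuntu@<instance_address>         # connect using the key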
c. Go to Instances → Instances, select the Aggregator, right click and select “Connect”
to view detailed instructions.
B. Validate that /etc/hosts contains the IP of the Management that the Aggregator will
face. Otherwise, add or fix it (see the example below).
Required line in /etc/hosts:
<Management_Public_IP> gc-management
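If the entry is missing, a one-line sketch for appending it (run as root; the IP is a placeholder):

echo "<Management_Public_IP> gc-management" >> /etc/hosts   # map the gc-management hostname to the Management IP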
C. Define the IP addresses that should be allowed to connect to the Aggregator over SSH
(port 22). To allow all, add 0.0.0.0/0 .
Note that this setting sets iptables rules on the Management, which is also subject to the network
policy defined by the AWS Security Groups associated with the instance.
Note: Guardicore strongly recommends that you configure this now and click Yes.
Explanation: Select YES to reset the iptables INPUT chain configuration, unless you have already
set local rules manually before running the wizard and Guardicore has confirmed that you do not
need to reset the INPUT chain. You may have set other rules manually if additional appliances run
on this machine. If not, and the sole function of this machine is running the GC appliance, you can
safely click YES and let the automation define the iptables rules for you.
Choosing YES will clear all rules in the INPUT chain. To skip, choose NO. This can be
configured after the completion of the wizard, on the machine itself.
H. Name the environment for the Centra system. This name will be used and will appear at the
top of the page while logged in to the UI:
I. Pick the type of installation: All In One (all services installed on one Management server) or
Cluster (management services are distributed across a cluster in the internal network).
For more details, refer to the original installation guide and follow the networking
procedure for a clustered installation, as there are additional prerequisites for networking
between the components, such as configuring the connection between the management
controller and the rest of the cluster, naming the instances, etc.
J. Click Yes to start the setup, and OK on the next screen:
K. The setup wizard is done and will now commence the installation, after which your
Management server will be ready for use.
A. Log in to your AWS web console and select the destination region for the Aggregator
deployment.
a. Step 1: choose AMI. Select “My AMIs – AMIs shared with me” and locate the AMI shared
by GuardiCore. You can find it by searching “GuardiCore” or by using the AMI-ID (all your
AMIs are listed in EC2 Dashboard → Images → AMIs → Private)
b. Step 2: choose Instance Type. Select instance type, with EBS volume.
ii. In case a “Mega Aggregator” is needed, make sure the VM is provisioned with 32GB
of RAM and 12 vCPUs.
c. Step 3: configure Instance Details. Mandatory fields are:
i. Network: choose the VPC of the EC2 instances that will be deployed with Agents.
ii. Subnet: choose the subnet of the EC2 instances that will be deployed with Agents.
i. If you are creating a new Subnet, see Appendix 2 at the bottom.
i. If there are agents that cannot reach the Aggregator's VPC IP (i.e., not in
AWS), choose Enable.
ii. If the agents are on AWS or you are not sure, choose Disable. You can
always allocate a public (elastic) IP after the instance is created.
iv. Under Network Interfaces, a single interface should be connected, configured with
a single IP (which can be Auto-assigned).
d. Step 4: Add storage. The required storage parameters are included in the AMI; the default is
30 GB. There is no need to change this; proceed to step 5.
e. Step 5: Add tags. Add a Name (example value: GC_Aggregator_X). Additional tags can be
added according to the customer's conventions.
iii. Inbound: Port 443 (HTTPS) from CIDR that will be covered by Agents. It is common
to allow 443 from any (0.0.0.0/0).
g. Step 7: Review the settings and launch the instance. To allow the connection to the
instance, you will be requested to choose an existing key-pair in the account or to create a
new key-pair.
Note: You will need to run chmod 400 <key-pair.pem> before connecting.
Note: Go to Instances → Instances, select the Aggregator, right click and select
“Connect” to view detailed instructions.
B. Validate that /etc/hosts contains the IP of the Management that the Aggregator will
face. Otherwise, add or fix it.
Required line in /etc/hosts:
<Management_Public_IP> gc-management
D. Select the features you want the Aggregator to activate on its associated Agents:
Deception Agents Server: Select this option to turn on deception capabilities for Agents on guest servers that are not protected by ESX Collectors.
Enforcement Agents Server: Select this option only if you want to turn on policy enforcement capabilities.
Detection Agent Server: Select this option to enable file integrity monitoring (FIM) capabilities.
Agents Load Balancer: Select this option to allow distribution of the Agent load to other Aggregators in the cluster.
Legacy Deception: Select this option to support deception for agents from versions prior to v36. Unmarking this option will cause old agents not to redirect traffic to the deception server, letting the Aggregator handle ~250 agents. Marking this option will turn on support for redirecting deception traffic for old agents (prior to v36), but will limit the number of agents handled to ~100.
Note: If there is a need to change the configuration and add support for this feature, run the setup of the aggregator again (aggr-setup) and mark the feature as enabled under Administration > Aggregator > Features > Legacy Deception. Also, all aggregators in the cluster must be of the same type, and once changed, the aggregator will not support deception for the other type of agents.
E. Enter the IP address of the Management Server in the GuardiCore Cloud, provided by
GuardiCore:
G. Define the IP addresses that should be allowed to connect to the Aggregator over SSH
(port 22). To allow all, add 0.0.0.0/0 .
Note that this setting sets iptables rules on the Aggregator, which is also subject to the network
policy defined by the AWS Security Groups associated with the instance.
H. In the Advanced Settings, configure any setting you wish to change/use. Otherwise, select
‘Continue’.
a. Under hostname, write the hostname you wish to use for the Aggregator
b. If you wish to use an FQDN for the Aggregator, do so here (lower-case characters
are preferred).
I. Click Yes to allow Agents to communicate with the Aggregator's public IP (for instance,
Agents from a different VPC or from outside AWS). If only Agents inside the VPC are
expected, click No.
Role Description
L. Choose Other Cluster ID, with indicative name (for instance: AWS_VPC_1):
Mega Aggregators
Add the following at the end:
[enforcement_worker]
max_worker_number = 6
This section includes subsections for both On-Premises Configuration, and Distributed
Management Cluster configuration.
Note: Obtain IPs for On-prem components: Management Server, Deception Server, Collector(s), and
Aggregator(s) prior to commencing the install process as these IP addresses will be used during the
installation process.
Note: Hyper-V installation uses VHD/VHDX files that are supplied by Guardicore. Contact your professional
services engineer for a download link.
Note: The Deployment procedure describes the deployment of a generic VM in Hyper-V, in this example the
Management AIO instance. The deployment procedure for every other component is similar, except for the
server sizing and connectivity requirements listed in the requirements section at the beginning of this
guide.
This procedure has been tested on Windows Server 2012 R2 and Windows Server 2016 64-bit.
Note: The Deception Server can't be imported on Windows 2012 R2 (nested virtualization does not exist on
Windows 2012 R2).
To use Deception, use Windows Server 2016 and see the specific section of this document.
1. Once downloaded, copy the files needed for the deployment to your Windows 2012 R2 / 2016 64-bit server.
3. Choose “Generation 1”
6. Your VM is ready for deployment. Before starting it, continue the guide for further configurations.
IMPORTANT : Before starting the Machine, you need to modify the settings.
8. Add the second disk of the Management by clicking “IDE Controller 1”, selecting “Hard Drive”, and clicking
“Add”.
Select Disk_2 of the Management VM.
10. Change the number of Virtual Processors to a minimum of 8 for the Management.
11. You can clean up the VM by deleting the SCSI Controller and the DVD Drive.
Don’t forget to apply all changes.
12. To add another network interface to the VM, go to the “Add Hardware” tab at the top and add
another interface. After adding the interface, select the desired network to connect to via the new
interface.
Note: The following is an example of the processor settings output for the Deception VM:
VMCheckpointId : 00000000-0000-0000-0000-000000000000
VMCheckpointName :
ResourcePoolName : Primordial
Count : 8
CompatibilityForMigrationEnabled : False
CompatibilityForOlderOperatingSystemsEnabled : False
HwThreadCountPerCore : 1
ExposeVirtualizationExtensions : False
Maximum : 100
Reserve : 0
RelativeWeight : 100
MaximumCountPerNumaNode : 8
MaximumCountPerNumaSocket : 1
EnableHostResourceProtection : False
OperationalStatus : {}
StatusDescription : {}
Name : Processor
Id : Microsoft:15C5F136-E774-40AF-A116-2283E4CA080E\b637f346-6a0e-4dec-af52-bd70cb80a21d\0
VMId : 15c5f136-e774-40af-a116-2283e4ca080e
VMName : Deception
VMSnapshotId : 00000000-0000-0000-0000-000000000000
VMSnapshotName :
CimSession : CimSession: .
ComputerName : WIN-INABEVB39G5
IsDeleted : False
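The output above resembles what the Hyper-V Get-VMProcessor cmdlet returns for the Deception VM. Because the Deception Server relies on nested virtualization (see the note about Windows 2012 R2 above), ExposeVirtualizationExtensions should typically be True; if it shows False, it can be turned on (with the VM powered off) using standard Hyper-V PowerShell cmdlets. A hedged sketch, assuming the VM is named Deception as above:

Get-VMProcessor -VMName Deception | Format-List *                          # review the current processor settings
Set-VMProcessor -VMName Deception -ExposeVirtualizationExtensions $true    # enable nested virtualization for the Deception VM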
1. Make sure the Management server VM was provisioned using the appropriate VHD, and
that the compute specs and networking are set as required (see the section on Enabling
Optional Services below before starting).
User : admin
Password : GCAdmin123
Note:
● After the root user’s password is set in step 4 below, the `admin` user will be disabled.
● After system boot, the installation wizard waits for the docker service to be ready before
starting. This may take up to 5 minutes, during which you might not be able to log in.
5. Type a new root user password for the machine and click OK. The password should consist
of at least 6 characters and contain both upper and lowercase letters and numbers, but no
punctuation marks or other symbols. You will be asked to enter your password selection
twice.
8. Select Static and click OK to set the Guardicore Network interface manually as in the
following example:
A similar wizard will display for each connected interface. Repeat interface configuration for all
connected interfaces.
Note: Having more than two network interfaces for the management is not supported.
Note: Steps 9-10 will only appear if you have more than one network interface for the
Management server.
9. Select the interface matching the Guardicore Internal Network, used for connectivity with
other Guardicore Centra components:
10. Select the interface matching the External Network - used for users’ connectivity to UI /
REST API / SSH.
11. Define the IP addresses that should be allowed to connect to the Management Server over
SSH (port 22). To allow all, add 0.0.0.0/0
Note: Guardicore strongly recommends that you configure this now and click Yes.
Explanation: Select YES to reset the iptables INPUT chain configuration, unless you have already
set local rules manually before running the wizard and Guardicore has confirmed that you do not
need to reset the INPUT chain. You may have set other rules manually if additional appliances run
on this machine. If not, and the sole function of this machine is running the GC appliance, you can
safely click YES and let the automation define the iptables rules for you.
Choosing YES will clear all rules in the INPUT chain. To skip, choose NO. This can be
configured after the completion of the wizard, on the machine itself.
12. Set a Guardicore Secure Communication Token_ID (password). This will be used by Centra
components to authenticate against the Management Server during installation.
16. You must click Yes to enable the Segmentation Policy Enforcement feature. Selecting No is
no longer supported. The following is displayed:
17. Type a name for the environment. The name will be used during the integration with
Guardicore’s health monitoring system and should be later coordinated with a Guardicore
representative.
Note: Selecting an environment name is optional. You can skip setting a name by leaving the text
empty.
20. Type the Management IP in the Guardicore Internal Network and click OK:
21. Click Yes to continue or No to edit your configuration. After clicking Yes, the following is
displayed.
22. Click OK to start the installation. Installation execution can take up to 30 minutes:
After the installation is complete, you can log in to Centra's UI using the user admin and the
password you chose.
Note: It is possible to replace the UI certificate with your own (customer) certificate. To do so,
create a support ticket; you will be emailed as soon as the request is received.
To install Centra efficiently in a Distributed Management configuration, follow these steps:
3. Make sure you have provisioned the required machines as detailed in the section
Requirements for Distributed Management Cluster.
4. Create an IP-plan so that each member of the cluster has an IP you can assign during the
installation process.
The objective of this step is to provision all required VMs from *.VHD templates and connect the
VMs to the network, so all subsequent steps can be done remotely over SSH sessions.
11. Deploy each of the Management Distributed Nodes from the Distributed nodes VHD.
12. Turn on all the deployed Management Cluster VMs, and login with the following
credentials:
User : root
Password : GuardR00t111
13. Make sure the time on each node is correct and synced with the Control node. You can
achieve this either by manually setting the time on each node or by enabling the
“Time synchronization” integration service in the VM's settings (in Hyper-V Manager).
Failing to accomplish this step on all nodes will result in a failed installation.
14. On each machine, configure the network interfaces according to the deployment IP-plan.
Using “ifconfig -a”, identify which MAC address is assigned to each logical interface,
comparing those with the Hyper-V settings to identify the interfaces that should be configured
according to the IP-plan.
Make sure the network interfaces are up and running by performing:
ifconfig eth0/1.. up
and confirm with ifconfig again.
rm /etc/netplan/*
######################################################################
## Examples:
#network:
#  ethernets:
#    ens160:
#      addresses: []
#      dhcp4: true
#      dhcp-identifier: mac
#  version: 2

network:
  ethernets:
    ens160:
      addresses: [192.168.1.1/24]
      gateway4: 192.168.1.254
      dhcp4: no
      nameservers:
        addresses: [8.8.8.8]
  version: 2

#network:
#  ethernets:
#    ens160:
#      addresses: [192.168.1.1/24]
#      gateway4: 192.168.1.254
#      dhcp4: no
#      nameservers:
#        addresses: [8.8.8.8]
#    ens192:
#      addresses: []
#      dhcp4: true
#      dhcp-identifier: mac
#  version: 2
######################################################################
Note: The nameservers configuration is optional. If multiple DNS servers are needed, separate
them with commas.
16. Restart the network interface for the change to take effect:
netplan try
18. Connect to each instance remotely using SSH. To do this, follow the instructions in the next
section (Preconfiguration of Management Nodes). After completion, you will be ready to
run the setup wizard from the Control.
D. Preconfigure the Management Cluster Nodes
In this step you configure hostnames for the Management cluster nodes, reset the root password,
and sync SSH keys from the Control. You will then be ready to run the setup wizard.
Configure Hostnames
On each of the Management cluster node instances (excluding the Control), configure a
meaningful hostname. A suggested naming scheme is provided here, although you may want to use
alternative hostnames that comply with the company policy instead.
Configure as follows:
5. Run the following, replacing <HOSTNAME> with the new hostname:
hostnamectl set-hostname <HOSTNAME>
NOTE: If you get “Failed to create bus connection: No such file or directory” then simply
reboot, log back in, and then retry.
7. Edit the line containing the loopback IP address in the file /etc/hosts.
Replace
127.0.1.1 gc-management-node
with
127.0.1.1 <NEW_HOSTNAME>
25. Allow passwordless SSH login from the Control node to all the other nodes, by running the
following command on the Control node for each node:
ssh-copy-id <Node IP>
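A minimal sketch of this step, assuming the Control node does not yet have an SSH keypair (the node IPs are placeholders):
ssh-keygen -t rsa        # only needed once, if the Control node has no keypair yet; accept the defaults
ssh-copy-id 10.1.1.11    # repeat for every other node in the cluster
ssh-copy-id 10.1.1.12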
mgmt-setup
Click OK and select a new root user password for the machine. You will be asked to enter your
password selection twice:
26. Click Yes to set the root password and disable the default “admin” user:
30. A similar wizard will be displayed for each connected interface. Repeat the interface
configuration for all connected interfaces.
Note: The default gateway should be set on only one interface, usually the external interface.
31. Select the interface matching the Guardicore Internal Network, used for connectivity with
other Guardicore Centra components.
32. Select the interface matching the External Network - used for users connectivity to UI /
REST API / SSH:
33. Define the IP addresses that should be allowed to connect to the Management Server over
SSH (port 22). To allow all, add 0.0.0.0/0:
Note: Guardicore strongly recommends that you configure this now and click Yes:
Explanation: Select YES to reset the iptables INPUT chain configuration, unless you have already
set local rules manually before running the wizard and Guardicore has confirmed that you do not
need to reset the INPUT chain. You may have set other rules manually if additional appliances are
running on this machine. If not, and the sole function of this machine is running the GC appliance,
you can safely click YES and let the automation define the iptables rules for you.
Choosing YES will clear all rules in the INPUT chain. To skip, choose NO. This can also be
configured after the wizard completes, on the machine itself.
35. Set a Guardicore Secure Communication password. This password is used by Centra
components to authenticate against the Management Server during installation and should
be kept secret. Please use only alphanumeric characters for passwords.
37. Click Yes to enable the Guardicore Reputation Service. Note that this setting can be later
changed from UI.
38. Entering the environment name is optional. You can skip it by leaving the text empty. If
entered, the name is used during system health monitoring integration, and should later be
communicated to a Guardicore representative.
41. Enter the Management Worker nodes’ IP addresses. Make sure to also include the
Management Control IP in the Workers list.
Note: In the next steps (19-22), you configure the IP of each dedicated external node. The
controller node’s IP should only be included in the list if the controller node is planned to
take one of these roles.
42. Enter the MongoDB node IP address. If the deployment requires more than one MongoDB
node, enter only the IP of the 1st MongoDB node, and configure a MongoDB HA Cluster
after this step is complete.
44. Enter the InfluxDB node/nodes IP address/es. Note - in case the InfluxDB node will run on the
Management Control node, specify the Management Control’s IP. Leaving this screen
empty and unconfigured will break the install process.
45. Enter RabbitMQ node IP address. If the deployment requires more than one RabbitMQ
node, enter only the IP of the 1st RabbitMQ node, and see Configuring RabbitMQ
Redundancy Cluster for adding an additional node after this step is complete.
46. Enter the Postgres Daily Flows node IP address. In case there is no external node for the
Postgres service, fill in the Management Control node’s IP address. Leaving this screen
empty and unconfigured will break the install process.
48. Click OK to start the installation (ignore the “Setup completed” message). Installation
execution can take up to 60 minutes.
Note:
In case the flow is interrupted with the following error:
Upgrade failed on state “START_CLUSTER_INFRA”, check
“/var/log/guardicore/upgrade_service.log”
28. Validate the UI is accessible by connecting to the Centra UI: browse to the Control node’s
external interface IP over port 443. Note: There is an option to replace the UI certificate with your
own (customer) certificate. To do so, create a support ticket. You will be emailed as soon as the
request is received.
gc-cluster-cli health
Setup
Execute the following command on Control to configure a standby RabbitMQ node:
gc-cluster-cli add_node --node_type rabbitmq --node_address
<Standby_RabbitMQ_IP>
Validation
This command should perform the following actions:
Note: Be aware that in case of a failover process, existing unprocessed messages in the queue
will be lost.
The primary MongoDB server is the "Control" node. It is the only MongoDB node that is allowed to
write data, and all writes go through it. Whenever a replica set is installed, an election process is
held to elect the primary MongoDB server. This also happens if the current primary dies.
The secondary servers replicate data from the primary node. Secondary servers are not allowed to
write. A secondary server can become the primary node via an election process (assuming its
configuration allows it).
When there is a successful failover (i.e. when the primary node fails and cannot communicate with
the rest of the cluster's secondary nodes for more than 10 seconds, by default), a new primary node
is elected.
3. Make sure that all MongoDB replica set nodes are available via ssh and will not require
a password. You can use the following command to do this:
ssh-copy-id <node_address>
4. Make sure that iptables allows TCP traffic on port 27017 between these nodes.
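A minimal sketch of such a rule (the source subnet is a placeholder; adjust it to the replica set nodes’ addresses):
iptables -I INPUT -p tcp -s 10.1.1.0/24 --dport 27017 -j ACCEPT    # allow MongoDB traffic from peer nodes
iptables -L INPUT -n | grep 27017                                  # verify the rule is present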
gc-mongodb-cluster-cli init
Note: When running init, the app group is stopped to avoid CPU spikes due to the
interruption of MongoDB service.
Note: If the initialization of the MongoDB replica set fails following the previous command,
use the following workaround:
export IMAGE_TAG=$(python -OO /var/lib/guardicore/management/docker_deploy/scripts/cluster/startup/helpers/get_image_tag.pyo -i gc-service)
exit
4. Verify the MongoDB replica set was successfully initialized using the following
command:
gc-mongodb-cluster-cli health
At this point there should be a single server with status Primary whose health is ok.
Once the replica-set has been initialized, it is ready to accept new members.
3. To add a node to the replica set and the Management cluster, use the following:
The --add_to_cluster flag will make sure to first add the new machine as a management cluster
node. This does what gc-cluster-cli add_node does, just specifically for the mongodb node
type. Once the node is part of the management cluster, it will be added as a member of the
initialized MongoDB replica set.
IMPORTANT:
● There is NO need to add the MongoDB replica set nodes via gc-cluster-cli.
● If the nodes were already added via gc-cluster-cli, skip passing the
--add_to_cluster flag when running the above.
4. To verify the new node successfully joined the existing MongoDB replica use the
following command:
gc-mongodb-cluster-cli health
■ At this point there should be a single server with status Primary whose health is ok.
■ In addition, you should have a single server with status Secondary whose health is ok.
(above example has more nodes).
This process can be repeated for each node you want to add to the replica set.
■ It is critical for high availability to have a replica set with at least 3 nodes.
■ The number of nodes must be odd for the primary election to work (there is an arbiter
option which is not yet supported).
Once the process of forming the MongoDB replica set is complete and all desired nodes are a part
of the replica set issue the following command:
gc-cluster-cli cluster-restart
User: admin
Password: admin
Note: After the root user’s password is set in step 4 below, the `admin` user will be disabled.
4. Type a new root user password for the machine and click OK. The following is displayed:
6. Choose Static to set the Guardicore Network interface manually. The following appears:
7. Enter parameters for interface settings and click OK. The following appears:
8. Enter the IP address of the Management Server in the Guardicore network and click OK.
The following is displayed:
9. In the Tunnel port screen accept the default 443 port by clicking OK. The following is
displayed:
10. Enter the Secure Communications password as set in the installation of the Management
Server and click OK. The following appears:
11. Define the IP addresses that should be allowed to connect to the Deception Server over
SSH (port 22). To allow all, add 0.0.0.0/0
In case an alternative solution such as GSLB is used, create the FQDN alias record in the GSLB.
Note: If the Aggregator was installed using the wizard prior to configuration of the FQDN, it is
possible to add the FQDN of the Aggregator’s cluster after the installation process. Refer to the
guide for configuring the Aggregator FQDN on an installed Aggregator.
Deploy Aggregators
3. In case a “Mega Aggregator” is needed, make sure the VM is provisioned with 32GB of
RAM and 12 vCPUs.
If not, stop the VM and change the resource allocation via “Edit settings”.
The Mega Aggregator is used to handle ~2000 Agents, with deception disabled. Please
note this when assigning responsibilities to the Aggregator in the installation wizard.
4. Turn on the Aggregator VM and open the console.
User: admin
Password: admin
Note: After the root user’s password is set in step 4, the `admin` user will be disabled.
9. Select the features you want the Aggregator to activate on its associated Agents:
Deception Agents Server - Select this option to turn on deception capabilities for Agents on guest
servers that are not protected by ESX Collectors.
Enforcement Agents Server - Select this option only if you want to turn on policy enforcement
capabilities.
Detection Agent Server - Select this option to enable file integrity monitoring (FIM) capabilities.
Agents Load Balancer - Select this option to allow distribution of the Agent load to other
Aggregators in the cluster.
Legacy Deception - Select this option to support deception for agents from versions prior to v36.
Unmarking this option will cause old agents not to redirect traffic to the deception server, letting
the Aggregator handle ~250 agents. Marking this option will turn on support for redirecting
deception traffic for old agents (prior to v36), but will limit the number of agents handled to ~100.
Note: In case there is a need to change the configuration and add support for this feature, run the
setup of the aggregator again (aggr-setup) and mark the feature as enabled under
Administration > Aggregator > Features > Legacy Deception.
11. Choose Static to set the Guardicore Network and the Agent facing interfaces manually.
A Static IP should be reserved in the customer’s network.
13. A similar wizard will be shown for each connected interface. Repeat the interface
configuration until all connected interfaces are set.
14. Enter the IP address of the Management Server in the Guardicore network:
15. Enter the Secure Communications password as set in the installation of the Management
Server:
16. Define the IP addresses that should be allowed to connect to the Aggregator over SSH
(port 22). To allow all, add 0.0.0.0/0
17. In Advanced Settings, configure any setting you wish to change/use and click OK.
Otherwise, select Continue:
18. Under hostname, write the hostname you wish to use for the Aggregator:
28.
Role Description
22. If the Aggregator has a NAT facing the Agents, enter its IP address here:
25. At some point during the installation, the Cluster ID box appears:
26. Choose the Cluster ID for the Aggregator. If you do not have multiple
Collectors/Aggregator clusters, choose ‘default’.
NOTE: The design of how Collectors/Aggregators are assigned to clusters should be done
in consultation with Guardicore Professional Services/Customer Success.
Mega Aggregators
[enforcement_worker]
max_worker_number = 6
at the end.
This section includes the computing resource requirements for a successful installation,
instructions for preparing for the installation and required networking for the deployment.
2.4.1 Preconditions
1. Administrative access to the Azure Subscription and Resources.
2. Azure VHD of the Aggregator component is shared by Guardicore with the customer.
3. Microsoft Azure Storage Explorer for uploading the VHD onto the Azure platform.
a. Download and install the software on your machine.
4. Guardicore Centra™ Management has been set up for the customer.
5. *Azure Server with Internet access, sufficient storage (depending on component VHD), and
Microsoft Azure Storage Explorer installed
* - Recommendation
1. Download the Azure VHD from Guardicore’s Customer Portal and save it locally
2. Upload the VHD to a Disk on your Azure Platform using Microsoft Azure Storage Explorer,
following these steps:
iii. In the Azure Sign in dialog box, enter your Azure credentials.
iv. Select your subscription from the list and then click Apply.
iii. In Upload VHD specify your source VHD, the name of the disk, the
OS type (select Linux), the region you want to upload the disk to, as
well as the account type (works with both HDD and SSD). Select
Create.
v. If the upload has finished and you don't see the disk in the right pane,
select Refresh.
3. Troubleshooting
a. If you do not have permissions to upload a VHD directly to a Disk, upload it to a
Blob, then create a managed Disk from the VHD blob following this guide:
https://aidanfinn.com/?p=20441
NOTE: Make sure to select the type of disk to be gen1 and not gen2, as gen1
uses BIOS while gen2 uses UEFI which is not supported.
b. If you are uploading from a local server (not an Azure server), and receive the
following message:
{"code":"DeploymentFailed","message":"At least one resource deployment
operation failed. Please list deployment operations for details. Please
see https://aka.ms/DeployOperations for usage
details.","details":[{"code":"BadRequest","message":"The specified cookie
value in VHD footer indicates that disk
'Guardicore_Aggregation_Server_49150.vhd' with blob
https://xxxxx.blob.core.windows.net:8443/guardicoreimage/Guardicore_Aggre
gation_Server_xxxxx.vhd is not a supported VHD. Disk is expected to have
cookie value 'conectix'."}]}
It is most likely caused by a connectivity issue / timeout that broke the upload.
Make sure you have a stable connection and that your computer does not go to
sleep. If the issue persists, please retry uploading the VHD using an Azure
Server, as recommended.
c. If uploading still fails, please make sure the following permission settings are
valid:
i. The user has the necessary permissions to Create a Disk
ii. There are no Disk Access restrictions preventing the user from uploading
iii. There are no Locks on the Resource Group or Resources
1. Create the VM
a. From the Azure portal, on the left menu, select All services.
b. In the All services search box, enter disks and then select Disks to display
the list of available disks.
c. Select the disk that you would like to use. The Disk page for that disk opens.
d. In the Overview page, ensure that DISK STATE is listed as Unattached. If it
isn't, you might need to either detach the disk from the VM or delete the VM
2. Basics
i. Enter a Virtual machine name and either select an existing Resource
group or create a new one.
ii. Select a VM size.
1. Standard requirement for an Aggregator is 4CPU, 4GB RAM, 30GB
Storage. However, Azure recommendation for VM size for cost
savings is “B2ms”, which supports CPU burst usage.
2. In case a “Mega Aggregator” is needed, make sure the VM is
provisioned with 32GB of RAM and 12 vCPUs.
iii. In Inbound port rules, None can be selected; a Security Group will be attached
later on.
3. Disks
i. Create and attach a new Disk
4. Networking
i. Either create new resources through the portal or select existing
ones for
1. Virtual network
2. Subnet
3. Public IP
ii. Choose Advanced NIC network security group, and either create a
new SG or choose an existing one.
The Security Group should allow the following:
1. Inbound: Port 22 (SSH) from company’s CIDR
2. Inbound: Port 443 (HTTPS) from CIDR that will be covered by
Agents. It is common to allow 443 from any (0.0.0.0/0).
3. Outbound: By default, an Azure Security Group allows any outbound
communication to the Internet. To validate this, check the outbound
rules of the Network Security Group in the Azure portal. Following
successful setup of the Aggregator, this may be locked down to Port 443
(HTTPS) to the Guardicore Management IP Address.
4. By default a new Security Group enables any inbound and outbound
traffic within the Vnet; this may be modified to your needs. (An Azure CLI
sketch of the inbound rules above is provided after the VM creation steps below.)
5. Other
a. Skip Management configuration phase.
b. In the Advanced configuration, validate the type of VM created is gen1, otherwise-
redeploy the disk from which the VM is created as gen1.
c. Add Tags if relevant
d. Review + Create the VM
e. Wait on this screen until the system notifies you the resource has been created.
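For reference, a minimal Azure CLI sketch of the inbound rules described above (the resource group and NSG names are placeholders; the rules can equally be created in the portal as described):
az network nsg rule create --resource-group <rg> --nsg-name <nsg-name> --name Allow-SSH \
    --priority 100 --direction Inbound --access Allow --protocol Tcp \
    --destination-port-ranges 22 --source-address-prefixes <company CIDR>
az network nsg rule create --resource-group <rg> --nsg-name <nsg-name> --name Allow-Agents-443 \
    --priority 110 --direction Inbound --access Allow --protocol Tcp \
    --destination-port-ranges 443 --source-address-prefixes 0.0.0.0/0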
d. Run
sudo su
ii. You may need to press enter if you are not prompted with login input
4. Select the features you want the Aggregator to activate on its associated Agents:
Deception Agents Server - Select this option to turn on deception capabilities for Agents on guest
servers that are not protected by ESX Collectors.
Enforcement Agents Server - Select this option only if you want to turn on policy enforcement
capabilities.
Detection Agent Server - Select this option to enable file integrity monitoring (FIM) capabilities.
Agents Load Balancer - Select this option to allow distribution of the Agent load to other
Aggregators in the cluster.
Legacy Deception - Select this option to support deception for agents from versions prior to v36.
Unmarking this option will cause old agents not to redirect traffic to the deception server, letting
the Aggregator handle ~250 agents. Marking this option will turn on support for redirecting
deception traffic for old agents (prior to v36), but will limit the number of agents handled to ~100.
Note: In case there is a need to change the configuration and add support for this feature, run the
setup of the aggregator again (aggr-setup) and mark the feature as enabled under
Administration > Aggregator > Features > Legacy Deception.
Also, all aggregators in the cluster must be of the same type, and once changed, the aggregator
will not support deception for the other type of agents.
5. Enter the IP address of the Management Server in the GuardiCore Cloud, provided by
GuardiCore:
7. Define the IP addresses that should be allowed to connect to the Aggregator over SSH
(port 22). To allow all, add 0.0.0.0/0 .
Note this setting sets iptables rules on the Aggregator, which is also subject to a network
policy defined by the Azure Security Groups associated with the instance.
8. In the Advanced Settings, please configure any setting you wish to change/use. Otherwise,
select ‘Continue’
a. Under hostname, write the hostname you wish to use for the Aggregator
b. If you wish to use an FQDN for the Aggregator, do so here (lower-case characters
are preferred).
9. Click Yes to allow Agents to communicate against the Aggregator’s public IP (for instance,
Agents from a different Network / from outside Azure). In case only inter-Azure Network
Agents are expected, click No.
29.
Role Description
12. Choose Other Cluster ID, with an indicative name (for instance: Azure_Network_1);
otherwise, the installed aggregator will try to connect to the wrong cluster manager,
unsuccessfully:
[enforcement_worker]
max_worker_number = 6
at the end.
This section includes the computing resource requirements for a successful installation,
instructions for preparing for the installation and required networking for the deployment.
2.5.1 Preconditions
1. Administrative access to the organization’s GCP console.
2. GCP image of the Aggregator component is shared by Guardicore with the customer /
downloaded from the customer portal and uploaded to the client's project.
Contact GC prior to installation in order to receive the image.
3. Connectivity requirements, FIREWALL RULES of the aggregator’s network:
a. Ingress traffic: allow ports 80 and 443 as needed.
b. Ingress traffic from internal networks as needed.
c. Ingress traffic via port 22 for administration purposes, as needed.
a. Security: The aggregator image comes with a password and does not include SSH
keys.
b. Networking:
i. Network Interfaces- add the needed interfaces in order to connect to the
internal network, administration network etc.
ii. Configure needed subnets.
iii. Create an internal IP and an external IP (as per the client’s requirements /
configuration). An Ephemeral external IP is an automatically assigned external IP.
iv. Click on Done to complete the networking configuration.
8. Click Create to commence creation of the instance.
9. Validate internal and External IP created.
a. In order to anchor the Ephemeral IP to the aggregator and make it static, go to VPC
> External IP addresses, and change “Type” to Static.
This is recommended in order to keep the aggregator’s IP in case of a reboot.
Note: the -m -s flags skip the configuration of network settings that were already
configured by GCP, and skip resetting the root user’s password of the machine. (-m is for
managed host, -s is for saas environment)
5. Select the features you want the Aggregator to activate on its associated Agents:
Deception Agents Server - Select this option to turn on deception capabilities for Agents on guest
servers that are not protected by ESX Collectors.
Enforcement Agents Server - Select this option only if you want to turn on policy enforcement
capabilities.
Detection Agent Server - Select this option to enable file integrity monitoring (FIM) capabilities.
Agents Load Balancer - Select this option to allow distribution of the Agent load to other
Aggregators in the cluster.
Legacy Deception - Select this option to support deception for agents from versions prior to v36.
Unmarking this option will cause old agents not to redirect traffic to the deception server, letting
the Aggregator handle ~250 agents. Marking this option will turn on support for redirecting
deception traffic for old agents (prior to v36), but will limit the number of agents handled to ~100.
Note: In case there is a need to change the configuration and add support for this feature, run the
setup of the aggregator again (aggr-setup) and mark the feature as enabled under
Administration > Aggregator > Features > Legacy Deception.
6. Enter the IP address of the Management Server in the GuardiCore Cloud, provided by
GuardiCore:
8. Define the IP addresses that should be allowed to connect to the Aggregator over SSH
(port 22). To allow all, add 0.0.0.0/0 .
Note this setting sets iptables rules on the Aggregator, which is also subject to a network
policy defined by the GCP networking configuration associated with the instance.
9. In the Advanced Settings, please configure any setting you wish to change/use. Otherwise,
select ‘Continue’
a. Under hostname, write the hostname you wish to use for the Aggregator
b. If you wish to use an FQDN for the Aggregator, do so here (lower-case characters
are preferred).
10. Click Yes to allow Agents to communicate against the Aggregator’s public IP (for instance,
Agents from a different Network / from outside GCP). In case only intra-GCP Network
Agents are expected, click No.
Role Description
13. Choose Other Cluster ID, with an indicative name (for instance: GCP_Network_1);
otherwise, the installed aggregator will try to connect to the wrong cluster manager,
unsuccessfully:
[enforcement_worker]
max_worker_number = 6
at the end.
This section includes the computing resource requirements for a successful installation,
instructions for preparing for the installation and required networking for the deployment.
2.6.1 Preconditions
1. Administrative access to the organization’s OCI console.
2. OCI image of the Aggregator component is shared by Guardicore with the customer.
Contact GC prior to installation in order to receive the image.
3. Connectivity requirements, FIREWALL RULES of the aggregator’s network:
a. Guardicore network- to communicate with the rest of the aggregators.
This interface should receive a static IP.
b. Connectivity to agents (guests). An option to use NAT is supported. This interface
should be assigned with a static IP.
c. At least one Aggregator / Collector should be able to reach the OCI management
network. This interface should be assigned with a static IP.
4. Instance requirements for the Aggregator VM:
a. VM.Standard.E4.Flex.
b. 4GB RAM.
c. 2 OCPUs
d. 30GB Boot Volume
5. Select the features you want the Aggregator to activate on its associated Agents:
Deception Agents Server - Select this option to turn on deception capabilities for Agents on guest
servers that are not protected by ESX Collectors.
Enforcement Agents Server - Select this option only if you want to turn on policy enforcement
capabilities.
Detection Agent Server - Select this option to enable file integrity monitoring (FIM) capabilities.
Agents Load Balancer - Select this option to allow distribution of the Agent load to other
Aggregators in the cluster.
Legacy Deception - Select this option to support deception for agents from versions prior to v36.
Unmarking this option will cause old agents not to redirect traffic to the deception server, letting
the Aggregator handle ~250 agents.
6. Enter the IP address of the Management Server in the GuardiCore Cloud, provided by
GuardiCore:
8. Define the IP addresses that should be allowed to connect to the Aggregator over SSH
(port 22). To allow all, add 0.0.0.0/0 .
Note this setting sets iptables rules on the Aggregator, which is also subject to a network
policy defined by the OCI networking configuration associated with the instance.
9. In the Advanced Settings, please configure any setting you wish to change/use. Otherwise,
select ‘Continue’
a. Under hostname, write the hostname you wish to use for the Aggregator
b. If you wish to use an FQDN for the Aggregator, do so here (lower-case characters
are preferred).
10. Click Yes to allow Agents to communicate against the Aggregator’s public IP (for instance,
Agents from a different Network / from outside OCI). In case only intra-OCI Network
Agents are expected, click No.
Role Description
13. Choose Other Cluster ID, with an indicative name (for instance: OCI_cluster); otherwise, the
installed aggregator will try to connect to the wrong cluster manager, unsuccessfully:
[enforcement_worker]
max_worker_number = 6
at the end.
3 Agents deployment
3.1 Overview of Agent Installation Steps
Checks to Perform BEFORE Deployment
5 Follow instructions for executing Agent installation provided on the Admin GUI under
Agent Installation. Instructions for installing Windows, Linux, AIX, and Solaris Agents
are also provided in this guide in the corresponding sections below.
The following sections provide guidelines and instructions on each of these stages.
Verify that the network enables servers on which Agents will be installed to communicate with the
Aggregator(s) that will manage them over port 443. During online installation, as well as after the
installation, the Agent keeps in constant communication with the Aggregator to fulfill its normal
operation. To do that, port TCP/443 should be opened from the Agent towards the Aggregator(s)
that manage it.
Note - Depending on allocated compute resources, a single Aggregator can support the following
number of Agents:
Verify OS support
Verify that the OSs on devices on which Agents are to be deployed are supported. Refer to this
section for updated information.
Guardicore supports the installation of Agents on many operating systems, mainly from the
Windows, Linux and Unix families. Most operating systems support Agents with full capability,
while Agents on some legacy OSs have partial capabilities. Full support means that all four Agent
modules are supported (Reveal, Deception, Enforcement, and Detection). Partial capabilities
means that some Agent modules are not supported, or that a module is functioning based on a
legacy mechanism.
To view the full list of supported operating systems in your Centra version, do the following:
4. Scroll through the list of supported versions and select the version that exists on
the device to which you are deploying. Versions that are fully supported appear
with a green check mark. Versions that are partially supported appear with a yellow
check mark. The modules that are supported/unsupported appear immediately
under the instruction title on the right:
Closeup View:
If the OS on a device is unsupported, the device’s security can still be covered by a Guardicore
Collector, with certain limitations (the Collector can issue alerts for violations of flow policies but
cannot enforce policies).
● Agent Binaries: After installation, the Agent binaries require 60MB of disk space on the system.
● Log Files: Every Agent process is logged into a separate log file. By default, an additional
220MB of disk space is required for log files storage for the default “medium” log profile.
Refer to Agent Log Rotation Profiles for more info.
Overview
Agent installation profiles allow you to customize your initial Agent configuration and provide the
following benefits:
● Allow you to manage all Agent installation configurations from a single location.
● Eliminate the need for using configuration attributes as parameters for the local
installation of Agents on the server.
Installation profiles are relevant for install time only. Agent configuration can always be changed
after installation by selecting “override configuration” from the Agents screen. You can also reset
an Agent’s configuration to its profile as described in the Reset Configuration to Profile section.
To view and manage your installation profiles, you can open the Installation Profiles page in
Centra’s Administration screen, under Agents/Installation Profiles:
The Installation Profiles screen enables you to browse available profiles, create new ones, edit
existing profiles and delete those that are no longer needed. The screen also enables you to modify
the default installation profile.
Column - Description
Profile Name - Associates the Agent installation to a profile. See Agent Installation section for
detailed explanation.
Usage - The number of Agents in the system that were installed and associated with this
profile. The number represents only Agents that are currently registered in
Centra.
Any Agent that is installed without an installation profile is associated with the default profile.
The default profile is also used as a base profile for any customized installation profile. Each
attribute that was changed in some customized installation profile, overrides the default profile
attribute.
Note: Modifying the default profile will not affect installed Agents, but will affect any new Agent
installation, regardless of the defined profile. This is because the default profile is the base of any
custom installation profile. Attributes that were changed in the custom installation profile won’t
be affected by changes in the default profile.
You can add a new installation profile by clicking on the Add new profile button:
Now you can define the installation profile name that will be used by any Agent installation
procedure. The installation profile name cannot be changed after being created
You can now select which attribute you want to set and override. Any override will override the
value which is defined by the default installation profile. Any unchanged attribute will get a value
which is defined by the default installation profile.
When installed, any new Agent associated with this profile will have attributes as follows:
● Attributes that were modified with override values will get the modified values of the new
customized profile.
Agent Installation
To install an Agent with an installation profile you need to specify it during installation. The Agent
will be installed with the Default installation profile in the following cases:
In each of these cases, a message indicating that an Agent was installed with the Default profile
will be logged in the Agent Log Screen in the Centra UI. Changing an installed Agent’s attributes by
changing its installation profile is not currently supported.
To change an Agent’s attributes, you need to override its configuration through the Override
Configuration option in the Agents screen.
To change an Agent’s installation profile, you’ll need to uninstall the Agent and reinstall it with the
new installation profile.
Note: After installation, it might take up to 5 minutes for the Agent to be initialized with its
installation profile.
1. You can specify the installation profile through the Agent installer user interface:
2. You can specify the installation profile using the installer CLI interface:
You can set the installation profile for a Linux Agent by specifying the designated environment
variable before the standard installation commands:
export GC_PROFILE=<profile>
You can edit installation profiles, but remember, your changes will affect newly installed Agents
only. Editing profiles does not directly affect Agents that are already installed. However, you can
reset an Agent’s configuration to its profile which will reset the configuration to the most
up-to-date profile configuration (i.e., the profile configuration that you most recently edited).
Note: If you modify the default profile, remember that it also modifies other profiles, as other
profiles are considered as modifications of the default profile.
You can always reset single, or multiple Agents’ configurations to their installation profile
configurations.
Selecting Reset to profile defaults will display a description of the operation. The listed Agents will
reset their configuration to the configuration of the profile listed in the Target Profile column:
When an Agent is installed, its profile appears in the Installed Profile column. If the profile no
longer exists, the value in the Target Profile will be default.
2. On the Installation screen, select the Aggregator with which the Agent will communicate:
Installation profiles can be created on Centra and enable installing Agents with specific
configuration settings. For instructions on how to create and manage installation profiles,
see the Installation Profiles section above.
5. Follow the instructions for downloading and installing the packages, for example:
Note: Before installing the packages, click Advanced Options for additional
customization options, if required:
Notes:
● When installing the Agent using the Aggregator cluster’s FQDN, replace the Aggregator’s IP
(or the Aggregator cluster’s IP, if the FQDN points to the cluster) with the FQDN in the
installation snippet:
i. For Windows - windows_installer.exe /a aggregator.domain.com
/p "<installation password>"
● In case the Aggregator has multiple interfaces (one facing the Guardicore Management
server, and one facing the guest servers), the IP in the deployment snippet might be
pointing to the wrong interface and it might be necessary to change it manually.
Online installation: The installation flow pulls the package from the Guardicore Aggregator (that
fetches it from a repo on the Guardicore Management), then the flow installs and configures the
Agent. The installation instructions for this option per OS are included in the Guardicore UI.
“Online installation” allows installing the latest Agent version that is available in the repo on the
Guardicore Management without modifying automation scripts / packages.
OR
Offline installation: A package file is placed in the customers repo / automation platform, and
copied and installed on a target server by customer’s scripts.
“Offline installation” allows control of the deployed binaries within internal processes. It can be
also used in cases where the Aggregator can’t yet be reached but the project already wants to
deploy an Agent (leaving it “orphan” until an Aggregator is available). However, the
automation/package has to be maintained with new Agent releases.
Both methods can be automated for rollout at scale within common configuration
management tools.
For AIX installation, only online installation is available using wget. For Solaris installation, both
online and offline using local files are available.
Installation Script
2. Open the Windows command prompt with administrative privileges and run the installer
with a minimal set of 2 parameters as follows:
windows_installer.exe /a <aggregator_FQDN> /p
"<agent_installation_passphrase>" /installation-profile default
/offline - Installs the Agent using the version which is provided within the installer
(network connectivity to the Aggregator is not required)
/path - Set custom installation path for the Agent program files.
/data-path - Set custom installation path for the Agent data files (certificates, log files,
configuration and storage).
/logging-profile - Set the logging rotation profile for the Agent ('min', 'max' or 'medium').
/labels - List of labels in the form of key1:value1,key2:value2 for labeling the agent
instance
Expected Result
The following lines should appear in the installation script output when the installation is
completed successfully:
2. Wrong installation password - the following error will be shown in the end of the
installation script output when a wrong agent installation password is used:
Installation aborted due to authentication error while
downloading package, check if the password is correct.
3. The script was run with low privileges - an administrative password prompt will be shown
in case the script is run with low privileges.
Installation Script
1. Fetch the Windows installer exe from the Guardicore technical platform owner (to be
either downloaded from the Management server internal repo, or from the Guardicore
customer portal).
3. Open the Windows command prompt with administrative privileges and run the installer
with a minimal set of 5 parameters as follows:
GuardicoreAgentSetup.exe /q /offline /a <aggregator_FQDN> /p
"<agent_installation_passphrase>"
Note: Before running a programmatic offline installation, it is recommended to validate that the
latest version of the Agent is not already installed. To do so, query the content of the registry key
HKLM\SOFTWARE\GuardiCore\Version and compare it to the version of the Agent that is
about to be installed.
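For example, a minimal sketch of that check from an elevated command prompt (whether Version is stored as a value under the GuardiCore key or as a subkey is an assumption here; adjust accordingly):
reg query "HKLM\SOFTWARE\GuardiCore" /v Version
rem or, if Version is a subkey rather than a value:
reg query "HKLM\SOFTWARE\GuardiCore\Version"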
Expected Result
The following lines should appear in the installation script output when the installation is
completed successfully:
The Guardicore Agent Service service was started successfully.
Installing Agent UI
Installation completed 2020-02-10 07:56:05.768 UTC
Removing temporary installer files
Exiting installer 2020-02-10 07:56:05.768 UTC
a. On Windows server 2008 or 2008R2, if .NET 3.5 is not yet enabled, the Guardicore
installer will enable it.
b. On Windows server 2003 and older or Windows server 2012 and newer
Guardicore installer does not impact any existing .NET settings.
c. On Windows Desktop operating system Guardicore installer does not impact any
existing .NET settings.
Note - .NET 3.5 is always included in the OS. Guardicore does not install it, but only
enables it.
2. On Windows server 2003 and older, the Guardicore installer installs a Windows
dependency called KMDF. KMDF will not be installed on any OS newer than Windows
Server 2003.
Expected result when the Aggregator is accessible and the Agent is successfully installed:
* Service reveal-channel [Up]
* Service reveal [Up]
* Service enforcement-channel [Up]
* Service enforcement [Up]
* Service controller [Up]
The Agent should now appear in the Guardicore Centra UI (in the Administration/Agents
screen), and can now be managed centrally.
Expected result when the Aggregator is not accessible, but the Agent is successfully installed:
* Service reveal-channel [Down]
* Service reveal [Up]
* Service enforcement-channel [Down]
* Service enforcement [Up]
* Service controller [Up]
In case of a different result, the following common troubleshooting steps are recommended:
sc query "GC-AGENTS-SERVICE"
Expected result:
SERVICE_NAME: GC-AGENTS-SERVICE
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, NOT_PAUSABLE,
ACCEPTS_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
1. To validate the Agent is connected to the right Aggregator, the following line should appear
in the log file
C:\ProgramData\Guardicore\logs\gc-controller.log:
When the Aggregator is not accessible, the following line will appear:
2019-06-04 11:38:23,278749 [3068:MESSAGE] [channel] resolving
address <aggregator fqdn>:443
Note: <aggregator IP> is the IP address that was resolved for <aggregator fqdn>
To validate that the installation password used was correct (once the Agent connects to
the Aggregator), check that certificates were successfully acquired:
The following line should appear at the end of the log file
D:\ProgramData\Guardicore\logs\gc-cert-client.log:
When the Agent installation passphrase is incorrect, the following line should appear in the
log file:
In case the latest version of the Agent is already installed, re-running the installation process will
uninstall the Agent and re-install it.
It is possible to modify the binaries and configuration paths - see Customization Options.
Requirements:
● Windows 2008 and above with minimum Powershell version 3.0
● Centra v31 and above.
● Active WinRM service at the endpoints.
Preparation/Prerequisites
Run the script with a high-privilege user (such as a domain admin).
You’ll be prompted to enter user credentials.
Place “nodes.txt” in the same directory as the script, or provide a path to an endpoints file in txt
format (supported only when running from the command line). This file should contain a list of the
nodes you wish to install the agents on; hostnames or IPs can be used.
Tip / Important Note:
If endpoints are members of different domains and there is no trust between the domains,
consider separating the script runs, each time using the current domain admin’s credentials.
Execution:
● Examples when running from command line:
.\Agent Deployment.ps1 FQDN.com PASS default
.\Agent Deployment.ps1 FQDN.com PASS default -path C:\endpoints.txt
Actions:
The script will perform the following actions:
1. Provide a ping test to all servers from the endpoints file and print the results to the
console.
2. Connect to each machine and do the following:
● Download the agent installation package from the Aggregators Cluster.
● Install the agent and wait for the installation to finish.
● Check and print whether installation passes successfully or fails. For failed
node installations, logs are retrieved from the endpoints remotely and
placed under an “Endpoints Logs” directory which will be created in the script
directory. Retrieval of installation logs is done using the SMB
administrative share, so please make sure that the firewall (or local firewall)
doesn’t block this connection.
● Provide Agent’s services status for successful installations.
3. The script writes the records of all run operations within a log file called:
“gc_windows_agents_deployer.log“
and will be located in the same directory that the script was executed from.
Note: The following instructions apply to CentOS-based flavors - CentOS, RHEL, OracleLinux,
SUSE, Amazon Linux, etc. Contact Guardicore for Ubuntu or Debian instructions.
Installation Script
This is the template of the Agent installation script that should be run with a user that has sudo
permissions:
export UI_UM_PASSWORD='<agent_installation_passphrase>'
export GC_PROFILE='default'
wget https://<aggregator_FQDN>/guardicore-cas-chain-file.pem --no-check-certificate -O /tmp/guardicore_cas_chain_file.pem
# expected checksum <certificate checksum>
SHA256SUM_VALUE=`sha256sum /tmp/guardicore_cas_chain_file.pem | awk '{print $1;}'`
export INSTALLATION_CMD='wget --ca-certificate /tmp/guardicore_cas_chain_file.pem -O- https://<aggregator_FQDN> | sudo -E bash'
if [ $SHA256SUM_VALUE == <certificate checksum> ]; then eval $INSTALLATION_CMD; else echo "Certificate checksum mismatch error"; fi
Parameters
Expected Result
The following lines should appear in the installation script output when the installation is
completed successfully:
Mon Feb 10 09:29:17 IST 2020 [*] Successfully downloaded reveal kernel modules
...
Mon Feb 10 09:29:17 IST 2020 [*] Successfully downloaded enforcement kernel modules
...
Mon Feb 10 09:29:18 IST 2020 [*] Package installation done!
...
Mon Feb 10 09:29:19 IST 2020 [*] Guardicore agent installed successfully
Wrong installation password - the following error will be shown at the end of the installation
script output when a wrong Agent installation password is used:
Mon Feb 10 08:16:41 IST 2020 [*] 'curl' command failed health-check and connectivity
test to aggregator, probably wrong password (12345678)
Mon Feb 10 08:16:41 IST 2020 [*] Deleting temporary CA file:
/tmp/guardicore_cas_chain_file.pem
Mon Feb 10 08:16:41 IST 2020 [*] Installation failed
Wrong certificate checksum - the following error will be shown in the end of the installation script
output when a wrong certificate checksum is used:
The script was run with low privileges - the following error will be shown in the end of the
installation script output when the script is run with low privileges:
Mon Feb 10 09:25:39 IST 2020 [*] Not running as root (Installation must be
executed from root user)
Unsupported OS version - the following error will be shown in the end of the installation script
output when the script runs on an unsupported OS version:
Mon Feb 10 08:53:57 IST 2020 [*] Checking agents support for this machine
Mon Feb 10 08:53:57 IST 2020 [*] Guardicore agent support for <OS
version name> is missing: Package not found
Mon Feb 10 08:53:57 IST 2020 [*] Contact Guardicore at
support@guardicore.com for more information
Mon Feb 10 08:53:57 IST 2020 [*] Uploading log file (32493 bytes) to server on
172.16.8.1:443
Mon Feb 10 08:53:57 IST 2020 [*] End of installation procedure (status:
no_ga_pkg_support)
Mon Feb 10 08:53:57 IST 2020 [*] Deleting temporary CA file:
/tmp/guardicore_cas_chain_file.pem
Mon Feb 10 08:53:57 IST 2020 [*] Installation failed
Contact Guardicore support to get Agent support for the necessary OS version.
Unsupported kernel version - the following error will be shown in the end of the installation script
output when the script runs on an unsupported Linux kernel version:
Mon Feb 10 09:02:51 IST 2020 [*] Failed to download reveal kernel modules
...
Mon Feb 10 09:02:51 IST 2020 [*] Failed to download enforcement kernel modules
Integrate the Management instance with the Guardicore KO SaaS, or contact Guardicore support
to acquire kernel modules for the necessary kernel version (this usually takes up to two days).
Note that the installation is to be considered as completed successfully, as the KO module can be
published to the Agent remotely.
Installation Script
1. Fetch the Agent installation package RPM from the Guardicore technical platform owner
(to be either downloaded from the Management server internal repo, or from the
Guardicore customer portal).
3. In case an old Agent is already installed on server, remove the Agent and log files in
advance to reduce the footprint on the server:
gc-agent uninstall
rm -f /var/log/gc-*log*
4. Execute:
export IS_OFFLINE_PACKAGE=true
export UI_UM_PASSWORD='<agent installation passphrase>'
export SSL_SERVER="<aggregator fqdn>"
<rpm/deb> -i /tmp/<package_name>
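The <rpm/deb> placeholder stands for the distribution’s package tool; for example (the package file name remains a placeholder):
rpm -i /tmp/<package_name>.rpm     # RPM-based distributions (RHEL, CentOS, SUSE, Amazon Linux)
dpkg -i /tmp/<package_name>.deb    # Debian-based distributions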
Variable - Description
export GC_LOGGING_PROFILE=min - Set the logging rotation profile for the Agent ('min',
'max' or 'medium').
The Agent should now appear In the Guardicore Centra UI (Administration/Agents), and can be
managed centrally.
Expected result when the Aggregator is not accessible but the Agent is successfully installed:
* Service 'reveal' [Up]:
* Service 'reveal-channel' [Down]:
* Service 'enforcement' [Up]:
* Service 'enforcement-channel' [Down]:
* Service 'controller' [Up]:
In case of a different result, the following common troubleshooting steps are recommended -
1. To validate that the Agent is connected to the right Aggregator, run the following:
gc-agent system-status
This is the result when the Aggregator is accessible and the password is correct:
2. To validate that the installation password that was used was correct (once the Agent
connects to the Aggregator), check that certificates were successfully acquired:
When the Agent installation passphrase is incorrect, the following line should appear in the
log file /var/log/gc-cert-client.log:
gc-agent version
Validation of kernel version - once the Agent is connected to the Guardicore Aggregator, the
Guardicore technical platform owner will validate that the Agent successfully fetched the
required KO (kernel object) module. The Guardicore UI enables this validation, as well as proactive
search of the supported Kernel versions.
In case of a missing KO - integrate the Management instance with the Guardicore KO SaaS, or
contact Guardicore support to acquire kernel modules for the necessary kernel version (this
usually takes up to 2 days). Note that the installation is considered as completed successfully, as
the KO module can be published to the Agent remotely.
In case the latest version of the Agent is already installed, re-running the installation process will
only perform Agent certificate re-enrollment. This will not cause an Agent restart and will take
effect only after restarting the Agent.
For Agents installed in offline installation, it is required to uninstall the Agent and then re-run the
offline installation process as described above.
It is possible to modify the binaries and configuration paths - see Customization Options.
Note: Agent uninstallation does not remove the installed IPF package. It should be
removed manually.
You can use this link to download GNU and open source tools for AIX.
By default, IPF loads persistent configuration into memory, typically from the file
/etc/ipf.conf. As the Agent starts and receives the latest policy from Centra, the Agent
converts the policy to IPF rules and overrides the existing IPF rules in memory, thus enforcing only
the rules received from Centra policy.
You can dump your current IPF configuration to the persistent IPF configuration file by running
the following command:
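The command itself does not appear in this copy of the guide; a common IPFilter way to dump the currently loaded rules into the persistent file is shown below as an assumption and should be verified before use:
ipfstat -io > /etc/ipf.conf    # print the in-memory input/output IPF rules and save them to the persistent file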
It is highly recommended to make sure that all the rules are being saved in the persistent
configuration file.
After Agent uninstallation, run the following command to remove existing rules and load the
previous IPF configuration: ipf -Fa -f /etc/ipf.conf
IPFilter Installation
The Agent installation procedure attempts to install an IPFilter package (version 4.1.13), if there is
no IPFilter package installed on the server. The package will not be removed in case of Agent
uninstallation. You can also download and install the IPFilter package manually by downloading
the package from IBM repository.
If the Agent is uninstalled all processes are stopped and the start/stop script (/etc/init.d/gc-agent)
will also be removed.
export UI_UM_PASSWORD='<agent_installation_passphrase>'
wget https://<Aggregator_IP_OR_FQDN>/guardicore-cas-chain-file.pem --no-check-certificate -O /tmp/guardicore_cas_chain_file.pem --sslcheckcert=0
# expected checksum {certificate checksum}
SHA256SUM_VALUE=`openssl dgst -sha256 /tmp/guardicore_cas_chain_file.pem | awk '{print $2}'`
export INSTALLATION_CMD='wget --sslcafile=/tmp/guardicore_cas_chain_file.pem -O- https://{Aggregator_IP_OR_FQDN} | bash'
if [ $SHA256SUM_VALUE == {certificate checksum} ]; then eval $INSTALLATION_CMD; else echo "{sha256_mismatch}"; fi
The instructions can be found also in the Agent Installation Instructions page in the Centra UI.
AIX Upgrade
The usual upgrade procedure is to uninstall and then re-install the Agent. To uninstall the Agent
see the following section.
<certificate checksum>
1. In the management UI, open Administration -> Agents -> Agent Installation Instructions.
2. Look for “AIX”.
3. Click the Select Aggregator Server button and choose one of the Aggregators.
A script will appear that shows the expected certificate checksum value.
export GC_LOGGING_PROFILE=medium Set the logging rotation profile for the Agent ('min',
'max' or 'medium').
This profile cannot be changed after the
installation. (Default is medium)
Uninstall
In order to uninstall the Agent, run the command gc-agent uninstall.
The following files are left on the server after uninstallation:
File Description
IP Filter Package
1. POSIX compatible tools (sed, grep, etc) under the directory /usr/xpg4/bin/
3. For Agent installation using the Online Installation Script, wget version 1.12 or newer
located in /usr/sfw/bin/wget is required.
Note - if this requirement is missing, install the agent using the offline installation with local
files.
4. On Solaris version 11.3 and below, it is required to have IP Filtering (IPF) version
4.1.9 or newer installed. See IPFilter below for more information.
5. On Solaris 11.4, Packet Filter (PF) firewall should be installed but disabled. See Packet
Filter below for more information.
Note - This is required for having the Enforcement module of the agent installed and running.
Installing the Agent with the Enforcement modules disabled allows agent installation without
disabling PF.
By default, IPF loads persistent configuration into memory, typically from the file
/etc/ipf/ipf.conf. As the Agent starts and receives the latest policy from Centra, the Agent
converts the policy to IPF rules and overrides the existing IPF rules in memory, thus enforcing only
the rules received from Centra policy.
● If the firewall is not disabled, the installation will stop and the following message is
displayed: “Solaris Firewall is enabled, disable firewall to install enforcement agent”
● To bypass this requirement, see Installing the Agent with the Enforcement Modules
disabled.
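As a sketch only (the SMF service name is an assumption and should be verified on the target system), PF on Solaris 11.4 is typically checked and disabled as follows:
svcs svc:/network/firewall:default             # check whether the PF firewall service is online
svcadm disable svc:/network/firewall:default   # disable it before installing the Enforcement module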
Agent installation and functionality varies according to the configuration of the Solaris Zone:
Global Zone
An Agent can run on a global zone with shared IP to provide L7 visibility and L4 enforcement for
the global zone, and L4 visibility and enforcement for all its shared-IP non-global zones. In this
case, the global zone and all its shared-IP non-global zones will be treated as a single entity
("Asset") in the system.
Shared-IP Non-Global Zone
A shared-IP zone is a non-global zone that shares the IP state and configurations with the global
zone. In this configuration, it is not possible to install the Agent inside the shared-ip zone because
it cannot run the native Solaris IP Filtering (IPF) module, which belongs to the global zone only.
Shared-IP Non-Global Zones will therefore be treated as a single entity with their Global zone - in
case the Global zone has an Agent installed, L4 visibility and L4 enforcement will be enabled also
for its Shared-IP zones.
export UI_UM_PASSWORD='<agent_installation_passphrase>'
/usr/sfw/bin/wget https://<Aggregator_IP_OR_FQDN>/guardicore-cas-chain-file.pem --no-check-certificate -O /tmp/guardicore_cas_chain_file.pem
/usr/sfw/bin/wget --no-check-certificate https://<Aggregator_IP_OR_FQDN> -O- | bash
The instructions can be found also in the Agent Installation Instructions page in Centra UI.
To customize Agent installation for Linux use these environment variables before launching the
RPM:
export GC_LOGGING_PROFILE=medium Set the logging rotation profile for the Agent ('min',
'max' or 'medium').
This profile cannot be changed after the
installation. (Default is medium)
2. From a server with accessibility to the Aggregator and curl installed, download the
Aggregator certificate chain:
curl -s -k -o guardicore_cas_chain_file.pem https://<Aggregator_IP>/guardicore-cas-chain-file.pem
3. Download Solaris x86_64 and SPARCv9 agent packages from the Guardicore customer
portal. You should download the following two files:
a. gc-guest-agent-polling-sunos-sparcv9.pkg.gz
b. gc-guest-agent-polling-sunos-x86_64.pkg.gz
3. Depending on the CPU architecture of the Solaris server, upload one of the agent
installation packages. Run isainfo -k to decide which package to upload in the
following way:
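As a short illustration of this check (based on the standard values printed by isainfo):
isainfo -k
# amd64   -> upload gc-guest-agent-polling-sunos-x86_64.pkg.gz
# sparcv9 -> upload gc-guest-agent-polling-sunos-sparcv9.pkg.gz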
5. Run
export GC_SOLARIS_INSTALL_PKG_FILE=<agent_installation_package>.pkg
6. Run solaris_installation.sh
Note - The Aggregator has to be accessible in order for the installation to succeed
export DISABLE_ENFORCEMENT="true"
Uninstall
In order to uninstall the Agent run the command gc-agent uninstall, and reply yes twice to
the question "Do you want to remove this package?" that is prompted for each of the Agent’s
packages.
File Description
Changing these values requires Guardicore to provide a customized package for the customer.
Prerequisites
● Install an Agent on each Docker container host machine.
● Run the following commands on each machine:
○ gc-config -s GC_CONTAINER_MODE="native" && gc-agent restart
○ gc-runc-install -install
● Default Grouping
○ The container fields used for grouping in the Reveal map.
○ Most preferably set to image_name, container_names
● Allowed Docker Label Prefixes
○ The Docker labels who’s connections will be filtered into Centra.
○ Labels that don’t match any of these prefixes will be dropped (separated by new
lines).
● To add Docker Containers into Centra labels, go to Centra UI > Reveal > Labels:
○ Create a label
○ Add a dynamic criterion of your choosing, for example:
Prerequisites:
Firewall Requirements
● Aggregator -> K8s api-server (usually to destination port 6443)
○ Where applicable also: Aggregator -> K8s api authenticator
● K8s api-server -> Aggregator:443
● Every node in the cluster on which an Agent will be installed -> Aggregator:443
● Every node in the cluster on which an Agent will be installed -> Guardicore's GCR, if
using the recommended installation procedure with Guardicore's GCR to obtain the
Docker images.
● Save this info for later use when configuring the Kubernetes Orchestration in Centra.
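The binding in step 4 below assumes that a guardicore-orch namespace, a gc-reader service account, and a gc-cluster-reader cluster role already exist. A minimal sketch for creating the namespace and service account, assuming those names:
# kubectl create namespace guardicore-orch
# kubectl create serviceaccount gc-reader --namespace=guardicore-orch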
4. Bind the cluster role ‘cluster-reader’ to the newly created service account:
# cat <<EOF >>./clusterrolereaderbinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gc-cluster-reader-role-binding
subjects:
- kind: ServiceAccount
  name: gc-reader
  namespace: guardicore-orch
roleRef:
  kind: ClusterRole
  name: gc-cluster-reader
  apiGroup: rbac.authorization.k8s.io
EOF
# kubectl create -f ./clusterrolereaderbinding.yaml
5. Get the token associated with the service account and save a copy to use later (note name
of gc-reader-token may differ):
# kubectl get secrets --namespace=guardicore-orch | grep gc-reader
gc-reader-token-lg9tr kubernetes.io/service-account-token 3 3d
# kubectl describe secret gc-reader-token-lg9tr --namespace=guardicore-orch | grep "token:"
token:
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZ
XJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3Bh...ByWg
a. Openshift:
You may receive multiple secret tokens, pick one.
6. Get the decoded cluster certificate and save a copy to use later:
a. Kubernetes:
kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' |
base64 --decode
b. GKE/EKS:
CA Certificate can be found under Kubernetes -> Clusters -> Show Credentials
c. OpenShift:
Copy the cluster CA certificate.
d. OpenShift 3.11:
oc config view --raw | grep certificate-authority-data | cut -f2- -d: | xargs | base64 -d
Agent Deployment
The Guardicore components are pushed to the cluster in a dedicated ‘guardicore’ namespace,
including the Agent that is deployed as a Daemonset, as well as the Guardicore webhook for the
admission control process.
A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the
cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage
collected. Deleting a DaemonSet will clean up the Pods it created.
When planning the Agent deployment, take into account that part of the deployment requires
collecting information about all currently running pods. The deployment also patches all existing
K8s controllers, injecting a gc-init-container requirement into each of them, which results in a pod
restart.
Also be aware that Guardicore's Agent runs as a highly privileged pod that injects a kernel module
(KO) into the host.
Agent deployment is possible in 3 ways: Helm-based, online, and offline. For all deployment
options the K8s cluster must be able to access the Google Container Registry (GCR) from each
of the cluster's nodes in order to pull the Guardicore Agent and webhook images.
*For on-prem deployments with no internet connectivity, private and public Container registries
are required; the Docker images must be loaded into them, or loaded manually onto the nodes.
**Using only a private Container registry without a public Container registry is currently only
supported for ‘Helm’ installation (not ‘Online Installation’).
We strongly recommend installing the Agents using Helm unless there is a good reason not to.
Make sure Helm is installed on the same console used to deploy the Agent, i.e. where kubectl is
used.
If customers can’t access Guardicore’s GCR, using the customer’s Container registries is supported
and requires additional files and configuration of the Management server, please contact
Guardicore support for further instructions.
Container registries deployed on storage Pods in the same cluster in which the Guardicore Agents
are being installed are not supported.
It is important to understand how the Nodes authenticate against the Container Registry, if
each Node is authenticated / each Namespace / each Service Account / other method.
If Helm deployment is used, using only a private Container registry is supported; for the other
installation procedures both private and public registries must be used.
After the images are pushed, you may run the preferred deployment method while changing the
Container registry address(es); if only private registries are used for Helm deployment, apply the
necessary changes described in Step 4.c.
In order to support this, you will first need to download and push the integration images into your
registry:
Image: gc-guest-agent_<version>   Registry: Private
Use the following commands for both private and public registries:
1. Download the container images from our Customer Portal (contact Guardicore support)
2. Skip this authentication step when it is not needed.
Connect to a machine that has a local Docker registry and is authenticated with the
customer's private registry (using the docker login command).
a. For example, to log in to the Google registry run:
# docker login -u _json_key -p "$(cat gcr_key.json)" https://gcr.io
3. Upload the files downloaded in Step 1 to the machine & verify checksum (SHA256) of files.
4. Load the images files to the machine’s local registry (change image version if needed):
# docker load -i gc-admission-webhook_v5.39.21165.tgz
...
Loaded image: gc-admission-webhook:v5.39.21165
# docker load -i gc-deployment_v5.39.21165.tgz
...
Loaded image: gc-deployment:v5.39.21165
# docker load -i gc-init-container_v5.39.21165.tgz
...
Loaded image: gc-init-container:v5.39.21165
# docker load -i gc-guest-agent_v5.39.21165.2517.tgz
...
Loaded image: gcr.io/guardicore-28070656/gc-guest-agent:v5.39.21165.2517
5. Verify the images are loaded correctly:
# docker images
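Before pushing, the loaded images typically need to be re-tagged to point at the customer's registry; a minimal sketch, where the registry address and image versions are placeholders (repeat for each image):
# docker tag gc-admission-webhook:v5.39.21165 <customer-registry>/gc-admission-webhook:v5.39.21165
# docker tag gcr.io/guardicore-28070656/gc-guest-agent:v5.39.21165.2517 <customer-registry>/gc-guest-agent:v5.39.21165.2517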
8. Push the tagged images to customer’s private registry (change <customer-registry> and
image version)
# docker push <customer-registry>/gc-admission-webhook:v5.39.21165
The push refers to repository [<customer-registry>/gc-admission-webhook]
…
v5.39.21165: digest: sha256:744ab179d89a6a2208e2178c79bd7542e8ec3ae8d728a256a918d8d2e353d1b1 size: 947
Helm deployment
Prerequisites:
Procedure:
ii.Make sure for authentication against your private registry you use an image
pull secret file named “docker_registry_auth.json” placed in your working
directory.
1. If you are already authenticated, create a file named
“docker_registry_auth.json” containing in it open and closed curly
brackets: {}
5. Run the Helm commands (a sketch follows the validation steps below).
*Add --debug at the end of your Helm command to get verbose output
6. Validation:
a. # helm list
Should get an object named gc-app
b. # kubectl get all -n guardicore
Should get multiple GC K8s objects
c. Validate Agents in the Centra UI
i. Agents on the cluster nodes in the Agents Page
ii. Pod traffic in Network Log
iii. Pods in the Reveal maps
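For reference, a minimal sketch of what the Helm commands from step 5 typically look like; the repository URL and chart name are placeholders taken from the Centra UI instructions, and the gc-repo/gc-app names match those used in the uninstall section below:
# helm repo add gc-repo https://<repository-url-from-centra-ui>
# helm install gc-app gc-repo/<chart-name> --namespace guardicore --create-namespace --debug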
Online deployment
Prerequisites:
5. Copy the exports, wget certificate, and helm commands from Centra to the console you
will be running the commands from.
*Remove and do not run the wget to retrieve the docker_registry_auth.json from the
Aggregator.
a. Make sure you received a docker_registry_auth.json file from Guardicore support
and you place it in your working directory.
b. Make sure the SSL_SERVER and SSL_PORT are the Aggregator’s Agent facing IP &
Port.
c. If using the customer’s Container registry:
i. Change the DOCKER_PRIVATE_REGISTRY_ADDRESS and
DOCKER_PUBLIC_REGISTRY_ADDRESS to the customer’s private and
public registries.
(See additional instructions below under Custom Container Registry
Support)
ii. Make sure for authentication against your private registry you use an image
pull secret file named “docker_registry_auth.json” placed in your working
directory.
1. If you are already authenticated, create a file named
“docker_registry_auth.json” containing in it open and closed curly
brackets: {}
6. Run the commands.
Offline deployment
1. The difference between the online and offline deployments is that in the offline one the
.tgz file containing the deployment scripts can be manually downloaded and copied, as
opposed to directly downloading it from the Aggregator.
2. The offline option still requires cluster connectivity to the internet in order to download
the Guardicore container image.
3. The offline option allows you to review and edit the YAML files in order to customize the
deployment.
Helm uninstall
● In order to remove the agent and admission controller webhook run the following
command:
○ helm uninstall gc-app
■ If uninstalling helm fails due to pre-uninstall, run: helm uninstall gc-app
--no-hooks
○ helm repo remove gc-repo
Manual uninstall
1. In order to delete the Guardicore deployment from the cluster manually, use the following
procedure.
2. Make sure you have the deployment .tgz folder available. If not, it can be downloaded again
from the Aggregator as described in the deployment section.
3. Run the "gc_uninstall.sh" command in the folder. It will remove the Guardicore DaemonSet and
Admission webhook, and remove the orchestration user and permissions.
1. Create an Inventory file indicating the servers to which you are deploying Agents. See
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html for information on
the construction of an Ansible Inventory file.
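For reference, a minimal inventory sketch (host names and the aggr_ip variable are placeholders; the playbook below references aggr_ip):
[agents]
web-01.example.com
db-01.example.com ansible_host=10.0.0.12

[agents:vars]
aggr_ip=172.16.100.50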
---
- hosts: all
  gather_facts: False
  tasks:
    - name: Download certificate
      get_url:
        url: "https://{{ aggr_ip }}/guardicore-cas-chain-file.pem"
        dest: /tmp/guardicore_cas_chain_file.pem
        validate_certs: no
    - name: Install agent
(in the above command, replace AGGREGATOR_IP with the actual IP of the Aggregator, and
PASSWORD with the actual password).
1. Make sure that Guardicore Management and Aggregator Servers are fully
installed and configured.
3. Create a reachable network share for the SCCM packages and copy the install.bat to it.
NOTE: Steps 1+2 must be repeated following each Guardicore Centra system patch, before
installing new agents.
a. Provide a name for the program, such as Guardicore Agents, and specify the following command line:
windows_installer.exe /a "<IP1/FQDN1-AGG>:443,<IP2/FQDN2-AGG>:443,<IP3/FQDN3-AGG>:443" /p <password> /q > c:\windows\temp\agent_installation.log
Note: psexec can run on several machines from a txt list, with "@list_path.txt" instead of the
<remote_ip_to_install>
The Modules column displays the status of Agent modules. Modules that are installed appear as
blue icons. When reviewing the outcome of Agent deployment, use this view to make sure that all
the modules that are expected to be installed really are installed.
Even if modules appear as blue, indicating that they are installed, there may be problems with their
functioning due to the installation itself or to limitations of the OS on which they are installed:
● A red dot (Active with Errors) on a module icon of a newly deployed Agent indicates there
is an error that needs attention. The specific error will be listed as a flag raised for this
agent. For the list of all flags, please refer to Agent Flags.
○ An exception to this is the Polling mode flag on the Reveal module. The Polling
mode flag can sometimes be the expected outcome of a deployment, mostly on
Legacy OSs. In that case, this does not require further attention.
● A yellow dot on a module icon (Active with Partial Capabilities) indicates that the module is
functioning with only partial capabilities. Usually this is expected and caused by OS specific
limitations.
Hovering over the icon displays more information, as in the following figure:
Note that the Enforcement Module icon in the above picture displays a yellow dot; hovering the
mouse cursor over the icon reveals that the Enforcement module has L4 only enforcement. This is
the expected outcome of the Agent deployment on Windows 2003 servers.
The administrator can use a wide variety of filters to quickly check the health of Agent modules.
For example, using the Module limitations filter enables discovering which Agents have L4 only
enforcement as shown in the following figure:
Using the Aggregator filter, the administrator can narrow the list to only those Agents that were
deployed from a particular Aggregator as in the following figure:
Using a combination of filters enables the administrator to quickly review the deployed Agents
and discover those that are not functioning as expected and need additional attention.
Available Profiles
During Agent installation, you can choose one of the following three Log Rotation profiles: “min”,
“medium”, “max” for allocating storage space for Agent logs. The “medium” profile is considered to
be the default profile.
The type of profile determines the amount of debugging information that is collected and the time
span over which it is collected. The Min profile collects the least amount of information, while the
Max collects the most. Thus, the choice of profile may affect troubleshooting.
The following table describes the estimated storage required for each Log Rotation Profile:
Min: 90 MB   Medium: 220 MB   Max: 700 MB
For more detailed information on Log Rotation Profiles, see the Administration Guide.
4.1 Orchestrations
4.1.1 VMware Orchestration configuration (vCenter integration)
VirtualMachine
GuestNicInfo
GuestIpAddress
HostSystem
HostPortGroup
HostPortGroupPort
DistributedVirtualPort
DistributedVirtualPortgroup
DistributedVirtualSwitch
ComputeResource
VirtualNetwork
Folder
2. Log in with the admin user created during Management Server setup wizard. The Centra
Management screen appears.
3. At the upper right of the screen, click the icon to access Administration.
Type: vSphere
User name, password and vCenter IP address: Fill in a value for each.
6. Click Save.
Intro
Importing orchestration data helps you label your assets and build policies around them. Centra
enables you to import orchestration data from AWS. Centra's Aggregator connects to the AWS
API to pull metadata on Elastic Compute Cloud (EC2) workloads, VPC flow logs, and more. This
article explains how to configure AWS orchestration.
Preconditions:
In order to pull metadata from EC2, you must establish authentication between the Aggregator
and AWS. The authentication method depends on the location and permissions of the Aggregator.
EC2 IAM Role:
This is the recommended implementation if you have an Aggregator running under a VPC that
belongs to the account that you want to monitor. The role must have a policy attached with all
the required authorizations (see AWS Policy definition).
Guardicore Delegate Access:
This is the recommended implementation if you need to monitor multiple accounts. The assumed
role in these accounts must have a policy attached with all the required authorizations (see AWS
Policy definition).
Customer Credentials:
This is the only available option if the Aggregator is running outside the AWS environment. The
customer must create an IAM user with programmatic access only (Access/Secret Key); it does not
require console access. The user must have a policy attached with all the required authorizations
(see AWS Policy definition).
In order to authorize the queries that the orchestrator makes, you need to create a custom policy
or use a predefined AWS policy.
AWS provides a read-only policy, "AmazonEC2ReadOnlyAccess", that grants a superset of the
required permissions.
If you want to create a custom policy with the minimal required authorizations, you can use the
following JSON definition:
"Version": "2012-10-17",
"Statement": [
"Sid": "Orchestrator",
"Effect": "Allow",
"Action": [
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeRegions",
"ec2:DescribeInstances",
"ec2:DescribeImages",
"ec2:DescribeAvailabilityZones"
],
"Resource": "*"
The administrator uses the following section of the Add New Orchestration dialog box to
configure AWS authentication as explained in the sections below:
1. Verify that an AWS IAM role has been created. For instructions on how to create an AWS
IAM role, see the section at the end of this article, or refer to AWS documentation.
2. In the authentication method field, select EC2 IAM Role. There is no need to fill out any
other Authentication fields and you can proceed to the region name field.
Note: If you want to assume a different role, use the role arn field to type a new Amazon Resource
Name (ARN) of the role to assume.
Intro
Azure Orchestration enables you to complement the information provided by GuardiCore Agents.
For example, information coming from Azure orchestration may include Azure tags assigned to the
asset, and more. Find more information about Azure Tags here.
Intro
Importing orchestration data helps you label your assets and build policies around them. Centra
enables you to import orchestration data from GCP (Google Cloud Platform). When GCP
orchestration is configured, Centra's Aggregator connects to the GCP API to pull metadata on
GCP workloads.
Field: Value
Type: GCP
Service Account: Service account email of the account created in the previous section.
Project List: This option allows you to configure more than one project per orchestration. Add a comma-delimited list of project IDs.
Private Key: Paste the private key downloaded in the previous section.
Label Key Translation: Enables you to control the way imported labels appear in Centra. So, for example, you can specify that a tag such as OrchestrationAppName should be imported into Centra as App.
Labeling Strategy: This refers to how you want to import custom Tags into Centra. Three strategies are provided. Predefined: List the custom Tags to import into Centra. This is done by supplying a list of keys to import. Note: Labeling Strategy only affects custom tags that users created and does not affect the importation of metadata.
Predefined Labels: List the keys to import as labels. This only applies to custom tags.
Orchestration Full Report Interval: Number of seconds to elapse before another orchestration report is generated.
As with other Orchestrations, once you have configured the GCP orchestration you will be able to
see the information coming from the orchestration on your Assets page.
1. Create a user in IAM for the Centra system that will be calling the API, and provide the
user read-only access to the desired tenant(s).
1. RSA key pair in PEM format (minimum 2048 bits). See How to Generate an API
Signing Key.
2. Fingerprint of the public key. See How to Get the Key's Fingerprint.
3. Tenancy's OCID and user's OCID. See Where to Get the Tenancy's OCID and
User's OCID.
3. Upload the public key from the key pair in the Console. See How to Upload the Public Key.
4. Make sure you take note of the user OCID, key pair fingerprint, private key, tenancy
OCID and region. You will need these for the next step.
1. In Centra’s Administration menu select Data Center/Orchestrations and click the + Add
Orchestration button. The Add New Orchestration dialog box appears:
Field: Value
Type: OCI
User OCID: OCID of the user calling the API. See Step 1 above.
Key Pair Fingerprint: See Step 1 above for how to obtain the key pair fingerprint.
Private Key: Content of the private key in PEM format. See Step 1 above for how to obtain this.
Tenancy OCID: OCID for the tenancy. See Step 1 above for how to obtain this.
Region: OCI home region. See Regions and Availability Domains for more information.
Query All Regions: When checked, queries all regions to which the tenancy is subscribed. Customers that use more than one region can choose to query all regions, which enables the orchestration to pull information for assets in other regions as well.
As with other Orchestrations, once you have configured the OCI orchestration you will be able to
see the information coming from the orchestration on your Assets page.
Importing orchestration data helps you label your assets and build policies around them. Centra
enables you to import orchestration data from the OpenStack cloud operating system. When
OpenStack orchestration is configured, Centra pulls metadata from OpenStack and converts it
to Centra Labels.
3. Add a reader role for the Guardicore user and specify the domain/projects to be covered by
the orchestration. A 'reader' role should be configured by default as part of the OpenStack
deployment; if it is missing, contact the OpenStack administrator to create one. The
following CLI command applies to the whole domain:
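A minimal sketch of such a command, assuming the standard OpenStack CLI and a 'reader' role (the user and domain names are placeholders):
openstack role add --user <guardicore-user> --domain <domain-name> reader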
Basic Configuration
Field: Description
Admin User: The User Name for the Guardicore User created in Step 1 in OpenStack.
Admin Password: The User Password for the Guardicore User created in Step 1 in OpenStack.
Projects List: The project list to be covered by the orchestration. The list can be provided in 'project ID' or 'project name@domain' format, delimited by a new line.
Auth Url: The API public authentication endpoint. The endpoint can be discovered from the console (see the sketch after this table).
Predefined Labels: List of label keys to load from orchestration when the labeling strategy is set to Predefined.
Label Key Translation: A list of label keys to translate on import; each origin label key should be followed by -> and the target label key. For example, "OrchestrationAppName->App".
Advanced Configuration
This configuration is used to mitigate the performance impact on the OpenStack controller:
Field: Description
Full Port Pull Strategy: Strategy for full port pull (occurring every Orchestration Full Report Interval). AllPulledServersInBulk: Pull ports for all pulled servers, in bulk (bulk size is set by Ports Pull Bulk Size). For example, if bulk size is 50, then first, all ports for the first 50 servers will be pulled, then all ports for the next 50 servers, and so on.
Ports Pull Bulk Size: How many ports to pull in each bulk. Relevant only for AllPulledServersInBulk and NewServersOnlyInBulk modes.
Interval Between Ports Pulls: Sleep interval between per-server port pulls (in milliseconds).
Servers Pull Bulk Size: How many servers to pull in each bulk. Relevant for all modes. 0: pull all servers at once.
Interval Between Server Pulls: Sleep interval between each server bulk pull (in milliseconds).
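A minimal sketch for discovering the public authentication endpoint (Auth Url) from the console, assuming the OpenStack CLI is available:
openstack endpoint list --service identity --interface public -c URL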
4. Click Test Connection to verify the credentials. The test connects to the API endpoint and
checks connectivity using the nova client (list-servers) and the neutron client (list-networks).
API Commands
Configuring orchestration of the Active Directory with Centra is required for creating Centra User
Groups. For more information about this feature enabling user-based segmentation, refer to the
User Groups article in the Admin Guide.
1. In the Centra Administration menu, select Data Center/Orchestration, and click the
+ Add Orchestration button. The Add New Orchestration dialog box appears.
2. In the Add New Orchestration dialog box, for Type, select Active Directory:
Field: Description
Name: The name that you want to use to identify the AD orchestration.
GC Cluster: The Guardicore Cluster that you want to use for the AD orchestration.
Domain Name: The domain name of the organization for which you are configuring the AD orchestration. This is the root domain of the entire AD tree hierarchy. For a detailed understanding of AD structure and domains, see Active Directory Structure and Storage Technologies.
Login Username: The user logon name according to the userPrincipalName (UPN) format for the Active Directory, as explained in User Naming Attributes. A UPN consists of a UPN prefix (the user account name) and a UPN suffix (a DNS domain name).
Base DN (optional): The section of the directory where the application will commence searching for Users and Groups. For users to be found in an application, they must be located underneath the base DN. The Base DN speeds up the search for users.
Use SSL: Select to use SSL for the orchestration. To use this mode, make sure that Active Directory Certificate Services are enabled, or use Insecure mode.
This API enables customers to easily add a large amount of asset information to Centra, using
REST API calls to Aggregators (unlike the REST API that calls Management). Once enabled,
customers' scripts and automations will be able to create and name new Centra assets (even
without Agents) and add labels to existing assets in a distributed fashion. By replacing IP
addresses of agentless workloads with real asset names, customers get more context when
browsing Reveal maps and building segmentation policies. An asset added through this API will
appear on the Assets page with Inventory API in the Orchestration column.
● The user wants to report workload labels from the workloads themselves (for example,
using Chef recipes) and these labels might change. These workloads can be with or
without Agents.
Creating assets using this method results in an improved experience for the customer:
● Users can replace "unknown IPs" with labeled assets, instead of using labels.
● System performance is better when using assets instead of dynamic IP criteria.
How it works
An automation tool, running on the customer premises, calls a REST API method on the
Aggregators. This call contains specific asset parameters: name, IP and more. The Aggregator
then reports these assets to Centra, where they'll appear as if they arrived from a regular
orchestration. As with other orchestrations, these reports are merged with asset information
from other sources (other orchestrations and Agent information), so it's safe to report asset
information, regardless of its coverage by other orchestration engines.
1. From Administration go to Data Center > Orchestrations and select InventoryAPI. The
Add New Orchestration dialog box appears:
Field: Value
Type: InventoryAPI
The created users are not related to Centra users in any way; Centra credentials cannot be used as REST API credentials or vice versa.
Report Expiration: Number of seconds an asset is considered "on" after the user has last reported it to orchestration.
Labeling Strategy: This refers to how you want to import custom Tags into Centra. Three strategies are provided. Predefined: List the custom Tags to import into Centra. This is done by supplying a list of keys to import. Note: Labeling Strategy only affects custom tags that users create and does not affect the importation of metadata.
Predefined Labels: List the keys to import as labels. This only applies to custom tags.
Orchestration Full Report Interval: Number of seconds to elapse before running another full report.
Label Key Translation: Enables you to control the way imported labels appear in Centra. So, for example, you can specify that a tag such as OrchestrationAppName should be imported into Centra as App.
3. To create a Centra asset, call the REST API method on any of the Aggregators in the
defined cluster. The REST API call can contain information about one or more assets.
Each asset should contain the following information:
Asset ID: A unique ID for this asset. This unique ID must be created by the customer automation, and must be reused when reporting the same asset on subsequent calls.
Asset name: This name will appear in Centra's Reveal maps and asset views.
BIOS UUID: The asset's BIOS UUID. Necessary in case the asset has an Agent installed on it (during report time or in the future). See below for ways to get this value.
NICs: The asset's network interfaces (MAC address and IP addresses). If only IPs are present, it is possible to send only one dictionary with the IP addresses: { "addresses": <list of IPv4 and IPv6 IPs> }. Note: If the MAC data is missing, the BIOS UUID must be given in case the asset has an Agent installed on it (during report time or in the future).
Metadata: Optional parameters which will be attached to the asset and reported to the management console.
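A common way to retrieve the BIOS UUID directly on the workload (a sketch; the exact command depends on the OS):
# Linux (requires root):
dmidecode -s system-uuid
# Windows (command prompt):
wmic csproduct get uuid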
An asset added through Inventory API will be displayed on the Assets page with Inventory API
in the Orchestration column:
The Aggregator serves the REST API from the same server and certificate as the "Guest Installer"
HTTPS interface (which is used for Agent installation script download). If an FQDN is used, it can
be used for these REST API calls as well (with proper certificate usage).
5. In the request body, put the asset information as described above. For example:
"assets":[
"id":"422F81AE-781B-4823-F1FD-7E51093BF316",
"bios-uuid":"422F81AE-781B-4823-F1FD-7E51093BF312",
"name":"lin-lin-Agent20",
"addresses":[
"172.17.2.52",
"100.100.102.52",
"200.200.202.52"
],
"labels":[
"key":"Role",
"value":"Server"
},
"key":"Deployment",
"value":"API"
"name":"lin-lin-Agent20", "addresses":["172.17.2.52",
"100.100.102.52", "200.200.202.52"], "labels": [{"key": "Role",
"value": "Server"}, {"key": "Deployment", "value": "API"}]}]}' -u
gc-api:password -H "Content-Type: application/json" -X POST
https://172.16.100.50/api/v1.0/assets
Alternatively, NIC information (MAC address and IP addresses) can be reported explicitly. For example:
{
"assets":[
{
"id":"422F81AE-781B-4823-F1FD-7E51093BF316",
"bios-uuid":"422F81AE-781B-4823-F1FD-7E51093BF312",
"name":"lin-lin-Agent20",
"nics": [{
"mac_address": "00:21:56:9d:03:89",
"addresses": ["100.101.102.106", "200.201.202.206"]}
],
"labels":[
{
"key":"Role",
"value":"Server"
},
{
"key":"Deployment",
"value":"API"
}
]
}
]
}
Limitations
● If you report an asset without a BIOS UUID, a subsequent report by an Agent will not be
matched to this asset. The management server does not match assets reported through
this orchestration with Agent information according to IP address. At the moment there is
no way to report AWS instance ID or other matching parameters - you must use IP & BIOS
UUID.
● Assets reported just once using the Inventory API will eventually expire; there is no way to
report assets which will stay 'indefinitely'; the REST API method must be repeatedly called
to keep the asset as "On”.
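Because assets expire, the report typically needs to be re-sent on a schedule. A minimal sketch using cron, assuming the request body from the example above is stored in a file and the credentials and Aggregator address are placeholders:
*/10 * * * * curl -s -k -u gc-api:<password> -H "Content-Type: application/json" -X POST -d @/opt/guardicore/assets.json https://<Aggregator_IP_OR_FQDN>/api/v1.0/assets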
Syslog is a common format for message logging. The administrator uses the Add New Syslog
integration dialog box to configure Syslog (as described below), and can configure multiple hosts
for Syslog by using the dialog box repeatedly. Each time a Syslog Integration is configured, the
configuration is added as a row in the Syslog Integration screen:
Events Syslog Exporter: enables you to export a wide range of data to Syslog including incidents,
system alerts, Agent and Audit logs, messages, etc.
Network Log Syslog Exporter: enables exporting the Network log which provides data on
connections including type of connection, how Centra handled the connection, time of connection,
as well as detailed source and destination information. To enable the Network Log Syslog
Exporter, your administrator must execute a few CLI commands (see Enabling the Network Log
Reporter).
The administrator can configure the incidents to be exported to Syslog by performing the
following:
2. Click the + Add syslog Integration button to display the following dialog
box:
3. Select either Events Syslog Exporter or Network Log Syslog Exporter and complete the
fields as explained below:
If you selected Events Syslog Exporter, the following dialog box appears:
Field: Explanation
Type: Events Syslog Exporter appears here if you selected it in the Add New Syslog Integration dialog box in step 2 above.
Connection Options
Syslog Port: Different servers might require different ports (syslog over UDP usually uses port 514).
Export through Aggregators: In some SaaS deployments, in order not to open extra ports, it is possible to configure the Aggregators to export the syslog to the syslog server. If this feature is enabled, you must also enable the Cluster Exporter in the Aggregator screen (from Components/Aggregator select the Aggregator, then select the More button, Override Configuration, Show Advanced Options; under Advanced Options, select Aggregator/Aggregator features and the Cluster exporter checkbox).
Use TLS: Encrypt Syslog traffic with TLS (works only with the TCP protocol). Syslog records can be sent over a secure channel, as indicated in RFC 5425. This is common practice when the syslog channel is sent over the public internet or other unsafe networks. The TLS protocol ensures the syslog messages are securely sent and received over the network. After setting the general Syslog settings (host, port and export settings), do the following to enable TLS encryption for the Syslog channel:
● Make sure your Syslog Protocol is set to TCP.
● Make sure the Use TLS box is checked.
Verify Host: This field should always be checked; it verifies that the host domain presents a valid certificate. If this box is not checked, the TLS protocol will be used but there is no guarantee that the data is not intercepted by a third party.
Client certificate: Required if the syslog server performs client authentication. In this case, a specific client certificate should be given in order for Centra to successfully connect to the syslog server. This is usually not required for syslog servers on the public internet, such as Sumo Logic or Logz.io.
Exporting Options
Export Incidents: Choose whether to export incident information. Note that exporting incidents is subject to filters defined in System > Configuration > Exporters.
Alert minimum severity: The minimum alert severity to be exported: completed, info, warning, error.
Export agents log: Choose whether to export the Agents log to Syslog.
Export full changes of segmentation policies: Export full changes of segmentation policies (may include sensitive information).
Export label changes log: Choose whether to export Label changes log information.
Log messages to file: Choose whether to log all sent messages to a local file on the sending machine.
Report agent labels to syslog: Includes the Agent labels in the syslog report.
Agent labels list reported to syslog: Enables you to specify the Agent labels that will be reported in syslog.
Message Format
CEF: Common Event Format (CEF) is a logging and auditing file format from ArcSight. CEF is an extensible, text-based format designed to support multiple device types by offering the most relevant information. The CEF format description can be reviewed here: CommonEventFormatV25.pdf
RFC 5424
RFC-5424 Structured Data: Structured data elements as specified in RFC 5424, without brackets. E.g. Sumo Logic cloud syslog source token.
When Network Log Syslog Exporter is selected in the Type field of the Add New Syslog Integration
dialog box, a dialog box with fields similar to the Events Syslog Exporter dialog box above appears,
with the exception of the Exporting options:
Field: Explanation
Exported verdicts: Centra's verdict on how to handle the connection (corresponds to the Action filter in the Network log). Possible verdicts are Blocked, Will be Blocked, Alerted, Could not Block, Allowed.
Filter by labels: Enables filtering log entries whose source or destination belong to the specified label key and value.
Export label keys: Adds label info of specified keys to exported network logs.
5. Click the Test Connection button to test the connection and then click Save; the
configuration is added as a row in the Syslog Integration screen.
Bad reputation
Lateral movement
Network scan
Integrity
System Event
Audit log
Before using the Network Log Syslog Exporter, the administrator must enable the Network Log
Reporter that reports logs to the Network Log Syslog Exporter.
4.2.2 Email
Centra allows you to subscribe to incidents and/or system alerts. This way you will receive an
email every time an incident or system alert has been logged. Configuration varies between SaaS
users and on-premises users.
SaaS Users
The SMTP configurations are done by GuardiCore so SaaS users don't need to configure anything.
You only need to select the type of alerts you wish to receive - Incident Alerts, System Alerts or
both - and fill out related fields.
1. To choose the alerts you wish to receive, from Email Integration select Subscriptions > Alerts.
2. Check Enable Incident Alerts and/or Enable System Alerts to subscribe to the service.
Note: if you check Enable Incident Alerts, go to System > Configuration > Exporters to set the
severity level for incident alerts.
3. In Alert minimum severity select the alert severity level. The severity levels -
Info/Warning/Error - correspond to the severity levels of the System Log (Administration >
System >Log). This configuration defines the minimum severity that will trigger an email alert.
4. In Email addresses type the email address/addresses to send the incidents and alerts email to.
On-Premises Users
On-premises users need to first set SMTP configurations and then subscribe to the alerts service.
1. SSH to the Management server and type the following CLI command: gc-mgmtctl --import_all set_conf
--group email_smtp --option force_show_smtp_configurations --value True. The SMTP Setup
screen appears.
3. Next, choose the alerts you wish to receive, from Email Integration select Subscriptions >
Alerts.
4. Check Enable Incident Alerts and/or Enable System Alerts to subscribe to the service.
Note: if you check Enable Incident Alerts, go to System > Configuration > Exporters to set the
severity level for incident alerts.
5. In Alert minimum severity select the alert severity level. The severity levels -
Info/Warning/Error - correspond to the severity levels of the System Log (Administration >
System >Log). This configuration defines the minimum severity that will trigger an email alert.
6. In Email addresses type the email address/addresses to send the incidents and alerts email to.
4.2.3 Slack
Integrate with Slack to export Guardicore incident messages to your corporate Slack platform.
The integration uses a webhook URL that accepts notifications from Guardicore and passes them into Slack.
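For reference, a Slack incoming webhook simply accepts an HTTPS POST with a JSON payload; a minimal sketch of such a call (the webhook URL and message text are placeholders, and this does not represent Centra's internal implementation):
curl -X POST -H 'Content-type: application/json' --data '{"text":"Guardicore incident: lateral movement detected"}' https://hooks.slack.com/services/<token>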
4.3 Integrations
4.3.1 Integration with Palo Alto Networks Firewall
The integration of Guardicore Centra with Palo Alto Networks leverages Centra unique breach
detection capabilities and Palo Alto Networks firewall access control capabilities. The joint
solution allows security administrators to proactively block IP addresses of compromised assets to
gain control of the attack. As part of the attack mitigation, the IP address of the compromised
asset is automatically forwarded to the Palo Alto firewall from the Reveal map.
Guardicore Centra uses various techniques to detect zero day attacks in data centers, including
dynamic deception, reputation and policy based micro-segmentation. Once an attack is detected,
Guardicore Centra updates Palo Alto firewall with the IP address of the compromised host. The
Firewall then blocks connection attempts to and from the compromised asset, blocking its ability
to propagate in the datacenter.
How It Works
The process begins with Centra identifying a suspicious IP address that has generated a High
Severity incident. The IP can be either external, i.e. coming from the Internet, or part of internal,
east-west traffic. Once the IP is detected, it is relayed to Palo Alto Networks Panorama which then
blocks all connection attempts to and from the compromised asset through the NGFW, blocking
its ability to propagate in the data center. Centra can be configured to send this information
automatically or manually directly from its Reveal map. IPs are collected from all Centra’s
platforms including deception servers, Reveal maps and reputation servers.
The joint solution allows security administrators to proactively block compromised assets inside
the data center from performing data exfiltration or carrying out lateral movement. As part of the
attack mitigation, the IP address of the compromised asset is reported to the Palo Alto Networks
firewall which can cut the attacker’s communication line with its C&C server or prevent it from
exfiltrating previously stolen data.
2. As a best practice, for API access to Palo Alto Networks Panorama, set up a separate admin
account for XML API access to Panorama by following these steps:
ii. Enable or disable XML API features from the list, such as Report,
Log, and Configuration.
Configuration
Configuring Centra and Palo Alto Firewall integration is easily accomplished using Centra's Admin
panel and Palo Alto's Firewall.
1. On the Administration menu, select Mitigation & IoCs and click Firewall Mitigation.
Note that Centra provides separate configuration options for external and internal IPs. The
default value for both External IPs Action Mode and Internal IPs Action Mode is Manual:
In Manual mode you send suspected IPs to the firewall by first selecting incidents in the
Lateral Movement, Policy Violations, or Bad reputation Incident screens (or in All Incidents),
displaying the incident's Report, and then clicking the Report IP to Firewall button in the
report's Recommended Actions section.
Make sure that you use the same tag in Palo Alto Dynamic address group as used in the Internal
IPs Tag and External IPs Tag:
Palo Alto UI
4. On the Firewalls Integration page, configure the Palo Alto firewall fields and whether to
report to all firewalls or to specific ones.
5. After completing the configuration and clicking Save Changes, you should be able to see
the Report IP to Firewall button in the Recommended Actions section of an incident's
Report page (If you have set Action Mode to Manual in the Firewall Mitigation page as
described above; if you've set it to Automatic, the IP will be automatically reported):
Similarly, if you have specified Manual mode in the Firewall Mitigation dialog, you can report an IP
of any asset on the Reveal map, even if this asset is not part of an ongoing incident. In the asset's
Asset information panel, click the Report IP to Firewall button as shown in the following figure:
Troubleshooting
3. Verify that the Dynamic address groups are defined on both systems.
1. Fill in the fields in the Add New User Directory dialog box:
Field Description
Login Username Type the username of the service account that will be used to connect to the domain.
Base DN The root distinguished name (DN) to use when running queries against the directory
server.
LDAP Providers A list of servers (domain name or IP) through which the connection to the domain will be
made.
Use SSL Click this checkbox to secure the directory with SSL.
The user directory is added. Note that you can modify the lookup order with the
exception of Locally Defined Users which is always the first entry on the list.
2. Configure the password to never expire and save it for later; in this example the password is "123456".
3. In User Settings, enable the following: 'This account supports Kerberos AES 256 bit
encryption' and 'Password never expires'.
1. Open CMD (not powershell) on the AD server with admin privileges. Here is a quick review
of the syntax:
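A minimal sketch of the keytab creation syntax, assuming the standard ktpass utility and the demo values used in this section (the service account name gc-sso-svc is a placeholder, and Kerberos realms are conventionally written in upper case):
ktpass /princ HTTP/centra.testing.gc@TESTING.GC /mapuser TESTING\gc-sso-svc /pass 123456 /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /out C:\centra.keytab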
3. Move the Centra Keytab file created in the 'C:\' drive to a secure location.
4. If you want to read more on keytab, here's all you need to know about Keytab files.
1. Make sure you already have LDAP configured in the system as the permission group
membership check relies on the LDAP connection
Note: In our demo, the values in the above figure are replaced with the following:
Centra FQDN is centra.testing.gc
Realm is testing.gc
Keytab is the file that we saved in the previous section; upload it here. Once the file
uploads, the box turns green.
3. After you configure all the Kerberos details, it should look like this:
Note that the test connection button only tests the LDAP connection and not the Kerberos one.
1. Make sure you have access to a user and an endpoint that are part of the domain. The user
should be part of a group in the AD that is allowed to access Centra.
2. While logged in with the domain user, open a Chrome browser and go to the Centra
address.
4. If you get signed in automatically but want to use a different built-in user, simply log out
and use the alternative credentials.
1. Sign into the Azure portal as a cloud application admin or application admin for your Azure
AD tenant.
2. Navigate to Azure Active Directory > Enterprise applications and select Centra from the
list. If a Guardicore application has not been created, create a new application (to add new
application, select New application).
3. Under the Manage section, select Single sign-on > SAML. The Set up Single Sign-On with
SAML - Preview page appears:
4. Select the Edit button to edit the parameters for Basic SAML Configuration:
Field Value
6. After filling out the fields, click the Save button at the top of the dialog box.
8. Click the Edit button at the top right of the dialog box and fill in the following fields:
Name: user.mail
MemberOf: user.groups
Note: If the number of groups the user is in exceeds a limit (150 for SAML, 200 for JWT), see the instructions at the end of this document.
UserEmail: user.mail
9. Click Save and move to the next section, SAML Signing Certificate.
10. Click the Edit button and for the Certificate Signing Option, select Sign SAML response
and assertion, and click Save.
11. Download the SAML certificate by clicking the download link for Certificate
(Base64).
12. Copy the App Federation Metadata URL (the login url) for use at a later stage.
2. Click Add User Directory to display the Add New User Directory dialog
box:
Name Enter a friendly name that will help you identify this for your SSO setup.
Idp Entity ID: The Azure AD identifier (Identifier Entity ID) under the Azure Configure
Sign-On page.
Note: If the Azure IdP entity ID contains a trailing slash character /, the UI prevents
adding it. There is a workaround that requires changing the entity ID in MongoDB
manually.
Idp SSO URL Paste the login URL (App Federation Metadata URL) that you copied in Stage
1, step 12 above.
Idp Certificate Open the certificate that you downloaded in Stage 1, step 11 above and copy its
contents. Then paste the contents into this field.
4. Click Verify Configuration and then click Save. The User Directory is listed on the User
Directory screen.
5. On the User Directory screen, select the User Directory that you just created and in the
User Directory box on the right, select the Key button at the top to download the
signing certificate. You will need to upload this to Azure AD.
For Linked Directory Groups, select the SSO User Directory that you created and add the
value that you expect to receive for memberOf. For example, if you entered groups, add the
group name, ObjectID, etc. that you expect to be sent to Centra.
2. Select Import Certificate and select the certificate file you downloaded from Centra.
3. Once imported, please select the ... on the certificate you uploaded and activate it.
4. Test login.
1. Make sure you assigned the required groups to your Guardicore application in
Azure Active Directory. Use this link to assign groups to an application that is using
Azure AD.
2. Navigate to Azure Active Directory > Enterprise applications and select the
Guardicore application from the list.
6. Select the Groups assigned to the application in the Group Claims options:
7. Choose the sAMAccountName option in the Source attribute drop down list.
Note: Step 1 is redundant once the Guardicore app is accepted into the Okta application directory.
1. In the Okta classic UI, select Applications and click the Add Application button:
2. Click Create New App and in the Create a New Integration dialog box, specify the
following:
Platform: Web
3. Click Create and under General Settings, for App Name, specify Guardicore:
Single Sign on URL This should be the URL to the Centra system as the client sees it
concatenated with the SAML authentication REST endpoint. For example,
for GC-MGMT it's
'https://cus-1801.cloud.guardicore.com/sso-authenticate'.
So the pattern is 'https://{Centra URL}/sso-authenticate'
● Select 'Use this for Recipient URL and Destination URL'
Audience URI (SP Entity ID) The Centra URL. For example for GC-MGMT:
'https://cus-1801.cloud.guardicore.com'
Add one attribute named 'userEmail' with Name format set to 'Basic'. Value should be
'user.email'. The attribute name 'userEmail' is case sensitive so make sure you are
writing it exactly as shown.
Note: If a user in the user@domain format has already been configured manually in
Centra, SAML authentication will fail for that user and will default to local
authentication.
Add one attribute named 'memberOf' with Name format set to 'Basic'. Filter should be
set to 'Matches regex' with value '.*' (dot and asterisk). 'memberOf' is case
sensitive:
10. Click on View Setup Instructions to open a new page with the SAML details. You will need
to copy some of these details for Step 2 that follows.
1. Click on the newly created Okta Guardicore application and navigate to the 'Sign-On' tab.
2. In Centra's Admin screen, select User Management, User Directories to display the Add
New User Directory dialog box:
a. In the User Directories screen, click the provider (Okta) to display User
Directory Details and a Key button:
5. Return to the Okta UI and click the Edit for SAML settings under the Centra app.
6. Under Advanced Settings, in the Encryption Certificate box, click the Browse button and
upload the PEM file.
This step enables configuring the actual users. In the following instructions we will configure Okta
users, but in a real use case it could also be a user that is synced from an internal AD. All that
matters is that the group is configured correctly.
1. In the Okta UI, click Directory/Groups, and click the Add Group button to add a new group
(in this example, GC):
2. Click the group and associate users with it. In this example, a user named Test was
associated with the group.
Note: Make sure you type the name correctly, as there is no validation feedback on this field.
This article provides instructions on how to configure SAML 2.0 for Guardicore Centra in the
Red Hat environment. The instructions comprise four stages:
2. Make sure you are in the relevant realm that contains the users for the Centra
integration.
3. In the Master menu, under Configure, choose Clients and click the Create button:
Field Value
Enabled On
Include AuthnStatement On
Sign Documents On
Sign Assertions On
Encrypt Assertions On
Valid Redirect URLs The client SAML endpoint (in this example,
http://centra.acme.org/sso-authenticate)
Master SAML Processing URL The client SAML endpoint (in this example,
http://centra.acme.org/sso-authenticate)
6. Select the Roles tab and make sure no roles are assigned to this client.
7. Select the Client Scopes tab and make sure no roles are assigned to this client.
8. Select the Mappers tab and click Create to display the Create Protocol Mapper dialog
box:
9. In the Create Protocol Mapper dialog box, fill in the fields as follows:
Field Value
Name memberOf
SAML Attribute ON
NameFormat
10. Click Save to save the data and return to the Mappers tab.
11. On the Mappers tab, select the add builtin button to display the Add Builtin Protocol
Mapper dialog box:
12. In the Add Builtin Protocol Mapper dialog box, select x.500 email mapper and click the
Add selected button.
13. On the Mappers tab, edit the x.500 email mapper as follows:
15. On the Installation tab, in the Format Option list, select SAML Metadata
IDPSSODescriptor:
6. Idp Entity ID: copy the entityID url from the EntityDescriptor section in the xml from the
previous section.
7. Idp SSO URL: copy the SingleSignOnService Location URL from the
SingleSignOnService section in the xml from the previous section.
8. Idp Certificate: copy the certificate from the dsig:X509Certificate section in the xml from
the previous section.
9. Click Verify.
1. Click on the newly created entry; a pane should appear on the right.
2. Click the Key icon to download the public key for assertion encryption configuration:
3. Open the RH SSO console and select the clients SAML Keys tab.
4. Click Import under the Encryption Key section and import the PEM file downloaded from
the Centra system.
5. Click Import under the Signing Key section and import the PEM file downloaded from the
Centra system.
Instructions for configuring Guardicore Centra as an SP for FortiAuthenticator are standard and
provided below. However, there are two additional “non-standard” settings that must be
configured by Guardicore Support:
Therefore, in the meantime, you must contact Guardicore Support to deactivate the
requirement for encrypted assertions. Please open a support ticket and be sure to
indicate that you are using FortiAuthenticator for SSO/SAML and that you require an
encrypt assertion override.
2. For general IdP settings, enable the SAML identity provider portal and enter the following:
b. Realms: Add the realm associated with the remote server for G Suite.
a. From Authentication > SAML IdP > Service Providers create a name (for example,
Guardicore) for the service provider (Guardicore) that you will use as a SAML client.
b. Enter the SP information from the client you will use as the SAML service provider
(enter the Centra URL that you are using).
d. Under SAML Attribute click Create New, and enter a SAML Attribute name that
your SAML SP is expecting to identify the user. Select a User Attribute for this
selection. If you're unsure of which attribute to pick, select SAML Username.
2. Click + Add User Directory to display the Add New User Directory dialog box:
Name Enter a friendly name that will help you identify this for your SSO setup.
Note: If the IdP entity ID contains a slash character / at the end, the
UI prevents adding it. Contact Guardicore Support to manually
change the entity ID in Centra’s configuration database.
Idp SSO URL Paste the login URL that you entered from the previous stage.
Idp Certificate Open the certificate from the IdP metadata that you downloaded from
Stage 1 and paste the contents into this field.
4. Click Verify Configuration and then click Save. The User Directory is listed on the User
Directory screen.
Permission Schemes enables administrators to restrict a user's access to Reveal maps, Incidents,
and Neighboring Assets. Administrators can assign scoped permissions such as View Reveal Maps
or View Incidents. For example, some application owners might be allowed to view all data
pertaining to their application (with all other applications hidden) while some site owners might be
allowed to access only Reveal maps pertaining to their environment.
Some of the reasons for creating permission schemes include the following:
● Limit users' view based on asset labels, e.g. service providers may want to provide their
customers access to the information related to their assets only.
● Allow each user to view only a limited scope of Centra.
Field Description
Role A role is a set of permissions and related allowed actions. The following roles are
available:
System Custom
Administrator
Prevent override rules creation or modification checkbox: The checkbox is available for the Application Owner role only. Selecting the checkbox prevents anyone with the Application Owner role from creating or editing Override rules. Override rules appear as read only to the Application Owner.
Default View: The first Centra screen the user sees after login, based on the defined permission.
Linked Directory Groups: Attach custom permission schemes to Active Directory groups. Make sure you activate the User Directories feature before you activate the new AD groups in the Linked Directory Groups field.
3. Click Save. The Permission Scheme is displayed in the list of Permission Schemes:
4. Clicking a Permission Scheme in the list displays the scheme's details in the right pane and
enables you to edit the scheme:
The following table provides details on the default role permissions to Centra's features.
Title  Action  Global Admin  Guest  System Administrator  Global Policy Admin  Application Owner  Reveal Map Viewer  Incidents Viewer
Dashboard View ✓ ✓ ✓ ✓
Network View ✓ ✓ ✓ ✓
Statistics
Reveal>Explore Explore ✓ ✓ ✓ ✓ ✓ ✓ ✓
and Saved Maps
Create ✓ ✓ ✓ ✓ ✓ ✓
Delete ✓ ✓ ✓ ✓ ✓
Label ✓ ✓
asset
Set map ✓ ✓ ✓ ✓ ✓ ✓
default
view
Explore ✓ ✓ ✓
Precomp
uted
Explore ✓ ✓
Private
Explore ✓ ✓ ✓ ✓
All
Scoped
Create ✓ ✓
Private
Reveal>Labels View ✓ ✓ ✓ ✓ ✓
labels
Add label ✓ ✓
Delete ✓ ✓
label
Edit label ✓ ✓
Policy>Projects View ✓ ✓ ✓ ✓
Edit ✓ ✓
Policy>Rules View ✓ ✓ ✓ ✓ ✓
Publish ✓ ✓
changes
Discard ✓ ✓ ✓
changes
Suggest ✓ ✓ ✓
changes
Policy>Revisions View ✓ ✓ ✓ ✓ ✓
Revert ✓ ✓
policy
Policy>Label ✓ ✓
Groups
Policy>User View ✓ ✓ ✓ ✓
Groups
Edit ✓ ✓
Publish ✓ ✓
changes
Discard ✓ ✓
changes
Incidents + View ✓ ✓ ✓ ✓ ✓ ✓
Incident Groups
Edit ✓ ✓ ✓
Assets View ✓ ✓ ✓ ✓ ✓
Edit ✓ ✓
Activity>Networ View ✓ ✓ ✓ ✓ ✓
k Log
Activity>Redirec View ✓ ✓ ✓ ✓ ✓
tion Log
Activity>Reputat View ✓ ✓ ✓ ✓ ✓
ion Log
Activity>Integrit View ✓ ✓ ✓ ✓
y Log
Activity>Label View ✓ ✓ ✓ ✓
Log
Edit ✓
Detection>Detec View ✓ ✓ ✓ ✓
tors
Edit ✓
Detection>Reput View ✓ ✓ ✓ ✓
ation
Edit ✓
Integrity View ✓ ✓ ✓ ✓
Monitoring>Tem
plates
Publish ✓
changes
Discard ✓
changes
Suggest ✓
changes
Cleanup ✓
stale
hashes
Edit ✓
Components>De View ✓ ✓ ✓
ception Servers
Edit ✓ ✓
Components>Col View ✓ ✓ ✓
lectors
Edit ✓ ✓
Components>Ag View ✓ ✓ ✓
gregators
Edit ✓ ✓
Agents>Agents View ✓ ✓ ✓ ✓
Edit ✓ ✓
Agents>Agent View ✓ ✓ ✓
Installation
Screen
Agents>Agents View ✓ ✓ ✓
Log
Agents>Agent View ✓ ✓ ✓ ✓
installation
profiles
Edit ✓ ✓
Data View ✓ ✓ ✓
Center>Orchestr
ations
Edit ✓ ✓
View ✓ ✓ ✓
Data View ✓ ✓ ✓
Center>Orchestr
ations
Integration View ✓ ✓ ✓
Edit ✓ ✓
User View ✓ ✓
Management>Us
ers
User View ✓ ✓ ✓
Management>Us
er Directories
Edit ✓ ✓
User View ✓ ✓
Management>Pe
rmission
Schemes
Edit ✓ ✓
System>Log View ✓ ✓ ✓
System>Configur View ✓ ✓ ✓
ation
Edit ✓ ✓
System>Info View ✓ ✓
V31 introduces a new role into the system – Application Owner. The role allows you to define
configuration access only to a specific scope of assets. Scoping in v31 enables users to create and
edit segmentation rules within a particular scope. The scope for creating and editing these rules is
determined by the labels that have been defined within the user’s scope in the user’s assigned
Permission Scheme. Scoping of Segmentation Rules adheres to the following restrictions:
● Application owners can create new rules that include the scoped labels but cannot publish
the rules. The rules can be reviewed and published by the Administrator or Global Policy
Admin.
● Application owners cannot revert policy.
● Application owners can only discard changes in the context of their own changes and
cannot affect changes in other users' contexts.
● Application owners will see unpublished rules only within their scope; they will not see
unpublished rules in other users' scopes unless an unpublished rule directly affects any of
the scoped rules.
● All other aspects of scoping such as scoping for Reveal maps and the ability to view
incidents, assets, activity logs, FIM policy, etc. are as in previous versions.
1. From System, select Users and in the Add New User dialog box, fill in the Username, Email
Address and Description fields.
2. In the Permission Scheme field, scroll through the list of Permission Schemes and select the
scheme that you want to assign to the user.
Background
This article contains instructions for connecting a Centra Management to Guardicore’s KO Cloud.
The KO Cloud is a hosted environment that contains the gc_enforcement kernel modules for all
supported Linux distributions for all existing and supported kernel versions.
This article is relevant for on-premises deployments, as SaaS deployments are automatically
connected to the KO Cloud.
In case the matching KO file is not found on the management or on the KO Cloud - a rare situation
typically associated with custom-built kernels which are not available on public repositories - the
Agent will move to “polling mode” and will provide limited Reveal service. A flag will be raised in
Management so the issue can be detected by the administrator and handled with Guardicore
support. When the supported KOs are added locally to Management, or to the KO Cloud (and from
there to Management and Aggregators), the wrapper automatically detects the added KOs and
installs them, and the Agent returns to normal operation without any operator involvement.
Preparation/Prerequisites
1. Receive a token.json file from Guardicore.
2. Configure permissions on the Management:
chown -R guardicore-svc:guardicore-svc /storage/kos/
chmod 755 /storage/kos/store
chmod 777 /storage/kos/cache
chmod 644 /storage/kos/store/*
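To double-check the result before continuing, you can list the directories with standard Linux
commands (this check is only a suggestion, not part of the Guardicore tooling):
ls -ld /storage/kos/store /storage/kos/cache   # expect modes 755 and 777, owned by guardicore-svc
ls -l /storage/kos/store                        # store contents should be mode 644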
Steps
1. Copy the token.json received from Guardicore onto the Management in the path
/etc/guardicore/ko_cloud/token.json
2. Edit `/etc/guardicore/ko_cloud/major_versions.csv` with the Centra major versions relevant
to this deployment. For example:
36,37
3. Enable the ko-cloud configuration by running the following command:
gc-ko-cli configure --sleep-interval-seconds 3600 --bucket-name ko-cloud-bucket --enable
a. To add proxy support, also specify proxy URL (GA since v32):
gc-ko-cli configure --sleep-interval-seconds 3600 --bucket-name ko-cloud-bucket
--enable --proxy-url https://<proxy_url>
*Note: The GCP bucket accepts only an HTTPS proxy. If your proxy is an HTTP proxy, set it in the
command as: https_proxy=http://URL:PORT
4. Check KO-cloud status by running:
gc-ko-cli status
5. Validate that sqsh files are updating under `/storage/kos/store/default`
6. In order to manually fetch new KOs, run the following command:
gc-ko-cli fetch
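For reference, the full flow on the Management host can be sketched as follows. The source path of
token.json and the example major versions are illustrative; the gc-ko-cli flags are the ones shown in
the steps above.
# Copy the token received from Guardicore into place (source path is illustrative)
cp /root/token.json /etc/guardicore/ko_cloud/token.json
# List the relevant Centra major versions
echo "36,37" > /etc/guardicore/ko_cloud/major_versions.csv
# Enable the KO Cloud connection and check its status
gc-ko-cli configure --sleep-interval-seconds 3600 --bucket-name ko-cloud-bucket --enable
gc-ko-cli status
# Verify that sqsh files are updating, and optionally trigger a manual fetch
ls -l /storage/kos/store/default
gc-ko-cli fetch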
Unplanned incidents can happen at any time. Your network could suffer connectivity problems,
the hypervisor that hosts your system components can crash, or your entire site might fail. When
things don’t go as planned, it’s important to have a well-planned disaster recovery solution that
ensures continuous system operation at all times. Guardicore’s disaster recovery approach allows
for continuous system operation in times of complete site failure, connectivity problems,
hypervisor crash, etc.
Note: The described solution is relevant for on-premises deployments only, as availability of SaaS
deployments is guaranteed by Guardicore.
Either cluster can act as the active or the backup cluster, but only one is active at any given time. If
the primary cluster fails, you can initiate a failover on the standby cluster to continue system
operations there. When the primary cluster becomes available again, it returns to the active role
and the standby cluster goes back to being the backup cluster.
Centra ensures there is an ongoing sync between the two clusters. For example, all segmentation
rules and labels written to the primary cluster are replicated to the backup cluster, and the other
way around.
What's synced
● Configuration (information)
● Segmentation policy
● Reveal data
● Incidents data
Before you can initiate a failover, you must first configure the system so that it is capable of
switching between a primary management cluster and a secondary management cluster.
1. Install two different management clusters. These are referred to as Primary Management
Master/Cluster and Standby Management Master/Cluster.
2. Allow SSH communication between the management masters of the two clusters (e.g. by
running ssh-copy-id <standby-IP>).
3. Sync the certificates between the primary management master and the standby
management master:
Add the following at the end of the file /etc/guardicore/hosts on the primary:
...
[peer_master]
<standby_master_ip>
4. To synchronize the certificates, run the following on the primary management cluster:
gc-dr-cli sync-standby-certs
This copies all certificates from the primary management master to the standby one.
Notes:
6. Enable the primary management cluster by running the following on the primary:
gc-dr-cli enable
Note:
sleep-interval-seconds determines the interval at which the standby management cluster
attempts to fetch the new backup configuration.
8. Enable the standby management cluster by running the following on the standby:
gc-dr-cli enable
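A compact recap of the configuration commands shown above; the placeholders refer to the
standby master's address, and only the commands already listed in these steps are included.
# On the primary management master:
ssh-copy-id <standby-IP>            # allow SSH from the primary to the standby master
vi /etc/guardicore/hosts            # append [peer_master] and <standby_master_ip> at the end of the file
gc-dr-cli sync-standby-certs        # copy all certificates from the primary to the standby
gc-dr-cli enable                    # enable DR on the primary cluster

# On the standby management master:
gc-dr-cli enable                    # enable DR on the standby cluster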
Initiating Failover
In case the primary cluster fails for any reason, the administrator can initiate the failover. This will
cause the standby management cluster to take over as the primary management cluster. All
management operations will then be available on the new active cluster.
The process takes around 10 minutes, including the shifting of the components to the standby
management master which now acts as the primary management master.
Once the disaster has been resolved and the designated primary cluster regains availability, the
administrator can initiate the failback process. All configuration and policy changes made to the
standby cluster during the disaster period will be synced back to the primary cluster.
1. To initiate the failback and return the system to the primary cluster, run the following on
the standby:
gc-dr-cli generate-config
2. Initiate fetch and load configuration from the designated standby (current "active"
management). Run the following on the primary management master:
gc-dr-cli pull-and-load-config
The designated primary management master will now pull the file created on the standby
management master and load it. The designated primary management master is now ready for
use.
Return the standby management master to its original standby role by running the following on
the standby management master:
gc-dr-cli standby
The designated standby is stopped, and the primary management cluster becomes the active one.
This returns us to the original state: the designated master cluster is the "active" cluster, while the
designated standby cluster is the "backup" one.
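In short, the failback sequence runs three commands across the two masters, as described above:
# On the standby (currently active) management master:
gc-dr-cli generate-config        # export the current configuration
# On the designated primary management master:
gc-dr-cli pull-and-load-config   # pull the exported file from the standby and load it
# Back on the standby management master:
gc-dr-cli standby                # return the standby to its backup role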
Use this procedure if the old primary management cluster is gone or non-recoverable.
Preliminary steps
Deploy a new primary management cluster with the same control node IP address as the previous
primary control node.
Failback Steps
Re-establish SSH trust between the two management masters (run ssh-copy-id on each master
toward its peer):
ssh-copy-id <standby-IP>
ssh-copy-id <primary-IP>
1. On the primary, in the file /etc/guardicore/hosts, at the bottom of the file, add the
following:
...
[peer_master]
<standby_control_node_ip>
Note: You do not need to do this on the standby as it should already be present.
- "aggregator"
- "disaster_recovery"
- "disaster_recovery_server"
- "gcca"
- "mesos_master"
- "mitigation_ca"
- "mongodbclient"
- "mongodbserver"
- "mitigation_cas_chain.pem"
- "rabbitmq"
- "rabbitmqserver"
- "remote_ssl_proxy"
- "remote_ssl_proxy_server"
gc-dr-cli propagate-certificates
5. Restart dr-ssl-proxy
gc-cluster-cli infra-service-restart --infra_name dr_ssl_proxy
Configure the new primary as instructed in the Instructions for Configuring the System for
Disaster Recovery section. No need to re-configure the standby.
Now that the primary disaster recovery settings have been configured, enable it by running the
following on the standby:
gc-dr-cli enable
gc-dr-cli pull-and-load-config
gc-dr-cli standby
Steps
1. Contact Guardicore support team and receive the Plugins Server OVA, and deploy in the
environment.
2. Connect to the Plugins Server using the temporary root password: GCAdmin123
3. Configure the interface of the machine:
a. To set it to using DHCP:
gc-plugins-server configure_interface --set_dhcp
b. To configure a static IP:
gc-plugins-server configure_interface --ip <ip/CIDR> --gw
<gateway IP> --dns <comma-separated list of dns servers>
4. Configure the REST API username/password:
a. To create a new user go to Centra -> Administration -> Users and click on Add User
i. Create an admin user
b. Configure the plugin server with the username/password that were just created
and the management server FQDN/IP:
5 Appendices
5.1 Appendix A: Agents and OS Support
For updated OS support for Guardicore Agents, refer to this link.
For further information regarding support for your Centra version, contact your Guardicore
engineer.
Prerequisite
● Verify with the client that no other packages were installed on the management host,
such as additional user agents, etc.
● Contact Guardicore PS team to create a new package.
● Download the package and obtain the 2 files:
○ gc_bionic_security_package_<datetime>.tar - A file containing the deb
packages that will be installed.
○ Apply_security_package.sh - A script file that applies the patch.
Procedure
● Create a snapshot of each node of the management cluster.
● Stop the cluster, including all pipelines and infra:
○ gc-cluster-cli cluster-stop --group all
○ gc-cluster-cli cluster-stop --group proxy
● Verify that the Management node has sufficient disk space and select a target directory.
Note: You need free space of about twice the size of the patch: once for the downloaded
archive and once for its extracted contents.
● Copy the .tar file and the script to the target directory (/storage is recommended).
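A short sketch of the steps above; the free-space check is a generic suggestion, and the
<datetime> placeholder should match the package you received.
# Stop the cluster, including all pipelines and infra
gc-cluster-cli cluster-stop --group all
gc-cluster-cli cluster-stop --group proxy
# Confirm that roughly twice the patch size is free on the target filesystem
df -h /storage
# Copy the patch archive and the apply script to the target directory
cp gc_bionic_security_package_<datetime>.tar Apply_security_package.sh /storage/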