
Apigee Edge for Private Cloud 4.51.00

Contents

Release notes
https://docs.apigee.com/release/notes/45100-edge-private-cloud-release-notes#new-features

Release summary
Supported software

Bug fixes
Security issues fixed
Known issues

Architectural Overview
Before installing Apigee Edge for Private Cloud, you should be familiar with the
overall organization of Edge modules and software components.

Apigee Edge for Private Cloud consists of the following modules:

● Apigee Edge Gateway (aka API Services)
● Apigee Edge Analytics
● Apigee Edge Developer Services portal
● Apigee Edge Monetization Services (aka Developer Services Monetization)

The following image shows how the different modules interact within Apigee Edge:

Apigee Edge Gateway

Edge Gateway is the core module of Apigee Edge and is the main tool for managing
your APIs. The Gateway UI provides tools for adding and configuring your APIs,
setting up bundles of resources, and managing developers and apps. The Gateway
offloads many common management concerns from your backend API. When you
add an API, you can apply policies for security, rate-limiting, mediation, caching, and
other controls. You can also customize the behavior of your API by applying custom
scripts, making call outs to third-party APIs, and so on.

Software Components

Edge Gateway is built from the following primary components:

● Edge Management Server
● Apache ZooKeeper
● Apache Cassandra
● Edge Router
● Edge Message Processor
● OpenLDAP
● Edge UI (formerly known as the New Edge experience) and Classic UI

Edge Gateway is designed so that these components may all be installed on a single host or distributed among several hosts.

Apigee Edge Analytics


Edge Analytics provides powerful API analytics that reveal long-term usage trends. You can
segment your audience by top developers and apps, learn about usage by API
method to know where to invest, and create custom reports on business-level
information.

As data passes through Apigee Edge, several default types of information are
collected including URL, IP, user ID for API call information, latency, and error data.
You can use policies to add other information, such as headers, query parameters,
and portions of a request or response extracted from XML or JSON.

All data is pushed to Edge Analytics where it is maintained by the analytics server in
the background. Data aggregation tools can be used to compile various built-in or
custom reports.

Software Components

Edge Analytics comprises the following:

● Qpid, which consists of the following:
  ● Apache Qpid messaging system
  ● Apigee Qpid Server service: a Java service from Apigee used to manage Apache Qpid
● Postgres, which consists of the following:
  ● PostgreSQL database
  ● Apigee Postgres Server service: a Java service from Apigee used to manage the PostgreSQL database

Apigee Edge Developer Services portal

Apigee Developer Services portal (or simply, the portal) is a template portal for
content and community management. It is based on the open source Drupal project.
The default setup allows creating and managing API documentation, forums, and
blogs. A built-in test console allows testing of APIs in real time from within the
portal.

Apart from content management, the portal has various features for community management, such as manual/automatic user registration and moderation of user comments. A Role-Based Access Control (RBAC) model controls access to features on the portal. For example, you can enable controls to allow registered users to create forum posts, use test consoles, and so on.

The Apigee Edge for Private Cloud deployment script does not include portal
deployment. The portal deployment on-premises is supported by its own installation
script. For more information, see Install the portal.

Apigee Edge Monetization Services

Edge Monetization Services is a powerful new extension to Apigee Edge for Private Cloud. As an API provider, you need an easy-to-use and flexible way to monetize your APIs so that you can generate revenue for the use of those APIs. Monetization Services meets those requirements. Using Monetization Services, you can create a variety of rate plans that charge developers for the use of your APIs bundled into packages. The solution offers an extensive degree of flexibility: you can create prepaid plans, postpaid plans, fixed-fee plans, variable-rate plans, freemium plans, plans tailored to specific developers, plans covering groups of developers, and more.

In addition, Monetization Services includes reporting and billing facilities. For


example, as an API provider, you can get summary or detailed reports on traffic to
your API packages for which developers purchased a rate plan. You can also make
adjustments to these records as necessary. And you can create billing documents
(which include applicable taxes) for the use of your API packages and publish those
documents to developers.

Note: Monetization Services cannot be installed on an All-in-One (AiO) configuration.

You can also set limits to help control and monitor the performance of your API
packages and allow you to react accordingly, and you can set up automatic
notifications for when those limits are approached or reached.

Note: The core Apigee Edge (Gateway and Analytics) is a prerequisite for using Monetization
Services.

Monetization Services Features

The key features of Edge Monetization Services include:

● Full integration with the API platform, enabling real-time interaction
● Support for all business models out of the box, from simple fee-based plans to complex charging/revenue-share plans (plans are easy to create and modify)
● Rating of transactions based on volume or on custom attributes within each transaction; a transaction can be made up of APIs from the Gateway plus other systems external to Apigee Edge
● Automated tools, such as limits and notifications, to monitor performance and manage the process
● Integrated developer/partner workflow and controls to manage purchases through billing and payment
● Full self-service for business users and developers/partners, with no need for costly technical intervention
● Integration with any backend sales, accounting, and ERP system
Software Components

Edge Monetization Services is built on top of the following primary components:

● Edge Management Server
● Edge Message Processor

For more information on getting started with Monetization Services using the Edge UI,
see Get started using monetization.

On-Premises deployment
An on-premises installation of core Apigee Edge for Private Cloud (Gateway and Analytics) provides the infrastructure required to run API traffic on behalf of the on-premises client's customers.

The components provided by the on-premises installation of Edge Gateway include (but are not limited to):
● A Router handles all incoming API traffic from a load balancer, determines the
organization and environments for the API proxy that handles the request,
balances requests across available Message Processors, and then
dispatches the request. The Router terminates the HTTP request, handles the
TLS/SSL traffic, and uses the virtual host name, port, and URI to steer
requests to the appropriate Message Processor.
● A Message Processor processes API requests. The Message Processor
evaluates an incoming request, executes any Apigee policies, and calls the
back-end systems and other systems to retrieve data. Once those responses
have been received, the Message Processor formats a response and returns it
to the client.
● Apache Cassandra is the runtime data repository that stores application
configurations, distributed quota counters, API keys, and OAuth tokens for
applications running on the gateway.
● Apache ZooKeeper contains configuration data about the location and
configuration of the various Apigee components, and notifies the different
servers of configuration changes.
● OpenLDAP (LDAP) to manage system and organization users and roles.
● A Management Server to hold these pieces together. The Management Server
is the endpoint for Edge Management API requests. It also interacts with the
Edge UI.
● A UI provides browser-based tooling that lets you perform most of the tasks
necessary to create, configure, and manage API proxies, API products, apps,
and users.

The components provided by the on-premises installation of Edge Analytics include:

● A Qpid Server manages the queuing system for analytics data.
● A Postgres Server manages the PostgreSQL analytics database.

The following diagram illustrates how Apigee Edge components interact:


Installation requirements
Hardware requirements
You must meet the following minimum hardware requirements for a highly available
infrastructure in a production grade environment.


For all installation scenarios described in Installation topologies, the following tables
list the minimum hardware requirements for the installation components.

In these tables the hard disk requirements are in addition to the hard disk space
required by the operating system. Depending on your applications and network
traffic, your installation might require more or fewer resources than listed below.

Installation Component | RAM | CPU | Minimum hard disk
Cassandra | 16GB | 8-core | 250GB local storage with SSD supporting 2000 IOPS
Message Processor/Router on same machine | 16GB | 8-core | 100GB
Message Processor (standalone) | 16GB | 8-core | 100GB
Router (standalone) | 16GB | 8-core | 100GB
Analytics - Postgres/Qpid on same server | 16GB* | 8-core* | 500GB - 1TB** network storage***, preferably with SSD backend, supporting 1000 IOPS or higher*
Analytics - Postgres master or standby (standalone) | 16GB* | 8-core* | 500GB - 1TB** network storage***, preferably with SSD backend, supporting 1000 IOPS or higher*
Analytics - Qpid standalone | 8GB | 4-core | 30GB - 50GB local storage with SSD. The default Qpid queue size is 1 GB, which can be increased to 2 GB. If you need more capacity, add additional Qpid nodes.
OpenLDAP/UI/Management Server | 8GB | 4-core | 60GB
UI/Management Server | 4GB | 2-core | 60GB
OpenLDAP (standalone) | 4GB | 2-core | 60GB

* Adjust Postgres system requirements based on throughput:

● Less than 250 TPS: 8GB, 4-core can be considered with managed network storage*** supporting 1000 IOPS or higher

● Greater than 250 TPS: 16GB, 8-core, managed network storage*** supporting 1000 IOPS or higher

● Greater than 1000 TPS: 16GB, 8-core, managed network storage*** supporting 2000 IOPS or higher

● Greater than 2000 TPS: 32GB, 16-core, managed network storage*** supporting 2000 IOPS or higher

● Greater than 4000 TPS: 64GB, 32-core, managed network storage*** supporting 4000 IOPS or higher
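The tiers above can be captured in a small lookup. This is a sketch only: the function name postgres_tier is hypothetical, and the boundary behavior at exactly 250/1000/2000/4000 TPS is an assumption, since the guidance lists only "less than" and "greater than":

```shell
#!/usr/bin/env bash
# Sketch (postgres_tier is a hypothetical helper): map expected
# analytics TPS to the Postgres sizing tier from the guidance above.
postgres_tier() {
  local tps="$1"
  if   [ "$tps" -lt 250 ];  then echo "8GB RAM, 4-core, 1000 IOPS"
  elif [ "$tps" -le 1000 ]; then echo "16GB RAM, 8-core, 1000 IOPS"
  elif [ "$tps" -le 2000 ]; then echo "16GB RAM, 8-core, 2000 IOPS"
  elif [ "$tps" -le 4000 ]; then echo "32GB RAM, 16-core, 2000 IOPS"
  else                           echo "64GB RAM, 32-core, 4000 IOPS"
  fi
}
postgres_tier 500    # prints: 16GB RAM, 8-core, 1000 IOPS
```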

** The Postgres hard disk value is based on the out-of-the-box analytics captured by Edge. If you add custom values to the analytics data, then these values should be increased accordingly. Use the following formula to estimate the required storage:

bytes of storage needed =
  (# bytes of analytics data/request) *
  (requests/second) *
  (seconds/hour) *
  (hours of peak usage/day) *
  (days/month) *
  (months of data retention)

For example:

(2K bytes) * (100 req/sec) * (3600 secs/hr) * (18 peak hours/day) * (30 days/month) * (3 months retention)
= 1,194,393,600,000 bytes, or 1194.4 GB of storage needed
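The worked example above can be reproduced with shell arithmetic (taking "2K bytes" as 2048 bytes per request):

```shell
# Evaluate the storage-estimate formula for the example above:
# 2048 bytes/request * 100 req/sec * 3600 sec/hr * 18 peak hrs/day
# * 30 days/month * 3 months retention.
bytes=$((2048 * 100 * 3600 * 18 * 30 * 3))
echo "$bytes bytes"                  # prints: 1194393600000 bytes
echo "~$((bytes / 1000000000)) GB"   # prints: ~1194 GB
```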

*** Network Storage is recommended for Postgresql database because:

● It gives the ability to dynamically scale up the storage size if and when required.

● Network IOPS can be adjusted on the fly in most modern environment/storage/network subsystems.

● Storage level snapshots can be enabled as part of backup and recovery solutions.

In addition, the following lists the hardware requirements if you want to install the
Monetization Services (not supported on All-in-One installation):

Component with Monetization | RAM | CPU | Hard disk
Management Server (with Monetization Services) | 8GB | 4-core | 60GB
Analytics - Postgres/Qpid on same server | 16GB | 8-core | 500GB - 1TB network storage, preferably with SSD backend, supporting 1000 IOPS or higher, or use the rule from the table above.
Analytics - Postgres master or standby standalone | 16GB | 8-core | 500GB - 1TB network storage, preferably with SSD backend, supporting 1000 IOPS or higher, or use the rule from the table above.
Analytics - Qpid standalone | 16GB | 8-core | 40GB - 500GB local storage with SSD or fast HDD. For installations greater than 250 TPS, HDD with local storage supporting 1000 IOPS is recommended.

Note:

● If the root file system is not large enough for the installation, Apigee recommends that
you place the data onto a larger disk.
● If an older version of Apigee Edge for Private Cloud was installed on the machine, ensure
that you delete the /tmp/java directory before a new installation.
● The system-wide temporary folder /tmp needs execute permissions in order to start
Cassandra.
● If user 'apigee' was created prior to the installation, ensure that /home/apigee exists as
home directory and is owned by apigee:apigee.

Operating system and third-party software requirements


These installation instructions and the supplied installation files have been tested on
the operating systems and third-party software listed in Supported software and
supported versions.

Java

You need a supported version of Java 1.8 installed on each machine prior to the
installation. Supported JDKs are listed in Supported software and supported
versions.

Ensure that the JAVA_HOME environment variable points to the root of the JDK for
the user performing the installation.

Note: The Edge installer can install Java 1.8 and set JAVA_HOME for you as part of the apigee-setup installation process. See Install the Edge apigee-setup utility for more.
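As a pre-flight check, you might verify JAVA_HOME before running the installer. This is a sketch only; check_java_home is a hypothetical helper, not part of the Edge tooling:

```shell
#!/usr/bin/env bash
# Sketch (check_java_home is a hypothetical helper): verify that a
# JAVA_HOME value is set and points at a directory containing an
# executable java binary before starting the installation.
check_java_home() {
  local jh="$1"
  if [ -n "$jh" ] && [ -x "$jh/bin/java" ]; then
    echo "JAVA_HOME ok: $jh"
  else
    echo "JAVA_HOME missing or invalid"
  fi
}
check_java_home "$JAVA_HOME"
```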

SELinux

Depending on your settings for SELinux, Edge can encounter issues with installing
and starting Edge components. If necessary, you can disable SELinux or set it to
permissive mode during installation, and then re-enable it after installation. See
Install the Edge apigee-setup utility for more.

Creating the 'apigee' user


The installation procedure creates a Unix system user named 'apigee'. Edge
directories and files are owned by 'apigee', as are Edge processes. That means Edge
components run as the 'apigee' user. If necessary, you can run components as a
different user.

Installation directory
By default, the installer writes all files to the /opt/apigee directory. You cannot
change this directory location. While you cannot change this directory, you can
create a symlink to map /opt/apigee to another location, as described in Creating
a symlink from /opt/apigee.

In the instructions in this guide, the installation directory is noted as /opt/apigee.

Creating a symlink from /opt/apigee

Before you create the symlink, you must first create a user and group named
"apigee". This is the same group and user created by the Edge installer.

To create the symlink, perform these steps before downloading the bootstrap_4.51.00.sh file. You must perform all of these steps as root:

1. Create the "apigee" user and group:
   groupadd -r apigee
   useradd -r -g apigee -d /opt/apigee -s /sbin/nologin -c "Apigee platform user" apigee
2. Create a symlink from /opt/apigee to your desired install root:
   ln -Ts /srv/myInstallDir /opt/apigee
   Where /srv/myInstallDir is the desired location of the Edge files.
3. Change ownership of the install root and symlink to the "apigee" user:
   chown -h apigee:apigee /srv/myInstallDir /opt/apigee

Network settings
Apigee recommends that you check the network settings prior to the installation. The
installer expects that all machines have fixed IP addresses. Use the following
commands to validate the settings:

● hostname returns the name of the machine
● hostname -i returns the IP address for the hostname that can be addressed from other machines.

Depending on your operating system type and version, you may need to edit
/etc/hosts and /etc/sysconfig/network if the hostname is not set correctly.
See the documentation for your specific operating system for more information.

If a server has multiple interface cards, the "hostname -i" command returns a space-separated list of IP addresses. By default, the Edge installer uses the first IP address returned, which might not be correct in all situations. As an alternative, you can set the following property in the installation configuration file:
ENABLE_DYNAMIC_HOSTIP=y

With that property set to "y", the installer prompts you to select the IP address to use
as part of the install. The default value is "n". See Edge Configuration File Reference
for more.
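A quick way to spot hosts that need ENABLE_DYNAMIC_HOSTIP=y is to count the addresses the host reports. This is a sketch; check_host_ips is a hypothetical helper, and on a real node you would pass "$(hostname -i)" as the argument:

```shell
#!/usr/bin/env bash
# Sketch (check_host_ips is a hypothetical helper): count the
# space-separated addresses a host reports; more than one suggests
# setting ENABLE_DYNAMIC_HOSTIP=y in the installation config file.
check_host_ips() {
  local count
  count=$(echo "$1" | wc -w)
  if [ "$count" -gt 1 ]; then
    echo "multiple addresses: consider ENABLE_DYNAMIC_HOSTIP=y"
  else
    echo "single address"
  fi
}
check_host_ips "192.0.2.10"            # prints: single address
check_host_ips "192.0.2.10 10.0.0.5"   # prints: multiple addresses: consider ENABLE_DYNAMIC_HOSTIP=y
```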

Warning: If you set ENABLE_DYNAMIC_HOSTIP=y, ensure that your property file does not set
HOSTIP.

TCP Wrappers

TCP Wrappers can block communication of some ports and can affect OpenLDAP,
Postgres, and Cassandra installation. On those nodes, check /etc/hosts.allow
and /etc/hosts.deny to ensure that there are no port restrictions on the required
OpenLDAP, Postgres, and Cassandra ports.

iptables

Validate that there are no iptables policies preventing connectivity between nodes on
the required Edge ports. If necessary, you can stop iptables during installation using
the command:

sudo /etc/init.d/iptables stop

On CentOS 7.x:

systemctl stop firewalld

Directory access
The following table lists directories on Edge nodes that have special requirements
from Edge processes:

Service | Directory | Description
Router | /etc/rc.d/init.d/functions | The Edge Router uses the NGINX router and requires read access to /etc/rc.d/init.d/functions. If your security process requires you to set permissions on /etc/rc.d/init.d/functions, do not set them to 700 or else the Router will fail to start. You can set permissions to 744 to allow read access to /etc/rc.d/init.d/functions.
Zookeeper | /dev/random | The ZooKeeper client library requires read access to the random number generator /dev/random. If /dev/random is blocked on read, then the ZooKeeper service might fail to start up.

Cassandra
All Cassandra nodes must be connected to a ring. Cassandra stores data replicas on
multiple nodes to ensure reliability and fault tolerance. The replication strategy for
each Edge keyspace determines the Cassandra nodes where replicas are placed. For
more, see About Cassandra replication factor and consistency level.

Cassandra automatically adjusts its Java heap size based on the available memory.
For more, see Tuning Java resources in the event of a performance degradation or
high memory consumption.

After installing the Edge for Private Cloud, you can check that Cassandra is
configured correctly by examining the
/opt/apigee/apigee-cassandra/conf/cassandra.yaml file. For example,
ensure that the Edge for Private Cloud installation script set the following properties:

● cluster_name
● initial_token
● partitioner
● seeds
● listen_address
● rpc_address
● snitch

Warning: Do not edit the cassandra.yaml file.

PostgreSQL database
After you install Edge, you can adjust the following PostgreSQL database settings
based on the amount of RAM available on your system:

conf_postgresql_shared_buffers = 35% of RAM       # min 128kB
conf_postgresql_effective_cache_size = 45% of RAM
conf_postgresql_work_mem = 512MB                  # min 64kB

Note: These settings assume that the PostgreSQL database is only used for Edge analytics, and
not for any other purpose.

To set these values:

1. Edit the postgresql.properties file:
   vi /opt/apigee/customer/application/postgresql.properties
   If the file does not exist, create it.
2. Set the properties listed above.
3. Save your edits.
4. Restart the PostgreSQL database:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restart
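The percentages above translate into concrete values once you know the host's RAM. A minimal sketch, assuming an example host with 16384 MB (16GB) of RAM; the printed lines mirror the property names from the settings above:

```shell
# Sketch: derive the recommended PostgreSQL values from total RAM in
# MB. ram_mb=16384 is an assumed example host; 35% and 45% are the
# ratios given in the settings above.
ram_mb=16384
echo "conf_postgresql_shared_buffers = $((ram_mb * 35 / 100))MB"        # prints: ... = 5734MB
echo "conf_postgresql_effective_cache_size = $((ram_mb * 45 / 100))MB"  # prints: ... = 7372MB
echo "conf_postgresql_work_mem = 512MB"
```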

System limits
Ensure that you have set the following system limits on Cassandra and Message
Processor nodes:

● On Cassandra nodes, set soft and hard memlock, nofile, and address space (as) limits for the installation user (default is "apigee") in /etc/security/limits.d/90-apigee-edge-limits.conf as shown below:

  apigee soft memlock unlimited
  apigee hard memlock unlimited
  apigee soft nofile 32768
  apigee hard nofile 65536
  apigee soft as unlimited
  apigee hard as unlimited
  apigee soft nproc 32768
  apigee hard nproc 65536

  For more information, see Recommended production settings in the Apache Cassandra documentation.
● On Message Processor nodes, set the maximum number of open file descriptors to 64K in /etc/security/limits.d/90-apigee-edge-limits.conf as shown below:

  apigee soft nofile 32768
  apigee hard nofile 65536

  If necessary, you can raise that limit; for example, if you have a large number of temporary files open at any one time.
● If you ever see the following error in a Router or Message Processor system.log, your file descriptor limits may be set too low:

  "java.io.IOException: Too many open files"

● You can check your user limits by running:

  # su - apigee
  $ ulimit -n
  100000

● If you are still reaching the open files limit after setting the file descriptor limits to 100000, open a ticket with Apigee Edge Support for further troubleshooting.
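The ulimit check above can be automated. A sketch only: check_nofile is a hypothetical helper, and on a real node you would pass "$(ulimit -n)" as the argument:

```shell
#!/usr/bin/env bash
# Sketch (check_nofile is a hypothetical helper): compare an observed
# "ulimit -n" value against the 65536 hard nofile limit recommended
# above for Message Processor nodes.
check_nofile() {
  local current="$1" required=65536
  if [ "$current" -lt "$required" ]; then
    echo "nofile too low: $current < $required"
  else
    echo "nofile ok: $current"
  fi
}
check_nofile 1024     # prints: nofile too low: 1024 < 65536
check_nofile 100000   # prints: nofile ok: 100000
```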

Network Security Services (NSS)


Network Security Services (NSS) is a set of libraries that supports development of
security-enabled client and server applications. You should ensure that you have
installed NSS v3.19, or later.

To check your current version:

yum info nss

To update NSS:

yum update nss

See this article from RedHat for more information.

Disable DNS lookup on IPv6 when using NSCD (Name Service Cache Daemon)

If you have installed and enabled NSCD (Name Service Cache Daemon), the Message Processors make two DNS lookups: one for IPv4 and one for IPv6. You should disable DNS lookup on IPv6 when using NSCD.

To disable the DNS lookup on IPv6:

1. On every Message Processor node, edit /etc/nscd.conf.
2. Set the following property:
   enable-cache hosts no

Disable IPv6 on Google Cloud Platform for RedHat/CentOS 7

If you are installing Edge on RedHat 7 or CentOS 7 on Google Cloud Platform, then you must disable IPv6 on all Qpid nodes.

See the RedHat or CentOS documentation for your specific OS version for instructions on disabling IPv6. For example, you can:

1. Open /etc/hosts in an editor.
2. Insert a "#" character in column one of the following line to comment it out:
   #::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
3. Save the file.
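The edit in step 2 can also be scripted with sed. A non-destructive sketch: it operates on a throwaway copy in /tmp rather than the real /etc/hosts, which you would only modify deliberately and with a backup:

```shell
# Non-destructive sketch of the /etc/hosts edit above: comment out the
# "::1" IPv6 loopback line in a demo copy of the file.
demo=/tmp/hosts.demo
printf '%s\n' '127.0.0.1 localhost' \
  '::1 localhost localhost.localdomain localhost6 localhost6.localdomain6' > "$demo"
sed -i 's/^::1/#::1/' "$demo"
grep '^#::1' "$demo"   # prints: #::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
```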

AWS AMI
If you are installing Edge on an AWS Amazon Machine Image (AMI) for Red Hat
Enterprise Linux 7.x, you must first run the following command:

yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional

Tools
The installer uses the following UNIX tools in the standard version as provided by
EL5 or EL6.

awk expr libxslt rpm unzip

basename grep lua-socket rpm2cpio useradd

bash hostname ls sed wc

bc id net-tools sudo wget

curl libaio perl (from procps) tar xerces-c

cyrus-sasl libdb4 pgrep (from procps) tr yum

date libdb-cxx ps uuid chkconfig

dirname libibverbs pwd uname

echo librdmacm python

Note:

● The executable for the useradd tool is located in /usr/sbin and for chkconfig in /sbin.
● Sudo access is expected to carry over the environment of the calling user; for example, you would usually invoke sudo command or sudo PATH=$PATH:/usr/sbin:/sbin command.
● Ensure that you have patch installed prior to a service pack (patch) installation.

ntpdate

Apigee recommends that your servers' times are synchronized. If not already configured, the ntpdate utility can serve this purpose by verifying whether servers are time-synchronized. You can use yum install ntp to install the utility. This is particularly useful for replicating OpenLDAP setups. Note that you should set the server time zone to UTC.

openldap 2.4

The on-premises installation requires OpenLDAP 2.4. If your server has an Internet
connection, then the Edge install script downloads and installs OpenLDAP. If your
server does not have an Internet connection, you must ensure that OpenLDAP is
already installed before running the Edge install script. On RHEL/CentOS, you can run
yum install openldap-clients openldap-servers to install OpenLDAP.

For 13-host installations, and 12-host installations with two data centers, OpenLDAP
replication is required because there are multiple nodes hosting OpenLDAP.

Firewalls and virtual hosts


The term virtual commonly gets overloaded in the IT arena, and so it is with an
Apigee Edge for Private Cloud deployment and virtual hosts. To clarify, there are two
primary uses of the term virtual:

● Virtual machines (VM): Not required, but some deployments use VM technology to create isolated servers for their Apigee components. VM hosts, like physical hosts, can have network interfaces and firewalls.
● Virtual hosts: Web endpoints, analogous to an Apache virtual host.

A router in a VM can expose multiple virtual hosts (as long as they differ from one
another in their host alias or in their interface port).

Just as a naming example, a single physical server A might be running two VMs,
named "VM1" and "VM2". Let's assume "VM1" exposes a virtual Ethernet interface,
which gets named "eth0" inside the VM, and which is assigned IP address
111.111.111.111 by the virtualization machinery or a network DHCP server; and
then assume VM2 exposes a virtual Ethernet interface also named "eth0" and it gets
assigned an IP address 111.111.111.222.

We might have an Apigee router running in each of the two VMs. The routers expose
virtual host endpoints as in this hypothetical example:

The Apigee router in VM1 exposes three virtual hosts on its eth0 interface (which
has some specific IP address), api.mycompany.com:80,
api.mycompany.com:443, and test.mycompany.com:80.
The router in VM2 exposes api.mycompany.com:80 (same name and port as
exposed by VM1).

The physical host's operating system might have a network firewall; if so, that
firewall must be configured to pass TCP traffic bound for the ports being exposed on
the virtualized interfaces (111.111.111.111:{80, 443} and
111.111.111.222:80). In addition, each VM's operating system may provide its
own firewall on its eth0 interface and these too must allow ports 80 and 443 traffic to
connect.

The basepath is the third component involved in routing API calls to different API
proxies that you may have deployed. API proxy bundles can share an endpoint if they
have different basepaths. For example, one basepath can be defined as
http://api.mycompany.com:80/ and another defined as
http://api.mycompany.com:80/salesdemo.

In this case, you need a load balancer or traffic director of some kind splitting the
http://api.mycompany.com:80/ traffic between the two IP addresses
(111.111.111.111 on VM1 and 111.111.111.222 on VM2). This function is
specific to your particular installation, and is configured by your local networking
group.

The basepath is set when you deploy an API. From the above example, you can
deploy two APIs, mycompany and testmycompany, for the organization
mycompany-org with the virtual host that has the host alias of
api.mycompany.com and the port set to 80. If you do not declare a basepath in the
deployment, then the router does not know which API to send incoming requests to.

However, if you deploy the API testmycompany with the base URL of /salesdemo,
then users access that API using http://api.mycompany.com:80/salesdemo.
If you deploy your API mycompany with the base URL of / then your users access
the API by the URL http://api.mycompany.com:80/.
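The Router's basepath dispatch for this example can be sketched as a longest-prefix match. This is an illustration only, not the Router's actual implementation; the route function is hypothetical, and the proxy names are the ones from the example above:

```shell
#!/usr/bin/env bash
# Sketch (route is a hypothetical helper): dispatch an incoming request
# path to a proxy by basepath -- the more specific /salesdemo basepath
# wins over the catch-all / basepath.
route() {
  case "$1" in
    /salesdemo*) echo "testmycompany" ;;
    *)           echo "mycompany" ;;
  esac
}
route /salesdemo/orders   # prints: testmycompany
route /                   # prints: mycompany
```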

Licensing
Each installation of Edge requires a unique license file that you obtain from Apigee.
You will need to provide the path to the license file when installing the management
server, for example /tmp/license.txt.

The installer copies the license file to /opt/apigee/customer/conf/license.txt.

If the license file is valid, the Management Server validates the expiry and allowed Message Processor (MP) count. If any of the license settings has expired, you can find the logs in the following location: /opt/apigee/var/log/edge-management-server/logs. In this case, you can contact Apigee Edge Support for migration details.
If you do not yet have a license, contact Apigee Sales.

Port requirements
The need to manage the firewall goes beyond just the virtual hosts; both VM and
physical host firewalls must allow traffic for the ports required by the components to
communicate with each other.

Port diagrams
The following images show the port requirements for both a single data center and
multiple data center configuration:

SINGLE DATA CENTER


MULTIPLE DATA CENTERS

The following image shows the port requirements for each Edge component in a
single data center configuration:

If you install the 12-node clustered configuration with two data centers, ensure that
the nodes in the two data centers can communicate over the ports shown below:

SINGLE DATA CENTER
Notes on this diagram:

● The ports prefixed by "M" are ports used to manage the component and must
be open on the component for access by the Management Server.
● The Edge UI requires access to the Router, on the ports exposed by API
proxies, to support the Send button in the trace tool.
● Access to JMX ports can be configured to require a username/password. See
How to Monitor for more information.
● You can optionally configure TLS/SSL access for certain connections, which
can use different ports. See TLS/SSL for more.
● You can configure the Management Server and Edge UI to send emails
through an external SMTP server. If you do, you must ensure that the
Management Server and UI can access the necessary port on the SMTP
server (not shown). For non-TLS SMTP, the port number is typically 25. For
TLS-enabled SMTP, it is often 465, but check with your SMTP provider.

MULTIPLE DATA CENTERS
Note that:

● All Management Servers must be able to access all Cassandra nodes in all
other data centers.
● All Message Processors in all data centers must all be able to access each
other over port 4528.
● The Management Server must be able to access all Message Processors over
port 8082.
● All Management Servers and all Qpid nodes must be able to access Postgres
in all other data centers.
● For security reasons, other than the ports shown above and any others
required by your own network requirements, no other ports should be open
between data centers.

By default, the communications among components is not encrypted. You can add
encryption by installing Apigee mTLS. For more information, see Introduction to
Apigee mTLS.

Port details
The table below describes the ports that need to be opened in firewalls, by Edge
component:

Standard HTTP ports
● 80, 443: HTTP plus any other ports you use for virtual hosts.

Apigee SSO
● 9099: Connections from external IDPs, the Management Server, and browsers for authentication.

Cassandra
● 7000, 9042, 9160: Apache Cassandra ports for communication between Cassandra nodes and for access by other Edge components.
● 7199: JMX port. Must be open for access by the Management Server.

LDAP
● 10389: OpenLDAP.

Management Server
● 1099: JMX port.
● 4526: Port for distributed cache and management calls. This port is configurable.
● 5636: Port for monetization commit notifications.
● 8080: Port for Edge management API calls. These components require access to port 8080 on the Management Server: Router, Message Processor, UI, Postgres, Apigee SSO (if enabled), and Qpid.

Management UI
● 9000: Port for browser access to the management UI.

Message Processor
● 1101: JMX port.
● 4528: For distributed cache and management calls between Message Processors, and for communication from the Router and Management Server. A Message Processor must open port 4528 as its management port. If you have multiple Message Processors, they must all be able to access each other over port 4528 (indicated by the loop arrow in the diagram above for port 4528 on the Message Processor). If you have multiple data centers, the port must be accessible from all Message Processors in all data centers.
● 8082: Default management port for the Message Processor; must be open on the component for access by the Management Server. If you configure TLS/SSL between the Router and Message Processor, the Router uses this port to make health checks on the Message Processor. Port 8082 only has to be open for access by the Router when you configure TLS/SSL between the Router and Message Processor. If you do not configure TLS/SSL (the default configuration), port 8082 still must be open on the Message Processor to manage the component, but the Router does not require access to it.
● 8443: When TLS is enabled between the Router and Message Processor, you must open port 8443 on the Message Processor for access by the Router.
● 8998: Message Processor port for communications from the Router.

Postgres
● 22: If configuring two Postgres nodes to use master-standby replication, you must open port 22 on each node for SSH access.
● 1103: JMX port.
● 4530: For distributed cache and management calls.
● 5432: Used for communication from Qpid/Management Server to Postgres.
● 8084: Default management port on the Postgres server; must be open on the component for access by the Management Server.

Qpid
● 1102: JMX port.
● 4529: For distributed cache and management calls.
● 5672: In a single data center, used for sending analytics from the Router and Message Processor to Qpid. With multiple data centers, used for communications between Qpid nodes in different data centers. Also used for communication between the Qpid server and broker components on the same node; in topologies with multiple Qpid nodes, the server must be able to connect to all brokers on port 5672.
● 8083: Default management port on the Qpid server; must be open on the component for access by the Management Server.

Router
● 4527: For distributed cache and management calls. A Router must open port 4527 as its management port. If you have multiple Routers, they must all be able to access each other over port 4527 (indicated by the loop arrow in the diagram above for port 4527 on the Router). While it is not required, you can open port 4527 on the Router for access by any Message Processor. Otherwise, you might see error messages in the Message Processor log files.
● 8081: Default management port for the Router; must be open on the component for access by the Management Server.
● 15999: Health check port. A load balancer uses this port to determine if the Router is available. To get the status of a Router, the load balancer makes a request to port 15999 on the Router:
  curl -v http://routerIP:15999/v1/servers/self/reachable
  If the Router is reachable, the request returns HTTP 200.
● 59001: Port used for testing the Edge installation by the apigee-validate utility. This utility requires access to port 59001 on the Router. See Test the install for more on port 59001.

SmartDocs
● 59002: The port on the Edge Router where SmartDocs page requests are sent.

ZooKeeper
● 2181: Used by other components such as the Management Server, Router, and Message Processor.
● 2888, 3888: Used internally by ZooKeeper for ZooKeeper cluster (known as ZooKeeper ensemble) communication.
The next table shows the same ports, listed numerically, with the source and
destination components:

virtual_host_port
  Purpose: HTTP plus any other ports you use for virtual host API call traffic. Ports 80 and 443 are most commonly used; the Message Router can terminate TLS/SSL connections.
  Source: External client (or load balancer)
  Destination: Listener on Message Router

1099 through 1103
  Purpose: JMX management
  Source: JMX client
  Destination: Management Server (1099), Router (1100), Message Processor (1101), Qpid Server (1102), Postgres Server (1103)

2181
  Purpose: ZooKeeper client communication
  Source: Management Server, Router, Message Processor, Qpid Server, Postgres Server
  Destination: ZooKeeper

2888 and 3888
  Purpose: ZooKeeper internode management
  Source: ZooKeeper
  Destination: ZooKeeper

4526
  Purpose: RPC management port
  Source: Management Server
  Destination: Management Server

4527
  Purpose: RPC management port for distributed cache and management calls, and for communications between Routers
  Source: Management Server, Router
  Destination: Router

4528
  Purpose: For distributed cache calls between Message Processors, and for communication from the Router
  Source: Management Server, Router, Message Processor
  Destination: Message Processor

4529
  Purpose: RPC management port for distributed cache and management calls
  Source: Management Server
  Destination: Qpid Server

4530
  Purpose: RPC management port for distributed cache and management calls
  Source: Management Server
  Destination: Postgres Server

5432
  Purpose: Postgres client
  Source: Qpid Server
  Destination: Postgres

5636
  Purpose: Monetization
  Source: External JMS component
  Destination: Management Server

5672
  Purpose: In a single data center, used for sending analytics from the Router and Message Processor to Qpid. With multiple data centers, used for communications between Qpid nodes in different data centers. Also used for communication between the Qpid server and broker components on the same node; in topologies with multiple Qpid nodes, the server must be able to connect to all brokers on port 5672.
  Source: Qpid Server
  Destination: Qpid Server

7000
  Purpose: Cassandra inter-node communications
  Source: Cassandra
  Destination: Other Cassandra node

7199
  Purpose: JMX management. Must be open for access on the Cassandra node by the Management Server.
  Source: JMX client
  Destination: Cassandra

8080
  Purpose: Management API port
  Source: Management API clients
  Destination: Management Server

8081 through 8084
  Purpose: Component API ports, used for issuing API requests directly to individual components. Each component opens a different port; the exact port used depends on the configuration but must be open on the component for access by the Management Server.
  Source: Management API clients
  Destination: Router (8081), Message Processor (8082), Qpid Server (8083), Postgres Server (8084)

8443
  Purpose: Communication between the Router and Message Processor when TLS is enabled
  Source: Router
  Destination: Message Processor

8998
  Purpose: Communication between the Router and Message Processor
  Source: Router
  Destination: Message Processor

9000
  Purpose: Default Edge management UI port
  Source: Browser
  Destination: Management UI Server

9042
  Purpose: CQL native transport
  Source: Router, Message Processor, Management Server
  Destination: Cassandra

9099
  Purpose: External IDP authentication
  Source: IDP, browser, and Management Server
  Destination: Apigee SSO

9160
  Purpose: Cassandra Thrift client
  Source: Router, Message Processor, Management Server
  Destination: Cassandra

10389
  Purpose: LDAP port
  Source: Management Server
  Destination: OpenLDAP

15999
  Purpose: Health check port. A load balancer uses this port to determine if the Router is available.
  Source: Load balancer
  Destination: Router

59001
  Purpose: Port used by the apigee-validate utility to test the Edge installation
  Source: apigee-validate
  Destination: Router

59002
  Purpose: The Router port where SmartDocs page requests are sent
  Source: SmartDocs
  Destination: Router

A Message Processor keeps a dedicated connection pool open to Cassandra, which
is configured to never time out. When a firewall is between a Message Processor
and the Cassandra server, the firewall can time out the connection. However, the
Message Processor is not designed to re-establish connections to Cassandra.

To prevent this situation, Apigee recommends that the Cassandra server, Message
Processor, and Routers be in the same subnet so that a firewall is not involved in the
deployment of these components.

If a firewall is between the Router and Message Processors, and has an idle TCP
timeout set, Apigee recommends that you do the following:

1. Set net.ipv4.tcp_keepalive_time = 1800 in the sysctl settings on the Linux
   OS, where 1800 should be lower than the firewall's idle TCP timeout. This
   setting keeps the connection in an established state so that the firewall
   does not disconnect it.
2. On all Message Processors, edit
   /opt/apigee/customer/application/message-processor.properties to add the
   following property. If the file does not exist, create it.
   conf_system_cassandra.maxconnecttimeinmillis=-1
3. Restart the Message Processor:
   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
4. On all Routers, edit
   /opt/apigee/customer/application/router.properties to add the following
   property. If the file does not exist, create it.
   conf_system_cassandra.maxconnecttimeinmillis=-1
5. Restart the Router:
   /opt/apigee/apigee-service/bin/apigee-service edge-router restart

Installation topologies
This section describes the Edge installation topologies (i.e., the node configurations
supported by Edge).

The following table gives you an overview of the configurations*:

Non-production topologies

All-in-One (1-node) — see All-in-one installation
● Node 1: Cassandra, Edge UI, Management Server, Message Processor, OpenLDAP, Postgres Server, Qpid Server, Router, ZooKeeper

2-node (standalone) — see 2-node standalone installation
● Node 1: Cassandra, Edge UI, Management Server, Message Processor, Router, OpenLDAP, ZooKeeper
● Node 2: Postgres Server, Qpid Server

5-node — see 5-node installation
● Node 1: Cassandra, Edge UI, Management Server, OpenLDAP, ZooKeeper
● Nodes 2 & 3: Cassandra, Message Processor, Router, ZooKeeper
● Nodes 4 & 5: Postgres Server, Qpid Server

Production topologies

Single data center (9 or 13 nodes per DC)

9-node — see 9-node clustered installation
● Node 1: Cassandra, Edge UI, Management Server, OpenLDAP, ZooKeeper
● Nodes 2 & 3: Cassandra, ZooKeeper
● Nodes 4 & 5: Message Processor, Router
● Nodes 6 & 7: Qpid Server
● Nodes 8 & 9: Postgres Server

13-node — see 13-node clustered installation
● Nodes 1, 2 & 3: Cassandra, ZooKeeper
● Nodes 4 & 5: OpenLDAP
● Nodes 6 & 7: Edge UI, Management Server
● Nodes 8 & 9: Postgres Server
● Nodes 10 & 11: Message Processor, Router
● Nodes 12 & 13: Qpid Server

Multiple data centers (6 nodes per DC)

12-node — see 12-node clustered installation
Two data centers, each with 6 nodes, consisting of the following:
● Nodes 1 & 7: Cassandra, Edge UI, Management Server, OpenLDAP, ZooKeeper
● Nodes 2 & 3, 8 & 9: Cassandra, Message Processor, Router, ZooKeeper
● Nodes 4 & 5, 10 & 11: Qpid Server
● Nodes 6 & 12: Postgres master (Data Center 1) and Postgres standby (Data Center 2)

* These configurations do not include the Apigee Developer Services portal (or simply, the portal). For more information, see

Portal overview.

For Apigee Edge for Private Cloud, to plan out custom topology requirements, please
contact your Sales Rep or Account Executive to engage with Apigee Edge
Professional Services and Customer Success. Performance varies by use case, so
we recommend setting up a performance environment to analyze and tune your
performance before rolling changes out to your production environment.

All-in-one Installation (1-node)


A single node runs all Edge components.

Additional notes:

● Apigee recommends this topology be used only for non-production or


development installations, as this configuration is not optimal for
performance.
● This topology should not be used for performance testing.
● This topology does not support high availability.
● You cannot install Monetization Services on an AIO configuration.
● View sample configuration file for this topology

For installation information, see All-in-one installation.

Standalone installation (2-node)


In this scenario, a single node runs Gateway standalone servers and associated
components: Apigee Management Server, Apache ZooKeeper, Apache Cassandra,
OpenLDAP, Edge UI, Apigee Router, and Apigee Message Processor. The other node
runs Analytics standalone components: Qpid Server and Postgres Server.
Additional notes:

● Apigee recommends this topology be used only for non-production or


development installations, as this configuration is not optimal for
performance.
● This topology should not be used for performance testing.
● This topology does not support high availability.

For installation information, see 2-node standalone installation.

5-node clustered installation


In a 5-node topology, three nodes run ZooKeeper and Cassandra clusters. One of
those three nodes also runs the Apigee Management Server, OpenLDAP, and Edge
UI. Two of those three nodes also run Apigee Router and Message Processor. Two
nodes run Apigee Analytics.

NOTE: This scenario combines cluster and Gateway components to reduce the number of
servers used.

To achieve optimal performance, the cluster can also be deployed on three different servers. This
scenario also introduces a master-standby replication between two Postgres nodes if analytics
statistics are mission critical.
Additional notes:

● Apigee recommends this topology be used only for non-production or


development installations, as this configuration is not optimal for
performance.
● This topology limits the ability to scale the deployment for future expansion
needs. Only Routers and Message Processors can be expanded easily in this
topology, as described in Adding a Router or Message Processor node. To
expand this topology, you typically adopt one of the larger topologies
described in this section.
● This topology should not be used for performance testing.
● This topology does not support high availability.

For installation information, see 5-node installation.

9-node clustered installation


This scenario is similar to the 5-node clustered installation, but sets up the
Analytics components differently to achieve performance and high availability.

NOTE: This scenario introduces a master-standby replication between two Postgres nodes if
Analytics statistics are mission critical.
Additional notes:

● While this topology represents the minimum number of nodes for a production
installation, it is not optimal for performance.
● You can expand this topology to support high availability, as described in
Adding a data center.
● In this topology, Routers and Message Processors are hosted on the same
nodes and may result in “noisy neighbor” problems.
● This topology is limited to a 3-node Cassandra setup with a quorum of two.
As a result, your ability to take a node out for maintenance is limited.

For installation information, see 9-node clustered installation.

13-node clustered installation


This scenario is an enhancement of the 9-node clustered installation that
separates the data tier from the Apigee server tier within a single data center.
Here, LDAP is installed on its own independent nodes.

NOTE: This scenario uses master-master OpenLDAP replication and master-standby Postgres
replication in one datacenter setup.
Additional notes:

● Apigee recommends this configuration as a minimum topology for production


setups with the documented hardware specifications. For more information,
see Hardware requirements.
● You can expand this topology to support high availability across multiple data
centers, as described in Adding a data center.
● In this topology, Routers and Message Processors are hosted on the same
nodes and may result in “noisy neighbor” problems.
● This topology is limited to a 3-node Cassandra ring with a quorum of two. As
a result, your ability to take a node out for maintenance is limited.

For installation information, see 13-node clustered installation.

12-node clustered installation


This scenario covers disaster recovery and analytics high availability across two
data centers.

NOTE: This scenario uses master-master OpenLDAP replication and master-standby Postgres
replication (across two data centers).
Additional notes:

● Apigee considers this topology adequate for production purposes. However,


since every installation has its own unique requirements and challenges,
please contact your Sales Rep or Account Executive to engage with Apigee
Edge Professional Services and Customer Success.
● In this topology, Routers and Message Processors are hosted on the same
nodes and may result in “noisy neighbor” problems.

For installation information, see 12-node clustered installation.


Installing Monetization Services
Monetization Services runs within any existing Apigee Edge setup, except the All-in-
One (AiO) configuration.

To install Monetization, you install Monetization Services, the Apigee Management


Server, and the Message Processor. To install Monetization on Edge where the Edge
installation has multiple Postgres nodes, the Postgres nodes must be configured in
Master/Standby mode. You cannot install Monetization on Edge if you have multiple
Postgres master nodes.

For more information, see Installing Monetization Services.


Edge Demo Installation Requirements


You can install Edge for the Private Cloud on a single host machine as part of a
demo or proof of concept installation. This type of installation is referred to as an
Edge "all-in-one" installation. The host machine can be a standalone machine or a
VM that meets the system prerequisites listed below.
Note: A single-machine install is meant for getting-started and prototyping environments, not
for production. Some services, such as Monetization, cannot be installed on an AIO installation.

A full production installation of Edge requires multiple machines. See Installation topologies for
examples of production installations.

After you install Edge for the Private Cloud on the host machine, you can optionally
install Apigee Developer Services portal (or simply, the portal) on its own host
machine.

Licensing
Each installation of Edge requires a unique license file that you obtain from Apigee. If
you do not yet have a license, contact Sales here.

System requirements for Edge


The following table lists the system requirements for installing Edge on a single host
machine:

Access to Apigee RPM repo
  Description: Ensure access to https://software.apigee.com. Ensure you received a username/password from Apigee for the repo:
  ● If you are a prospect evaluating Apigee, please contact Apigee Sales here.
  ● If you are an existing Apigee customer, please contact your Apigee representative.
  Test: curl -v https://software.apigee.com returns HTTP 200

Access to your backend services
  Description: Ensure access to your backend services.
  Test: curl -v http://backend to check access to your backend services

License key
  Description: Check for an email from Apigee with the license key attached.
  Test: Ensure the license key is deployed to the host machine.

OS version
  Description: A supported OS version as listed at Supported software and supported versions.
  Test: cat /etc/redhat-release returns the OS version

Java version
  Description: Supported Java versions: Oracle JDK 1.8, OpenJDK 1.8. If the required Java version is not found, the Edge installer downloads and installs it.
  Test: java -version returns the installed Java version

CPU cores
  Description: 8 minimum.
  Test: lscpu returns the number of CPUs; cat /proc/cpuinfo returns CPU information

RAM
  Description: 16 GB minimum.
  Test: cat /proc/meminfo returns memory information

Disk space
  Description: 100 GB minimum.
  Test: df -h returns the disk space; df -h /opt returns the disk space for /opt, the Edge install directory

Hostname
  Description: Hostname set to the IP address of the host.
  Test: hostname -i returns the IP address of the host

Network
  Description: External Internet access required. For RedHat OS, access to the RHEL yum repository.
  Test: yum repolist returns available repos. For RedHat, check the availability of repos from /etc/yum.repos.d/redhat-rhui.repo

Ports, iptables, firewalld
  Description: Ensure that ports 8080, 9000, 9001, and 9002 can accept incoming packets.
  Test: This requirement depends on your OS and OS configuration. There are several commands you can use to view the current settings:
    iptables -nvL
    Linux 6.x: service iptables status
    Linux 7.x: systemctl status firewalld
  If necessary, you can stop iptables or firewalld.

SELinux
  Description: Disable SELinux, or set it to permissive mode, during the install. Re-enable it after the install if necessary.
  Test: Temporarily set SELinux to permissive mode.
    On a Linux 6.x operating system:
      echo 0 > /selinux/enforce
    To re-enable after installing Edge:
      echo 1 > /selinux/enforce
    On a Linux 7.x operating system:
      setenforce 0
    To re-enable after installing Edge:
      setenforce 1
  To permanently disable SELinux, see Install the Edge apigee-setup utility.

System user access
  Description: The user performing the install requires sudo or root access, and the ability to add users on the host machine.
  Test: sudo whoami should return root

SMTP server
  Description: Access to an SMTP server to send emails to new Edge users.
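As a rough illustration of the CPU and RAM checks in the table above, the following sketch compares values read from /proc against the documented minimums (8 cores, 16 GB). The meets_min helper is hypothetical, not part of the Edge installer:

```shell
# Hypothetical preflight helper comparing this host against the
# minimums documented above (8 cores, 16 GB RAM). Not an Apigee tool.
meets_min() {  # usage: meets_min ACTUAL REQUIRED -> prints yes|no
  if [ "$1" -ge "$2" ]; then echo yes; else echo no; fi
}

CORES=$(nproc 2>/dev/null || echo 0)
MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)
MEM_GB=$(( MEM_KB / 1024 / 1024 ))

echo "CPU cores: $CORES (meets 8-core minimum: $(meets_min "$CORES" 8))"
echo "RAM: ${MEM_GB} GB (meets 16 GB minimum: $(meets_min "$MEM_GB" 16))"
```

The disk-space and hostname checks in the table can be verified the same way with df -h /opt and hostname -i.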

System requirements for the portal


You can install the portal on a machine different than the one you used to install
Edge. Make sure you have satisfied the following requirements before you install the
portal:
Access to Apigee RPM repo
  Description: Ensure access to https://software.apigee.com. Ensure you received a username/password from Apigee for the repo.
  Test: curl -v https://software.apigee.com returns HTTP 200

Edge installed on the host
  Description: Ensure that you have already installed Edge on the host machine.
  Test: See System requirements for Edge above.

Port
  Description: Ensure port 8079 is available and accessible.
  Test: netstat -nlptu | grep 8079

Enable Cassandra authentication


By default, Cassandra installs without authentication enabled. That means anyone
can access Cassandra. You can enable authentication after installing Edge, or as
part of the installation process.

If you decide to enable authentication on Cassandra, it uses the following default


credentials:

● username = 'cassandra'
● password = 'cassandra'

You can use this account, set a different password for this account, or create a new
Cassandra user. Add, remove, and modify users by using the Cassandra
CREATE/ALTER/DROP USER statements.

For more information, see Cassandra CQL shell commands.

Enable Cassandra authentication during installation


You can enable Cassandra authentication at install time. However, while you can
enable authentication when you install Cassandra, you cannot change the default
username and password. You have to perform that step manually after installation of
Cassandra completes.

NOTE: Use this procedure when installing Cassandra by using the "-p c", "-p ds", "-p sa", "-p aio", "-
p asa", and "-p ebp" options.

To enable Cassandra authentication at install time, include the CASS_AUTH property


in the configuration file for all Cassandra nodes:

CASS_AUTH=y # The default value is n.

The following Edge components access Cassandra:

● Management Server
● Message Processors
● Routers
● Qpid servers
● Postgres servers

Therefore, when you install these components, you must set the following properties
in the configuration file to specify the Cassandra credentials:

CASS_USERNAME=cassandra
CASS_PASSWORD=cassandra

You can change the Cassandra credentials after installing Cassandra. However, if
you have already installed the Management Server, Message Processors, Routers,
Qpid servers, or Postgres servers, you must also update those components to use
the new credentials.

To change the Cassandra credentials after installing Cassandra:

1. Log in to any one Cassandra node using the cqlsh tool and the default
   credentials. You only have to change the password on one node and it will be
   broadcast to all Cassandra nodes in the ring:
   /opt/apigee/apigee-cassandra/bin/cqlsh cassIP 9042 -u cassandra -p cassandra
   Where:
   a. cassIP is the IP address of the Cassandra node.
   b. 9042 is the default Cassandra port.
   c. The default user is cassandra.
   d. The default password is cassandra. If you changed the password
      previously, use the current password. If the password contains any
      special characters, wrap it in single quotes.
2. Execute the following command at the cqlsh> prompt to update the password:
   ALTER USER cassandra WITH PASSWORD 'NEW_PASSWORD';
3. Exit the cqlsh tool:
   exit
4. If you have not yet installed the Management Server, Message Processors,
   Routers, Qpid servers, or Postgres servers, set the following properties in
   the config file and then install those components:
   CASS_USERNAME=cassandra
   CASS_PASSWORD=NEW_PASSWORD
5. If you have already installed the Management Server, Message Processors,
   Routers, Qpid servers, or Postgres servers, then see Resetting Edge
   Passwords for the procedure to update those components to use the new
   password.

Enable Cassandra authentication post installation


To enable authentication after an installation:

● Update all Edge components that connect to Cassandra with the Cassandra
username and password.
● Enable authentication on all Cassandra nodes, and set the Cassandra
username and password on any one node. You only have to change the
credentials on one Cassandra node and they will be broadcast to all
Cassandra nodes in the ring.

Update Edge components that connect to Cassandra

Use the following procedure to update all Edge components that communicate with
Cassandra with the new credentials. Note that you do this step before you actually
update the Cassandra credentials:

1. On the Management Server node, run the following command:
   /opt/apigee/apigee-service/bin/apigee-service edge-management-server store_cassandra_credentials -u cassandra_username -p cassandra_password
   This command automatically restarts the Management Server.
2. Optionally, you can pass a file to the command containing the new username
   and password:
   apigee-service edge-management-server store_cassandra_credentials -f configFile
   Where configFile contains the following:
   CASS_USERNAME=cassandra_username # Default is cassandra
   CASS_PASSWORD='cassandra_password' # Default is cassandra; wrap in single quotes if it includes special chars
3. Repeat Step 1 for each of the following services:
   ● All Message Processors
   ● All Routers
   ● All Qpid servers (edge-qpid-server)
   ● All Postgres servers (edge-postgres-server)
   When you repeat Step 1 for each service, replace edge-management-server in
   the command with the appropriate service name. For example, when you execute
   the step for a Router service, use the following command:
   /opt/apigee/apigee-service/bin/apigee-service edge-router store_cassandra_credentials -u cassandra -p cassandra
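The repetition across components can be scripted. This is a hedged sketch, not an Apigee-supplied tool: the component list mirrors the services named above, and DRY_RUN=echo makes it print the commands instead of executing them; remove it on a host where apigee-service is actually installed.

```shell
# Sketch: apply store_cassandra_credentials to every Edge component
# on a node. DRY_RUN=echo prints each command instead of running it.
APIGEE_SERVICE=/opt/apigee/apigee-service/bin/apigee-service
COMPONENTS="edge-management-server edge-message-processor edge-router edge-qpid-server edge-postgres-server"
DRY_RUN=echo   # remove to run for real on an Edge host

for c in $COMPONENTS; do
  $DRY_RUN "$APIGEE_SERVICE" "$c" store_cassandra_credentials \
    -u cassandra -p cassandra
done
```

Run it on each node, keeping only the components actually installed there.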

Enable authentication

Use the following procedure to enable Cassandra authentication and set the
username and password:

1. Create a silent configuration file with the contents shown below:

   # Specify the IP address or DNS name of each Cassandra node
   IP1=192.168.1.1
   IP2=192.168.1.2
   IP3=192.168.1.3

   # Must resolve to the IP address or DNS name of the host
   HOSTIP=$(hostname -i)

   # Set to 'y' to enable Cassandra authentication.
   CASS_AUTH=y # Possible values are y/n

   # Cassandra username. If it does not exist, this user is created as a SUPERUSER
   CASS_USERNAME=cassandra # Default value is cassandra

   # Cassandra password. If CASS_USERNAME does not exist, the SUPERUSER is created with this password
   CASS_PASSWORD=cassandra # Default value is cassandra

   # Space-separated IP/DNS names of the Cassandra hosts
   CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1"

   # Username of an existing Cassandra user. Only needed if you have disabled or changed the details of the default cassandra user ('cassandra')
   CASS_EXISTING_USERNAME=cassandra # The default username is cassandra

   # Password of an existing Cassandra user. Only needed if you have disabled or changed the password of the default cassandra user ('cassandra')
   CASS_EXISTING_PASSWORD=cassandra # The default password is cassandra

   # Cassandra port
   CASS_PORT=9042 # The default port is 9042.

2. Log in to the first Cassandra node and execute the following command:
   apigee-service apigee-cassandra enable_cassandra_authentication -f CONFIG
   Optionally, you can pass the properties as command arguments to the script,
   as shown in the following example:
   CASS_AUTH=y HOSTIP=$(hostname -i) CASS_PORT=9042 CASS_EXISTING_USERNAME=cassandra CASS_EXISTING_PASSWORD=cassandra CASS_USERNAME=cassandra CASS_PASSWORD=cassandra CASS_HOSTS="192.168.1.1:1,1 192.168.1.2:1,1 192.168.1.3:1,1" apigee-service apigee-cassandra enable_cassandra_authentication
   Notes:
   ● For default Cassandra credentials, the command above enables Cassandra
     authentication and restarts Cassandra.
   ● For non-default credentials, the command also alters the replication
     factor, creates a superuser, and runs a repair on the system_auth
     keyspace.
3. Repeat steps 1 and 2 on all Cassandra nodes.

Set up primary-standby replication for


Postgres
By default, Edge installs all Postgres nodes in primary mode. However, in production
systems with multiple Postgres nodes, you configure them to use primary-standby
replication so that if the primary node fails, the standby node can continue to serve
traffic.

If the primary node ever fails, you can promote the standby server to primary. See
Handling a PostgreSQL Database Failover for more information.

Note: Primary-standby replication is not supported for the Developer Services portal. The portal
only supports a single Postgres node.

Configure Primary-Standby Replication at install time


You can configure primary-standby replication at install time by including the
following properties in the config file for the two Postgres nodes:
PG_MASTER=IP_OR_DNS_OF_NEW_PRIMARY
PG_STANDBY=IP_OR_DNS_OF_NEW_STANDBY

Note: Apigee strongly recommends that you use IP addresses rather than host names for the
PG_MASTER and PG_STANDBY properties in your silent configuration file. In addition, you should
be consistent on both nodes.

If you use host names rather than IP addresses, you must be sure that the host names
properly resolve using DNS.
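One way to check that caveat before writing the config file is to resolve each host name and fall back to its IP address. The resolve_ip helper below is illustrative, not an Apigee tool; PG_MASTER uses localhost purely so the example runs anywhere:

```shell
# Illustrative DNS sanity check for PG_MASTER/PG_STANDBY values.
resolve_ip() {  # usage: resolve_ip NAME -> prints first resolved address
  getent hosts "$1" | awk '{print $1; exit}'
}

PG_MASTER=localhost   # substitute your real primary node name or IP
ip=$(resolve_ip "$PG_MASTER")
if [ -n "$ip" ]; then
  echo "PG_MASTER=$ip"
else
  echo "WARNING: $PG_MASTER does not resolve; use an IP address instead" >&2
fi
```

Running the same check on both nodes helps keep the values consistent, as recommended above.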

The installer automatically configures the two Postgres nodes to function as
primary-standby with replication.

Configure Primary-Standby Replication after installation


You can configure primary-standby replication after installation by using the
following procedure:

1. Identify which Postgres node will be the primary and which will be the
   standby server.
2. On the primary node, edit the config file to set:
   PG_MASTER=IP_OR_DNS_OF_NEW_PRIMARY
   PG_STANDBY=IP_OR_DNS_OF_NEW_STANDBY
3. Enable replication on the new primary:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-master -f configFile
4. On the standby node, edit the config file to set:
   PG_MASTER=IP_OR_DNS_OF_NEW_PRIMARY
   PG_STANDBY=IP_OR_DNS_OF_NEW_STANDBY
5. Stop the standby node:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop
6. On the standby node, delete any existing Postgres data:
   rm -rf /opt/apigee/data/apigee-postgresql/
   Note: If necessary, you can back up this data before deleting it.
7. Configure the standby node:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-standby -f configFile
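The stop / delete-data / setup-replication-on-standby sequence above can be captured in a small script. This is a sketch only: SVC and CONFIG are assumptions for your environment, and DRY_RUN=echo prints the commands rather than executing them, since the data deletion is destructive.

```shell
# Dry-run sketch of the standby-node sequence above (not an Apigee tool).
DRY_RUN=echo
SVC=/opt/apigee/apigee-service/bin/apigee-service
CONFIG=/tmp/pg-replication.conf   # your silent config file

$DRY_RUN "$SVC" apigee-postgresql stop
$DRY_RUN rm -rf /opt/apigee/data/apigee-postgresql/
$DRY_RUN "$SVC" apigee-postgresql setup-replication-on-standby -f "$CONFIG"
```

Review the printed commands, then remove DRY_RUN=echo to run them on the actual standby node.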

Test Primary-Standby Replication


On completion of replication, verify the replication status by issuing the
following scripts on both servers. The system should display identical results
on both servers to ensure a successful replication:

1. On the primary node, run:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-master
   Validate that it says it is the primary.
2. On the standby node, run:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-standby
   Validate that it says it is the standby.

Setting up a virtual host


A virtual host on Edge defines the domains and Edge Router ports on which an API
proxy is exposed, and, by extension, the URL that apps use to access an API proxy. A
virtual host also defines whether the API proxy is accessed by using the HTTP
protocol, or by the encrypted HTTPS protocol.

Note: This page describes how to configure a virtual host that supports the HTTP protocol so
that you can get an Edge installation up and running quickly. To define a virtual host that
supports the HTTPS protocol over TLS, see Configuring virtual hosts for the Private Cloud.

As part of the Edge onboarding process, you have to create an organization, environment, and virtual host. Edge provides the setup-org command to make this process easier for new users.

When you create the virtual host, you must specify the following information:

● The name of the virtual host that you use to reference it in your API proxies.
● The port on the Router for the virtual host. Typically these ports start at 9001
and increment by one for every new virtual host.
● The host alias of the virtual host. Typically the DNS name of the virtual host.

For example, in a config file passed to the setup-org command, you can specify
this information as:

# Specify virtual host information


VHOST_PORT=9001
VHOST_NAME=default

# If you have a DNS entry for the virtual host


VHOST_ALIAS=myapis.apigee.net

The Edge Router compares the Host header of the incoming request to the list of
available host aliases as part of determining the API proxy that handles the request.
When making a request through a virtual host, either specify a domain name that
matches the host alias of a virtual host, or specify the IP address of the Router and
the Host header containing the host alias.
For example, if you created a virtual host with a host alias of myapis.apigee.net on
port 9001, then a cURL request to an API through that virtual host could use one of
the following forms:

● If you have a DNS entry for myapis.apigee.net:
curl http://myapis.apigee.net:9001/proxy-base-path/resource-path
● If you do not have a DNS entry for myapis.apigee.net:
curl http://routerIP:9001/proxy-base-path/resource-path -H 'Host: myapis.apigee.net'
In this form, you specify the IP address of the Router and pass the host alias in the Host header.
Note: The curl command, most browsers, and many other utilities automatically append the Host header with the domain as part of the request, so you can actually use a cURL command in the form:
curl http://routerIP:9001/proxy-base-path/resource-path
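To make the Router's matching behavior concrete, the following sketch mimics the host-alias comparison in shell. This is an illustration only: the real matching is performed internally by the Edge Router, and HOST_ALIASES and match_vhost are invented names.

```shell
# Illustration only: compare a request's Host header against a virtual
# host's alias list, the way the Edge Router does internally.
HOST_ALIASES="myapis.apigee.net 192.168.1.31:9001"

match_vhost() {
  local host_header="$1" alias
  for alias in $HOST_ALIASES; do
    if [ "$alias" = "$host_header" ]; then
      echo "matched host alias: $alias"
      return 0
    fi
  done
  echo "no matching host alias for: $host_header"
  return 1
}

match_vhost "myapis.apigee.net"
```

A request whose Host header matches no alias falls through, which is why the host alias must match either the DNS name or the routerIP:port value you send.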

Options when you do not have a DNS entry for the virtual
host
One option when you do not have a DNS entry is to set the host alias to the IP
address of the Router and port of the virtual host, as routerIP:port. For example:

VHOST_ALIAS=192.168.1.31:9001

Then you make a curl command in the form below:

curl http://routerIP:9001/proxy-base-path/resource-path

This option is preferred because it works well with the Edge UI.

If you have multiple Routers, add a host alias for each Router, specifying the IP
address of each Router and port of the virtual host:

# Specify the IP and port of each Router as a space-separated list enclosed in quotes:
VHOST_ALIAS="192.168.1.31:9001 192.168.1.32:9001"

Alternatively, you can set the host alias to a value, such as temp.hostalias.com.
Then, you have to pass the Host header on every request:

curl -v http://routerIP:9001/proxy-base-path/resource-path -H 'host: temp.hostalias.com'

Or, add the host alias to your /etc/hosts file. For example, add this line to
/etc/hosts:

192.168.1.31 temp.hostalias.com

Then you can make a request as if you had a DNS entry:


curl -v http://temp.hostalias.com:9001/proxy-base-path/resource-path

Add Cassandra rack support


CAUTION! This section is for advanced users only. You should only follow the instructions in this
section if you are familiar with configuring Cassandra rings.

This section provides general guidance for scaling Cassandra operations by making Cassandra on Apigee Edge for Private Cloud rack aware.

NOTE: The instructions in this section are for new installations only. Migrating to a rack aware
Cassandra configuration from an existing installation is not currently documented or supported.

For more information on why making your Cassandra ring rack aware is important,
see the following resources:

● Replication (Cassandra documentation)
● Cassandra Architecture & Replication Factor Strategy

What is a rack?
A Cassandra rack is a logical grouping of Cassandra nodes within the ring.
Cassandra uses racks so that it can ensure replicas are distributed among different
logical groupings. As a result, operations are sent to not just one node, but multiple
nodes, each on a separate rack, providing greater fault tolerance and availability.

The examples in this section use three Cassandra racks, which is the number of
racks that are supported by Apigee in production topologies.

The default installation of Cassandra in Apigee Edge for Private Cloud assumes a
single logical rack and places all the nodes in a data center within it. Although this
configuration is simple to install and manage, it is susceptible to failure if an
operation fails on one of those nodes.

The following image shows the default configuration of the Cassandra ring:
(Figure 1) Default configuration: All nodes on a single rack

In a more robust configuration, each node would be assigned to a separate rack and
operations would also execute on replicas on each of those racks.

The following image shows a 3-node ring. This image shows the order in which
operations are replicated across the ring (clockwise) and highlights the fact that no
two nodes are on the same rack:

(Figure 2) Rack aware configuration:


Three nodes, one on each rack

In this configuration, operations are sent to a node but are also sent to replicas of
that node on other racks (in clockwise order).

Add rack awareness (with 3 nodes)


All production installation topologies of Apigee Edge for Private Cloud have at least three Cassandra nodes, which this section refers to as "IP1", "IP2", and "IP3". By default, each of these nodes is in the same rack, "ra-1".
This section describes how to assign the Cassandra nodes to separate racks so that
all operations are sent to replica nodes in separate logical groupings within the ring.

To assign Cassandra nodes to different racks during installation:

1. Before you run the installer, log in to the Cassandra node and open the following silent configuration file for editing:
/opt/silent.conf
Create the file if it does not exist and be sure to make the "apigee" user an owner.
2. Edit the CASS_HOSTS property, a space-separated list of IP addresses (not DNS or hostname entries) that uses the following syntax:
CASS_HOSTS="IP_address:data_center_number,rack_number [...]"
The default value is a three-node Cassandra ring with each node assigned to rack 1 and data center 1, as the following example shows:
CASS_HOSTS="IP1:1,1 IP2:1,1 IP3:1,1"
3. Change the rack assignments so that node 2 is assigned to rack 2 and node 3 is assigned to rack 3, as the following example shows:
CASS_HOSTS="IP1:1,1 IP2:1,2 IP3:1,3"
By changing the rack assignments, you instruct Cassandra to create two additional logical groupings (racks), which then provide replicas that receive all operations received by the first node.
For more information on using the CASS_HOSTS configuration property, see the Edge Configuration File Reference.
4. Save your changes to the configuration file and execute the following command to install Cassandra with your updated configuration:
/opt/apigee/apigee-setup/bin/setup.sh -p c -f path/to/silent/config
For example:
/opt/apigee/apigee-setup/bin/setup.sh -p c -f /opt/silent.conf
5. Repeat this procedure for each Cassandra node in the ring, in the order in which the nodes are assigned in the CASS_HOSTS property. In this case, you must install Cassandra in the following order:
a. Node 1 (IP1)
b. Node 2 (IP2)
c. Node 3 (IP3)
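Before running setup.sh, it can help to sanity-check the rack assignments you put in CASS_HOSTS. The following sketch simply parses the property and prints each node's data center and rack; it is not part of the Apigee tooling.

```shell
# Sanity-check sketch: print the data center and rack parsed from each
# CASS_HOSTS entry, which uses the format IP:dc,rack.
CASS_HOSTS="IP1:1,1 IP2:1,2 IP3:1,3"

for entry in $CASS_HOSTS; do
  host="${entry%%:*}"        # text before the colon
  dc_rack="${entry#*:}"      # text after the colon, e.g. "1,2"
  dc="${dc_rack%%,*}"
  rack="${dc_rack#*,}"
  echo "node=$host dc=$dc rack=$rack"
done
```

For a rack-aware three-node ring, the output should show three distinct rack numbers.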

After installation, you should Check the Cassandra configuration.

Check the Cassandra configuration


After installing a rack-aware Cassandra configuration, you can check that the nodes
are assigned to the different racks by using the nodetool status command, as
the following example shows:

/opt/apigee/apigee-cassandra/bin/nodetool status

(You execute this command on one of the Cassandra nodes.)


The results should look similar to the following, where the Rack column shows the
different rack IDs for each node:

Datacenter: dc-1

========================

Status=Up/Down

|/ State=Normal/Leaving/Joining/Moving

-- Address Load Tokens Owns Host ID Rack

UN IP1 737 MB 256 ? 554d4498-e683-4a53-b0a5-e37a9731bc5c ra-1

UN IP2 744 MB 256 ? cf8b7abf-5c5c-4361-9c2f-59e988d52da3 ra-2

UN IP3 723 MB 256 ? 48e0384d-738f-4589-aa3a-08dc5bd5a736 ra-3

If you enabled JMX authentication for Cassandra, you must also pass your username
and password to nodetool. For more information, see Use nodetool to manage
cluster nodes.
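If you want to script this verification, the following sketch counts the distinct rack IDs in captured nodetool status output, relying on the Rack value being the last field of each node line. The sample output and host IDs below are illustrative.

```shell
# Sketch: count distinct racks in `nodetool status` output. On a real node:
#   status_output="$(/opt/apigee/apigee-cassandra/bin/nodetool status)"
status_output='UN IP1 737 MB 256 ? hostid1 ra-1
UN IP2 744 MB 256 ? hostid2 ra-2
UN IP3 723 MB 256 ? hostid3 ra-3'

# Count unique values in the last column of each "UN" (Up/Normal) line.
racks=$(printf '%s\n' "$status_output" | awk '/^UN/ { if (!seen[$NF]++) n++ } END { print n }')
echo "distinct racks: $racks"
```

For a rack-aware three-node ring, the count should be 3.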

Install a six-node ring


For additional redundancy, you can expand the Cassandra ring to six nodes. In this
case, you assign two nodes to each of the three racks. This configuration requires an
additional three nodes: Node 4 (IP4), Node 5 (IP5), and Node 6 (IP6).

The following image shows the order in which operations are replicated across the
ring (clockwise) and highlights the fact that during replication, no two adjacent
nodes are on the same rack:
(Figure 3) Six-node Cassandra ring: Two nodes on each of three racks

In this configuration, each node has two more replicas: one in each of the other two
racks. For example, node 1 in rack 1 has a replica in Rack 2 and Rack 3. Operations
sent to node 1 are also sent to the replicas in the other racks, in clockwise order.

To expand a three-node Cassandra ring to a six-node Cassandra ring, configure the nodes in the following way in your silent configuration file:

CASS_HOSTS="IP1:1,1 IP4:1,3 IP2:1,2 IP5:1,1 IP3:1,3 IP6:1,2"

As with a three-node ring, you must install Cassandra in the same order in which the
nodes appear in the CASS_HOSTS property:

1. Node 1 (IP1)
2. Node 4 (IP4)*
3. Node 2 (IP2)
4. Node 5 (IP5)
5. Node 3 (IP3)
6. Node 6 (IP6)
* Make your changes in the silent configuration file before running the setup utility on
the fourth node (the second machine in the Cassandra installation order).

NOTE: The ordering and placement of nodes is very important. If done incorrectly, you might end
up with a ring that is imbalanced or has hotspots that do not provide the desired amount of fault
tolerance.

Expand to 12 nodes
To further increase fault tolerance and availability, you can increase the number of
Cassandra nodes in the ring from six to 12. This configuration requires an additional
six nodes (IP7 through IP12).

The following image shows the order in which operations are replicated across the
ring (clockwise) and highlights the fact that during replication, no two adjacent
nodes are on the same rack:

(Figure 4) 12-node Cassandra ring: Four nodes on each of three racks

The procedure for installing a 12-node ring is similar to installing a three- or six-node ring: set CASS_HOSTS to the given values and run the installer in the specified order.

To expand to a 12-node Cassandra ring, configure the nodes in the following way in
your silent configuration file:

CASS_HOSTS="IP1:1,1 IP7:1,2 IP4:1,3 IP8:1,1 IP2:1,2 IP9:1,3 IP5:1,1 IP10:1,2 IP3:1,3 IP11:1,1 IP6:1,2 IP12:1,3"

As with the three- and six-node rings, you must execute the installer on the nodes in the order in which they appear in the configuration file:

1. Node 1 (IP1)
2. Node 7 (IP7)*
3. Node 4 (IP4)
4. Node 8 (IP8)
5. Node 2 (IP2)
6. Node 9 (IP9)
7. Node 5 (IP5)
8. Node 10 (IP10)
9. Node 3 (IP3)
10. Node 11 (IP11)
11. Node 6 (IP6)
12. Node 12 (IP12)

* You must make these changes before installing Apigee Edge for Private Cloud on
the 7th node (the second machine in the Cassandra installation order).

NOTE: As with the three-node and six-node rings, the ordering and placement of nodes is very
important. If done incorrectly, you might end up with a ring that is imbalanced or has hotspots
that do not provide the desired amount of fault tolerance.

Edge Installation Overview


A typical Edge installation consists of Edge components distributed across multiple nodes. After you install Edge on a node, you then install and configure one or more Edge components on that node.

Installation process
Installing Edge on a node is a multi-step process:

1. Disable SELinux on the node or set it to permissive mode. See Install the Edge
apigee-setup utility for more.
2. Decide if you want to enable Cassandra authentication.
3. Decide if you want to set up master-standby replication for Postgres.
4. Select your Edge configuration from the list of recommended topologies. For
example, you can install Edge on a single node for testing, or on 13 nodes for
production. See Installation Topologies for more.
5. On each node in your selected topology, install the Edge apigee-setup
utility:
● Download the Edge bootstrap_4.51.00.sh file to
/tmp/bootstrap_4.51.00.sh.
● Install the Edge apigee-service utility and dependencies.
● Install the Edge apigee-setup utility and dependencies.
See Install the Edge apigee-setup utility for more.
6. Use the apigee-setup utility to install one or more Edge components on
each node based on your selected topology.
See Install Edge components on a node.
7. On the Management Server node, use the apigee-setup utility to install
apigee-provision, the utilities that you use to create and manage Edge
organizations.
See Onboard an organization for more.
8. Restart the Classic UI component on each node after the installation is complete, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
9. (Recommended) After you complete the initial installation, Apigee recommends that you install the new Edge UI (whose component name is edge-management-ui), an enhanced user interface for developers and administrators of Apigee Edge for Private Cloud.
For more information, see Install the new Edge UI.

After the installation is complete, check out this list of common post-installation
actions.

Note: If you are installing Edge on Red Hat Enterprise Linux (RHEL) 8.x, you must first disable the PostgreSQL and NGINX modules in order to use the Edge apigee-setup utility. See Prerequisite: Disable the PostgreSQL and NGINX modules.

Who can perform the install


The Apigee Edge distribution files are installed as a set of RPMs and dependencies. To install, uninstall, and update Edge RPMs, the commands must be run by the root user or by a user that has full sudo access. Full sudo access means the user can use sudo to perform the same operations as root.

Any user who wants to run the following commands or scripts must either be root, or
be a user with full sudo access:

● apigee-service utility:
● apigee-service commands: install, uninstall, update.
● apigee-all commands: install, uninstall, update.
● setup.sh script to install Edge components (unless you have already used "apigee-service install" to install the required RPMs, in which case root or full sudo access is not required).
● update.sh script to update Edge components.

Also, the Edge installer creates a new user on your system, named "apigee". Many
Edge commands invoke sudo to run as the "apigee" user.

Any user who wants to run commands other than the ones shown above must have full sudo access to the "apigee" user. These commands include:

● apigee-service utility commands, including:
● apigee-service commands such as start, stop, restart, and configure.
● apigee-all commands such as start, stop, restart, and configure.

Creating a user with full sudo access to "apigee" user


NOTE: You cannot perform this step until after you have run the bootstrap_4.51.00.sh file to create the "apigee" user.

To configure a user to have full sudo access to the "apigee" user, use the "visudo"
command to edit the sudoers file to add:

installUser ALL=(apigee) NOPASSWD: ALL

Where installUser is the username of the person working with Edge.
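After editing the sudoers file, you can check the entry's shape before relying on it. The sketch below validates the line with a regular expression; "installUser" remains a placeholder for the real username, and on a real node the practical test is running `sudo -u apigee true` as the install user.

```shell
# Sketch: check that a sudoers entry has the expected shape.
sudoers_line='installUser ALL=(apigee) NOPASSWD: ALL'

if printf '%s\n' "$sudoers_line" | grep -Eq '^[A-Za-z0-9_]+ ALL=\(apigee\) NOPASSWD: ALL$'; then
  echo "sudoers entry looks well-formed"
else
  echo "sudoers entry is malformed" >&2
fi
```

Note that visudo performs its own syntax check when you save, so this is only a quick pre-flight.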

Setting permissions on configuration files

Any files or resources used by the Edge commands must be accessible to the
"apigee" user. This includes the Edge license file and any config files.

NOTE: You can set the RUN_USER property for an Edge component to specify a different user
than "apigee". If you do, then all of the Edge commands for that component invoke sudo to run as
that user. Files or resources must then be accessible to that user.

When creating a configuration file, you can change its owner to "apigee:apigee" to
ensure that it is accessible to Edge commands:

1. Create the file in an editor as any user.
2. Use chown to change the file's owner to "apigee:apigee" or, if you changed the user running the Edge service from "apigee", to the user who is running the Edge service.
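For example, the following sketch writes a minimal config file and marks where the ownership change happens. The property values are placeholders, and the chown step requires root, so it is shown as a comment; a temp file is used here so the example is self-contained.

```shell
# Sketch: write a config file, then (as root, on a real node) hand it to
# the "apigee" user.
cfgFile="$(mktemp /tmp/edge-config.XXXXXX)"
cat > "$cfgFile" <<'EOF'
# Placeholder properties for illustration
IP1=192.168.1.31
ENABLE_SYSTEM_CHECK=y
EOF

# On a real node, as root:
#   chown apigee:apigee "$cfgFile"

grep -q '^ENABLE_SYSTEM_CHECK=y$' "$cfgFile" && echo "config written: $cfgFile"
```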

Separating Edge install tasks between root and non-root user

While it is simplest to perform the entire Edge install process as root or by a user
that has full sudo access, that is not always possible. Instead, you can separate the
process into tasks performed by root and tasks performed by a user with full sudo
access to the "apigee" user.

1. Tasks performed by root:
a. Download and run the bootstrap_4.51.00.sh file:
curl https://software.apigee.com/bootstrap_4.51.00.sh -o /tmp/bootstrap_4.51.00.sh
sudo bash /tmp/bootstrap_4.51.00.sh apigeeuser=uName apigeepassword=pWord
This step installs the apigee-service utility and creates the "apigee" user.
b. Configure a user to have full sudo access to the "apigee" user as described in Creating a user with full sudo access to "apigee" user.
c. Install the apigee-setup utility:
/opt/apigee/apigee-service/bin/apigee-service apigee-setup install
d. Use apigee-service to install Edge RPMs on the node:
/opt/apigee/apigee-service/bin/apigee-service compName install
The Edge RPMs that you install on the node depend on your topology. The list of available components includes: apigee-provision, apigee-validate, apigee-zookeeper, apigee-cassandra, apigee-openldap, edge-management-server, edge-ui, edge-router, edge-message-processor, apigee-postgresql, apigee-qpidd, edge-postgres-server, and edge-qpid-server.
2. After the root user installs the Edge RPMs on the node, the user with full sudo access to the "apigee" user completes the configuration process:
a. Use the setup.sh utility to complete the configuration of the Edge components on the node. The form of the command depends on the components that you installed on the node. For a complete list, see Install Edge components on a node.
For example, to complete the installation of ZooKeeper and Cassandra, use the following command:
/opt/apigee/apigee-setup/bin/setup.sh -p ds -f configFile
Where configFile is the Edge configuration file.
Or, to perform an all-in-one install, use the following command:
/opt/apigee/apigee-setup/bin/setup.sh -p aio -f configFile

Location of installation configuration files


You must pass a configuration file to the apigee-setup utility that contains the information about the Edge installation. The only requirement for silent installations is that the configuration file must be readable by the "apigee" user. For example, put the file in the /usr/local/var or /usr/local/share directory on the node and chown it to "apigee:apigee".
All information in the configuration file is required except for the Edge system
administrator's password. If you omit the password, the apigee-setup utility
prompts you to enter it on the command line.

See Install Edge components on a node for more.

Handling an installation failure


In the case of a failure during the installation of an Edge component, you can try to
correct the issue, and then run the installer again. The installer is designed to be run
repeatedly in cases where it detects a failure, or if you later want to change or update
a component after installation.

After installing or upgrading, be sure to restart the Edge UI component on each node
on which it is running.

Internet or non-Internet installation


To install Edge on a node, the node must be able to access the Apigee repository:

● Nodes with an external internet connection
Nodes with an external internet connection access the Apigee repository to install the Edge RPMs and dependencies.
● Nodes without an external internet connection
Nodes without an external internet connection can access a mirrored version of the Apigee repository that you set up internally. This repository contains all Edge RPMs, but you have to ensure that you have all other dependencies available from repos on the internal network.
Note: Apigee does not host all third-party dependencies in our public repositories. You must download and install these dependencies from publicly accessible repositories.
To create the internal Apigee repository, you need a node with external internet access to download the Edge RPMs and dependencies. Once you have created the internal repo, you can then move it to another node or make that node accessible to the Edge nodes for installation.

Using a local Edge repository to maintain your Edge version
One of the reasons to use a local, or mirrored, repository is for installing Edge on
nodes with no external internet connection, as described in the previous section.

Note: However, there is another advantage to using a local repo, even for nodes with an external
internet connection. When you install Edge from the Apigee public repository, you always install
the latest Edge RPMs. Therefore, if you want to download and store Edge RPMs for a specific
version of Edge, then you should create a local repo for that Edge version. You can then use that local repo to perform installations of that Edge version.
For example, you first use the local repo to install an Edge development environment. Then, when
you are ready to move to a production environment, you again install Edge from the local repo. By
installing from the local repo, you guarantee that your development and production environments
match.

A mirrored repo is very flexible. For example, you can create a mirrored repo from the latest Edge RPMs or from a specific version of Edge. After you create the repo, you can also update it to add RPMs from different Edge versions. See Install the Edge apigee-setup utility for more.

Resolving RPM installation dependencies


The Apigee Edge distribution files are installed as a set of RPM files, each of which
can have its own chain of installation dependencies. Many of these dependencies
are defined by third-party components that are outside the control of Apigee and can
change at any time. Therefore, the documentation does not list the explicit version
number of each dependency.

If you are performing an installation on a machine with internet access, the node can
download the necessary RPMs and dependencies. However, if you are installing
from a node without internet access, you typically set up an internal repo containing
all necessary dependencies. The only way to guarantee that all dependencies are
included in your local repo is to attempt an installation, identify any missing
dependencies, and copy them to the local repo until the installation succeeds.

Common Yum commands


The Edge installation tools for Linux rely on Yum to install and update components.
You might have to use several Yum commands to manage an installation on a node.

● Clean all Yum caches:
sudo yum clean all
● Update an Edge component:
sudo yum update componentName
For example:
sudo yum update apigee-setup
sudo yum update edge-management-server

File System Structure


Edge installs all files in the /opt/apigee directory.

Note: You cannot change this directory location. However, you can create a symlink to map it to
a different location. See Install the Edge apigee-setup utility for more.

In this guide and in the Edge Operations Guide, the root installation directory is noted
as:

/opt/apigee
The installation uses the following file system structure to deploy Apigee Edge for
Private Cloud.

Log Files

The log file for apigee-setup and the setup.sh script is written to /tmp/setup-root.log.

The log files for each component are contained in the /opt/apigee/var/log
directory. Each component has its own subdirectory. For example, the logs for the
Management Server are in the directory:

/opt/apigee/var/log/edge-management-server

The following table lists the location of the log files:

Component                 Location

Management Server         /opt/apigee/var/log/edge-management-server

Router                    /opt/apigee/var/log/edge-router
                          The Edge Router is implemented using NGINX. For additional logs, see:
                          /opt/apigee/var/log/edge-router/nginx
                          /opt/nginx/logs

Message Processor         /opt/apigee/var/log/edge-message-processor

Apigee Qpid Server        /opt/apigee/var/log/edge-qpid-server

Apigee Postgres Server    /opt/apigee/var/log/edge-postgres-server

Classic UI                /opt/apigee/var/log/edge-ui
                          (not the new Edge UI, whose component name is edge-management-ui)

ZooKeeper                 /opt/apigee/var/log/apigee-zookeeper

OpenLDAP                  /opt/apigee/var/log/apigee-openldap

Cassandra                 /opt/apigee/var/log/apigee-cassandra/system.log

Qpidd                     /opt/apigee/var/log/apigee-qpidd

PostgreSQL database       /opt/apigee/var/log/apigee-postgresql

apigee-monit              /opt/apigee/var/log/apigee-monit

Data

Component               Location

Management Server       /opt/apigee/data/edge-management-server

Router                  /opt/apigee/data/edge-router

Message Processor       /opt/apigee/data/edge-message-processor

Apigee Qpid agent       /opt/apigee/data/edge-qpid-server

Apigee Postgres agent   /opt/apigee/data/edge-postgres-server

ZooKeeper               /opt/apigee/data/apigee-zookeeper

OpenLDAP                /opt/apigee/data/apigee-openldap

Cassandra               /opt/apigee/data/apigee-cassandra/data

Qpidd                   /opt/apigee/data/apigee-qpid/data

PostgreSQL database     /opt/apigee/data/apigee-postgres/pgdata

apigee-monit            /opt/apigee/data/apigee-monit

Enable system check on install


The Edge installation configuration file supports the following property:

ENABLE_SYSTEM_CHECK=y

If you set this property to "y", the installer checks that the system meets the CPU and
memory requirements for the component being installed. The default value is "n" to
disable the check.
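The kind of check this property enables can be illustrated with a few lines of shell. The threshold below is a placeholder, not an Apigee-documented value; the real installer uses the CPU and memory requirements of the component being installed.

```shell
# Illustration of a CPU-count check similar in spirit to ENABLE_SYSTEM_CHECK=y.
# required_cpus is a placeholder, not an Apigee-documented requirement.
required_cpus=2
cpus="$(getconf _NPROCESSORS_ONLN)"

if [ "$cpus" -ge "$required_cpus" ]; then
  echo "CPU check passed: $cpus core(s) available"
else
  echo "CPU check failed: $cpus core(s) < $required_cpus required"
fi
```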

Install the Edge apigee-setup utility


To install Edge on a node, you first install the Edge apigee-setup utility. If you are in
an environment where your nodes do not have an external internet connection, you
must also install a local copy of the Apigee repo.

Default installation directory: /opt/apigee


Edge installs all files in the /opt/apigee directory. You cannot change this
directory. However, if desired, you can create a symlink to map /opt/apigee to
another location. See Installation Requirements for more information.

Prerequisite: Disable SELinux


You must disable SELinux, or set it to permissive mode, before you can install the Edge apigee-setup utility or any Edge components. If necessary, you can re-enable SELinux after installing Edge.

● To temporarily set SELinux to permissive mode, execute the following commands:
1. On a Linux 7.x or 8.x operating system:
sudo setenforce 0
To re-enable SELinux after installing Edge:
sudo setenforce 1
2. On a Linux 6.x operating system:
echo 0 | sudo tee /selinux/enforce
To re-enable SELinux after installing Edge:
echo 1 | sudo tee /selinux/enforce
● To permanently disable SELinux or set it to permissive mode:
1. Open /etc/sysconfig/selinux in an editor.
2. Set SELINUX=disabled or SELINUX=permissive.
3. Save your edits.
4. Restart the node.
5. If necessary, re-enable SELinux after Edge installation by repeating this procedure to set SELINUX=enforcing.
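The permanent change can also be scripted with sed. The sketch below edits a temporary copy of the file so it can run anywhere; on a real node you would target /etc/sysconfig/selinux itself (as root) and then reboot.

```shell
# Sketch: switch SELINUX to permissive in a copy of the config file.
selinux_cfg="$(mktemp)"
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$selinux_cfg"

# On a real node (as root):
#   sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$selinux_cfg"

grep '^SELINUX=' "$selinux_cfg"
```

The `^SELINUX=` anchor ensures the SELINUXTYPE line is left untouched.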

Prerequisite: Enable EPEL repo


You must enable Extra Packages for Enterprise Linux (or EPEL) to install or update
Edge, or to create a local repo. The command you use depends on your version of
RedHat/CentOS:

● For Red Hat Enterprise Linux (RHEL) 8.0, see Prerequisites for RHEL 8.
● For Red Hat/CentOS/Oracle 7.x:
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo rpm -ivh epel-release-latest-7.noarch.rpm
● For Red Hat/CentOS/Oracle 6.x:
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
sudo rpm -ivh epel-release-latest-6.noarch.rpm
● For AWS-2:
sudo amazon-linux-extras install epel -y
sudo yum-config-manager --enable epel

Prerequisite: Check libdb4 library version on RedHat 7.4 and CentOS 7.4

On RedHat 7.4 and CentOS 7.4, check the version of the libdb4 RPMs before you install. Edge requires version 4.8, and some versions of RedHat 7.4 and CentOS 7.4 ship with a later version. If you have a later version, uninstall it; the Edge installer will then install version 4.8.

You can use the following command to check your version:

rpm -qa | grep libdb4


If you see that the libdb4 RPM version is later than version 4.8, uninstall it.

Prerequisites for RHEL 8


If you are installing Edge on a server running Red Hat Enterprise Linux (RHEL) 8, do
the following steps before performing the installation:

1. Enable Extra Packages for Enterprise Linux (EPEL):
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
2. Disable the PostgreSQL and NGINX modules:
sudo dnf module disable postgresql
sudo dnf module disable nginx
3. Install Python 2 and create a symlink:
sudo dnf install -y python2
sudo ln -s /usr/bin/python2 /usr/bin/python

Install the Edge apigee-setup utility on a node with an external internet connection

To install Edge on a node with an external internet connection:

1. Obtain the username and password from Apigee that you use to access the Apigee repository. If you have an existing username and password for the Apigee FTP site, you can use those credentials.
2. Log in to your node as root to install the Edge RPMs.
NOTE: While RPM installation requires root access, you can perform Edge configuration without root access.
3. Install yum-utils and yum-plugin-priorities:
sudo yum install yum-utils
sudo yum install yum-plugin-priorities
4. Disable SELinux.
5. Enable the EPEL repo.
6. Check your version of libdb4.
7. If you are installing on RHEL 8, follow the steps in Prerequisites for RHEL 8.
8. If you are installing on Oracle 7.x, run the following command:
sudo yum-config-manager --enable ol7_optional_latest
9. If you are installing on AWS, run the following yum-config-manager commands:
yum update rh-amazon-rhui-client.noarch
sudo yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional
10. Download the Edge bootstrap_4.51.00.sh file to /tmp/bootstrap_4.51.00.sh:
curl https://software.apigee.com/bootstrap_4.51.00.sh -o /tmp/bootstrap_4.51.00.sh
11. Install the Edge apigee-service utility and dependencies:
sudo bash /tmp/bootstrap_4.51.00.sh apigeeuser=uName apigeepassword=pWord
Where uName and pWord are the username and password you received from Apigee. If you omit pWord, you will be prompted to enter it.
By default, the installer checks to see that you have Java 1.8 installed. If you do not, it installs it for you. Use the JAVA_FIX option to specify how to handle Java installation. JAVA_FIX takes the following values:
● I: Install OpenJDK 1.8 (default)
● C: Continue without installing Java
● Q: Quit. For this option, you have to install Java yourself.
12. The installation of the apigee-service utility creates the /etc/yum.repos.d/apigee.repo file that defines the Apigee repository. To view the definition file, use the command:
cat /etc/yum.repos.d/apigee.repo
13. To view the repo contents, use the command:
sudo yum -v repolist 'apigee*'
14. Use apigee-service to install the apigee-setup utility:
/opt/apigee/apigee-service/bin/apigee-service apigee-setup install
15. Use apigee-setup to install and configure Edge components on the node. See Install Edge components on a node for more.

Troubleshooting

When attempting to install on a node with an external internet connection, you might
encounter one or more of the following errors:

Cannot open: https:// : @ software.apigee.com//apigee-repo-version.rpm

bootstrap.sh: Error: Repo configuration failed

error: package package_name is not installed

The following table lists some possible resolutions for these errors:

Error Type: Password contains bad characters
Possible Resolution: Do not use special characters in your Apigee password.

Error Type: Connectivity issues
Possible Resolution: Test your network connectivity by executing the following nc command:

nc -v software.apigee.com 443

You should get a message similar to the following:

Connection to software.apigee.com 443 port [tcp/https] succeeded!

If you don't have nc installed, you can execute the following telnet command:

telnet software.apigee.com 443

If either command succeeds, you can use CTRL+C to abort the open connection. If either command fails, then you have limited or no network connectivity. Check with your network administrator.

Error Type: Incorrect credentials
Possible Resolution: Ensure that your username and password are correct. For example, check if you get an error when you try to use the following command with your Apigee username and password:

curl -i -u username:password https://software.apigee.com/apigee-repo.rpm

Error Type: Proxy issues
Possible Resolution: Your local configuration uses an egress HTTP proxy and you have not extended the same configuration to the yum package manager. Check your environment variables:

echo $http_proxy
echo $https_proxy

For an egress HTTP proxy, you should use one of the following options:

● Add an HTTP proxy configuration in /etc/yum.conf
● Add a global HTTP proxy configuration in /etc/environment
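A yum proxy entry looks like the following [main] setting. This is a sketch: the proxy host and port are placeholders for your own egress proxy, and the snippet writes to a throwaway /tmp path instead of the real /etc/yum.conf so you can inspect the result safely:

```shell
# Write the kind of proxy entry yum expects to a demo file (not /etc/yum.conf).
# proxy.example.com:3128 is a placeholder for your actual egress proxy.
cat > /tmp/yum-proxy-demo.conf <<'EOF'
[main]
proxy=http://proxy.example.com:3128
EOF
grep '^proxy=' /tmp/yum-proxy-demo.conf   # → proxy=http://proxy.example.com:3128
```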

Install Edge apigee-setup utility on a node with no external internet connection
If your Edge nodes are behind a firewall, or in some other way are prohibited from
accessing the internet, then you must create several repositories, or mirrors, that
contain files you'll need during the install. Those mirrors must then be accessible to
all nodes. Once created, nodes can then access these local mirrors to install Edge.

NOTE: Apigee does not host all third-party dependencies in our public repositories. You must
download and install these dependencies from publicly accessible repositories.

The Apigee Edge installation process for nodes with no internet connections requires
access to the following local repos:

● Apigee Edge repo: As described in Create a local Apigee repository.
● Yum repo (for utilities such as yum-utils and yum-plugin-priorities):
Your ops team should be able to set this up for you.
● Extra Packages for Enterprise Linux (or EPEL): Your ops team should be able
to set this up for you.

Create a local Apigee repository

To create the internal Apigee repository, you need a node with external
internet access so that you can download the Edge RPMs and dependencies. Once
you have created the internal repo, you can then move it to another node, or
make that node accessible to the Edge nodes, for installation.

After you create a local Apigee repository, you might later have to update it with the
latest Edge release files. The following sections describe how to create a local
Apigee repository, and how to update it.

To create a local Apigee repo:

1. Obtain the username and password from Apigee that you use to access the
Apigee repository. If you have an existing username:password for the Apigee
ftp site, you can use those credentials.
2. Log in to your node as root to install the Edge RPMs.
NOTE: While RPM installation requires root access, you can perform Edge configuration
without root access.
3. Disable SELinux as described above.
4. Download the Edge bootstrap_4.51.00.sh file to
/tmp/bootstrap_4.51.00.sh:
5. curl https://software.apigee.com/bootstrap_4.51.00.sh -o
/tmp/bootstrap_4.51.00.sh
6. Install the Edge apigee-service utility and dependencies:

7. sudo bash /tmp/bootstrap_4.51.00.sh apigeeuser=uName apigeepassword=pWord
8. Where uName and pWord are the username and password you received from
Apigee. If you omit pWord, you will be prompted to enter it.
9. Install the apigee-mirror utility on the node:
10. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror install
11. NOTE: If you are updating an existing repo to 4.51.00, you only have to update
apigee-mirror:
12. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror update
13.
14. Use the apigee-mirror utility to sync the Apigee repo to the
/opt/apigee/data/apigee-mirror/repos/ directory.
To minimize the size of the repo, include the --only-new-rpms option to
download just the latest RPMs. You need approximately 1.6 GB of disk space for
the download:
15. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror sync --only-new-rpms
16. If you want to download the entire repo, including older RPMs, omit
--only-new-rpms. You need approximately 6 GB of disk space for the full download:
17. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror sync
18. You now have a local copy of the Apigee repo. The next section describes
how to install the Edge apigee-setup utility from the local repo.
19. (Optional) If you want to install Edge from the local repo onto the same node
that hosts the local repo, then you need to first run the following commands:
a. Run bootstrap_4.51.00.sh from the local repo to install the
apigee-service utility:
b. sudo bash
/opt/apigee/data/apigee-mirror/repos/bootstrap_4.51.00.sh
apigeeprotocol="file://" apigeerepobasepath=/opt/apigee/data/apigee-
mirror/repos
c. Use apigee-service to install the apigee-setup utility:
d. /opt/apigee/apigee-service/bin/apigee-service apigee-setup install
e. Use apigee-setup to install and configure Edge components on the
node. See Install Edge components on a node for more.

Install apigee-setup on a remote node from the local repo

You have two options for installing Edge from the local repo. You can either:

● Create a .tar file of the repo, copy the .tar file to a node, and then install Edge
from the .tar file.
● Install a webserver on the node with the local repo so that other nodes can
access it. Apigee provides the NGINX webserver for you to use, or you can use
your own webserver.
Install from the .tar file

To install from the .tar file:

1. On the node with the local repo, use the following command to package the
local repo into a single .tar file named
/opt/apigee/data/apigee-mirror/apigee-4.51.00.tar.gz:
2. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror package
3. Copy the .tar file to the node where you want to install Edge. For example,
copy it to the /tmp directory on the new node.
4. On the new node, disable SELinux as described above.
5. On the new node, be sure that you can access the local Yum utility repo and
the EPEL repo.
6. Double check that all external internet repos are disabled (this should be the
case because you are installing on a machine without internet access):
7. sudo yum repolist
8. All external repos should be disabled, but the local Apigee repo and your
internal repos should be enabled.
9. On the new node, install yum-utils and yum-plugin-priorities from
your local repo:

sudo yum install yum-utils

10. sudo yum install yum-plugin-priorities


11. Your ops team or other group within your organization must set up a local
repo for you to be able to install the Yum tools.
12. On the new node, check your version of libdb4 as described above.
13. If you are installing on Oracle 7.x, run the following command:
14. sudo yum-config-manager --enable ol7_optional_latest
15. If you are installing on AWS, run the following yum-config-manager
command:
16. sudo yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional
17. On the new node, untar the file to the /tmp directory:
18. tar -xzf apigee-4.51.00.tar.gz
19. This command creates a new directory, named repos, in the directory
containing the .tar file. For example /tmp/repos.
20. Install the Edge apigee-service utility and dependencies from /tmp/repos:
21. sudo bash /tmp/repos/bootstrap_4.51.00.sh apigeeprotocol="file://"
apigeerepobasepath=/tmp/repos
22. Notice that you include the path to the repos directory in this command.
23. Use apigee-service to install the apigee-setup utility:
24. /opt/apigee/apigee-service/bin/apigee-service apigee-setup install
25. Use apigee-setup to install and configure Edge components on the node.
See Install Edge components on a node for more.
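The untar step above creates a repos directory in whatever directory you extract into. A stand-in tarball (demo paths, not the real apigee-4.51.00.tar.gz) illustrates the resulting layout:

```shell
# Build a tiny stand-in for the packaged repo and extract it the same way
# the install steps do; extraction produces a top-level repos/ directory.
mkdir -p /tmp/mirror-demo/repos
echo '# stand-in bootstrap script' > /tmp/mirror-demo/repos/bootstrap_4.51.00.sh
tar -czf /tmp/apigee-demo.tar.gz -C /tmp/mirror-demo repos
mkdir -p /tmp/extract-demo
tar -xzf /tmp/apigee-demo.tar.gz -C /tmp/extract-demo
ls /tmp/extract-demo/repos   # → bootstrap_4.51.00.sh
```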

Install from the repo using the NGINX webserver


To install from the repo using the NGINX webserver:

1. Install the NGINX webserver on the repo node:

2. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror nginxconfig


3. By default, NGINX is configured to use localhost as the server name and port
3939. To change these values:
a. Open
/opt/apigee/customer/application/mirror.properties in
an editor. Create the file if it does not exist.
b. Set the following values as necessary:

conf_apigee_mirror_listen_port=3939

c. conf_apigee_mirror_server_name=localhost
d. Restart NGINX:
e. /opt/nginx/scripts/apigee-nginx restart
4. By default, the repo requires a username:password of admin:admin. To
change these credentials, set the following environment variables:

MIRROR_USERNAME=uName

5. MIRROR_PASSWORD=pWord
6. On the new node, install yum-utils and yum-plugin-priorities:

sudo yum install yum-utils

7. sudo yum install yum-plugin-priorities


8. On the new node, disable SELinux as described above.
9. On the new node, ensure that the local EPEL repo is enabled.
10. On the new node, check your version of libdb4 as described above.
11. On the remote node, download the Edge bootstrap_4.51.00.sh file to
/tmp/bootstrap_4.51.00.sh:
12. curl http://uName:pWord@remoteRepo:3939/bootstrap_4.51.00.sh -o
/tmp/bootstrap_4.51.00.sh
13. Where uName and pWord are the username and password you set above for the
repo, and remoteRepo is the IP address or DNS name of the repo node.
14. On the remote node, install the Edge apigee-service utility and
dependencies:
15. sudo bash /tmp/bootstrap_4.51.00.sh apigeerepohost=remoteRepo:3939
apigeeuser=uName apigeepassword=pWord apigeeprotocol=http://
16. Where uName and pWord are the repo username and password.
17. On the remote node, use apigee-service to install the apigee-setup
utility:
18. /opt/apigee/apigee-service/bin/apigee-service apigee-setup install
19. Use apigee-setup to install and configure Edge components on the remote
node. See Install Edge components on a node for more.
Update a local Apigee repository

To update the repo, you must download the latest bootstrap_4.51.00.sh file and then
perform a new sync.

To update the repo:

1. Download the Edge bootstrap_4.51.00.sh file to /tmp/bootstrap_4.51.00.sh:
2. curl https://software.apigee.com/bootstrap_4.51.00.sh -o
/tmp/bootstrap_4.51.00.sh
3. Run the Edge bootstrap_4.51.00.sh file:
4. sudo bash /tmp/bootstrap_4.51.00.sh apigeeuser=uName
apigeepassword=pWord
5. Where uName and pWord are the username and password you received from
Apigee. If you omit pWord, you will be prompted to enter it.
6. Update apigee-mirror:
7. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror update
8. Perform the sync:
9. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror sync --only-new-rpms
10. Note: The command apigee-service apigee-mirror sync --only-new-rpms
does not work on Oracle Linux 8.X.
11. If you want to sync the entire repo:
12. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror sync

Clean a local Apigee repo

Cleaning the local repo deletes /opt/apigee/data/apigee-mirror and
/var/tmp/yum-apigee-*.

To clean the local repo, use:

/opt/apigee/apigee-service/bin/apigee-service apigee-mirror clean

Add or update Edge 4.16.0x/4.17.0x in a 4.51.00 repo

If you have to maintain installations for Edge 4.16.0x or 4.17.0x in a 4.51.00 repo,
you can maintain a repo that contains all versions. From that repo, you can then
install any version of Edge.

To add 4.16.0x/4.17.0x to a 4.51.00 repo:

1. Ensure that you have installed the 4.51.00 version of the apigee-mirror
utility:
2. /opt/apigee/apigee-service/bin/apigee-service apigee-mirror version
3. You should see a result in the form below, where xyz is the build number:
4. apigee-mirror-4.51.00-0.0.xyz
5. Use the apigee-mirror utility to download Edge 4.16.0x/4.17.0x to your
repo. Notice how you prefix the command with the desired version:
6. apigeereleasever=4.17.01 /opt/apigee/apigee-service/bin/apigee-service
apigee-mirror sync --only-new-rpms
7. Use this same command to later update the 4.16.0x/4.17.0x repos by
specifying the required version numbers.
8. Examine the /opt/apigee/data/apigee-mirror/repos directory to see
the file structure:
9. ls /opt/apigee/data/apigee-mirror/repos
10. You should see the following files and directories:

apigee

apigee-repo-1.0-6.x86_64.rpm

bootstrap_4.16.01.sh

bootstrap_4.16.05.sh

bootstrap_4.17.01.sh

bootstrap_4.17.05.sh

bootstrap_4.17.09.sh

bootstrap_4.18.01.sh

bootstrap_4.18.05.sh

bootstrap_4.19.01.sh

thirdparty
12. Notice how you have a bootstrap file for all versions of Edge. The apigee
directory also contains separate directories for each version of Edge.
13. To package the repo into a .tar file, use the following command:
14. apigeereleasever=4.17.01 /opt/apigee/apigee-service/bin/apigee-service apigee-mirror package
15. This command packages all the 4.17.0x and 4.16.0x repos into the same .tar
file. You cannot package only part of the repo.
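The apigeereleasever= prefix works because the shell applies a leading VAR=value assignment only to the environment of that single command. A minimal, self-contained illustration of the pattern, with echo standing in for the apigee-service call:

```shell
# The variable exists only in the environment of the one command it prefixes.
apigeereleasever=4.17.01 sh -c 'echo "would sync Edge $apigeereleasever"'
# → would sync Edge 4.17.01
echo "after: '${apigeereleasever:-unset}'"   # → after: 'unset'
```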

To install Edge from the local repo or .tar file, just make sure to run the correct
bootstrap file by using one of the following commands. This example installs Edge
4.17.01:

● If installing from a .tar file, run the correct bootstrap file from the repo:

sudo bash /tmp/repos/bootstrap_4.17.01.sh apigeeprotocol="file://" apigeerepobasepath=/tmp/repos
● To complete the installation, follow the remaining steps from "Install from
the .tar file" above.
● If installing using the NGINX webserver, download and then run the correct
bootstrap file from the repo:

/usr/bin/curl http://uName:pWord@remoteRepo:3939/bootstrap_4.17.01.sh -o
/tmp/bootstrap_4.17.01.sh

● sudo bash /tmp/bootstrap_4.17.01.sh apigeerepohost=remoteRepo:3939 apigeeuser=uName apigeepassword=pWord apigeeprotocol=http://
● To complete the installation, follow the remaining steps from "Install from the
repo using the NGINX webserver" above.

Install Edge components on a node


After you install the Edge apigee-setup utility on a node, use the apigee-setup
utility to install one or more Edge components on the node.

The apigee-setup utility uses a command in the form:

/opt/apigee/apigee-setup/bin/setup.sh -p component -f configFile

Where component is the Edge component to install, and configFile is the silent
configuration file containing the installation information. The configuration file must
be accessible or readable by the "apigee" user. For example, you can create a new
directory for the files, place them in the /usr/local or /usr/local/share directory, or
anywhere else on the node accessible by the "apigee" user.

For example, to install the Edge Management Server:

/opt/apigee/apigee-setup/bin/setup.sh -p ms -f /usr/local/myConfig

For information on installing the Edge apigee-setup, see Install the Edge apigee-
setup utility.
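The requirement that the "apigee" user can read the configuration file can be checked with standard permissions tooling. The path below is a stand-in for your own config file:

```shell
# Create an empty config file with world-readable mode 0644, then confirm
# the mode. /tmp/myConfig-demo is a demo path, not a real Edge config file.
install -m 644 /dev/null /tmp/myConfig-demo
stat -c '%a' /tmp/myConfig-demo   # → 644
```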

Installation considerations
As you write your config file, take into consideration the following options.

Setting up Postgres master-standby replication

By default, Edge installs all Postgres nodes in master mode. However, in production
systems with multiple Postgres nodes, you must configure them to use master-
standby replication so that if the master node fails, the standby node can continue to
serve traffic.
You can enable and configure master-standby replication at install time by using
properties in the silent config file. Or, you can enable master-standby replication after
installation. For more, see Set up master-standby replication for Postgres.
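At install time, replication is driven by silent-config properties. The PG_MASTER and PG_STANDBY properties shown in the topology config files later in this section identify the two nodes; the values below are placeholders:

```
# Placeholder IP addresses for your Postgres nodes
PG_MASTER=IP_of_master_node
PG_STANDBY=IP_of_standby_node
```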

Enabling Cassandra authentication

By default, Cassandra installs without authentication enabled. That means
anyone can access Cassandra. You can enable authentication after installing
Edge, or as part of the installation process.

Note: While you can enable authentication when you install Cassandra, you cannot change the
default username and password. You must perform that step manually after the installation of
Cassandra completes.

For more, see Enable Cassandra authentication.

Using a protected port when creating a virtual host

If you want to create a virtual host that binds the Router to a protected port, such as
port numbers less than 1000, then you have to configure the Router to run as a user
with access to those ports. By default, the Router runs as the user "apigee" which
does not have access to privileged ports.

For information about how to configure a virtual host and Router to access ports
below 1000, see Setting up a virtual host.

Install the new Edge UI

After you complete the initial installation, Apigee recommends that you install the
new Edge UI, which is an enhanced user interface for developers and administrators
of Apigee Edge for Private Cloud. (The Classic UI is installed by default.)

Note that the Edge UI requires that you disable Basic authentication and use an IDP
such as SAML or LDAP.

For more information, see Install the new Edge UI.

Specifying the components to install


The following table lists the options you pass to the -p option of the
setup.sh utility to specify which components to install on the node:

Component   Description

c           Install Cassandra only.

zk          Install ZooKeeper only.

ds          Install ZooKeeper and Cassandra.

ld          Install OpenLDAP only.

mt          Install Edge Management Server, which also installs OpenLDAP.
            If you set USE_LDAP_REMOTE_HOST=y in the config file, the
            OpenLDAP installation is skipped and the Management Server uses
            OpenLDAP installed on a different node.

ms          Install Edge Management Server, which also installs the Edge UI
            and OpenLDAP.
            If you set USE_LDAP_REMOTE_HOST=y in the config file, the
            OpenLDAP installation is skipped and the Management Server uses
            OpenLDAP installed on a different node.

r           Install Edge Router only.

mp          Install Edge Message Processor only.

rmp         Install Edge Router and Message Processor.

ui          Install the Edge UI.

qs          Install Qpid Server only.

ps          Install Postgres Server only.

pdb         Install Postgres database only. Used only when installing the
            Apigee Developer Services portal (or simply, the portal). See
            Install the portal.

sax         Install the analytics components, meaning Qpid and Postgres.
            Use this option for development and testing only, not for
            production.

sso         Install the Apigee SSO module.

mo          Install Monetization.

sa          Install Edge standalone, meaning Cassandra, ZooKeeper,
            Management Server, OpenLDAP, Edge UI, Router, and Message
            Processor. This option omits the Edge analytics components:
            Qpid and Postgres.
            Use this option for development and testing only, not for
            production.

aio         Install all components on a single node.
            Use this option for development and testing only, not for
            production.

dp          Install the portal.

Note: We recommend installing OpenLDAP on no more than three nodes. To install OpenLDAP
on more than one node, use a 13-node clustered installation.

If you are installing additional management nodes in a topology, use the
USE_LDAP_REMOTE_HOST=y property in the Edge configuration file so that the Management
Server does not have to install an additional LDAP server.

Creating a configuration file


The configuration file contains all the information necessary to install Edge. You can
often use the same configuration file to install all components in an Edge
installation.

Note: In the configuration file, you must specify all Cassandra nodes by IP address. Other
components can be specified by IP address or DNS name.

However, you will have to use different configuration files, or modify your
configuration file, if:

● You are installing multiple OpenLDAP servers and need to configure
replication as part of a 13-node installation. Each file requires different
values for LDAP_SID and LDAP_PEER.
● You are creating multiple data centers as part of a 12-node installation.
Each data center requires different settings for properties such as
ZK_CLIENT_HOSTS and CASS_HOSTS.

Each installation topology described below shows an example config file for that
topology. For a complete reference on the config file, see Edge Configuration File
Reference.
Warning: Creating a config file on a Windows machine and then copying it to a Linux machine
can add additional end-of-line, carriage return, or newline characters to the file that are not
compatible with all Linux utilities. This situation can also occur if you copy text from a Windows
editor and paste into a Linux window. As an alternative, you can use the Linux dos2unix utility
to clean up a config file created on Windows. Or, make sure to do all editing of config files in a
Linux editor.
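If dos2unix is not available, stripping carriage returns with tr achieves the same cleanup. The file names below are illustrative:

```shell
# Simulate a config file saved with Windows (CRLF) line endings,
# then strip the carriage returns to produce a clean Unix file.
printf 'MSIP=192.168.0.1\r\nREGION=dc-1\r\n' > /tmp/winConfig
tr -d '\r' < /tmp/winConfig > /tmp/unixConfig
CR=$(printf '\r')
grep -c "$CR" /tmp/unixConfig || true   # → 0 (no carriage returns remain)
```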

Test system requirements without running an install


Edge for the Private Cloud supports the ENABLE_SYSTEM_CHECK=y property to
check CPU and memory requirements on a machine as part of an install. However, in
previous releases of Edge, that check required you to actually perform the install.

You can now use the "-t" flag to make that check without having to do an install. For
example, to check the system requirements for an "aio" install without actually doing
the install, use the following command:

/opt/apigee/apigee-setup/bin/setup.sh -p aio -f configFile -t

This command displays any errors with the system requirements to the screen.

See Installation requirements for a list of system requirements for all Edge
components.

Installation log files


By default, the setup.sh utility writes log information about the installation to:

/opt/apigee/var/log/apigee-setup/setup.log

If the user running the setup.sh utility does not have access to that directory, it
writes the log to the /tmp directory as a file named setup_username.log.

If the user does not have access to /tmp, the setup.sh utility fails.
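The log-location fallback described above amounts to the following logic. This is a sketch of the documented behavior, not the actual setup.sh source:

```shell
# Choose the setup log destination: the apigee log directory when writable,
# otherwise /tmp/setup_<username>.log, matching the fallback described above.
LOG_DIR=/opt/apigee/var/log/apigee-setup
if [ -w "$LOG_DIR" ]; then
  LOG_FILE="$LOG_DIR/setup.log"
else
  LOG_FILE="/tmp/setup_$(id -un).log"
fi
echo "$LOG_FILE"
```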

Install Edge components


This section describes how to install Edge components for the different topologies.
The order of component installation is based on your desired topology.

All of the installation examples shown below assume that you are installing:

● With Cassandra authentication disabled (default). See Enable Cassandra


authentication for more.
● With Postgres master-standby replication disabled (default). See Set up
master-standby replication for Postgres for more.
● Message Processor and Router on the same node. If you install the Message
Processors and Routers on different nodes, install all the Message
Processors first, and then all the Routers.
Prerequisites

Before you can install Edge components, you must:

● Check the Installation requirements for prerequisites and a list of required
files to obtain before proceeding with the installation. Ensure that you have
reviewed the requirements before beginning the installation process.
● Disable SELinux or set it to permissive mode. See Install the Edge
apigee-setup utility for more.

All-in-one installation
1. Install all components on a single node using the command:
2. /opt/apigee/apigee-setup/bin/setup.sh -p aio -f configFile
3. Restart the Classic UI component after the installation is complete:
4. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
5. This applies to the Classic UI, not the new Edge UI, whose component name is
edge-management-ui.
6. Test the installation as described at Test the install.
7. Onboard your organization as described at Onboard an organization.

See a video of an Edge all-in-one install here.

Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference.

# With SMTP
IP1=IP_or_DNS_name_of_Node_1
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
# Admin password must be at least 8 characters long and contain one uppercase
# letter, one lowercase letter, and one digit or special character
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
LDAP_TYPE=1
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1"
ZK_CLIENT_HOSTS="$IP1"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1"
# Default is postgres
PG_PWD=postgres
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

2-node standalone installation

See Installation topologies for the list of Edge topologies and node numbers.

1. Install Standalone Gateway on node 1:
2. /opt/apigee/apigee-setup/bin/setup.sh -p sa -f configFile
3. Install Analytics on node 2:
4. /opt/apigee/apigee-setup/bin/setup.sh -p sax -f configFile
5. Restart the Classic UI component on node 1:
6. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
7. This applies to the Classic UI, not the new Edge UI, whose component name is
edge-management-ui.
8. Test the installation as described at Test the install.
9. Onboard your organization as described at Onboard an organization.

Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference.

# With SMTP
IP1=IP_of_Node_1
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
LDAP_TYPE=1
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1"
ZK_CLIENT_HOSTS="$IP1"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1"
# Default is postgres
PG_PWD=postgres
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

5-node installation

See Installation topologies for the list of Edge topologies and node numbers.

1. Install Datastore cluster on nodes 1, 2 and 3:


2. /opt/apigee/apigee-setup/bin/setup.sh -p ds -f configFile
3. Install Management Server on node 1:
4. /opt/apigee/apigee-setup/bin/setup.sh -p ms -f configFile
5. Install Router and Message Processor on nodes 2 and 3:
6. /opt/apigee/apigee-setup/bin/setup.sh -p rmp -f configFile
7. Install Analytics on nodes 4 and 5:
8. /opt/apigee/apigee-setup/bin/setup.sh -p sax -f configFile
9. Restart the Classic UI component on node 1:
10. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
11. This applies to the Classic UI, not the new Edge UI, whose component name is
edge-management-ui.
12. Test the installation as described at Test the install.
13. Onboard your organization as described at Onboard an organization.

Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference.

# With SMTP
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP4=IP_of_Node_4
IP5=IP_of_Node_5
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=1
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1 $IP2 $IP3"
# Default is postgres
PG_PWD=postgres
PG_MASTER=$IP4
PG_STANDBY=$IP5
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

9-node clustered installation

See Installation topologies for the list of Edge topologies and node numbers.

1. Install Datastore Cluster Node on nodes 1, 2 and 3:


2. /opt/apigee/apigee-setup/bin/setup.sh -p ds -f configFile
3. Install Apigee Management Server on node 1:
4. /opt/apigee/apigee-setup/bin/setup.sh -p ms -f configFile
5. Install Router and Message Processor on nodes 4 and 5:
6. /opt/apigee/apigee-setup/bin/setup.sh -p rmp -f configFile
7. Install Apigee Analytics Qpid Server on nodes 6 and 7:
8. /opt/apigee/apigee-setup/bin/setup.sh -p qs -f configFile
9. Install Apigee Analytics Postgres Server on nodes 8 and 9:
10. /opt/apigee/apigee-setup/bin/setup.sh -p ps -f configFile
11. Restart the Classic UI component on node 1:
12. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
13. This applies to the Classic UI, not the new Edge UI, whose component name is
edge-management-ui.
14. Test the installation as described at Test the install.
15. Onboard your organization as described at Onboard an organization.

Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference.

# With SMTP
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP8=IP_of_Node_8
IP9=IP_of_Node_9
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=1
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
# Optionally use Cassandra racks
CASS_HOSTS="$IP1 $IP2 $IP3"
# Default is postgres
PG_PWD=postgres
SKIP_SMTP=n
PG_MASTER=$IP8
PG_STANDBY=$IP9
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

13-node clustered installation

This section describes the order of installation for a 13-node cluster. For a list of
Edge topologies and node numbers, see Installation topologies.

The order of installation for a 13-node cluster is as follows:

1. Install Datastore Cluster Node on nodes 1, 2 and 3:


2. /opt/apigee/apigee-setup/bin/setup.sh -p ds -f configFile
3. Install OpenLDAP on nodes 4 and 5:
4. /opt/apigee/apigee-setup/bin/setup.sh -p ld -f configFile
5. Install Apigee Management Server on nodes 6 and 7:
6. /opt/apigee/apigee-setup/bin/setup.sh -p ms -f configFile
7. Install Apigee Analytics Postgres Server on nodes 8 and 9:
8. /opt/apigee/apigee-setup/bin/setup.sh -p ps -f configFile
9. Install Router and Message Processor on nodes 10 and 11:
10. /opt/apigee/apigee-setup/bin/setup.sh -p rmp -f configFile
11. Install Apigee Analytics Qpid Server on nodes 12 and 13:
12. /opt/apigee/apigee-setup/bin/setup.sh -p qs -f configFile
13. Restart the Classic UI component on nodes 6 and 7:
14. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
15. This applies to the Classic UI, not the new Edge UI, whose component name is
edge-management-ui.
16. Test the installation as described at Test the install.
17. Onboard your organization as described at Onboard an organization.

Shown below is a sample silent configuration file for this topology. For a complete
reference on the config file, see Edge Configuration File Reference.

# For all nodes except IP4 and IP5
# (which are the OpenLDAP nodes)
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP4=IP_of_Node_4
IP5=IP_of_Node_5
IP6=IP_of_Node_6
IP7=IP_of_Node_7
IP8=IP_of_Node_8
IP9=IP_of_Node_9
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
# Management Server on IP6 only
MSIP=$IP6
USE_LDAP_REMOTE_HOST=y
LDAP_HOST=$IP4
LDAP_PORT=10389
# Management Server on IP7 only
# MSIP=$IP7
# USE_LDAP_REMOTE_HOST=y
# LDAP_HOST=$IP5
# LDAP_PORT=10389
# Use the same password for both OpenLDAP nodes
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
# Optionally use Cassandra racks
CASS_HOSTS="$IP1 $IP2 $IP3"
# Default is postgres
PG_PWD=postgres
PG_MASTER=$IP8
PG_STANDBY=$IP9
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

# For OpenLDAP nodes only (IP4 and IP5)
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP4=IP_of_Node_4
IP5=IP_of_Node_5
IP6=IP_of_Node_6
IP7=IP_of_Node_7
IP8=IP_of_Node_8
IP9=IP_of_Node_9
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
# For the OpenLDAP Server on IP4 only
MSIP=$IP6
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=2
LDAP_SID=1
LDAP_PEER=$IP5
# For the OpenLDAP Server on IP5 only
# MSIP=$IP7
# USE_LDAP_REMOTE_HOST=n
# LDAP_TYPE=2
# LDAP_SID=2
# LDAP_PEER=$IP4
# Set same password for both OpenLDAPs.
APIGEE_LDAPPW=LDAP_PASSWORD

12-node clustered installation

Before you install Edge on a 12-node clustered topology (two data centers), you must
understand how to set the ZooKeeper and Cassandra properties in the silent config
file.

● ZooKeeper
For the ZK_HOSTS property in both data centers, specify the IP addresses or
DNS names of all ZooKeeper nodes from both data centers, in the same order,
and mark any observer nodes with the :observer modifier. Nodes without the
:observer modifier are called "voters". You must have an odd number of
voters in your configuration. In this topology, the ZooKeeper host on node 9
is the observer.

For the ZK_CLIENT_HOSTS property in each data center, specify the IP
addresses or DNS names of only the ZooKeeper nodes in that data center, in
the same order, on all ZooKeeper nodes in the data center. In the example
configuration file shown below, node 9 is tagged with the :observer
modifier so that you have five voters: nodes 1, 2, 3, 7, and 8.
● Cassandra

All data centers must have the same number of Cassandra nodes.


For CASS_HOSTS in each data center, specify all Cassandra IP addresses (not
DNS names) for both data centers. In the config file for data center 1, list
the Cassandra nodes in that data center first; in the config file for data
center 2, list its own Cassandra nodes first. List the Cassandra nodes in the
same order on all Cassandra nodes in a data center.
Each Cassandra node must have a ":d,r" suffix, where d is the data center
number and r is the rack/availability zone number. For example, ip:1,1 = data
center 1 and rack/availability zone 1, and ip:2,1 = data center 2 and
rack/availability zone 1.
The first node in rack/availability zone 1 of each data center is used as the
seed server.
In this deployment model, the Cassandra setup looks like the following:
"192.168.124.201:1,1 192.168.124.202:1,1 192.168.124.203:1,1
192.168.124.204:2,1 192.168.124.205:2,1 192.168.124.206:2,1"
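To make the per-data-center ordering concrete, here is how the same six nodes (reusing the example addresses above) would be listed in each data center's config file; the second assignment is the data center 2 variant, shown for comparison:

```shell
# Data center 1's config file: dc-1 nodes listed first.
CASS_HOSTS="192.168.124.201:1,1 192.168.124.202:1,1 192.168.124.203:1,1 192.168.124.204:2,1 192.168.124.205:2,1 192.168.124.206:2,1"

# Data center 2's config file: dc-2 nodes listed first (same six nodes).
CASS_HOSTS="192.168.124.204:2,1 192.168.124.205:2,1 192.168.124.206:2,1 192.168.124.201:1,1 192.168.124.202:1,1 192.168.124.203:1,1"
```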

See Installation topologies for the list of Edge topologies and node numbers.

1. Install Datastore Cluster Node on nodes 1, 2, 3, 7, 8, and 9:
   /opt/apigee/apigee-setup/bin/setup.sh -p ds -f configFile
2. Install Apigee Management Server with OpenLDAP replication on nodes 1 and 7:
   /opt/apigee/apigee-setup/bin/setup.sh -p ms -f configFile
3. Install Router and Message Processor on nodes 2, 3, 8, and 9:
   /opt/apigee/apigee-setup/bin/setup.sh -p rmp -f configFile
4. Install Apigee Analytics Qpid Server on nodes 4, 5, 10, and 11:
   /opt/apigee/apigee-setup/bin/setup.sh -p qs -f configFile
5. Install Apigee Analytics Postgres Server on nodes 6 and 12:
   /opt/apigee/apigee-setup/bin/setup.sh -p ps -f configFile
6. Restart the Classic UI component on nodes 1 and 7:
   /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
   This applies to the Classic UI, not the new Edge UI, whose component name is
   edge-management-ui.
7. Test the installation as described at Test the install.
8. Onboard your organization as described at Onboard an organization.

Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference. This configuration file:

● Configures OpenLDAP with replication across two OpenLDAP nodes.
● Specifies the :observer modifier on one ZooKeeper node. In a single data
center installation, omit that modifier.

# Datacenter 1
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP6=IP_of_Node_6
IP7=IP_of_Node_7
IP8=IP_of_Node_8
IP9=IP_of_Node_9
IP12=IP_of_Node_12
HOSTIP=$(hostname -i)
MSIP=$IP1
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=2
LDAP_SID=1
LDAP_PEER=$IP7
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway-1
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3 $IP7 $IP8 $IP9:observer"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
# Optionally use Cassandra racks
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1 $IP7:2,1 $IP8:2,1 $IP9:2,1"
# Default is postgres
PG_PWD=postgres
PG_MASTER=$IP6
PG_STANDBY=$IP12
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"
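The file above is the data center 1 version. A data center 2 config file would keep the same node IP definitions but override a handful of properties. The values below are a sketch inferred from the property descriptions in this guide (verify them against the Edge Configuration File Reference before use):

```shell
# Datacenter 2 (sketch): properties that differ from the dc-1 file above.
MSIP=$IP7                        # Management Server in this data center
LDAP_SID=2                       # unique ID for this OpenLDAP node
LDAP_PEER=$IP1                   # the dc-1 OpenLDAP peer
MP_POD=gateway-2                 # per-data-center pod name
REGION=dc-2
ZK_CLIENT_HOSTS="$IP7 $IP8 $IP9" # only this data center's ZooKeeper nodes
# Same six Cassandra nodes, but this data center's nodes listed first:
CASS_HOSTS="$IP7:2,1 $IP8:2,1 $IP9:2,1 $IP1:1,1 $IP2:1,1 $IP3:1,1"
```

ZK_HOSTS (including the :observer modifier) stays identical in both files.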

Test the install


Apigee provides test scripts that you can use to validate your installation.

Run the validation tests


Each step of the validation testing process returns an HTTP 20X response code for a
successful test.

To run the test scripts:

1. Install apigee-validate on a Management Server node:
   /opt/apigee/apigee-service/bin/apigee-service apigee-validate install
2. Run the setup command on a Management Server node to invoke the test
   scripts:
   /opt/apigee/apigee-service/bin/apigee-service apigee-validate setup -f configFile
   The configFile file must contain the following property:
   APIGEE_ADMINPW=SYS_ADMIN_PASSWORD
   If omitted, you are prompted for the password.
   By default, the apigee-validate utility creates a virtual host on the Router
   that uses port 59001. If that port is not open on the Router, you can
   optionally include the VHOST_PORT property in the config file to set the
   port. For example:
   VHOST_PORT=9000
3. The script then does the following:
   ● Creates an organization and associates it with the pod.
   ● Creates an environment and associates the Message Processor with
     the environment.
   ● Creates a virtual host.
   ● Imports a simple health check proxy and deploys the application to the
     "test" environment.
   ● Imports the SmartDocs proxy.
   ● Executes the test to make sure everything is working as expected.

A successful test returns an HTTP 20X response.
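Since each validation step is judged by a 20X status code, a small helper like the following can be useful when scripting your own spot checks. This is a sketch: router-host is a placeholder, and 59001 is the default virtual-host port created by apigee-validate as described above.

```shell
#!/bin/sh
# Succeed (return 0) only for HTTP 20X codes, mirroring the
# "20X response" success criterion used by the validation tests.
is_2xx() {
  case "$1" in
    2??) return 0 ;;
    *)   return 1 ;;
  esac
}

# Example probe (router-host is a placeholder for your Router node):
# status=$(curl -s -o /dev/null -w '%{http_code}' "http://router-host:59001/")
# is_2xx "$status" && echo "validation vhost reachable"
```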

To remove the organization, environment and other artifacts created by the test
scripts:

Note: Do not remove the test artifacts if you intend to use SmartDocs. See Install SmartDocs for
more.

1. Run the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-validate clean -f configFile
   Where configFile is the same file you used to run the tests.

Note: If you get errors from the testing that troubleshooting does not resolve,
contact Apigee Edge Support and provide the error log.

Verify pod installation


Now that you have installed Apigee Analytics, Apigee recommends that you
perform the following basic but important validation steps:

1. Verify that the Management Server is in the central pod. On the Management
   Server, run the following curl command:
   curl -u sysAdminEmail:password "http://localhost:8080/v1/servers?pod=central"
   You should see output in the form:
   [ {
     "internalIP" : "192.168.1.11",
     "isUp" : true,
     "pod" : "central",
     "reachable" : true,
     "region" : "dc-1",
     "tags" : {
       "property" : [ ]
     },
     "type" : [
       "application-datastore", "scheduler-datastore", "management-server",
       "auth-datastore", "apimodel-datastore", "user-settings-datastore",
       "audit-datastore"
     ],
     "uUID" : "d4bc87c6-2baf-4575-98aa-88c37b260469"
   },
   {
     "externalHostName" : "localhost",
     "externalIP" : "192.168.1.11",
     "internalHostName" : "localhost",
     "internalIP" : "192.168.1.11",
     "isUp" : true,
     "pod" : "central",
     "reachable" : true,
     "region" : "dc-1",
     "tags" : {
       "property" : [ {
         "name" : "started.at",
         "value" : "1454691312854"
       }, ... ]
     },
     "type" : [ "qpid-server" ],
     "uUID" : "9681202c-8c6e-4242-b59b-23e3ef092f34"
   } ]
2. Verify that the Router and Message Processor are in the gateway pod. On the
   Management Server, run the following curl command:
   curl -u sysAdminEmail:password "http://localhost:8080/v1/servers?pod=gateway"
   You see output similar to that for the central pod, but for the Router and
   Message Processor.
3. Verify that Postgres is in the analytics pod. On the Management Server, run
   the following curl command:
   curl -u sysAdminEmail:password "http://localhost:8080/v1/servers?pod=analytics"
   You see output similar to that for the central pod, but for Postgres.
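The three pod checks above can also be scripted. The helper below scans the /v1/servers output for any server reporting "isUp" : false; the commented loop is a sketch that uses the same placeholder credentials as the commands above:

```shell
#!/bin/sh
# Read server JSON on stdin; succeed only if no server reports isUp:false.
pod_all_up() {
  ! grep -q '"isUp" *: *false'
}

# Sketch of checking all three pods (sysAdminEmail:password are placeholders):
# for pod in central gateway analytics; do
#   curl -s -u sysAdminEmail:password "http://localhost:8080/v1/servers?pod=$pod" \
#     | pod_all_up || echo "WARNING: a server in pod $pod reports isUp:false"
# done
```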

Onboard an organization
To onboard an organization, you must create an onboarding configuration file and
then pass it to the setup-org command. Each of these steps is described in the
sections that follow.

For information on using the management API to onboard an organization, see
Creating an organization, environment, and virtual host.

Create an onboarding configuration file

This section includes a sample configuration file for onboarding an organization with
setup-org.

Copy the following example and edit as necessary for your organization:

IP1=192.168.1.1

# Specify the IP or DNS name of the Management Server.
MSIP="$IP1"

# Specify the Edge sys admin credentials.
ADMIN_EMAIL="admin@email.com"
APIGEE_ADMINPW=admin_password    # If omitted, you are prompted for it.

# Specify organization name.
ORG_NAME=myorg    # lowercase only, no spaces, underscores, or periods.

# Specify the organization administrator user.
# Either specify an existing user, or specify the information
# necessary to create a new user.
# Do not use the sys admin as the organization administrator.

# Create a new user for the organization administrator.
NEW_USER="y"

# New user information if NEW_USER="y".
USER_NAME=new@user.com
FIRST_NAME=new
LAST_NAME=user
# Org admin password must be at least 8 characters long and contain one
# uppercase letter, one lowercase letter, and one digit or special character.
USER_PWD="newUserPword"
ORG_ADMIN=new@user.com

# Or, specify an existing user as the organization admin, and
# omit USER_NAME, FIRST_NAME, LAST_NAME, and USER_PWD.
# NEW_USER="n"
# ORG_ADMIN=existing@user.com

# Specify environment name.
ENV_NAME=prod    # lowercase only

# Specify virtual host information.
VHOST_PORT=9001
VHOST_NAME=default

# If you have a DNS entry for the virtual host.
VHOST_ALIAS=myorg-test.apigee.net

# If you do not have a DNS entry for the virtual host,
# specify the IP and port of each router as a space-separated list:
# VHOST_ALIAS="firstRouterIP:9001 secondRouterIP:9001"

# Optionally configure TLS/SSL for the virtual host.
# VHOST_SSL=y         # Set to "y" to enable TLS/SSL on the virtual host.
# KEYSTORE_JAR=       # JAR file containing the cert and private key.
# KEYSTORE_NAME=      # Name of the keystore.
# KEYSTORE_ALIAS=     # The key alias.
# KEY_PASSWORD=       # The key password, if it has one.

# Specify the analytics group.
# AXGROUP=axgroup-001    # Default name is axgroup-001.

Note that:

● For VHOST_ALIAS, if you already have a DNS record that you will use to
access the virtual host, specify the host alias and optionally the port, for
example, "myapi.example.com". If you do not yet have a DNS record, you can
use the IP address of the Router.
For more on how to configure the virtual host, see Setting up a virtual host.
● For TLS/SSL configuration, see Keystores and Truststores and Configuring
TLS access to an API for the Private Cloud for more information on creating
the JAR file, and other aspects of configuring TLS/SSL.
● For more information on configuring virtual hosts, see Configuring TLS access
to an API for the Private Cloud.
● You cannot create two organizations with the same name; the second create
attempt fails.
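A quick pre-flight check of the naming rules above can save a failed run. The function below is a sketch: the installer performs its own validation, and the exact allowed character set used here (lowercase letters, digits, and hyphens) is an assumption beyond the "lowercase only, no spaces, underscores, or periods" rule stated above.

```shell
#!/bin/sh
# Reject org/environment names that are empty or contain anything
# outside lowercase letters, digits, and hyphens (assumed character set).
valid_name() {
  case "$1" in
    "")           return 1 ;;
    *[!a-z0-9-]*) return 1 ;;
    *)            return 0 ;;
  esac
}

# valid_name myorg     -> succeeds
# valid_name "My Org"  -> fails (uppercase, space)
# valid_name my_org    -> fails (underscore)
```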

Execute setup-org

After you create the onboarding configuration file, pass it to the setup-org
script to perform the onboarding process. You must run the script on the
Management Server node.

NOTE: The onboarding configuration file must be readable by the "apigee" user.

When onboarding an organization, the setup-org script does the following:

● Creates a new organization.


● Creates an environment.
● Creates a virtual host for the environment.
● Sets the specified user as the organization admin. Note that:
● You can use an existing user or create a new one for the organization
admin.
● The organization admin must not be the same as the sys admin.
● Associates the organization with the "gateway" pod. (This is the default and
cannot be changed.)
● Associates the environment with all Message Processor(s).
● Enables analytics.

To execute setup-org:

1. Install apigee-provision on the Management Server node:
   /opt/apigee/apigee-service/bin/apigee-service apigee-provision install
2. Run the setup-org script on the Management Server node and point it at the
   configuration file that you created in Create an onboarding configuration file:
   /opt/apigee/apigee-service/bin/apigee-service apigee-provision setup-org -f configFile
   The configuration file must be readable by the "apigee" user.
3. Verify that you have successfully onboarded an organization. One way to do
   this is to log into the UI by requesting the following URL in a browser:
   http://IP_address:9000/login
   Where IP_address is the IP address of the server on which you installed the
   Edge UI.
   For additional verification steps, see Verify the onboarding.
4. Create your first proxy!

Verify the onboarding

On completion of onboarding, verify the status of the system by issuing the following
curl commands on the Management Server node:

1. Check for user and organization status on the Management Server by
   executing the following commands:
   curl -u adminEmail:admin_passwd http://localhost:8080/v1/users
   curl -u adminEmail:admin_passwd http://localhost:8080/v1/organizations
   curl -u adminEmail:admin_passwd http://localhost:8080/v1/organizations/org_name/deployments
2. Check analytics by executing the following command:
   curl -u adminEmail:admin_password http://localhost:8080/v1/organizations/org_name/environments/env_name/provisioning/axstatus
3. Check the PostgreSQL database status by executing the following commands
   on Node 2 (as shown in the installation topologies):
   psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
   At the psql prompt, enter the following command to view the analytics
   table for your organization:
   \d analytics."org_name.env_name.fact"
   Use the following command to exit psql:
   \q
4. Access the Apigee Edge UI using a web browser. Remember that you already
   noted the management console URL at the end of the installation.
   a. Launch your preferred browser and enter the URL of the Edge UI. It
      looks similar to the following, where the IP address is for Node 1 (as
      shown in the installation topologies), or whichever node on which you
      installed the UI for alternative configurations:
      http://192.168.56.111:9000/login
      9000 is the port number used by the UI.
      If you are starting the browser directly on the server hosting the Edge
      UI, then you can use a URL in the form:
      http://localhost:9000/login
      Note: Ensure that port 9000 is open. If you are using a service such as Google
      Cloud Engine, you may need to modify the firewall rules so that you can access
      the Apigee services via a browser.
   b. On the console login page, specify the Apigee system admin username
      and password. Note: This is the global system administrator password
      that you set during the installation.
5. Sign up for a new Apigee user account and use the new user credentials to
   log in. On the console sign-in page, click the Sign In button.
   The browser redirects to
   http://192.168.56.111:9000/platform/#/org_name/ and opens a
   dashboard that lets you configure the organization that you created (if you
   logged in using Apigee admin credentials).

Create your first proxy

After you have onboarded a new organization and verified that the onboarding
process was successful, you can now create your first proxy. For more information,
see Build your first API proxy.

Other resources you might find helpful include:

● Samples list
● Mock target RESTful APIs that you can use in your own API-building
experiments

Edge Configuration File Reference


Shown below is an example of a complete silent configuration file for a 9 node Edge
installation. Edit this file as necessary for your configuration. Use the -f option to
setup.sh to include this file. For examples of configuration files that are specific to
each topology, see Install Edge components.

NOTE: The definitions of the IP# variables for the Router, Message Processor, Qpid, and
Postgres nodes are shown to illustrate the node configuration; they are not actually used
by the installer.

# IP address or DNS name of nodes.


IP1=192.168.1.1 # Management Server, OpenLDAP, UI, ZooKeeper, Cassandra (IP
address only; do not use a DNS name)
IP2=192.168.1.2 # ZooKeeper, Cassandra (IP address only; do not use a DNS name)
IP3=192.168.1.3 # ZooKeeper, Cassandra (IP address only; do not use a DNS name)
IP4=192.168.1.4 # Router, Message Processor
IP5=192.168.1.5 # Router, Message Processor
IP6=192.168.1.6 # Qpid
IP7=192.168.1.7 # Qpid
IP8=192.168.1.8 # Postgres
IP9=192.168.1.9 # Postgres
# Must resolve to IP address or DNS name of host - not to 127.0.0.1 or localhost.
HOSTIP=$(hostname -i)

# Specify "y" to check that the system meets the CPU and memory requirements
# for the component being installed. See Installation Requirements for requirements
# for each component. The default value is "n" to disable check.
ENABLE_SYSTEM_CHECK=n

# When "hostname -i" returns multiple IP addresses,


# set to "y", to have the installer prompt you to select the IP address to use.
ENABLE_DYNAMIC_HOSTIP=n

# Set Edge sys admin credentials.


ADMIN_EMAIL=your@email.com
APIGEE_ADMINPW=yourPassword # If omitted, you are prompted for it.

# Location of Edge license file.


LICENSE_FILE=/tmp/license.txt

# Management Server information.


MSIP=$IP1 # IP or DNS name of Management Server node.
# Specify the port the Management Server listens on for API calls.
# APIGEE_PORT_HTTP_MS=8080 # Default is 8080.

#
# OpenLDAP information.
#
# Set to y if you are connecting to a remote LDAP server.
# If n, Edge installs OpenLDAP when it installs the Management Server.
USE_LDAP_REMOTE_HOST=n

# If connecting to remote OpenLDAP server, specify the IP/DNS name and port.
# LDAP_HOST=$IP1 # IP or DNS name of OpenLDAP node.
# LDAP_PORT=10389 # Default is 10389.
APIGEE_LDAPPW=yourLdapPassword

# Specify OpenLDAP without replication, 1, or with replication, 2.


LDAP_TYPE=1

# Set only if using replication.


# LDAP_SID=1 # Unique ID for this LDAP server.
# LDAP_PEER= # IP or DNS name of LDAP peer.
# The Message Processor and Router pod.
MP_POD=gateway

# The name of the region, corresponding to the data center name.


REGION=dc-1 # Use dc-1 unless installing in a
# multi-data center environment.

# If you are using region names other than dc-1, dc-2, etc., set this property to
# map your region name to the appropriate dc-x format region name. This property
# is required by the Management Server to appropriately register Cassandra data
# stores based on Cassandra's data centers and regions.
REGION_MAPPING=":dc-1 :dc-2 ... :dc-x"

# ZooKeeper information.
# See table below if installing in a multi-data center environment.
ZK_HOSTS="$IP1 $IP2 $IP3" # IP/DNS names of all ZooKeeper nodes.
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3" # IP/DNS names of all ZooKeeper nodes.

# Cassandra information.
CASS_CLUSTERNAME=Apigee # Default name is Apigee.

# Space-separated IP addresses of the Cassandra hosts (previously defined;
# do not use DNS names).
# Syntax is: IP_address:data_center_number,rack_number
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1"

# Set to enable Cassandra authentication.


# CASS_AUTH=y # The default value is n.
# Cassandra uname/pword required if you enabled Cassandra authentication.
# CASS_USERNAME=
# CASS_PASSWORD=''

# Postgres username and password as set when you installed Edge.


# Default is apigee:postgres.
PG_USER=apigee
PG_PWD=postgres

# Use to enable Postgres master-standby replication


# when you have multiple Postgres nodes.
# PG_MASTER=IPofNewMaster
# PG_STANDBY=IPofOldMaster

# SMTP information.
SKIP_SMTP=n # Skip now and configure later by specifying "y".
SMTPHOST=smtp.gmail.com
SMTPUSER=your@email.com
SMTPPASSWORD=yourEmailPassword
SMTPSSL=y
SMTPPORT=465 # If no SSL, use a different port, such as 25.
SMTPMAILFROM="My Company <myco@company.com>"

# The following four properties are only effective for Management server:
# Cassandra JMX uname/pword required if you enabled Cassandra JMX
authentication.
# CASS_JMX_USERNAME =
# CASS_JMX_PASSWORD =

# Cassandra JMX SSL truststore details if you have enabled SSL based JMX in
Cassandra.
# JMX Truststore file should be readable by Apigee user
# CASS_JMX_TRUSTSTORE =
# CASS_JMX_TRUSTSTORE_PASS =

The following list contains additional information about these properties:

IP/DNS names
Do not use a host name mapping to 127.0.0.1, or an IP address of 127.0.0.1,
when specifying the IP address of a node. Note that for Cassandra host
definitions, use IP addresses only; do not use DNS names.

ENABLE_SYSTEM_CHECK
If "y", check that the system meets the CPU and memory requirements for the
component being installed. See Installation Requirements for the requirements
for each component. The default value is "n", which disables the check.

ENABLE_DYNAMIC_HOSTIP
If a server has multiple interface cards, the "hostname -i" command returns a
space-separated list of IP addresses. By default, the Edge installer uses the
first IP address returned, which might not be correct in all situations. As an
alternative, you can set this property in the installation configuration file.
When set to "y", the installer prompts you to select the IP address to use in
the install. The default value is "n".
Caution: If you set ENABLE_DYNAMIC_HOSTIP=y, ensure that your property file
does not set HOSTIP.

ADMIN_EMAIL
APIGEE_ADMINPW
The system administrator's password must be at least 8 characters long and
contain one uppercase letter, one lowercase letter, and one digit or one
special character. If you omit the password, you are prompted for it.
After installation completes, Apigee recommends that you remove the password
from the configuration file.

LICENSE_FILE
The location of the license file, which must be accessible to the "apigee"
user. For example, store it in the /tmp directory and chmod 777 the file. The
file is copied to the Edge installation directory.

APIGEE_LDAPPW
Specifies the OpenLDAP password.
After installation completes, Apigee recommends that you remove the password
from the configuration file.

USE_LDAP_REMOTE_HOST
LDAP_HOST
LDAP_PORT
If USE_LDAP_REMOTE_HOST is n, Edge automatically installs OpenLDAP when it
installs the Management Server.
Set USE_LDAP_REMOTE_HOST to y if you are connecting to a remote LDAP server.
In that case, OpenLDAP is not installed with the Management Server.
If you are connecting to a remote OpenLDAP server, use LDAP_HOST and
LDAP_PORT to specify the IP address or DNS name and port number of the host.

LDAP_TYPE
LDAP_SID
LDAP_PEER
Set LDAP_TYPE=1 for OpenLDAP with no replication. LDAP_TYPE=2 corresponds to
OpenLDAP with replication.
If your Edge topology uses a single OpenLDAP server, specify 1. If your Edge
installation uses multiple OpenLDAP nodes, such as in a 13-node production
installation, specify 2.
If you enable replication, set the following properties:

● LDAP_SID=1 - Unique ID for this LDAP server. Each LDAP node uses a
different ID. For example, set to 2 for the LDAP peer.
● LDAP_PEER=10.0.0.1 - IP or DNS name of the LDAP peer.

MP_POD
Specify the name of the Message Processor and Router pod. By default, the
name is gateway.

REGION
Region name. By convention, names are typically in the form dc-# where #
corresponds to an integer value. For example, dc-1, dc-2, etc. You can use
dc-1 unless installing in a multi-data center environment.
In a multiple data center installation, the value is dc-1, dc-2, etc.,
depending on which data center you are installing. However, you are not
restricted to using only names in the form dc-#. You can use any name for the
region.

REGION_MAPPING
If you are using region names other than dc-1, dc-2, etc., set this property
to map your region name to the appropriate dc-x format region name. This
property is required by the Management Server to appropriately register
Cassandra data stores based on Cassandra's data centers and regions.

ZK_HOSTS
The IP addresses or DNS names of the ZooKeeper nodes. The IP addresses or DNS
names must be listed in the same order on all ZooKeeper nodes.
Use the same format for HOSTIP as you use for ZK_HOSTS. That is, if you
specify an IP address for ZK_HOSTS, use an IP address for HOSTIP. If you use
a DNS name, then use a DNS name for both.
In a multi-data center environment, list all ZooKeeper nodes from both data
centers.
Specify the ":observer" modifier on ZooKeeper nodes only when creating
multiple data centers as described in a 12-host installation. In a single
data center installation, omit that modifier. For more information, see
12-host clustered installation.

ZK_CLIENT_HOSTS
The IP addresses or DNS names of the ZooKeeper nodes used by this data
center. The IP addresses or DNS names must be listed in the same order on all
ZooKeeper nodes.
Use the same format for HOSTIP as you use for ZK_CLIENT_HOSTS. That is, if
you specify an IP address for ZK_CLIENT_HOSTS, use an IP address for HOSTIP.
If you use a DNS name, then use a DNS name for both.
In a single data center installation, these are the same nodes as specified
by ZK_HOSTS.
In a multi-data center environment, list only the ZooKeeper nodes in this
data center. For more information, see 12-host clustered installation.

CASS_CLUSTERNAME
Optionally specify the name of the Cassandra cluster. The default name is
"Apigee".

CASS_HOSTS
Specifies a space-separated list of Cassandra nodes' host IP addresses (not
DNS names), and optionally their data center number and the rack to which
they belong.
For production topologies, there must be at least three nodes in this list.
The first two nodes are used as "seed servers". As a result, the IP addresses
must be listed in the same order on all Cassandra nodes.
The syntax for each entry in the list is as follows:
IP_address[:data_center_number,rack_number]
Cassandra nodes can optionally specify the data center and rack of the
Cassandra node. Specify the data_center_number modifier only when creating
multiple data centers as described in a 12-host installation. In a single
data center installation, omit that modifier.
For example, 192.168.124.201:1,1 = data center 1 and rack/availability zone
1, and 192.168.124.204:2,1 = data center 2 and rack/availability zone 1.
In a multi-data center environment, to overcome firewall issues, CASS_HOSTS
must be ordered so that the nodes of the current data center are placed at
the beginning (as shown in the example above). For more information, see
12-host clustered installation.
For information on specifying the rack_number for a Cassandra host, see Add
Cassandra rack support.

CASS_AUTH
CASS_USERNAME
CASS_PASSWORD
If you enable Cassandra authentication, CASS_AUTH=y, you can pass the
Cassandra user name and password by using these properties.
After installation completes, Apigee recommends that you remove the password
from the configuration file.

CONFIG_DELTA_LOG
CONFIG_DELTA_LOG controls how changes to configuration files are logged. If
you set CONFIG_DELTA_LOG=y, configuration changes for a component are not
logged.

PG_USER
PG_PWD
By default, the PostgreSQL database has two users defined: 'postgres' and
'apigee'.
PG_USER lets you change the username of the 'apigee' user. You cannot change
the name of the 'postgres' user.
Both users have a default password of 'postgres'. Use PG_PWD to set the
password to a different value for both users at install time.
After installation completes, Apigee recommends that you remove the password
from the configuration file.

PG_MASTER
PG_STANDBY
Set to enable Postgres master-standby replication, in the form:
PG_MASTER=IPofNewMaster
PG_STANDBY=IPofOldMaster
NOTE: Apigee strongly recommends that you use IP addresses rather than host
names for the PG_MASTER and PG_STANDBY properties in your silent
configuration file. In addition, you should be consistent on both nodes.
If you use host names rather than IP addresses, you must be sure that the
host names properly resolve using DNS.

SKIP_SMTP
SMTPHOST
SMTPUSER
SMTPPASSWORD
SMTPSSL
SMTPPORT
SMTPMAILFROM
Configure SMTP so Edge can send emails for lost passwords and other
notifications.
If SMTP user credentials are not required, omit SMTPUSER and SMTPPASSWORD.
SMTPMAILFROM is required.

CASS_JMX_USERNAME
Cassandra JMX username. Required if you enabled Cassandra JMX authentication.

CASS_JMX_PASSWORD
Cassandra JMX password. Required if you enabled Cassandra JMX authentication.

CASS_JMX_TRUSTSTORE
Cassandra JMX SSL truststore, if you have enabled SSL-based JMX in Cassandra.
The JMX truststore file must be readable by the "apigee" user.

CASS_JMX_TRUSTSTORE_PASS
Cassandra JMX SSL truststore password, if you have enabled SSL-based JMX in
Cassandra.

In addition to the properties listed here, there are properties for configuring Apigee
mTLS. For more information, see Configure Apigee mTLS.
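As a worked example of the password rule in the ADMIN_EMAIL/APIGEE_ADMINPW entry above (at least 8 characters, one uppercase letter, one lowercase letter, and one digit or special character), a pre-flight check might look like the following sketch; the installer enforces the rule itself, so this is only a convenience:

```shell
#!/bin/sh
# Pre-validate a candidate sys-admin password against the stated rules:
# length >= 8, one uppercase, one lowercase, one digit or special character.
valid_adminpw() {
  pw=$1
  [ "${#pw}" -ge 8 ] || return 1
  case "$pw" in *[A-Z]*) ;; *) return 1 ;; esac
  case "$pw" in *[a-z]*) ;; *) return 1 ;; esac
  case "$pw" in *[!A-Za-z]*) ;; *) return 1 ;; esac  # digit or special char
  return 0
}
```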
Install SmartDocs
SmartDocs is installed automatically when you install and run the installation test
scripts described in Test the install. As part of running the test scripts, you run the
following commands:

/opt/apigee/apigee-service/bin/apigee-service apigee-validate install

/opt/apigee/apigee-service/bin/apigee-service apigee-validate setup -f configFile

Where configFile is the same config file you used to install Edge. See Install Edge
components on a node for more.

This command installs SmartDocs as part of running the tests.

To complete the installation:

1. Test that SmartDocs is installed by confirming that the smartdocs.zip file
   is located in the following directory:
   /opt/apigee/apigee-validate/bundles/
   Or run the following API call on the Management Server node:
   curl -v -u adminEmail:adminPword 0:8080/v1/o/validate/apis
   This command should return the following if SmartDocs is installed:
   [ "smartdocs", "passthrough" ]
2. From the Edge UI, create and update a KVM named "smartdocs_whitelist", as
   shown in the figure below. The KVM should be created in the organization and
   environment in which the SmartDocs proxy is currently deployed.
   Note: Make sure that the box for encrypted is NOT checked.
   ● Add a key named "is_whitelist_configured", where the value is "YES".
   ● Add a second key named "allowed_hosts", where the values are space-
     separated hostnames or IP addresses called from SmartDocs. The value of
     "allowed_hosts" should include any hosts included in OpenAPI specs added
     to SmartDocs. For example, if you have an OpenAPI spec that calls
     mocktarget.apigee.net, you will need to add mocktarget.apigee.net to the
     "allowed_hosts" value. If a host is not included in the KVM, the
     SmartDocs response will be 400 Bad Request with a content payload of
     Bad Request-Hostname not permitted.

Note: If you do not add and configure this KVM, the proxy will not enforce
whitelisting. This could result in unauthorized access to your hosts and IP
addresses. Only hostnames and IP addresses of API endpoints documented
with SmartDocs should be included in the "allowed_hosts" values.
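If you prefer to create the KVM from the command line instead of the Edge UI, the classic management API exposes a keyvaluemaps endpoint. The snippet below is a sketch: ORG, ENV, and the admin credentials are placeholders, and you should verify the endpoint shape against your Edge management API reference before relying on it.

```shell
#!/bin/sh
# Build the smartdocs_whitelist KVM payload with the two keys described above.
smartdocs_kvm_payload() {
  # $1 = space-separated list of allowed hosts
  printf '{"name":"smartdocs_whitelist","encrypted":false,"entry":[{"name":"is_whitelist_configured","value":"YES"},{"name":"allowed_hosts","value":"%s"}]}' "$1"
}

# Sketch of the API call (placeholders: adminEmail, adminPword, ORG, ENV):
# curl -u adminEmail:adminPword -H "Content-Type: application/json" \
#   -X POST "http://localhost:8080/v1/organizations/ORG/environments/ENV/keyvaluemaps" \
#   -d "$(smartdocs_kvm_payload 'mocktarget.apigee.net')"
```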

Installing Monetization Services


Monetization Services is an extension to Apigee Edge, hence it does not run as a
standalone process. It runs within any existing Apigee Edge setup, with the exception
of the All-In-One (AIO) configuration. You cannot install Monetization Services on an
AIO configuration.

Monetization requirements
● If you are installing Monetization on an Edge topology that uses multiple
Management Server nodes, such as a 13-node installation, then you must
install both Edge Management Server nodes before installing Monetization.
● To install Monetization on Edge where the Edge installation has multiple
Postgres nodes, the Postgres nodes must be configured in Master/Standby
mode. You cannot install Monetization on Edge if you have multiple Postgres
master nodes. For more, see Set up Master-Standby Replication for Postgres.
● Monetization is not available with the All-In-One (AIO) configuration.

Installation overview
The following steps illustrate how to add Monetization Services on an existing
Apigee Edge installation:

● Use the apigee-setup utility to update the Apigee Management Server node
to enable the Monetization Services, for example, catalog management, limits
and notifications configuration, billing and reporting.
If you have multiple Management Server nodes, such as a 13-node
installation, then you must install both Edge Management Server nodes before
installing Monetization.
● Use the apigee-setup utility to update the Apigee Message Processor to
enable the runtime components of the Monetization Services, for example,
transaction recording policy and limit enforcement. If you have multiple
Message Processors, install Monetization on all of them.
● Perform the Monetization onboarding process for your Edge organizations.
● Configure the Apigee Developer Services portal (or simply, the portal) to
support monetization. For more information, see Configure Monetization in
the Developer Portal.

Creating a silent configuration file for Monetization


Shown below is an example silent configuration file for a Monetization installation.
Edit this file as necessary for your configuration. Use the -f option to setup.sh to
include this file.

Note: Typically, you add these properties to the same configuration file that you used to install
Edge, as shown in Install Edge components on a node.

# Edge configuration properties


# Specify IP address or DNS name of node.
IP1=192.168.1.1 # Management Server, OpenLDAP, UI, ZooKeeper, Cassandra
IP2=192.168.1.2 # ZooKeeper, Cassandra
IP3=192.168.1.3 # ZooKeeper, Cassandra
IP4=192.168.1.4 # Router, Message Processor
IP5=192.168.1.5 # Router, Message Processor
IP6=192.168.1.6 # Qpid
IP7=192.168.1.7 # Qpid
IP8=192.168.1.8 # Postgres
IP9=192.168.1.9 # Postgres

# Must resolve to IP address or DNS name of host - not to 127.0.0.1 or localhost.


HOSTIP=$(hostname -i)

# Edge sys admin credentials


ADMIN_EMAIL=your@email.com
APIGEE_ADMINPW=yourPassword # If omitted, you are prompted for it.

# Specify the Management Server port.


APIGEE_PORT_HTTP_MS=8080

#
# Monetization configuration properties.
#
# Postgres credentials from Edge installation.
PG_USER=apigee # Default from Edge installation
PG_PWD=postgres # Default from Edge installation

# Specify Postgres server.


MO_PG_HOST="$IP8" # Only specify one Postgres node.

# Create a Postgres user for Monetization.


# Default username is "postgre".
# If you specify a different user, that user must already exist.
MO_PG_USER=postgre
MO_PG_PASSWD=moUserPWord

# Specify one ZooKeeper host.


# Ensure this is a ZooKeeper leader node in a multi-datacenter environment.
ZK_HOSTS="$IP2"

# Specify Cassandra information.


# Ensure CASS_HOSTS is set to the same value as when you installed Edge.
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1"

# Default is "Apigee", unless it was changed during Edge install.


CASS_CLUSTERNAME=Apigee

# Cassandra uname/pword required only if you enabled Cassandra authentication.


# If your password uses special characters, wrap it in single quotes.
# CASS_USERNAME=
# CASS_PASSWORD=

# Specify the region.


# Default is dc-1 unless you are in a multi-datacenter environment.
REGION=dc-1

# If your Edge config file did not specify SMTP information, add it.
# Monetization requires an SMTP server.
SMTPHOST=smtp.gmail.com
SMTPPORT=465
SMTPUSER=your@email.com
SMTPPASSWORD=yourEmailPassword
SMTPSSL=y
SMTPMAILFROM="My Company <myco@company.com>"

Notes:
● If your Edge config file did not specify SMTP information, add it. Monetization
requires an SMTP server.
● In a single data center installation, an odd number of ZooKeeper nodes
should be configured as voters. If the number of ZooKeeper nodes is even,
some nodes will be configured as observers. When you install Edge
across an even number of data centers, some ZooKeeper nodes must be
configured as observers to make the number of voter nodes odd. During
ZooKeeper leader election, one voter node is elected as the leader. Ensure
that the ZK_HOSTS property above specifies a leader node in a multiple data
center installation.
● If you enable Cassandra authentication, you can pass the Cassandra user
name and password by using the following properties:
CASS_USERNAME
CASS_PASSWORD
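The advice above about wrapping a special-character password in single quotes can be illustrated with a quick shell check (the password value here is made up):

```shell
#!/bin/sh
# Inside single quotes, characters like $ and ! stay literal.
# In double quotes, the "$$" below would expand to the shell's
# process ID and silently corrupt the configured password.
CASS_PASSWORD='p@$$w0rd!'
echo "$CASS_PASSWORD"
```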

Integrate Monetization Services with all Management Servers

Use the following procedure to integrate Monetization on the Management Server
nodes.

1. If you are installing Monetization on an Edge topology that uses multiple
Management Server nodes, such as a 13-node installation, then ensure that
you installed both Management Server nodes before installing Monetization.
2. On the Management Server node, run the setup script:
/opt/apigee/apigee-setup/bin/setup.sh -p mo -f configFile
The -p mo option specifies to integrate Monetization.
The configuration file must be accessible or readable by the "apigee" user.
3. If you are installing Monetization on multiple Management Server nodes,
repeat step 2 on the second Management Server node.

On successful configuration, an RDBMS schema for Monetization Services is created


in the PostgreSQL database. This completes the integration of Monetization
Services and its associated components with Postgres Server.

Integrate Monetization Services with all Message Processors

Use the following procedure to integrate Monetization on all Message Processor
nodes.

1. On the first Message Processor node, at the command prompt, run the setup
script:
/opt/apigee/apigee-setup/bin/setup.sh -p mo -f configFile
The -p mo option specifies to integrate Monetization.
The configuration file must be accessible or readable by the "apigee" user.
2. Repeat this procedure on all Message Processor nodes.
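If you manage many Message Processors, the repeat step can be scripted over SSH. This is only a sketch: the hostnames and config file path are placeholders, and the ssh line is commented out so the script can be reviewed safely before use:

```shell
#!/bin/sh
# Placeholder Message Processor hosts -- substitute your own.
MP_HOSTS="192.168.1.4 192.168.1.5"

for host in $MP_HOSTS; do
  echo "setup would run on $host"
  # Uncomment to run for real (requires SSH access and root/sudo on each node):
  # ssh "$host" "sudo /opt/apigee/apigee-setup/bin/setup.sh -p mo -f /tmp/configFile"
done
```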

On successful configuration, the Message Processor is updated with Monetization


Services. This completes the integration of Monetization Services and its associated
components with the Message Processors.

Monetization onboarding
To create a new organization and enable monetization:

1. Create the organization as you would any new organization. For more
information, see Onboard an organization.
2. Use the monetization provisioning API as described in Enable monetization
for an organization. To do this, you must have system administrator
privileges.

When you next log in to the Edge UI, you see the Monetization entry in the top-level
menu for the organization.

To configure the portal to support monetization, see Configure Monetization in the
developer portal.

Adding a Management Server node to a Monetization installation

If you add a Management Server to an existing Edge installation, you must ensure
that you add Monetization services to the new Management Server and configure all
Management Servers so they can communicate.

To add a Management Server:

1. Install the new Management Server.
2. Install Monetization on the new Management Server.
3. On the original Management Server, call the following:
/opt/apigee/apigee-service/bin/apigee-service edge-mint-management-server mint-configure-mgmt-cluster
4. Restart the original Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
5. On the new Management Server, call the following:
/opt/apigee/apigee-service/bin/apigee-service edge-mint-management-server mint-configure-mgmt-cluster
6. Restart the new Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart

Additional configuration
Provide billing documents as PDF files

Monetization displays billing documents to end users in HTML format. To provide
billing documents as PDF files, you can integrate Monetization with a billing system
that provides PDF generation or license a supported third-party PDF library.

Configure organization settings

To add/update organization attributes, you can use a PUT request, as the following
example shows:

curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
-v http://ms_IP:8080/v1/organizations/orgId -d 'org object with attributes' -X PUT

Monetization responds with the organization's settings. For example:

{
...
"displayName": "Organization name",
"name": "org4",
"properties": {
"property": [
...
{
"name": "MINT_CURRENCY",
"value": "USD"
},
{
"name": "MINT_COUNTRY",
"value": "US"
},
{
"name": "MINT_TIMEZONE",
"value": "GMT"
}
]
}
}
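The request body in the example above is elided ('org object with attributes'). For illustration only, a payload carrying the monetization attributes from the response might look like the sketch below. Because a PUT replaces the organization definition, in practice you should first GET the existing org object and merge your attribute changes into it; the org name and values here are placeholders:

```shell
#!/bin/sh
# Illustrative fragment of an org object carrying monetization attributes.
# GET the real org object first and merge these properties into it.
ORG_PAYLOAD='{
  "displayName": "Organization name",
  "name": "org4",
  "properties": {
    "property": [
      { "name": "MINT_CURRENCY", "value": "USD" },
      { "name": "MINT_COUNTRY",  "value": "US" },
      { "name": "MINT_TIMEZONE", "value": "GMT" }
    ]
  }
}'
echo "$ORG_PAYLOAD"

# On a live system:
# curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD -H "Content-Type: application/json" \
#   -X PUT -d "$ORG_PAYLOAD" http://ms_IP:8080/v1/organizations/org4
```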

The following table lists the organization-level attributes that are available to
configure a mint organization.

MINT_TAX_MODEL: Accepted values are "DISCLOSED", "UNDISCLOSED", and "HYBRID" (default is null).

MINT_CURRENCY: ISO currency code (default is null).

MINT_TAX_NEXUS: Tax nexus (default is null).

MINT_DEFAULT_PROD_TAX_CATEGORY: Default product tax category (default is null).

MINT_IS_GROUP_ORG: Is group organization (default is "false").

MINT_HAS_BROKER: Has broker (default is "false").

MINT_TIMEZONE: Timezone (default is null).

MINT_TAX_ENGINE_EXTERNAL_ID: Tax engine ID (default is null).

MINT_COUNTRY: Organization's country (default is null).

MINT_REG_NO: Organization's registration number; in the United Kingdom this is a different number than the tax ID (default is null).

MINT_BILLING_CYCLE_TYPE: "PRORATED" or "CALENDAR_MONTH" (default is "CALENDAR_MONTH").

MINT_SUPPORTED_BILLING_TYPE: "PREPAID", "POSTPAID", or "BOTH" (default is "PREPAID").

MINT_IS_SEPARATE_INV_FOR_FEES: Indicates whether a separate fee invoice should be generated (default is "false").

MINT_ISSUE_NETTING_STMT: Indicates whether a netting statement should be issued (default is "false").

MINT_NETTING_STMT_PER_CURRENCY: Indicates whether the netting statement should be generated per currency (default is "false").

MINT_HAS_SELF_BILLING: Indicates whether the organization has self billing (default is "false").

MINT_SELF_BILLING_FOR_ALL_DEV: Indicates whether the organization has self billing for all developers (default is "false").

MINT_HAS_SEPARATE_INV_FOR_PROD: Indicates whether the organization has a separate invoice per product (default is "false").

MINT_HAS_BILLING_ADJUSTMENT: Indicates whether the organization supports billing adjustments (default is "false").

features.isMonetizationEnabled: Used by the management UI to display the monetization-specific menu (default is "false").

ui.config.isOperator: Used by the management UI to display the provider as Operator versus Organization (default is "true").

For configuring business organization settings using the management UI, see
Access monetization in Edge.

Monetization limits
To enforce monetization limits, attach the Monetization Limits Check policy to API
proxies. Specifically, the policy is triggered under the following conditions:

● The developer accessing the monetized API is not registered or has not
subscribed to the rate plan.
● The developer has exceeded the transaction volume for the subscribed rate plan.
● The developer's pre-paid account balance or post-paid credit limit has been
reached.

The Monetization Limits Check policy raises faults and blocks API calls in situations
like those listed above. The policy extends the Raise Fault policy, and you can
customize the message returned. The applicable conditions are derived from
business variables.

For more information, see Enforce monetization limits on API proxies.

Enable monetization for an organization
The following sections describe how to enable monetization for an organization. The
method you use to enable monetization for an organization depends on whether you
are an Edge Cloud or Edge for Private Cloud customer.

Apigee Edge Cloud

For Apigee Edge Cloud customers, Apigee will assist you in enabling monetization
for your organization. Contact Apigee Edge Support for assistance.

Apigee Edge Private Cloud

Note: Ensure that your Edge account has system administrator privileges before
proceeding.

To enable monetization for an organization, issue a POST request to
/asyncjobs/enablemonetization.

You must pass the following information in the request body.

Property: Description

adminEmail: Default email for monetization notification settings.

mxGroup: Group used for Apache Qpid and rating servers. The group that you choose depends on capacity requirements, region, and type of organization. For private cloud, set this value to mxgroup001.

notifyTo: Email to notify when monetization has been enabled successfully.

orgName: Name of the organization.

pgHostName: Host name for the Postgres database.

pgPassword: The password for your Postgres monetization user account.

pgPort: Port for the Postgres database.

pgUserName: The account name for your Postgres monetization user.

For example, the following request enables monetization for the myOrg organization,
where ms_IP is the IP address of the Management Server node and port is the
configured port (such as 8443):

$ curl -H "Content-Type:application/json" -X POST -d \
'{
  "orgName" : "myOrg",
  "mxGroup" : "mxgroup001",
  "pgHostName" : "pg_hostname",
  "pgPort" : "5432",
  "pgUserName" : "pg_username",
  "pgPassword" : "pg_password",
  "adminEmail" : "myemail@company.com",
  "notifyTo" : "myemail@company.com"
}' \
"https://ms_IP:port/v1/mint/asyncjobs/enablemonetization" \
-u email:password

The following provides an example of the response:

{
  "id": "c6eaa22d-27bd-46cc-be6f-4f77270818cf",
  "log": "",
  "orgId": "myOrg",
  "status": "RUNNING",
  "type": "ENABLE_MINT"
}

After the request is complete, an email is sent to the email configured for the
notifyTo property in the request, and the status field will change to one of the
following values: COMPLETED, FAILED, or CANCELLED.

You can check the status of the request by issuing a GET to /asyncjobs/{id}.

For example:

$ curl -X GET "https://ms_IP:port/v1/mint/asyncjobs/c6eaa22d-27bd-46cc-be6f-4f77270818cf" \
-u email:password

{
  "id": "c6eaa22d-27bd-46cc-be6f-4f77270818cf",
  "log": "",
  "orgId": "myOrg",
  "status": "COMPLETED",
  "type": "ENABLE_MINT"
}
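Because enabling monetization runs as an asynchronous job, you can poll the status endpoint until the job leaves the RUNNING state. The following is only a sketch: the status extraction uses grep rather than a JSON parser, the placeholders (ms_IP, port, jobId, credentials) come from the examples above, and the live polling loop is left commented out:

```shell
#!/bin/sh
# Extract the "status" field from an async-job response body.
job_status() {
  printf '%s' "$1" | tr -d ' "' | grep -o 'status:[A-Z]*' | cut -d: -f2
}

# Example response body, as returned by GET /v1/mint/asyncjobs/{id}:
BODY='{ "id": "c6eaa22d-27bd-46cc-be6f-4f77270818cf", "status": "RUNNING", "type": "ENABLE_MINT" }'
job_status "$BODY"    # prints RUNNING

# Polling loop against a live Management Server (uncomment to use):
# while [ "$(job_status "$(curl -s -u email:password \
#     "https://ms_IP:port/v1/mint/asyncjobs/jobId")")" = "RUNNING" ]; do
#   sleep 10
# done
```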

After the install

● Test the install: Invoke the test scripts to ensure that your installation of each
component was successful.
● Tune JVM heap settings: Optimize your Java memory settings for each node.
● Manage LDAP password policy: Change the default LDAP password and
configure various authentication settings.
● Install apigee-monit on the node: Install and use a tool that monitors the
components on the node and attempts to restart them if they fail.
● Set up PostgreSQL purge job(s): Prune excess data collected by the
analytics service.
● Set up Cassandra nodetool repair: Periodic maintenance you should
perform on your Cassandra ring to ensure consistency across all nodes.
● Enable autostart: Instruct Edge for Private Cloud to restart automatically
during a reboot.
● Install the new Edge UI: Apigee recommends that you install the new Edge
UI, which is an enhanced user interface for developers and administrators of
Apigee Edge for Private Cloud.
Note that these are just some of the more common tasks you typically perform after
installing Edge. For additional operations and administration tasks, see How to
configure Edge and Operations.

Invoke commands on Edge components


Edge installs management utilities under /opt/apigee/apigee-service/bin
that you can use to manage an Edge installation. For example, you can use the
apigee-all utility to start, stop, restart, or determine the status of all Edge
components on the node:

/opt/apigee/apigee-service/bin/apigee-all stop|start|restart|status|version

Use the apigee-service utility to control and configure individual components.
The apigee-service utility has the form:

/opt/apigee/apigee-service/bin/apigee-service component_name action

Where component_name identifies the component. The component must be on the
node on which you execute apigee-service. Depending on your configuration,
values of component_name can include:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)

In addition to these components, you can also invoke apigee-service on the
apigee-provision and apigee-validate components depending on your
configuration.

For example, to restart the Edge Router, execute the following command:

/opt/apigee/apigee-service/bin/apigee-service edge-router restart


You can determine the list of components installed on the node by examining the
/opt/apigee directory. That directory contains a subdirectory for every Edge
component installed on the node. Each subdirectory is prefixed by:

● apigee: A third-party component used by Edge. For example, apigee-cassandra.
● edge: An Edge component from Apigee. For example, edge-management-server.
● edge-mint: A Monetization component. For example, edge-mint-management-server.
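For example, a quick way to see this grouping is to list the matching subdirectories. The sketch below runs against a temporary stand-in directory so it is runnable anywhere; on a real node you would list /opt/apigee itself, as shown in the final comment:

```shell
#!/bin/sh
# A temp directory stands in for /opt/apigee so the sketch runs anywhere.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/apigee-cassandra" "$ROOT/edge-router" \
         "$ROOT/edge-mint-management-server" "$ROOT/customer"

# Every subdirectory whose name starts with apigee-, edge-, or edge-mint-
# is an installed component; other directories (like customer) are not.
COMPONENTS=$(ls "$ROOT" | grep -E '^(apigee|edge)-' | sort)
echo "$COMPONENTS"

# On a real node: ls /opt/apigee | grep -E '^(apigee|edge)-'
rm -rf "$ROOT"
```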

The complete list of actions for a component depends on the component itself, but
all components support the following actions:

● start, stop, restart


● status, version
● backup, restore
● install, uninstall

Configure Edge components


To configure Edge after installation, you use a combination of .properties files
and Edge utilities. For example, to configure TLS/SSL on the Edge UI, you edit
.properties files to set the necessary properties. Changes to .properties files
require you to restart the affected Edge component.

The .properties files are located in the
/opt/apigee/customer/application directory. Each component has its own
.properties file in that directory. For example, router.properties and
management-server.properties.

NOTE: If you have not set any properties for a component, the
/opt/apigee/customer/application directory might not contain a .properties file for
the component. In that case, create one.

To set a property for a component, edit the corresponding .properties file, and
then restart the component:

/opt/apigee/apigee-service/bin/apigee-service component restart

For example:

/opt/apigee/apigee-service/bin/apigee-service edge-router restart
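The create-edit-restart cycle above can be sketched as a script. The property name below is a placeholder, not a real Edge configuration token, and the ownership and restart commands are shown as comments because they only apply on an actual Edge node (a temp directory stands in for the real path here):

```shell
#!/bin/sh
# A temp directory stands in for /opt/apigee/customer/application.
APPDIR=$(mktemp -d)
PROPS="$APPDIR/router.properties"

# Create the .properties file if it does not exist, then append a property.
# property_name=value is a placeholder, not a real Edge token.
touch "$PROPS"
echo "property_name=value" >> "$PROPS"
CONTENT=$(cat "$PROPS")
echo "$CONTENT"

# On a real node you would also run:
# chown apigee:apigee /opt/apigee/customer/application/router.properties
# /opt/apigee/apigee-service/bin/apigee-service edge-router restart
rm -rf "$APPDIR"
```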

When you update Edge, the .properties files in the
/opt/apigee/customer/application directory are read. That means the
update retains any properties that you set on the component.
See How to configure Edge for more information on Edge configuration.

Install apigee-monit on the node


After you have completed installing components on a node, you can optionally add
the apigee-monit utility. apigee-monit will monitor the components on the
node and attempt to restart them if they fail. For more information, see Self healing
with apigee-monit.

Configure an OpenLDAP server to be read-only


If your Edge installation contains an OpenLDAP server that does not need to have
traffic switched to it, we recommend that you configure the server to be read-only. To
do so:

1. Create a file mark_readonly.ldif on the server with the following lines:
dn: olcDatabase={2}bdb,cn=config
changetype: modify
replace: olcReadOnly
olcReadOnly: TRUE
2. Execute the following command on the server to mark it read-only:
ldapmodify -a -x -w "$APIGEE_LDAPPW" -D "$CONFIG_BIND_DN" -H "ldap://:10389" -f mark_readonly.ldif

If the primary server fails, you can make the standby server the writable primary
as follows:

1. Create a file mark_writable.ldif on the standby server with the following lines:
dn: olcDatabase={2}bdb,cn=config
changetype: modify
replace: olcReadOnly
olcReadOnly: FALSE
2. Execute the following command on the standby server:
ldapmodify -a -x -w "$APIGEE_LDAPPW" -D "$CONFIG_BIND_DN" -H "ldap://:10389" -f mark_writable.ldif

Modifying Java memory settings


Depending on your traffic and processing requirements you may need to change the
heap memory size or class metadata size for your nodes running Java-based Private
Cloud components.

This section provides the default and recommended Java heap memory sizes, as
well as the process for changing the defaults. Lastly, this section describes how to
change other JVM settings using properties files.

Default and recommended heap memory sizes

The following table lists the default and recommended Java heap memory sizes for
Java-based Private Cloud components:

Runtime
● Cassandra: no properties file; heap automatically configured (note 1)
● Message Processor: message-processor.properties; default 512MB; recommended 3GB - 6GB (note 2)
● Router: router.properties; default 512MB; recommended 512MB

Analytics
● Postgres server: postgres-server.properties; default 512MB; recommended 512MB
● Qpid server: qpid-server.properties; default 512MB; recommended 2GB - 4GB

Management
● Management Server: management-server.properties; default 512MB; recommended 512MB
● UI: ui.properties; default 512MB; recommended 512MB
● OpenLDAP: no properties file; memory managed natively (note 3)
● ZooKeeper: zookeeper.properties; default 2048MB; recommended 2048MB

Notes

1. Cassandra dynamically calculates the maximum heap size when it starts up.
Currently, this is one half the total system memory, with a maximum of 8192MB.
For information on setting the heap size, see Change heap memory size.

2. For Message Processors, Apigee recommends that you set the heap size to
between 3GB and 6GB. Increase the heap size beyond 6GB only after conducting
performance tests. If the heap usage nears the max limit during your performance
testing, then increase the max limit. For information on setting the heap size, see
Change heap memory size.

3. Not all Private Cloud components are implemented in Java. Because they are not
Java-based, apps running natively on the host platform do not have configurable
Java heap sizes; instead, they rely on the host system for memory management.

To determine how much total memory Apigee recommends that you allocate to your
Java-based components on a node, add the values listed above for each component
on that node. For example, if your node hosts both the Postgres and Qpid servers,
Apigee recommends that your combined memory allocation be between 2.5GB and
4.5GB.

For a list of required hardware (such as RAM), see Installation requirements.

Change heap memory sizes


To change heap memory settings, edit the properties file for the component. For
example, for the Message Processor, edit the
/opt/apigee/customer/application/message-processor.properties
file.

If the message-processor.properties file does not exist, or if the


corresponding .properties file for any Edge component does not exist, create it
and then change ownership of the file to the "apigee" user, as the following example
shows:
chown apigee:apigee /opt/apigee/customer/application/message-
processor.properties

TIP: In Java 1.8 and later, class metadata memory allocation replaced the permanent generation
("permgen"). Some configuration properties still refer to permgen (often reflecting "perm" in the
property name).

If the component is installed on multiple machines, such as the Message Processor,
then you must edit the properties file on all machines that host the component.

The following table lists the properties that you edit to change heap sizes:

bin_setenv_min_mem: The minimum heap size. The default is based on the values
listed in Default and recommended heap memory sizes. This setting corresponds to
the Java -Xms option.

bin_setenv_max_mem: The maximum heap size. The default is based on the values
listed in Default and recommended heap memory sizes. This setting corresponds to
the Java -Xmx option.

TIP: For optimal performance, set bin_setenv_min_mem and bin_setenv_max_mem to
the same value.

bin_setenv_meta_space_size: The default class metadata size. The default
value is set to the value of bin_setenv_max_permsize, which defaults to 128MB.
On the Message Processor, Apigee recommends that you set this value to 256MB or
512MB, depending on your traffic. This setting corresponds to the Java
-XX:MetaspaceSize option.

When you set heap size properties on a node, use the "m" suffix to indicate
megabytes, as the following example shows:

bin_setenv_min_mem=4500m
bin_setenv_max_mem=4500m
bin_setenv_meta_space_size=1024m

After setting the values in the properties file, restart the component:

/opt/apigee/apigee-service/bin/apigee-service component restart

For example:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

Change other JVM properties


For Java settings not controlled by the properties listed above, you can also pass in
additional JVM flags or values for any Edge component. The *.properties files
are read in by Bash; values should be enclosed in single quotes (') to preserve
literal characters, or double quotes (") if you require shell expansion.

● bin_setenv_ext_jvm_opts: Sets any Java property not specified by other
properties. For example:
bin_setenv_ext_jvm_opts='-XX:MaxGCPauseMillis=500'
However, do not use bin_setenv_ext_jvm_opts to set -Xms, -Xmx, or
-XX:MetaspaceSize, as these values are controlled by the properties listed
above.

For additional tips on configuring memory for Private Cloud components, see this
article on the Edge forums.

Managing the default LDAP password


policy for API management
The Apigee system uses OpenLDAP to authenticate users in your API management
environment. OpenLDAP makes this LDAP password policy functionality available.

This section describes how to configure the delivered default LDAP password policy.
Use this password policy to configure various password authentication options, such
as the number of consecutive failed login attempts after which a password can no
longer be used to authenticate a user to the directory.

This section also describes how to use a couple of APIs to unlock user accounts that
have been locked according to attributes configured in the default password policy.

For additional information, see:

● LDAP policy
● "Important information about your password policy" in the Apigee Community

Configuring the Default LDAP Password Policy


To configure the default LDAP password policy:

1. Connect to your LDAP server using an LDAP client, such as Apache Directory
Studio or ldapmodify. By default, the OpenLDAP server listens on port 10389
on the OpenLDAP node.
To connect, specify the Bind DN or user of
cn=manager,dc=apigee,dc=com and the OpenLDAP password that you
set at the time of Edge installation.
2. Use the client to navigate to the password policy attributes for:
● Edge users: cn=default,ou=pwpolicies,dc=apigee,dc=com
● Edge sysadmin:
cn=sysadmin,ou=pwpolicies,dc=apigee,dc=com
3. Edit the password policy attribute values as desired.
4. Save the configuration.

Default LDAP Password Policy Attributes

pwdExpireWarning: The maximum number of seconds before a password is due to
expire that expiration warning messages will be returned to a user who is
authenticating to the directory. Default: 604800 (equivalent to 7 days).

pwdFailureCountInterval: Number of seconds after which old consecutive
failed bind attempts are purged from the failure counter. In other words, this is
the number of seconds after which the count of consecutive failed login attempts
is reset.
If pwdFailureCountInterval is set to 0, only a successful authentication can
reset the counter.
If pwdFailureCountInterval is set to a value greater than 0, the attribute
defines a duration after which the count of consecutive failed login attempts is
automatically reset, even if no successful authentication has occurred.
We suggest that this attribute be set to the same value as the
pwdLockoutDuration attribute. Default: 300.

pwdInHistory: Maximum number of used, or past, passwords for a user that will
be stored in the pwdHistory attribute. When changing their password, the user
will be blocked from changing it to any of their past passwords. Default: 3.

pwdLockout: If TRUE, specifies to lock out a user when their password expires so
that the user can no longer log in. Default: FALSE.

pwdLockoutDuration: Number of seconds during which a password cannot be used
to authenticate the user due to too many consecutive failed login attempts. In
other words, this is the length of time during which a user account will remain
locked due to exceeding the number of consecutive failed login attempts set by
the pwdMaxFailure attribute.
If pwdLockoutDuration is set to 0, the user account will remain locked until a
system administrator unlocks it. See Unlocking a user account.
If pwdLockoutDuration is set to a value greater than 0, the attribute defines a
duration for which the user account will remain locked. When this time period has
elapsed, the user account will be automatically unlocked.
We suggest that this attribute be set to the same value as the
pwdFailureCountInterval attribute. Default: 300.

pwdMaxAge: Number of seconds after which a user (non-sysadmin) password
expires. A value of 0 means passwords do not expire. The default value of 2592000
corresponds to 30 days from the time the password was created.
Default: 2592000 for users; 0 for sysadmin.

pwdMaxFailure: Number of consecutive failed login attempts after which a
password may not be used to authenticate a user to the directory. Default: 3.

pwdMinLength: Specifies the minimum number of characters required when setting
a password. Default: 8.

Unlocking a User Account

A user's account may be locked due to attributes set in the password policy. A user
with the sysadmin Apigee role assigned can use the following API call to unlock the
user's account. Replace userEmail, adminEmail, and password with actual values.

To unlock a user:

curl -X POST "http://ms_IP:8080/v1/users/userEmail/status?action=unlock" -u adminEmail:password


Note: To ensure that only system administrators can call this API, the /users/*/status path
is included in the conf_security_rbac.restricted.resources property for the
Management Server:

cd /opt/apigee/edge-management-server
grep -ri "conf_security_rbac.restricted.resources" *

The output contains the following:

token/default.properties:conf_security_rbac.restricted.resources=/environments,/
environments/*,/environments/*/virtualhosts,/environments/*/virtualhosts/*,/pods,/
environments/*/servers,/rebuildindex,/users/*/status

Self healing with apigee-monit


Apigee Edge for Private Cloud includes apigee-monit, a tool based on the open
source monit utility. apigee-monit periodically polls Edge services; if a service is
unavailable, then apigee-monit attempts to restart it.

NOTE: apigee-monit is not supported on all platforms. Before continuing, check the list of
compatible platforms.

To use apigee-monit, you must install it manually. It is not part of the standard
installation.

WARNING

apigee-monit attempts to restart a component every time the component fails a check. A
component fails a check when it is disabled, but also when you purposefully stop it, such as
during an upgrade, installation, or any task that requires you to stop a component.

As a result, you must stop monitoring the component before executing any procedure that
requires you to install, upgrade, stop, or restart a component.
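For example, the safe sequence for restarting a monitored component such as edge-router can be sketched as follows. The commands are echoed rather than executed, so the sketch is runnable anywhere; on a real node you would run each line of the plan in order:

```shell
#!/bin/sh
# Safe order of operations for restarting a monitored component:
# stop monitoring first, do the work, then resume monitoring.
BIN=/opt/apigee/apigee-service/bin/apigee-service
PLAN="$BIN apigee-monit unmonitor -c edge-router
$BIN edge-router restart
$BIN apigee-monit monitor -c edge-router"
echo "$PLAN"
```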

By default, apigee-monit checks the status of Edge services every 60 seconds.

Quick start

This section shows you how to quickly get up and running with apigee-monit.

If you are using Amazon Linux, first install the monit package, which
apigee-monit depends on; the RPM is available from the Fedora project.
Otherwise, skip this step.
sudo yum install -y https://kojipkgs.fedoraproject.org/packages/monit/5.25.1/1.el6/x86_64/monit-5.25.1-1.el6.x86_64.rpm

To install apigee-monit, execute the following commands:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit install

/opt/apigee/apigee-service/bin/apigee-service apigee-monit configure

/opt/apigee/apigee-service/bin/apigee-service apigee-monit start

This installs apigee-monit and, by default, starts monitoring all components on the
node.

Stop monitoring components

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c component_name

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c all

Start monitoring components

/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c component_name

/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c all

Get summary status information

/opt/apigee/apigee-service/bin/apigee-service apigee-monit report


/opt/apigee/apigee-service/bin/apigee-service apigee-monit summary

Look at the apigee-monit log files

cat /opt/apigee/var/log/apigee-monit/apigee-monit.log

Each of these topics and others are described in detail in the sections that follow.

About apigee-monit

apigee-monit helps ensure that all components on a node stay up and running. It
does this by providing a variety of services, including:

● Restarting failed services
● Displaying summary information
● Logging monitoring status
● Sending notifications
● Monitoring non-Edge services

Apigee recommends that you monitor apigee-monit to ensure that it is running. For
more information, see Monitor apigee-monit.

apigee-monit architecture

During your Apigee Edge for Private Cloud installation and configuration, you
optionally install a separate instance of apigee-monit on each node in your
cluster. These separate apigee-monit instances operate independently of one
another: they do not communicate the status of their components to the other
nodes, nor do they communicate failures of the monitoring utility itself to any central
service.

The following image shows the apigee-monit architecture in a 5-node cluster:


Figure 1: A separate instance of apigee-monit runs in isolation on each node in a
cluster

Supported platforms

apigee-monit supports the following platforms for your Private Cloud cluster. (The
supported OS for apigee-monit depends on the version of Private Cloud.)

Operating System                  v4.19.01                  v4.19.06       v4.50.00

CentOS                            6.9*, 7.4, 7.5, 7.6, 7.7  7.5, 7.6, 7.7  7.5, 7.6, 7.7, 7.8, 7.9
RedHat Enterprise Linux (RHEL)    6.9*, 7.4, 7.5, 7.6, 7.7  7.5, 7.6, 7.7  7.5, 7.6, 7.7, 7.8, 7.9
Oracle Linux                      6.9*, 7.4, 7.5, 7.6, 7.7  7.5, 7.6, 7.7  7.5, 7.6, 7.7, 7.8

* While not technically supported, you can install and use apigee-monit on
CentOS/RHEL/Oracle version 6.9 for Apigee Edge for Private Cloud version 4.19.01.

Component configurations
apigee-monit uses component configurations to determine which components to
monitor, which aspects of the component to check, and what action to take in the
event of a failure.

TERMINOLOGY

A component configuration specifies the service to check and the action to take in the event of a
failure. Component configurations can also define how often to check a service and the number
of times to retry a service before triggering a failure response.

By default, apigee-monit monitors all Edge components on a node using their pre-
defined component configurations. To view the default settings, you can look at the
apigee-monit component configuration files. You cannot change the default
component configurations.

apigee-monit checks different aspects of a component, depending on which
component it is checking. The following table lists what apigee-monit checks for
each component and shows you where the component configuration is for each
component. Note that some components are defined in a single configuration file,
while others have their own configurations.

Component           Configuration location

Management Server   /opt/apigee/edge-management-server/monit/default.conf
Message Processor   /opt/apigee/edge-message-processor/monit/default.conf
Postgres Server     /opt/apigee/edge-postgres-server/monit/default.conf
Qpid Server         /opt/apigee/edge-qpid-server/monit/default.conf
Router              /opt/apigee/edge-router/monit/default.conf

For each of the components above, apigee-monit checks:

● Specified port(s) are open and accepting requests
● Specified protocol(s) are supported
● Status of the response

In addition, for these components apigee-monit:

● Requires multiple failures within a given number of cycles before taking action
● Sets a custom request path

Component           Configuration location

Cassandra           /opt/apigee/data/apigee-monit/monit.conf
Edge UI
OpenLDAP
Postgres
Qpid
Zookeeper

For the components above, apigee-monit checks:

● Service is running

The following example shows the default component configuration for the edge-
router component:

check host edge-router with address localhost
restart program = "/opt/apigee/apigee-service/bin/apigee-service edge-router monitrestart"
if failed host 10.1.1.0 port 8081 and protocol http
   and request "/v1/servers/self/uuid"
   with timeout 15 seconds
   for 2 times within 3 cycles
then restart
if failed port 15999 and protocol http
   and request "/v1/servers/self"
   and status < 600
   with timeout 15 seconds
   for 2 times within 3 cycles
then restart

The following example shows the default configuration for the Classic UI (edge-ui)
component:

check process edge-ui
   with pidfile /opt/apigee/var/run/edge-ui/edge-ui.pid
   start program = "/opt/apigee/apigee-service/bin/apigee-service edge-ui start" with timeout 55 seconds
   stop program = "/opt/apigee/apigee-service/bin/apigee-service edge-ui stop"

This applies to the Classic UI, not the new Edge UI, whose component name is
edge-management-ui.

You cannot change the default component configurations for any Apigee Edge for
Private Cloud component. You can, however, add your own component
configurations for external services, such as your target endpoint or the httpd
service. For more information, see Non-Apigee component configurations.

By default, apigee-monit monitors all components on a node on which it is
running. You can enable or disable it for all components or for individual
components. For more information, see:

● Stop and start monitoring a component
● Stop, start, and disable apigee-monit

Install apigee-monit

apigee-monit is not installed by default; you can install it manually after upgrading
or installing version 4.19.01 or later of Apigee Edge for Private Cloud.

This section describes how to install apigee-monit on the supported platforms.

For information on uninstalling apigee-monit, see Uninstall apigee-monit.


Install apigee-monit on a supported platform

This section describes how to install apigee-monit on a supported platform.

To install apigee-monit on a supported platform:

1. Install apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit install
2. Configure apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit configure
3. Start apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit start
4. Repeat this procedure on each node in your cluster.

Stop and start monitoring components

When a service stops for any reason, apigee-monit attempts to restart the
service.

This can cause a problem if you want to purposefully stop a component. For
example, you might want to stop a component when you need to back it up or
upgrade it. If apigee-monit restarts the service during the backup or upgrade, your
maintenance procedure can be disrupted, possibly causing it to fail.

TIP

To prevent apigee-monit from causing failures during a planned outage, stop monitoring the
component with the stop-component option when you perform the following operations:

● Stop a component
● Install or upgrade a component
● Restart a component

The following sections show the options for stopping the monitoring of components.

Stop a component and unmonitor it

To stop a component and unmonitor it, execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit stop-component -c component_name
component_name can be one of the following:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)

Note that "all" is not a valid option for stop-component. You can stop and
unmonitor only one component at a time with stop-component.

To re-start the component and resume monitoring, execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit start-component -c component_name

Note that "all" is not a valid option for start-component.

For instructions on how to stop and unmonitor all components, see Stop all
components and unmonitor them.

Unmonitor a component (but don't stop it)

To unmonitor a component (but don't stop it), execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c component_name

component_name can be one of the following:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)

To resume monitoring the component, execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c component_name

Unmonitor all components (but don't stop them)

To unmonitor all components (but don't stop them), execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c all

To resume monitoring all components, execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c all

Stop all components and unmonitor them

To stop all components and unmonitor them, execute the following commands:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c all

/opt/apigee/apigee-service/bin/apigee-all stop

To re-start all components and resume monitoring, execute the following commands:

/opt/apigee/apigee-service/bin/apigee-all start

/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c all

To stop monitoring all components, you can also disable apigee-monit, as
described in Stop, start, and disable apigee-monit.

Stop, start, and disable apigee-monit


As with any service, you can stop and start apigee-monit using the apigee-
service command. In addition, apigee-monit supports the unmonitor
command, which lets you temporarily stop monitoring components.

Stop apigee-monit
NOTE: If you use cron to watch apigee-monit, you must disable the cron job before
stopping apigee-monit, as described in Monitor apigee-monit.

To stop apigee-monit, use the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit stop

Start apigee-monit

To start apigee-monit, use the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit start

Disable apigee-monit

You can suspend monitoring all components on the node by using the following
command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c all

Alternatively, you can permanently disable apigee-monit by uninstalling it from the
node, as described in Uninstall apigee-monit.

Uninstall apigee-monit

To uninstall apigee-monit:

1. If you set up a cron job to monitor apigee-monit, remove the cron job
   before uninstalling apigee-monit:
   sudo rm /etc/cron.d/apigee-monit.cron
2. Stop apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit stop
3. Uninstall apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit uninstall
4. Repeat this procedure on each node in your cluster.

Monitor a newly installed component


If you install a new component on a node that is running apigee-monit, you can
begin monitoring it by executing apigee-monit's restart command. This
generates a new monit.conf file that will include the new component in its
component configurations.

The following example restarts apigee-monit:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit restart

Customize apigee-monit

You can customize various apigee-monit settings, including:

1. Default apigee-monit control settings
2. Global configuration settings
3. Non-Apigee component configurations

Default apigee-monit control settings

You can customize the default apigee-monit control settings such as the
frequency of status checks and the locations of the apigee-monit files. You do
this by editing a properties file using the code with config technique. Properties files
will persist even after you upgrade Apigee Edge for Private Cloud.

The following table describes the default apigee-monit control settings that you
can customize:

Property                      Description

conf_monit_httpd_port         The httpd daemon's port. apigee-monit uses httpd for
                              its dashboard app and to enable reports/summaries. The
                              default value is 2812.

conf_monit_httpd_allow        Constraints on requests to the httpd daemon.
                              apigee-monit uses httpd to run its dashboard app and
                              enable reports/summaries. This value must point to
                              localhost (the host that httpd is running on).

                              To require that requests include a username and
                              password, use the following syntax:
                              conf_monit_httpd_allow=allow username:"password"\nallow 127.0.0.1

                              When adding a username and password, insert a "\n"
                              between each constraint. Do not insert actual newlines
                              or carriage returns in the value.

conf_monit_monit_datadir      The directory in which event details are stored.

conf_monit_monit_delay_time   The amount of time that apigee-monit waits after it
                              is first loaded into memory before it runs. This
                              affects only apigee-monit's first process check.

conf_monit_monit_logdir       The location of the apigee-monit log file.

conf_monit_monit_retry_time   The frequency at which apigee-monit attempts to
                              check each process; the default is 60 seconds.

conf_monit_monit_rundir       The location of the PID and state files, which
                              apigee-monit uses for checking processes.

To customize the default apigee-monit control settings:

1. Edit the following file:
   /opt/apigee/customer/application/monit.properties
   If the file does not exist, create it and set the owner to the "apigee" user:
   chown apigee:apigee /opt/apigee/customer/application/monit.properties
   Note that if the file already exists, there may be additional configuration
   properties defined in it beyond what is listed in the table above. You should
   not modify properties other than those listed above.
2. Set or replace property values with your new values.
   For example, to change the location of the log file to /tmp, add or edit the
   following property:
   conf_monit_monit_logdir=/tmp/apigee-monit.log
3. Save your changes to the monit.properties file.
4. Re-configure apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit configure
5. Reload apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit reload
   If you cannot restart apigee-monit, check the log file for errors as
   described in Access apigee-monit log files.
6. Repeat this procedure for each node in your cluster.
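As a concrete illustration of this procedure, a monit.properties file that overrides several of the defaults described in the table might look like the following. The values shown are illustrative only, not recommendations:

```properties
# /opt/apigee/customer/application/monit.properties (illustrative values)
# Check each process every 120 seconds instead of the 60-second default:
conf_monit_monit_retry_time=120
# Require basic auth for the monit httpd dashboard; note the literal \n
# between constraints (do not insert a real newline):
conf_monit_httpd_allow=allow admin:"s3cret"\nallow 127.0.0.1
# Write the apigee-monit log to a partition with more free space:
conf_monit_monit_logdir=/data/log/apigee-monit.log
```

After saving the file, run the configure and reload commands for the change to take effect.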

Global configuration settings

You can define global configuration settings for apigee-monit; for example, you
can add email notifications for alerts. You do this by creating a configuration file in
the /opt/apigee/data/apigee-monit directory and then restarting apigee-
monit.

To define global configuration settings for apigee-monit:

1. Create a new component configuration file in the following location:
   /opt/apigee/data/apigee-monit/filename.conf
   Where filename can be any valid file name, except "monit".
2. Change the owner of the new configuration file to the "apigee" user, as the
   following example shows:
   chown apigee:apigee /opt/apigee/data/apigee-monit/my-mail-config.conf
3. Add your global configuration settings to the new file. The following example
   configures a mail server and sets the alert recipients:

SET MAILSERVER smtp.gmail.com PORT 465

USERNAME "example-admin@gmail.com" PASSWORD "PASSWORD"

USING SSL, WITH TIMEOUT 15 SECONDS

SET MAIL-FORMAT {

from: edge-alerts@example.com

subject: Monit Alert -- Service: $SERVICE $EVENT on $HOST

}
SET ALERT fred@example.com
SET ALERT nancy@example.com

   For a complete list of global configuration options, see the monit
   documentation.
4. Save your changes to the component configuration file.
5. Reload apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit reload
   If apigee-monit does not restart, check the log file for errors as described
   in Access apigee-monit log files.
6. Repeat this procedure for each node in your cluster.

Non-Apigee component configurations

You can add your own configurations to apigee-monit so that it will check
services that are not part of Apigee Edge for Private Cloud. For example, you can use
apigee-monit to check that your APIs are running by sending requests to your
target endpoint.

To add a non-Apigee component configuration:

1. Create a new component configuration file in the following location:


2. /opt/apigee/data/apigee-monit/filename.conf
3. Where filename can be any valid file name, except "monit".
You can create as many component configuration files as necessary. For
example, you can create a separate configuration file for each non-Apigee
component that you want to monitor on the node.
4. Change the owner of the new configuration file to the "apigee" user, as the
following example shows:
5. chown apigee:apigee /opt/apigee/data/apigee-monit/my-config.conf
6. Add your custom configurations to the new file. The following example checks
the target endpoint on the local server:
7. CHECK HOST localhost_validate_test WITH ADDRESS localhost
8. IF FAILED
9. PORT 15999
PROTOCOL http
REQUEST "/validate__test"
CONTENT = "Server Ready"
FOR 2 times WITHIN 3 cycles
THEN alert
10. For a complete list of possible configuration settings, see the monit
documentation.
11. Save your changes to the configuration file.
12. Reload apigee-monit with the following command:
13. /opt/apigee/apigee-service/bin/apigee-service apigee-monit reload
14. If apigee-monit does not restart, check the log file for errors as described
in Access apigee-monit log files.
15. Repeat this procedure for each node in your cluster.

Note that this is for non-Edge components only. You cannot customize the
component configurations for Edge components.

Access apigee-monit log files

apigee-monit logs all activity, including events, restarts, configuration changes,
and alerts, in a log file.

The default location of the log file is:

/opt/apigee/var/log/apigee-monit/apigee-monit.log

You can change the default location by customizing the apigee-monit control
settings.

Log file entries have the following form:

[UTC Dec 14 16:20:42] info : 'edge-message-processor' trying to restart
[UTC Dec 14 16:20:42] info : 'edge-message-processor' restart: '/opt/apigee/apigee-service/bin/apigee-service edge-message-processor monitrestart'
You cannot customize the format of the apigee-monit log file entries.

View aggregated status with apigee-monit

apigee-monit includes the following commands that give you aggregated status
information about the components on a node:

Command Usage

report /opt/apigee/apigee-service/bin/apigee-service apigee-monit report


summary /opt/apigee/apigee-service/bin/apigee-service apigee-monit summary

Each of these commands is explained in more detail in the sections that follow.

report

The report command gives you a rolled-up summary of how many components are
up, down, currently being initialized, or are currently unmonitored on a node. The
following example invokes the report command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit report

The following example shows report output on an AIO (all-in-one) configuration:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit report

up: 11 (100.0%)

down: 0 (0.0%)

initialising: 0 (0.0%)

unmonitored: 1 (8.3%)

total: 12 services

In this example, 11 of the 12 services are reported by apigee-monit as being up.
One service is not currently being monitored.
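The percentages in the report appear to be computed on different bases: the "up" figure is relative to the monitored services (11 of 11 is 100.0%), while the "unmonitored" figure is relative to the total service count (1 of 12 is about 8.3%). A quick sketch reproduces the unmonitored figure; the variable names here are ours, not monit's:

```shell
# Reproduce the "unmonitored: 1 (8.3%)" line from the report output above.
unmonitored=1
total=12
pct=$(awk -v u="$unmonitored" -v t="$total" 'BEGIN { printf "%.1f", u * 100 / t }')
echo "unmonitored: ${unmonitored} (${pct}%)"   # prints: unmonitored: 1 (8.3%)
```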

You may get a Connection refused error when you first execute the report
command. In this case, wait for the duration of the
conf_monit_monit_delay_time property, and then try again.

summary

The summary command lists each component and provides its status. The following
example invokes the summary command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit summary

The following example shows summary output on an AIO (all-in-one) configuration:


/opt/apigee/apigee-service/bin/apigee-service apigee-monit summary

Monit 5.25.1 uptime: 4h 20m

Service Name Status Type

host_name OK System

apigee-zookeeper OK Process

apigee-cassandra OK Process

apigee-openldap OK Process

apigee-qpidd OK Process

apigee-postgresql OK Process

edge-ui OK Process

edge-qpid-server OK Remote Host

edge-postgres-server OK Remote Host

edge-management-server OK Remote Host

edge-router OK Remote Host

edge-message-processor OK Remote Host

If you get a Connection refused error when you first execute the summary
command, try waiting the duration of the conf_monit_monit_delay_time
property, and then try again.

Monitor apigee-monit

It is best practice to regularly check that apigee-monit is running on each node.

To check that apigee-monit is running, use the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor_monit

Apigee recommends that you issue this command periodically on each node that is
running apigee-monit. One way to do this is with a utility such as cron that
executes scheduled tasks at pre-defined intervals.
To use cron to monitor apigee-monit:

1. Add cron support by copying the apigee-monit.cron file to the
   /etc/cron.d directory, as the following example shows:
   cp /opt/apigee/apigee-monit/cron/apigee-monit.cron /etc/cron.d/
2. Open the apigee-monit.cron file to edit it.
   The apigee-monit.cron file defines the cron job to execute as well as the
   frequency at which to execute that job. The following example shows the
   default values:

   # Cron entry to check if monit process is running. If not start it
   */2 * * * * root /opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor_monit

   This file uses the following syntax, in which the first five fields define the
   time at which cron executes the task:
   min hour day_of_month month day_of_week task_to_execute
   For example, the default schedule is */2 * * * *, which instructs cron to
   check the apigee-monit process every 2 minutes.
   You cannot execute a cron job more frequently than once per minute.
   For more information on using cron, see your server OS's documentation or
   man pages.
3. Change the cron settings to match your organization's policies. For example,
   to change the execution frequency to every 5 minutes, set the job definition
   to the following:
   */5 * * * * root /opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor_monit
4. Save the apigee-monit.cron file.
5. Repeat this procedure for each node in your cluster.

If cron does not begin watching apigee-monit, check that:

● There is a blank line after the cron job definition.
● There is only one cron job defined in the file. (Commented lines do not
  count.)

If you want to stop or temporarily disable apigee-monit, you must disable this
cron job, too, otherwise cron will restart apigee-monit.

To disable cron, do one of the following:

● Delete the /etc/cron.d/apigee-monit.cron file:
  sudo rm /etc/cron.d/apigee-monit.cron
  You will have to re-copy it if you later want to re-enable cron to watch
  apigee-monit.
  OR
● Edit the /etc/cron.d/apigee-monit.cron file and comment out the job
  definition by adding a "#" to the beginning of the line; for example:
  # 10 * * * * root /opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor_monit

Recurring analytics services maintenance tasks
Many Apigee Analytics Services tasks can be performed using standard Postgres
utilities. The routine maintenance tasks you would perform on the Analytics
database—such as database reorganization using VACUUM, reindexing and log file
maintenance—are the same as those you would perform on any PostgreSQL
database. Information on routine Postgres maintenance can be found at
http://www.postgresql.org/docs/9.1/static/maintenance.html.

NOTE: Postgres autovacuum is enabled by default, so you do not have to explicitly run the
VACUUM command. Autovacuum reclaims storage by removing obsolete data from the
PostgreSQL database.

NOTE: Apigee does not recommend moving the PostgreSQL database without contacting
Apigee Customer Success. The Apigee system uses the PostgreSQL database server's IP
address. As a result, moving the database or changing its IP address without performing
corresponding updates on the Apigee environment metadata will cause undesirable results.


NOTE: No pruning is required for Cassandra. Expired tokens are automatically purged after 180
days.

Pruning Analytics Data


As the amount of analytics data available within the Apigee repository increases, you
may find it desirable to "prune" the data beyond your required retention interval. Run
the following command to prune data for a specific organization and environment:

/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-data-purge org_name env_name number_of_days_to_retain

The script also accepts two optional flags:

/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-data-purge org_name env_name number_of_days_to_retain [Delete-from-parent-fact - N/Y] [Confirm-delete-from-parent-fact - N/Y]

The script has the following options:

● Delete-from-parent-fact. Default: No. Will also delete data older than the
  retention period from the parent fact table.
● Confirm-delete-from-parent-fact. Default: No. If No, the script will prompt for
  confirmation before deleting data from the parent fact table. Set to Yes if the
  purge script is automated.

This command interrogates the "childfactables" table in the "analytics" schema to
determine which raw data partitions cover the dates for which data pruning is to be
performed, then drops those tables. Once the tables are dropped, the entries in
"childfactables" related to those partitions are deleted.

Childfactables are daily-partitioned fact data. Every day new partitions are created
and data gets ingested into the daily partitioned tables. So at a later point in time,
when the old fact data will not be required, you can purge the respective
childfactables.
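Because childfactables are partitioned by day, the retention argument effectively translates into a cutoff date: partitions whose data is entirely older than the cutoff are droppable. A small sketch of that calculation follows; the variable names are illustrative and are not part of the pg-data-purge script, and GNU date (standard on the supported Linux platforms) is assumed:

```shell
# Compute the cutoff date implied by a retention interval, as a daily
# partition-pruning scheme would. A fixed "today" keeps the example
# reproducible; in practice you would use $(date +%Y-%m-%d).
retain_days=30
today="2024-06-15"
cutoff=$(date -d "$today ${retain_days} days ago" +%Y-%m-%d)
echo "Partitions with data before ${cutoff} are eligible for pruning"
```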

These options are available since version 4.51.00.00.
NOTE: If you have configured master-standby replication between Postgres servers, then you
only need to run this script on the Postgres master.

Apache Cassandra maintenance tasks


This section describes periodic maintenance tasks for Cassandra.

Anti-entropy maintenance
The Apache Cassandra ring nodes require periodic maintenance to ensure
consistency across all nodes. To perform this maintenance, use the following
command:

apigee-service apigee-cassandra apigee_repair -pr

Apigee recommends the following when running this command:

● Run on every Cassandra node (across all regions or data centers).
● Run on one node at a time to ensure consistency across all nodes in the ring.
  Ensure that 50% disk space is available on each Cassandra node. Running repair
  jobs on multiple nodes at the same time can impair the health of Cassandra.
  To check if a repair job on a node has completed successfully, look in the
  node's system.log file for an entry with the latest repair session's UUID
  and the phrase "session completed successfully." Here is a sample log entry:

INFO [AntiEntropySessions:1] 2015-03-01 10:02:56,245 RepairSession.java (line 282)
[repair #2e7009b0-c03d-11e4-9012-99a64119c9d8] session completed successfully

● Ref: https://support.datastax.com/hc/en-us/articles/204226329-How-to-check-if-a-scheduled-nodetool-repair-ran-successfully
● Run during periods of relatively low workload (the tool imposes a significant
load on the system).
● Run at least every seven days in order to eliminate problems related to
Cassandra's "forgotten deletes".
● Run on different nodes on different days, or schedule it so that there are
several hours between running it on each node.
● Use the -pr option (partitioner range) to specify the primary partitioner range
of the node only.

If you enabled JMX authentication for Cassandra, you must include the username
and password when you invoke nodetool. For example:

apigee-service apigee-cassandra apigee_repair -u username -pw password -pr

You can also run the following command to check the supported options of
apigee_repair:

apigee-service apigee-cassandra apigee_repair -h

Note: apigee_repair is a wrapper around Cassandra's nodetool repair, which
runs additional checks before performing Cassandra's repair.

For more information, see the following resources:

● Forgotten deletes
● Data consistency
● nodetool repair

NOTE: Apigee does not recommend adding, moving or removing Cassandra nodes without
contacting Apigee Edge Support. The Apigee system tracks Cassandra nodes using their IP
address, and performing ring maintenance without performing corresponding updates on the
Apigee environment metadata will cause undesirable results.

Log file maintenance


Cassandra logs are stored in the /opt/apigee/var/log/cassandra directory on
each node. By default, a maximum of 50 log files, each with a maximum size of 20
MB, can be created; once this limit is reached older logs are deleted when newer logs
are created.
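Under these default limits, the worst-case disk footprint of Cassandra logs is easy to bound. The following sketch uses the defaults stated above (50 files of at most 20 MB each):

```shell
# Worst-case Cassandra log disk usage under the default logback limits:
# at most 50 log files, each at most 20 MB.
max_files=50
max_file_mb=20
total_mb=$((max_files * max_file_mb))
echo "Cassandra logs can consume up to ${total_mb} MB per node"   # 1000 MB
```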

If you find that Cassandra log files are taking up excessive space, you can
modify the amount of space allocated for log files by editing the logback settings.

1. Edit /opt/apigee/customer/application/cassandra.properties
   to set the following properties. If that file does not exist, create it:

   conf_logback_maxfilesize=20MB      # max file size
   conf_logback_maxbackupindex=50     # max number of log files

2. Restart Cassandra by using the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart

Disk space maintenance


You should monitor Cassandra disk utilization regularly to ensure at least 50 percent
of each disk is free. If disk utilization climbs above 50 percent, we recommend that
you add more disk space to reduce the percentage that is in use.

Cassandra automatically performs the following operations to reduce its own disk
utilization:

● Authentication token deletion when tokens expire. However, it may take a
  couple of weeks to free up the disk space the tokens were using, depending
  on your configuration. If automatic deletion is not adequate to maintain
  sufficient disk space, contact support to learn about manually deleting tokens
  to recover space. Note: Manual deletion will delete all tokens, after which all
  clients will need to re-authenticate.
● Data compaction. Starting with Edge for Private Cloud 4.51.00, new installations of Apigee Cassandra create keyspaces with LeveledCompactionStrategy. Installations of older versions of Edge for Private Cloud that have been upgraded to Private Cloud 4.51.00 retain the prior compaction strategy. If the existing compaction strategy is SizeTieredCompactionStrategy, we recommend changing to LeveledCompactionStrategy, which provides better disk utilization.

Note: When Cassandra performs data compactions, it can consume a considerable amount of CPU cycles and memory, but resource utilization should return to normal once compactions are complete. You can run the nodetool compactionstats command on each node to check whether compaction is running. The output of compactionstats tells you whether there are pending compactions and the estimated time to completion.
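To keep an eye on the 50 percent guideline mentioned above, a simple disk check can be scripted. This is a minimal sketch: the mount point / is a placeholder, so substitute the filesystem that actually holds your Cassandra data.

```shell
# Warn when disk utilization for a filesystem crosses 50 percent.
# "/" is a placeholder; substitute the mount that holds Cassandra data.
THRESHOLD=50
usage="$(df -P / | awk 'NR==2 {gsub(/%/, "", $5); print $5}')"
if [ "$usage" -gt "$THRESHOLD" ]; then
  echo "WARNING: ${usage}% used; consider adding disk space"
else
  echo "OK: ${usage}% used"
fi
```

You could run this from cron on each Cassandra node and alert on the WARNING output.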
Setting server autostart
An on-premises installation of Edge Private Cloud does not restart automatically during a reboot. You can use the following commands to enable or disable autostart on any node.

To enable all components on the node:

/opt/apigee/apigee-service/bin/apigee-all enable_autostart

To disable all components on the node:

/opt/apigee/apigee-service/bin/apigee-all disable_autostart

To enable or disable autostart for a specific component on the node:

/opt/apigee/apigee-service/bin/apigee-service component_name enable_autostart

/opt/apigee/apigee-service/bin/apigee-service component_name disable_autostart

Where component_name identifies the component. Possible values include:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)

The script only affects the node on which you run it. If you want to configure all
nodes for autostart, run the script on all nodes.
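Because component_name must be one of the values listed above, it can help to validate the name before calling apigee-service. This sketch only prints the command it would run rather than executing it; the component list is copied from the list above.

```shell
# Validate a component name against the known list before toggling autostart.
valid=" apigee-cassandra apigee-openldap apigee-postgresql apigee-qpidd apigee-sso apigee-zookeeper edge-management-server edge-management-ui edge-message-processor edge-postgres-server edge-qpid-server edge-router edge-ui "
component="edge-router"
case "$valid" in
  *" $component "*)
    msg="would run: /opt/apigee/apigee-service/bin/apigee-service $component enable_autostart" ;;
  *)
    msg="unknown component: $component" ;;
esac
echo "$msg"
```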

Note: If you run into issues where the OpenLDAP server does not start automatically on reboot,
try disabling SELinux or setting it to permissive mode.
Note that the order of starting the components is very important:

1. First start ZooKeeper, Cassandra, and LDAP (OpenLDAP).
If ZooKeeper and Cassandra are installed as a cluster, the complete cluster must be up and running before starting any other Apigee component.
2. Then start any other Apigee component (Management Server, Router, UI, etc.). For the Postgres Server, first start postgresql; for the Qpid Server, first start qpidd.

Implications:

● For a complete restart of an Apigee Edge environment, the nodes with ZooKeeper and Cassandra need to be booted completely prior to any other node.
● If any other Apigee component is running on one or more ZooKeeper and
Cassandra nodes, it is not recommended to use autostart. Instead, start the
components in the order described in Starting, stopping, restarting, and
checking the status of Apigee Edge.

Troubleshooting autostart

If you configure autostart, and Edge encounters issues with starting the OpenLDAP
server, you can try disabling SELinux or setting it to permissive mode on all nodes.
To configure SELinux:

1. Edit the /etc/sysconfig/selinux file:

sudo vi /etc/sysconfig/selinux

2. Set SELINUX=disabled or SELINUX=permissive.
3. Save your edits.
4. Restart the machine and then restart Edge:

/opt/apigee/apigee-service/bin/apigee-all restart
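Steps 1 through 3 can also be done non-interactively with sed. The sketch below edits a temporary copy; on a real node the target file is /etc/sysconfig/selinux and the edit requires sudo.

```shell
# Flip SELINUX to permissive mode in a sysconfig-style file.
# A temp copy stands in for /etc/sysconfig/selinux.
CFG="$(mktemp)"
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$CFG"
# Show the resulting setting.
grep '^SELINUX=' "$CFG"
```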

Install the new Edge UI


This document describes how to install the Edge UI for Apigee Edge for Private
Cloud. The Edge UI is the next generation of UI for Edge.

TERMINOLOGY NOTE: In this document:

● The original UI is referred to as the Edge Classic UI.
● The new UI is referred to as the Edge UI. It was formerly referred to as the New Edge experience or the New UE.
Prerequisites
To try out the new Edge UI in an Apigee Edge for Private Cloud installation, you
must:

● Install the Edge UI on its own node. You cannot install it on a node that contains other Edge components, including the node on which the existing Classic UI resides. Doing so will cause users to be unable to log in to the Classic UI.
The Edge UI's node must meet the following requirements:
● Java 1.8
● 4 GB of RAM
● 2-core CPU
● 60 GB disk space
● You must first install the 4.51.00 version of the apigee-setup utility on the node, as described at Install the Edge apigee-setup utility.
● Port 3001 must be open. This is the default port used for requests to the Edge UI. If you change the port by using the properties described in New Edge UI configuration file, make sure that port is open.
● Enable an external IDP on Edge. The Edge UI supports SAML or LDAP as the authentication mechanism.
● (SAML IDP only) The Edge UI only supports TLS v1.2. Because you connect to the SAML IDP over TLS, your IDP must therefore support TLS v1.2.

For more on the Edge UI, see The new Edge UI for Private Cloud.

Installation overview
At a high level, the process for installing the Edge UI for Apigee Edge for Private
Cloud is as follows:

● Add a new node to your cluster
● Log in to the new node
● Download the apigee-setup utility
● Create a configuration file and modify it with your settings
● Execute the apigee-setup utility
● Log in to and test the new Edge UI

When executed on the new node, the apigee-setup utility:

● Installs the base Classic UI, called shoehorn, and configures the Classic UI to use an external IDP to authenticate with Edge. NOTE: You cannot use the installed Classic UI to connect to Edge. It is simply a requirement for installing the new Edge UI.
● Installs the new Edge UI, and configures the Edge UI to use your external IDP
to authenticate with Edge.

Considerations before installing the new Edge UI


As described above in the prerequisites, the Edge UI requires that you enable an
external IDP on Edge. That means user authentication is controlled by the IDP,
where you configure it to use email addresses as the user ID. All Edge UI users
must therefore be registered in the IDP.

The Classic UI, the default UI you installed with Apigee Edge for Private Cloud, does
not require an external IDP. It can use either an IDP or Basic authentication. That
means you can either:

● Enable external IDP support on Edge and on both the Classic UI and the Edge
UI.
In this scenario, all Classic UI and Edge UI users are registered in the IDP.
For information on adding new users to the IDP, see Register new Edge
users.
● Enable external IDP support on Edge, but leave Basic authentication enabled.
The Edge UI uses the IDP and the Classic UI still uses Basic authentication.
In this scenario, all Classic UI users log in with Basic authentication
credentials, where their credentials are stored in the Edge OpenLDAP
database. Edge UI users are registered in the IDP and log in by using either
SAML or LDAP.
However, a Classic UI user cannot log in to the Edge UI until you have added
that user to the IDP as described in Register new Edge users.

Installation configuration changes from previous releases


Be aware of the following changes to the installation procedure from the Beta
releases of the Edge UI.

Installation configuration changes from Edge 4.18.05

Installation configuration changes from Edge 4.18.01

New Edge UI configuration file


The following configuration file contains all the information necessary to install and
configure the new Edge UI. You can use the same configuration file to install and
configure both the shoehorn/Classic UI and the Edge UI.

# IP of the Edge Management Server.
# This node also hosts the Apigee SSO module and the current, or Classic, UI.
IP1=management_server_IP

# IP of the Edge UI node.
IP2=edge_UI_server_IP

# Edge sys admin credentials.
ADMIN_EMAIL=your@email.com
APIGEE_ADMINPW=your_password # If omitted, you are prompted for it.

# Edge Management Server information.
APIGEE_PORT_HTTP_MS=8080
MSIP=$IP1
MS_SCHEME=http

#
# Edge UI configuration.
#

# Enable the Edge UI.
EDGEUI_ENABLE_UNIFIED_UI=y
# Specify IP and port for the Edge UI.
# The management UI port must be open for requests to the Edge UI.
MANAGEMENT_UI_PORT=3001
MANAGEMENT_UI_IP=$IP2
# Set to 'OPDK' to specify a Private Cloud deployment.
MANAGEMENT_UI_APP_ENV=OPDK
# Disable TLS on the Edge UI.
MANAGEMENT_UI_SCHEME=http

# Location of the Edge UI.
MANAGEMENT_UI_PUBLIC_URIS=$MANAGEMENT_UI_SCHEME://$MANAGEMENT_UI_IP:$MANAGEMENT_UI_PORT
MANAGEMENT_UI_SSO_REGISTERED_PUBLIC_URIS=$MANAGEMENT_UI_PUBLIC_URIS
MANAGEMENT_UI_SSO_CSRF_SECRET=YOUR_CSRF_SECRET
# Duration of the CSRF token.
MANAGEMENT_UI_SSO_CSRF_EXPIRATION_HOURS=24
# Defaults to 8760 hours, or 365 days.
MANAGEMENT_UI_SSO_STRICT_TRANSPORT_SECURITY_AGE_HOURS=8760

## SSO configuration for the Edge UI.
MANAGEMENT_UI_SSO_ENABLED=y

# The following are only required if MANAGEMENT_UI_SSO_ENABLED is 'y'.
MANAGEMENT_UI_SSO_CLIENT_OVERWRITE=y
MANAGEMENT_UI_SSO_CLIENT_ID=newueclient
MANAGEMENT_UI_SSO_CLIENT_SECRET=your_client_sso_secret

#
# Shoehorn UI configuration
#
# Set to http even if you enable TLS on the Edge UI.
SHOEHORN_SCHEME=http
SHOEHORN_IP=$MANAGEMENT_UI_IP
SHOEHORN_PORT=9000

#
# Edge Classic UI configuration.
# Some settings are for the Classic UI, but are still required to configure the Edge UI.
#

# These settings assume that the Classic UI is installed on the Management Server.
CLASSIC_UI_IP=$MSIP
CLASSIC_UI_PORT=9000
CLASSIC_UI_SCHEME=http
EDGEUI_PUBLIC_URIS=$CLASSIC_UI_SCHEME://$CLASSIC_UI_IP:$CLASSIC_UI_PORT

# Information about the publicly accessible URL for the Classic UI.
EDGEUI_SSO_REGISTERD_PUBLIC_URIS=$EDGEUI_PUBLIC_URIS

# Enable SSO
EDGEUI_SSO_ENABLED=y

# The name of the OAuth client used to connect to apigee-sso.
# The default client name is 'edgeui'.
# Apigee recommends that you use the same settings as you used
# when enabling your IDP on the Classic UI.
EDGEUI_SSO_CLIENT_NAME=edgeui
# OAuth client secret using uppercase, lowercase, numbers, and special characters.
EDGEUI_SSO_CLIENT_SECRET=ssoClient123
# If set, the existing edgeui client will be deleted and a new one will be created.
EDGEUI_SSO_CLIENT_OVERWRITE=y

# Apigee SSO component configuration.

# Externally accessible IP or DNS name of the Edge SSO module.
SSO_PUBLIC_URL_HOSTNAME=$IP1
SSO_PUBLIC_URL_PORT=9099
# Default is http. Set to https if you enabled TLS on the Apigee SSO module.
# If Apigee SSO uses a self-signed cert, you must also set MANAGEMENT_UI_SKIP_VERIFY to "y".
SSO_PUBLIC_URL_SCHEME=http
# MANAGEMENT_UI_SKIP_VERIFY=y
# SSO admin credentials as set when you installed the Apigee SSO module.
SSO_ADMIN_NAME=ssoadmin
SSO_ADMIN_SECRET=your_sso_admin_secret

#
# SSO configuration (define the external IDP).
#
# Use one of the following configuration blocks to
# define your IDP settings:
# - SAML configuration properties
# - LDAP direct binding configuration properties
# - LDAP indirect binding configuration properties
#
INSERT_IDP_CONFIG_BLOCK_HERE (SAML, LDAP direct, or LDAP indirect)

#
# SMTP configuration (required).
#
SKIP_SMTP=n # Skip now and configure later by specifying "y".
SMTPHOST=smtp.gmail.com
SMTPUSER=your@email.com
SMTPPASSWORD=your_email_password
SMTPSSL=y
SMTPPORT=465 # If no SSL, use a different port, such as 25.
SMTPMAILFROM="My Company <myco@company.com>"

Install the Edge UI


After you create and modify the configuration file, you can install the new Edge UI
on that new node.

To install the Edge UI:

1. Add a new node to your cluster.
2. Log in to the new node as an administrator.
3. Install the 4.51.00 version of the apigee-setup utility on the node, as described at Install the Edge apigee-setup utility.
4. Clean all cached information with Yum by executing the following command:

sudo yum clean all

5. Create the configuration file as described in New Edge UI configuration file and ensure that it is owned by the "apigee" user:

chown apigee:apigee configFile

6. Be sure that you make the following edits to the configuration file:
● Change the value of the MANAGEMENT_UI_SSO_CSRF_SECRET property in the configuration file to your CSRF secret. NOTE: Do not use the default value in the configuration file example above for your CSRF secret. Change it to a unique value that only you know.
● Configure Edge to use one of the following (the new Edge UI requires an external IDP):
● SAML
● LDAP
For more information, see Overview of external IDPs authentication.
7. Configure your external IDP with the users that you want to have access to the Edge UI. For more information, see Register new Edge users.
8. On the new node, execute the following command:

/opt/apigee/apigee-setup/bin/setup.sh -p ue -f configFile

The apigee-setup utility installs the Classic UI. On top of the Classic UI, the installer then installs the Edge UI.
9. Log in to the Edge UI by opening the following URL in a browser:

http://new_edge_UI_IP:3001

Where new_edge_UI_IP is the IP address of the node hosting the new Edge UI. Edge prompts you for your external IDP credentials.
10. Enter your credentials. The new Edge UI appears. For information on using the Edge UI, see The new Edge UI for Private Cloud. If the Edge UI does not display, be sure that port 3001 is open for external connections.

Uninstall the new UI


To uninstall the new UI from its node, you must uninstall both the new Edge UI and
the base Classic UI (shoehorn) that was installed on the node during the UI
installation process.

To uninstall the new Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-management-ui uninstall

To uninstall the base Classic UI (shoehorn):

/opt/apigee/apigee-service/bin/apigee-service edge-ui uninstall

To remove all Edge components from the node:

1. Stop all Edge services running on the machine:

/opt/apigee/apigee-service/bin/apigee-all stop

2. Clear the yum cache:

sudo yum clean all

3. Remove all the Apigee RPMs:

sudo rpm -e $(rpm -qa | egrep "(apigee-|edge-)")

4. Remove the installation root directory:

sudo rm -rf /opt/apigee

How to configure Edge


To configure Edge after installation, you use a combination of .properties files
and Edge utilities. For example, to configure TLS/SSL on the Edge UI, you edit
.properties files to set the necessary properties. Changes to .properties
files require you to restart the affected Edge component.

Apigee refers to the technique of editing .properties files as code with config
(sometimes abbreviated as CwC). Essentially, code with config is a key/value
lookup tool based on settings in .properties files. In code with config, the keys
are referred to as tokens. Therefore, to configure Edge, you set tokens in
.properties files.

Warning: The Edge documentation contains information on how to set tokens for the different
Edge components. However, many tokens are used internally and are not meant to be updated.
Do not modify any undocumented token without first contacting Apigee Edge Support to ensure
that you do not cause any unforeseen side effects.

Code with config allows Edge components to set default values that are shipped
with the product, lets the installation team override those settings based on the
installation topology, and then lets customers override any properties they choose.

If you think of it as a hierarchy, then the settings are arranged as follows, with
customer settings having the highest priority to override any settings from the
installer team or Apigee:

1. Customer
2. Installer
3. Component

Determine the current value of a token


Before you set a new value for a token in a .properties file, you should first
determine its current value by using the following command:

/opt/apigee/apigee-service/bin/apigee-service component_name configure -search token

Where component_name is the name of the component, and token is the token to
inspect.

This command searches the hierarchy of the component's .properties files to determine the current value of the token.
The following example checks the current value of the
conf_http_HTTPRequest.line.limit token for the Router:

/opt/apigee/apigee-service/bin/apigee-service edge-router configure -search conf_http_HTTPRequest.line.limit

You should see output that looks like the following:

Found key conf_http_HTTPRequest.line.limit, with value, 4k, in /opt/apigee/edge-router/token/default.properties

If the token's value starts with a #, it has been commented out and you must use
special syntax to change it. For more information, see Set a token that is currently
commented out.

If you do not know the entire name of the token, use a tool such as grep to search
by the property name or key word. For more information, see Locate a token.

Properties files
There are editable and non-editable component configuration files. This section
describes these files.

Editable component configuration files

The following table lists the Apigee components and the properties files that you
can edit to configure those components:

Component (component name): editable configuration file

● Cassandra (apigee-cassandra): /opt/apigee/customer/application/cassandra.properties
● Apigee SSO (apigee-sso): /opt/apigee/customer/application/sso.properties
● Management Server (edge-management-server): /opt/apigee/customer/application/management-server.properties
● Message Processor (edge-message-processor): /opt/apigee/customer/application/message-processor.properties
● apigee-monit (apigee-monit): /opt/apigee/customer/application/monit.properties
● Classic UI (edge-ui; does not affect the new Edge UI): /opt/apigee/customer/application/ui.properties
● Edge UI (apigee-management-ui; new Edge UI only, does not affect the Classic UI): n/a (use the installation configuration file)
● OpenLDAP (apigee-openldap): /opt/apigee/customer/application/openldap.properties
● Postgres Server (edge-postgres-server): /opt/apigee/customer/application/postgres-server.properties
● PostgreSQL Database (apigee-postgresql): /opt/apigee/customer/application/postgressql.properties
● Qpid Server (edge-qpid-server): /opt/apigee/customer/application/qpid-server.properties
● Qpidd (apigee-qpidd): /opt/apigee/customer/application/qpidd.properties
● Router (edge-router): /opt/apigee/customer/application/router.properties
● ZooKeeper (apigee-zookeeper): /opt/apigee/customer/application/zookeeper.properties

If you want to set a property in one of these component configuration files but it
does not exist, you can create it in the location listed above.

In addition, you must ensure that the properties file is owned by the "apigee" user:

chown apigee:apigee
/opt/apigee/customer/application/configuration_file.properties

Non-editable component configuration files

In addition to the editable component configuration files, there are also configuration files that you cannot edit.

CAUTION: You can only modify the .properties files under the /opt/apigee/customer/application directory. While you can view the configuration files listed below, do not modify them.

Informational (non-editable) files include the following:

Owner: filename or directory

● Installation: /opt/apigee/token
● Component: /opt/apigee/component_name/conf

Where component_name identifies the component. Possible values include:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management
Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message
Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)

Set a token value


You can only modify the .properties files in the
/opt/apigee/customer/application directory. Each component has its own
.properties file in that directory. For example, router.properties and
management-server.properties. For a complete list of properties files, see
Location of .properties files.

Note: If you have not set any properties for a component, the
/opt/apigee/customer/application directory might not contain a .properties file for
the component. In that case, create one.

To create a .properties file:

1. Create a new text file in an editor. The file name must match the list shown in the table above for customer files.
2. Change the owner of the file to "apigee:apigee", as the following example shows:

chown apigee:apigee /opt/apigee/customer/application/router.properties

3. If you changed the user that runs the Edge service from the "apigee" user, use chown to change the ownership to the user that runs the Edge service.

When you upgrade Edge, the .properties files in the /opt/apigee/customer/application directory are read. That means the upgrade retains any properties that you set on the component.

To set the value of a token:

1. Edit the component's .properties file.
2. Add or change the value of the token. The following example sets the value of the conf_http_HTTPRequest.line.limit property to "10k":

conf_http_HTTPRequest.line.limit=10k

3. If the token takes multiple values, separate each value with a comma, as the following example shows:

conf_security_rbac.restricted.resources=/environments,/environments/*,/environments/*/virtualhosts,/environments/*/virtualhosts/*,/pods,/environments/*/servers,/rebuildindex,/users/*/status,/myuri/*

To add a new value to a list such as this, you typically append the new value to the end of the list.
4. Restart the component:

/opt/apigee/apigee-service/bin/apigee-service component_name restart

Where component_name is one of the following:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
For example, after editing router.properties, restart the Router:

/opt/apigee/apigee-service/bin/apigee-service edge-router restart

5. (Optional) Check that the token value is set to your new value by using the configure -search option. For example:

/opt/apigee/apigee-service/bin/apigee-service edge-router configure -search conf_http_HTTPRequest.line.limit

For more information about configure -search, see Determine the current value of a token.
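The write-then-verify portion of the steps above can be sketched as follows. A temporary file stands in for /opt/apigee/customer/application/router.properties; on a real node you would then restart the Router and use configure -search to confirm the new value.

```shell
# Set a token, then confirm it is present before restarting the component.
PROPS="$(mktemp)"   # stand-in for router.properties
echo 'conf_http_HTTPRequest.line.limit=10k' >> "$PROPS"
grep '^conf_http_HTTPRequest' "$PROPS"
# Real-node follow-up (not run here):
#   /opt/apigee/apigee-service/bin/apigee-service edge-router restart
#   /opt/apigee/apigee-service/bin/apigee-service edge-router configure -search conf_http_HTTPRequest.line.limit
```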

Locate a token
In most cases, the tokens you need to set are identified in this guide. However, if
you need to override the value of an existing token whose full name or location you
are unsure of, use grep to search the component's source directory.

For example, if you know that in a previous release of Edge you set the
session.maxAge property and want to know the token value used to set it, then
grep for the property in the /opt/apigee/edge-ui/source directory:

grep -ri "session.maxAge" /opt/apigee/edge-ui/source

You should see a result in the following form:

/opt/apigee/component_name/source/conf/application.conf:property_name={T}token_name{/T}

The following example shows the value of the UI's session.maxAge token:

/opt/apigee/edge-ui/source/conf/application.conf:session.maxAge={T}conf_application_session.maxage{/T}

The string between the {T}{/T} tags is the name of the token that you can set in the
UI's .properties file.
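Extracting the token name between the {T} and {/T} tags can be automated. This sketch applies sed to the example line above; the same pattern works on any grep hit of this form.

```shell
# Pull the token name out of a grep hit of the form ...={T}token{/T}.
line='/opt/apigee/edge-ui/source/conf/application.conf:session.maxAge={T}conf_application_session.maxage{/T}'
token="$(printf '%s\n' "$line" | sed -n 's|.*{T}\(.*\){/T}.*|\1|p')"
echo "$token"
```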

Set a token that is currently commented out


Some tokens are commented out in the Edge configuration files. If you try to set a
token that is commented out in an install or component configuration file, your
setting is ignored.

To set the value of a token that is commented out in an Edge configuration file, use
special syntax in the following form:

conf/filename+propertyName=propertyValue

For example, to set the property named HTTPClient.proxy.host on the Message Processor, first grep for the property to determine its token:

grep -ri /opt/apigee/edge-message-processor/ -e "HTTPClient.proxy.host"

The grep command returns results that include the token name. Notice how the
property name is commented out, as indicated by the # prefix:
source/conf/http.properties:#HTTPClient.proxy.host={T}conf_http_HTTPClient.proxy.host{/T}
token/default.properties:conf_http_HTTPClient.proxy.host=
conf/http.properties:#HTTPClient.proxy.host=

To set the value of this property, edit /opt/apigee/customer/application/message-processor.properties, but use a special syntax, as the following example shows:

conf/http.properties+HTTPClient.proxy.host=myhost.name.com

In this case, you must prefix the property name with conf/http.properties+.
This is the location and name of the configuration file containing the property
followed by "+".

After you restart the Message Processor, examine the file /opt/apigee/edge-message-processor/conf/http.properties:

cat /opt/apigee/edge-message-processor/conf/http.properties

At the end of the file, you will see the property set, in the form:

conf/http.properties:HTTPClient.proxy.host=myhost.name.com
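The special syntax can be generated from the commented-out line itself. In this sketch, a temporary file stands in for the shipped conf file; sed strips the # prefix and the value, and the script prints the override line to add to the customer .properties file.

```shell
# Build a conf/filename+property override from a commented-out property.
CONF="$(mktemp)"   # stand-in for /opt/apigee/edge-message-processor/conf/http.properties
echo '#HTTPClient.proxy.host=' > "$CONF"
prop="$(sed -n 's/^#\([^=]*\)=.*/\1/p' "$CONF")"
override="conf/http.properties+${prop}=myhost.name.com"
echo "$override"
```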

Configure a forward proxy for requests from the Send Requests section of the Trace UI

This section explains how to configure a forward proxy, with optional proxy credentials, for requests sent from the Send Requests section of the Trace UI. To configure the forward proxy:

1. Edit /opt/apigee/customer/application/ui.properties and make sure the file is owned by apigee:apigee.
2. Add the following overrides, changing the values to your specific proxy configuration:

conf_application_http.proxyhost=proxy.example.com
conf_application_http.proxyport=8080
conf_application_http.proxyuser=apigee
conf_application_http.proxypassword=Apigee123!

3. Save the file and restart the Classic UI.
About planets, regions, pods, organizations, environments, and virtual hosts

An on-premises installation of Edge Private Cloud, or an Edge instance, consists of multiple Edge components installed on a set of server nodes. The following image shows the relationships among the planet, regions, pods, organizations, environments, and virtual hosts that make up an Edge instance:

The following table describes these relationships:

● Planet: contains one or more regions. Default: n/a.
● Region: contains one or more pods. Default: "dc-1".
● Pod: contains one or more Edge components. Defaults: "gateway", "central", "analytics".
● Organization: contains one or more environments; associated with one or more pods containing Message Processors, and with a user acting as the org admin. Default: none.
● Environment: contains one or more virtual hosts; associated with one or more Message Processors in a pod associated with the parent organization. Default: none.
● Virtual Host: contains one or more host aliases. Default: none.

About Planets
A planet represents an entire Edge hardware and software environment and can
contain one or more regions. In Edge, a planet is a logical grouping of regions: you
do not explicitly create or configure a planet as part of installing Edge.

About Regions
A region is a grouping of one or more pods. By default, when you install Edge, the
installer creates a single region named "dc-1" containing three pods, as the
following table shows:

Region Pods in the region

"dc-1" "gateway", "central", "analytics"

The following image shows the default regions:


This image shows the load balancer directing traffic to the "gateway" pod. The
"gateway" pod contains the Edge Router and Message Processor components that
handle API requests. Unless you define multiple data centers, you should not have
to create additional regions.

In a more complex installation, you can create two or more regions. One reason to
create multiple regions is to organize machines geographically, which minimizes
network transit time. In this scenario, you host API endpoints so that they are
geographically close to the consumers of those APIs.

In Edge, each region is referred to as a data center. A data center in the Eastern US
can then handle requests arriving from Boston, Massachusetts, while a data center
in Singapore can handle requests originating from devices or computers in Asia.

For example, the following image shows two regions, corresponding to two data
centers:
About Pods
A pod is a grouping of one or more Edge components and Cassandra datastores.
The Edge components can be installed on the same node, but are more commonly
installed on different nodes. A Cassandra datastore is a data repository used by the
Edge components in the pod.

By default, when you install Edge, the installer creates three pods and associates
the following Edge components and Cassandra datastores with each pod:

● "gateway" — Edge components: Router, Message Processor. Cassandra datastores: cache-datastore, counter-datastore, dc-datastore, keyvaluemap-datastore, kms-datastore.
● "central" — Edge components: Management Server, ZooKeeper, LDAP, UI, Qpid. Cassandra datastores: application-datastore, apimodel-datastore, audit-datastore, auth-datastore, identityzone-datastore, edgenotification-datastore, management-server, scheduler-datastore, user-settings-datastore.
● "analytics" — Edge components: Postgres. Cassandra datastores: analytics-datastore, reportcrud-datastore.

The Edge components and Cassandra datastores in the "gateway" pod are required
for API processing. These components and datastores must be up and running to
process API requests. The components and datastores in the "central" and
"analytics" pods are not required to process APIs, but add additional functionality to
Edge.

The following image shows the components in each pod:


You can add additional Message Processor and Router pods to the three that are
created by default. Alternatively, you can add additional Edge components to an
existing pod. For example, you can add additional Routers and Message Processors
to the "gateway" pod to handle increased traffic loads.

Notice that the "gateway" pod contains the Edge Router and Message Processor
components. Routers only send requests to Message Processors in the same pod
and not to Message Processors in other pods.

You can use the following API call to view server registration details at the end of
the installation for each pod. This is a useful monitoring tool.

curl -u adminEmail:pword http://ms_IP:8080/v1/servers?pod=podName

Where ms_IP is the IP address or DNS name of the Management Server, and
podName is one of the following:

● gateway
● central
● analytics

For example, for the "gateway" pod:


curl -u adminEmail:pword http://ms_IP:8080/v1/servers?pod=gateway

Apigee returns output similar to the following:

[{
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "jmx.rmi.port",
"value" : "1101"
}, ... ]
},
"type" : [ "message-processor" ],
"uUID" : "276bc250-7dd0-46a5-a583-fd11eba786f8"
},
{
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ ]
},
"type" : [ "dc-datastore", "management-server", "cache-datastore", "keyvaluemap-datastore", "counter-datastore", "kms-datastore" ],
"uUID" : "13cee956-d3a7-4577-8f0f-1694564179e4"
},
{
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "jmx.rmi.port",
"value" : "1100"
}, ... ]
},
"type" : [ "router" ],
"uUID" : "de8a0200-e405-43a3-a5f9-eabafdd990e2"
}]

The type attribute lists the component type. Note that it lists the Cassandra
datastores registered in the pod. While Cassandra nodes are installed in the
"gateway" pod, you will see Cassandra datastores registered with all pods.

About Organizations
An organization is a container for all the objects in an Apigee account, including
APIs, API products, apps, and developers. An organization is associated with one or
more pods, where each pod must contain one or more Message Processors.

Note: In the default installation, where you only have a single "gateway" pod that contains all the Message Processors, you only associate an organization with the "gateway" pod. Unless you define multiple data centers and multiple Message Processor pods, you cannot associate an organization with multiple pods.

In an on-premises installation of Edge Private Cloud, there are no organizations by default. When you create an organization, you specify two pieces of information:

1. A user who functions as the organization administrator. That user can then
add additional users to the organization, and set the role of each user.
2. The "gateway" pod, the pod containing the Message Processors.

An organization can contain one or more environments. The default Edge installation procedure prompts you to create two environments: "test" and "prod". However, you can create more environments as necessary, such as "staging", "experiments", etc.

An organization provides the scope for some Apigee capabilities. For example, key-value map (KVM) data is available at the organization level, meaning from all environments. Other capabilities, such as caching, are scoped to a specific environment. Apigee analytics data is partitioned by a combination of organization and environment.

Shown below are the major objects of an organization, including those defined globally in the organization and those defined specifically for an environment:

About Environments
An environment is a runtime execution context for the API proxies in an
organization. You must deploy an API proxy to an environment before it can be
accessed. You can deploy an API proxy to a single environment or to multiple
environments.

An organization can contain multiple environments. For example, you might define
a "dev", "test", and "prod" environment in an organization.

When you create an environment, you associate it with one or more Message
Processors. You can think of an environment as a named set of Message
Processors on which API proxies run. Every environment can be associated with
the same Message Processors, or with different ones.

To create an environment, specify two pieces of information:

1. The organization containing the environment.
2. The Message Processors that handle API proxy requests to the environment.
These Message Processors must be in a pod associated with the
environment's parent organization.
By default, when you create an environment, Edge associates all available
Message Processors in the "gateway" pod with the environment.
Alternatively, you can specify a subset of the available Message Processors
so that different Message Processors handle requests to different
environments.

A Message Processor can be associated with multiple environments. For example, suppose your Edge installation contains two Message Processors: A and B. You then create three environments in your organization: "dev", "test", and "prod":
● For the "dev" environment, you associate Message Processor A because you
do not expect a large volume of traffic.
● For the "test" environment, you associate Message Processor B because you
do not expect a large volume of traffic.
● For the "prod" environment, you associate both Message Processors A and B
to handle the production-level volume.

The Message Processors assigned to an environment can all be from the same
pod, or can be from multiple pods, spanning multiple regions and data centers. For
example, you define the environment "global" in your organization that includes
Message Processors from three regions, meaning three different data centers: US,
Japan, and Germany.

Deploying an API proxy to the "global" environment causes the API proxy to run on Message Processors in all three data centers. API traffic arriving at a Router in any one of those data centers is directed only to Message Processors in that data center, because Routers only direct traffic to Message Processors in the same pod.
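To make the routing constraint concrete, the following Python sketch models pods as a simple data structure and shows that a Router's eligible targets never cross pod boundaries. The pod and server names here are hypothetical, not Apigee identifiers:

```python
# Hypothetical model: each pod groups the Routers and Message Processors
# installed into it. All names are illustrative only.
PODS = {
    "gateway-us": {"routers": ["r-us-1"], "message_processors": ["mp-us-1", "mp-us-2"]},
    "gateway-jp": {"routers": ["r-jp-1"], "message_processors": ["mp-jp-1"]},
}

def eligible_targets(router_name):
    """Return the Message Processors a Router can dispatch to:
    only those registered in the Router's own pod."""
    for pod in PODS.values():
        if router_name in pod["routers"]:
            return pod["message_processors"]
    raise ValueError("unknown router: " + router_name)

print(eligible_targets("r-us-1"))
# ['mp-us-1', 'mp-us-2']
```

Even though "mp-jp-1" serves the same environment, the US Router never selects it, which is exactly the behavior described above.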

About virtual hosts


A virtual host defines the port on the Edge Router on which an API proxy is exposed,
and, by extension, the URL that apps use to access the API proxy. Every
environment must define at least one virtual host.

Ensure that the port number specified by the virtual host is open on the Router
node. You can then access an API proxy by making a request to the following URLs:

http://routerIP:port/proxy-base-path/resource-name

https://routerIP:port/proxy-base-path/resource-name

Where:

● http or https: If the virtual host is configured to support TLS/SSL, use HTTPS. If the virtual host does not support TLS/SSL, use HTTP.
● routerIP:port is the IP address and port number of the virtual host.
● proxy-base-path and resource-name are defined when you create the API
proxy.

Typically, you do not publish your APIs to customers with an IP address and port
number. Instead, you define a DNS entry for the Router and port. For example:

http://myAPI.myCo.com/proxy-base-path/resource-name
https://myAPI.myCo.com/proxy-base-path/resource-name

You also must create a host alias for the virtual host that matches the domain name
of the DNS entry. From the example above, you would specify a host alias of
myAPI.myCo.com. If you do not have a DNS entry, set the host alias to the IP
address of the Router and port of the virtual host, as routerIP:port.
For more, see About virtual hosts.

Creating your first organization, environment, and virtual host
After you complete the Edge installation process, your first action is typically to
create an organization, environment, and virtual host through the "onboarding"
process. To perform onboarding, run the following command on the Edge
Management Server node:

/opt/apigee/apigee-service/bin/apigee-service apigee-provision setup-org -f configFile

This command takes as input a config file that defines a user, organization,
environment, and virtual host.

For example, you create:

● A user of your choosing to function as the organization administrator
● An organization named example
● An environment in the organization named prod that is associated with all
Message Processors in the "gateway" pod
● A virtual host in the environment named default that allows HTTP access
on port 9001
● Host alias for the virtual host

After running that script, you can access your APIs by using a URL in the form:

http://routerIP:9001/proxy-base-path/resource-name

You can later add any number of organizations, environments, and virtual hosts.

For more information, see Onboard an organization.
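As an illustration, a silent config file for setup-org generally resembles the sketch below. The property names follow the standard Edge onboarding configuration, but treat every value as a placeholder and verify the keys against the onboarding documentation for your Edge version before use:

```
# IP or DNS name of the Management Server node.
IP1=192.168.1.1
# Sys admin credentials set at install time.
ADMIN_EMAIL=opdk@example.com
APIGEE_ADMINPW=ADMIN_PASSWORD
# Create a new user to act as organization administrator.
NEW_USER="y"
USER_NAME=orgadmin@example.com
FIRST_NAME=first
LAST_NAME=last
USER_PWD="USER_PASSWORD"
# Organization, environment, and virtual host to create.
ORG_NAME=example
ENV_NAME=prod
VHOST_PORT=9001
VHOST_NAME=default
VHOST_ALIAS="$IP1:9001"
```

The organization name must be lowercase; the virtual host alias here falls back to the Router IP and port because no DNS entry is assumed.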

Using the apigee-adminapi.sh utility


The apigee-adminapi.sh utility calls the Edge management API to perform many maintenance tasks.

About apigee-adminapi.sh
Advantages

Comparison with apigee-provision


Install apigee-adminapi.sh

Invoke apigee-adminapi.sh
You invoke apigee-adminapi.sh from a Management Server node. When you
invoke the utility, you must define the following as either environment variables or
command line options:

● ADMIN_EMAIL (corresponds to the --admin command line option)
● ADMIN_PASSWORD (--pwd)
● EDGE_SERVER (--host)

The following example invokes apigee-adminapi.sh and passes the required values as command line options:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh buildinfo list --admin user@example.com --pwd abcd1234 --host localhost

The following example defines the required options as temporary environment variables and then invokes the apigee-adminapi.sh utility:

export ADMIN_EMAIL=user@example.com
export ADMIN_PASSWORD=abcd1234
export EDGE_SERVER=192.168.56.101
/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh servers list

If you do not pass the password as an option or define it as an environment variable, apigee-adminapi.sh will prompt you to enter it.

Set apigee-adminapi.sh parameters


You must enter all parameters to a command by using either command-line switches or environment variables. Prefix the command line switches with a single dash (-) or double dash (--) as required.

For example, you can specify the organization name by either:

● Using the -o command line switch:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs -o testOrg

● Setting an environment variable named ORG:

export ORG=testOrg
/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs

NOTE: You typically use environment variables to set ADMIN_EMAIL and EDGE_SERVER, and
optionally ADMIN_PASSWORD. These parameters are used by most commands. The concern
with setting other params, such as ORG, by using environment variables is that the setting
might be correct for one command but could be incorrect for a subsequent command. For
example, if you forget to reset the environment variable, you might inadvertently pass the
wrong value to the next command.

If you omit any required parameters to the command, the utility displays an error message describing the missing parameters. For example, if you omit the --host option (which corresponds to the EDGE_SERVER environment variable), apigee-adminapi.sh responds with the following error:

Error with required variable or parameter


ADMIN_PASSWORD....OK
ADMIN_EMAIL....OK
EDGE_SERVER....null

If you receive an HTTP STATUS CODE: 401 error, then you entered the wrong
password.

Get apigee-adminapi.sh help


At any time, use the tab key to display a prompt that lists the available command
options.

To see all possible commands, invoke the utility with no options:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh

If you press the tab key after typing apigee-adminapi.sh, you will see the list of
possible options:

analytics classification logsessions regions securityprofile userroles
buildinfo GET orgs runtime servers users

The tab key displays options based on the context of the command. If you press the tab key after typing:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs

You will see the possible options for completing the orgs command:

add apis apps delete envs list pods userroles

Use the -h option to display help for any command. For example, if you use the -h
option as shown below:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs -h

The utility displays complete help information for all possible options to the orgs
command. The first item in the output shows the help for the orgs add command:
+++++++++++++++++++++++++++++++++++++++++++
orgs add
Required:
-o ORG Organization name
Optional:
-H HEADER add http header in request
--admin ADMIN_EMAIL admin email address
--pwd ADMIN_PASSWORD admin password
--host EDGE_SERVER edge server to make request to
--port EDGE_PORT port to use for the http request
--ssl set EDGE_PROTO to https, defaults to http
--debug ( set in debug mode, turns on verbose in curl )
-h Displays Help

Pass a file to apigee-adminapi.sh


The apigee-adminapi.sh utility is a wrapper around curl. As a result, some
commands correspond to PUT and POST API calls that take a request body. For
example, creating a virtual host corresponds to a POST API call that requires
information about the virtual host in the request body.

When using the apigee-adminapi.sh utility to create a virtual host, or any command that takes a request body, you can pass all of the necessary information on the command line as shown below:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs envs virtual_hosts add -e prod -o testOrg --host localhost --admin foo@bar.com -v myVHostUtil -p 9005 -a 192.168.56.101:9005

Or, you can pass a file containing the same information as would be contained in
the request body of the POST. For example, the following command takes a file
defining the virtual host:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs envs virtual_hosts add -e prod -o testOrg --host localhost --admin foo@bar.com -f vhostcreate

Where the file vhostcreate contains the POST body of the call. In this example, it
is an XML-formatted request body:

<VirtualHost name="myVHostUtil">
<HostAliases>
<HostAlias>192.168.56.101:9005</HostAlias>
</HostAliases>
<Interfaces/>
<Port>9005</Port>
</VirtualHost>
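If you generate such request bodies programmatically, a small helper keeps the XML well-formed. Here is a sketch using only the Python standard library; the element names mirror the example body above, and the helper itself is illustrative, not part of Edge:

```python
import xml.etree.ElementTree as ET

def build_vhost_body(name, host_alias, port):
    """Build the XML request body for creating a virtual host."""
    vhost = ET.Element("VirtualHost", name=name)
    aliases = ET.SubElement(vhost, "HostAliases")
    ET.SubElement(aliases, "HostAlias").text = host_alias
    ET.SubElement(vhost, "Interfaces")  # empty element, as in the example
    ET.SubElement(vhost, "Port").text = str(port)
    return ET.tostring(vhost, encoding="unicode")

body = build_vhost_body("myVHostUtil", "192.168.56.101:9005", 9005)
print(body)
```

Writing the returned string to a file gives you something you can pass with the -f option exactly as shown above.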

Display debug and API information


Use the --debug option to the apigee-adminapi.sh utility to display detailed
information about the command. This information includes the curl command
generated by the apigee-adminapi.sh utility to perform the operation.

For example, the following command uses the --debug option. The results display
the underlying curl command's output in verbose mode:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs add -o testOrg2 --admin foo@bar.com --host localhost --debug
curl -H Content-Type: application/xml -v -X POST -s -k -w \n==> %{http_code}
-u ***oo@bar.com:***** http://localhost:8080/v1/o -d <Organization
name="testOrg2"
type="paid"/>
* About to connect() to localhost port 8080 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 8080 (#0)
* Server auth using Basic with user 'foo@bar.com'
> POST /v1/o HTTP/1.1
> Authorization: Basic c2dp234234NvbkBhcGlnZ2342342342342341Q5
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1
Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> Content-Type: application/xml
> Content-Length: 43
>
} [data not shown]
< HTTP/1.1 201 Created
< Content-Type: application/json
< Date: Tue, 03 May 2016 02:08:32 GMT
< Content-Length: 291
<
{ [data not shown]
* Connection #0 to host localhost left intact
* Closing connection #0

Important data to remember from the installation process
Before you can perform any maintenance tasks, you need to know a few things
about your Apigee Edge environment that are set during the system installation
process. Be sure that you have the following information available from your
operations team:

● Pods:
  ● Gateway pod name (default "gateway")
  ● Central pod name (default "central")
  ● Analytics pod name (default "analytics")
● Region name (default "dc-1")
● Administrative user ID and password

Other important information, such as the Apigee version number or the unique
identifiers (UUIDs) of an Apigee component, can be found as follows:

● Use the following command to display the status of all Edge components installed on the node:

/opt/apigee/apigee-service/bin/apigee-all status

● Use the following command to display the version numbers of all Edge components installed on the node:

/opt/apigee/apigee-service/bin/apigee-all version

● Use the following API call to display the UUID of installed Edge components:

curl http://management_server_IP:port_num/v1/servers/self -u adminEmail:password

Where management_server_IP is the IP address or DNS name of the Management Server node, and port_num is one of the following:
  ● 8081 for the Router
  ● 8082 for the Message Processor
  ● 8083 for Qpid
  ● 8084 for Postgres

Reset Edge passwords


After you complete the installation, you can reset the following passwords:

● OpenLDAP
● Apigee Edge system administrator
● Edge organization user
● Cassandra

Instructions on resetting each of these passwords are included in the sections that
follow.

Reset OpenLDAP password


How you reset the OpenLDAP password depends on your Edge configuration, in which OpenLDAP can be installed as:

● A single instance of OpenLDAP installed on the Management Server node. For example, in a 2-node, 5-node, or 9-node Edge configuration.
● Multiple OpenLDAP instances installed on Management Server nodes, configured with OpenLDAP replication. For example, in a 12-node Edge configuration.
● Multiple OpenLDAP instances installed on their own nodes, configured with OpenLDAP replication. For example, in a 13-node Edge configuration.

For a single instance of OpenLDAP installed on the Management Server, perform the following:

1. On the Management Server node, run the following command to create the new OpenLDAP password:

/opt/apigee/apigee-service/bin/apigee-service apigee-openldap \
change-ldap-password -o OLD_PASSWORD -n NEW_PASSWORD

2. Run the following command to store the new password for access by the Management Server:

/opt/apigee/apigee-service/bin/apigee-service edge-management-server \
store_ldap_credentials -p NEW_PASSWORD

This command restarts the Management Server.

In an OpenLDAP replication setup with OpenLDAP installed on Management Server nodes, follow the above steps on both Management Server nodes to update the password.

Note: You must set identical new passwords on both LDAP servers in order to achieve
successful OpenLDAP replication.

In an OpenLDAP replication setup with OpenLDAP installed on nodes other than the Management Server, ensure that you first change the password on both OpenLDAP nodes, then on both Management Server nodes.

Reset system admin password


Resetting the system admin password requires you to reset the password in two
places:

● Management Server
● UI

Note: You cannot reset the system admin password from the Edge UI. You must use the procedure below to reset it.

Note: If you have enabled TLS on the Edge UI, you must also re-enable it as part of changing the sys admin password.

Caution: You should stop the Edge UI before resetting the system admin password. Because you reset the password first on the Management Server, there can be a short period of time when the UI is still using the old password. If the UI makes more than three calls using the old password, the OpenLDAP server locks out the system admin account for three minutes.

To reset the system admin password:

1. Edit the silent config file that you used to install the Edge UI to set the following properties:

APIGEE_ADMINPW=NEW_PASSWORD
SMTPHOST=smtp.gmail.com
SMTPPORT=465
SMTPUSER=foo@gmail.com
SMTPPASSWORD=bar
SMTPSSL=y
SMTPMAILFROM="My Company <myco@company.com>"

Note that you have to include the SMTP properties when passing the new password because all properties on the UI are reset.
2. On the UI node, stop the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui stop

3. Use the apigee-setup utility to reset the password on the Edge UI from the config file:

/opt/apigee/apigee-setup/bin/setup.sh -p ui -f configFile

4. (Only if TLS is enabled on the UI) Re-enable TLS on the Edge UI as described in Configuring TLS for the management UI.
5. On the Management Server, create a new XML file. In this file, set the user ID to "admin" and define the password, first name, last name, and email address using the following format:

<User id="admin">
<Password><![CDATA[password]]></Password>
<FirstName>first_name</FirstName>
<LastName>last_name</LastName>
<EmailId>email_address</EmailId>
</User>

6. On the Management Server, execute the following command:

curl -u "admin_email_address:admin_password" -H \
"Content-Type: application/xml" -H "Accept: application/json" -X POST \
"http://localhost:8080/v1/users/admin_email_address" -d @your_data_file

Where your_data_file is the file you created in the previous step. Edge updates your admin password on the Management Server.
7. Delete the XML file you created. Passwords should never be permanently stored in clear text.

In an OpenLDAP replication environment with multiple Management Servers, resetting the password on one Management Server updates the other Management Server automatically. However, you have to update all Edge UI nodes separately.

Reset organization user password


To reset the password for an organization user, use the apigee-service utility to invoke apigee-setup, as the following example shows:

/opt/apigee/apigee-service/bin/apigee-service apigee-setup reset_user_password
[-h]
[-u USER_EMAIL]
[-p USER_PWD]
[-a ADMIN_EMAIL]
[-P APIGEE_ADMINPW]
[-f configFile]

For example:

/opt/apigee/apigee-service/bin/apigee-service apigee-setup reset_user_password -u user@myCo.com -p Foo12345 -a admin@myCo.com -P adminPword

Note: If you omit the admin email, then it uses the default admin email set at installation time.
If you omit the admin password, then the script prompts for it. All other parameters must be set
on the command line, or in a config file.

Shown below is an example config file that you can use with the "-f" option:

USER_NAME=user@myCo.com
USER_PWD="Foo12345"
APIGEE_ADMINPW=ADMIN_PASSWORD

You can also use the Update user API to change the user password.

SysAdmin and organization user password rules


Use this section to enforce a desired level of password length and strength for your API management users. The settings use a series of preconfigured (and uniquely numbered) regular expressions to check password content (such as uppercase, lowercase, numbers, and special characters). Write these settings to the /opt/apigee/customer/application/management-server.properties file. If that file does not exist, create it.

After editing management-server.properties, restart the management server:

/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart

You can then set password strength ratings by grouping different combinations of
regular expressions. For example, you can determine that a password with at least
one uppercase and one lowercase letter gets a strength rating of "3", but that a
password with at least one lowercase letter and one number gets a stronger rating
of "4".

Note: By default, the Edge UI auto generates passwords that are compatible with the default
password rules defined on the Edge Management Server. If you change these rules, you should
also update the Edge UI so that it auto generates user passwords appropriately. See Auto-
generate Edge UI passwords for more.

Property

conf_security_password.validation.minimum.password.length=8
conf_security_password.validation.default.rating=2
conf_security_password.validation.minimum.rating.required=3

Description

Use these to determine the overall characteristics of valid passwords. The default minimum rating required for password strength (described later in the table) is 3. Notice that conf_security_password.validation.default.rating=2 is lower than the minimum rating required, which means that if a password entered falls outside of the rules you configure, the password is rated a 2 and is therefore invalid (below the minimum rating of 3).

Following are regular expressions that identify password characteristics. Note that each one is numbered. For example, conf_security_password.validation.regex.5=... is expression number 5. You'll use these numbers in a later section of the file to set different combinations that determine overall password strength.

conf_security_password.validation.regex.1=^(.)\\1+$
    1: All characters repeat
conf_security_password.validation.regex.2=^.*[a-z]+.*$
    2: At least one lowercase letter
conf_security_password.validation.regex.3=^.*[A-Z]+.*$
    3: At least one uppercase letter
conf_security_password.validation.regex.4=^.*[0-9]+.*$
    4: At least one digit
conf_security_password.validation.regex.5=^.*[^a-zA-z0-9]+.*$
    5: At least one special character (not including underscore _)
conf_security_password.validation.regex.6=^.*[_]+.*$
    6: At least one underscore
conf_security_password.validation.regex.7=^.*[a-z]{2,}.*$
    7: More than one lowercase letter
conf_security_password.validation.regex.8=^.*[A-Z]{2,}.*$
    8: More than one uppercase letter
conf_security_password.validation.regex.9=^.*[0-9]{2,}.*$
    9: More than one digit
conf_security_password.validation.regex.10=^.*[^a-zA-z0-9]{2,}.*$
    10: More than one special character (not including underscore)
conf_security_password.validation.regex.11=^.*[_]{2,}.*$
    11: More than one underscore

The following rules determine password strength based on password content. Each rule includes one or more regular expressions from the previous section and assigns a numeric strength to it. The numeric strength of a password is compared to the conf_security_password.validation.minimum.rating.required number at the top of this file to determine whether or not a password is valid.

conf_security_password.validation.rule.1=1,AND,0
conf_security_password.validation.rule.2=2,3,4,AND,4
conf_security_password.validation.rule.3=2,9,AND,4
conf_security_password.validation.rule.4=3,9,AND,4
conf_security_password.validation.rule.5=5,6,OR,4
conf_security_password.validation.rule.6=3,2,AND,3
conf_security_password.validation.rule.7=2,9,AND,3
conf_security_password.validation.rule.8=3,9,AND,3

Each rule is numbered. For example, conf_security_password.validation.rule.3=... is rule number 3. Each rule uses the following format (right of the equals sign):

regex-index-list,[AND|OR],rating

regex-index-list is the list of regular expressions (by number from the previous section), along with an AND|OR operator (meaning, consider all or any of the expressions listed). rating is the numeric strength rating given to each rule.

For example, rule 5 means that any password with at least one special character OR one underscore gets a strength rating of 4. With conf_security_password.validation.minimum.rating.required=3 at the top of the file, a password with a 4 rating is valid.

conf_security_rbac.password.validation.enabled=true
    Set role-based access control password validation to false when single sign-on (SSO) is enabled. Default is true.
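To see how the ratings combine, here is a Python sketch of the validation logic. It implements only a subset of the documented rules (rules 2, 5, and 6), and the exact evaluation order inside Edge is an assumption; the regexes themselves are copied from the table above:

```python
import re

# Regexes copied from the table above, keyed by their regex.N number.
# Note: the shipped class [^a-zA-z0-9] (A-z, lowercase z) also excludes
# _ and a few punctuation characters, which is why "special character"
# is documented as not including underscore.
REGEXES = {
    2: r"^.*[a-z]+.*$",         # at least one lowercase letter
    3: r"^.*[A-Z]+.*$",         # at least one uppercase letter
    4: r"^.*[0-9]+.*$",         # at least one digit
    5: r"^.*[^a-zA-z0-9]+.*$",  # at least one special character (not _)
    6: r"^.*[_]+.*$",           # at least one underscore
}

# A subset of the documented rules: (regex index list, operator, rating).
RULES = [
    ([2, 3, 4], "AND", 4),  # rule 2: lower AND upper AND digit -> 4
    ([5, 6], "OR", 4),      # rule 5: special char OR underscore -> 4
    ([3, 2], "AND", 3),     # rule 6: upper AND lower -> 3
]

DEFAULT_RATING = 2
MINIMUM_RATING_REQUIRED = 3
MINIMUM_LENGTH = 8

def rate(password):
    """Return the highest rating of any matching rule, else the default."""
    rating = DEFAULT_RATING
    for indexes, op, rule_rating in RULES:
        hits = [re.match(REGEXES[i], password) is not None for i in indexes]
        if all(hits) if op == "AND" else any(hits):
            rating = max(rating, rule_rating)
    return rating

def is_valid(password):
    return len(password) >= MINIMUM_LENGTH and rate(password) >= MINIMUM_RATING_REQUIRED

print(rate("Abcd1234"), is_valid("abcdefgh"))
# 4 False
```

With these settings, "Abcd1234" matches rule 2 and rates a 4 (valid), while "abcdefgh" matches no rule, keeps the default rating of 2, and is rejected.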

Reset Cassandra password


By default, Cassandra ships with authentication disabled. If you enable
authentication, it uses a predefined user named cassandra with a password of
cassandra. You can use this account, set a different password for this account, or
create a new Cassandra user. Add, remove, and modify users by using the
Cassandra CREATE/ALTER/DROP USER statements.

For information on how to enable Cassandra authentication, see Enable Cassandra authentication.

To reset the Cassandra password, you must:

● Set the password on any one Cassandra node and it will be broadcast to all
Cassandra nodes in the ring
● Update the Management Server, Message Processors, Routers, Qpid servers,
and Postgres servers on each node with the new password

For more information, see CQL Commands.

To reset the Cassandra password:

1. Log in to any one Cassandra node using the cqlsh tool and the default credentials. You only have to change the password on one Cassandra node and it will be broadcast to all Cassandra nodes in the ring:

/opt/apigee/apigee-cassandra/bin/cqlsh cassIP 9042 -u cassandra -p 'cassandra'

Where:
● cassIP is the IP address of the Cassandra node.
● 9042 is the Cassandra port.
● The default user is cassandra.
● The default password is 'cassandra'. If you changed the password previously, use the current password. If the password contains any special characters, you must wrap it in single quotes.

2. Execute the following command at the cqlsh> prompt to update the password:

ALTER USER cassandra WITH PASSWORD 'NEW_PASSWORD';

If the new password contains a single quote character, escape it by preceding it with a single quote character.

3. Exit the cqlsh tool:

exit

4. On the Management Server node, run the following command:

/opt/apigee/apigee-service/bin/apigee-service edge-management-server store_cassandra_credentials -u CASS_USERNAME -p 'CASS_PASSWORD'

Optionally, you can pass a file to the command containing the new username and password:

/opt/apigee/apigee-service/bin/apigee-service edge-management-server store_cassandra_credentials -f configFile

Where configFile contains the following:

CASS_USERNAME=CASS_USERNAME
CASS_PASSWORD='CASS_PASSWORD'

This command automatically restarts the Management Server.

5. Repeat step 4 on:
● All Message Processors
● All Routers
● All Qpid servers (edge-qpid-server)
● All Postgres servers (edge-postgres-server)

The Cassandra password is now changed.
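The single-quote escaping mentioned above amounts to doubling the quote inside a CQL string literal. A small Python helper (illustrative, not part of Edge) makes that explicit:

```python
def quote_cql_string(value):
    """Wrap a value as a CQL string literal, escaping embedded single
    quotes by doubling them (CQL's escape convention)."""
    return "'" + value.replace("'", "''") + "'"

stmt = "ALTER USER cassandra WITH PASSWORD " + quote_cql_string("new'Pass") + ";"
print(stmt)
# ALTER USER cassandra WITH PASSWORD 'new''Pass';
```

Building the statement this way avoids malformed CQL when a generated password happens to contain a quote.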

Reset PostgreSQL password


By default, the PostgreSQL database has two users defined: postgres and
apigee. Both users have a default password of postgres. Use the following
procedure to change the default password.

Change the password on all Postgres master nodes. If you have two Postgres servers configured in master/standby mode, then you only have to change the password on the master node. See Set up master-standby replication for Postgres for more.

Note: After you change the apigee and postgres user passwords in the procedure below, no data is written to the PostgreSQL database until you complete all the other steps.

1. On the Master Postgres node, change directories to /opt/apigee/apigee-postgresql/pgsql/bin.
2. Set the PostgreSQL postgres user password:
a. Log in to the PostgreSQL database using the command:
psql -h localhost -d apigee -U postgres
b. When prompted, enter the existing postgres user password as postgres.
c. At the PostgreSQL command prompt, enter the following command to change the default password:
ALTER USER postgres WITH PASSWORD 'new_password';
On success, PostgreSQL responds with the following:
ALTER ROLE
d. Exit the PostgreSQL database using the following command:
\q
3. Set the PostgreSQL apigee user password:
a. Log in to the PostgreSQL database using the command:
psql -h localhost -d apigee -U apigee
b. When prompted, enter the apigee user password as postgres.
c. At the PostgreSQL command prompt, enter the following command to change the default password:
ALTER USER apigee WITH PASSWORD 'new_password';
d. Exit the PostgreSQL database using the command:
\q
4. You can set the postgres and apigee users' passwords to the same value or different values.
5. Set APIGEE_HOME:
6. export APIGEE_HOME=/opt/apigee/edge-postgres-server
7. Encrypt the new password:
8. sh /opt/apigee/edge-analytics/utils/scripts/utilities/passwordgen.sh new_password
9. This command returns an encrypted password. The encrypted password starts after the ":" character and does not include the ":". For example, the encrypted password for "apigee1234" is:
10. Encrypted string:WheaR8U4OeMEM11erxA3Cw==
11. Update the Management Server node with the new encrypted passwords for the postgres and apigee users.
a. On the Management Server, change directory to /opt/apigee/customer/application.
b. Edit the management-server.properties file to set the following properties. If this file does not exist, create it.
Note: Some properties take the encrypted postgres user password, and some take the encrypted apigee user password:
● conf_pg-agent_password=newEncryptedPasswordForPostgresUser
● conf_pg-ingest_password=newEncryptedPasswordForPostgresUser
● conf_query-service_pgDefaultPwd=newEncryptedPasswordForApigeeUser
● conf_query-service_dwDefaultPwd=newEncryptedPasswordForApigeeUser
● conf_analytics_aries.pg.password=newEncryptedPasswordForPostgresUser
c. Make sure the file is owned by the apigee user:
chown apigee:apigee management-server.properties
12. Update all Postgres Server and Qpid Server nodes with the new encrypted password.
a. On the Postgres Server or Qpid Server node, change to the following directory:
/opt/apigee/customer/application
b. Open the following files for edit:
● postgres-server.properties
● qpid-server.properties
If these files do not exist, create them.
c. Add the following properties to the files.
Note: All of these properties take the encrypted postgres user password.
● conf_pg-agent_password=newEncryptedPasswordForPostgresUser
● conf_pg-ingest_password=newEncryptedPasswordForPostgresUser
● conf_query-service_pgDefaultPwd=newEncryptedPasswordForPostgresUser
● conf_query-service_dwDefaultPwd=newEncryptedPasswordForPostgresUser
● conf_analytics_aries.pg.password=newEncryptedPasswordForPostgresUser
d. Make sure the files are owned by the apigee user:
chown apigee:apigee postgres-server.properties
chown apigee:apigee qpid-server.properties


13.Update the SSO component (if SSO is enabled):
a. Connect to or log in to the node on which the apigee-sso
component is running. This is also referred to as the SSO server.
In AIO or 3-node installations, this node is the same node as the
Management Server.
If you have multiple nodes running the apigee-sso component, you
must perform these steps on each node.
b. Open the following file for edit:
c. /opt/apigee/customer/application/sso.properties
d. If the file does not exist, create it.
e. Add the following line to the file:
f. conf_uaa_database_password=new_password_in_plain_text
g. For example:
h. conf_uaa_database_password=apigee1234
i. Execute the following command to apply your configuration changes
to the apigee-sso component:
j. /opt/apigee/apigee-service/bin/apigee-service apigee-sso configure
k. Repeat these steps for each SSO server.
14.Restart the following components in the following order:
a. PostgreSQL database:
b. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql
restart
c. Qpid Server:
d. /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server
restart
e. Postgres Server:
f. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server
restart
g. Management Server:
h. /opt/apigee/apigee-service/bin/apigee-service edge-management-
server restart
i. SSO server:
j. /opt/apigee/apigee-service/bin/apigee-service apigee-sso restart
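The restart sequence in step 14 can be sketched as a small loop. This is a dry run that only prints each command in dependency order; remove the echo to execute the restarts on a real node.

```shell
# Dry-run sketch of the restart order in step 14: PostgreSQL first, then Qpid
# Server, Postgres Server, Management Server, and finally the SSO server.
# Each command is echoed rather than executed; drop "echo" on a real node.
SVC=/opt/apigee/apigee-service/bin/apigee-service
for comp in apigee-postgresql edge-qpid-server edge-postgres-server \
            edge-management-server apigee-sso; do
  echo "$SVC $comp restart"
done
```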

Managing the default LDAP password policy for API management
The Apigee system uses OpenLDAP to authenticate users in your API management
environment. OpenLDAP makes this LDAP password policy functionality available.

This section describes how to configure the delivered default LDAP password
policy. Use this password policy to configure various password authentication
options, such as the number of consecutive failed login attempts after which a
password can no longer be used to authenticate a user to the directory.

This section also describes how to use the management API to unlock user accounts
that have been locked out according to attributes configured in the default password
policy.
For additional information, see:

● LDAP policy
● "Important information about your password policy" in the Apigee
Community

Configuring the Default LDAP Password Policy


To configure the default LDAP password policy:

1. Connect to your LDAP server using an LDAP client, such as Apache Directory
Studio or ldapmodify. By default, the OpenLDAP server listens on port 10389
on the OpenLDAP node.
To connect, specify the Bind DN or user of
cn=manager,dc=apigee,dc=com and the OpenLDAP password that you
set at the time of Edge installation.
2. Use the client to navigate to the password policy attributes for:
● Edge users: cn=default,ou=pwpolicies,dc=apigee,dc=com
● Edge sysadmin:
cn=sysadmin,ou=pwpolicies,dc=apigee,dc=com
3. Edit the password policy attribute values as desired.
4. Save the configuration.
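As a concrete sketch of steps 1 through 4 using ldapmodify, the change is expressed as a small LDIF file. The pwdMaxFailure value of 5 is illustrative, and the apply step is shown as a comment because it requires a live OpenLDAP node and the manager password set at install time.

```shell
# Sketch: build an LDIF that sets pwdMaxFailure=5 (an illustrative value) on
# the default password policy entry for Edge users.
cat > /tmp/pwpolicy.ldif <<'EOF'
dn: cn=default,ou=pwpolicies,dc=apigee,dc=com
changetype: modify
replace: pwdMaxFailure
pwdMaxFailure: 5
EOF
# Apply it on the OpenLDAP node (prompts for the manager password):
# ldapmodify -H ldap://localhost:10389 -D "cn=manager,dc=apigee,dc=com" -W \
#   -f /tmp/pwpolicy.ldif
cat /tmp/pwpolicy.ldif
```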

Default LDAP Password Policy Attributes

pwdExpireWarning (default: 604800, equivalent to 7 days)
The maximum number of seconds before a password is due to expire that
expiration warning messages will be returned to a user who is
authenticating to the directory.

pwdFailureCountInterval (default: 300)
Number of seconds after which old consecutive failed bind attempts are
purged from the failure counter. In other words, this is the number of
seconds after which the count of consecutive failed login attempts is reset.

If pwdFailureCountInterval is set to 0, only a successful authentication
can reset the counter.

If pwdFailureCountInterval is set to >0, the attribute defines a duration
after which the count of consecutive failed login attempts is automatically
reset, even if no successful authentication has occurred.

We suggest that this attribute be set to the same value as the
pwdLockoutDuration attribute.

pwdInHistory (default: 3)
Maximum number of used, or past, passwords for a user that will be stored
in the pwdHistory attribute. When changing a password, the user will be
blocked from changing it to any of these past passwords.

pwdLockout (default: FALSE)
If TRUE, specifies to lock out a user when their password expires so that
the user can no longer log in.

pwdLockoutDuration (default: 300)
Number of seconds during which a password cannot be used to authenticate
the user due to too many consecutive failed login attempts. In other words,
this is the length of time during which a user account will remain locked
due to exceeding the number of consecutive failed login attempts set by the
pwdMaxFailure attribute.

If pwdLockoutDuration is set to 0, the user account will remain locked
until a system administrator unlocks it. See Unlocking a user account.

If pwdLockoutDuration is set to >0, the attribute defines a duration for
which the user account will remain locked. When this time period has
elapsed, the user account will be automatically unlocked.

We suggest that this attribute be set to the same value as the
pwdFailureCountInterval attribute.

pwdMaxAge (default: user: 2592000; sysadmin: 0)
Number of seconds after which a user (non-sysadmin) password expires. A
value of 0 means passwords do not expire. The default value of 2592000
corresponds to 30 days from the time the password was created.

pwdMaxFailure (default: 3)
Number of consecutive failed login attempts after which a password may not
be used to authenticate a user to the directory.

pwdMinLength (default: 8)
Specifies the minimum number of characters required when setting a
password.

Unlocking a User Account


A user's account may be locked due to attributes set in the password policy. A user
with the sysadmin Apigee role assigned can use the following API call to unlock the
user's account. Replace userEmail, adminEmail, and password with actual values.

To unlock a user:

curl -X POST -u adminEmail:password \
  "http://ms_IP:8080/v1/users/userEmail/status?action=unlock"

Note: To ensure that only system administrators can call this API, the /users/*/status path
is included in the conf_security_rbac.restricted.resources property for the
Management Server:

cd /opt/apigee/edge-management-server
grep -ri "conf_security_rbac.restricted.resources" *

The output contains the following:

token/default.properties:conf_security_rbac.restricted.resources=/environments,/
environments/*,/environments/*/virtualhosts,/environments/*/virtualhosts/*,/pods,/
environments/*/servers,/rebuildindex,/users/*/status

Customizing Edge email templates


Edge automatically sends emails in response to certain events. For each of these
events, Edge defines a default email template in /opt/apigee/edge-ui/email-
templates. To customize these emails, you can edit the default templates.

Note: These events are triggered when modifying users in the Edge management UI. If you use
a script to add or modify a user, no email is sent.

The events where an email is sent, and the default template file for the generated
email, are:

● New user is added: user-added-new.html


● Existing user requests a password reset: password-reset.html
● A user is added to an organization: user-added-existing.html

Note: The next installation of Edge overwrites these files, so make sure to back up any
customizations that you make.

The "From" field of the email can be set by using the
conf_apigee_apigee.mgmt.mailfrom property in
/opt/apigee/customer/application/ui.properties (if that file does not
exist, create it). For example:

conf_apigee_apigee.mgmt.mailfrom="My Company <myCo@company.com>"

Note: Use the SMTPMAILFROM property in the installation configuration file to set the "From"
field at install time. The SMTPMAILFROM property is required at install time. See Edge
Configuration File Reference for more.
The email subjects are customizable by editing the following properties in
/opt/apigee/customer/application/ui.properties (if that file does not
exist, create it):

● conf_apigee-base_apigee.emails.passwordreset.subject
● conf_apigee-base_apigee.emails.useraddedexisting.subject
● conf_apigee-base_apigee.emails.useraddednew.subject

Several email properties reference the {companyNameShort} placeholder, which
defaults to a value of "Apigee". You can set the value of the placeholder by using
the conf_apigee_apigee.branding.companynameshort property in
ui.properties. For example:

conf_apigee_apigee.branding.companynameshort="My Company"

After setting any properties in
/opt/apigee/customer/application/ui.properties, you must restart the
Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
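The mail-from, subject, and branding tokens described above all live in the same ui.properties file, so they can be set together. A minimal sketch, writing to a scratch file for illustration (on a real node the path is /opt/apigee/customer/application/ui.properties, and the property values shown are examples):

```shell
# Sketch: collect the email-related CwC tokens in one ui.properties file.
# PROPS is a scratch file here; on a real node use
# /opt/apigee/customer/application/ui.properties, chown it apigee:apigee,
# then restart the Edge UI. All values below are examples.
PROPS=/tmp/ui.properties
cat >> "$PROPS" <<'EOF'
conf_apigee_apigee.mgmt.mailfrom="My Company <myCo@company.com>"
conf_apigee-base_apigee.emails.passwordreset.subject=Reset your My Company password
conf_apigee_apigee.branding.companynameshort="My Company"
EOF
grep -c '^conf_' "$PROPS"
```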

Configuring the Edge SMTP server


An SMTP server allows the Edge UI to send emails. For example, the Edge UI can
send an email to assist a user in recovering their UI password.

At install time, you can specify the SMTP information. To change the SMTP
information, or if you deferred SMTP configuration at the time of installation, you
can use the apigee-setup utility to modify the SMTP configuration.

You can change any or all of the following information:

● SMTP host, such as smtp.gmail.com


● SMTP port, such as 465 if you are using TLS/SSL
● SMTP username and password
● SMTP "From" email address

To update the SMTP information:

1. Edit the configuration file that you used to install Edge, or create a new
configuration file with the following information:

# Must include the sys admin password
APIGEE_ADMINPW=sysAdminPW
# SMTP settings
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com # 0 for no username
SMTPPASSWORD=smtppwd # 0 for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

2. Run the apigee-setup utility on all the Edge UI nodes to update the SMTP
settings:

/opt/apigee/apigee-setup/bin/setup.sh -p ui -f configFile

The apigee-setup utility restarts the Edge UI.

Set the expiration time for user activation links in activation emails

When a user registers a new account, or requests to reset a password, they receive
an email that contains an activation link. By default, that activation link expires after
600 seconds (ten minutes).

You can configure the expiration time by setting the
conf_apigee_apigee.forgotpassword.emaillinkexpiryseconds token in
the ui.properties file.

To set the expiry time:

1. On the Edge UI node, edit
/opt/apigee/customer/application/ui.properties. If that file
does not exist, create it.
2. Add the following property to the file to set the expiry time in seconds:
3. conf_apigee_apigee.forgotpassword.emaillinkexpiryseconds=timeInSeconds
4. Restart the Edge UI:
5. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Setting the hostname for links in generated emails
Edge automatically sends emails in response to certain events as described in
Customizing Edge email templates.
Many of these emails contain a link. For example, when a new user is added to an
organization, the Edge UI sends the user an email containing a login URL in the
form:

https://domain/login

The domain is determined automatically by Edge and is typically correct for the
organization. However, when the Edge UI is behind a load balancer, the
automatically generated domain name might be incorrect. If so, you can use the
conf_apigee_apigee.emails.hosturl property to explicitly set the domain
name portion of the generated URL.

To set the domain:

1. Open the ui.properties file in an editor. If the file does not exist, create
it:
2. vi /opt/apigee/customer/application/ui.properties
3. Set conf_apigee_apigee.emails.hosturl. For example, set
conf_apigee_apigee.emails.hosturl as:
4. conf_apigee_apigee.emails.hosturl="http://myCo.com"
5. Save your changes.
6. Make sure the properties file is owned by the 'apigee' user:
7. chown apigee:apigee /opt/apigee/customer/application/ui.properties
8. Restart the Edge UI:
9. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Setting the log level for an Edge component
By default, Edge components use a logging level of INFO. However, you can set the
logging level for each Edge component. For example, you might want to set it to
DEBUG for the Message Processor, or to ERROR for the Management Server.

Note: Each component writes log information to its own file. See Setting log file location for
information on log file locations.

The available log levels include:

● ALL
● DEBUG
● ERROR
● FATAL
● INFO
● OFF
● TRACE
● WARN
To set the log level for a component, edit the component's properties file to set a
token, then restart the component:

● For the Edge UI, the tokens are
conf_logger_settings_application_log_level and
conf_logger_settings_play_log_level. Set them to the same value.
● For all other Edge components, the token is conf_system_log.level

To set the log level for the Edge UI component:

1. Open /opt/apigee/customer/application/ui.properties in an
editor. If the file does not exist, create it.
2. Set the following properties in ui.properties to the desired log level. For
example, to set it to DEBUG:

conf_logger_settings_application_log_level=DEBUG

3. conf_logger_settings_play_log_level=DEBUG
4. Save the file.
5. Ensure the properties file is owned by the "apigee" user:
6. chown apigee:apigee /opt/apigee/customer/application/ui.properties
7. Restart the Edge UI:
8. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

To set the log level for other components:

1. Open /opt/apigee/customer/application/component_name.properties
in an editor, where component_name can be one of the following:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
2. If the file does not exist, create it.
3. Set the following property in the properties file to the desired log level. For
example, to set it to DEBUG:
4. conf_system_log.level=DEBUG
5. Save the file.
6. Ensure the properties file is owned by the "apigee" user:
7. chown apigee:apigee /opt/apigee/customer/application/component_name.properties
8. Restart the component by using the following syntax:
9. /opt/apigee/apigee-service/bin/apigee-service component_name restart

For more information about the name and location of the component configuration
files, see Location of properties files.
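The procedure above amounts to one token plus a restart. A sketch for the Message Processor, writing to a scratch file for illustration (the real path and the follow-up restart are shown in the comments):

```shell
# Sketch: set the Message Processor log level to DEBUG via its CwC token.
# PROPS is a scratch file here; on a real node it is
# /opt/apigee/customer/application/message-processor.properties, owned by
# apigee:apigee, followed by:
#   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
PROPS=/tmp/message-processor.properties
echo 'conf_system_log.level=DEBUG' >> "$PROPS"
grep '^conf_system_log.level' "$PROPS"
```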

Enabling debug logging


You can enable debug logging for several Edge components. When you enable
debug logging, the component writes debug messages to its system.log file.

For example, if you enable debug logging for the Edge Router, the Router writes
debug messages to:

/opt/apigee/var/log/edge-router/logs/system.log

Note: Enabling debug logging can cause the system.log file for a component to grow very
large in size, and can affect the overall system performance. It is recommended that you enable
debug logging for only as long as necessary, and then disable it.

Use the following API call to enable debug logging for a component:

curl -X POST "http://localhost:PORT/v1/logsessions?session=debug"

To disable debug logging:

curl -X DELETE "http://localhost:PORT/v1/logsessions/debug"

To view active debug logging sessions:


curl -X GET "http://localhost:PORT/v1/logsessions/debug"

Warning: Notice that these calls do not take credentials and refer to localhost. These API
calls must be run on the server hosting the Edge components; they cannot be run remotely.

Each Edge component that supports debug logging uses a different port number in
the API call, as shown in the following table:

Component            Port   Log file

Management Server    8080   /opt/apigee/var/log/edge-management-server/logs/system.log
Router               8081   /opt/apigee/var/log/edge-router/logs/system.log
Message Processor    8082   /opt/apigee/var/log/edge-message-processor/logs/system.log
Qpid Server          8083   /opt/apigee/var/log/edge-qpid-server/logs/system.log
Postgres Server      8084   /opt/apigee/var/log/edge-postgres-server/logs/system.log
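For example, a debug-logging session against the Router uses port 8081 from the table above. This dry-run sketch only prints the enable, inspect, and disable calls; remove the echo and run them on the Router node itself, since the calls only work locally:

```shell
# Dry-run sketch: the enable/inspect/disable debug-logging calls for the
# Router (port 8081 per the table above). Echoed rather than executed,
# because the calls must run on the node hosting the component.
PORT=8081
echo curl -X POST   "http://localhost:$PORT/v1/logsessions?session=debug"
echo curl -X GET    "http://localhost:$PORT/v1/logsessions/debug"
echo curl -X DELETE "http://localhost:$PORT/v1/logsessions/debug"
```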

Enable debug logging for Apigee SSO

To enable debug logging for Apigee SSO:

1. Edit the /opt/apigee/customer/application/sso.properties file


(if the file does not exist, create it):
2. vi /opt/apigee/customer/application/sso.properties
3. Set the following property and save the file:
4. conf_logback_log_level=DEBUG
5. Restart the Edge SSO service:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-sso restart
Setting log file location
By default, the log files for an Edge component are written to the
/opt/apigee/var/log/component_name directory, where component_name
can be one of the following:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)

In addition to this list, Edge also logs the setup process in the
/opt/apigee/var/log/apigee-setup directory.

Use the following procedure to change the default log file location for all Edge
components except the Edge UI:

1. Create the following file:
/opt/apigee/etc/component_name.d/APIGEE_APP_LOGDIR.sh
2. Add the following properties to the file:

APIGEE_APP_LOGDIR=/directory/path
LOG_FILE=${APIGEE_APP_LOGDIR}/apigee-qpidd.log

where /directory/path specifies the directory for the log files.
3. Make sure the directory is accessible by the apigee user:
chown apigee:apigee /directory/path
4. Restart the component:
/opt/apigee/apigee-service/bin/apigee-service component_name restart

For more information about the name and location of the component configuration
files, see Location of properties files.
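As a concrete sketch for apigee-qpidd, the override file can be generated like this. It writes to a scratch file for illustration; on a real node the file is /opt/apigee/etc/apigee-qpidd.d/APIGEE_APP_LOGDIR.sh, and /var/log/apigee-custom is an example directory that must be accessible by the apigee user:

```shell
# Sketch: write an APIGEE_APP_LOGDIR.sh override for apigee-qpidd.
# OUT is a scratch file here; on a real node it is
# /opt/apigee/etc/apigee-qpidd.d/APIGEE_APP_LOGDIR.sh, and the
# /var/log/apigee-custom directory below is an example path.
OUT=/tmp/APIGEE_APP_LOGDIR.sh
cat > "$OUT" <<'EOF'
APIGEE_APP_LOGDIR=/var/log/apigee-custom
LOG_FILE=${APIGEE_APP_LOGDIR}/apigee-qpidd.log
EOF
cat "$OUT"
```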
Setting the session timeout in the Edge UI
By default, a session in the Edge UI expires one day after login. That means Edge UI
users have to log in again after the session expires.

You can configure the session timeout by setting the
conf_application_session.maxage property in the
/opt/apigee/customer/application/ui.properties file.

To set this property:

1. Open the ui.properties file in an editor. If the file does not exist, create
it:
2. vi /opt/apigee/customer/application/ui.properties
3. Set conf_application_session.maxage as a time value and time unit.
Time units can be:
● m, minute, minutes
● h, hour, hours
● d, day, days
4. For example, set conf_application_session.maxage as:
5. conf_application_session.maxage="2d"
6. Save your changes.
7. Restart the Edge UI:
8. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Setting the port number of the Edge UI


By default, the Edge UI listens on port 9000 of the Edge node on which it is
installed. Use the following procedure to change the port number:

1. Ensure that the desired port number is open on the Edge node.
2. Open /opt/apigee/customer/edge-ui.d/ui.sh in an editor. If the file
and directory do not exist, create them.
3. Set the following property in ui.sh:
4. export JAVA_OPTS="-Dhttp.address=IP -Dhttp.port=PORT"
5. Where IP is the IP address of the Edge UI node, and PORT is the new port
number.
6. Save the file.
7. Restart the Edge UI:
8. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

You can now access the Edge UI on the new port number.
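The ui.sh override from the steps above can be generated as follows. Both the IP address and port are examples, and the sketch writes to a scratch file; on a real node the path is /opt/apigee/customer/edge-ui.d/ui.sh:

```shell
# Sketch: write the ui.sh override that moves the Edge UI to port 9090 on
# 192.168.1.10 (both values are examples). OUT is a scratch file here; on a
# real node it is /opt/apigee/customer/edge-ui.d/ui.sh, followed by an
# edge-ui restart.
OUT=/tmp/ui.sh
cat > "$OUT" <<'EOF'
export JAVA_OPTS="-Dhttp.address=192.168.1.10 -Dhttp.port=9090"
EOF
cat "$OUT"
```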

UI modifications
You can make minor changes to the Edge UI to improve the experience of your
developer community. For example, you can change the following:

● Support contact: The recipient of internal UI errors.


● Consent banner: Text that is displayed when a user first accesses the Edge
UI. The consent banner displays HTML-formatted text and a button that the
user selects to proceed to the log in screen.

Changing these settings will give your developers a more cohesive experience and
help ensure that issues are routed to the proper place.

You make these changes by editing properties files using the Code with config
(CwC) technique.

Change the support contact


When the UI experiences certain errors (such as when a developer logs in as a user
that has no organization associated with it), a developer may be presented with an
error like the following:

You should change the default value of the "support team" contact to point to an
active resource within your organization that can provide your developers with
timely assistance.

To change the support contact:


1. Open the /opt/apigee/customer/application/ui.properties file
in an editor. If the file does not exist, create it.
2. If you just created the file, change the owner to "apigee:apigee", as the
following example shows:
3. chown apigee:apigee /opt/apigee/customer/application/ui.properties
4. Set the value of the
conf_apigee_apigee.branding.contactemailsupport property to
your internal support contact. For example:
5. conf_apigee_apigee.branding.contactemailsupport="api-
support@mycompany.com"
6. Save your changes.
7. Restart the Edge UI service:
8. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

The next time your developers experience an internal error, the "contact team" link
will send an email to your internal support organization.

Change the consent banner


The consent banner is displayed when a user first accesses the Edge UI. For
example:

To add a consent banner:

1. Open the /opt/apigee/customer/application/ui.properties file


in an editor. If the file does not exist, create it.
2. If you just created the file, change the owner to "apigee:apigee", as the
following example shows:
3. chown apigee:apigee /opt/apigee/customer/application/ui.properties
4. Set the following properties:

# Enable the consent banner:

conf_apigee-base_apigee.feature.enableconsentbanner="true"

# Set the button text:

conf_apigee-base_apigee.consentbanner.buttoncaption="I Agree"

# Set the HTML text:
conf_apigee-base_apigee.consentbanner.body="<h1>Notice and Consent
Banner</h1><div><p>Enter your consent text here.</p></div>"

6. Save your changes.
7. Restart the Edge UI service:
8. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

The next time your developers open the Edge UI in a browser, they must accept the
consent agreement before they can log in.

Setting the URL of the portal


Apigee provides you with Apigee Developer Services portal (or simply, the portal)
that you can use to build and launch your own customized website to provide all of
these services to your development community. Edge customers can create their
own developer portal, either in the cloud or on-prem. See What is a developer
portal? for more information.

The Edge UI displays the DevPortal button on the Publish > Developers page that,
when clicked, opens the portal associated with an organization. By default, that
button opens the following URL:

http://live-orgname.devportal.apigee.com

Where orgname is the name of the organization.

You can set this URL to a different URL, for example if your portal has a DNS record,
or disable the button completely. Use the following properties of the organization
to control the button:

● features.devportalDisabled: Set to false (default) to enable the
button and to true to disable it.
● features.devportalUrl: Set to the URL of the developer portal.

You set these properties separately for each organization. To set these properties,
you first use the following API call to determine the current property settings on the
organization:

curl -H "Content-Type:application/json" -u adminEmail:pword -X GET \
http://ms_IP:8080/v1/organizations/orgname

This call returns an object describing the organization in the form:

{
  "createdAt" : 1428676711119,
  "createdBy" : "me@my.com",
  "displayName" : "orgname",
  "environments" : [ "prod" ],
  "lastModifiedAt" : 1428692717222,
  "lastModifiedBy" : "me@my.com",
  "name" : "orgname",
  "properties" : {
    "property" : [ {
      "name" : "foo",
      "value" : "bar"
    } ]
  },
  "type" : "paid"
}

Note any existing properties in the properties area of the object. When you set
properties on the organization, the value in properties overwrites any current
properties. Therefore, if you want to set features.devportalDisabled or
features.devportalUrl in the organization, make sure you copy any existing
properties when you set them.

Use the following PUT call to set properties on the organization:

curl -H "Content-Type:application/json" -u adminEmail:pword -X PUT \
http://ms_IP:8080/v1/organizations/orgname \
-d '{
  "displayName" : "orgname",
  "name" : "orgname",
  "properties" : {
    "property" : [
      {
        "name" : "foo",
        "value" : "bar"
      },
      {
        "name" : "features.devportalUrl",
        "value" : "http://dev-orgname.devportal.apigee.com/"
      },
      {
        "name" : "features.devportalDisabled",
        "value" : "false"
      }
    ]
  }
}'

In the PUT call, you only have to specify displayName, name, and properties.
Note that this call includes the "foo" property that was originally set on the
organization.

Allowing the Edge UI access to local IP addresses

There are several places where the Edge UI might attempt to access a local IP
address. These local IP addresses might correspond to private or otherwise
protected resources that should not be exposed to external users:

● The Trace tool in the Edge UI has the ability to send and receive API requests
to any specified URL. In certain deployment scenarios where Edge
components are co-hosted with other internal services, a malicious user may
misuse the power of the Trace tool by making requests to private IP
addresses.
● When creating an API proxy from an OpenAPI specification, the specification
describes such elements of an API as its base path, paths and verbs,
headers, and more. As part of the spec, a malicious user can specify a base
path of the proxy that refers to a private IP address.
● When creating an API proxy from a WSDL file located on your local file
system.

For security reasons, by default, the Edge UI is prevented from referencing private
IP addresses. The list of private IP addresses includes:

● Loopback address (127.0.0.1 or localhost)


● Site-Local Addresses (For IPv4 - 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)
● Any Local Address (any address resolving to localhost).

If you want to enable the Edge UI to access private IP addresses, set the following
tokens:

● For the Trace tool, the
conf_apigee-base_apigee.feature.enabletraceforinternaladdresses
property is disabled by default. Set it to true to enable Trace tool access
to private IP addresses.
● For OpenAPI specs, the
conf_apigee-base_apigee.feature.enableopenapiforinternaladdresses
property is disabled by default. Set it to true to enable OpenAPI spec access
to private IP addresses.
● For WSDL files, the
conf_apigee-base_apigee.feature.enablewsdlforinternaladdresses
property is disabled by default. Set it to true to enable the upload of a WSDL
file from private IP addresses.

Note: If the Apigee Routers are reachable only over the above private IP ranges, Apigee
recommends that you set the
conf_apigee-base_apigee.feature.enabletraceforinternaladdresses
property to true.

To set these properties to true:

1. Open the ui.properties file in an editor. If the file does not exist, create
it.
2. vi /opt/apigee/customer/application/ui.properties
3. Set the following properties to true:
conf_apigee-base_apigee.feature.enabletraceforinternaladdresses="true"

conf_apigee-base_apigee.feature.enableopenapiforinternaladdresses="true"

4. conf_apigee-base_apigee.feature.enablewsdlforinternaladdresses="true"
5. Save your changes to ui.properties.
6. Make sure the properties file is owned by the 'apigee' user:
7. chown apigee:apigee /opt/apigee/customer/application/ui.properties
8. Restart the Edge UI:
9. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

The Edge UI can now access local IP addresses.

Allow longer custom reports in the Edge UI

By default, the Edge UI only lets you create custom reports for a maximum of 14
days.

To override this default and set it to 183 days:

1. Open the ui.properties file in an editor. If the file does not exist, create
it.
2. vi /opt/apigee/customer/application/ui.properties
3. Set the following property to "false":
4. conf_apigee-base_apigee.feature.enableforcerangelimit="false"
5. Save your changes to ui.properties.
6. Restart the Edge UI:
7. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Enabling geo aggregation and geo maps

Geo aggregation lets you collect analytics data for API calls based on
geographical attributes such as region, continent, country, and city. From this
analytics data, you can view a GeoMap in the Edge UI that shows the location of API
requests:
Geo aggregations work by extracting geographical data from a third-party database
and adding it to the analytics data collected by Edge. The geographical information
can contain the city, country, continent, time zone, and region of a request made to
an API proxy.

To use geo aggregation, you must purchase the Maxmind GeoIp2 database that
contains this geographical information. See
https://www.maxmind.com/en/geoip2-databases for more information.

Note: Geo aggregations require that the IP address of a machine making the request to the API
proxy is in the Maxmind GeoIp2 database. Otherwise, Edge will not be able to determine the
location of the request.

Enabling geo aggregation


By default, geo aggregation is not enabled. To enable geo aggregation, you
must:

● On all Qpid servers, install the MaxMind database and configure the Qpid
server to use it.
● Enable the Geo Map display in the Edge UI.

Install the MaxMind database on all Edge Qpid servers

Use the following procedure to install the MaxMind database on all Edge Qpid
servers:
1. Obtain the Maxmind GeoIp2 database.
Note: Geo aggregations work by adding geographical data from a third-party database
to the analytics data collected by Edge. To use geo aggregation, you must purchase the
Maxmind GeoIp2 database that contains this information. See
https://www.maxmind.com/en/geoip2-databases for more information.
2. Create the following folder on the Qpid server node:
3. /opt/apigee/maxmind
4. Download the Maxmind GeoIp2 database to /opt/apigee/maxmind.
5. Change the ownership of the database file to the 'apigee' user:
6. chown apigee:apigee /opt/apigee/maxmind/GeoIP2-City_20160127.mmdb
7. Set the permissions on the database to 744:
8. chmod 744 /opt/apigee/maxmind/GeoIP2-City_20160127.mmdb
9. Set the following tokens in
/opt/apigee/customer/application/qpid-server.properties. If
that file does not exist, create it:

conf_ingestboot-service_vdim.geo.ingest.enabled=true

10. conf_ingestboot-service_vdim.geo.maxmind.db.path=/opt/apigee/maxmind/GeoIP2-City_20160127.mmdb
11.If you stored the Maxmind GeoIp2 database in a different location, edit the
path property accordingly.
Note that this database file contains a version number. If you later receive an
updated database file, it might have a different version number. As an
alternative, create a symlink to the database file and use the symlink in
qpid-server.properties.
For example, create a symlink for "GeoIP2-City-current.mmdb" to "GeoIP2-
City_20160127.mmdb". If you later receive a new database with a different
file name, you only have to update the symlink rather than having to
reconfigure and restart the Qpid server.
12.Change ownership of the qpid-server.properties file to the 'apigee'
user:
13.chown apigee:apigee /opt/apigee/customer/application/qpid-
server.properties
14.Restart the Qpid server:
15./opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
16.Repeat this process on every Qpid node.
17.To validate that geo aggregation is working:
a. Trigger several API proxy calls on a sample API proxy.
b. Wait about 5-10 minutes for the aggregations to complete.
c. Open a console and connect to the Edge Postgres server:
d. psql -h /opt/apigee/var/run/apigee-postgresql/ -U apigee -d apigee
e. Perform a SELECT query on the table analytics.agg_geo to show the
rows with geographical attributes:
f. select * from analytics.agg_geo;
g. You should see the following columns in the query results which are
extracted from the Maxmind GeoIp2 database: ax_geo_city,
ax_geo_country, ax_geo_continent, ax_geo_timezone,
ax_geo_region.
If the agg_geo table is not getting populated, check the Qpid server
logs at /opt/apigee/var/log/edge-qpid-server/logs/ to
detect any potential exceptions.

Enable Geo Maps in the Edge UI

Use the following procedure to enable Geo Maps in the Edge UI:

1. Set the following token in /opt/apigee/customer/application/ui.properties to enable Geo Maps. If that file does not exist, create it:
   conf_apigee_apigee.feature.disablegeomap=false
2. Change ownership of the ui.properties file to the 'apigee' user:
   chown apigee:apigee /opt/apigee/customer/application/ui.properties
3. Restart the Edge UI:
   /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
4. In the Edge UI, select Analytics > GeoMap to display the geo aggregation data.

Updating the MaxMind GeoIp2 database


MaxMind issues periodic updates to the Maxmind GeoIp2 database. If you receive an updated database, use the following procedure to install it on Edge:

1. Obtain the updated Maxmind GeoIp2 database.
2. Download the Maxmind GeoIp2 database to /opt/apigee/maxmind.
3. Check the name of the database file. If it is the same as the old file, as defined in /opt/apigee/customer/application/qpid-server.properties, proceed to the next step. However, if the file has a different name, you have to edit the qpid-server.properties file to specify the name of the new database file and then restart the Qpid server, as described above.
   As an alternative, you can create a symlink to the file. For example, create a symlink named "GeoIP2-City-current.mmdb" that points to "GeoIP2-City_20160127.mmdb". If you later receive a new database with a different file name, you only have to update the symlink rather than having to reconfigure the Qpid server.
4. Change the ownership of the database file to the 'apigee' user:
   chown apigee:apigee /opt/apigee/maxmind/GeoIP2-City_20160127.mmdb
5. Set the permissions on the database to 744:
   chmod 744 /opt/apigee/maxmind/GeoIP2-City_20160127.mmdb
6. Restart the Qpid server:
   /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
7. Repeat this process on every Qpid node.
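The symlink approach described above can be sketched as follows. This demo runs in a scratch directory rather than /opt/apigee/maxmind, and the updated file name (GeoIP2-City_20190301.mmdb) is a hypothetical example:

```shell
# Demo of the symlink approach in a scratch directory (not a real Edge node).
demo=$(mktemp -d)
cd "$demo"
touch GeoIP2-City_20160127.mmdb
# qpid-server.properties would point at the symlink, not the versioned file:
ln -sf GeoIP2-City_20160127.mmdb GeoIP2-City-current.mmdb
# An updated database arrives under a new versioned name (hypothetical):
touch GeoIP2-City_20190301.mmdb
# Only the symlink changes; no qpid-server.properties edit is needed:
ln -sf GeoIP2-City_20190301.mmdb GeoIP2-City-current.mmdb
readlink GeoIP2-City-current.mmdb   # → GeoIP2-City_20190301.mmdb
```

Because the path in qpid-server.properties never changes, swapping databases becomes a single ln -sf command.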
Setting the password hint text in the Edge UI
By default, when a user changes their password in the Edge UI, a dialog box appears that contains fields to set the password and text describing the password requirements:

You can configure the text by setting the conf_apigee-base_apigee.passwordpolicy.pwdhint property in the /opt/apigee/customer/application/ui.properties file.

To set this property:

1. Open the ui.properties file in an editor. If the file does not exist, create it:
   vi /opt/apigee/customer/application/ui.properties
2. Set conf_apigee-base_apigee.passwordpolicy.pwdhint. For example:
   conf_apigee-base_apigee.passwordpolicy.pwdhint="Password must be 13 characters long and contain at least one special character."
3. Save your changes.
4. Restart the Edge UI:
   /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
Auto-generate Edge UI passwords

The Edge UI auto-generates passwords for new users. Users are typically then sent an email that allows them to change the auto-generated password.

By default, the Edge UI auto-generates passwords that are compatible with the default password rules defined on the Edge Management Server. These rules specify that passwords are at least eight characters long and include at least one special character and one upper-case character. See Resetting Edge passwords for information on how to configure Edge password rules.

You can change the password rules on the Management Server to make passwords stricter. For example, you can increase the minimum length to 10 characters, require multiple special characters, and so on. If you change the rules on the Management Server, you also have to change the Edge UI to auto-generate passwords compatible with the new rules.

The Edge UI defines four properties that you use to set the rules for auto-generating passwords:

conf_apigee-base_apigee.forgotPassword.underscore.minimum="1"
conf_apigee-base_apigee.forgotPassword.specialchars.minimum="1"
conf_apigee-base_apigee.forgotPassword.lowecase.minimum="1"
conf_apigee-base_apigee.forgotPassword.uppercase.minimum="1"
NOTE: There is no property to set the minimum password length. The Edge UI auto generates
very long passwords that exceed any minimum password length that you set on the
Management Server.

To set these properties:

1. Open the ui.properties file in an editor. If the file does not exist, create it:
   vi /opt/apigee/customer/application/ui.properties
2. Set the properties:
   conf_apigee-base_apigee.forgotPassword.underscore.minimum="2"
   conf_apigee-base_apigee.forgotPassword.specialchars.minimum="3"
   conf_apigee-base_apigee.forgotPassword.lowecase.minimum="2"
   conf_apigee-base_apigee.forgotPassword.uppercase.minimum="2"
3. Save your changes.
4. Make sure the properties file is owned by the 'apigee' user:
   chown apigee:apigee /opt/apigee/customer/application/ui.properties
5. Restart the Edge UI:
   /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Configure the Edge UI to store session information in memory
Note: This feature is available in Edge version 4.17.01.01 and later.

By default, when a user logs out of the Edge UI, the session cookie for the user is deleted. However, while the user is logged in, malware or other malicious software running on the user's system could obtain the cookie and use it to access the Edge UI. This issue is not specific to the Edge UI itself, but relates to the security of the user's system.

As an added level of security, you can configure the Edge UI to store information
about current sessions in server memory. When the user logs out, their session
information is deleted, preventing another user from using the cookie to access the
Edge UI.

Note: If the Edge UI server ever goes down, all session information stored in memory is lost and
all users must log in again after the server comes back up.

This feature is disabled by default. Before you enable this feature, your system must meet one of the following requirements:

● Your system uses a single Edge UI server.
● Your system uses multiple Edge UI servers with a load balancer, and the load balancer is configured to use sticky sessions.

If your system meets these requirements, then use the following procedure to enable
the Edge UI to track user sessions in memory:

1. Open the ui.properties file in an editor. If the file does not exist, create it:
   vi /opt/apigee/customer/application/ui.properties
2. Set the following properties:
   conf_apigee_apigee.feature.expireSessionCookiesInternally="true"
   conf_apigee_apigee.feature.trackSessionCookies="true"
3. Save your changes.
4. Ensure the properties file is owned by the "apigee" user:
   chown apigee:apigee /opt/apigee/customer/application/ui.properties
5. Restart the Edge UI:
   /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Set the session timeout for the Edge UI

By default, a session in the Edge UI expires one day after login. That means Edge UI
users have to log in again after the session expires.

You can configure the session timeout by setting the conf_application_session.maxage property in the /opt/apigee/customer/application/ui.properties file.

To set this property:

1. Open the ui.properties file in an editor. If the file does not exist, create it:
   vi /opt/apigee/customer/application/ui.properties
2. Set conf_application_session.maxage as a time value and time unit. Time units can be:
   ● m, minute, minutes
   ● h, hour, hours
   ● d, day, days
   For example, to set the session timeout to two days:
   conf_application_session.maxage="2d"
3. Save your changes.
4. Restart the Edge UI:
   /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
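To illustrate the unit semantics of the maxage value, the following shell function converts a value such as "2d" to seconds. This is only a sketch of the arithmetic, not the parser Edge itself uses:

```shell
# Sketch: convert a session.maxage value (e.g. "2d", "12h", "30m") to seconds.
maxage_to_seconds() {
  local value="$1"
  local n="${value%%[!0-9]*}"     # leading digits
  local unit="${value#"$n"}"      # trailing unit
  case "$unit" in
    m|minute|minutes) echo $(( n * 60 )) ;;
    h|hour|hours)     echo $(( n * 3600 )) ;;
    d|day|days)       echo $(( n * 86400 )) ;;
    *) echo "unsupported unit: $unit" >&2; return 1 ;;
  esac
}

maxage_to_seconds 2d    # the "2d" example above: 172800 seconds
```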

Supporting multiple Edge UI instances


You can install multiple instances of the Edge UI in a high-availability scenario. However, after installing the instances, you must perform post-installation tasks to synchronize the property settings between them.

Specifically, you must configure the UI instances to have the same values for the following properties:

application.secret=value
mail.smtp.credential=value
apigee.mgmt.credential=value
Additionally, if you configure them to use TLS, then you must ensure that you use the
same cert and key on both instances.

Configure Edge UI instances using HTTP


1. Log in to the node hosting the first Edge UI instance (do not log in to the UI itself, but as a user on the node).
2. Open /opt/apigee/edge-ui/conf/apigee.conf in an editor and copy the values of the following properties for later use:
   mail.smtp.credential="value"
   apigee.mgmt.credential="value"
3. Open /opt/apigee/edge-ui/conf/application.conf in an editor and copy the value of the following property for later use:
   application.secret="value"
4. Log in to the node hosting the second Edge UI instance.
5. Open /opt/apigee/customer/application/ui.properties on the second UI instance in an editor. If the file does not exist, create it.
6. Add the following properties to /opt/apigee/customer/application/ui.properties, including the values that you copied from the first UI instance:
   conf_application_application.secret="value"
   conf_apigee_mail.smtp.credential="value"
   conf_apigee_apigee.mgmt.credential="value"
   Notice how you prefix these values with either conf_application_ or conf_apigee_.
7. Save the file.
8. Make sure /opt/apigee/customer/application/ui.properties is owned by the "apigee" user:
   chown apigee:apigee /opt/apigee/customer/application/ui.properties
9. Restart the second UI instance:
   /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Users can now log in to either UI instance.
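The prefixing step can be sketched with a one-line sed command. This demo uses a scratch copy of the credential lines with placeholder values; on a real node you would read them from /opt/apigee/edge-ui/conf/apigee.conf instead:

```shell
# Sketch: re-emit the shared credentials with the conf_apigee_ prefix that
# ui.properties expects. The file contents here are placeholders.
src=$(mktemp)
cat > "$src" <<'EOF'
mail.smtp.credential="smtp-secret"
apigee.mgmt.credential="mgmt-secret"
EOF
sed 's/^/conf_apigee_/' "$src"
```

The output lines can be appended directly to /opt/apigee/customer/application/ui.properties on the second instance (application.secret still needs the conf_application_ prefix instead).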

Configure Edge UI instances using TLS/HTTPS


1. Configure the first UI instance to use TLS/HTTPS as described in Configuring TLS for the management UI.
2. Configure the second Edge UI instance as described above for HTTP to synchronize the required properties.
3. Copy the JKS file containing the cert and key from the first UI instance to the node hosting the second UI instance.
4. Configure the second UI instance to use TLS/HTTPS as described in Configuring TLS for the management UI.
Enabling/disabling Message Processor/Router reachability
It is a good practice to disable reachability on a server during maintenance, such as
for a server restart or upgrade. When reachability is disabled, no traffic is directed to
the server. For example, when reachability is disabled on a Message Processor,
Routers will not direct any traffic to that Message Processor.

For example, to upgrade a Message Processor, you can use the following procedure:

1. Disable reachability on the Message Processor.


2. Upgrade the Message Processor.
3. Enable reachability on the Message Processor.

NOTE: In some configurations there may be a number of Routers behind a load balancer (ELB). The ELB might be configured to monitor port 15999 on the Routers. If the load balancer cannot access port 15999 on a Router, it assumes the Router is down and stops directing traffic to it.

Disabling/enabling reachability on a Message Processor

To disable reachability on a Message Processor, you can simply stop the Message Processor:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor stop

The Message Processor first processes any pending messages before it shuts
down. Any new requests are routed to other available Message Processors.

To restart the Message Processor, use the following commands:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor start

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor wait_for_ready

The wait_for_ready command returns the following message when the Message
Processor is ready to process messages:

Checking if message-processor is up: message-processor is up.


Disabling/enabling reachability on a Router

In a production environment, you typically have a load balancer in front of the Edge Routers. Load balancers monitor port 15999 on the Routers to ensure that the Router is available.

Configure the load balancer to perform an HTTP or TCP health check on the Router
using the following URL:

http://router_IP:15999/v1/servers/self/reachable

This URL returns an HTTP 200 response code if the Router is reachable.

To make a Router unreachable, you can block port 15999 on the Router. If the load
balancer is unable to access the Router on port 15999 it no longer forwards requests
to the Router. For example, you can block the port by using the following iptables
command on the Router node:

sudo iptables -A INPUT -i eth0 -p tcp --dport 15999 -j REJECT

To later make the Router available, flush iptables:

sudo iptables -F

You might be using iptables to manage other ports on the node so you have to take
that into consideration when you flush iptables or use iptables to block port 15999. If
you are using iptables for other rules, you can use the -D option to reverse the
specific change:

sudo iptables -D INPUT -i eth0 -p tcp --dport 15999 -j REJECT

NOTE: As an alternative to checking reachability on the Router on port 15999, you can check the
port on individual virtual hosts. For example, if you have a virtual host on port 9001, make a TCP
check on that port. To make that port unreachable, you can use the following iptables command:

sudo iptables -A INPUT -i eth0 -p tcp --syn --dport 9001 -j REJECT

Perform Router health checks

You can perform the following types of health checks on Routers:

● Liveness: A signal to the monitoring subsystem that it can restart a component. For example:
  To check a Router's liveness:
  http://router_IP:8081/v1/servers/self/up
  To check a Router's liveness from a load balancer:
  http://router_IP:15999/v1/servers/self/reachable
● Readiness: Determines if a Router can process customer requests for a particular environment. For example, to check the availability of both a Router and its Message Processor pool:
  http://router_IP:15999/{org}__{env}

To get the status of a Router, make a request to port 8081 on the Router:

curl -v http://router_IP:8081/v1/servers/self/up

If the Router is up, the request returns "true" in the response and HTTP 200. Note that this call only checks if the Router is up and running. Control of the Router's reachability from a load balancer is determined by port 15999.

To get the status of a Message Processor:

curl http://Message_Processor_IP:8082/v1/servers/self/up
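A load-balancer-style probe of the reachability endpoint can be sketched as a small shell function. router_IP is a placeholder; the function simply maps the HTTP status returned on port 15999 to a reachable/unreachable decision:

```shell
# Sketch: decide reachability from the HTTP status of the port 15999 endpoint.
check_router() {
  local ip="$1"
  local code
  # --connect-timeout keeps the probe fast when the port is blocked.
  code=$(curl -s -o /dev/null --connect-timeout 2 -w '%{http_code}' \
    "http://${ip}:15999/v1/servers/self/reachable") || code=000
  if [ "$code" = "200" ]; then
    echo reachable
  else
    echo unreachable
  fi
}
```

For example, `check_router 127.0.0.1` on a host with no Router listening reports "unreachable", which is the same decision a load balancer would make after you block port 15999 with iptables.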

Setting HTTP request/response header limits

The Edge Router and Message Processor have predefined limits on the size of request/response headers and on the line size.

Configure limits for the Router


For the Router, edit the following properties in
/opt/apigee/customer/application/router.properties to change the
default values:

# Request buffers
# default:
# conf_load_balancing_load.balancing.driver.large.header.buffers=8 16k
# new value:
conf_load_balancing_load.balancing.driver.large.header.buffers=8 32k

# Response buffers
# default:
# conf_load_balancing_load.balancing.driver.proxy.buffer.size=64k
# new value:
conf_load_balancing_load.balancing.driver.proxy.buffer.size=128k

If that file does not exist, create it.

For conf_load_balancing_load.balancing.driver.large.header.buffers, the first parameter specifies the number of buffers, and the second the size of each buffer. The buffers are allocated dynamically and released after use. These settings are only used if the request header is more than 1 KB; for requests whose headers and request URI are less than 1 KB, the large buffers are not used.

For conf_load_balancing_load.balancing.driver.proxy.buffer.size, specify the size of the response buffer.

The Edge Router is implemented using NGINX. For more on these properties, see the NGINX documentation for the corresponding directives:

● conf_load_balancing_load.balancing.driver.large.header.buffers (NGINX large_client_header_buffers)
● conf_load_balancing_load.balancing.driver.proxy.buffer.size (NGINX proxy_buffer_size)

You must restart the Router after changing these properties:

/opt/apigee/apigee-service/bin/apigee-service edge-router restart

Configuring limits for the Message Processor


For the Message Processor, which handles outgoing requests to your backend
services, edit the following properties in
/opt/apigee/customer/application/message-processor.properties
to change these default values:

conf/http.properties+HTTPRequest.line.limit=7k
conf/http.properties+HTTPRequest.headers.limit=25k
conf/http.properties+HTTPResponse.line.limit=2k
conf/http.properties+HTTPResponse.headers.limit=25k
Note: The property syntax is different for the Message Processor because the properties are
commented out by default.

If that file does not exist, create it.

You must restart the Message Processor after changing these properties:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

Use IPv6 on the Router


By default, all runtime API calls to Apigee Edge for Private Cloud use IPv4. You can add IPv6 support to the Router by setting the conf_load_balancing_load.balancing.driver.nginx.listen.endpoint.all.interfaces.enabled property to true on your Router.

TIP: You can check if IPv6 is already enabled by logging in to the Router and searching for this property, as the following example shows:

/opt/apigee/apigee-service/bin/apigee-service edge-router configure --search conf_load_balancing_load.balancing.driver.nginx.listen.endpoint.all.interfaces.enabled

To enable IPv6 on your Router:

1. Log in to the Router.
2. Open the router.properties configuration file in an editor. If the file does not exist, create it:
   vi /opt/apigee/customer/application/router.properties
3. Set the following property to true:
   conf_load_balancing_load.balancing.driver.nginx.listen.endpoint.all.interfaces.enabled=true
4. Save your changes to the properties file.
5. Restart the Router by executing the following command:
   /opt/apigee/apigee-service/bin/apigee-service edge-router restart
   During the restart, you should see output that is similar to the following:
   [ChangeDelta, position: 775, lines: [load.balancing.driver.nginx.listen.endpoint.all.interfaces.enabled=false] to [load.balancing.driver.nginx.listen.endpoint.all.interfaces.enabled=true]]
6. Configure your Router's IP address and ports so that incoming API requests resolve to the IPv6 address/ports. Note that incoming API requests must be sent from IPv6-enabled clients.
7. Test your configuration by making an API request to your Router's IPv6 address and port.

Configuring the Router timeout

You can configure the Router timeout when accessing Message Processors as part
of an API proxy request.
The Edge Router has a default timeout of 57 seconds when attempting to access a
Message Processor as part of handling a request through an API proxy. After that
timeout expires, the Router attempts to connect to another Message Processor, if
one is available. Otherwise, it returns an error.

The following two properties control the Router timeout:

● conf_load_balancing_load.balancing.driver.proxy.read.timeout
  Specifies the wait time for a single Message Processor. The default value is 57 seconds.
  You can set the time interval as something other than seconds by using the following notation:
  ms: milliseconds
  s: seconds (default)
  m: minutes
  h: hours
  d: days
  w: weeks
  M: months (length of 30 days)
  y: years (length of 365 days)
  For example, to set the wait time to 2 hours, you can use either of the following values:
  conf_load_balancing_load.balancing.driver.proxy.read.timeout=2h # 2 hours
  OR
  conf_load_balancing_load.balancing.driver.proxy.read.timeout=120m # 120 minutes
● conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout
  Specifies the total wait time for all Message Processors when your Edge installation has multiple Message Processors. It has a default value of the current value of conf_load_balancing_load.balancing.driver.proxy.read.timeout, or 57 seconds.
  As with the conf_load_balancing_load.balancing.driver.proxy.read.timeout property, you can specify time intervals other than the default (seconds).

To configure the Router's timeout:

1. Edit the /opt/apigee/customer/application/router.properties file. If the file does not exist, create it.
2. Set the properties in the configuration file, as the following example shows:
   conf_load_balancing_load.balancing.driver.proxy.read.timeout=1800000ms # 1800000 milliseconds
   conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout=1d # 1 day
3. Make sure the properties file is owned by the 'apigee' user:
   chown apigee:apigee /opt/apigee/customer/application/router.properties
4. Restart the Router:
   /opt/apigee/apigee-service/bin/apigee-service edge-router restart

To set retry options, use the RetryOption property as described in virtual host
configuration properties.

Configure forward proxying


Forward proxies provide a single point through which multiple machines send
requests to an external server. They can enforce security policies, log and analyze
requests, and perform other actions so that requests adhere to your business rules.
With Edge, a forward proxy typically sits between your API proxies and an external TargetEndpoint (a backend target server).

To use an HTTP forward proxy between Edge and the TargetEndpoint, you must
configure the outbound proxy settings on the Message Processors (MPs). These
properties configure the MPs to route target requests from Edge to the HTTP
forward proxy.

To configure an MP for forward proxying:

1. On the MP, edit the following file. If it does not exist, create it:
   /opt/apigee/customer/application/message-processor.properties
2. Edit the file to set the proxy-related properties described in the table below.
3. Ensure that the properties file is owned by the 'apigee' user:
   chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
4. Save your changes to the properties file.
5. Restart the MP, as the following example shows:
   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

The following table describes the properties in the message-processor.properties file that you use to configure an MP for forward proxying to a backend server:

● conf_http_HTTPClient.use.proxy
  Permits forward proxy use. The default value is true, which means that you can use a forward proxy at the API proxy layer by including the relevant XML in your bundle configuration. If you set this value to false, then you cannot use a forward proxy.
● conf_http_HTTPClient.use.tunneling
  By default, Edge uses tunneling for all traffic. To disable tunneling by default, set this property to "false".
● use.proxy.host.header.with.target.uri
  Sets the target host and port as the Host header. By default, the proxy host and port are set as the Host header. You can only set this property in the TargetEndpoint. For example:

  <HTTPTargetConnection>
    <Properties>
      <Property name="use.proxy.host.header.with.target.uri">true</Property>
    </Properties>
    <URL>https://mocktarget.apigee.net/my-target</URL>
  </HTTPTargetConnection>

● conf/http.properties+HTTPClient.proxy.type
  Specifies the type of the HTTP proxy as HTTP or HTTPS. By default, it uses "HTTP".
● conf/http.properties+HTTPClient.proxy.host
  Specifies the host name or IP address where the HTTP proxy is running.
● conf/http.properties+HTTPClient.proxy.port
  Specifies the port on which the HTTP proxy is running. If this property is omitted, by default it uses port 80 for HTTP and port 443 for HTTPS.
● conf/http.properties+HTTPClient.proxy.user
  conf/http.properties+HTTPClient.proxy.password
  If the HTTP proxy requires basic authentication, then use these properties to provide authorization details.

For example:

conf_http_HTTPClient.use.proxy=true
conf_http_HTTPClient.use.tunneling=false
conf/http.properties+HTTPClient.proxy.type=HTTP
conf/http.properties+HTTPClient.proxy.host=my.host.com
conf/http.properties+HTTPClient.proxy.port=3128
conf/http.properties+HTTPClient.proxy.user=USERNAME
conf/http.properties+HTTPClient.proxy.password=PASSWORD

If forward proxying is configured for the MP, then all traffic going from API proxies to
backend targets goes through the specified HTTP forward proxy. If the traffic for a
specific target of an API proxy should go directly to the backend target, bypassing
the forward proxy, then set the following property in the TargetEndpoint to override
the HTTP forward proxy:

<Property name="use.proxy">false</Property>

For more information on setting the TargetEndpoint properties, including how to


configure the connection to the target endpoint, see Endpoint properties reference.

To disable forward proxying for all targets by default, set the following property in
your message-processor.properties file:
conf_http_HTTPClient.use.proxy=false

Then set use.proxy to "true" for any TargetEndpoint that you want to go through an
HTTP forward proxy:

<Property name="use.proxy">true</Property>

By default Edge uses tunneling for the traffic to the proxy. To disable tunneling by
default, set the following property in the message-processor.properties file:

conf_http_HTTPClient.use.tunneling=false

If you want to disable tunneling for a specific target, then set the
use.proxy.tunneling property in the TargetEndpoint. If the target uses TLS/SSL,
then this property is ignored, and the message is always sent via a tunnel:

<Property name="use.proxy.tunneling">false</Property>

For Edge itself to act as the forward proxy - receiving requests from the backend
services and routing them to the internet outside of the enterprise - first set up an
API proxy on Edge. The backend service can then make a request to the API proxy,
which can then connect to external services.

Set the message size limit on the Router or Message Processor

To prevent memory issues in Edge, message payload size on the Router and Message Processor is restricted to 10 MB. Exceeding that size results in a protocol.http.TooBigBody error.

Use the following properties to change the limits on the Router, Message Processor,
or both. Both properties have a default value of "10m" corresponding to 10MB:

● conf_http_HTTPRequest.body.buffer.limit
● conf_http_HTTPResponse.body.buffer.limit

To set these properties:

1. Open the router.properties file or message-processor.properties file in an editor. If the file does not exist, create it:
   vi /opt/apigee/customer/application/router.properties
   or:
   vi /opt/apigee/customer/application/message-processor.properties
2. Set the properties as desired:
   conf_http_HTTPRequest.body.buffer.limit=15m
   conf_http_HTTPResponse.body.buffer.limit=15m
3. Save your changes.
4. Make sure the properties file is owned by the "apigee" user:
   chown apigee:apigee /opt/apigee/customer/application/router.properties
   or:
   chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
5. Restart the Edge component:
   /opt/apigee/apigee-service/bin/apigee-service edge-router restart
   or:
   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

Set L1 cache expiry on a Message Processor
Apigee Edge provides caching for persistence of data across requests. When API
data is received, it is stored in cache for a brief period of time and then deleted. The
maximum amount of time a piece of data is kept before being deleted is called the
expiry, or time-to-live (TTL), of the cache. Each cache has a default TTL, but in some
cases you may need to change the TTL value to improve API performance.

Types of cache

API data is stored in two types of cache:

● Level 1 (L1): In-memory cache, which has faster access but less available
storage capacity.
● Level 2 (L2): Persistent cache in a Cassandra data store, which has slower
access but more available storage capacity.

When a data entry in L1 cache reaches the L1 TTL, it is deleted. However, a copy of
the entry is kept in the L2 cache (which has a longer TTL than L1 cache), where it
remains accessible to other message processors. See In-memory and persistent
cache levels for more details about cache.
Max L1 TTL

In Edge for Private Cloud, you can set the maximum L1 cache TTL for each message
processor using the Max L1 TTL property
(conf_cache_max.l1.ttl.in.seconds). An entry in the L1 cache will expire
after reaching the Max L1 TTL value and be deleted.

Note: In Edge for Public Cloud, you can only set the TTL for L2 cache.

Notes:

● By default, Max L1 TTL is disabled (with the value -1), in which case the TTL
of an entry in L1 cache is determined by the PopulateCache policy's expiry
settings (for both L1 and L2 cache).
● Max L1 TTL only has an effect if its value is smaller than the overall cache
expiry.

Setting Max L1 TTL

You can set Max L1 TTL on a message processor as follows:

1. Open the configuration file /opt/apigee/customer/application/message-processor.properties in an editor. If the file doesn't exist, create it.
2. Set the Max L1 TTL property to the desired value. We recommend the value 180 seconds:
   conf_cache_max.l1.ttl.in.seconds=180
3. Make sure the properties file is owned by the "apigee" user:
   chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
4. Restart the message processor:
   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

Guidelines for setting Max L1 TTL

When setting Max L1 TTL, keep the following guidelines in mind:

● RPC misses: If you are noticing remote procedure call (RPC) misses between message processors (MPs), especially across multiple data centers, it's possible that the L1 cache has stale entries, which remain stale until they are deleted from L1 cache. Setting Max L1 TTL to a lower value forces stale entries to be removed, and replaced with fresh values from L2 cache, sooner.
  Solution: Decrease conf_cache_max.l1.ttl.in.seconds.
● Excessive load on Cassandra: When you set a value for Max L1 TTL, L1 cache entries expire more often, leading to more L1 cache misses and more L2 cache hits. Since L2 cache is hit more often, Cassandra incurs an increased load.
  Solution: Increase conf_cache_max.l1.ttl.in.seconds.

As a general rule, tune Max L1 TTL to a value that balances the frequency of RPC
misses between MPs with the potential load on Cassandra.

We recommend setting the value of conf_cache_max.l1.ttl.in.seconds to at least 180 seconds (3 minutes) to keep processing running smoothly.
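The interaction between the policy expiry and Max L1 TTL can be sketched as a min() of the two values. This models the behavior described above (Max L1 TTL only matters when it is smaller than the overall cache expiry, and -1 means disabled); it is an illustration, not Edge's implementation:

```shell
# Sketch: effective L1 lifetime of a cache entry, assuming Max L1 TTL acts as
# an upper bound on the PopulateCache policy's expiry, and -1 means disabled.
effective_l1_ttl() {
  local policy_expiry="$1" max_l1_ttl="$2"
  if [ "$max_l1_ttl" -lt 0 ] || [ "$policy_expiry" -le "$max_l1_ttl" ]; then
    echo "$policy_expiry"
  else
    echo "$max_l1_ttl"
  fi
}

effective_l1_ttl 300 180   # policy expiry 300s, max L1 TTL 180s -> 180
effective_l1_ttl 300 -1    # max L1 TTL disabled -> 300
```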

Writing analytics data to a file


By default, analytics data collected by the Message Processor is uploaded to Qpid
and Postgres for processing. You can then view the analytics data in the Edge UI.

Alternatively, you can configure the Message Processor to write analytics data to
disk. Then, you can upload that data to your own analytics system for analysis. For
example, you could upload the data to Google Cloud BigQuery. You can then take
advantage of the powerful query and machine learning capabilities offered by
BigQuery and TensorFlow to perform your own data analysis.

You can also choose to use both options. That means you can upload the analytics
data to Qpid/Postgres and also save the data to disk.

File names and location


By default, if you enable the writing of analytics data to disk files, the files are written
to the following directory:

/opt/apigee/var/log/edge-message-processor/ax/tmp

Edge creates a new directory under /tmp for the data files, at one minute intervals.
The format of the directory name is:

org~env~yyyyMMddhhmmss

For example:

myorg~prod~20190909163500
myorg~prod~20190909163600
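Because the directory name encodes the organization, environment, and interval timestamp with ~ separators, a downstream collector can recover the parts with plain shell parameter expansion. A minimal sketch (the directory name is one of the examples above):

```shell
# Sketch: split an interval-directory name (org~env~yyyyMMddhhmmss) into parts.
dir='myorg~prod~20190909163500'
org=${dir%%~*}     # text before the first ~
rest=${dir#*~}     # text after the first ~
env=${rest%%~*}    # text before the next ~
ts=${rest#*~}      # the interval timestamp
echo "$org $env $ts"   # prints: myorg prod 20190909163500
```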

Each directory contains a .gz file with the individual data files for that interval. The
format of the .gz file name is:

4DigitRandomHex_StartTime.StartTimePlusInterval_internalHostIP_hostUUID_writer_index.txt.gz

At regular intervals, Edge moves the directory and the .gz file it contains from /tmp
to one of the following directories, based on the setting of the uploadToCloud
Message Processor configuration property:

● uploadToCloud = false: files are moved to
/opt/apigee/var/log/edge-message-processor/ax/staging
● uploadToCloud = true (default): files are moved to
/opt/apigee/var/log/edge-message-processor/ax/failed

Unzip the data from either the /staging or /failed directory to obtain the
analytics data files.
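The extraction step can be scripted. The following sketch mimics the /staging layout in a temporary directory (on a real node the path is /opt/apigee/var/log/edge-message-processor/ax/staging) and unpacks each interval directory's .gz files; the sample directory and file names are illustrative:

```shell
# Sketch: unpack analytics data files from each interval directory.
# A temp directory stands in for /opt/apigee/var/log/edge-message-processor/ax/staging.
AX_DIR=$(mktemp -d)
mkdir -p "$AX_DIR/myorg~prod~20190909163500"
printf '{"apiproxy":"weather"}\n' | gzip \
  > "$AX_DIR/myorg~prod~20190909163500/0a1b_sample.txt.gz"

# Unzip every .gz in every org~env~timestamp directory, keeping the originals.
for d in "$AX_DIR"/*~*~*; do
  gunzip -k "$d"/*.txt.gz
done
ls "$AX_DIR/myorg~prod~20190909163500"
```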

Configuration properties
Use the following properties to configure the Message Processor to write analytics
data to disk. All of these properties are optional:

Property Description

conf_analytics_analytics.saveToDisk
Set to true to configure the Message Processor to write analytics data to disk files.
The default value is false.

conf_analytics_analytics.sendToQueue
Set to true to configure the Message Processor to upload the data to Qpid/Postgres.
Set to false to disable the writing of analytics data to Qpid/Postgres.
The default value is true.

conf_analytics_analytics.baseDataDirectoryPath
Specifies the base path where the analytics data files are written.
The default value is /opt/apigee/var/log/edge-message-processor/ax.

conf_analytics_analytics.allocatedDiskSpaceInMBytes
Specifies the disk space, in megabytes, allocated for analytics files.
The default value is 3072. If you exceed the allocated disk space for the analytics
data files, the Message Processor stops saving analytics data and writes an error
message to its log files.

conf_analytics_analytics.uploadToCloud
Controls the final location of the analytics files:
● false: files are moved to /opt/apigee/var/log/edge-message-processor/ax/staging
● true (default): files are moved to /opt/apigee/var/log/edge-message-processor/ax/failed
Note: Even though this property is called uploadToCloud, no data is uploaded to
the cloud. All analytics data remains local to your installation.

To set these properties:

1. Open the message-processor.properties file in an editor. If the file does
not exist, create it:
vi /opt/apigee/customer/application/message-processor.properties
2. Set the properties as desired:
# Enable writing analytics data to disk.
conf_analytics_analytics.saveToDisk=true

# Disable writing analytics data to Qpid/Postgres.
conf_analytics_analytics.sendToQueue=false

# Specify base directory for analytics data files.
conf_analytics_analytics.baseDataDirectoryPath=/opt/apigee/var/smg

# Set the disk space available for analytics files.
conf_analytics_analytics.allocatedDiskSpaceInMBytes=3072

# Move final analytics data files to the /staging directory.
conf_analytics_analytics.uploadToCloud=false
3. Save your changes.
4. Make sure the properties file is owned by the "apigee" user:
chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
5. Set the value of the consumer-type property to ax for the axgroup-001
analytics group:
curl -X POST -H "Content-Type:application/json" \
"http://ms-ip:8080/v1/analytics/groups/ax/axgroup-001/properties?propName=consumer-type&propValue=ax" \
-u sysAdminEmail:sysAdminPWord
By default, the name of the analytics group is axgroup-001. In the config file
for an Edge installation, you can set the name of the analytics group by using
the AXGROUP property. If you are unsure of the name of the analytics group,
run the following command on the Management Server node to display it:
apigee-adminapi.sh analytics groups list \
--admin sysAdminEmail --pwd sysAdminPword --host localhost
This command returns the analytics group name in the name field.
6. Restart the Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
After the restart, the Message Processor writes analytics data to data files.
7. Repeat these steps for all Message Processors.

Modifying Java memory settings


Depending on your traffic and processing requirements, you may need to change the
heap memory size or class metadata size for your nodes running Java-based Private
Cloud components.

This section provides the default and recommended Java heap memory sizes, as
well as the process for changing the defaults. Lastly, this section describes how to
change other JVM settings using properties files.

Default and recommended heap memory sizes

The following table lists the default and recommended Java heap memory sizes for
Java-based Private Cloud components:
Component | Properties File Name | Default Heap Size | Recommended Heap Size

Runtime
Cassandra | n/a | Automatically configured (1) | Automatically configured (1)
Message Processor | message-processor.properties | 512MB | 3GB - 6GB (2)
Router | router.properties | 512MB | 512MB

Analytics
Postgres server | postgres-server.properties | 512MB | 512MB
Qpid server | qpid-server.properties | 512MB | 2GB - 4GB

Management
Management Server | management-server.properties | 512MB | 512MB
UI | ui.properties | 512MB | 512MB
OpenLDAP | n/a | Native App (3) | Native App (3)
Zookeeper | zookeeper.properties | 2048MB | 2048MB
Notes

(1) Cassandra dynamically calculates the maximum heap size when it starts up.
Currently, this is one half the total system memory, with a maximum of 8192MB.
For information on setting the heap size, see Change heap memory size.

(2) For Message Processors, Apigee recommends that you set the heap size to
between 3GB and 6GB. Increase the heap size beyond 6GB only after conducting
performance tests. If the heap usage nears the maximum limit during your
performance testing, then increase the maximum limit. For information on setting
the heap size, see Change heap memory size.

(3) Not all Private Cloud components are implemented in Java. Apps running
natively on the host platform do not have configurable Java heap sizes; instead,
they rely on the host system for memory management.

To determine how much total memory Apigee recommends that you allocate to your
Java-based components on a node, add the values listed above for each component
on that node. For example, if your node hosts both the Postgres and Qpid servers,
Apigee recommends that your combined memory allocation be between 2.5GB and
4.5GB.
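The Postgres-plus-Qpid arithmetic above works out as follows, using the figures from the table (in MB):

```shell
# Sketch: combined recommended heap for a node hosting Postgres and Qpid.
postgres_mb=512      # Postgres server recommended heap
qpid_min_mb=2048     # Qpid server recommended range: 2GB - 4GB
qpid_max_mb=4096
echo "min: $((postgres_mb + qpid_min_mb)) MB"   # 2560 MB = 2.5GB
echo "max: $((postgres_mb + qpid_max_mb)) MB"   # 4608 MB = 4.5GB
```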

For a list of required hardware (such as RAM), see Installation requirements.

Change heap memory sizes

To change heap memory settings, edit the properties file for the component. For
example, for the Message Processor, edit the
/opt/apigee/customer/application/message-processor.properties
file.

If the message-processor.properties file does not exist, or if the
corresponding .properties file for any Edge component does not exist, create it
and then change ownership of the file to the "apigee" user, as the following example
shows:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

TIP: In Java 1.8 and later, class metadata memory allocation replaced permanent generation
("permgen"). Some configuration properties still refer to permgen (often reflecting "perm" in the
property name).

If the component is installed on multiple machines, such as the Message Processor,


then you must edit the properties file on all machines that host the component.

The following table lists the properties that you edit to change heap sizes:

Property Description

bin_setenv_min_mem
The minimum heap size. The default is based on the values listed in Default and
recommended heap memory sizes.
This setting corresponds to the Java -Xms option.
TIP: For optimal performance, set bin_setenv_min_mem and
bin_setenv_max_mem to the same value.

bin_setenv_max_mem
The maximum heap size. The default is based on the values listed in Default and
recommended heap memory sizes.
This setting corresponds to the Java -Xmx option.
TIP: For optimal performance, set bin_setenv_min_mem and
bin_setenv_max_mem to the same value.

bin_setenv_meta_space_size
The class metadata size. The default value is set to the value of
bin_setenv_max_permsize, which defaults to 128 MB. On the Message
Processor, Apigee recommends that you set this value to 256 MB or 512 MB,
depending on your traffic.
This setting corresponds to the Java -XX:MetaspaceSize option.

When you set heap size properties on a node, use the "m" suffix to indicate
megabytes, as the following example shows:
bin_setenv_min_mem=4500m

bin_setenv_max_mem=4500m

bin_setenv_meta_space_size=1024m

After setting the values in the properties file, restart the component:

/opt/apigee/apigee-service/bin/apigee-service component restart

For example:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

Change other JVM properties

For Java settings not controlled by the properties listed above, you can pass
additional JVM flags or values to any Edge component. The *.properties files
are read in by Bash, so values should be enclosed in single quotes (') to preserve
literal characters, or in double quotes (") if you require shell expansion.

● bin_setenv_ext_jvm_opts: Sets any Java property not specified by the other
properties. For example:
bin_setenv_ext_jvm_opts='-XX:MaxGCPauseMillis=500'
Do not use bin_setenv_ext_jvm_opts to set -Xms, -Xmx, or
-XX:MetaspaceSize, as these values are controlled by the properties listed
above.
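To see why the quoting matters, the sketch below sources a properties file the way Bash does; the single quotes keep the JVM flags literal. The file name and the second flag are illustrative, not taken from a real installation:

```shell
# Sketch: .properties files are sourced by Bash, so quoting is preserved as-is.
PROPS=$(mktemp)
cat > "$PROPS" <<'EOF'
bin_setenv_ext_jvm_opts='-XX:MaxGCPauseMillis=500 -XX:+UseG1GC'
EOF
. "$PROPS"
echo "$bin_setenv_ext_jvm_opts"   # prints: -XX:MaxGCPauseMillis=500 -XX:+UseG1GC
```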

For additional tips on configuring memory for Private Cloud components, see this
article on the Edge forums.

Configure monetization server


properties
You can configure the monetization servers using the following API:

Property Change API Details

Endpoint
https://monetization_server_IP/mint/config/properties/property_name?value=property_value

Request
Notes about the request that you send:
● Body: None. The property name and value are query string parameters.
Do not set a value for the body of the request.
● HTTP method: POST
● Authentication: You can use OAuth2 or pass the username and password in
the request.
● You must URL encode your property value before sending the request.

Response
The response is returned in plain text.

When you call the property change API, the change applies only to the server you
called. It does not apply to other servers in the pod. You must call this API on every
server on which you want to change the property.

Server configuration properties


The following table describes the monetization server properties that you can set:

Property Description

mint.invalidTscStorage.setting
Monetization rating servers log all invalid Transaction Status Code (TSC)
messages in Postgres. TSCs include any invalid transaction, whether it's from the
client's back end or the result of other criteria not being met in Apigee Edge for
Private Cloud.

The number and frequency of TSC messages can be large, which means that they
can cause your queries to take extra time to process.

The mint.invalidTscStorage.setting property determines whether Apigee
Edge for Private Cloud stores invalid TSC transactions.

Valid values are:

● saveToDatabase: Instructs Apigee Edge for Private Cloud to save all
invalid TSC transactions to the Postgres database. This is the default.
● discard: Instructs Apigee Edge for Private Cloud to not save invalid TSC
transactions to the Postgres database. Instead, they are discarded.

The default value is saveToDatabase.

To change the mint.invalidTscStorage.setting property for all your rating
servers, you must send a similar API request to each server.

Set server property example


The following example sets the mint.invalidTscStorage.setting property:

curl -u admin:admin123 -X PUT \
"https://monetization_server_IP:8080/v1/mint/properties/mint.invalidTscStorage.setting?value=discard"
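Because property values must be URL-encoded before they go into the query string, values containing spaces or reserved characters need translating first. Below is a minimal sed-based encoder covering only %, space, and & for illustration; a real client should use a complete URL-encoder:

```shell
# Sketch: minimal URL-encoding of a property value (handles %, space, and & only).
# Encode % first so already-encoded characters are not double-encoded.
raw='a value&more'
encoded=$(printf '%s' "$raw" | sed -e 's/%/%25/g' -e 's/ /%20/g' -e 's/&/%26/g')
echo "$encoded"   # prints: a%20value%26more
```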

Enabling secret key encryption


This document explains how to enable the encryption of developer app consumer
secrets (client credentials) stored in the Cassandra database.

Note: We recommend that you implement the steps described in this document in a test
environment and do suitable testing before deploying the changes to a production environment.

Overview

Traditionally, Apigee Edge for Private Cloud has provided optional encryption for key
value map (KVM) data and OAuth access tokens.

The following table describes the encryption options for data at rest in Apigee for
Private Cloud:

Entity | Encryption enabled by default | Encryption optionally available | Related documentation
KVMs | No | Yes | See About encrypted KVMs.
OAuth access tokens | No | Yes | See Hashing tokens for extra security.
Developer app consumer secrets | No | Yes | To enable, perform the configuration steps in this document.

To enable encryption of client credentials, you need to perform the following tasks
on all message processor and management server nodes:

● Create a keystore to store a key encryption key (KEK). Apigee uses this
key to encrypt the secret keys needed to encrypt your data.
● Edit configuration properties on all management server and message
processor nodes.
● Create a developer app to trigger key creation.
● Restart the nodes.

These tasks are explained in this document.

What you need to know about the key encryption


feature

The steps in this document explain how to enable the KEK feature, which allows
Apigee to encrypt the secret keys used to encrypt developer app consumer secrets
when they are stored at rest in the Cassandra database.

By default, any existing values in the database will remain unchanged (in plain text)
and will continue to work as before.

Note: Apigee provides a "fallback" option that is disabled by default. If the allowFallback flag
is set to true, as explained in the steps that follow, then requests involving unencrypted entities
will succeed. If allowFallback is false (the default), unencrypted developer app consumer
secrets will no longer work, causing API key verification to fail.
If you perform any write operation on an unencrypted entity, it will be encrypted when
the operation is saved. For example, if you revoke an unencrypted token and then
later approve it, the newly approved token will be encrypted.

Keeping keys safe

Be sure to store a copy of the keystore in which the KEK is stored in a safe location.
We recommend using your own secure mechanism to save a copy of the keystore.
As the instructions in this document explain, a keystore must be placed on each
message processor and management server node where the local configuration file
can reference it. But it is also important to store a copy of the keystore somewhere
else for safekeeping and as a backup.

Enabling key encryption

Follow these steps to enable consumer secret key encryption:

Prerequisites

You must meet these requirements before performing the steps in this document:

● You must install or upgrade to Apigee Edge for Private Cloud 4.50.00.10 or
later.
● You must be an Apigee Edge for Private Cloud administrator.

Step 1: Generate a keystore

Follow these steps to create a keystore to hold the key encryption key (KEK):

Important: It is your responsibility to keep the keystore safe. See Keeping keys safe.

1. Execute the following command to generate a keystore to store a key that will
be used to encrypt the KEK. Enter the command exactly as shown. (You can
provide any alias name you wish):
keytool -genseckey -alias KEYSTORE_NAME -keyalg AES -keysize 256 \
-keystore kekstore.p12 -storetype PKCS12
2. When prompted, enter a password. You will use this password in later
sections when you configure the management server and message
processor.
This command generates a kekstore.p12 keystore file containing a key with
alias KEYSTORE_NAME.
3. (Optional) Verify the file was generated correctly with the following command.
If the file is correct, the command returns a key with the alias
KEYSTORE_NAME:
keytool -list -keystore kekstore.p12

Step 2: Configure the management server

Next, configure the management server. If you have management servers installed
on multiple nodes, you must repeat these steps on each node.

1. Copy the keystore file you generated in Step 1 to a directory on the
management server node, such as
/opt/apigee/customer/application. For example:
cp certs/kekstore.p12 /opt/apigee/customer/application
2. Ensure the file is readable by the apigee user:
chown apigee:apigee /opt/apigee/customer/application/kekstore.p12
chmod 400 /opt/apigee/customer/application/kekstore.p12
3. Add the following properties to
/opt/apigee/customer/application/management-server.properties.
If the file does not exist, create it. See also Property file reference.

conf_keymanagement_kmscred.encryption.enabled=true

# Fallback is true to ensure your existing plaintext credentials continue to work
conf_keymanagement_kmscred.encryption.allowFallback=true

conf_keymanagement_kmscred.encryption.keystore.path=PATH_TO_KEYSTORE_FILE
conf_keymanagement_kmscred.encryption.kek.alias=KEYSTORE_NAME

# These could alternately be set as environment variables. These variables should be
# accessible to the Apigee user during bootup of the Java process. If environment
# variables are specified, you can skip the password configs below.
# KMSCRED_ENCRYPTION_KEYSTORE_PASS=
# KMSCRED_ENCRYPTION_KEK_PASS=
See also Using environment variables for configuration properties.

conf_keymanagement_kmscred.encryption.keystore.pass=KEYSTORE_PASSWORD
conf_keymanagement_kmscred.encryption.kek.pass=KEK_PASSWORD
Note that the KEK_PASSWORD may be the same as the KEYSTORE_PASSWORD,
depending on the tool used to generate the keystore.
4. Restart the management server by using the following commands:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-management-server wait_for_ready
The wait_for_ready command returns the following message when the
management server is ready:
Checking if management-server is up: management-server is up.
5. If you have management servers installed on multiple nodes, repeat steps 1-4
above on each management server node.
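Before restarting, it can help to confirm that every required encryption property made it into the file. The sketch below uses a stand-in temp file and placeholder values; on a real node the file is /opt/apigee/customer/application/management-server.properties:

```shell
# Sketch: verify the four required encryption properties are present.
PROPS=$(mktemp)   # stands in for the real management-server.properties
cat > "$PROPS" <<'EOF'
conf_keymanagement_kmscred.encryption.enabled=true
conf_keymanagement_kmscred.encryption.allowFallback=true
conf_keymanagement_kmscred.encryption.keystore.path=/opt/apigee/customer/application/kekstore.p12
conf_keymanagement_kmscred.encryption.kek.alias=KEYSTORE_NAME
EOF
missing=0
for key in enabled allowFallback keystore.path kek.alias; do
  grep -q "^conf_keymanagement_kmscred\.encryption\.$key=" "$PROPS" \
    || { echo "missing: $key"; missing=1; }
done
echo "missing=$missing"   # prints: missing=0
```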

Step 3: Create a developer app

Now that the management servers are updated, you must create a developer app to
trigger generation of the key used to encrypt the client credentials data:

1. Create a developer app to trigger the creation of a data encryption key.
For steps, see Registering an app.
2. Delete the developer app if you wish. You do not need to keep it around once
the encryption key is generated.

Step 4: Configure message processors

Until encryption is enabled in the message processors, runtime requests will not be
able to process any encrypted credentials.

1. Copy the keystore file you generated in Step 1 to a directory on the message
processor node, such as /opt/apigee/customer/application. For
example:
cp certs/kekstore.p12 /opt/apigee/customer/application
2. Ensure the file is readable by the apigee user:
chown apigee:apigee /opt/apigee/customer/application/kekstore.p12
3. Add the following properties to
/opt/apigee/customer/application/message-processor.properties.
If the file does not exist, create it. See also Property file reference.

conf_keymanagement_kmscred.encryption.enabled=true

# Fallback is true to ensure your existing plaintext credentials continue to work
conf_keymanagement_kmscred.encryption.allowFallback=true

conf_keymanagement_kmscred.encryption.keystore.path=PATH_TO_KEYSTORE_FILE
conf_keymanagement_kmscred.encryption.kek.alias=KEYSTORE_NAME

# These could alternately be set as environment variables. These variables should be
# accessible to the Apigee user during bootup of the Java process. If environment
# variables are specified, you can skip the password configs below.
# KMSCRED_ENCRYPTION_KEYSTORE_PASS=
# KMSCRED_ENCRYPTION_KEK_PASS=
See also Using environment variables for configuration properties.

conf_keymanagement_kmscred.encryption.keystore.pass=KEYSTORE_PASSWORD
conf_keymanagement_kmscred.encryption.kek.pass=KEK_PASSWORD
Note that the KEK_PASSWORD may be the same as the KEYSTORE_PASSWORD,
depending on the tool used to generate the keystore.
4. Restart the message processor by using the following commands:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor wait_for_ready
The wait_for_ready command returns the following message when the
message processor is ready to process messages:
Checking if message-processor is up: message-processor is up.
5. If you have message processors installed on multiple nodes, repeat steps 1-4
on each message processor node.

Summary

Any developer apps that you create from now on will have their credential secret
encrypted at rest in the Cassandra database.

Using environment variables for configuration


properties

You can alternatively set the following message processor and management server
configuration properties using environment variables. If set, the environment
variables override the properties set in the message processor or management
server configuration file.

conf_keymanagement_kmscred.encryption.keystore.pass
conf_keymanagement_kmscred.encryption.kek.pass

The corresponding environment variables are:


export KMSCRED_ENCRYPTION_KEYSTORE_PASS=KEYSTORE_PASSWORD

export KMSCRED_ENCRYPTION_KEK_PASS=KEK_PASSWORD

If you set these environment variables, you can omit these configuration properties
from the configuration files on the message processor and management server
nodes, as they will be ignored:

conf_keymanagement_kmscred.encryption.keystore.pass

conf_keymanagement_kmscred.encryption.kek.pass

Property file reference

This section describes the configuration properties that you must set on all message
processor and management server nodes, as explained previously in this document.

Property Default Description

conf_keymanagement_kmscred.encryption.enabled
Default: false
Must be true to enable key encryption.

conf_keymanagement_kmscred.encryption.allowFallback
Default: false
Set allowFallback to true to ensure your existing plaintext credentials
continue to work.

conf_keymanagement_kmscred.encryption.keystore.path
Default: N/A
Provide the path to the KEK keystore on the message processor or management
server node. See Step 2: Configure the management server and Step 4: Configure
message processors.

conf_keymanagement_kmscred.encryption.kek.alias
Default: N/A
Alias against which the KEK is stored in the keystore.

conf_keymanagement_kmscred.encryption.keystore.pass
Default: N/A
Optional if you use environment variables to set these properties. See also Using
environment variables for configuration properties.

conf_keymanagement_kmscred.encryption.kek.pass
Default: N/A
Optional if you use environment variables to set these properties. See also Using
environment variables for configuration properties.

IDP authentication

This section describes how to configure Apigee Edge for Private Cloud to use an
IDP for authentication. There are two approaches you can use, depending on which
Edge UI you want to use:

● Edge UI (recommended): Configure Apigee Edge for Private Cloud to use an
external LDAP or SAML IDP for accessing the management API and the
Edge UI.
● Classic UI: Configure an internal or external IDP to access the management
API and the Classic UI.

Overview of external IDP authentication


(New Edge UI)
The Edge UI and Edge management API operate by making requests to the Edge
Management Server, which supports the following types of authentication:

● Basic authentication: Log in to the Edge UI or make requests to the Edge


management API by passing your username and password.
● OAuth2: Exchange your Edge Basic Auth credentials for an OAuth2 access
token and refresh token. Make calls to the Edge management API by passing
the OAuth2 access token in the Bearer header of an API call.
Edge supports using the following external Identity Providers (IDPs) for
authentication:

● Security Assertion Markup Language (SAML) 2.0: Generate OAuth access
tokens from SAML assertions returned by a SAML identity provider.
● Lightweight Directory Access Protocol (LDAP): Use LDAP's search and bind
or simple bind authentication methods to generate OAuth access tokens.

NOTE: SAML and LDAP are supported as authentication mechanisms only. They are not
supported for authorization. Therefore, you still use the Edge OpenLDAP database to maintain
authorization information.

See Assigning roles for more.

Both SAML and LDAP IDPs support a single sign-on (SSO) environment. By using an
external IDP with Edge, you can support SSO for the Edge UI and API in addition to
any other services that you provide and that also support your external IDP.

The instructions in this section for enabling external IDP support differ from
External authentication in the following ways:

● This section adds SSO support
● This section is for users of the Edge UI (not the Classic UI)
● This section applies to version 4.19.06 and later only

About Apigee SSO

To support SAML or LDAP on Edge, you install apigee-sso, the Apigee SSO
module. The following image shows Apigee SSO in an Edge for Private Cloud
installation:
You can install the Apigee SSO module on the same node as the Edge UI and
Management Server, or on its own node. Ensure that Apigee SSO has access to the
Management Server over port 8080.

Port 9099 must be open on the Apigee SSO node to support access to Apigee SSO
from a browser, from the external SAML or LDAP IDP, and from the Management
Server and Edge UI. As part of configuring Apigee SSO, you can specify whether
the external connection uses HTTP or the encrypted HTTPS protocol.

Apigee SSO uses a Postgres database accessible on port 5432 on the Postgres
node. Typically you can use the same Postgres server that you installed with Edge,
either a standalone Postgres server or two Postgres servers configured in
master/standby mode. If the load on your Postgres server is high, you can also
choose to create a separate Postgres node just for Apigee SSO.

Support added for OAuth2 to Edge for Private Cloud

As mentioned above, the Edge implementation of SAML relies on OAuth2 access
tokens. Therefore, OAuth2 support has been added to Edge for Private Cloud. For
more information, see Introduction to OAuth 2.0.

About SAML

SAML authentication offers several advantages. By using SAML you can:


● Take full control of user management. When users leave your organization
and are deprovisioned centrally, they are automatically denied access to
Edge.
● Control how users authenticate to access Edge. You can choose different
authentication types for different Edge organizations.
● Control authentication policies. Your SAML provider may support
authentication policies that are more in line with your enterprise standards.
● Monitor logins, logouts, unsuccessful login attempts, and high-risk
activities on your Edge deployment.

With SAML enabled, access to the Edge UI and Edge management API uses OAuth2
access tokens. These tokens are generated by the Apigee SSO module, which
accepts SAML assertions returned by your IDP.

Once generated from a SAML assertion, the OAuth token is valid for 30 minutes and
the refresh token is valid for 24 hours. Your development environment might support
automation for common development tasks, such as test automation or Continuous
Integration/Continuous Deployment (CI/CD), that require tokens with a longer
duration. See Using SAML with automated tasks for information on creating special
tokens for automated tasks.

About LDAP

The Lightweight Directory Access Protocol (LDAP) is an open, industry standard


application protocol for accessing and maintaining distributed directory information
services. Directory services may provide any organized set of records, often with a
hierarchical structure, such as a corporate email directory.

LDAP authentication within Apigee SSO uses the Spring Security LDAP module. As a
result, the authentication methods and configuration options for Apigee SSO’s LDAP
support are directly correlated to those found in Spring Security LDAP.

LDAP with Edge for the Private Cloud supports the following authentication methods
against an LDAP-compatible server:

● Search and Bind (indirect binding)


● Simple Bind (direct binding)

NOTE: Edge for the Private Cloud’s LDAP authentication does not support the Search and
Compare authentication mechanism.

Apigee SSO attempts to retrieve the user's email address and update its internal
user record with it, so that a current email address is on file; Edge uses this email
address for authorization purposes.
Edge UI and API URLs

The URL that you use to access the Edge UI and Edge management API is the same
as the one you used before you enabled SAML or LDAP. For the Edge UI:

http://edge_UI_IP_DNS:9000

https://edge_UI_IP_DNS:9000

Where edge_UI_IP_DNS is the IP address or DNS name of the machine hosting the
Edge UI. As part of configuring the Edge UI, you can specify that the connection use
HTTP or the encrypted HTTPS protocol.

For the Edge management API:

http://ms_IP_DNS:8080/v1

https://ms_IP_DNS:8080/v1

Where ms_IP_DNS is the IP address or DNS name of the Management Server. As part
of configuring the API, you can specify that the connection use HTTP or the
encrypted HTTPS protocol.

Configure TLS on Apigee SSO

By default, the connection to Apigee SSO uses HTTP over port 9099 on the node
hosting apigee-sso, the Apigee SSO module. Built into apigee-sso is a Tomcat
instance that handles the HTTP and HTTPS requests.

Apigee SSO and Tomcat support three connection modes:

● DEFAULT: The default configuration supports HTTP requests on port 9099.


● SSL_TERMINATION: Enables TLS access to Apigee SSO on the port of your
choice. You must specify a TLS key and cert for this mode.
● SSL_PROXY: Configures Apigee SSO in proxy mode, meaning you installed a
load balancer in front of apigee-sso and terminated TLS on the load
balancer. You can specify the port used on apigee-sso for requests from
the load balancer.

Enable external IDP support for the portal


After enabling external IDP support for Edge, you can optionally enable it for Apigee
Developer Services portal (or simply, the portal). The portal supports SAML and
LDAP authentication when making requests to Edge. Note that this is different from
SAML and LDAP authentication for developer login to the portal. You configure
external IDP authentication for developer login separately. See Configure the portal
to use IDPs for more.

As part of configuring the portal, you must specify the URL of the Apigee SSO
module that you installed with Edge.

Install and configure external IDPs for


Edge
NOTE: This section applies to using an external IDP with the new Edge UI. It is not intended for
users of the Classic UI.

The process for installing and configuring external IDP support on Apigee Edge for
Private Cloud requires that you perform some tasks on your IDP and some on Edge.
The general process is:

1. Install Edge: Ensure that your installation is working properly before
continuing.
2. Configure your IDP; you can choose from one of the following:
● Configure your SAML IDP
● Configure your LDAP IDP
3. Install and configure Edge SSO: Configuring the Apigee SSO module enables
SAML or LDAP on the Edge management API. As part of configuring this
module, you can optionally enable TLS access.
4. Enable your external IDP for the Edge UI.
5. Register new Edge users: For each user in the IDP that corresponds to an
Edge user, create an Edge user account and assign that user a role in an Edge
organization. The Edge user must have the same email address as is stored
for the user in the IDP.
6. (Optional) Enable HTTPS: Configure the Apigee SSO module to use HTTPS
instead of HTTP (the default).
7. (Optional) Disable Basic authentication: After you have confirmed that your
external IDP is working, you can disable Basic authentication to ensure your
environment is secure.

In addition, the following other tasks are also optional, depending on your
environment:

● Configure the portal to use external IDPs


● Install Apigee SSO for high availability
● Automate tasks with external IDPs

Configure your SAML IDP


The SAML specification defines three entities:

● Principal (Edge UI user)


● Service provider (Apigee SSO)
● Identity provider (returns SAML assertion)

When SAML is enabled, the principal (an Edge UI user) requests access to the
service provider (Apigee SSO). Apigee SSO (in its role as a SAML service provider)
then requests and obtains an identity assertion from the SAML IDP and uses that
assertion to create the OAuth2 token required to access the Edge UI. The user is then
redirected to the Edge UI.

This process is shown below:


In this diagram:

1. The user attempts to access the Edge UI by making a request to the login URL
for the Edge UI. For example: https://edge_ui_IP_DNS:9000
2. Unauthenticated requests are redirected to the SAML IDP. For example,
"https://idp.customer.com".
3. If the user is not logged in to the identity provider, then they are prompted to
log in.
4. The user logs in.
5. The user is authenticated by the SAML IDP, which generates a SAML 2.0
assertion and returns it to the Apigee SSO.
6. Apigee SSO validates the assertion, extracts the user identity from the
assertion, generates the OAuth 2 authentication token for the Edge UI, and
redirects the user to the main Edge UI page at:
https://edge_ui_IP_DNS:9000/platform/orgName
Where orgName is the name of an Edge organization.

Edge supports many IDPs, including Okta and the Microsoft Active Directory
Federation Services (ADFS). For information on configuring ADFS for use with Edge,
see Configuring Edge as a Relying Party in ADFS IDP. For Okta, see the following
section.

To configure your SAML IDP, Edge requires an email address to identify the user.
Therefore, the identity provider must return an email address as part of the identity
assertion.

In addition, you might require some or all of the following:

● Metadata URL: The SAML IDP might require the metadata URL of Apigee SSO.
The metadata URL is in the form:

protocol://apigee_sso_IP_DNS:port/saml/metadata

For example:

http://apigee_sso_IP_or_DNS:9099/saml/metadata

● Assertion Consumer Service URL: Can be used as the redirect URL back to
Edge after the user enters their IDP credentials, in the form:

protocol://apigee_sso_IP_DNS:port/saml/SSO/alias/apigee-saml-login-opdk

For example:

http://apigee_sso_IP_or_DNS:9099/saml/SSO/alias/apigee-saml-login-opdk

● Single logout URL: You can configure Apigee SSO to support single logout.
See Configure single sign-out from the Edge UI for more. The Apigee SSO
single logout URL has the form:

protocol://apigee_SSO_IP_DNS:port/saml/SingleLogout/alias/apigee-saml-login-opdk

For example:

http://1.2.3.4:9099/saml/SingleLogout/alias/apigee-saml-login-opdk

● SP entity ID (or Audience URI): For Apigee SSO:

apigee-saml-login-opdk

Note: See also the following Community article: Can I use an arbitrary string as
SP entity ID while configuring Apigee Edge SSO?.

Configuring Okta
To configure Okta:

1. Log in to Okta.
2. Select Applications and then select your SAML application.
3. Select the Assignments tab to add any users to the application. These users
will be able to log in to the Edge UI and make Edge API calls. However, you
must first add each user to an Edge organization and specify the user's role.
See Register new Edge users for more.
4. Select the Sign on tab to obtain the Identity Provider metadata URL. Store that
URL because you need it to configure Edge.
5. Select the General tab to configure the Okta application, as shown in the table
below:

● Single sign on URL: Specifies the redirect URL back to Edge for use after
the user enters their Okta credentials. This URL is in the form:

http://apigee_sso_IP_DNS:9099/saml/SSO/alias/apigee-saml-login-opdk

If you plan to enable TLS on apigee-sso:

https://apigee_sso_IP_DNS:9099/saml/SSO/alias/apigee-saml-login-opdk

Where apigee_sso_IP_DNS is the IP address or DNS name of the node hosting
apigee-sso.

Note that this URL is case sensitive and SSO must appear in caps.

If you have a load balancer in front of apigee-sso, then specify the IP
address or DNS name of apigee-sso as referenced through the load balancer.

● Use this for Recipient URL and Destination URL: Select this checkbox.

● Audience URI (SP Entity ID): Set to apigee-saml-login-opdk.

● Default RelayState: Can be left blank.

● Name ID format: Specify EmailAddress.

● Application username: Specify Okta username.

● Attribute Statements (Optional): Specify FirstName, LastName, and Email as
shown in the image below.
The SAML settings dialog box should appear as below once you are done:

Configure your LDAP IDP


This section describes the mechanisms with which you can use LDAP as an IDP with
Apigee Edge for Private Cloud.
Simple binding (direct binding)
With simple binding, the user supplies an RDN attribute. The RDN attribute can be a
username, email address, common name, or other type of user ID, depending on
what the primary identifier is. With that RDN attribute, Apigee SSO statically
constructs a distinguished name (DN). There are no partial matches with simple
binding.

The following shows the steps in a simple binding operation:

1. The user inputs an RDN attribute and password. For example, they might input
the username “alice”.
2. Apigee SSO constructs the DN; for example:
dn=uid=alice,ou=users,dc=test,dc=com
3. Apigee SSO uses the statically constructed DN and supplied password to
attempt a bind to the LDAP server.
4. If successful, Apigee SSO returns an OAuth token that the client can attach to
their requests to the Edge services.

Simple binding provides the most secure installation because no LDAP credentials or
other data are exposed through configuration to Apigee SSO. The administrator can
configure one or more DN patterns in Apigee SSO to be tried for a single username
input.
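The pattern-substitution step above can be sketched as follows (a hypothetical helper, not part of apigee-sso): each configured pattern has its {0} placeholder replaced by the RDN the user typed, and patterns separated by semicolons are tried in order.

```shell
# Build candidate DNs from an RDN and a semicolon-separated pattern list.
build_dns() {
  local rdn="$1" patterns="$2" pattern
  local IFS=';'
  for pattern in $patterns; do
    # Replace the {0} placeholder with the user-supplied RDN.
    printf '%s\n' "${pattern//'{0}'/$rdn}"
  done
}

build_dns alice "uid={0},ou=users,dc=test,dc=com;uid={0},ou=people,dc=example,dc=com"
# uid=alice,ou=users,dc=test,dc=com
# uid=alice,ou=people,dc=example,dc=com
```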

Search and bind (indirect binding)


With search and bind, the user supplies an RDN and password. Apigee SSO then
finds the user's DN. Search and bind allows for partial matches.

The search base is the top-most domain.

The following shows the steps in a search and bind operation:

1. The user inputs an RDN, such as a username or email address, plus their
password.
2. Apigee SSO performs a search using an LDAP filter and a set of known search
credentials.
3. If there is exactly one match, Apigee SSO retrieves the user's DN. If there
are zero matches or more than one match, Apigee SSO rejects the user.
4. Apigee SSO then attempts to bind the user's DN and supplied password
against the LDAP server.
5. The LDAP server performs the authentication.
6. If successful, Apigee SSO returns an OAuth token that the client can attach to
their requests to the Edge services.
Apigee recommends that you use a set of read-only admin credentials that you make
available to Apigee SSO to perform a search on the LDAP tree where the user
resides.
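The uniqueness rule in step 3 can be sketched as follows (a hypothetical helper, not part of apigee-sso): the search must return exactly one DN, otherwise the login is rejected before any bind is attempted.

```shell
# Given the newline-separated DNs returned by the LDAP search,
# emit the single DN to bind with, or reject the login.
select_unique_dn() {
  local results="$1"
  local count
  count=$(printf '%s' "$results" | grep -c .)   # count non-empty result lines
  if [ "$count" -eq 1 ]; then
    printf '%s\n' "$results"                    # proceed to bind with this DN
  else
    echo "rejected: $count matches" >&2
    return 1
  fi
}

select_unique_dn "uid=alice,ou=users,dc=example,dc=org"
# uid=alice,ou=users,dc=example,dc=org
```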

Install and configure Apigee SSO


To install and configure the Apigee SSO module with an external IDP, you must do
the following:

1. Create keys and certificates.


2. Set up the base Apigee SSO configuration: The base file must include the
properties that are common to all SSO configurations.
3. Add IDP-specific configuration properties: Use one of the following IDP-
specific blocks of configuration properties in your configuration file:
● SAML configuration properties
● LDAP direct binding configuration properties
● LDAP indirect binding configuration properties
4. Install Apigee SSO: Install the Apigee SSO module, and pass the configuration
file to the installer.

Each of these steps is described in the sections that follow.

NOTE: By default, the Apigee SSO module is accessible over HTTP on port 9099 of the node on
which it is installed. You can enable TLS on the Apigee SSO module. To do so, you have to create
a third set of TLS keys and certificates used by Tomcat to support TLS. See Configure Apigee
SSO for HTTPS access for more.

Create keys and certificates


This section describes how to create self-signed certificates, which might be fine
for a testing environment. For a production environment, however, you should use
certificates signed by a Certificate Authority (CA).

To create the key pair used for JWT signing and verification:

1. As a sudo user, create the following new directory:
sudo mkdir -p /opt/apigee/customer/application/apigee-sso/jwt-keys
2. Change to the new directory:
cd /opt/apigee/customer/application/apigee-sso/jwt-keys/
3. Generate the private key with the following command:
sudo openssl genrsa -out privkey.pem 2048
4. Generate the public key from the private key with the following command:
sudo openssl rsa -pubout -in privkey.pem -out pubkey.pem
5. Change the owner of the output PEM files to the "apigee" user:
sudo chown apigee:apigee *.pem
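Assuming openssl is on the PATH, you can optionally sanity-check a pair generated this way (a sketch that uses a throwaway key in a scratch directory so it is runnable anywhere; run the same diff against your real privkey.pem and pubkey.pem):

```shell
set -e
dir=$(mktemp -d)
# Generate a pair the same way as the steps above.
openssl genrsa -out "$dir/privkey.pem" 2048 2>/dev/null
openssl rsa -pubout -in "$dir/privkey.pem" -out "$dir/pubkey.pem" 2>/dev/null
# Re-derive the public key and confirm it matches the stored one.
openssl rsa -in "$dir/privkey.pem" -pubout 2>/dev/null | diff - "$dir/pubkey.pem" \
  && echo "keys match"
rm -rf "$dir"
```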

To create the key and self-signed cert, with no passphrase, for communicating with
the IDP:
1. As a sudo user, make a new directory:
sudo mkdir -p /opt/apigee/customer/application/apigee-sso/idp/
2. Change to the new directory:
cd /opt/apigee/customer/application/apigee-sso/idp/
3. Generate your private key with a passphrase:
sudo openssl genrsa -aes256 -out server.key 1024
4. Remove the passphrase from the key:
sudo openssl rsa -in server.key -out server.key
5. Generate a certificate signing request for the CA:
sudo openssl req -x509 -sha256 -new -key server.key -out server.csr
6. Generate a self-signed certificate with a 365-day expiry:
sudo openssl x509 -sha256 -days 365 -in server.csr -signkey server.key -out
selfsigned.crt
7. Change the owner of the key and crt files to the "apigee" user:
sudo chown apigee:apigee server.key
sudo chown apigee:apigee selfsigned.crt
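Assuming openssl is on the PATH, you can optionally confirm that a cert and key produced this way belong together by comparing their RSA moduli (a sketch; a throwaway pair is generated in a scratch directory only to keep it self-contained — run the two modulus commands against your real server.key and selfsigned.crt):

```shell
set -e
dir=$(mktemp -d)
# Throwaway key and self-signed cert (the -subj avoids interactive prompts).
openssl genrsa -out "$dir/server.key" 2048 2>/dev/null
openssl req -x509 -sha256 -new -key "$dir/server.key" \
  -subj "/CN=sso.example.com" -days 365 -out "$dir/selfsigned.crt" 2>/dev/null
# A cert signed with this key must carry the same RSA modulus.
key_mod=$(openssl rsa  -noout -modulus -in "$dir/server.key")
crt_mod=$(openssl x509 -noout -modulus -in "$dir/selfsigned.crt")
[ "$key_mod" = "$crt_mod" ] && echo "cert matches key"
rm -rf "$dir"
```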

To enable TLS on the Apigee SSO module, you set SSO_TOMCAT_PROFILE to
SSL_TERMINATION or to SSL_PROXY. For SSL_TERMINATION mode, you cannot use a
self-signed certificate; you must generate a cert from a CA. See Configure Apigee
SSO for HTTPS access for more.

Apigee SSO configuration settings


Before you can install the Apigee SSO module, you must define a configuration file.
You pass this configuration file to the installer when you install the Apigee SSO
module.

The configuration file has the following form:

IP1=hostname_or_IP_of_apigee_SSO
IP2=hostname_or_IP_of_apigee_SSO

## Management Server configuration.


# Management Server IP address and port
MSIP=$IP1
MGMT_PORT=8080
# Edge sys admin username and password as set when you installed Edge.
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=Secret123
# Set the protocol for the Edge management API. Default is http.
# Set to https if you enabled TLS on the management API.
MS_SCHEME=http

## Postgres configuration.
# Postgres IP address and port
PG_HOST=$IP1
PG_PORT=5432
# Postgres username and password as set when you installed Edge.
# If these credentials change, they must be updated and setup rerun.
PG_USER=apigee
PG_PWD=postgres

## Apigee SSO module configuration.


# Choose either "saml" or "ldap".
SSO_PROFILE="[saml|ldap]"
# Externally accessible IP or DNS name of apigee-sso.
SSO_PUBLIC_URL_HOSTNAME=$IP2
SSO_PG_DB_NAME=database_name_for_sso

# Default port is 9099. If changing, set both properties to the same value.
SSO_PUBLIC_URL_PORT=9099
SSO_TOMCAT_PORT=9099
# Uncomment the following line to set specific SSL cipher using OpenSSL syntax
# SSL_CIPHERS=HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!kRSA
# Set Tomcat TLS mode to DEFAULT to use HTTP access to apigee-sso.
SSO_TOMCAT_PROFILE=DEFAULT
SSO_PUBLIC_URL_SCHEME=http

# SSO admin user name. The default is ssoadmin.


SSO_ADMIN_NAME=ssoadmin
# SSO admin password using uppercase, lowercase, number, and special chars.
SSO_ADMIN_SECRET=Secret123

# Enable the ability to sign an authentication request with SAML SSO.


SSO_SAML_SIGN_REQUEST=y

# Paths to the JWT signing and verification keys created in
# Create keys and certificates above.
SSO_JWT_SIGNING_KEY_FILEPATH=/opt/apigee/customer/application/apigee-sso/jwt-keys/privkey.pem
SSO_JWT_VERIFICATION_KEY_FILEPATH=/opt/apigee/customer/application/apigee-sso/jwt-keys/pubkey.pem

###########################################################
# Define External IDP #
# Use one of the following configuration blocks to #
# define your IDP settings: #
# - SAML configuration properties #
# - LDAP Direct Binding configuration properties #
# - LDAP Indirect Binding configuration properties #
###########################################################

INSERT_IDP_CONFIG_BLOCK_HERE (SAML, LDAP direct, or LDAP indirect, below)

# Configure an SMTP server so that the Apigee SSO module can send emails to users.
SKIP_SMTP=n
SMTPHOST=smtp.example.com
# Omit SMTPUSER for no username.
SMTPUSER=smtp@example.com
# Omit SMTPPASSWORD for no password.
SMTPPASSWORD=smtppwd
SMTPSSL=n
SMTPPORT=25
# The address from which emails are sent
SMTPMAILFROM="My Company <myco@company.com>"

SAML SSO configuration properties

If you are using SAML for your IDP, use the following block of configuration
properties in your configuration file (defined above):

## SAML Configuration Properties


# Insert this section into your base configuration file, as described previously.

# Name of SAML IDP. For example, okta or adfs.


SSO_SAML_IDP_NAME=okta
# Text displayed on the SSO sign-in page after being redirected by either the New or
Classic Edge UI for SAML logins.
# Note: Installing SSO does not depend on the Edge UI or which version of the UI you
are using.
SSO_SAML_IDP_LOGIN_TEXT="Please log in to your IDP"

# The metadata URL from your IDP.


# If you have a metadata file, and not a URL,
# see "Specifying a metadata file instead of a URL" below.
SSO_SAML_IDP_METADATA_URL=https://dev-343434.oktapreview.com/app/exkar20cl/sso/saml/metadata

# Determines whether to skip TLS validation for the URL specified


# by SSO_SAML_IDP_METADATA_URL.
# This is necessary if the URL uses a self-signed certificate.
# The default value is "n".
SSO_SAML_IDPMETAURL_SKIPSSLVALIDATION=n

# SAML service provider key and cert from Create the TLS keys and certificates
above.
SSO_SAML_SERVICE_PROVIDER_KEY=/opt/apigee/customer/application/apigee-sso/saml/server.key
SSO_SAML_SERVICE_PROVIDER_CERTIFICATE=/opt/apigee/customer/application/apigee-sso/saml/selfsigned.crt

# The passphrase used when you created the SAML cert and key.
# The section "Create the TLS keys and certificates" above removes the passphrase,
# but this property is available if you require a passphrase.
# SSO_SAML_SERVICE_PROVIDER_PASSWORD=samlSP123

# Requires that SAML responses be signed by your IDP.


# This property is enabled by default since release 4.50.00.05.
SSO_SAML_SIGNED_ASSERTIONS=y

LDAP Direct Binding configuration properties

If you are using LDAP direct binding for your IDP, use the following block of
configuration properties in your configuration file, as shown in the example above:

## LDAP Direct Binding configuration


# Insert this section into your base configuration file, as described previously.

# The type of LDAP profile; in this case, "direct"


SSO_LDAP_PROFILE=direct

# The base URL to which SSO connects; in the form: "ldap://hostname_or_IP:port"
SSO_LDAP_BASE_URL=LDAP_base_URL

# Attribute name used by the LDAP server to refer to the user's email address; for
example, "mail"
SSO_LDAP_MAIL_ATTRIBUTE=LDAP_email_attribute

# Pattern of the user's DN; for example: cn={0},ou=people,dc=example,dc=org
# If there is more than one pattern, separate with semicolons (";"); for example:
# cn={0},ou=people,dc=example,dc=org;cn={0},ou=people,dc=example,dc=com
SSO_LDAP_USER_DN_PATTERN=LDAP_DN_pattern

# Set to true to allow users to log in with non-email usernames


SSO_ALLOW_NON_EMAIL_USERNAME=false
LDAP Indirect Binding configuration properties

If you are using LDAP indirect binding for your IDP, use the following block of
configuration properties in your configuration file, as shown in the example above:

## LDAP Indirect Binding configuration


# Insert this section into your base configuration file, as described previously.

# Type of LDAP profile; in this case, "indirect"


SSO_LDAP_PROFILE=indirect

# Base URL to which SSO connects; in the form: "ldap://hostname_or_IP:port"
SSO_LDAP_BASE_URL=LDAP_base_URL

# DN and password of the LDAP server's admin user


SSO_LDAP_ADMIN_USER_DN=LDAP_admin_DN
SSO_LDAP_ADMIN_PWD=LDAP_admin_password

# LDAP search base; for example, "dc=example,dc=org"


SSO_LDAP_SEARCH_BASE=LDAP_search_base

# LDAP search filter; for example, "cn={0}"


SSO_LDAP_SEARCH_FILTER=LDAP_search_filter

# Attribute name used by the LDAP server to refer to the user's email address; for
example, "mail"
SSO_LDAP_MAIL_ATTRIBUTE=LDAP_email_attribute

Install the Apigee SSO module


After you create the keys and set up your configuration file, you can install the Apigee
SSO module.

To install the Apigee SSO module:

1. Log in to the Management Server node. That node should already have
apigee-service installed, as described in Install the Edge apigee-setup
utility.
Alternatively, you can install the Apigee SSO module on a different node.
However, that node must be able to access the Management Server over port
8080.
2. Install and configure apigee-sso by executing the following command:
/opt/apigee/apigee-setup/bin/setup.sh -p sso -f configFile
Where configFile is the configuration file that you defined above.
3. Install the apigee-ssoadminapi.sh utility used to manage admin and
machine users for the apigee-sso module:
/opt/apigee/apigee-service/bin/apigee-service apigee-ssoadminapi install
4. Log out of the shell, and then log back in again to add the
apigee-ssoadminapi.sh utility to your path.

Specify a metadata file instead of a URL


If your IDP does not support an HTTP/HTTPS metadata URL, you can use a
metadata XML file to configure Apigee SSO.

To use a metadata file instead of a URL to configure Apigee SSO:

1. Copy the contents of the metadata XML from your IDP to a file on the Apigee
SSO node. For example, copy the XML to:
/opt/apigee/customer/application/apigee-sso/saml/metadata.xml
NOTE: The file must be named metadata.xml.
2. Change ownership of the XML file to the "apigee" user:
chown apigee:apigee
/opt/apigee/customer/application/apigee-sso/saml/metadata.xml
3. Set the value of SSO_SAML_IDP_METADATA_URL to the absolute file path:
SSO_SAML_IDP_METADATA_URL=file:///opt/apigee/customer/application/apigee-sso/saml/metadata.xml
You must prefix the file path with "file://", followed by the absolute path
from root (/).
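The "file://" rule can be sketched as follows: the value is the file:// scheme plus an absolute path, so stripping the prefix must leave a path that starts at / (hence the three slashes in a row).

```shell
# Strip the file:// scheme to recover the absolute path the installer will read.
url="file:///opt/apigee/customer/application/apigee-sso/saml/metadata.xml"
path="${url#file://}"
echo "$path"
# /opt/apigee/customer/application/apigee-sso/saml/metadata.xml
```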

Enable an external IDP on the New Edge UI

NOTE: This section applies to using an external IDP with the new Edge UI. It is not intended for
users of the Classic UI.

After installing the Apigee SSO module, you must configure the New Edge UI to
support your external IDP.

Note: Installing SSO does not depend on whether you are using the Classic UI or the New UI.
However, the New Edge UI requires that SSO be installed.

After you enable an external IDP on the Edge UI, the Edge UI uses OAuth to connect
to the Management Server. You do not have to worry about expiring OAuth tokens
between the Edge UI and the Edge Management Server. Expired tokens are
automatically refreshed.
NOTE: Automatic refresh of tokens only occurs for the token used by the Edge UI to make calls to
the Management Server. Tokens generated for users when they log in to the Edge UI expire in
one hour, and the refresh token expires in 12 hours.

You must create a configuration file to set up the Edge UI. The file has the following form:

IP1=hostname_or_IP_of_apigee_SSO
# Comma separated list of URLs for the Edge UI,
# in the format: http_or_https://IP_or_hostname_of_UI:3001.
# You can have multiple URLs when you have multiple installations
# of the Edge UI or you have multiple data centers.
EDGEUI_PUBLIC_URIS=http_or_https://IP_or_hostname_of_UI:3001

# Publicly accessible URLs for Edge UI.


EDGEUI_SSO_REGISTERD_PUBLIC_URIS=$EDGEUI_PUBLIC_URIS

# Required variables
# Default is "n" to disable external IDP support.
EDGEUI_SSO_ENABLED=y

# Information about apigee-sso.


# Externally accessible IP or DNS of apigee-sso.
SSO_PUBLIC_URL_HOSTNAME=$IP1
SSO_PUBLIC_URL_PORT=9099
# Default is http. Set to https if you enabled TLS on apigee-sso.
SSO_PUBLIC_URL_SCHEME=http

# SSO admin credentials as set when you installed apigee-sso.


SSO_ADMIN_NAME=ssoadmin
SSO_ADMIN_SECRET=Secret123

# The name of the OAuth client used to connect to apigee-sso.


# The default client name is edgeui.
EDGEUI_SSO_CLIENT_NAME=edgeui
# OAuth client password using uppercase, lowercase, number, and special chars.
EDGEUI_SSO_CLIENT_SECRET=ssoClient123

# If set, the existing EDGEUI client is deleted and a new one is created.
# The default value is "n".
# Set to "y" when you configure your external IDP and change the value of
# any of the EDGEUI_* properties.
EDGEUI_SSO_CLIENT_OVERWRITE=y
To configure the Edge UI to enable support for your external IDP:

1. Execute the following command:
/opt/apigee/apigee-service/bin/apigee-service edge-management-ui
configure-sso -f configFile
For the Classic UI, use the edge-ui component rather than edge-management-ui.

To later change these values, update the configuration file and execute the
command again.

Register new Edge users


After you enable an external IDP, the process of adding new users to the Edge UI is:

1. The user registers with the SAML or LDAP identity provider.


2. An Edge organization administrator adds the user to an Edge organization and
assigns the user a role, as described in Creating global users.
● With the UI: See Creating a global user through the Edge UI
● With the Edge management API: See Creating a global user through
the Edge API

Configure Apigee SSO for HTTPS access
Install and configure Apigee SSO describes how to install and configure the Apigee
SSO module to use HTTP on port 9099, as specified by the following property in the
Edge configuration file:

SSO_TOMCAT_PROFILE=DEFAULT

Alternatively, you can set SSO_TOMCAT_PROFILE to one of the following values to


enable HTTPS access:

● SSL_PROXY: Configures apigee-sso, the Apigee SSO module, in proxy


mode, meaning you have installed a load balancer in front of apigee-sso
and terminated TLS on the load balancer. You then specify the port used on
apigee-sso for requests from the load balancer.
● SSL_TERMINATION: Enables TLS access to apigee-sso on the port of your
choice. You must specify a keystore for this mode that contains a certificate
signed by a CA. You cannot use a self-signed certificate.
You can choose to enable HTTPS at the time you initially install and configure
apigee-sso, or you can enable it later.

Enabling HTTPS access to apigee-sso using either mode disables HTTP access.
That is, you cannot access apigee-sso using both HTTP and HTTPS concurrently.

NOTE: If you configure HTTPS access to apigee-sso, ensure that you update any URLs used by
the Edge UI and by your IDP to use HTTPS. Also, if you choose to enable HTTPS on a port other
than 9099, make sure you update the UI and IDP to use the correct port number.

Enable SSL_PROXY mode


In SSL_PROXY mode, your system uses a load balancer in front of the Apigee SSO
module and terminates TLS on the load balancer. In the following figure, the load
balancer terminates TLS on port 443, and then forwards requests to the Apigee SSO
module on port 9099:

In this configuration, you trust the connection from the load balancer to the Apigee
SSO module, so there is no need to use TLS for that connection. However, external
entities, such as the IDP, now must access the Apigee SSO module on port 443, not
on the unprotected port of 9099.

The reason to configure the Apigee SSO module in SSL_PROXY mode is that the
Apigee SSO module auto-generates redirect URLs used externally by the IDP as part
of the authentication process. Therefore, these redirect URLs must contain the
external port number on the load balancer, 443 in this example, and not the internal
port on the Apigee SSO module, 9099.

NOTE: You do not have to create a TLS cert and key for SSL_PROXY mode because the
connection from the load balancer to the Apigee SSO module uses HTTP.

To configure the Apigee SSO module for SSL_PROXY mode:

1. Add the following settings to your config file:

# Enable SSL_PROXY mode.
SSO_TOMCAT_PROFILE=SSL_PROXY

# Specify the apigee-sso port, typically between 1025 and 65535.
# Typically ports 1024 and below require root access by apigee-sso.
# The default is 9099.
SSO_TOMCAT_PORT=9099

# Specify the port number on the load balancer for terminating TLS.
# This port number is necessary for apigee-sso to auto-generate redirect URLs.
SSO_TOMCAT_PROXY_PORT=443
SSO_PUBLIC_URL_PORT=443

# Set public access scheme of apigee-sso to https.
SSO_PUBLIC_URL_SCHEME=https

2. Configure the Apigee SSO module:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso setup -f configFile
3. Update your IDP configuration to now make an HTTPS request on port 443 of
the load balancer to access Apigee SSO. For more information, see one of the
following:
● Configure your SAML IDP
● Configure your LDAP IDP
4. Update your Edge UI configuration for HTTPS by setting the following
properties in the config file:

SSO_PUBLIC_URL_PORT=443
SSO_PUBLIC_URL_SCHEME=https

5. Then update the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-management-ui
configure-sso -f configFile
Use the edge-ui component for the Classic UI.
6. If you installed the Apigee Developer Services portal (or simply, the portal),
update it to use HTTPS to access Apigee SSO. For more information, see
Configure the portal to use external IDPs.

See Enable an external IDP on the Edge UI for more information.


Enable SSL_TERMINATION mode
For SSL_TERMINATION mode, you must:

● Generate a TLS cert and key and store them in a keystore file. You cannot use
a self-signed certificate. You must generate a cert from a CA.
● Update the configuration settings for apigee-sso.

To create a keystore file from your cert and key:

1. Create a directory for the JKS file:
sudo mkdir -p /opt/apigee/customer/application/apigee-sso/tomcat-ssl/
2. Change to the new directory:
cd /opt/apigee/customer/application/apigee-sso/tomcat-ssl/
3. Create a JKS file containing the cert and key. You must specify a keystore for
this mode that contains a cert signed by a CA. You cannot use a self-signed
cert. For an example of creating a JKS file, see Configuring TLS/SSL for Edge
On Premises.
4. Make the JKS file owned by the "apigee" user:
sudo chown -R apigee:apigee
/opt/apigee/customer/application/apigee-sso/tomcat-ssl
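As a hedged sketch of one common route to a JKS file (cert+key → PKCS12 → JKS), assuming openssl is available and a JDK provides keytool; the alias sso and the password keystorePassword are illustrative and must match what you put in the config file. A throwaway self-signed pair is generated here only to keep the sketch runnable; substitute your CA-signed cert and key.

```shell
set -e
dir=$(mktemp -d)
# Throwaway key and cert; replace with your CA-signed server.crt/server.key.
openssl genrsa -out "$dir/server.key" 2048 2>/dev/null
openssl req -x509 -new -key "$dir/server.key" -subj "/CN=sso.example.com" \
  -days 365 -out "$dir/server.crt" 2>/dev/null
# Bundle cert and key into a PKCS12 keystore.
openssl pkcs12 -export -in "$dir/server.crt" -inkey "$dir/server.key" \
  -name sso -password pass:keystorePassword -out "$dir/sso.p12"
[ -s "$dir/sso.p12" ] && echo "created sso.p12"
# Then convert PKCS12 to JKS (requires the JDK keytool):
# keytool -importkeystore -srckeystore "$dir/sso.p12" -srcstoretype PKCS12 \
#   -srcstorepass keystorePassword -destkeystore keystore.jks \
#   -deststorepass keystorePassword -alias sso
rm -rf "$dir"
```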

To configure the Apigee SSO module:

1. Add the following settings to your config file:

# Enable SSL_TERMINATION mode.
SSO_TOMCAT_PROFILE=SSL_TERMINATION

# Specify the path to the keystore file.
SSO_TOMCAT_KEYSTORE_FILEPATH=/opt/apigee/customer/application/apigee-sso/tomcat-ssl/keystore.jks

SSO_TOMCAT_KEYSTORE_ALIAS=sso

# The password specified when you created the keystore.
SSO_TOMCAT_KEYSTORE_PASSWORD=keystorePassword

# Specify the HTTPS port number between 1025 and 65535.
# Typically ports 1024 and below require root access by apigee-sso.
# The default is 9099.
SSO_TOMCAT_PORT=9443
SSO_PUBLIC_URL_PORT=9443

# Set public access scheme of apigee-sso to https.
SSO_PUBLIC_URL_SCHEME=https

2. Configure the Apigee SSO module:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso setup -f configFile
3. Update your IDP configuration to now make an HTTPS request on port 9443
to access Apigee SSO. Be sure that no other service is using this port.
For more information, see the following:
● Configure your SAML IDP
● Configure your LDAP IDP
4. Update your Edge UI configuration for HTTPS by setting the following
properties:

SSO_PUBLIC_URL_PORT=9443
SSO_PUBLIC_URL_SCHEME=https

5. If you installed the Developer Services portal, update it to use HTTPS to
access Apigee SSO. For more, see Configure the portal to use external IDPs.

Set SSO_TOMCAT_PROXY_PORT when using SSL_TERMINATION mode

You might have a load balancer in front of the Apigee SSO module that terminates
TLS on the load balancer but also enables TLS between the load balancer and
Apigee SSO. In the figure above for SSL_PROXY mode, this means the connection
from the load balancer to Apigee SSO uses TLS.

In this scenario, you configure TLS on Apigee SSO just as you did above for
SSL_TERMINATION mode. However, if the load balancer uses a different TLS port
number than Apigee SSO uses for TLS, then you must also specify the
SSO_TOMCAT_PROXY_PORT property in the config file. For example:

● The load balancer terminates TLS on port 443


● Apigee SSO terminates TLS on port 9443

Make sure to include the following setting in the config file:

# Specify the port number on the load balancer for terminating TLS.
# This port number is necessary for apigee-sso to generate redirect URLs.
SSO_TOMCAT_PROXY_PORT=443
SSO_PUBLIC_URL_PORT=443

Then configure the IDP and Edge UI to make HTTPS requests on port 443.

Disable Basic authentication on Edge


After you have enabled SAML or LDAP on Edge, you can disable Basic
authentication. However, before you disable Basic authentication:

● Make sure you have added all Edge users, including system administrators, to
your IDP.
● Make sure you have thoroughly tested your IDP authentication on the Edge UI
and Edge management API.
● If you are using the Apigee Developer Services portal (or simply, the portal),
configure and test your external IDP on the portal to ensure that the portal can
connect to Edge. See Configuring the portal for external IDPs.

View the current security profile


You can view the Edge security profile to determine whether Basic authentication
and an external IDP are currently enabled. Use the following Edge management API
call on the Edge Management Server to view the current security profile used by
Edge:

curl -H "accept:application/xml" http://localhost:8080/v1/securityprofile -u sysAdminEmail:pWord

If you have not yet configured an external IDP, the response is as shown below,
meaning Basic authentication is enabled:

<SecurityProfile enabled="true" name="securityprofile">


<UserAccessControl enabled="true">
</UserAccessControl>
</SecurityProfile>

If you have already enabled an external IDP, the <SSOServer> element should
appear in the response:

<SecurityProfile enabled="true" name="securityprofile">


<UserAccessControl enabled="true">
<SSOServer>
<BasicAuthEnabled>true</BasicAuthEnabled>
<PublicKeyEndPoint>/token_key</PublicKeyEndPoint>
<ServerUrl>http://35.197.37.220:9099</ServerUrl>
</SSOServer>
</UserAccessControl>
</SecurityProfile>

Notice that the version with an external IDP enabled also shows
<BasicAuthEnabled>true</BasicAuthEnabled>, which means that Basic
authentication is still enabled.

Disable Basic authentication


Use the following Edge management API call on the Edge Management Server to
disable Basic authentication.

CAUTION: You can't revert or undo this action. After you disable Basic authentication, to enable
authentication again, see Re-enable Basic authentication.

To disable basic authentication, you pass the XML object returned in the previous
section as the payload. The only difference is that you set <BasicAuthEnabled>
to false, as the following example shows:

curl -H "Content-Type: application/xml" \
http://localhost:8080/v1/securityprofile -u sysAdminEmail:pWord -d \
'<SecurityProfile enabled="true" name="securityprofile">
<UserAccessControl enabled="true">
<SSOServer>
<BasicAuthEnabled>false</BasicAuthEnabled>
<PublicKeyEndPoint>/token_key</PublicKeyEndPoint>
<ServerUrl>http://35.197.37.220:9099</ServerUrl>
</SSOServer>
</UserAccessControl>
</SecurityProfile>'

After you disable Basic authentication, any Edge management API call that passes
Basic authentication credentials returns the following error:

<Error>
<Code>security.SecurityProfileBasicAuthDisabled</Code>
<Message>Basic Authentication scheme not allowed</Message>
<Contexts/>
</Error>
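Automation that calls the management API can watch for this error code and fall back to OAuth2 tokens. A minimal sketch, using the example error body above:

```shell
# Minimal sketch: detect the "Basic auth disabled" error code in a
# management API response body.
is_basic_auth_disabled() {
  # $1 = response body
  echo "$1" | grep -q "security.SecurityProfileBasicAuthDisabled"
}

# Example error body from this section; a real script would capture the
# response from curl.
RESPONSE='<Error>
<Code>security.SecurityProfileBasicAuthDisabled</Code>
<Message>Basic Authentication scheme not allowed</Message>
<Contexts/>
</Error>'

if is_basic_auth_disabled "$RESPONSE"; then
  echo "Basic auth is disabled; use an OAuth2 token instead"
fi
```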

Re-enable Basic authentication


If for any reason you have to re-enable Basic authentication, you must perform the
following steps:

CAUTION: As part of re-enabling Basic authentication, you must temporarily disable all
authentication on Edge, including an external IDP authentication.

1. Log in to any Edge ZooKeeper node.
2. Run the following bash script to turn off all security:
CAUTION: This step disables all authentication on Edge, including SAML.
#!/bin/bash
/opt/apigee/apigee-zookeeper/bin/zkCli.sh -server localhost:2181 <<EOF
set /system/securityprofile <SecurityProfile></SecurityProfile>
quit
EOF
3. You will see output in the form:
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled
WATCHER::
WatchedEvent state:SyncConnected
type:None path:null
[zk: localhost:2181(CONNECTED) 0] set /system/securityprofile <SecurityProfile></SecurityProfile>
cZxid = 0x89
...
[zk: localhost:2181(CONNECTED) 1] quit
Quitting...
4. Re-enable Basic authentication and the external IDP authentication:
curl -H "Content-Type: application/xml" http://localhost:8080/v1/securityprofile \
-u sysAdminEmail:pWord -d '<SecurityProfile enabled="true" name="securityprofile">
<UserAccessControl enabled="true">
<SSOServer>
<BasicAuthEnabled>true</BasicAuthEnabled>
<PublicKeyEndPoint>/token_key</PublicKeyEndPoint>
<ServerUrl>http://35.197.37.220:9099</ServerUrl>
</SSOServer>
</UserAccessControl>
</SecurityProfile>'

You can now use Basic authentication again. Note that Basic authentication does
not work when the New Edge experience is enabled.

Configure the portal to use external IDPs

Apigee Developer Services portal (or simply, the portal) acts as a client of Apigee
Edge. That means the portal does not function as a stand-alone system. Instead,
much of the information used by the portal is actually stored on Edge. When
necessary, the portal makes a request to retrieve information from Edge or to send
information to Edge.

The portal is always associated with a single Edge organization. When you configure
the portal, you can specify the basic authentication credentials (username and
password) for an account in the organization that the portal uses to communicate
with Edge.

If you choose to enable an external IDP such as SAML or LDAP for Edge
authentication, you can then configure the portal to use that authentication when
making requests to Edge. Configuring the portal to use an external IDP automatically
creates a new machine user account in the Edge organization that the portal then
uses to make requests to Edge. For more on machine users, see Automate tasks for
external IDPs.

External IDP support for the portal requires that you have already installed and
configured the Apigee SSO module on the Edge Management Server node. The
general process for enabling an external IDP for the portal is as follows:

1. Install the Apigee SSO module, as described in Install Apigee SSO for high
availability.NOTE: Basic authentication must still be enabled on Edge to install the
portal. Do not disable Basic authentication on Edge until after you have configured and
tested the portal with an external IDP.
2. Install the portal and ensure that your installation is working properly. See
Install the portal.
3. Configure SAML or LDAP on the portal, as described in this section.
4. (Optional) Disable Basic authentication on Edge, as described in Disable Basic
authentication on Edge.

Create a machine user for the portal

When an external IDP is enabled, Edge supports automated OAuth2 token generation
through the use of machine users. A machine user can obtain OAuth2 tokens without
having to specify a passcode. That means you can completely automate the process
of obtaining and refreshing OAuth2 tokens.

The IDP configuration process for the portal automatically creates a machine user in
the organization associated with the portal. The portal then uses this machine user
account to connect to Edge. For more on machine users, see Automate tasks for
external IDPs.

About authentication for portal developer accounts

When you configure the portal to use an external IDP, you enable the portal to use
either SAML or LDAP to authenticate with Edge so that the portal can make requests
to Edge. However, the portal also supports a type of user called developers.
Developers make up the community of users that build apps by using your APIs. App
developers use the portal to learn about your APIs, to register apps that use your
APIs, to interact with the developer community, and to view statistical information
about their app usage on a dashboard.

When a developer logs in to the portal, it is the portal that is responsible for
authenticating the developer and for enforcing role-based permissions. The portal
continues to use Basic authentication with developers even after you enable IDP
authentication between the portal and Edge. For more information, see
Communicating between the portal and Edge.

It is possible to also configure the portal to use SAML or LDAP to authenticate
developers. For an example on enabling SAML using third-party Drupal modules, see
SSO Integration via SAML with Developer Portal.

IDP configuration file for the portal

To configure an external IDP for the portal, you must create a configuration file that
defines the portal's settings.

The following example shows a portal configuration file with IDP support:

# IP address of Edge Management Server and the node on which the Apigee SSO module is installed.

IP1=22.222.22.222

# URL of Edge management API.

MGMT_URL=http://$IP1:8080/v1

# Organization associated with the portal.

EDGE_ORG=myorg

# Information about the Apigee SSO module (apigee-sso).

# Externally accessible IP or DNS of apigee-sso.

SSO_PUBLIC_URL_HOSTNAME=$IP1
SSO_PUBLIC_URL_PORT=9099

# Default is http. Set to https if you enabled TLS on apigee-sso.

SSO_PUBLIC_URL_SCHEME=http

# SSO admin credentials as set when you installed apigee-sso.

SSO_ADMIN_NAME=ssoadmin

SSO_ADMIN_SECRET=Secret123

# Enables or disables external IDP support.

# Default is "n", which disables external IDP support.

# Change it to "y" to enable external IDP support.

DEVPORTAL_SSO_ENABLED=y

# The name of the OAuth2 client used to connect to apigee-sso.

# The default client name is portalcli.

PORTALCLI_SSO_CLIENT_NAME=portalcli

# OAuth client password using uppercase, lowercase, number, and special characters.

PORTALCLI_SSO_CLIENT_SECRET=Abcdefg@1

# Email address and user info for the machine user created in the Edge org specified

# above by EDGE_ORG.

# This account is used by the portal to make requests to Edge.

# Add this email as an org admin before configuring the portal to use an external IDP.

DEVPORTAL_ADMIN_EMAIL=DevPortal_SAML@google.com

DEVPORTAL_ADMIN_FIRSTNAME=DevPortal

DEVPORTAL_ADMIN_LASTNAME=SAMLAdmin
DEVPORTAL_ADMIN_PWD=Abcdefg@1

# If set, the existing portal OAuth client is deleted and a new one is created.

# The default value is "n".

# Set to "y" when you configure the external IDP and change the value of

# any of the PORTALCLI_* properties.

PORTALCLI_SSO_CLIENT_OVERWRITE=y

To enable external IDP support on the portal:

1. In the Edge UI, add the machine user specified by DEVPORTAL_ADMIN_EMAIL
to the organization associated with the portal as an Organization
Administrator.
NOTE: The machine user does not exist yet but is created automatically in the next step.
2. Execute the following command to configure the external IDP on the portal:
/opt/apigee/apigee-service/bin/apigee-service apigee-drupal-devportal configure-sso -f configFile
Where configFile is the configuration file described above.
3. Log in to the portal as a portal admin.
4. In the main Drupal menu, select Configuration > Dev Portal. The portal
configuration screen appears, including the external IDP settings:
NOTE: The title and other aspects of this dialog specify SAML, but the settings apply to
any other supported IDP (such as LDAP).
Note the following:
● The box for This org is SAML-enabled is checked
● The endpoint for the Apigee SSO module is filled in
● The API key and Consumer secret fields for the portal OAuth client are filled in
● The message Connection Successful appears under the Test Connection button.
5. Click the Test Connection button to retest the connection at any time.

To later change these values, update the configuration file and execute this
procedure again.

Disable an external IDP on the portal


If you choose to disable your external IDP for communications between the portal
and Edge, the portal will no longer be able to make requests to Edge. Developers can
still log in to the portal but will not be able to view products or create apps.

CAUTION: If you disable your external IDP, you should then reconfigure the portal to either use
SAML or LDAP or, if Edge is still configured to support Basic authentication, configure the portal
to communicate with Edge using Basic authentication.

For more information on using Basic authentication, see Communicating between the portal and
Edge.

To disable external IDP authentication on the portal:

1. Open the configuration file that you used previously to enable the external IDP.
2. Set the value of the DEVPORTAL_SSO_ENABLED property to n, as the
following example shows:
DEVPORTAL_SSO_ENABLED=n
3. Configure the portal by executing the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-drupal-devportal configure-sso -f configFile

Install Apigee SSO for high availability


You install multiple instances of Apigee SSO for high availability in two scenarios:

● In a single data center environment, install two Apigee SSO instances to
create a high availability environment, meaning the system continues to
operate if one of the Apigee SSO modules goes down.
● In an environment with two data centers, install Apigee SSO in both data
centers so that the system continues to operate if one of the Apigee SSO
modules goes down.

Install two Apigee SSO modules in the same data center


You deploy two instances of Apigee SSO, on different nodes, in a single data center
to support high availability. In this scenario:

● Both instances of Apigee SSO must be connected to the same Postgres
server. Apigee recommends using a dedicated Postgres server for Apigee
SSO and not the same Postgres server that you installed with Edge.
● Both instances of Apigee SSO must use the same JWT key pair as specified
by the SSO_JWT_SIGNING_KEY_FILEPATH and
SSO_JWT_VERIFICATION_KEY_FILEPATH properties in the configuration
file. See Install and configure Apigee SSO for more on setting these
properties.
● You require a load balancer in front of the two instances of Apigee SSO:
● The load balancer must support application-generated cookie
stickiness, and the session cookie must be named JSESSIONID.
● Configure the load balancer to perform a TCP or HTTP health check on
Apigee SSO. For TCP, use the URL of Apigee SSO:
http_or_https://edge_sso_IP_DNS:9099
Specify the port as set by Apigee SSO. Port 9099 is the default.
For HTTP, include /healthz:
http_or_https://edge_sso_IP_DNS:9099/healthz
● Some load balancer settings depend on whether you enabled HTTPS
on Apigee SSO. See the following sections for more information.
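The HTTP health-check URL can be assembled from the scheme, host, and port of each instance. A minimal sketch; the host name is illustrative, and the curl probe is shown as a comment because it requires a live instance:

```shell
# Build the health-check URL for an Apigee SSO instance.
# edge_sso.example.com is an illustrative host name.
build_healthz_url() {
  # $1 = scheme (http or https), $2 = host, $3 = port
  echo "$1://$2:$3/healthz"
}

URL=$(build_healthz_url http edge_sso.example.com 9099)
echo "$URL"

# A manual HTTP probe against a live instance would look like:
# curl -s -o /dev/null -w "%{http_code}" "$URL"
```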

HTTP access to Apigee SSO

If you are using HTTP access to Apigee SSO, then configure the load balancer to:

● Use HTTP mode to connect to Apigee SSO.
● Listen on the same port as Apigee SSO.
By default, Apigee SSO listens for HTTP requests on port 9099. Optionally, you
can use SSO_TOMCAT_PORT to set the Apigee SSO port. If you used
SSO_TOMCAT_PORT to change the Apigee SSO port from the default, ensure
that the load balancer listens on that port.

For example, on each Apigee SSO instance you set the port to 9033 by adding the
following to the config file:

SSO_TOMCAT_PORT=9033

You then configure the load balancer to listen on port 9033 and forward requests
to an Apigee SSO instance on port 9033. The public URL of Apigee SSO in this scenario
is:

http://LB_DNS_NAME:9033

HTTPS access to Apigee SSO

You can configure the Apigee SSO instances to use HTTPS. In this scenario, follow
the steps in Configure Apigee SSO for HTTPS access. As part of the process of
enabling HTTPS, you set SSO_TOMCAT_PROFILE in the Apigee SSO config file as
shown below:

SSO_TOMCAT_PROFILE=SSL_TERMINATION

You can also optionally set the port used by Apigee SSO for HTTPS access:

SSO_TOMCAT_PORT=9443
Then configure the load balancer to:

● Use TCP mode, not HTTP mode, to connect to Apigee SSO.
● Listen on the same port as Apigee SSO as defined by SSO_TOMCAT_PORT.

You then configure the load balancer to forward requests to an Apigee SSO instance
on port 9443. The public URL of Apigee SSO in this scenario is:

https://LB_DNS_NAME:9443

Install Apigee SSO in multiple data centers


In a multiple data center environment, you install an Apigee SSO instance in each
data center. One Apigee SSO instance then handles all traffic. If that Apigee SSO
instance goes down you can then switch to the second Apigee SSO instance.

Before you install Apigee SSO in two data centers, you need the following:

● The IP address or domain name of the Master Postgres server.
In a multiple data center environment, you typically install one Postgres server
In a multiple data center environment, you typically install one Postgres server
in each data center and configure them in Master-Standby replication mode.
For this example, data center 1 contains the Master Postgres server and data
center 2 contains the Standby. For more information, see Set up Master-
Standby Replication for Postgres.
● A single DNS entry that points to one Apigee SSO instance. For example, you
create a DNS entry in the form below that points to the Apigee SSO instance in
data center 1:
my-sso.domain.com => apigee-sso-dc1-ip-or-lb
● Both instances of Apigee SSO must use the same JWT key pair as specified
by the SSO_JWT_SIGNING_KEY_FILEPATH and
SSO_JWT_VERIFICATION_KEY_FILEPATH properties in the configuration
file. See Install and configure Apigee SSO for more on setting these
properties.

When you install Apigee SSO in each data center, you configure both to use the
Postgres Master in data center 1:

## Postgres configuration
PG_HOST=IP_or_DNS_of_PG_Master_in_DC1
PG_PORT=5432

You also configure both data centers to use the DNS entry as the publicly accessible
URL:

# Externally accessible URL of Apigee SSO


SSO_PUBLIC_URL_HOSTNAME=my-sso.domain.com
# Default port is 9099.
SSO_PUBLIC_URL_PORT=9099
If Apigee SSO in data center 1 goes down, you can then switch to the Apigee SSO
instance in data center 2:

1. Convert the Postgres Standby server in data center 2 to Master as described
in Handling a PostgreSQL database failover.
2. Update the DNS record to point my-sso.domain.com to the Apigee SSO
instance in data center 2:
my-sso.domain.com => apigee-sso-dc2-ip-or-lb
3. Update the config file for Apigee SSO in data center 2 to point to the new
Postgres Master server in data center 2:
## Postgres configuration
PG_HOST=IP_or_DNS_of_PG_Master_in_DC2
4. Restart Apigee SSO in data center 2 to update its configuration:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso restart
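The config-file update during failover can be scripted. The sketch below edits a throwaway copy of the config so it is self-contained; the file path and IP addresses are illustrative, and in practice you would edit your real Apigee SSO config file:

```shell
# Sketch of updating PG_HOST in an Apigee SSO config file during failover.
# A temp file stands in for the real config; IP addresses are illustrative.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
## Postgres configuration
PG_HOST=10.0.1.5
PG_PORT=5432
EOF

# Point PG_HOST at the new Postgres Master in data center 2.
sed -i.bak 's/^PG_HOST=.*/PG_HOST=10.0.2.5/' "$CONF"
grep '^PG_HOST=' "$CONF"
```

After rewriting the property, you would restart Apigee SSO as shown in the step above so that it picks up the new configuration.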

Automate tasks for external IDPs


When using an external IDP with the Edge API, the process that you use to obtain
OAuth2 access and refresh tokens from the IDP interaction is called the passcode
flow. With the passcode flow, you use a browser to obtain a one-time passcode that
you then use to obtain OAuth2 tokens.

However, your development environment might support automation for common
development tasks, such as test automation or CI/CD. To automate these tasks
when an external IDP is enabled, you need a way to obtain and refresh OAuth2
tokens without having to copy/paste a passcode from a browser.

Edge supports automated token generation through the use of machine users within
organizations that have an IDP enabled. A machine user can obtain OAuth2 tokens
without having to specify a passcode. That means you can completely automate the
process of obtaining and refreshing OAuth2 tokens by using the Edge management
API.

There are two ways to create a machine user for an IDP-enabled organization:

● Using apigee-ssoadminapi.sh
● Using the management API

Each of these methods is described in the sections that follow.

You cannot create a machine user for organizations that have not enabled an external
IDP.

Create a machine user with apigee-ssoadminapi.sh


Use the apigee-ssoadminapi.sh utility to create a machine user within an IDP-
enabled organization. See Using apigee-ssoadminapi.sh for more. You can create a
single machine user that is used by all of your organizations, or create a separate
machine user for each organization.

NOTE You can only create machine users for organizations that use an external IDP such as
LDAP or SAML. You cannot create machine users for organizations that are not IDP-enabled.

The machine user is created and stored in the Edge datastore, not in your IDP.
Therefore, you maintain the machine user by using the Edge UI and Edge
management API, not through your IDP.

When you create the machine user, you must specify an email address and
password. After creating the machine user, you assign it to one or more
organizations.

To create a machine user with apigee-ssoadminapi.sh:

1. Use the following apigee-ssoadminapi.sh command to create the
machine user:
apigee-ssoadminapi.sh saml machineuser add --admin SSO_ADMIN_NAME \
--secret SSO_ADMIN_SECRET --host Edge_SSO_IP_or_DNS \
-u machine_user_email -p machine_user_password
Where:
● SSO_ADMIN_NAME is the admin username defined by the
SSO_ADMIN_NAME property in the configuration file used to configure
the Apigee SSO module. The default is ssoadmin.
● SSO_ADMIN_SECRET is the admin password as specified by the
SSO_ADMIN_SECRET property in the config file.
In this example, you can omit the values for --port and --ssl
because the apigee-sso module uses the default values of 9099 for
--port and http for --ssl. If your installation does not use these
defaults, specify them as appropriate.
2. Log in to the Edge UI and add the machine user's email to your organizations,
and assign the machine user to the necessary role. See Adding global users
for more.

Create a machine user with the Edge management API


You can create a machine user by using the Edge management API instead of using
the apigee-ssoadminapi.sh utility.

NOTE You can only create machine users for organizations that use an external IDP. You cannot
create machine users for organizations that do not use an external IDP.

To create a machine user with the management API:


1. Use the following curl command to obtain a token for the ssoadmin user,
the username of the admin account for apigee-sso:
curl "http://Edge_SSO_IP_DNS:9099/oauth/token" -i -X POST \
-H 'Accept: application/json' -H 'Content-Type: application/x-www-form-urlencoded' \
-d "response_type=token" -d "grant_type=client_credentials" \
--data-urlencode "client_secret=SSO_ADMIN_SECRET" \
--data-urlencode "client_id=ssoadmin"
Where SSO_ADMIN_SECRET is the admin password you set when you
installed apigee-sso, as specified by the SSO_ADMIN_SECRET property in
the config file.
This command displays a token that you need to make the next call.
2. Use the following curl command to create the machine user, passing the
token that you received in the previous step:
curl "http://edge_sso_IP_DNS:9099/Users" -i -X POST \
-H "Accept: application/json" -H "Content-Type: application/json" \
-H "Authorization: Bearer token" \
-d '{"userName" : "machine_user_email", "name" :
{"formatted":"DevOps", "familyName" : "last_name", "givenName" :
"first_name"}, "emails" : [ {"value" :
"machine_user_email", "primary" : true } ], "active" : true,
"verified" : true, "password" : "machine_user_password" }'
You need the machine user password in later steps.
3. Log in to the Edge UI.
4. Add the machine user's email to your organizations and assign the machine
user to the necessary role. See Adding global users for more.
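When scripting this, it helps to assemble the machine-user JSON payload from variables before POSTing it. A minimal sketch; the email and password values are illustrative, and the field names match the curl example above:

```shell
# Assemble the machine-user JSON payload from shell variables.
# Values are illustrative; field names match the /Users call above.
MUSER_EMAIL="machine.user@example.com"
MUSER_PASS="machine_user_password"

PAYLOAD=$(cat <<EOF
{"userName": "$MUSER_EMAIL",
 "name": {"formatted": "DevOps", "familyName": "User", "givenName": "Machine"},
 "emails": [{"value": "$MUSER_EMAIL", "primary": true}],
 "active": true, "verified": true, "password": "$MUSER_PASS"}
EOF
)

echo "$PAYLOAD"
# The payload would then be sent with:
# curl "http://edge_sso_IP_DNS:9099/Users" -i -X POST \
#   -H "Content-Type: application/json" -H "Authorization: Bearer token" \
#   -d "$PAYLOAD"
```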

Obtain and refresh machine user tokens


Use the Edge API to obtain and refresh OAuth2 tokens by passing the machine user's
credentials, instead of a passcode.

To obtain OAuth2 tokens for the machine user:

1. Use the following API call to generate the initial access and refresh tokens:
curl -H "Content-Type: application/x-www-form-urlencoded;charset=utf-8" \
-H "Accept: application/json;charset=utf-8" \
-H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
http://Edge_SSO_IP_DNS:9099/oauth/token -s \
-d 'grant_type=password&username=m_user_email&password=m_user_password'
NOTE: For authorization, pass the reserved OAuth2 client credential, Basic
ZWRnZWNsaTplZGdlY2xpc2VjcmV0, in the Authorization header.
Save the tokens for later use.
2. Pass the access token to an Edge management API call as the Bearer
header, as the following example shows:
curl -H "Authorization: Bearer access_token" \
http://MS_IP_DNS:8080/v1/organizations/org_name
Where org_name is the name of the organization containing the machine user.
3. To later refresh the access token, use the following call that includes the
refresh token:
curl -H "Content-Type: application/x-www-form-urlencoded;charset=utf-8" \
-H "Accept: application/json;charset=utf-8" \
-H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
http://edge_sso_IP_DNS:9099/oauth/token \
-d 'grant_type=refresh_token&refresh_token=refreshToken'
NOTE: For authorization, pass the reserved OAuth2 client credential, Basic
ZWRnZWNsaTplZGdlY2xpc2VjcmV0, in the Authorization header.
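For automation, the token responses can be parsed without extra tooling. The sketch below extracts fields from a literal example response; the JSON shown is illustrative, and a real script would capture the body of the /oauth/token call:

```shell
# Extract a named string field from a JSON token response without jq.
extract_json_field() {
  # $1 = JSON string, $2 = field name
  echo "$1" | sed -n "s/.*\"$2\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p"
}

# Illustrative token response; a real script would capture this from curl.
RESPONSE='{"access_token":"abc123","token_type":"bearer","refresh_token":"def456","expires_in":1799}'

ACCESS_TOKEN=$(extract_json_field "$RESPONSE" "access_token")
REFRESH_TOKEN=$(extract_json_field "$RESPONSE" "refresh_token")
echo "$ACCESS_TOKEN"
echo "$REFRESH_TOKEN"
```

A cron job or CI task can then refresh the access token with the grant_type=refresh_token call above whenever it expires.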

Use SAML with the Edge UI


The SAML specification defines three entities:

● Principal (Edge UI user)
● Service provider (Apigee SSO)
● Identity provider (returns a SAML assertion)

When SAML is enabled, the principal (an Edge UI user) requests access to the
service provider (Apigee SSO). Apigee SSO (in its role as a SAML service provider)
then requests and obtains an identity assertion from the SAML IDP and uses that
assertion to create the OAuth2 token required to access the Edge UI. The user is then
redirected to the Edge UI.

This process is shown below:


In this diagram:

1. The user attempts to access the Edge UI by making a request to the login URL
for the Edge UI. For example: https://edge_UI_IP_DNS:9000
2. Unauthenticated requests are redirected to the SAML IDP. For example,
"https://idp.customer.com".
3. If you are not logged in to the IDP, you are prompted to log in.
4. You are authenticated by the SAML IDP.
The SAML IDP generates and returns a SAML 2.0 assertion to the Apigee SSO
module.
5. Apigee SSO validates the assertion, extracts the user identity from the
assertion, generates the OAuth2 authentication token for the Edge UI, and
redirects the user to the main Edge UI page at the following URL:
https://edge_ui_IP_DNS:9000/platform/orgName
Where orgName is the name of an Edge organization.

Configure single sign-out from the Edge UI (SAML only)

By default, when a user logs out of the Edge UI, the Edge UI clears any cookies for
the user's session. Clearing cookies requires the user to log in again the next time
they want to access the Edge UI. If you have implemented a single sign-on
environment, the user can still access any other services through their single sign-on
credentials.

However, you might want a logout from any one service to sign the user out of all
services. In this case, you can configure your IDP to support single sign-out.
NOTE: You do not have to make configuration changes to the Edge UI to enable single sign-out.
You configure the IDP to sign out the user when they log out of any service. Therefore, the steps
to enable single sign-out are specific to your IDP.

To configure the IDP, you need the following information about the Edge UI:

● The single logout URL for the Edge UI: This URL is in the following form:
http://apigee_sso_IP_DNS:9099/saml/SingleLogout/alias/apigee-saml-login-opdk
Or, if you enabled TLS on apigee-sso:
https://apigee_sso_IP_DNS:9099/saml/SingleLogout/alias/apigee-saml-login-opdk
● The service provider issuer: The value for the Edge UI is apigee-saml-login-opdk.
● The IDP certificate: In Install and configure Apigee SSO, you created an IDP
certificate named selfsigned.crt and saved it in
/opt/apigee/customer/application/apigee-sso/idp/. Depending
on your IDP, you must use that same certificate to configure single sign-out.

For example, if you are using Okta as your identity provider, in the SAML settings for
your application:

1. In your OKTA application, select Show Advanced Settings.
2. Select Allow application to initiate Single Logout.
3. Enter the Single Logout URL for the Edge UI as shown above.
4. Enter the SP Issuer (service provider issuer).
5. In Signature Certificate, upload the
/opt/apigee/customer/application/apigee-sso/saml/selfsigned.crt
TLS cert. The following image shows this information for OKTA:

6. Save your settings.

When a user next logs out of the Edge UI, the user is logged out of all services.

Using Edge admin utilities and APIs after enabling SAML

This section describes how to run Edge system admin tools and commands after
enabling SAML. Many tasks on Edge require system administration credentials, such
as:

● Creating organizations and environments
● Adding and removing Edge components
● Running apigee-adminapi.sh commands

However, after you enable SAML on Edge you typically disable Basic Auth so that the
only way to authenticate is through the SAML IDP. Therefore, you must make sure
that you have added the system admin account to your SAML IDP.

Warning: Ensure that you have added the Edge system admin account to the SAML IDP before
you disable Basic Auth. Otherwise, you will not be able to authenticate as the system admin.

Calling Edge management APIs as the system administrator
Many Edge API calls require you to pass system administrator credentials. Using
SAML with the Edge management API contains instructions on how to obtain and
refresh tokens when making Edge management API calls.

Using the apigee-adminapi.sh utility with SAML authentication
Use the apigee-adminapi.sh utility to perform the same Edge configuration
tasks that you perform by making calls to the Edge management API. The advantage
to the apigee-adminapi.sh utility is that it:

● Uses a simple command-line interface
● Implements tab-based command completion
● Provides help and usage information
● Can display the corresponding API call if you decide to try the API

For more, see Using apigee-ssoadminapi.sh.

After you enable SAML authentication, you have several ways to pass the system
admin credentials to the apigee-adminapi.sh utility.

Note: The credentials must be for a sys admin user, not an organization administrator or other
type of user.
You can see all of the options for any apigee-adminapi.sh command, including
the options for specifying SAML credentials, by using the "-h" option to the
command. For example:

apigee-adminapi.sh orgs list -h

For example, you can pass the system admin credentials:

apigee-adminapi.sh orgs list --sso-url http://edge_sso_IP_DNS:9099 \
--oauth-flow password_grant \
--admin adminEmail --oauth-password adminPword

Where:

● The sso-url option specifies the URL of the Apigee SSO module. Modify the
port or protocol if you have changed them from 9099 and HTTP.
● oauth-flow specifies either passcode or password_grant. In this
example, you specify password_grant.
● adminEmail is the email address of the sys admin.
● oauth-password specifies the sys admin's password.

Alternatively, you can use a passcode when calling the command:

apigee-adminapi.sh orgs list --sso-url http://edge_sso_IP_DNS:9099 \
--oauth-flow passcode \
--admin adminEmail --oauth-passcode passcode

Where:

● oauth-flow specifies passcode.
● oauth-passcode specifies the passcode obtained from
http://edge_sso_IP_DNS:9099/passcode.

Finally, you can use a token when calling the command:

apigee-adminapi.sh orgs list --sso-url http://edge_sso_IP_DNS:9099 \
--oauth-flow passcode \
--admin adminEmail --oauth-token token

Where:

● oauth-flow specifies either passcode or password_grant, depending on
how you originally got the token. In this example, you specify passcode
because you originally got the token by using get_token. See Using SAML
with the Edge management API.
● oauth-token contains the token.
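The three credential styles differ only in the flags passed to apigee-adminapi.sh, so a wrapper script can select them by flow. A minimal sketch; the helper function is hypothetical, while the flag names follow the examples above:

```shell
# Hypothetical helper: choose apigee-adminapi.sh credential flags by flow.
build_auth_flags() {
  # $1 = flow (password_grant, passcode, or token), $2 = credential value
  case "$1" in
    password_grant) echo "--oauth-flow password_grant --oauth-password $2" ;;
    passcode)       echo "--oauth-flow passcode --oauth-passcode $2" ;;
    token)          echo "--oauth-flow passcode --oauth-token $2" ;;
    *)              echo "unsupported flow: $1" >&2; return 1 ;;
  esac
}

FLAGS=$(build_auth_flags passcode 123456)
echo "$FLAGS"
# Then, for example:
# apigee-adminapi.sh orgs list --sso-url http://edge_sso_IP_DNS:9099 \
#   --admin adminEmail $FLAGS
```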
Using Edge utilities with SAML authentication

Many Edge utilities require system admin credentials, such as:

● apigee-provision used to create organizations, environments, and virtual hosts
● setup.sh used to add nodes to an existing system
● Any other utility where you have to specify the system admin credentials in a
configuration file

These utilities take as input a configuration file that specifies the system admin's
credentials by using the properties:

ADMIN_EMAIL="adminEmail"
APIGEE_ADMINPW=adminPWord

If you omit the password, then you are prompted for it.

After you enable SAML you use different properties to specify the sys admin's
credentials. For example, you can pass the system admin credentials:

ADMIN_EMAIL="adminEmail"
SSO_LOGIN_URL=http://edge_sso_IP_DNS:9099
OAUTH_FLOW=password_grant
OAUTH_ADMIN_PASSWORD=adminPWord

Where:

● SSO_LOGIN_URL specifies the URL of the Apigee SSO module. Modify the
port or protocol if you have changed them from 9099 and HTTP.
● OAUTH_FLOW specifies either passcode or password_grant. In this
example, you specify password_grant because you are passing the sys
admin's password.
● OAUTH_ADMIN_PASSWORD specifies the sys admin's password.

Alternatively, you can use the following properties to specify the credentials as part
of a passcode flow:

ADMIN_EMAIL="adminEmail"
SSO_LOGIN_URL=http://edge_sso_IP_DNS:9099
OAUTH_FLOW=passcode
OAUTH_ADMIN_PASSCODE=passcode

Where:

● OAUTH_FLOW specifies passcode.
● OAUTH_ADMIN_PASSCODE specifies the passcode obtained from
http://edge_sso_IP_DNS:9099/passcode.
Finally, you can use a token:

ADMIN_EMAIL="adminEmail"
SSO_LOGIN_URL=http://edge_sso_IP_DNS:9099
OAUTH_FLOW=passcode
OAUTH_BEARER_TOKEN=token

Where:

● OAUTH_FLOW specifies either passcode or password_grant, depending on
how you originally got the token. In this example, you specify passcode
because you originally got the token by using get_token. See Using SAML
with the Edge management API.
● OAUTH_BEARER_TOKEN contains the token.

Troubleshooting SAML on the Private Cloud

If the installation or configuration process fails, you should first ensure that all
necessary ports are open and accessible:

● The apigee-sso node must be able to access the Postgres node on port
5432.
● Port 9099 on apigee-sso node must be open for external HTTP access by
the Edge UI and the SAML IDP. If you configure TLS on apigee-sso, the port
number might be different.
● The apigee-sso node must be able to access the SAML IDP at the URL
specified by the SSO_SAML_IDP_METADATA_URL property.
● The apigee-sso node must be able to access port 8080 on the Management
Server node.
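These port checks can be scripted with bash's built-in /dev/tcp pseudo-device. A minimal sketch; the host names are illustrative, and port 1 on localhost is probed only to demonstrate the output format:

```shell
# Probe TCP reachability using bash's /dev/tcp pseudo-device.
check_port() {
  # $1 = host, $2 = port
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

# In a real check you would probe, for example:
# check_port postgres_host 5432   # apigee-sso -> Postgres
# check_port edge_sso_host 9099   # Edge UI / SAML IDP -> apigee-sso
# check_port ms_host 8080         # apigee-sso -> Management Server
check_port localhost 1
```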

If all necessary ports are open and accessible, you can rerun the configuration steps:

● For apigee-sso:
● /opt/apigee/apigee-service/bin/apigee-service apigee-sso setup -f configFile
● For the Edge UI:
● /opt/apigee/apigee-service/bin/apigee-service edge-ui configure-sso -f
configFile
If reconfiguration does not work, you can delete the Postgres database used by
apigee-sso, and then reconfigure apigee-sso and the Edge UI:

1. Disable SAML on the Edge UI as described in Disable SAML.
2. Stop apigee-sso:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso stop
3. Log in to the Postgres node and drop the Postgres database:
psql -U postgres_username -p postgres_port -h postgres_host -c "drop database \"apigee_sso\""
where:
● postgres_username is the Postgres username you specified when you
installed Edge. The default value is apigee.
● postgres_port is the Postgres port you specified when you installed
Edge. The default value is 5432.
● postgres_host is the IP or DNS name of the Postgres node.
4. Reconfigure apigee-sso:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso setup -f configFile
5. Reconfigure the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui configure-sso -f configFile

Use an external IDP with the Edge management API
Basic authentication is one way to authenticate when making calls to the Edge
management API. For example, you can make the following curl request to the
Edge management API to access information about your organization:

curl -u USER_NAME:PASSWORD
https://MS_IP_DNS:8080/v1/organizations/ORG_NAME

In this example, you use the curl -u option to pass Basic authentication
credentials. Alternatively, you can pass an OAuth2 token in the Bearer header to
make Edge management API calls, as the following example shows:

curl -H "Authorization: Bearer ACCESS_TOKEN" \
https://MS_IP_DNS:8080/v1/organizations/ORG_NAME

After you enable an external IDP for authentication, you can optionally disable Basic
authentication. If you disable Basic authentication, all scripts (such as Maven, shell,
and apigeetool) that rely on Edge management API calls supporting Basic
authentication no longer work. You must update any API calls and scripts that use
Basic authentication to pass OAuth2 access tokens in the Bearer header.

Get and refresh tokens with get_token


The get_token utility exchanges your Basic authentication credentials (and in
some cases a passcode) for an OAuth2 access and refresh token. The get_token
utility accepts your credentials and returns a valid access token. If a token can be
refreshed, the utility refreshes and returns it. If the refresh token expires, it will
prompt for user credentials.

The get_token utility stores the tokens on disk, ready for use when required. It also
prints a valid access token to stdout. From there, you can use a browser extension
like Postman or embed it in an environment variable for use in curl.

To obtain an OAuth2 access token to make Edge management API calls:

1. Download the sso-cli bundle:
curl http://EDGE_SSO_IP_DNS:9099/resources/scripts/sso-cli/ssocli-bundle.zip -o "ssocli-bundle.zip"
Where EDGE_SSO_IP_DNS is the IP address or DNS name of the machine
hosting the Apigee SSO module. If you configured TLS on Apigee SSO, use
https and the correct TLS port number.
2. Unzip the ssocli-bundle.zip bundle:
unzip ssocli-bundle.zip
3. Install get_token in /usr/local/bin:
./install
Use the -b option to specify a different installation location.
4. Set the SSO_LOGIN_URL environment variable to your login URL, in the
following form:
export SSO_LOGIN_URL="http://EDGE_SSO_IP_DNS:9099"
Where EDGE_SSO_IP_DNS is the IP address or DNS name of the machine
hosting the Apigee SSO module. If you configured TLS on Apigee SSO, use
https and the correct TLS port number.
5. (SAML only) In a browser, navigate to the following URL to obtain a one-time
passcode:
http://EDGE_SSO_IP_DNS:9099/passcode
If you configured TLS on Apigee SSO, use https and the correct TLS port
number.
Note: If you are not currently logged in by your IDP, you will be prompted to log in.
This request returns a one-time passcode that remains valid until you refresh
that URL to obtain a new passcode, or until you use the passcode with get_token
to generate an access token.
Note that you can use a passcode only when authenticating with a SAML IDP.
You cannot use a passcode to authenticate with an LDAP IDP.
6. Invoke get_token to get the OAuth2 access token:
get_token -u EMAIL_ADDRESS
Where EMAIL_ADDRESS is the email address of an Edge user.
(SAML only) Enter the passcode on the command line in addition to the email
address:
get_token -u EMAIL_ADDRESS -p PASSCODE
The get_token utility obtains the OAuth2 access token, prints it to the
screen, and writes it and the refresh token to ~/.sso-cli.
7. Pass the access token to an Edge management API call as the Bearer
header:
curl -H "Authorization: Bearer ACCESS_TOKEN" https://MS_IP:8080/v1/organizations/ORG_NAME
8. After you get a new access token for the first time, you can get the access
token and pass it to an API call in a single command:
header=`get_token` && curl -H "Authorization: Bearer $header" https://MS_IP:8080/v1/o/ORG_NAME
With this form of the command, if the access token has expired, it is
automatically refreshed until the refresh token expires.

(SAML only) After the refresh token expires, get_token prompts you for a new
passcode. You must navigate to the passcode URL shown above and generate a new
passcode before you can generate a new OAuth2 access token.

Use the management API to get and refresh tokens


Use OAuth2 security with the Apigee Edge management API shows how to use the
Edge management API to obtain and refresh tokens. You can also use Edge API calls
to get tokens generated from SAML assertions.

The only difference from the API calls documented in Using OAuth2 security with
the Apigee Edge management API is that the URL of the call must reference your
zone name. In addition, to generate the initial access token with a SAML IDP, you
must include the passcode, as shown in the procedure above.

For authorization, pass a reserved OAuth2 client credential in the Authorization
header. The call prints the access and refresh tokens to the screen.

Get an access token

(LDAP) Use the following API call to generate the initial access and refresh tokens:

curl -H "Content-Type: application/x-www-form-urlencoded;charset=utf-8" \
-H "accept: application/json;charset=utf-8" \
-H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
http://EDGE_SSO_IP_DNS:9099/oauth/token -s \
-d 'grant_type=password&username=USER_EMAIL&password=USER_PASSWORD'

(SAML) Use the following API call to generate the initial access and refresh tokens:

curl -H "Content-Type: application/x-www-form-urlencoded;charset=utf-8" \
-H "accept: application/json;charset=utf-8" \
-H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
https://EDGE_SSO_IP_DNS:9099/oauth/token -s \
-d 'grant_type=password&response_type=token&passcode=PASSCODE'

Note that authentication with a SAML IDP requires a temporary passcode, whereas
an LDAP IDP does not.

Refresh an access token

To later refresh the access token, use the following call that includes the refresh
token:

curl -H "Content-Type:application/x-www-form-urlencoded;charset=utf-8" \
-H "Accept: application/json;charset=utf-8" \
-H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
https://EDGE_SSO_IP_DNS:9099/oauth/token \
-d 'grant_type=refresh_token&refresh_token=REFRESH_TOKEN'
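The /oauth/token calls above return a JSON body containing the tokens. The following sketch extracts them for reuse in later calls; the response shown is a fabricated example standing in for the real curl output, and python3 is assumed to be available (this avoids a dependency on jq):

```shell
# Hypothetical response; in practice capture it from the call above, e.g.:
# RESPONSE=$(curl -s ... https://EDGE_SSO_IP_DNS:9099/oauth/token ...)
RESPONSE='{"access_token":"abc123","refresh_token":"def456","token_type":"bearer","expires_in":1799}'

# Extract the two token fields from the JSON body.
ACCESS_TOKEN=$(printf '%s' "$RESPONSE" | python3 -c 'import sys,json;print(json.load(sys.stdin)["access_token"])')
REFRESH_TOKEN=$(printf '%s' "$RESPONSE" | python3 -c 'import sys,json;print(json.load(sys.stdin)["refresh_token"])')

echo "access:  $ACCESS_TOKEN"
echo "refresh: $REFRESH_TOKEN"

# The access token can then be passed in the Bearer header:
# curl -H "Authorization: Bearer $ACCESS_TOKEN" http://MS_IP_DNS:8080/v1/organizations/ORG_NAME
```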

Use apigee-ssoadminapi.sh
The Apigee SSO module supports two types of accounts:

● Administrators
● Machine users (for external IDP-enabled organizations only)

The apigee-ssoadminapi.sh utility lets you manage the administrator and
machine user accounts that are associated with the Apigee SSO module.

Use the apigee-ssoadminapi.sh utility to:

● View the list of admin/machine users
● Add or delete admin/machine users
● Change the password of admin/machine users

About administrator users


An administrator account on the Apigee SSO module is required to manage
properties of the module.
By default, when you install the Apigee SSO module, it creates an administrator
account with the following credentials:

● username: Defined by the SSO_ADMIN_NAME property in the configuration file


used to configure the Apigee SSO module. The default is ssoadmin.
● password: Defined by the SSO_ADMIN_SECRET property in the configuration
file used to configure the Apigee SSO module.

About machine users


A machine user can obtain OAuth2 tokens without having to specify a passcode.
That means you can completely automate the process of obtaining and refreshing
OAuth2 tokens by using the Edge management API.

NOTE: Machine users can be created only for organizations that use external IDPs. Machine
users cannot be created for organizations that do not use external IDPs.

You typically use machine users for:

● Configuring the Apigee Developer Services portal (or simply, the portal) to
communicate with Edge
● Supporting development environments that automate common
development tasks, such as test automation or CI/CD

For more, see Automate tasks for external IDPs.
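For example, a CI/CD job could use a machine user to fetch a token entirely non-interactively with the password grant shown earlier under Get an access token. The following is a sketch only: the host and machine-user credentials are placeholders, and the actual network request is left commented out:

```shell
#!/usr/bin/env bash
# Placeholder values; substitute your own host and machine-user credentials.
SSO_HOST="edge_sso_IP_DNS:9099"
MACHINE_USER="ci-bot@example.com"
MACHINE_PASSWORD="changeme"

# Build the password-grant request body for the /oauth/token endpoint.
BODY="grant_type=password&username=${MACHINE_USER}&password=${MACHINE_PASSWORD}"
echo "$BODY"

# In a real job, exchange the credentials for an access token:
# TOKEN=$(curl -s -H "Content-Type: application/x-www-form-urlencoded;charset=utf-8" \
#   -H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
#   "http://${SSO_HOST}/oauth/token" -d "$BODY" \
#   | python3 -c 'import sys,json;print(json.load(sys.stdin)["access_token"])')
```

Because no passcode is involved, the job never needs browser interaction, which is the point of a machine user.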

Installing apigee-ssoadminapi.sh
Install the apigee-ssoadminapi.sh utility on the Edge Management Server node
where you installed the Apigee SSO module. Typically you install the apigee-
ssoadminapi.sh utility when you install the Apigee SSO module.

If you have not yet installed the apigee-ssoadminapi.sh utility:

1. Log in to the Management Server node. That node should already have
apigee-service installed as described at Install the Edge apigee-setup
utility.
2. Install the apigee-ssoadminapi.sh utility, used to manage admin and
machine users for the Apigee SSO module, by executing the following
command:
/opt/apigee/apigee-service/bin/apigee-service apigee-ssoadminapi install
3. Log out of the shell, and then log back in again to add the
apigee-ssoadminapi.sh utility to your path.

View help information for apigee-ssoadminapi.sh

The available commands for the utility are:
● admin add
● admin delete
● admin list
● admin setpassword
● saml machineuser add
● saml machineuser delete
● saml machineuser list
● saml machineuser setpassword
NOTE: Use the saml command for all IDPs, including LDAP with direct and indirect binding.

You can view information about these commands in the
/opt/apigee/apigee-ssoadminapi/README.md file. Additionally, you can
specify the "-h" option to each command to view usage information.

For example, the following command:

apigee-ssoadminapi.sh admin list -h

Returns:

admin list
--admin SSOADMIN_CLIENT_NAME Name of the client having administrative
privilege on sso
--secret SSOADMIN_CLIENT_SECRET Secret/Password for the client
--host SSO_HOST Hostname of SSO server to connect
--port SSO_PORT Port to use during request
--ssl SSO_URI_SCHEME Set to https, defaults to http
--debug Set in debug mode, turns on verbose in curl
-h Displays Help

Invoke the apigee-ssoadminapi.sh utility


You can invoke the apigee-ssoadminapi.sh utility by passing all properties as
command line arguments, or in interactive mode by responding to prompts.

For example, to specify all required information on the command line to see the list
of admin users:

apigee-ssoadminapi.sh admin list --admin ssoadmin --secret Secret123 --host 35.197.94.184

Returns:

[
{
"client_id": "ssoadmin",
"access_token_validity": 300
}
]

If you omit any required information, such as the admin password, you are prompted.

In this example, you omit the values for --port and --ssl because the Apigee SSO
module uses the default values of 9099 for --port and http for --ssl. If your
installation does not use these defaults, specify them:

apigee-ssoadminapi.sh admin list --admin ssoadmin --secret Secret123 --host 35.197.94.184 --port 9443 --ssl https

Alternatively, use the interactive form where you are prompted for all information:

apigee-ssoadminapi.sh admin list

You are then prompted for all required information:

SSO admin name (current): ssoadmin
SSO Admin secret (current):
SSO host: 35.197.94.184

Disable external IDP authentication

This section shows you how to disable external IDP authentication for the Edge UI
and the management API.

To disable external IDP authentication on the Edge UI:

1. Open for edit the configuration file that you used to configure the external IDP.
2. Set the EDGEUI_SSO_ENABLED property to n, as the following example
shows:
EDGEUI_SSO_ENABLED=n
3. Configure the Edge UI by executing the following command:
/opt/apigee/apigee-service/bin/apigee-service edge-ui configure-sso -f configFile
NOTE: Users who never set an Edge password must select the Reset password link on
the Edge UI login page to set a new password.
To disable external IDP authentication on the Edge management API:

1. Stop the apigee-sso module:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso stop

IDP authentication (Classic UI)


NOTE: This section describes how to use an external identity provider with the Classic UI. It is not
intended for users of the new Edge UI.

Also note, however, that this release of Apigee Edge for Private Cloud (4.51.00) is the last release
in which Classic UI will be supported. In all future releases, only the Edge UI will be supported.

This section provides an overview of how external directory services integrate with
an existing Apigee Edge for Private Cloud installation. This feature is designed to
work with any directory service that supports LDAP, such as Active Directory,
OpenLDAP, and others.

An external LDAP solution allows system administrators to manage user credentials
from a centralized directory management service, external to systems like Apigee
Edge that use them. The feature described in this document supports both direct and
indirect binding authentication.

For detailed instructions on configuring an external directory service, see
Configuring external authentication.

Audience
This document assumes that you are an Apigee Edge for Private Cloud global
system administrator and that you have an account with the external directory service.

Overview
By default, Apigee Edge uses an internal OpenLDAP instance to store credentials
that are used for user authentication. However, you can configure Edge to use an
external authentication LDAP service instead of the internal one. The procedure for
this external configuration is explained in this document.

Edge also stores role-based access authorization credentials in a separate, internal
LDAP instance. Whether or not you configure an external authentication service,
authorization credentials are always stored in this internal LDAP instance. The
procedure for adding users that exist in the external LDAP system to the Edge
authorization LDAP is explained in this document.
Note that authentication refers to validating a user's identity, while authorization
refers to verifying the level of permission an authenticated user is granted to use
Apigee Edge features.

What you need to know about Edge authentication and authorization
It's useful to understand the difference between authentication and authorization
and how Apigee Edge manages these two activities.

About authentication

Users who access Apigee Edge either through the UI or APIs must be authenticated.
By default, Edge user credentials for authentication are stored in an internal
OpenLDAP instance. Typically, users must register or be asked to register for an
Apigee account, and at that time they supply their username, email address,
password credentials, and other metadata. This information is stored in and
managed by the authentication LDAP.

However, if you wish to use an external LDAP to manage user credentials on behalf
of Edge, you can do so by configuring Edge to use the external LDAP system instead
of the internal one. When an external LDAP is configured, user credentials are
validated against that external store, as explained in this document.

About authorization

Edge organization administrators can grant specific permissions to users to interact
with Apigee Edge entities like API proxies, products, caches, deployments, and so on.
Permissions are granted through the assignment of roles to users. Edge includes
several built-in roles, and, if needed, org administrators can define custom roles. For
example, a user can be granted authorization (through a role) to create and update
API proxies, but not to deploy them to a production environment.

The key credential used by the Edge authorization system is the user's email
address. This credential (along with some other metadata) is always stored in Edge's
internal authorization LDAP. This LDAP is entirely separate from the authentication
LDAP (whether internal or external).

Users who are authenticated through an external LDAP must also be manually
provisioned into the authorization LDAP system. Details are explained in this
document.

NOTE: User passwords from the external LDAP system are never stored/cached in the internal
authorization system.

For more background on authorization and RBAC, see Managing organization users
and Assigning roles.
For a deeper view, see also Understanding Edge authentication and authorization
flows.

Understanding direct and indirect binding authentication


The external authorization feature supports both direct and indirect binding
authentication through the external LDAP system.

Summary: Indirect binding authentication requires a search on the external LDAP for
credentials that match the email address, username, or other ID supplied by the user
at login. With direct binding authentication, no search is performed: credentials are
sent to and validated by the LDAP service directly. Direct binding authentication is
considered to be more efficient because there is no searching involved.

About indirect binding authentication

With indirect binding authentication, the user enters a credential, such as an email
address, username, or some other attribute, and Edge searches the authentication
system for this credential/value. If the search is successful, the system
extracts the LDAP DN from the search results and uses it with the provided password
to authenticate the user.

The key point to know is that indirect binding authentication requires the caller (e.g.,
Apigee Edge) to provide external LDAP admin credentials so that Edge can "log in" to
the external LDAP and perform the search. You must provide these credentials in an
Edge configuration file, which is described later in this document. Steps are also
described for encrypting the password credential.

About direct binding authentication

With direct binding authentication, Edge sends credentials entered by a user directly
to the external authentication system. In this case, no search is performed on the
external system. Either the provided credentials succeed or they fail (e.g., if the user
is not present in the external LDAP or if the password is incorrect, the login will fail).

Direct binding authentication does not require you to configure admin credentials for
the external auth system in Apigee Edge (as with indirect binding authentication);
however, there is a simple configuration step that you must perform, which is
described in Configuring external authentication.
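The difference between the two binding modes can be illustrated with the standard OpenLDAP client tools. This is a sketch only: the server URL, base DN, attribute names, and accounts are placeholders matching the configuration samples later in this document, and the actual ldapsearch/ldapwhoami calls are left commented out:

```shell
#!/usr/bin/env bash
# Placeholders for the external LDAP service.
LDAP_URL="ldap://localhost:389"
BASE_DN="dc=apigee,dc=com"
USER_ATTR="userPrincipalName"
USER_ID="jdoe@mydomain.com"

# Indirect binding: an admin account first searches for the user's DN...
FILTER="(&(${USER_ATTR}=${USER_ID}))"
echo "$FILTER"
# ldapsearch -H "$LDAP_URL" -D "cn=admin,${BASE_DN}" -w adminPassword \
#   -b "$BASE_DN" "$FILTER" dn
# ...then Edge binds with the DN from the search result plus the user's password.

# Direct binding: no search; the user's DN and password go straight to the server.
# ldapwhoami -H "$LDAP_URL" -D "cn=jdoe,users,${BASE_DN}" -w userPassword
```

Note that the filter echoed above has the same shape as the user.store.search.query property in the configuration samples, with the attribute and user ID substituted in.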

Access the Apigee community


The Apigee Community is a free resource where you can contact Apigee as well as
other Apigee customers with questions, tips, and other issues. Before posting to the
community, be sure to first search existing posts to see if your question has already
been answered.
Enabling external authentication
This section explains how to obtain, install, and configure the components required
to integrate an LDAP service into Apigee Edge for user authentication.

NOTE: These instructions apply to Microsoft Active Directory. For other LDAP services, please
consult their documentation.

Prerequisites
● You must have an Apigee Edge for Private Cloud 4.51.00 installation.
● You must have global system administrator credentials on Apigee Edge for
Private Cloud to perform this installation.
● You need to know the root directory of your Apigee Edge for Private Cloud
installation. The default root directory is /opt.
● You must add your Edge global system administrator credentials to the
external LDAP. Remember that by default, the sysadmin credentials are stored
in the Edge internal LDAP. Once you switch to the external LDAP, your
sysadmin credentials will be authenticated there instead. Therefore, you must
provision the credentials to the external system before enabling external
authentication in Edge.
For example, if you have configured and installed Apigee Edge for Private
Cloud with global system administrator credentials as:
username: edgeuser@mydomain.com
password: Secret123
then the user edgeuser@mydomain.com with password Secret123 must
also be present in the external LDAP.
● If you are running a Management Server cluster, note that you must perform
all of the steps in this document for each Management Server.

Configuring external authentication


The main activity you'll perform is configuring the management-
server.properties file. This activity includes stopping and starting the Edge
Management Server, deciding whether you want to use direct or indirect binding,
encrypting sensitive credentials, and other related tasks.

1. Important: Decide now whether you intend to use the indirect or direct binding
authentication method. This decision will affect some aspects of the
configuration. See External Authentication.
2. Important: You must do these configuration steps on each Apigee Edge
Management Server (if you are running more than one).
3. Open /opt/apigee/customer/application/management-server.properties
in a text editor. If the file does not exist, create it.
4. Add the following line:
conf_security_authentication.user.store=externalized.authentication
Note: Be sure that there are no trailing spaces at the end of the line.
This line adds the external authentication feature to your Edge for Private
Cloud installation.
5. To make this step easy, we have created two well-commented sample
configurations: one for direct and one for indirect binding authentication.
See the samples below for the binding you wish to use, and complete the
configuration:
a. DIRECT BINDING configuration sample
b. INDIRECT BINDING configuration sample
6. Restart the Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
7. Verify that the server is running:
/opt/apigee/apigee-service/bin/apigee-all status
8. Important: You must do additional configuration under either (or both) of
the following circumstances:
a. If you intend to have users log in using usernames that are not email
addresses. In this case, your sysadmin user must also authenticate
with a username.
AND/OR
b. If the password for your sysadmin user account in your external LDAP
is different from the password you configured when you first installed
Apigee Edge for Private Cloud. See Configuration required for different
sysadmin credentials.

DIRECT BINDING configuration sample


## The first property is always required to enable the external authorization feature.
## Do not change it.
conf_security_externalized.authentication.implementation.class=com.apigee.rbac.impl.LdapAuthenticatorImpl

## Identify the type of binding:


## Set to "true" for direct binding
## Set to "false" for indirect binding.
conf_security_externalized.authentication.bind.direct.type=true

## The next seven properties are needed regardless of direct or indirect binding. You need
## to configure these per your external authentication installation.
## The IP or domain for your external LDAP instance.
conf_security_externalized.authentication.server.url=ldap://localhost:389

## Your external LDAP server version.


conf_security_externalized.authentication.server.version=3

## The server timeout in milliseconds.


conf_security_externalized.authentication.server.conn.timeout=50000

## Change these baseDN values to match your external LDAP service. This attribute value
## will be provided by your external LDAP administrator, and may have more or fewer dc
## elements depending on your setup.
conf_security_externalized.authentication.user.store.baseDN=dc=apigee,dc=com

## Do not change this search string. It is used internally.


conf_security_externalized.authentication.user.store.search.query=(&(${userAttribute}=${userId}))

## Identifies the external LDAP property you want to bind against for Authentication. For
## example, if you are binding against an email address in Microsoft Active Directory, this
## would be the userPrincipalName property in your external LDAP instance. Alternatively,
## if you are binding against the user's ID, this would typically be in the sAMAccountName
## property:
conf_security_externalized.authentication.user.store.user.attribute=userPrincipalName

## The LDAP attribute where the user email value is stored. For direct binding with AD,
## set it to userPrincipalName.
conf_security_externalized.authentication.user.store.user.email.attribute=userPrincipalName

## ONLY needed for DIRECT binding.
## The direct.bind.user.directDN property defines the string that is used for the bind
## against the external authentication service. Ensure it is set as follows:
conf_security_externalized.authentication.direct.bind.user.directDN=${userDN}

INDIRECT BINDING configuration sample


## Required to enable the external authorization feature. Do not change it.
conf_security_externalized.authentication.implementation.class=com.apigee.rbac.impl.LdapAuthenticatorImpl

## Identifies the type of binding:


## Set to "true" for direct binding
## Set to "false" for indirect binding.
conf_security_externalized.authentication.bind.direct.type=false

## The next seven properties are needed regardless of direct or indirect binding. You need
## to configure these per your external LDAP installation.
## The IP or domain for your external LDAP instance.
conf_security_externalized.authentication.server.url=ldap://localhost:389

## Replace with your external LDAP server version.


conf_security_externalized.authentication.server.version=3

## Set the server timeout in milliseconds.


conf_security_externalized.authentication.server.conn.timeout=50000

## Change these baseDN values to match your external LDAP service. This attribute value
## will be provided by your external LDAP administrator, and may have more or fewer dc
## elements depending on your setup.
conf_security_externalized.authentication.user.store.baseDN=dc=apigee,dc=com

## Do not change this search string. It is used internally.


conf_security_externalized.authentication.user.store.search.query=(&(${userAttribute}=${userId}))

## Identifies the external LDAP property you want to bind against for Authentication. For
## example, if you are binding against an email address, this would typically be in the
## userPrincipalName property in your external LDAP instance. Alternatively, if you are
## binding against the user's ID, this would typically be in the sAMAccountName property.
## See also "Configuration required for different sysadmin credentials".
conf_security_externalized.authentication.user.store.user.attribute=userPrincipalName

## Used by Apigee to perform the Authorization step. Currently, Apigee supports only email
## addresses for Authorization. Make sure to set it to the attribute in your external LDAP
## that stores the user's email address. Typically this will be in the userPrincipalName
## property.
conf_security_externalized.authentication.user.store.user.email.attribute=userPrincipalName
## The external LDAP username (for a user with search privileges on the external LDAP),
## the password, and whether the password is encrypted. You must also set the attribute
## externalized.authentication.bind.direct.type to false.
## The password attribute can be encrypted or in plain text. See
## "Indirect binding only: Encrypting the external LDAP user's password"
## for encryption instructions. Set the password.encrypted attribute to "true" if the
## password is encrypted. Set it to "false" if the password is in plain text.
conf_security_externalized.authentication.indirect.bind.server.admin.dn=myExtLdapUsername
conf_security_externalized.authentication.indirect.bind.server.admin.password=myExtLdapPassword
conf_security_externalized.authentication.indirect.bind.server.admin.password.encrypted=true

Testing the installation


1. Verify that the server is running:
/opt/apigee/apigee-service/bin/apigee-all status
2. Execute the following command, providing a set of Apigee Edge global system
admin credentials. The API call we're going to test can only be executed by an
Edge sysadmin.
Important: Identical credentials must exist in your external LDAP account. If not, you
need to add them now. Note that the username is usually an email address; however, it
depends on how you have configured external authentication, as explained previously
in this document.
curl -v http://management-server-IP:8080/v1/o -u sysadmin_username
For example:
curl -v http://192.168.52.100:8080/v1/o -u jdoe@mydomain.com
3. Enter your password when prompted.
If the command returns a 200 status and a list of organizations, the
configuration is correct. This command verifies that the API call to the Edge
Management Server was successfully authenticated through the external
LDAP system.

Disable the reset password link in the Edge UI
By default, the log in screen of the Edge UI includes a link that lets a user reset their
password:

However, this link is not integrated with an external authentication server, so you can
hide it by using the following procedure:

1. Open the ui.properties file in an editor. If the file does not exist, create it:
vi /inst_root/apigee/customer/application/ui.properties
2. Set the conf_apigee_apigee.feature.disablepasswordreset token
to "true" in ui.properties:
conf_apigee_apigee.feature.disablepasswordreset="true"
3. Save your changes.
4. Restart the Edge UI:
/inst_root/apigee/apigee-service/bin/apigee-service edge-ui restart

To later re-enable this link, set the
conf_apigee_apigee.feature.disablepasswordreset token to "false" and
restart the Edge UI.

(Indirect binding only) Encrypting the external LDAP user's password
If you are using indirect binding, you need to provide an external LDAP username and
password in management-server.properties that Apigee uses to log into the
external LDAP and perform the indirect credential search.

Note: Using plain text passwords in config files may be adequate for testing purposes; however,
for production environments, encryption is highly recommended.
The following steps explain how to encrypt your password:

1. Execute the following Java utility, replacing YOUR_EXTERNAL_LDAP_PASSWORD with your actual external LDAP password:
   java -cp /opt/apigee/edge-gateway/lib/thirdparty/*:/opt/apigee/edge-gateway/lib/kernel/*:/opt/apigee/edge-gateway/lib/infra/libraries/*:/opt/apigee/edge-management-server/conf/ com.apigee.util.CredentialUtil --password="YOUR_EXTERNAL_LDAP_PASSWORD"
   where /opt/apigee/edge-management-server/conf/ is the path to the edge-management-server's credential.properties file.
2. In the output of the command, you will see a newline followed by what looks like a random character string. Copy that string.
3. Edit /opt/apigee/customer/application/management-server.properties.
4. Update the following property, replacing myAdPassword with the string you copied in step 2:
   conf_security_externalized.authentication.indirect.bind.server.admin.password=myAdPassword
5. Be sure the following property is set to true:
   conf_security_externalized.authentication.indirect.bind.server.admin.password.encrypted=true
6. Save the file.
7. Restart the Management Server:
   /opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
8. Verify that the server is running:
   /opt/apigee/apigee-service/bin/apigee-all status
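When both the encrypted password and its flag are set, the relevant fragment of management-server.properties looks like the following. The value shown is a placeholder for the string printed by CredentialUtil, not real output:

```
conf_security_externalized.authentication.indirect.bind.server.admin.password=ENCRYPTED_STRING_FROM_CREDENTIALUTIL
conf_security_externalized.authentication.indirect.bind.server.admin.password.encrypted=true
```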

Testing the installation


See the testing section at the end of Enabling external authentication, and perform
the same test described there.

Configuring TLS/SSL with external authentication
This section explains how to configure TLS/SSL for the external authentication server. For
general information, see TLS/SSL.
1. Install the external LDAP Certificate Services.
2. Obtain the server certificate. For example:
   certutil -ca.cert client.crt
3. Change to your latest Java version home directory:
   cd /usr/java/latest
4. Import the server certificate. For example:
   sudo ./bin/keytool -import -keystore ./jre/lib/security/cacerts -file FULLY-QUALIFIED-PATH-TO-THE-CERT-FILE -alias CERT-ALIAS
   Where CERT-ALIAS is optional, but recommended. Replace CERT-ALIAS with a text name that you can use later to refer to the certificate, for example if you want to delete it.
   Note: The default keystore password used by Java is "changeit". If this has already been changed, you will need to get the keystore password from your sysadmin so that you can add your certificate.
5. Open /opt/apigee/customer/application/management-server.properties in a text editor.
6. Change the conf_security_externalized.authentication.server.url property value as follows:
   ● Old value: ldap://localhost:389
   ● New value: ldaps://localhost:636
7. Restart the Management Server:
   /opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
8. Verify that the server is running:
   /opt/apigee/apigee-service/bin/apigee-all status

Testing the installation

See the testing section at the end of Enabling external authentication, and perform
the same test described there.

Configuration required for different sysadmin credentials
When you first installed Edge, a special kind of user called a sysadmin user was
created, and at the same time some additional config files were updated with this
user's details. If you configure your external LDAP to authenticate using a
non-email-address username, and/or you have a different password in your external
LDAP for this sysadmin user, then you will need to make the changes described in
this section.

There are two locations that need to be updated:


● The Edge UI logs into the Management Server using credentials that are
stored encrypted in a configuration file on the Edge UI. This update is required
when either/both username or password for your sysadmin user is different.
● Edge stores the sysadmin username in another file which is used when
running various utility scripts. This update is only required when the username
of your sysadmin user is different.

Changing the Edge UI password


The way you change the Edge UI password depends on how your external LDAP
server represents usernames:

● If usernames are email addresses, use the setup.sh utility to update the
Edge UI
● If the usernames are IDs, instead of an email address, use API calls and
property files to update the Edge UI

Both procedures are described below.

Changing the Edge UI credential for an email address


1. Edit the silent config file that you used to install the Edge UI to set the following properties:
   ADMIN_EMAIL=newUser
   APIGEE_ADMINPW=newPW
   SMTPHOST=smtp.gmail.com
   SMTPPORT=465
   SMTPUSER=foo@gmail.com
   SMTPPASSWORD=bar
   SMTPSSL=y
   SMTPMAILFROM="My Company <myco@company.com>"
   Note that you must include the SMTP properties when passing the new password because all properties on the UI are reset.
2. Use the apigee-setup utility to reset the password on the Edge UI from the config file:
   /opt/apigee/apigee-setup/bin/setup.sh -p ui -f configFile

Changing the Edge UI credentials for a user ID


1. Encrypt the user ID and password:
   java -cp "/opt/apigee/edge-ui/conf:/opt/apigee/edge-ui/lib/*" utils.EncryptUtil 'userName:PWord'
2. Open the ui.properties file in an editor. If the file does not exist, create it:
   vi /opt/apigee/customer/application/ui.properties
3. In ui.properties, set the conf_apigee_apigee.mgmt.credential token to the value returned by the call in step 1:
   conf_apigee_apigee.mgmt.credential="STRING_RETURNED_IN_STEP_1"
4. Set the owner of ui.properties to "apigee":
   chown apigee:apigee /opt/apigee/customer/application/ui.properties
5. Restart the Edge UI:
   /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Testing the configuration


1. Open the management UI in a browser at:
   http://management_server_IP:9000/
   For example:
   http://192.168.52.100:9000/
2. Log in using the new credentials. If the login succeeds, the configuration is correct.

Editing the Edge sysadmin username store for Apigee utility scripts
1. Edit the silent config file that you used to install the Edge UI to set the following property, changing the value of ADMIN_EMAIL to the username you will be using for your sysadmin user in your external LDAP:
   APIGEE_EMAIL=newUser
   IS_EXTERNAL_AUTH="true"
   The IS_EXTERNAL_AUTH property configures Edge to support an account name, rather than an email address, as the username.
2. Use the apigee-setup utility to reset the username on all Edge components from the config file:
   /opt/apigee/apigee-setup/bin/setup.sh -p edge -f configFile
   You must run this command for all Edge components on all Edge nodes, including: Management Server, Router, Message Processor, Qpid, and Postgres.

Testing the configuration

Verify that you can access the central pod. On the Management Server, run the
following curl command:

curl -u sysAdminEmail:password http://localhost:8080/v1/servers?pod=central

You should see output in the form:

[{
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "central",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ ]
},
"type" : [
"application-datastore",
"scheduler-datastore",
"management-server",
"auth-datastore",
"apimodel-datastore",
"user-settings-datastore",
"audit-datastore"
],
"uUID" : "d4bc87c6-2baf-4575-98aa-88c37b260469"
}, {
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "central",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "started.at",
"value" : "1454691312854"
}, ... ]
},
"type" : [ "qpid-server" ],
"uUID" : "9681202c-8c6e-4da1-b59b-23e3ef092f34"
}]
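To sanity-check this output programmatically, you can assert that every server in the pod reports both "isUp" and "reachable" as true. The following Python sketch is illustrative only (it is not part of Edge); it parses the JSON captured from the curl call:

```python
import json

def down_servers(payload):
    """Return the uUIDs of servers that are not both up and reachable."""
    return [s["uUID"] for s in json.loads(payload)
            if not (s.get("isUp") and s.get("reachable"))]

# Example: feed in the JSON captured from the curl command.
sample = '[{"uUID": "d4bc87c6", "isUp": true, "reachable": true}]'
print(down_servers(sample))  # an empty list means all servers are healthy
```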

Disable external authentication


Perform these steps if you want to turn off external authentication and revert to
using the internal authentication LDAP in Apigee Edge.

Important: You must perform the following steps on each Apigee Edge Management Server.
1. Open /opt/apigee/customer/application/management-server.properties in a text editor.
2. Set the conf_security_authentication.user.store property to ldap:
   conf_security_authentication.user.store=ldap
   Note: Be sure that there are no trailing spaces at the end of the line.
3. Optionally, and only if you were using a non-email-address username or a different password in your external LDAP for your sysadmin user: follow the steps you previously followed in Configuration required for different sysadmin credentials, but substitute the external LDAP username with your Apigee Edge sysadmin user's email address.
4. Restart the Management Server:
   /opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
5. Verify that the server is running:
   /opt/apigee/apigee-service/bin/apigee-all status
6. Important: An Edge organization administrator must take the following actions after external authentication is turned off:
   ● Make sure there are no users in Apigee Edge that should not be there; manually remove any such users.
   ● Communicate to users that, because external authentication has been turned off, they need to either start using their original password (if they remember it) or complete a "forgot password" process in order to log in.

External Role Mapping


External Role Mapping lets you map your own groups or roles to role-based access
control (RBAC) roles and groups created on Apigee Edge. This feature is only
available with Edge for Private Cloud.

Prerequisites
● You must be an Apigee Private Cloud system administrator with global
system admin credentials to perform this configuration.
● You need to know the root directory of your Apigee Edge Private Cloud
installation. The default root directory is /opt.

Step-by-step setup example


See this article on the Apigee Community Forums for a step-by-step example of
setting up External Role Mapping.
Default configuration
External Role Mapping is disabled by default.

Enable External Role Mapping


To enable External Role Mapping:

1. Create a Java class that implements the ExternalRoleMapperServiceV2 interface and include your implementation in the Management Server classpath:
   /opt/apigee/edge-management-server/lib/thirdparty/
   For details about this implementation, see the section About the ExternalRoleMapperImpl sample implementation later in this document.
2. Log in to your Apigee Edge Management Server and then stop the Management Server process:
   /opt/apigee/apigee-service/bin/apigee-service edge-management-server stop
3. Open /opt/apigee/customer/application/management-server.properties in a text editor. If this file does not exist, create it.
4. Edit the properties file to make the following settings:
   # The user store to be used for authentication.
   # Use "externalized.authentication" for LDAP user store.
   # Note that for authorization, we continue to use LDAP.
   # See Enabling external authentication for more on enabling external auth.
   conf_security_authentication.user.store=externalized.authentication

   # Enable the external authorization role mapper.
   conf_security_externalized.authentication.role.mapper.enabled=true
   conf_security_externalized.authentication.role.mapper.implementation.class=com.customer.authorization.impl.ExternalRoleMapperImpl
   Important: The implementation class and package name referenced in the above configuration (ExternalRoleMapperImpl) are only examples; it is a class that you must implement to reflect your own groups, and you can name the class and package whatever you wish. For details about implementing this class, see About the ExternalRoleMapperImpl sample implementation below.
5. Save the management-server.properties file.
6. Ensure that management-server.properties is owned by the apigee user:
   chown apigee:apigee /opt/apigee/customer/application/management-server.properties
7. Start the Management Server:
   /opt/apigee/apigee-service/bin/apigee-service edge-management-server start

Disable external authorization


To disable external authorization:
1. Open /opt/apigee/customer/application/management-server.properties in a text editor. If the file doesn't exist, create it.
2. Change the authentication user store to "ldap":
   conf_security_authentication.user.store=ldap
3. Set this property to "false":
   conf_security_externalized.authentication.role.mapper.enabled=false
4. Restart the Management Server:
   /opt/apigee/apigee-service/bin/apigee-service edge-management-server start

About the ExternalRoleMapperImpl sample implementation
In the security.properties config file described previously in Enable External Role
Mapping, note this line:

externalized.authentication.role.mapper.implementation.class=com.customer.autho
rization.impl.ExternalRoleMapperImpl

This class implements the ExternalRoleMapperServiceV2 interface, and is required.


You need to create your own implementation of this class that reflects your
respective groups. When finished, place the compiled class in a JAR and put that
JAR in the Management Server's classpath in:

/opt/apigee/edge-management-server/lib/thirdparty/
Warning: To compile this class, you must reference the following JAR file included with Edge:
/opt/apigee/edge-management-server/lib/infra/libraries/authentication-1.0.0.jar

You can name the class and package whatever you wish as long as it implements
ExternalRoleMapperServiceV2, is accessible in your classpath, and is referenced
correctly in the management-server.properties config file.

The following is a sample implementation of an ExternalRoleMapperImpl class:

Note: For better readability, we recommend that you copy the code into a text editor or IDE with
wider margins.
package com.customer.authorization.impl;

import com.apigee.authentication.*;
import com.apigee.authorization.namespace.OrganizationNamespace;
import com.apigee.authorization.namespace.SystemNamespace;
import java.util.Collection;
import java.util.HashSet;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

/** *
* Sample Implementation constructed with dummy roles with expected
namespaces.
*/

public class ExternalRoleMapperImpl


implements ExternalRoleMapperServiceV2 {

InitialDirContext dirContext = null;

@Override
public void start(ConfigBean arg0) throws ConnectionException {

try {
// Customer Specific Implementation will override the
// ImplementDirContextCreationLogicForSysAdmin method implementation.
// Create InitialDirContext based on the system admin user credentials.
dirContext = ImplementDirContextCreationLogicForSysAdmin();
} catch (NamingException e) {
// TODO Auto-generated catch block
throw new ConnectionException(e);
}
}

@Override
public void stop() throws Exception {
}

/**
* This method should be replaced with the customer's implementation.
* For a given roleName under expectedNamespace, return all users that belong to this role.
* @param roleName
* @param expectedNamespace
* @return All users that belong to this role. For each user, return the
username/email that is stored in Apigee LDAP.
* @throws ExternalRoleMappingException
*/
@Override
public Collection<String> getUsersForRole(String roleName, NameSpace
expectedNamespace) throws ExternalRoleMappingException {
Collection<String> users = new HashSet<>();
if (expectedNamespace instanceof SystemNamespace) {
//If requesting all users with sysadmin role
if (roleName.equalsIgnoreCase("sysadmin")) {
//Add sysadmin's email to results
users.add("sysadmin@wacapps.net");
}
} else {
String orgName = ((OrganizationNamespace)
expectedNamespace).getOrganization();
//If requesting all users of engRole in Apigee LDAP
if (roleName.equalsIgnoreCase("engRole")) {
//Get all users in corresponding groups in customer's LDAP. In this case
looking for 'engGroup';
SearchControls controls = new SearchControls();
controls.setSearchScope(1);
try {
NamingEnumeration<SearchResult> res =
dirContext.search("ou=groups,dc=corp,dc=wacapps,dc=net",
"cn=engGroup", new Object[]{"",""}, controls);
while (res.hasMoreElements()) {
SearchResult sr = res.nextElement();
//Add all users into return
users.addAll(sr.getAttributes().get("users").getAll());
}
} catch (NamingException e) {
//Customer needs to handle the exception here
}
}
}
return users;
}

/**
*
* This method would be implemented by the customer and would be invoked
* while including using X-Apigee-Current-User header in request.
*
* X-Apigee-Current-User allows the customer to login as another user
*
* Below is the basic example.
*
* If User has sysadmin role then it's expected to set SystemNameSpace
* along with the expected NameSpace. Otherwise role's expectedNameSpace
* to be set for the NameSpacedRole.
*
* Collection<NameSpacedRole> results = new HashSet<NameSpacedRole>();
*
* NameSpacedRole sysNameSpace = new NameSpacedRole("sysadmin",
* SystemNamespace.get());
*
* String orgName =
* ((OrganizationNamespace) expectedNameSpace).getOrganization();
*
* NameSpacedRole orgNameSpace = new NameSpacedRole ("orgadmin",
* expectedNameSpace);
*
* results.add(sysNameSpace);
*
* results.add(orgNameSpace);
*
*
* @param username UserA's username
* @param password UserA's password
* @param requestedUsername UserB's username. Allow UserA to request UserB's
userroles with
* UserA's credentials when requesting UserB as X-Apigee-Current-
User
* @param expectedNamespace
* @return
* @throws ExternalRoleMappingException
*/
@Override
public Collection<NameSpacedRole> getUserRoles(String username, String
password, String requestedUsername, NameSpace expectedNamespace) throws
ExternalRoleMappingException {
/************************************************************/
/******************** Authenticate UserA ********************/
/************************************************************/

// Customer Specific Implementation will override the


// ImplementDnameLookupLogic method implementation.

// obtain dnName for given username.


String dnName = ImplementDnNameLookupLogic(username);
// Obtain dnName for given requestedUsername.
String requestedDnName =
ImplementDnNameLookupLogic(requestedUsername);

if (dnName == null || requestedDnName == null) {


System.out.println("Error ");
}

DirContext dirContext = null;


try {

// Customer Specific Implementation will override the


// ImplementDirectoryContextCreationLogic method implementation

// Create a directory context with dnName or requestedDnName and


password
dirContext = ImplementDirectoryContextCreationLogic();

/************************************************/
/*** Map internal groups to apigee-edge roles ***/
/************************************************/
return apigeeEdgeRoleMapper(dirContext, requestedDnName,
expectedNamespace);

} catch (Exception ex) {


ex.printStackTrace();
System.out.println("Error in authenticating User: {}" + new Object[]
{ username });

} finally {
// Customer implementation to close
// ActiveDirectory/LDAP context.
}

return null;
}

/**
*
* This method would be implemented by the customer and would be invoked
* while using username and password for authentication and without the
* X-Apigee-Current-User header
*
* The customer can reuse implementations in
* getUserRoles(String username, String password, String requestedUsername,
NameSpace expectedNamespace)
* by
* return getUserRoles(username, password, username, expectedNamespace)
* in implementations.
*
* or the customer can provide new implementations as shown below.
*/

@Override
public Collection<NameSpacedRole> getUserRoles(String username, String
password, NameSpace expectedNamespace) throws
ExternalRoleMappingException {
/*************************************************************/
/****************** Authenticate Given User ******************/
/*************************************************************/

// Customer Specific Implementation will override the


// ImplementDnameLookupLogic implementation.

// Obtain dnName for given username or email address.


String dnName = ImplementDnNameLookupLogic(username);

if (dnName == null) {
System.out.println("Error ");
}

DirContext dirContext = null;


try {
// Create a directory context with username or dnName and password
dirContext = ImplementDirectoryContextCreationLogic();

/************************************************/
/*** Map internal groups to apigee-edge roles ***/
/************************************************/
return apigeeEdgeRoleMapper(dirContext, dnName, expectedNamespace);

} catch (Exception ex) {


ex.printStackTrace();
System.out.println("Error in authenticating User: {}" + new Object[]
{ username });

} finally {
// Customer implementation to close
// ActiveDirectory/LDAP context.
}

return null;
}

/**
*
* This method would be implemented by the customer and would be invoked
* while using security token or access token as authentication credentials.
*
*/
@Override
public Collection<NameSpacedRole> getUserRoles(String username, NameSpace
expectedNamespace) throws ExternalRoleMappingException {

/*************************************************************/
/****************** Authenticate Given User ******************/
/*************************************************************/

// Customer Specific Implementation will override the


// ImplementDnameLookupLogic implementation.

// Obtain dnName for given username or email address.


String dnName = ImplementDnNameLookupLogic(username);

if (dnName == null) {
System.out.println("Error ");
}

DirContext dirContext = null;


try {
// Create a directory context with username or dnName and password
dirContext = ImplementDirectoryContextCreationLogic();

/************************************************/
/*** Map internal groups to apigee-edge roles ***/
/************************************************/
return apigeeEdgeRoleMapper(dirContext, dnName, expectedNamespace);

} catch (Exception ex) {


ex.printStackTrace();
System.out.println("Error in authenticating User: {}" + new Object[]
{ username });

} finally {
// Customer implementation to close
// ActiveDirectory/LDAP context.
}

return null;
}

/**
* This method should be replaced with Customer Specific Implementations
*
* Provided as a sample Implementation of mapping user groups to apigee-edge
roles
*/
private Collection<NameSpacedRole> apigeeEdgeRoleMapper(DirContext
dirContext, String dnName, NameSpace expectedNamespace) throws Exception {

Collection<NameSpacedRole> results = new HashSet<NameSpacedRole>();

/****************************************************/
/************ Fetch internal groups *****************/
/****************************************************/

String groupDN = "OU=Groups,DC=corp,DC=wacapps,DC=net";


String userFilter = "(user=userDnName)";
SearchControls controls = new SearchControls();
controls.setSearchScope(SearchControls.ONELEVEL_SCOPE);

//Looking for all groups the user belongs to in customer's LDAP


NamingEnumeration<SearchResult> groups =
dirContext.search(groupDN,userFilter.replace("userDnName", dnName), new Object[]
{ "", "" }, controls);

if (groups.hasMoreElements()) {
while (groups.hasMoreElements()) {
SearchResult searchResult = groups.nextElement();
Attributes attributes = searchResult.getAttributes();
String groupName = attributes.get("name").get().toString();

/************************************************/
/*** Map internal groups to apigee-edge roles ***/
/************************************************/

if (groupName.equals("BusDev")) {
results.add(new
NameSpacedRole("businessAdmin",SystemNamespace.get()));

} else if (groupName.equals("Engineering")) {
if (expectedNamespace instanceof OrganizationNamespace) {
String orgName = ((OrganizationNamespace)
expectedNamespace).getOrganization();
results.add(new NameSpacedRole("orgadmin", new
OrganizationNamespace(orgName)));
// In OPDK 4.50 and later, do
// results.add(new NameSpacedRole("orgadmin",
OrganizationNamespace.of(orgName)));
}

} else if (groupName.equals("Marketing")) {
results.add(new
NameSpacedRole("marketAdmin",SystemNamespace.get()));

} else {
results.add(new NameSpacedRole("readOnly",SystemNamespace.get()));
}
}

} else {
// In case of no group found or exception found we throw empty roles.
System.out.println(" !!!!! NO GROUPS FOUND !!!!!");
}
return results;
}

/**
* The customer need to replace with own implementations for getting dnName for
given user
*/
private String ImplementDnNameLookupLogic(String username) {
// Connect to the customer's own LDAP to fetch user dnName
return customerLDAP.getDnName(username);
}

/**
* The customer need to replace with own implementations for creating DirContext
*/
private DirContext ImplementDirectoryContextCreationLogic() {
// Connect to the customer's own LDAP to create DirContext for given user
return customerLDAP.createLdapContextUsingCredentials();
}
}

Edge authentication and authorization flows
This document explains how authentication and authorization work on Apigee Edge.
This information may provide useful context when you configure an external LDAP
with Apigee Edge.

The authentication and authorization flows depend on whether a user authenticates
through the management UI or through the APIs.

When logging in through the UI


When you log in to Edge through the UI, Edge performs a separate login step to the
Apigee Management Server using the Edge global system administrator credentials.

The following UI login steps are illustrated in Figure 1:

1. The user enters login credentials in the login UI.


2. Edge logs in to the Management Server using the global system admin
credentials.
3. The global system admin credentials are authenticated and authorized. The UI
uses these credentials to make certain platform API requests.
a. If external authentication is enabled, the credentials are authenticated
against the external LDAP, otherwise, the internal Edge LDAP is used.
b. Authorization is always performed against the internal LDAP unless
you enable External Role Mapping.
4. The credentials entered by the user are authenticated and authorized.
a. If external authentication is enabled, the credentials are authenticated
against the external LDAP, otherwise, the internal Edge LDAP is used.
b. Authorization is always performed against the internal LDAP unless
you enable External Role Mapping.

The following image shows authorization and authentication through the Edge UI:

Figure 1: Authorization and authentication through the Edge UI

When logging in through APIs


When you log in to Edge through the an API, only the credentials entered with the API
are used. Unlike with UI login, a separate login with system admin credentials is not
needed.

The following API login steps are illustrated in Figure 2:

1. The user passes login credentials with the API request.


2. The credentials entered by the user are authenticated and authorized.
3. If external authentication is enabled, the credentials are authenticated against
the external LDAP, otherwise, the internal Edge LDAP is used.
4. Authorization is always performed against the internal LDAP unless you
enable External Role Mapping.
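Because API login uses only the credentials sent with the request, a client authenticates by sending a standard HTTP Basic Authorization header built from the username and password. The following Python sketch shows how that header value is constructed (the encoding is standard HTTP Basic authentication; the request itself would target the Edge management API):

```python
import base64

def basic_auth_header(username, password):
    """Build the HTTP Basic Authorization header for the given credentials."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# This is equivalent to what `curl -u username:password` sends on the wire.
print(basic_auth_header("admin@example.com", "secret"))
```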

The following image shows authorization and authentication through the Edge APIs:
Figure 2: Authorization and authentication through the Edge APIs

External authentication configuration property reference

The following table provides a comparison of the management-server.properties
attributes required for direct and indirect binding for external authentication.

In the following table, values are shown between quotation marks (" "). When editing
the management-server.properties file, include the value between the quotes but do
not include the actual quotes.

conf_security_externalized.authentication.implementation.class=com.apigee.rbac.impl.LdapAuthenticatorImpl
● DIRECT and INDIRECT bind: This property is always required to enable the external authorization feature. Do not change it.

conf_security_externalized.authentication.bind.direct.type=
● DIRECT bind: Set to "true".
● INDIRECT bind: Set to "false".

conf_security_externalized.authentication.direct.bind.user.directDN=
● DIRECT bind: If the username is an email address, set to "${userDN}". If the username is an ID, set to "CN=${userDN},CN=Users,DC=apigee,DC=com", replacing CN=Users,DC=apigee,DC=com with appropriate values for your external LDAP.
● INDIRECT bind: Not required; comment out.

conf_security_externalized.authentication.indirect.bind.server.admin.dn=
● DIRECT bind: Not required; comment out.
● INDIRECT bind: Set to the username/email address of a user with search privileges on the external LDAP.

conf_security_externalized.authentication.indirect.bind.server.admin.password=
● DIRECT bind: Not required; comment out.
● INDIRECT bind: Set to the password for the above user.

conf_security_externalized.authentication.indirect.bind.server.admin.password.encrypted=
● DIRECT bind: Not required; comment out.
● INDIRECT bind: Set to "false" if using a plain-text password (NOT RECOMMENDED). Set to "true" if using an encrypted password (RECOMMENDED), as described in (Indirect binding only) Encrypting the external LDAP user's password.

conf_security_externalized.authentication.server.url=
● DIRECT and INDIRECT bind: Set to "ldap://localhost:389", replacing "localhost" with the IP or domain of your external LDAP instance.

conf_security_externalized.authentication.server.version=
● DIRECT and INDIRECT bind: Set to your external LDAP server version, for example "3".

conf_security_externalized.authentication.server.conn.timeout=
● DIRECT and INDIRECT bind: Set to a timeout (a number in milliseconds) that is appropriate for your external LDAP.

conf_security_externalized.authentication.user.store.baseDN=
● DIRECT and INDIRECT bind: Set to the baseDN value to match your external LDAP service. This value will be provided by your external LDAP administrator. For example, in Apigee we might use "DC=apigee,DC=com".

conf_security_externalized.authentication.user.store.search.query=(&(${userAttribute}=${userId}))
● DIRECT and INDIRECT bind: Do not change this search string. It is used internally.

conf_security_externalized.authentication.user.store.user.attribute=
● DIRECT and INDIRECT bind: This identifies the external LDAP property you want to bind against. Set to whichever property contains the username in the format that your users use to log in to Apigee Edge. For example:
If users will log in with an email address and that credential is stored in userPrincipalName, set this to "userPrincipalName".
If users will log in with an ID and that is stored in sAMAccountName, set this to "sAMAccountName".

conf_security_externalized.authentication.user.store.user.email.attribute=
● DIRECT and INDIRECT bind: This is the LDAP attribute where the user email value is stored. This is typically "userPrincipalName", but set this to whichever property in your external LDAP contains the user's email address that is provisioned into Apigee's internal authorization LDAP.
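As a worked example combining these attributes, a minimal direct-bind configuration for email-address usernames might look like the following in management-server.properties. The host, baseDN, and timeout are placeholders that you must adapt, and conf_security_authentication.user.store must also be set as described in Enabling external authentication:

```
conf_security_externalized.authentication.implementation.class=com.apigee.rbac.impl.LdapAuthenticatorImpl
conf_security_externalized.authentication.bind.direct.type=true
conf_security_externalized.authentication.direct.bind.user.directDN=${userDN}
conf_security_externalized.authentication.server.url=ldap://ldap.example.com:389
conf_security_externalized.authentication.server.version=3
conf_security_externalized.authentication.server.conn.timeout=50000
conf_security_externalized.authentication.user.store.baseDN=DC=example,DC=com
conf_security_externalized.authentication.user.store.search.query=(&(${userAttribute}=${userId}))
conf_security_externalized.authentication.user.store.user.attribute=userPrincipalName
conf_security_externalized.authentication.user.store.user.email.attribute=userPrincipalName
```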

Migrate to the Edge UI


This section provides guidance for migrating from the Classic UI to the Edge UI with
an IDP such as LDAP or SAML.

For more information, see:

● IDP authentication with the Edge UI


● IDP authentication with the Classic UI

Who can perform the migration


To migrate to the Edge UI, you must be logged in as the user who originally installed
Edge or as a root user. After you run the installer for the Edge UI, any user can
configure it.

Before you begin


Before migrating from the Classic UI to the Edge UI, read the following general
guidelines:

● Backup your existing Classic UI nodes


Before you update, Apigee recommends that you back up your existing
Classic UI server.
● Ports/firewalls
By default Classic UI uses port 9000. The Edge UI uses port 3001.
● New VM
The Edge UI can’t be installed on the same VM as Classic UI.
To install the Edge UI, you must add a new machine to your configuration. If
you want to use the same machine as Classic UI, then you must uninstall
Classic UI completely.
● Identity Provider (LDAP or SAML)
The Edge UI authenticates users with either a SAML or LDAP IDP:
● LDAP: For LDAP, you can either use an external LDAP IDP or you can
use the internal OpenLDAP implementation that is installed with Edge.
● SAML: The SAML IDP must be an external IDP.
● For more information, see Install and configure IDPs.
● Same IDP
This section assumes that you will use the same IDP after migration. For
example, if you currently use an external LDAP IDP with the Classic UI, then
you will continue to use an external LDAP IDP with the Edge UI.

Migrate with an internal LDAP IDP


Use the following guidelines when migrating from the Classic UI to the Edge UI in a
configuration that uses the internal LDAP implementation (OpenLDAP) as an IDP:

● Indirect binding configuration
Install the Edge UI using these instructions, with the following change to your
silent configuration file: configure LDAP to use search and bind (indirect), as the
following example shows:
SSO_LDAP_PROFILE=indirect
SSO_LDAP_BASE_URL=ldap://localhost:10389
SSO_LDAP_ADMIN_USER_DN=uid=admin,ou=users,ou=global,dc=apigee,dc=com
SSO_LDAP_ADMIN_PWD=Secret123
SSO_LDAP_SEARCH_BASE=dc=apigee,dc=com
SSO_LDAP_SEARCH_FILTER=mail={0}
SSO_LDAP_MAIL_ATTRIBUTE=mail
● Basic authentication for the management API
The basic authentication for APIs continues to work by default for all LDAP
users when Apigee SSO is enabled. You can optionally disable Basic
authentication, as described in Disable Basic authentication on Edge.
● OAuth2 authentication for the management API
Token-based authentication is enabled when you enable SSO.
● New user/password flow
You must create new users with the management API because password flows
will no longer work in the Edge UI.

Migrate with an external LDAP IDP


Use the following guidelines when migrating from the Classic UI to the Edge UI in a
configuration that uses an external LDAP implementation as an IDP:

● LDAP configuration
Install the Edge UI using these instructions. You can configure either direct or
indirect binding in your silent configuration file.
● Management Server configuration
After you enable Apigee SSO, you should remove all external LDAP properties
that are defined in the
/opt/apigee/customer/application/management-
server.properties file and restart the Management Server.
● Basic authentication for the management API
Basic authentication works for machine users but not LDAP users. This is
critical if your CI/CD process still uses Basic authentication to access the
system.
● OAuth2 authentication for the management API
LDAP users can access the management API with tokens only.

Migrate with an external SAML IDP


When migrating to the Edge UI, there are no changes to the installation instructions
for a SAML IDP.

Configuring TLS/SSL
TLS (Transport Layer Security, whose predecessor is SSL) is the standard security
technology for ensuring secure, encrypted messaging across your API
environment, from apps to Apigee Edge to your back-end services.

NOTE: Because Edge originally supported SSL, you will see some instances in the Edge UI, Edge
XML, and Edge properties that use the term "SSL". For example, the menu entry in the Edge UI
that you use to view certs is called SSL Certificates, the XML tag that you use to configure a
virtual host to use TLS is named <SSLInfo>, and the property to set the SSL port for the
management API is conf_webserver_ssl.port.

Regardless of your management API's environment configuration (for example,
whether you place a proxy, a router, or a load balancer in front of it), Edge lets you
enable and configure TLS, giving you control over message encryption in your
on-premises API management environment.

For an on-premises installation of Edge Private Cloud, there are several places where
you can configure TLS:

1. Between a Router and Message Processor


2. For access to the Edge management API
3. For access to the Edge management UI
4. For access to the new Edge UI
5. For access from an app to your APIs
6. For access from Edge to your backend services

For a complete overview of configuring TLS on Edge, see TLS/SSL.

Creating a JKS file


For many TLS configurations, you represent the keystore as a JKS file, where the
keystore contains your TLS certificate and private key. There are several ways to
create a JKS file, but one way is to use the openssl and keytool utilities.

Note: If you have a certificate chain, all certs in the chain must be appended in order into a single
PEM file, where the last certificate is signed by a CA.
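The ordering rule can be illustrated with plain cat. In this sketch the certificate file names and their contents are placeholders (your real leaf, intermediate, and root certs come from your CA):

```shell
# Assemble a chain PEM: leaf certificate first, intermediates next, root last.
# The certificate bodies below are placeholders, not real certificates.
printf -- '-----BEGIN CERTIFICATE-----\n(leaf)\n-----END CERTIFICATE-----\n' > server.crt
printf -- '-----BEGIN CERTIFICATE-----\n(intermediate)\n-----END CERTIFICATE-----\n' > intermediate.crt
printf -- '-----BEGIN CERTIFICATE-----\n(root)\n-----END CERTIFICATE-----\n' > root.crt
cat server.crt intermediate.crt root.crt > server.pem
grep -c 'BEGIN CERTIFICATE' server.pem   # prints 3
```

The resulting server.pem is what you pass to the openssl pkcs12 command below.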

For example, suppose you have a PEM file named server.pem containing your TLS
certificate and a PEM file named private_key.pem containing your private key. Use
the following command to create the PKCS12 file:

openssl pkcs12 -export -clcerts -in server.pem -inkey private_key.pem -out keystore.pkcs12

You have to enter the passphrase for the key, if it has one, and an export password.
This command creates a PKCS12 file named keystore.pkcs12.

Use the following command to convert it to a JKS file named keystore.jks:

keytool -importkeystore -srckeystore keystore.pkcs12 -srcstoretype pkcs12 -destkeystore keystore.jks -deststoretype jks

You are prompted to enter the new password for the JKS file, and the existing
password for the PKCS12 file. Make sure you use the same password for the JKS
file as you used for the PKCS12 file.

If you have to specify a key alias, such as when configuring TLS between a Router
and Message Processor, include the -name option to the openssl command:

openssl pkcs12 -export -clcerts -in server.pem -inkey private_key.pem -out keystore.pkcs12 -name devtest

Then include the -alias option to the keytool command:

keytool -importkeystore -srckeystore keystore.pkcs12 -srcstoretype pkcs12 -destkeystore keystore.jks -deststoretype jks -alias devtest
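The openssl half of that pipeline can be rehearsed end to end with a throwaway self-signed certificate. Everything below (file names, the devtest alias, the Secret123 password) is an example value, not something Edge provides:

```shell
# Generate a throwaway self-signed cert and key with no passphrase.
openssl req -x509 -newkey rsa:2048 -keyout private_key.pem -out server.pem \
  -days 365 -nodes -subj '/CN=test.example.com' 2>/dev/null
# Bundle cert and key into a PKCS12 keystore under the alias "devtest".
openssl pkcs12 -export -clcerts -in server.pem -inkey private_key.pem \
  -out keystore.pkcs12 -name devtest -passout pass:Secret123
# Inspect the bundle without exporting any keys.
openssl pkcs12 -info -in keystore.pkcs12 -passin pass:Secret123 -nokeys -noout
```

Converting keystore.pkcs12 to keystore.jks then proceeds with the keytool command shown above.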

Generating an obfuscated password


Some parts of the Edge TLS configuration procedure require you to enter an
obfuscated password in a configuration file. An obfuscated password is a more
secure alternative to entering your password in plain text.

You can generate an obfuscated password by using the following command on the
Edge Management Server:

/opt/apigee/apigee-service/bin/apigee-service edge-management-server generate-obfuscated-password

Enter the new password, and then confirm it at the prompt. For security reasons, the
text of the password is not displayed. This command returns the password in the
form:

OBF:58fh40h61svy156789gk1saj
MD5:902fobg9d80e6043b394cb2314e9c6

Use the obfuscated password specified by OBF when configuring TLS.

For more information, see this article

Configuring TLS between a Router and a Message Processor
By default, TLS between the Router and Message Processor is disabled.

NOTE: Port 8082 on the Message Processor has to be open for access by the Router to perform
health checks when you configure TLS/SSL between the Router and Message Processor. If you
do not configure TLS/SSL between the Router and Message Processor (the default
configuration), port 8082 still must be open on the Message Processor to manage the
component, but the Router does not require access to it.

To enable TLS encryption between a Router and a Message Processor:

1. Ensure that port 8082 on the Message Processor is accessible by the Router.
2. Generate the keystore JKS file containing your TLS certificate and private
key. For more, see Configuring TLS/SSL for Edge On Premises.
3. Copy the keystore JKS file to a directory on the Message Processor server,
such as /opt/apigee/customer/application.
4. Change ownership and permissions of the JKS file, where keystore.jks is the
name of your keystore file:
chown apigee:apigee /opt/apigee/customer/application/keystore.jks
chmod 600 /opt/apigee/customer/application/keystore.jks
5. Edit the /opt/apigee/customer/application/message-processor.properties
file. If the file does not exist, create it.
6. Set the following properties in the message-processor.properties file,
where keystore.jks is your keystore file and obsPword is your obfuscated
keystore and key alias password:
conf_message-processor-communication_local.http.ssl=true
conf/message-processor-communication.properties+local.http.port=8443
conf/message-processor-communication.properties+local.http.ssl.keystore.type=jks
conf/message-processor-communication.properties+local.http.ssl.keystore.path=/opt/apigee/customer/application/keystore.jks
conf/message-processor-communication.properties+local.http.ssl.keyalias=apigee-devtest
# Enter the obfuscated keystore password below.
conf/message-processor-communication.properties+local.http.ssl.keystore.password=OBF:obsPword
Note: The value of local.http.ssl.keyalias (apigee-devtest in this example) must
be the same as the value provided by the -alias option to the keytool
command, described at the end of the section Creating a JKS file. See
Configuring TLS/SSL for Edge On Premises for information on generating an
obfuscated password.
7. Ensure that the message-processor.properties file is owned by the
'apigee' user:
chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
8. Stop the Message Processors and Routers:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor stop
/opt/apigee/apigee-service/bin/apigee-service edge-router stop
9. On the Router, delete any files in the /opt/nginx/conf.d directory:
rm -f /opt/nginx/conf.d/*
10. Start the Message Processors and Routers:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor start
/opt/apigee/apigee-service/bin/apigee-service edge-router start
11. Repeat this process for each Message Processor.

After TLS is enabled between the Router and Message Processor, the Message
Processor log file contains this INFO message:

MessageProcessorHttpSkeletonFactory.configureSSL() : Instantiating Keystore of type: jks

This INFO statement confirms that TLS is working between the Router and Message
Processor.
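You can also probe the TLS listener with openssl s_client. The self-contained sketch below stands up a throwaway openssl s_server on localhost:18443 in place of a real Message Processor (the port, file names, and certificate subject are arbitrary example values), then probes it:

```shell
# Create a throwaway self-signed cert and key for the demo listener.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -days 1 -nodes -subj '/CN=localhost' 2>/dev/null
# Start a stand-in TLS listener on an arbitrary free port.
openssl s_server -accept 18443 -key /tmp/demo-key.pem -cert /tmp/demo-cert.pem -quiet &
SERVER_PID=$!
sleep 1
# A successful handshake prints the server certificate subject, among other details.
PROBE=$(echo | openssl s_client -connect localhost:18443 2>/dev/null | grep 'subject')
kill $SERVER_PID
echo "$PROBE"
```

Against a real installation, you would instead run `echo | openssl s_client -connect mp_IP:8443` from the Router host and check that the handshake completes.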

The following list describes all of the available properties in
message-processor.properties:

conf_message-processor-communication_local.http.host=localhost_or_IP_address
    Optional. Hostname to listen on for router connections. This overrides the
    host name configured at registration.

conf/message-processor-communication.properties+local.http.port=8998
    Optional. Port to listen on for router connections. Default is 8998.

conf_message-processor-communication_local.http.ssl=[ false | true ]
    Set this to true to enable TLS/SSL. Default is false. When TLS/SSL is
    enabled, you must set local.http.ssl.keystore.path and
    local.http.ssl.keyalias.

conf/message-processor-communication.properties+local.http.ssl.keystore.path=
    Local file system path to the keystore (JKS or PKCS12). Mandatory when
    local.http.ssl=true.

conf/message-processor-communication.properties+local.http.ssl.keyalias=
    Key alias from the keystore to be used for TLS/SSL connections. Mandatory
    when local.http.ssl=true.

conf/message-processor-communication.properties+local.http.ssl.keyalias.password=
    Password used for encrypting the key inside the keystore. Use an obfuscated
    password in the following format: OBF:obsPword

conf/message-processor-communication.properties+local.http.ssl.keystore.type=jks
    Keystore type. Only JKS and PKCS12 are currently supported. Default is JKS.

conf/message-processor-communication.properties+local.http.ssl.keystore.password=
    Optional. Obfuscated password for the keystore. Use an obfuscated password
    in the following format: OBF:obsPword

conf_message-processor-communication_local.http.ssl.ciphers=cipher1,cipher2
    Optional. When configured, only the ciphers listed are allowed. If omitted,
    all supported ciphers are used.

Configuring TLS for the management API

By default, TLS is disabled for the management API and you access the Edge
management API over HTTP by using the IP address of the Management Server
node and port 8080. For example:

http://ms_IP:8080

Alternatively, you can configure TLS access to the management API so that you can
access it in the form:

https://ms_IP:8443

In this example, you configure TLS access to use port 8443. However, that port
number is not required by Edge - you can configure the Management Server to use
other port values. The only requirement is that your firewall allows traffic over the
specified port.

To ensure traffic encryption to and from your management API, configure the
settings in the /opt/apigee/customer/application/management-server.properties file.

In addition to TLS configuration, you can also control password validation (password
length and strength) by modifying the management-server.properties file.

NOTE: After modifying these files, restart your management API service.

Ensure that your TLS port is open


The procedure in this section configures TLS to use port 8443 on the Management
Server. Regardless of the port that you use, you must ensure that the port is open on
the Management Server. For example, you can use the following command to open
it:
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT --verbose
NOTE: This example uses port 8443 for the TLS port, and not the more common port 443.

The reason is that ports below 1024 are typically protected by the operating system and require
the process that accesses them to have root access. The Edge Management Server runs as the
"apigee" user and therefore typically does not have access to ports below 1024.

One alternative is to use a load balancer with the Edge API and terminate TLS on the load
balancer on port 443. You can then use either HTTP or HTTPS between the load balancer and the
Edge API.

Another alternative is to use iptables to forward requests to port 443 to port 8443. For
example:

iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443

Configure TLS
Edit the /opt/apigee/customer/application/management-server.properties file to
control TLS use on traffic to and from your management API. If this file does not
exist, create it.

To configure TLS access to the management API:

1. Generate the keystore JKS file containing your TLS certificate and private
key. For more, see Configuring TLS/SSL for Edge On Premises.
2. Copy the keystore JKS file to a directory on the Management Server node,
such as /opt/apigee/customer/application.
3. Change ownership of the JKS file to the "apigee" user, where keystore.jks is
the name of your keystore file:
chown apigee:apigee keystore.jks
4. Edit /opt/apigee/customer/application/management-server.properties to set
the following properties. If that file does not exist, create it:
conf_webserver_ssl.enabled=true
# Leave conf_webserver_http.turn.off set to false
# because many Edge internal calls use HTTP.
conf_webserver_http.turn.off=false
conf_webserver_ssl.port=8443
conf_webserver_keystore.path=/opt/apigee/customer/application/keystore.jks
# Enter the obfuscated keystore password below.
conf_webserver_keystore.password=OBF:obfuscatedPassword
Where keystore.jks is your keystore file, and obfuscatedPassword is your
obfuscated keystore password. See Configuring TLS/SSL for Edge On
Premises for information on generating an obfuscated password.
5. Restart the Edge Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart

The management API now supports access over TLS.

Note: After enabling TLS, ensure that the value of APIGEE_PORT_HTTP_MS is correct in
$APIGEE_ROOT/customer/defaults.sh. If it isn't correct, modify defaults.sh.

Configure the Edge UI to use TLS to access the Edge API


In the procedure above, Apigee recommended leaving
conf_webserver_http.turn.off=false so that the Edge UI can continue to
make Edge API calls over HTTP.

WARNING: Apigee recommends that you disable HTTP access in production environments.

Use the following procedure to configure the Edge UI to make these calls over
HTTPS only:

1. Configure TLS access to the management API as described above.
2. After confirming that TLS is working for the management API, edit
/opt/apigee/customer/application/management-server.properties to set the
following property:
conf_webserver_http.turn.off=true
3. Restart the Edge Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
4. Edit /opt/apigee/customer/application/ui.properties to set the following
property for the Edge UI. If ui.properties does not exist, create it:
conf_apigee_apigee.mgmt.baseurl="https://FQ_domain_name:port/v1"
Where FQ_domain_name is the fully qualified domain name of the
Management Server, as specified by your certificate, and port is the port
specified above by conf_webserver_ssl.port.
5. Only if you used a self-signed cert (not recommended in a production
environment) when configuring TLS access to the management API above,
add the following property to ui.properties:
conf/application.conf+play.ws.ssl.loose.acceptAnyCertificate=true
Otherwise, the Edge UI will reject a self-signed certificate.
6. Restart the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart

TLS properties for the Management Server


The following list describes all of the TLS/SSL properties that you can set in
management-server.properties:

conf_webserver_http.port=8080
    Default is 8080.

conf_webserver_ssl.enabled=false
    Enables or disables TLS/SSL. With TLS/SSL enabled (true), you must also
    set the ssl.port and keystore.path properties.

conf_webserver_http.turn.off=true
    Enables or disables HTTP alongside HTTPS. If you want to use only HTTPS,
    leave the default value of true.

conf_webserver_ssl.port=8443
    The TLS/SSL port. Required when TLS/SSL is enabled
    (conf_webserver_ssl.enabled=true).

conf_webserver_keystore.path=path
    The path to your keystore file. Required when TLS/SSL is enabled
    (conf_webserver_ssl.enabled=true).

conf_webserver_keystore.password=password
    Use an obfuscated password in this format: OBF:xxxxxxxxxx

conf_webserver_cert.alias=alias
    Optional keystore certificate alias.

conf_webserver_keymanager.password=password
    If your key manager has a password, enter an obfuscated version of the
    password in this format: OBF:xxxxxxxxxx

conf_webserver_trust.all=[false | true]
conf_webserver_trust.store.path=path
conf_webserver_trust.store.password=password
    Configure settings for your trust store. Determine whether you want to
    accept all TLS/SSL certificates (for example, to accept non-standard
    types). The default is false. Provide the path to your trust store, and
    enter an obfuscated trust store password in this format: OBF:xxxxxxxxxx

conf_http_HTTPTransport.ssl.cipher.suites.blacklist=CIPHER_SUITE_1,CIPHER_SUITE_2
conf_http_HTTPTransport.ssl.cipher.suites.whitelist=
    Indicate any cipher suites you want to include or exclude. For example, if
    you discover a vulnerability in a cipher, you can exclude it here. Separate
    multiple ciphers with commas. Any ciphers you remove via the blacklist
    take precedence over any ciphers included via the whitelist.
    Note: By default, if no blacklist or whitelist is specified, ciphers
    matching the following Java regular expressions are excluded:
    ^.*_(MD5|SHA|SHA1)$
    ^TLS_RSA_.*$
    ^SSL_.*$
    ^.*_NULL_.*$
    ^.*_anon_.*$
    However, if you do specify a blacklist, this filter is overridden, and you
    must blacklist all ciphers individually.
    For information on cipher suites and cryptography architecture, see Java
    Cryptography Architecture Oracle Providers Documentation for JDK 8.

conf_webserver_ssl.session.cache.size=
conf_webserver_ssl.session.timeout=

Configuring TLS for the management UI


By default, you access the Edge UI over HTTP by using the IP address of the
Management Server node and port 9000. For example:

http://ms_IP:9000

Alternatively, you can configure TLS access to the Edge UI so that you can access it
in the form:

https://ms_IP:9443

In this example, you configure TLS access to use port 9443. However, that port
number is not required by Edge - you can configure the Management Server to use
other port values. The only requirement is that your firewall allows traffic over the
specified port.

Note: Whenever you upgrade an Edge-UI component while installing a patch or major version for
Edge, the TLS settings will be reverted. You will need to reapply the steps described in the
following sections every time you update an Edge-UI component.

Ensure that your TLS port is open


The procedure in this section configures TLS to use port 9443 on the Management
Server. Regardless of the port that you use, you must ensure that the port is open on
the Management Server. For example, you can use the following command to open
it:

iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 9443 -j ACCEPT --verbose
NOTE: This example uses port 9443 for the TLS port, and not the more common port 443. The
reason is that ports below 1024 are typically protected by the operating system and require the
process that accesses them to have root access. The Edge UI runs as the "apigee" user and
therefore typically does not have access to ports below 1024.

One alternative is to use a load balancer with the Edge UI and terminate TLS on the load balancer
on port 443. You can then use either HTTP or HTTPS between the load balancer and the Edge UI.

Another alternative is to use iptables to forward requests to port 443 to port 9443. For
example:

iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 9443

Configure TLS
NOTE: If you have multiple UI instances installed on separate nodes, you must configure both to
support TLS. After enabling TLS on the first node, see Supporting multiple Edge UI instances for
information about configuring TLS on the second node.

Use the following procedure to configure TLS access to the management UI:

1. Generate the keystore JKS file containing your TLS certificate and private
key and copy it to the Management Server node. For more information, see
Configuring TLS/SSL for Edge On Premises.
2. Run the following command to configure TLS:
/opt/apigee/apigee-service/bin/apigee-service edge-ui configure-ssl
3. Enter the HTTPS port number, for example, 9443.
4. Specify if you want to disable HTTP access to the management UI. By default,
the management UI is accessible over HTTP on port 9000.
5. Enter the keystore algorithm. The default is JKS.
6. Enter the absolute path to the keystore JKS file.
The script copies the file to the /opt/apigee/customer/conf directory on
the Management Server node, and changes the ownership of the file to
"apigee".
7. Enter the clear text keystore password.
8. The script then restarts the Edge management UI. After the restart, the
management UI supports access over TLS.
You can see these settings in /opt/apigee/etc/edge-ui.d/SSL.sh.

Using a config file to configure TLS


As an alternative to the above procedure, you can pass a config file to the command
in step 2 of the procedure. You'll need to use this method if you want to set optional
TLS properties.

To use a config file, create a new file and add the following properties:

HTTPSPORT=9443
DISABLE_HTTP=y
KEY_ALGO=JKS
KEY_FILE_PATH=/opt/apigee/customer/application/mykeystore.jks
KEY_PASS=clearTextKeystorePWord

Save the file in a local directory with any name you want. Then use the following
command to configure TLS:

/opt/apigee/apigee-service/bin/apigee-service edge-ui configure-ssl -f configFile

where configFile is the full path to the file you saved.

Configure the Edge UI when TLS terminates on the load balancer

If you have a load balancer that forwards requests to the Edge UI, you might choose
to terminate the TLS connection on the load balancer, and then have the load
balancer forward requests to the Edge UI over HTTP. This configuration is supported,
but you need to configure the load balancer and the Edge UI accordingly.

The additional configuration is required because the Edge UI sends users emails to
set their password when the user is created, or when the user requests to reset a
lost password. This email contains a URL that the user selects to set or reset a
password. By default, if the Edge UI is not configured to use TLS, the URL in the
generated email uses the HTTP protocol, not HTTPS. You must configure the
load balancer and Edge UI to generate an email URL that uses HTTPS.

To configure the load balancer, ensure that it sets the following header on requests
forwarded to the Edge UI:

X-Forwarded-Proto: https
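For example, if your load balancer happens to be nginx (Edge does not ship this configuration; the snippet is purely an illustrative sketch, and the host names, ports, and paths are placeholders), the header can be set in the proxy block:

```nginx
# Hypothetical nginx front end terminating TLS for the Edge UI.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/tls/ui-cert.pem;
    ssl_certificate_key /etc/nginx/tls/ui-key.pem;

    location / {
        proxy_pass http://edge_ui_host:9000;
        # Tell the Edge UI the original request arrived over TLS.
        proxy_set_header X-Forwarded-Proto https;
    }
}
```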

To configure the Edge UI:

1. Open the /opt/apigee/customer/application/ui.properties file in an editor.
If the file does not exist, create it:
vi /opt/apigee/customer/application/ui.properties
2. Set the following property in ui.properties:
conf/application.conf+trustxforwarded=true
3. Save your changes to ui.properties.
4. Restart the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Setting optional TLS properties


The Edge UI supports optional TLS configuration properties that you can use to set
the following:

● Default TLS protocol


● List of supported TLS protocols
● Supported TLS algorithms
● Supported TLS ciphers

These optional parameters are only available when you set the following
configuration property in the config file, as described in Using a config file to
configure TLS:

TLS_CONFIGURE=y
NOTE: You can only set these optional parameters in a config file passed to the
apigee-service edge-ui configure-ssl command. You cannot set these properties
using the interactive command.

The following list describes these properties:

TLS_PROTOCOL
    Defines the default TLS protocol for the Edge UI. By default, it is TLS
    1.2. Valid values are TLSv1.2, TLSv1.1, TLSv1.

TLS_ENABLED_PROTOCOL
    Defines the list of enabled protocols as a comma-separated array. For
    example:
    TLS_ENABLED_PROTOCOL=[\"TLSv1.2\", \"TLSv1.1\", \"TLSv1\"]
    Notice that you have to escape the " character. By default all protocols
    are enabled.

TLS_DISABLED_ALGO
    Defines the disabled cipher suites and can also be used to prevent small
    key sizes from being used for TLS handshaking. There is no default value.
    The values passed to TLS_DISABLED_ALGO correspond to the allowed values
    for jdk.tls.disabledAlgorithms as described here. However, you must escape
    space characters when setting TLS_DISABLED_ALGO:
    TLS_DISABLED_ALGO=EC\ keySize\ <\ 160,RSA\ keySize\ <\ 2048

TLS_ENABLED_CIPHERS
    Defines the list of available TLS ciphers as a comma-separated array. For
    example:
    TLS_ENABLED_CIPHERS=[\"TLS_DHE_RSA_WITH_AES_128_CBC_SHA\",
    \"TLS_DHE_DSS_WITH_AES_128_CBC_SHA\"]
    Notice that you have to escape the " character. The default list of
    enabled ciphers is:
    "TLS_DHE_RSA_WITH_AES_256_CBC_SHA",
    "TLS_DHE_RSA_WITH_AES_128_CBC_SHA",
    "TLS_DHE_DSS_WITH_AES_128_CBC_SHA",
    "TLS_RSA_WITH_AES_256_CBC_SHA",
    "TLS_RSA_WITH_AES_128_CBC_SHA",
    "SSL_RSA_WITH_RC4_128_SHA",
    "SSL_RSA_WITH_RC4_128_MD5",
    "TLS_EMPTY_RENEGOTIATION_INFO_SCSV"
    Find the list of available ciphers here.
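Putting the pieces together, a config file that enables these optional properties might look like the following sketch; every value (port, path, password, protocol list) is an example to adapt, not a default shipped with Edge:

```
# Example config file for "edge-ui configure-ssl -f configFile".
HTTPSPORT=9443
DISABLE_HTTP=y
KEY_ALGO=JKS
KEY_FILE_PATH=/opt/apigee/customer/application/mykeystore.jks
KEY_PASS=clearTextKeystorePWord

# Enable the optional TLS properties.
TLS_CONFIGURE=y
TLS_PROTOCOL=TLSv1.2
TLS_ENABLED_PROTOCOL=[\"TLSv1.2\"]
TLS_DISABLED_ALGO=EC\ keySize\ <\ 160,RSA\ keySize\ <\ 2048
```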

Disabling TLS protocols


To disable TLS protocols, you need to edit the config file, described in Using a config
file to configure TLS, as follows:

1. Open the config file in an editor.
2. To disable a single TLS protocol (for example, TLSv1.0), add the following to
the config file:
TLS_CONFIGURE=y
TLS_DISABLED_ALGO="tlsv1"
3. To disable multiple protocols (for example, TLSv1.0 and TLSv1.1), add the
following to the config file:
TLS_CONFIGURE=y
TLS_DISABLED_ALGO="tlsv1, tlsv1.1"
4. Save your changes to the config file.
5. Run the following command to configure TLS, where configFile is the full
path to the config file:
/opt/apigee/apigee-service/bin/apigee-service edge-ui configure-ssl -f configFile
6. Restart the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart

Use secure cookies


Apigee Edge for Private Cloud supports adding the secure flag to the Set-Cookie
header for responses from the Edge UI. If this flag is present, then the cookie can
only be sent over TLS-enabled channels. If it is not present, then the cookie can be
sent over any channel, whether it is secure or not.

Cookies without the secure flag can potentially allow an attacker to capture and
reuse the cookie or hijack an active session. Therefore, best practice is to enable this
setting.

To set the secure flag for Edge UI cookies:

1. Open the following file in a text editor. If the file does not exist,
create it:
/opt/apigee/customer/application/ui.properties
2. Set the conf_application_session.secure property to true in the
ui.properties file, as the following example shows:
conf_application_session.secure=true
3. Save your changes.
4. Restart the Edge UI by using the apigee-service utility, as the following
example shows:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart

To confirm the change is working, check the response headers from the Edge UI by
using a utility such as curl; for example:

curl -i -v https://edge_UI_URL

The header should contain a line that looks like the following:

Set-Cookie: secure; ...

Disable TLS on the Edge UI


To disable TLS on the Edge UI, use the following command:

/opt/apigee/apigee-service/bin/apigee-service edge-ui disable-ssl

Configuring TLS for the new Edge UI


By default, you access the new Edge UI over HTTP by using the IP address or DNS
name of the Edge UI node and port 3001. For example:

http://newue_IP:3001

Alternatively, you can configure TLS access to the Edge UI so that you can access it
in the form:

https://newue_IP:3001

NOTE: The Edge UI uses the same port for HTTPS and HTTP. That means you cannot access the
Edge UI using both HTTP and HTTPS concurrently.

TLS requirements

The Edge UI only supports TLS v1.2. If you enable TLS on the Edge UI, users must
connect to the Edge UI using a browser compatible with TLS v1.2.

TLS configuration properties

Execute the following command to configure TLS for the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-management-ui configure-ssl -f configFile

Where configFile is the config file that you used to install the Edge UI.

Before you execute this command, you must edit the config file to set the necessary
properties that control TLS. The following table describes the properties that you use
to configure TLS for the Edge UI:

Property Description Required?

MANAGEME Sets the protocol, "http" or "https", used to access the Yes
NT_UI_SC Edge UI. The default value is "http". Set it to "https" to
HEME enable TLS:

MANAGEMENT_UI_SCHEME=https

MANAGEME If "n", specifies that TLS requests to the Edge UI are Yes
NT_UI_TL terminated at the Edge UI. You must set
S_OFFLOA MANAGEMENT_UI_TLS_KEY_FILE and
D MANAGEMENT_UI_TLS_CERT_FILE.

If "y", specifies that TLS requests to the Edge UI are


terminated on a load balancer, and that the load
balancer then forwards the request to the Edge UI by
using HTTP.

If you terminate TLS on the load balancer, the Edge UI


still needs to be aware that the original request came
in over TLS. For example, some cookies have a Secure
flag set.

You must set MANAGEMENT_UI_SCHEME to "https" or


else MANAGEMENT_UI_TLS_OFFLOAD is ignored:

MANAGEMENT_UI_SCHEME=https

MANAGEMENT_UI_TLS_OFFLOAD=y

MANAGEME If MANAGEMENT_UI_TLS_OFFLOAD=n, specifies the Yes if


NT_UI_TL absolute path to the TLS key and cert files. The files MANAGEMEN
S_KEY_FI must be formatted as PEM files with no passphrase, T_UI_TLS_
LE and must be owned by the "apigee" user. OFFLOAD=n

The recommended location for these files is:


MANAGEME
NT_UI_TL /opt/apigee/customer/application/edge-
S_CERT_F management-ui
ILE
If that directory does not exist, create it.

If MANAGEMENT_UI_TLS_OFFLOAD=y, then omit


MANAGEMENT_UI_TLS_KEY_FILE and
MANAGEMENT_UI_TLS_CERT_FILE. They are
ignored because requests to the Edge UI come in over
HTTP.

MANAGEME If MANAGEMENT_UI_TLS_OFFLOAD=n, specifies the Yes


NT_UI_PU URL of the Edge UI.
BLIC_URI
S
Set this property based on other properties in the
config file. For example:

MANAGEMENT_UI_PUBLIC_URIS=$MANAGEMENT_UI
_SCHEME://$MANAGEMENT_UI_IP:
$MANAGEMENT_UI_PORT

Where:

● MANAGEMENT_UI_SCHEME specifies the


protocol, "http" or "https", as described above.
● MANAGEMENT_UI_IP specifies the IP address
or DNS name of the Edge UI.
● MANAGEMENT_UI_PORT specifies the port
used by the Edge UI.

See Install the new Edge UI for more on these


properties.

If MANAGEMENT_UI_TLS_OFFLOAD=y:

● MANAGEMENT_UI_IP specifies the IP address


or DNS name of the load balancer, not of the
Edge UI.
● The load balancer and the New UE must use
the same port number for requests, for
example 3001. Use MANAGEMENT_UI_PORT to
specify the port number on the load balancer
and on the New UE.

MANAGEMENT_UI_TLS_ALLOWED_CIPHERS

Defines the list of available TLS ciphers as a comma-separated or space-separated string.

Comma-separated string:

MANAGEMENT_UI_TLS_ALLOWED_CIPHERS=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

Space-separated string enclosed in double quotes:

MANAGEMENT_UI_TLS_ALLOWED_CIPHERS="TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"

SHOEHORN_SCHEME

Required: Yes, and set to "http"

Before you Install the new Edge UI, you first install the base Edge UI, called shoehorn. The installation config file uses the following property to specify the protocol, "http", used to access the base Edge UI:

SHOEHORN_SCHEME=http

The base Edge UI does not support TLS, so even when you enable TLS on the Edge UI, this property must still be set to "http".

Configure TLS

To configure TLS access to the Edge UI:

1. Generate the TLS cert and key as PEM files with no passphrase, for example:

mykey.pem
mycert.pem

There are many ways to generate a TLS cert and key. For example, you can execute the following command to generate a self-signed cert and key:

openssl req -x509 -newkey rsa:4096 -keyout mykey.pem -out mycert.pem -days 365 -nodes -subj '/CN=localhost'

Note: You do not typically use a self-signed cert and key in a production environment.
2. Copy the key and cert files to the /opt/apigee/customer/application/edge-management-ui directory. If that directory does not exist, create it.
3. Make sure the cert and key are owned by the "apigee" user:

chown apigee:apigee /opt/apigee/customer/application/edge-management-ui/*.pem

4. Edit the config file that you used to install the Edge UI to set the following TLS properties:
Note: When enabling TLS, you must pass the complete config file that you used to install the Edge UI.

# Set to https to enable TLS.
MANAGEMENT_UI_SCHEME=https
# Do NOT terminate TLS on a load balancer.
MANAGEMENT_UI_TLS_OFFLOAD=n

# Specify the key and cert.
MANAGEMENT_UI_TLS_KEY_FILE=/opt/apigee/customer/application/edge-management-ui/mykey.pem
MANAGEMENT_UI_TLS_CERT_FILE=/opt/apigee/customer/application/edge-management-ui/mycert.pem

# Leave these properties set to the same values as when you installed the Edge UI:
MANAGEMENT_UI_PUBLIC_URIS=$MANAGEMENT_UI_SCHEME://$MANAGEMENT_UI_IP:$MANAGEMENT_UI_PORT
SHOEHORN_SCHEME=http

5. Execute the following command to configure TLS:

/opt/apigee/apigee-service/bin/apigee-service edge-management-ui configure-ssl -f configFile

Where configFile is the name of the config file. The script restarts the Edge UI.
6. Run the following commands to set up and restart shoehorn:

/opt/apigee/apigee-service/bin/apigee-service edge-ui setup -f configFile
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart

7. After the restart, the Edge UI supports access over HTTPS. If you cannot log in to the Edge UI after enabling TLS, clear the browser cache and try logging in again.
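Once the Edge UI restarts, you can sanity-check the TLS configuration from the command line. The following is an illustrative sketch, not part of the Apigee procedure; it assumes the Edge UI listens on port 3001 on the local host and uses the self-signed cert generated above (hence the -k flag to skip certificate verification):

```shell
# Confirm the Edge UI now answers over HTTPS (expects an HTTP status code back).
curl -sk -o /dev/null -w "%{http_code}\n" https://localhost:3001/

# Inspect the certificate the UI presents (assumes openssl is installed):
echo | openssl s_client -connect localhost:3001 2>/dev/null | openssl x509 -noout -subject -dates
```

If the curl command hangs or reports a connection error, re-check that MANAGEMENT_UI_SCHEME=https was set and that the configure-ssl script ran without errors.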

Configure the Edge UI when TLS terminates on the load


balancer

If you have a load balancer that forwards requests to the Edge UI, you might choose
to terminate the TLS connection on the load balancer, and then have the load
balancer forward requests to the Edge UI over HTTP:

This configuration is supported but you need to configure the load balancer and the
Edge UI accordingly.

To configure the Edge UI when TLS terminates on the load balancer:

1. Edit the config file that you used to install the Edge UI to set the following TLS properties:
Note: When enabling TLS, you must pass the complete config file that you used to install the Edge UI.

# Set to https to enable TLS.
MANAGEMENT_UI_SCHEME=https
# Terminate TLS on a load balancer.
MANAGEMENT_UI_TLS_OFFLOAD=y
# Set to the IP address or DNS name of the load balancer.
MANAGEMENT_UI_IP=LB_IP_DNS
# Set to the port number for the load balancer and Edge UI.
# The load balancer and the Edge UI must use the same port number.
MANAGEMENT_UI_PORT=3001

# Leave these properties set to the same values as when you installed the Edge UI:
MANAGEMENT_UI_PUBLIC_URIS=$MANAGEMENT_UI_SCHEME://$MANAGEMENT_UI_IP:$MANAGEMENT_UI_PORT
SHOEHORN_SCHEME=http

2. Because MANAGEMENT_UI_TLS_OFFLOAD=y, omit MANAGEMENT_UI_TLS_KEY_FILE and MANAGEMENT_UI_TLS_CERT_FILE. They are ignored because requests to the Edge UI come in over HTTP.
3. Execute the following command to configure TLS:

/opt/apigee/apigee-service/bin/apigee-service edge-management-ui configure-ssl -f configFile

Where configFile is the name of the config file. The script restarts the Edge UI.
4. Run the following commands to set up and restart shoehorn:

/opt/apigee/apigee-service/bin/apigee-service edge-ui setup -f configFile
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart

5. After the restart, the Edge UI supports access over HTTPS. If you cannot log in to the Edge UI after enabling TLS, clear the browser cache and try logging in again.

Disable TLS on the Edge UI

To disable TLS on the Edge UI:

1. Edit the config file that you used to install the Edge UI to set the following TLS properties:
Note: When disabling TLS, you must pass the complete config file that you used to install the Edge UI.

# Set to http to disable TLS.
MANAGEMENT_UI_SCHEME=http

# Only if you terminated TLS on a load balancer,
# reset to the IP address or DNS name of the Edge UI.
MANAGEMENT_UI_IP=newue_IP_DNS

2. Execute the following command to disable TLS:

/opt/apigee/apigee-service/bin/apigee-service edge-management-ui configure-ssl -f configFile

Where configFile is the name of the config file. The script restarts the Edge UI.
3. Run the following commands to set up and restart shoehorn:

/opt/apigee/apigee-service/bin/apigee-service edge-ui setup -f configFile
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart

4. You can now access the Edge UI over HTTP. If you cannot log in to the Edge UI after disabling TLS, clear the browser cache and try logging in again.

Configuring TLS 1.3 for northbound


traffic
This page explains how to configure TLS 1.3 in Apigee Routers for northbound traffic
(traffic between a client and the Router).

Note: This feature is only available in Edge for Private Cloud 4.51.00 and later versions. To use TLS 1.3, nodes hosting the Router must have the Red Hat Enterprise Linux (RHEL) 8.0 operating system (or later) installed, which supports the OpenSSL 1.1 library.

See Virtual hosts for more information about virtual hosts.

Enable TLS 1.3 for all TLS-based virtual hosts in a Router


Use the following procedure to enable TLS 1.3 for all TLS-based virtual hosts in a
Router:

1. On the Router, open the following properties file in an editor. If the file doesn't exist, create it:

/opt/apigee/customer/application/router.properties

2. Add the following line to the properties file, listing all TLS protocols you want to support. Note that the protocols are space-separated and case sensitive:

conf_load_balancing_load.balancing.driver.server.ssl.protocols=TLSv1 TLSv1.1 TLSv1.2 TLSv1.3

3. Save the file.
4. Ensure the file is owned by the "apigee" user:

chown apigee:apigee /opt/apigee/customer/application/router.properties

5. Restart the Router:

/opt/apigee/apigee-service/bin/apigee-service edge-router restart

6. Repeat the steps above on all Router nodes, one by one.
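After the Routers restart, you can check from a client machine which protocol a virtual host negotiates. A hedged sketch: it assumes a TLS virtual host listening on port 443 of a hypothetical ROUTER_HOST, and an openssl client built with TLS 1.3 support:

```shell
# Should report "Protocol  : TLSv1.3" and a TLS 1.3 cipher if TLS 1.3 is enabled.
openssl s_client -connect ROUTER_HOST:443 -tls1_3 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
```

If the handshake fails, verify the properties file change took effect and that the node's OS and OpenSSL library support TLS 1.3.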

Enable TLS 1.3 for specific virtual hosts only


This section explains how to enable TLS 1.3 for specific virtual hosts. To enable TLS
1.3, perform the following steps on the Management server nodes:

1. On each Management Server node, edit the file /opt/apigee/customer/application/management-server.properties and add the following line. (If the file doesn't exist, create it.) For this file, the protocols are comma-separated (and case sensitive):

conf_virtualhost_virtual.host.allowed.protocol.list=TLSv1,TLSv1.1,TLSv1.2,TLSv1.3

2. Save the file.
3. Ensure the file is owned by the "apigee" user:

chown apigee:apigee /opt/apigee/customer/application/management-server.properties

4. Restart the Management Server:

/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart

5. Repeat the steps above on all Management Server nodes, one by one.
6. Create (or update an existing) virtual host with the following property. Note that the protocols are space-separated and case sensitive:

"properties": {
  "property": [
    {
      "name": "ssl_protocols",
      "value": "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
    }
  ]
}

7. A sample vhost with this property is shown below:

{
  "hostAliases": [
    "api.myCompany.com"
  ],
  "interfaces": [],
  "listenOptions": [],
  "name": "secure",
  "port": "443",
  "retryOptions": [],
  "properties": {
    "property": [
      {
        "name": "ssl_protocols",
        "value": "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
      }
    ]
  },
  "sSLInfo": {
    "ciphers": [],
    "clientAuthEnabled": "false",
    "enabled": "true",
    "ignoreValidationErrors": false,
    "keyAlias": "myCompanyKeyAlias",
    "keyStore": "ref://myCompanyKeystoreref",
    "protocols": []
  },
  "useBuiltInFreeTrialCert": false
}

Testing TLS 1.3

To test TLS 1.3, enter the following command:

curl -v --tlsv1.3 "https://api.myCompany.com/testproxy"

Note that TLS 1.3 can only be tested with clients that support this protocol. If TLS 1.3 is not enabled, you will see an error message like the following:

sslv3 alert handshake failure

Configuring TLS 1.3 for southbound


traffic
This page explains how to configure TLS 1.3 in Apigee Message Processors for
southbound traffic (traffic between the Message Processor and the backend server).
Note: This feature is only available in Edge for Private Cloud 4.51.00 and later versions.

Note: To use TLS 1.3, the Java version you're using in Message Processors must support TLS 1.3. TLS 1.3 was introduced in Oracle Java 8u261 and OpenJDK 8u272.

To learn more about TLS 1.3 feature in Java, see JDK 8u261 Update Release Notes.

The procedure to enable TLS 1.3 depends on the version of Java you're using. See
Check the Java version in a Message Processor below to find the version of Java
installed in the Message Processor.

TLS v1.3 and Java versions


TLS 1.3 feature was introduced in the following versions of Java:

● Oracle JDK 8u261


● OpenJDK 8u272

In the following Java versions, TLS v1.3 feature exists but is not enabled by default in
client roles:

● Oracle JDK 8u261 or later but less than Oracle JDK 8u341
● OpenJDK 8u272 or later but less than OpenJDK 8u352

If you are using one of these versions, you need to enable TLS v1.3, as described in
How to enable TLS v1.3 when it is not enabled by default.

If you are using one of the following versions, TLS v1.3 should already be enabled by
default in client roles (Message Processor acts as a client for southbound TLS
connections), so you don't need to take any action:

● Oracle JDK 8u341 or later


● OpenJDK 8u352 or later

For TLS v1.3 to work, all the following must hold true:

● Underlying Java on Message Processor must support TLS v1.3.


● TLS v1.3 must be enabled in Java on Message Processor.
● TLS v1.3 must be enabled in the Message Processor application.

How to enable TLS v1.3 in Java when it is not enabled by default
This section explains how to enable TLS v1.3 in case you are using one of the
following versions of Java:

● Oracle JDK 8u261 or later but less than Oracle JDK 8u341
● OpenJDK 8u272 or later but less than OpenJDK 8u352
In the message processor, set Java property jdk.tls.client.protocols.
Values are comma separated and can contain one or more of TLSv1, TLSv1.1,
TLSv1.2, TLSv1.3, and SSLv3.

For example, setting -Djdk.tls.client.protocols=TLSv1.2,TLSv1.3


enables client protocols TLSv1.2 and TLSv1.3.

See Change other JVM properties to learn how to set JVM properties in an Edge component.

To enable TLS v1, v1.1, v1.2 and v1.3 protocols:

1. Set the following configuration in the Message Processor configuration file (/opt/apigee/customer/application/message-processor.properties):

bin_setenv_ext_jvm_opts=-Djdk.tls.client.protocols=TLSv1,TLSv1.1,TLSv1.2,TLSv1.3

2. Restart the Message Processor.
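After the restart, you can confirm that the JVM flag was picked up by inspecting the running process. This is a sketch, not part of the Apigee procedure; the grep pattern assumes the standard edge-message-processor process name:

```shell
# Look for the jdk.tls.client.protocols option on the running JVM's command line.
ps -ef | grep edge-message-processor | grep -o 'jdk.tls.client.protocols=[^ ]*'
```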

How to disable TLS v1.3 when it is enabled by default


If you're using Oracle JDK 8u341 or later, or OpenJDK 8u352 or later, TLSv1.3 is
enabled by default for clients. If you wish to disable TLS v1.3 in such cases, you have
two options:

● Configure your target server's SSLInfo and ensure that TLSv1.3 is not
mentioned in the protocols list. See TLS/SSL TargetEndpoint Configuration
Elements. Note: If no protocols are specified in the target server configuration,
whatever protocols are supported by Java will be sent as options in client
handshake.
● Disable TLS v1.3 in the message processor by disabling protocol completely.
See Set the TLS protocol on the Message Processor.

Check the Java version in a Message Processor


To check the Java version installed in a Message Processor, log into the Message
Processor node and execute the following command:

java -version

The sample output below shows that OpenJDK 8u312 is installed.

$ java -version
openjdk version "1.8.0_312"
OpenJDK Runtime Environment (build 1.8.0_312-b07)
OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)
Supported ciphers
At present, Java 8 supports 2 TLS v1.3 ciphers:

● TLS_AES_256_GCM_SHA384
● TLS_AES_128_GCM_SHA256

You can use openssl to check whether your target server supports TLS v1.3 and at least one of the ciphers above, using the command below. Note that this example uses the openssl11 utility, which has TLS v1.3 enabled.

$ openssl11 s_client -ciphersuites


"TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256" -connect
target_host:target_port -tls1_3

Setting TLS protocol for Router and


Message Processor
By default, the Router and Message Processor support TLS versions 1.0, 1.1, 1.2.
However, you might want to limit the protocols supported by the Router and
Message Processor. This document describes how to set the protocol globally on
the Router and Message Processor.

For the Router, you can also set the protocol for individual virtual hosts. See
Configuring TLS access to an API for the Private Cloud for more.

For the Message Processor, you can set the protocol for an individual
TargetEndpoint. See Configuring TLS from Edge to the backend (Cloud and Private
Cloud) for more.

Set the TLS protocol on the Router


To set the TLS protocol on the Router, set properties in the router.properties
file:

1. Open the router.properties file in an editor. If the file does not exist, create it:

vi /opt/apigee/customer/application/router.properties

2. Set the properties as desired:

# Possible values are a space-delimited list of: TLSv1 TLSv1.1 TLSv1.2
conf_load_balancing_load.balancing.driver.server.ssl.protocols=TLSv1.2

3. Save your changes.
4. Make sure the properties file is owned by the "apigee" user:

chown apigee:apigee /opt/apigee/customer/application/router.properties

5. Restart the Router:

/opt/apigee/apigee-service/bin/apigee-service edge-router restart

6. Verify that the protocol is updated correctly by examining the NGINX file /opt/nginx/conf.d/0-default.conf:

cat /opt/nginx/conf.d/0-default.conf

Ensure that the value for ssl_protocols is TLSv1.2.
7. If you're using two-way TLS with a virtual host, you must also set the TLS protocol in the virtual host as described in Configuring TLS access to an API for the Private Cloud.
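Beyond inspecting the NGINX config, you can verify externally that older protocols are now rejected. A hedged sketch, assuming a virtual host on port 443 of a hypothetical ROUTER_HOST and an openssl client that still supports the legacy protocol flags:

```shell
# A TLS 1.0 handshake should now fail...
openssl s_client -connect ROUTER_HOST:443 -tls1 </dev/null 2>&1 | grep -i 'handshake failure\|no protocols'

# ...while a TLS 1.2 handshake should succeed.
openssl s_client -connect ROUTER_HOST:443 -tls1_2 </dev/null 2>/dev/null | grep 'Protocol'
```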

Set the TLS protocol on the Message Processor


To set the TLS protocol on the Message Processor, set properties in the message-
processor.properties file:

1. Open the message-processor.properties file in an editor. If the file does not exist, create it:

vi /opt/apigee/customer/application/message-processor.properties

2. Configure the properties using the following syntax:

# Possible values are a comma-delimited list of TLSv1, TLSv1.1, and TLSv1.2
conf/system.properties+https.protocols=[TLSv1][,TLSv1.1][,TLSv1.2]

# Possible values are a comma-delimited list of SSLv3, TLSv1, TLSv1.1, TLSv1.2
# SSLv3 is required
conf/jvmsecurity.properties+jdk.tls.disabledAlgorithms=SSLv3[,TLSv1][,TLSv1.1][,TLSv1.2]

# Specify the ciphers that the Message Processor supports. (You must separate ciphers with a comma.)
conf_message-processor-communication_local.http.ssl.ciphers=cipher[,...]

3. Possible values for conf_message-processor-communication_local.http.ssl.ciphers are:
● TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
● TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
● TLS_RSA_WITH_AES_256_GCM_SHA384
● TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
● TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
● TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
● TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
4. For example:

conf/system.properties+https.protocols=TLSv1.2
conf/jvmsecurity.properties+jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1
conf_message-processor-communication_local.http.ssl.ciphers=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384

5. For a complete list of related properties, see Configuring TLS between a Router and a Message Processor.
6. Save your changes.
7. Make sure the properties file is owned by the "apigee" user:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

8. Restart the Message Processor:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

9. If you are using two-way TLS with the backend, set the TLS protocol in the TargetEndpoint as described in Configuring TLS from Edge to the backend (Cloud and Private Cloud).

Enable Cassandra internode encryption


Internode (or node-to-node) encryption protects data traveling between nodes in a
cluster using TLS. This page explains how to enable Cassandra internode encryption
using TLS on Edge for Private Cloud. To perform these steps, you must be familiar
with the details of your Cassandra ring.

Enable Cassandra internode encryption


Follow these steps to enable Cassandra internode encryption:

1. Generate server certificates by following the steps in the Appendix to create a self-signed key and certificate.
The following steps assume you have created keystore.node0 and truststore.node0, as well as the keystore and truststore passwords, as explained in the Appendix. The keystore and truststore should be created on each node as a preliminary step before proceeding.
2. Add the following properties to the /opt/apigee/customer/application/cassandra.properties file. If the file does not exist, create it.

conf_cassandra_internode_encryption=all
conf_cassandra_keystore=/opt/apigee/data/apigee-cassandra/keystore.node0
conf_cassandra_keystore_password=keypass
conf_cassandra_truststore=/opt/apigee/data/apigee-cassandra/truststore.node0
conf_cassandra_truststore_password=trustpass
# Optionally set the following to enable 2-way TLS or mutual TLS
# conf_cassandra_require_client_auth=true

3. Ensure that the file cassandra.properties is owned by the apigee user:

chown apigee:apigee /opt/apigee/customer/application/cassandra.properties

Execute the following steps on each Cassandra node, one at a time, so the changes
take effect without causing any downtime for users:

1. Stop the Cassandra service:

/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra stop

2. Restart the Cassandra service:

/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra start

3. To determine if the TLS encryption service has started, check the system logs for the following message:

Starting Encrypted Messaging Service on TLS port
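For example, you could search the Cassandra system log for that message. The log path below is an assumption; adjust it to wherever your installation writes Cassandra system logs:

```shell
# Hypothetical log location; adjust for your install.
grep "Starting Encrypted Messaging Service on TLS port" /opt/apigee/apigee-cassandra/logs/system.log
```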

Perform certificate rotation


To rotate certificates, follow these steps:

1. Add the certificate for each unique generated key pair (see Appendix) to an existing Cassandra node's truststore, such that both the old certificates and new certificates exist in the same truststore:

keytool -import -v -trustcacerts -alias NEW_ALIAS \
  -file CERT -keystore EXISTING_TRUSTSTORE

where NEW_ALIAS is a unique string to identify the entry, CERT is the name of the certificate file to add, and EXISTING_TRUSTSTORE is the name of the existing truststore on the Cassandra node.
2. Use a copy utility, such as scp, to distribute the truststore to all Cassandra nodes in the cluster, replacing the existing truststore in use by each node.
3. Perform a rolling restart of the cluster to load the new truststore and establish trust for the new keys before they are in place:

/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart

4. On each Cassandra node in the cluster, update the properties shown below to the new keystore values in the cassandra.properties file:

conf_cassandra_keystore=NEW_KEYSTORE_PATH
conf_cassandra_keystore_password=NEW_KEYSTORE_PASSWORD

where NEW_KEYSTORE_PATH is the path to the directory where the keystore file is located and NEW_KEYSTORE_PASSWORD is the keystore password set when the certificates were created, as explained in the Appendix.
5. Stop the Cassandra service:

/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra stop

6. Restart the Cassandra service:

/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra start

7. When communication is successfully established between all nodes, proceed to the next Cassandra node. Note: Only proceed to the next node if communication is successfully established between all nodes.

Appendix
The following example explains how to prepare server certificates needed to perform
the internode encryption steps. The commands shown in the example use the
following parameters:

Parameter: Description

node0: Any unique string to identify the node.

keystore.node0: A keystore name. The commands assume this file is in the current directory.

keypass: The keypass must be the same for both the keystore and the key.

dname: Identifies the IP address of node0 as 10.128.0.39.

-validity: The value set on this flag makes the generated key pair valid for 10 years.
1. Go to the following directory:

cd /opt/apigee/data/apigee-cassandra

2. Run the following command to generate a file named keystore.node0 in the current directory:

keytool -genkey -keyalg RSA -alias node0 -validity 3650 \
  -keystore keystore.node0 -storepass keypass \
  -keypass keypass -dname "CN=10.128.0.39, OU=None, \
  O=None, L=None, C=None"

Important: Make sure that the key password is the same as the keystore password.
3. Export the certificate to a separate file:

keytool -export -alias node0 -file node0.cer \
  -keystore keystore.node0

4. Ensure the file is readable by the apigee user only and by no one else:

chown apigee:apigee /opt/apigee/data/apigee-cassandra/keystore.node0
chmod 400 /opt/apigee/data/apigee-cassandra/keystore.node0

5. Import the generated certificate node0.cer to the truststore of the node:

keytool -import -v -trustcacerts -alias node0 \
  -file node0.cer -keystore truststore.node0

The command above asks you to set a password. This is the truststore password and can be different from the keystore password you set earlier. If prompted to trust the certificate, enter yes.
6. Use openssl to generate a PEM file of the certificate with no keys. Note that cqlsh does not work with the certificate in the format generated:

keytool -importkeystore -srckeystore keystore.node0 \
  -destkeystore node0.p12 -deststoretype PKCS12 \
  -srcstorepass keypass -deststorepass keypass
openssl pkcs12 -in node0.p12 -nokeys -out node0.cer.pem \
  -passin pass:keypass
openssl pkcs12 -in node0.p12 -nodes -nocerts -out node0.key.pem \
  -passin pass:keypass

7. For node-to-node encryption, copy the node0.cer file to each node and import it to the truststore of each node:

keytool -import -v -trustcacerts -alias node0 \
  -file node0.cer -keystore truststore.node1

8. Use keytool -list to check for certificates in the keystore and truststore files:

keytool -list -keystore keystore.node0
keytool -list -keystore truststore.node0

Introduction to Apigee mTLS


The Apigee mTLS feature adds security to communications between components in
your Edge for the Private Cloud cluster. It provides an industry-standard way to
configure and install the service mesh. It supports package management and
configuration automation.

Architectural overview
To provide secure communications between components, Apigee mTLS uses a
service mesh that establishes secure, mutually authenticated TLS connections
between components.

The following image shows connections between Apigee components that Apigee
mTLS secures (in red). The ports shown in the image are examples; refer to Port
usage for a list of ranges that each component can use.
(Note that ports indicated with an "M" are used to manage the component and must
be open on the component for access by the Management Server.)

As you can see in the diagram above, Apigee mTLS adds security to connections
between most components in the cluster, including:

Source → Destination

Management Server → Router, Message Processor, Qpid, LDAP, Postgres, ZooKeeper, and Cassandra nodes

Router → Loopback; Qpid, ZooKeeper, and Cassandra nodes

Message Processor → Loopback; Qpid, ZooKeeper, and Cassandra nodes

ZooKeeper and Cassandra → Other ZooKeeper and Cassandra nodes

Edge UI → SMTP (for external IDP only)

Postgres → Other Postgres, ZooKeeper, and Cassandra nodes

Message encryption/decryption
The Apigee mTLS service mesh consists of Consul servers that run on each
ZooKeeper node in your cluster and the following Consul services on every node in
the cluster:

● An egress proxy that intercepts outgoing messages on the host node. This
service encrypts outgoing messages before sending them to their destination.
● An ingress proxy that intercepts incoming messages on the host node. This
service decrypts incoming messages before sending them to their final
destination.

For example, when the Management Server sends a message to the Router, the
egress proxy service intercepts the outgoing message, encrypts it, and then sends it
to the Router. When the Router's node receives the message, the ingress proxy
service decrypts the message and then passes it to the Router component for
processing.

This all happens transparently to the Edge components: they are unaware of the
encryption and decryption process carried out by the Consul proxy services.

In addition, Apigee mTLS uses the iptables utility, a Linux firewall service that
manages traffic redirection.

TIP: Consul service names use the following patterns:

● component_name-port-IP-ingress
● component_name-port-IP-egress
Requirements
Before you can install Apigee mTLS, your environment must meet the following
requirements:

● Version, platform and topology requirements


● Utilities installed and enabled
● User account with the appropriate level of permissions
● An administration machine (recommended)
● Port usage

The following sections describe each of these requirements in detail.

Version, platform, and topology

The following table lists the mTLS requirements:

Requirement Description

Versions ● 4.51.00
● 4.50.00
● 4.19.06

Topology Must include at least three Zookeeper nodes. As


a result, you can only install Apigee mTLS on
topologies that use 5, 9, 12 (multi-data center), or
13 nodes. For more information, see Installation
topologies.

Platforms/Operating Systems: Use the following values to determine whether Apigee mTLS is supported on a particular OS:

Operating System: CentOS, RedHat Enterprise Linux (RHEL), Oracle Linux

● v4.19.06: 7.5, 7.6, 7.7
● v4.50.00: 7.5, 7.6, 7.7, 7.8, 7.9
● v4.51.00: 7.5, 7.6, 7.7, 7.8, 7.9, 8.0
Note that Apigee mTLS does not necessarily


support all OSes that are supported by the
corresponding version of Apigee Edge for Private
Cloud on which it is running.

For example, if v4.19.06 supports CentOS x and


y, that does not necessarily mean that Apigee
mTLS is supported on CentOS x and y for
v4.19.06.

Utilities/packages

Apigee mTLS requires that you have the following packages installed and enabled on
each machine in your cluster, including your administration machine, before you
begin the installation process:

Utility/package: Description

base64: Verifies data within the installation scripts.

gnu-bash, gnu-sed, gnu-grep: Used by the installation script and other common tools.

iptables: Replaces the default firewall, firewalld.

iptables-services: Provides functionality to the iptables utility.

lsof: Used by the installation script.

nc: Verifies iptables routes.

openssl: Signs certificates locally during the initial bootstrapping process.

During installation, you also install the Consul package on the administration
machine so that you can generate credentials and the encryption key.

The apigee-mtls package installs and configures the Consul servers including the
ingress and egress proxies on ZooKeeper nodes in the cluster.

User account permissions

Before installing, create a new user account or ensure that you have access to one that has elevated privileges.

The account that executes the Apigee mTLS installation on each node in the cluster
must be able to:

● Start, stop, restart, and initialize Apigee components


● Set firewall rules
● Create a new OS/system user account
● Enable, disable, start, stop, and mask services with systemctl

Administration machine (recommended)

Apigee recommends that you have a node within the cluster on which you can
perform various administrative tasks described in this document, including:

1. Install HashiCorp Consul 1.6.2.


2. Generate and distribute a certificate/key pair and gossip encryption key.
3. Update and distribute the configuration file.

When setting up the administration machine:

● Ensure that you have root access to it.


● Download and install the apigee-service and apigee-setup utilities on
it, as described in Install Edge apigee-setup utility.
● Ensure that you can use scp/ssh to access to all nodes in the cluster from
the administration machine. This is required so that you can distribute your
configuration file and credentials.

Port usage and assignment

This section describes port usage and port assignments to support Consul
communications with Apigee mTLS.

Port usage: All nodes running apigee-mtls


All nodes in the cluster that use the apigee-mtls service must allow connections
from services on the localhost (127.0.0.1). This lets the Consul proxies
communicate with the other services as they process incoming and outgoing
messages.

Port usage: Consul server nodes (nodes running ZooKeeper)

You must open most of the following ports on the Consul server nodes (the nodes
running ZooKeeper) to accept requests from all nodes in the cluster:

Node: Consul Server (ZooKeeper nodes)

Consul Server Port: Description / Protocol / Allow external mtls-agents? *

8300: Connects all Consul servers in the cluster. Protocol: RPC.

8301: Handles membership and broadcast messages within the cluster. Protocol: UDP/TCP. Allow external mtls-agents: Yes.

8302: WAN port that handles membership and broadcast messages in a multi-data center configuration. Protocol: UDP/TCP. Allow external mtls-agents: Yes.

8500: Handles HTTP connections to the Consul Server APIs from processes on the same node. This port is not used for remote communication or coordination; it listens on localhost only. Protocol: HTTP.

8502: Handles gRPC+HTTPS connections to the Consul Server APIs from other nodes in the cluster. Protocol: gRPC+HTTPS. Allow external mtls-agents: Yes.

8503: Handles HTTPS connections to the Consul Server APIs from other nodes in the cluster. Protocol: HTTPS. Allow external mtls-agents: Yes.

8600: Handles the Consul server's DNS. Protocol: UDP/TCP.

* Apigee recommends that you restrict inbound requests to cluster members only (including cross-data center). You can do this with iptables.

As this table shows, the nodes running the consul-server component (ZooKeeper
nodes) must open ports 8301, 8302, 8502, and 8503 to all members of the cluster
that are running the apigee-mtls service, even across data centers. Nodes that are
not running ZooKeeper do not need to open these ports.
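The footnote's iptables recommendation can be sketched as follows. This is a hypothetical helper, not part of the Apigee tooling: it only prints candidate rules that restrict the four Consul cluster ports to a list of example member IPs, so you can review them against your own security policy before applying anything as root.

```shell
#!/bin/bash
# Hypothetical helper: print iptables rules that restrict the Consul
# cluster ports to known member IPs. Review before applying as root.
MEMBERS="10.126.0.121 10.126.0.124 10.126.0.125"   # example node IPs
CONSUL_PORTS="8301 8302 8502 8503"

print_consul_rules() {
  local port ip
  for port in $CONSUL_PORTS; do
    for ip in $MEMBERS; do
      # Allow each known cluster member on this port (TCP and UDP).
      echo "iptables -A INPUT -p tcp -s $ip --dport $port -j ACCEPT"
      echo "iptables -A INPUT -p udp -s $ip --dport $port -j ACCEPT"
    done
    # Drop everything else that targets this port.
    echo "iptables -A INPUT -p tcp --dport $port -j DROP"
    echo "iptables -A INPUT -p udp --dport $port -j DROP"
  done
}

print_consul_rules
```

Substitute your own node IPs for the example MEMBERS list before using the printed rules.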

Port assignments for all Consul nodes (including nodes running ZooKeeper)

To support Consul communications, nodes running the following Apigee
components must allow external connections to ports within the following ranges:

Apigee Component     Range             Ports Required Per Node

Apigee mTLS          10700 to 10799    1
Cassandra            10100 to 10199    2
Message Processor    10500 to 10599    2
OpenLDAP             10200 to 10299    1
Postgres             10300 to 10399    3
Qpid                 10400 to 10499    2
Router               10600 to 10699    2
ZooKeeper            10001 to 10099    3

Consul assigns ports in a simple linear fashion. For example, if your cluster has two
Postgres nodes, the first node uses three ports, so Consul assigns ports 10300
through 10302 to it. The second node also uses three ports, so Consul assigns
10303 through 10305 to that node. This applies to all component types.

The actual number of ports thus depends on your topology: if your cluster has
two Postgres nodes, then you'll need to open six ports (two nodes times three
ports each).
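The linear numbering scheme can be sketched with a few lines of shell arithmetic. This is only an illustration of the scheme described above (the base ports and per-node counts come from the table); it is not an Apigee utility.

```shell
#!/bin/bash
# Illustration of Consul's linear port assignment: the Nth node of a
# component (zero-based) gets `count` consecutive ports starting at
# base + N * count. Base ports and per-node counts are from the table above.
assigned_ports() {
  local base=$1 count=$2 node_index=$3
  local start=$(( base + node_index * count ))
  local end=$(( start + count - 1 ))
  echo "${start}-${end}"
}

assigned_ports 10300 3 0   # first Postgres node  → 10300-10302
assigned_ports 10300 3 1   # second Postgres node → 10303-10305
assigned_ports 10100 2 1   # second Cassandra node → 10102-10103
```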

Note the following:

● Consul proxies cannot listen on the same ports as Apigee services.
● Consul has only one port address space. Consul proxy port assignments must
be unique across the cluster, which includes data centers. This means that if
proxy A on host A listens on port 15000, then proxy B on host B cannot listen
on port 15000.
● The number of ports used varies based on the topology, as described
previously.

In a multi-data center configuration, all hosts running mTLS must also open port
8302.

You can customize the default ports that Apigee mTLS uses. For information on how
to do this, see Customize proxy port ranges.
Limitations
Apigee mTLS has the following limitations:

● Does not encrypt inter-node Cassandra communications (port 7000).
● Configuration and setup are not idempotent. This means that if you make a
change on one node, you must make the same change on all nodes; the
system does not apply that change for you on any other node. For more
information, see Change an existing apigee-mtls configuration.
NOTE: After installing Apigee mTLS on your Private Cloud cluster, when you start all components
on a node, you must start the apigee-mtls component before any other component on the
node.

Terminology
This section uses the following terminology:

Term           Definition

cluster        The group of machines that make up your Edge for the Private
               Cloud installation.

Consul         The service mesh used by Apigee mTLS. For information about
               how Consul secures your Private Cloud communications, see
               Consul's Security Model.

mTLS           Mutually authenticated TLS.

service mesh   An overlay network (or a network within a network).

TLS            Transport Layer Security. An industry-standard authentication
               protocol for secure communications.

Before you begin


This section describes additional tasks (beyond the requirements) that you must
perform before you can install Apigee mTLS. These tasks include:

● Ensuring that you have not disabled localhost.
● Replacing your default firewall service (in many cases, firewalld) with
iptables on all nodes in your cluster.
● Backing up your Cassandra, ZooKeeper, and Postgres data.

Each of these tasks is described in the sections that follow.

Back up your Cassandra, ZooKeeper, and Postgres data


During the installation and configuration of Apigee mTLS, you will reinstall
Cassandra and Postgres on their respective nodes. As a result, you should back up
the data for the following components:

● apigee-cassandra
● apigee-postgresql
● apigee-zookeeper

While you do not reinstall ZooKeeper in your cluster, Apigee recommends that you
back up its data prior to installing Apigee mTLS.

For instructions on how to back up the data for these components, see How to back
up.

Ensure that the loopback address is enabled


Apigee mTLS requires that the localhost loopback address is enabled. The IP
address 127.0.0.1 must be routable and it must resolve to localhost on every
node in the cluster. The Consul proxy servers in the service mesh depend on this.

If you have previously disabled the localhost loopback address, you must re-
enable it on all nodes in your cluster.
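A quick way to confirm this requirement on a node uses standard Linux name-resolution tooling (an assumption about your environment; this is not an Apigee utility):

```shell
#!/bin/bash
# Confirm that localhost and the loopback address resolve to each other,
# as Apigee mTLS requires on every node.
getent hosts localhost     # expect a line mapping localhost to 127.0.0.1 (or ::1)
getent hosts 127.0.0.1     # expect a line mapping 127.0.0.1 to localhost
```

If either command prints nothing, check the node's /etc/hosts entries before proceeding.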

Replace the default firewall


The default firewall on CentOS and RedHat Enterprise Linux (RHEL) is firewalld.
However, Apigee mTLS requires that you use iptables as your firewall instead. As
a result, you must:

1. Disable and remove firewalld, if installed.


AND
2. Install iptables on each node and ensure that it's running.
Note: Do not disable and remove your firewall without also installing iptables.

This section describes how to perform these tasks.

The order of the nodes on which you do this does not matter.
To uninstall firewalld and ensure iptables is installed and running:

1. Log in to the node as the root user.
2. Stop all components by executing the following command:
   /opt/apigee/apigee-service/bin/apigee-all stop
3. Disable and uninstall firewalld:
   Note: If your cluster node doesn't use firewalld, then you can skip to step 4.
   a. Stop the firewalld service by executing the following command:
      systemctl stop firewalld
   b. Disable the firewalld service and mask it by executing the following
      commands:
      systemctl disable firewalld
      systemctl mask --now firewalld
   c. Remove the firewalld service with yum by executing the following
      command:
      yum remove firewalld
   d. Reset all services that have a failed status by executing the following
      command:
      systemctl reset-failed
   e. Reload all services by executing the following command:
      systemctl daemon-reload
4. Install iptables:
   a. Install the iptables and iptables-services packages by
      executing the following command:
      yum install iptables iptables-services
   b. Reload running services by executing the following command:
      systemctl daemon-reload
   c. Enable the iptables and ip6tables services by executing the following
      command:
      systemctl enable iptables ip6tables
   d. Start the iptables and ip6tables services by executing the
      following command:
      systemctl start iptables ip6tables
      Note: This command starts both the iptables and ip6tables services.
5. Repeat this process for each node in the cluster.

Install Apigee mTLS


After you have ensured that all nodes in your Private Cloud cluster meet all
requirements and you have performed the tasks in Before you begin, you can install
the apigee-mtls component.

(For information on performing an offline installation, see Install Edge apigee-setup


utility on a node with no external internet connection.)
To install Apigee mTLS:

1. Log in to a node as root (or use sudo with the commands). Which node you
   choose and the order in which you choose the nodes does not matter.
2. Stop all Apigee services by using the stop command, as the following
   example shows:
   /opt/apigee/apigee-service/bin/apigee-all stop
   Do not restart the components until after you install and configure Apigee
   mTLS.
3. Check that all services are stopped by using the status command, as the
   following example shows:
   /opt/apigee/apigee-service/bin/apigee-all status
4. Install Apigee mTLS by executing the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls install
   This command installs the following RPMs with your Edge for the Private
   Cloud installation:
   ● apigee-mtls
   ● apigee-mtls-consul
5. Repeat steps 1 through 4 on each node in the cluster.

After installing Apigee mTLS on all nodes in the cluster, perform the following
steps:

1. Configure apigee-mtls on all nodes as described in Configure Apigee


mTLS.
2. (Optional) Verify your configuration as described in Verify your configuration.

After you install Apigee mTLS on a node, when you restart components on that node,
you must start the apigee-mtls component before any other component on that
node.

Configure Apigee mTLS


After you install Apigee mTLS on all nodes in your cluster, you must configure and
initialize the apigee-mtls component. You do this by generating a certificate/key
pair and updating the configuration file on your administration machine. You then
deploy the same generated files and configuration file to all nodes in the cluster and
initialize the local apigee-mtls component.

Configure apigee-mtls (after initial installation)


This section describes how to configure Apigee mTLS directly after the initial
installation. For information on updating an existing installation of Apigee mTLS, see
Change an existing apigee-mtls configuration.

This section applies to installations in a single data center. For information on


configuring Apigee mTLS in a multiple data center setup, see Configure multiple data
centers for Apigee mTLS.

The general process for configuring apigee-mtls is as follows:

1. Update your configuration file: On your administration machine, update the
   configuration file to include the apigee-mtls settings.
2. Install Consul and generate credentials: Install Consul and (optionally) use it
   to generate TLS credentials (once only). Note that you can use your existing
   credentials or generate them with Consul.
   In addition, edit your Apigee mTLS configuration file to:
   a. Add the credentials information
   b. Define the cluster's topology
3. Distribute the credentials and configuration file: Distribute the same
   generated certificate/key pair and updated configuration file to all nodes in
   your cluster.
4. Initialize apigee-mtls: Initialize the apigee-mtls component on each node.

Each of these steps is described in the sections that follow.

Step 1: Update your configuration file


This section describes how to modify your configuration file to include mTLS
configuration properties. For more general information about the configuration file,
see Creating a configuration file.

After you update your configuration file with the mTLS-related properties, copy it to
all nodes in the cluster before you initialize the apigee-mtls component on those
nodes.

TIP Commands in this section refer to the configuration file as config_file, which indicates that
its location is variable, depending on where you store it on each node.

To update the configuration file:

1. On your administration machine, open the configuration file for editing.
2. Copy the following set of mTLS configuration properties and paste them into
   the configuration file:

   ALL_IP="ALL_PRIVATE_IPS_IN_CLUSTER"
   ZK_MTLS_HOSTS="ZOOKEEPER_PRIVATE_IPS"
   CASS_MTLS_HOSTS="CASSANDRA_PRIVATE_IPS"
   PG_MTLS_HOSTS="POSTGRES_PRIVATE_IPS"
   RT_MTLS_HOSTS="ROUTER_PRIVATE_IPS"
   MS_MTLS_HOSTS="MGMT_SERVER_PRIVATE_IPS"
   MP_MTLS_HOSTS="MESSAGE_PROCESSOR_PRIVATE_IPS"
   QP_MTLS_HOSTS="QPID_PRIVATE_IPS"
   LDAP_MTLS_HOSTS="OPENLDAP_PRIVATE_IPS"
   MTLS_ENCAPSULATE_LDAP="y"

   ENABLE_SIDECAR_PROXY="y"
   ENCRYPT_DATA="BASE64_GOSSIP_MESSAGE"
   PATH_TO_CA_CERT="PATH/TO/consul-agent-ca.pem"
   PATH_TO_CA_KEY="PATH/TO/consul-agent-ca-key.pem"
   APIGEE_MTLS_NUM_DAYS_CERT_VALID_FOR="NUMBER_OF_DAYS"

3. Set the value of each property to align with your configuration. The following
   table describes these configuration properties:

   Property       Description

   ALL_IP         A space-separated list of the private host IP addresses of
                  all nodes in the cluster. The order of IP addresses does
                  not matter, except that it must be the same in all
                  configuration files across the cluster. If you configure
                  Apigee mTLS for multiple data centers, list all IP
                  addresses for all hosts in all regions.

   LDAP_MTLS_HOSTS
                  The private host IP address of the OpenLDAP node in the
                  cluster.

   ZK_MTLS_HOSTS  A space-separated list of the private host IP addresses on
                  which ZooKeeper nodes are hosted in the cluster. Note
                  that, per the requirements, there must be at least three
                  ZooKeeper nodes.

   CASS_MTLS_HOSTS
                  A space-separated list of the private host IP addresses on
                  which Cassandra servers are hosted in the cluster.

   PG_MTLS_HOSTS  A space-separated list of the private host IP addresses on
                  which Postgres servers are hosted in the cluster.

   RT_MTLS_HOSTS  A space-separated list of the private host IP addresses on
                  which Routers are hosted in the cluster.

   MTLS_ENCAPSULATE_LDAP
                  Encrypts LDAP traffic between the Message Processor and
                  the LDAP server. Set to y.

   MS_MTLS_HOSTS  A space-separated list of the private host IP addresses on
                  which Management Server nodes are hosted in the cluster.

   MP_MTLS_HOSTS  A space-separated list of the private host IP addresses on
                  which Message Processors are hosted in the cluster.

   QP_MTLS_HOSTS  A space-separated list of the private host IP addresses on
                  which Qpid servers are hosted in the cluster.

   ENABLE_SIDECAR_PROXY
                  Determines whether Cassandra and Postgres should be aware
                  of the service mesh. You must set this value to "y".

   ENCRYPT_DATA   The base64-encoded encryption key used by Consul. You
                  generate this key by using the consul keygen command in
                  Step 2: Install Consul and generate credentials. This
                  value must be the same across all nodes in the cluster.

   PATH_TO_CA_CERT
                  The location of the certificate file on the node. You
                  generate this file in Step 2: Install Consul and generate
                  credentials. This location should be the same across all
                  nodes in the cluster so that the configuration files are
                  the same. The certificate must be X509v3 encoded.

   PATH_TO_CA_KEY The location of the key file on the node. You generate
                  this file in Step 2: Install Consul and generate
                  credentials. This location should be the same across all
                  nodes in the cluster so that the configuration files are
                  the same. The key file must be X509v3 encoded.

   APIGEE_MTLS_NUM_DAYS_CERT_VALID_FOR
                  The number of days a certificate is valid for when you
                  generate a custom certificate. The default value is 365.
                  The maximum value is 7865 days (about 21.5 years).

   In addition to the properties listed above, Apigee mTLS uses several
   additional properties when you install it on a multi-data center configuration.
   For more information, see Configure multiple data centers.
4. Be sure that ENABLE_SIDECAR_PROXY is set to "y".
5. Update the IP addresses in the host-related properties. Be sure to use the
   private IP addresses when referring to each node, not the public IP addresses.
   In later steps, you will set the values of the other properties such as
   ENCRYPT_DATA, PATH_TO_CA_CERT, and PATH_TO_CA_KEY; you do not set
   their values yet.
   When editing the apigee-mtls configuration properties, note the following:
   ● All properties are strings; you must wrap the values of all properties in
     single or double quotes.
   ● If a host-related value has more than one private IP address, separate
     each IP address with a space.
   ● Use private IP addresses, not host names or public IP addresses,
     for all host-related properties in the configuration file.
   ● The order of IP addresses in a property value must be the same
     in all configuration files across the cluster.
6. Save your changes to the configuration file.

Step 2: Install Consul and generate credentials


This section describes how to install Consul and generate the credentials that are
used by the mTLS-enabled components.

You must choose one of the following methods to generate your credentials:

● (Recommended) Create your own Certificate Authority (CA) using Consul, as
described in this section
● Use the credentials of an existing CA with Apigee mTLS (advanced)

About the credentials

The credentials consist of the following:

● Certificate: The TLS certificate
● Key: The TLS private key
● Gossip message: A base64-encoded encryption key

You generate a single version of each of these files once only. You then copy the key
and certificate files to all the nodes in your cluster, and add the encryption key to
your configuration file that you also copy to all nodes.

For more information about Consul's encryption implementation, see the following:

● Encryption
● Securing Agent Communication with TLS Encryption

Install Consul and generate credentials

To generate the credentials that Apigee mTLS uses for authenticating secure
communications among the nodes in your Private Cloud cluster, you use a local
Consul binary. As a result, you must install Consul on your administration machine
before you can generate credentials.

To install Consul and generate mTLS credentials:

1. On your administration machine, download the Consul 1.6.2 binary from the
   HashiCorp website.
2. Extract the contents of the downloaded archive file. For example, extract the
   contents to /opt/consul/.
   NOTE: Continue with this procedure to use Consul to generate the credentials
   (recommended). If you want to use your existing credentials, see Integrate custom
   certificates with Apigee mTLS (advanced).
3. On your administration machine, create a new Certificate Authority (CA) by
   executing the following command:
   /opt/consul/consul tls ca create
   Consul creates the following files, which form a certificate/key pair:
   ● consul-agent-ca.pem (certificate)
   ● consul-agent-ca-key.pem (key)
   By default, the certificate and key files are X509v3 encoded.
   Later, you will copy these files to all nodes in the cluster. At this time, however,
   you must only decide where on the nodes you will put these files. They should
   be in the same location on each node. For example, /opt/apigee/.
4. In the configuration file, set the value of PATH_TO_CA_CERT to the location to
   which you will copy the consul-agent-ca.pem file on the node. For
   example:
   PATH_TO_CA_CERT="/opt/apigee/consul-agent-ca.pem"
5. Set the value of PATH_TO_CA_KEY to the location to which you will copy the
   consul-agent-ca-key.pem file on the node. For example:
   PATH_TO_CA_KEY="/opt/apigee/consul-agent-ca-key.pem"
6. Create an encryption key for Consul by executing the following command:
   /opt/consul/consul keygen
   Consul outputs a randomized string that looks similar to the following:
   QbhgD+EXAMPLE+Y9u0742X/IqX3X429/x1cIQ+JsQvY=
7. Copy this generated string and set it as the value of the ENCRYPT_DATA
   property in your configuration file. For example:
   ENCRYPT_DATA="QbhgD+EXAMPLE+Y9u0742X/IqX3X429/x1cIQ+JsQvY="
   NOTE: The value above is an example encryption key. It is not the actual string you will
   use. You must generate your own.
8. Save your configuration file.

The following example shows the mTLS-related settings in a configuration file (with
example values):

...
IP1=10.126.0.121
IP2=10.126.0.124
IP3=10.126.0.125
IP4=10.126.0.127
IP5=10.126.0.130
ALL_IP="$IP1 $IP2 $IP3 $IP4 $IP5"
LDAP_MTLS_HOSTS="$IP3"
ZK_MTLS_HOSTS="$IP3 $IP4 $IP5"
CASS_MTLS_HOSTS="$IP3 $IP4 $IP5"
PG_MTLS_HOSTS="$IP2 $IP1"
RT_MTLS_HOSTS="$IP4 $IP5"
MS_MTLS_HOSTS="$IP3"
MP_MTLS_HOSTS="$IP4 $IP5"
QP_MTLS_HOSTS="$IP2 $IP1"
ENABLE_SIDECAR_PROXY="y"
ENCRYPT_DATA="QbhgD+EXAMPLE+Y9u0742X/IqX3X429/x1cIQ+JsQvY="
PATH_TO_CA_CERT="/opt/apigee/consul-agent-ca.pem"
PATH_TO_CA_KEY="/opt/apigee/consul-agent-ca-key.pem"
...
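Before distributing the files, you can optionally inspect the generated certificate. The following sketch assumes the openssl CLI is installed on your administration machine; it is not part of the Apigee or Consul tooling.

```shell
#!/bin/bash
# Print the version (X509v3 certificates show "Version: 3") and the
# validity window of a certificate such as consul-agent-ca.pem.
inspect_ca_cert() {
  openssl x509 -in "$1" -noout -text | grep -E "Version:|Not Before|Not After"
}

# Example usage (path from the configuration above):
# inspect_ca_cert /opt/apigee/consul-agent-ca.pem
```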

Step 3: Distribute the configuration file and credentials


Copy the following files to all nodes using a tool such as scp:

● Configuration file: Copy the updated version of this file and replace the
existing version on all nodes (not just the nodes running ZooKeeper).
● consul-agent-ca.pem: Copy to the location you specified as the value of
PATH_TO_CA_CERT in the configuration file.
● consul-agent-ca-key.pem: Copy to the location you specified as the value of
PATH_TO_CA_KEY in the configuration file.

Be sure that the locations to which you copy the certificate and key files match the
values you set in the configuration file in Step 2: Install Consul and generate
credentials.
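The copy step can be scripted. The sketch below is a dry run that only prints the scp commands for an example node list and example paths (the IPs, user, and file locations are assumptions; substitute your own), so you can review them before executing anything.

```shell
#!/bin/bash
# Dry run: print the scp commands that would distribute the updated
# configuration file and the Consul credentials to every node.
NODES="10.126.0.121 10.126.0.124 10.126.0.125 10.126.0.127 10.126.0.130"
CONFIG_FILE="/opt/silent.conf"   # example configuration file path
CRED_DIR="/opt/apigee"           # must match PATH_TO_CA_CERT / PATH_TO_CA_KEY

print_distribute_commands() {
  local node
  for node in $NODES; do
    echo "scp $CONFIG_FILE root@$node:$CONFIG_FILE"
    echo "scp $CRED_DIR/consul-agent-ca.pem root@$node:$CRED_DIR/"
    echo "scp $CRED_DIR/consul-agent-ca-key.pem root@$node:$CRED_DIR/"
  done
}

print_distribute_commands
```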

Step 4: Initialize apigee-mtls


After you installed apigee-mtls on each node, updated your configuration file, and
copied it and the credentials to all nodes in the cluster, you are ready to initialize the
apigee-mtls component on each node.

NOTE: This process also involves reinstalling Cassandra and Postgres on their respective nodes.

To initialize apigee-mtls:

1. Log in to a node in the cluster as the root user. You can perform these steps
   on the nodes in any order you want.
2. Make the apigee:apigee user an owner of the updated configuration file, as
   the following example shows:
   chown apigee:apigee config_file
3. Configure the apigee-mtls component by executing the following
   command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls setup -f config_file
4. (Optional) Execute the following command to verify that your setup was
   successful:
   /opt/apigee/apigee-mtls/lib/actions/iptables.sh validate
5. Start Apigee mTLS by executing the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls start
   After installing Apigee mTLS, you must start this component before any other
   components on the node.
6. (Cassandra nodes only) Cassandra requires additional arguments to work
   within the service mesh. As a result, you must execute the following
   commands on each Cassandra node:
   /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra setup -f config_file
   /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra configure
   /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
7. (Postgres nodes only) Postgres requires additional arguments to work within
   the service mesh. As a result, you must do the following on the Postgres
   nodes:
   a. (Primary only) Execute the following commands on the Postgres primary
      node:
      /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup -f config_file
      /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql configure
      /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restart
   b. (Standby only) Do the following on each Postgres standby node:
      i. Back up your existing Postgres data. To install Apigee mTLS, you must
         re-initialize the primary/standby nodes, so there will be data loss. For
         more information, see Set up primary/standby replication for Postgres.
      ii. Delete all Postgres data:
          rm -rf /opt/apigee/data/apigee-postgresql/pgdata
      iii. Configure Postgres and then restart it, as the following example
          shows:
          /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup -f config_file
          /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql configure
          /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restart
   NOTE: Due to network issues, you might need to run this step more than once before it
   succeeds. If you are installing on a multi-data center topology, use an absolute path for
   the configuration file.
8. Start the remaining Apigee components on the node in the start order, as the
   following example shows:
   /opt/apigee/apigee-service/bin/apigee-service component_name start
9. Repeat this process for each node in the cluster.
10. (Optional) Verify that the apigee-mtls initialization was successful by
    using one or more of the following methods:
    ● Validate the iptables configuration
    ● Verify remote proxy status
    ● Verify quorum status
    Each of these methods is described in Verify your configuration.

Change an existing apigee-mtls configuration


To customize an existing apigee-mtls configuration, you must uninstall and
reinstall apigee-mtls. You must also be sure to apply your customization across
all nodes.

CAUTION: Apigee mTLS setup and configuration is not idempotent. As a result, the apigee-
mtls configuration entries must be identical on all nodes: if you make a change on one node,
you must make that same change on all nodes. If you do not, the service mesh may fail.

To reiterate this point, when changing an existing Apigee mTLS configuration:

● If you change a configuration file, you must first uninstall apigee-mtls and
  then re-run setup or configure:

  # DO THIS:
  /opt/apigee/apigee-service/bin/apigee-service apigee-mtls uninstall

  # BEFORE YOU DO THIS:
  /opt/apigee/apigee-service/bin/apigee-service apigee-mtls setup -f file
  # OR:
  /opt/apigee/apigee-service/bin/apigee-service apigee-mtls configure

● You must uninstall and re-run setup or configure on all nodes in the
  cluster, not just a single node.

NOTE: Repeatedly executing setup or configure on apigee-mtls does not behave the
same as it does for other Apigee services. It will not necessarily generate a new configuration
(or modify the local configuration). As a result, you must uninstall Apigee mTLS when you
change the configuration.

Upgrade Apigee mTLS


This section explains how to upgrade Apigee mTLS to Apigee Edge for Private Cloud
v4.51.00.

Apigee mTLS is supported on Apigee Edge for Private Cloud versions 4.51.00, 4.50.00, 4.19.06,
and 4.19.01. You cannot use apigee-mtls on versions 4.18.* or lower.

Before you begin


Apigee recommends that you perform backups of the Cassandra, Postgres, and
ZooKeeper services before you upgrade Apigee mTLS.

Upgrading
To upgrade Apigee mTLS:

1. Stop all Apigee components, including apigee-mtls, by executing the
   following command:
   /opt/apigee/apigee-service/bin/apigee-all stop
2. Uninstall Apigee mTLS by executing the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls uninstall
3. Reinstall Apigee mTLS by executing the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls install
4. Set up Apigee mTLS by executing the following command:
   apigee-service apigee-mtls setup -f /opt/silent.conf
5. Start apigee-mtls on all hosts by executing the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls start
6. Start the remaining Private Cloud components by following the existing
   startup sequence documentation.
   Note: It may take 3 to 5 minutes for ZooKeeper to re-establish its quorum within the
   mTLS service mesh. This may or may not matter, depending on how much (or how
   little) automation is used during the restart process.
7. Follow the existing update/upgrade ordering steps for other components.

Verify Apigee mTLS installation


This section describes various ways to validate that the Apigee mTLS installation
was successful. You can also use the techniques described in this section when
troubleshooting issues with the cluster.

Validate the iptables configuration


You can validate that the apigee-mtls installation was successful by checking that
the iptables routes are working and that the rules are valid.

WARNING: Do not flush your iptables rules after enabling mTLS.

Before validating an iptables configuration, be sure that:

● You uninstalled firewalld from the node and replaced it with iptables, as
described in Replace the default firewall.
● You stopped all Apigee components on the node, including apigee-mtls.

To validate that the apigee-mtls configuration was successful with iptables:

1. Log in to a node in your cluster. The order in which you visit the nodes does
   not matter.
2. Stop all components on the node, as the following example shows:
   /opt/apigee/apigee-service/bin/apigee-all stop
3. Execute the validate command, as the following example shows:
   /opt/apigee/apigee-mtls/lib/actions/iptables.sh validate
   iptables sends messages to every port that either Consul or the local
   Apigee services use. If the script encounters an invalid rule or a failed route, it
   displays an error.
   If any Apigee services or Consul servers are running on the node, then this
   command will fail.
4. Start the apigee-mtls component before all other components on the node
   by executing the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls start
5. Start the remaining Apigee components on the node in the start order, as the
   following example shows:
   /opt/apigee/apigee-service/bin/apigee-service component_name start
6. Repeat these steps on all nodes in the cluster. Ideally, do this on all nodes
   within 5 minutes of having started on the first node.

Verify the remote proxy status


You can use Consul on ZooKeeper nodes to check if the ingress and egress proxy
services on all nodes are alive, healthy, and have joined the service mesh.

To check the proxy status of your nodes:

1. Log in to a node that is running ZooKeeper.
2. Execute the following command:
   systemctl status consul_server

Verify the quorum status


The mTLS installation includes adding the Consul proxy services to all nodes. As a
result, you should verify the quorum status of all ZooKeeper nodes.

To check the quorum status, log in to each node running ZooKeeper and execute the
following command:

/opt/apigee/apigee-mtls-consul/bin/consul operator raft list-peers

This command displays a list of the Consul instances and their statuses, as the
following example shows:

Node             ID                     Address            State     Voter  RaftProtocol
prc-test-0-1619  b59c1f44-6eb0-81d4-42  10.126.0.98:8300   leader    true   3
prc-test-1-1619  a4372a6e-8044-e587-43  10.126.0.146:8300  follower  true   3
prc-test-2-1619  71eb181f-4242-5353-44  10.126.0.100:8300  follower  true   3

For more information, see the following:

● Consul Operator Raft
● Running replicated ZooKeeper

In addition, you can get information about the cluster's health, including whether the
cluster's quorum has formed and if remote members are impairing functionality. To
do this, use the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-mtls status
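A quick sanity check on the list-peers output: a healthy quorum has exactly one leader and every peer voting. The sketch below applies awk filters to the sample output shown above; pipe your real output through the same filters.

```shell
#!/bin/bash
# Count leaders and non-voting peers in `consul operator raft list-peers`
# output. The here-document reproduces the sample output shown above.
sample_peers() {
  cat <<'EOF'
Node             ID                     Address            State     Voter  RaftProtocol
prc-test-0-1619  b59c1f44-6eb0-81d4-42  10.126.0.98:8300   leader    true   3
prc-test-1-1619  a4372a6e-8044-e587-43  10.126.0.146:8300  follower  true   3
prc-test-2-1619  71eb181f-4242-5353-44  10.126.0.100:8300  follower  true   3
EOF
}

leaders=$(sample_peers | awk '$4 == "leader"' | wc -l)
nonvoters=$(sample_peers | awk 'NR > 1 && $5 != "true"' | wc -l)
echo "leaders=$leaders nonvoters=$nonvoters"   # a healthy quorum: leaders=1 nonvoters=0
```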


Customize proxy port ranges
By default, Consul chooses the ports that its proxies use from the sparsely used
block of 10001 to 10800.

You can change these ports, but note the following:

● You must uninstall and reinstall apigee-mtls with the new values.
● Consul proxies cannot listen on the same ports as Apigee Services.
● Consul has only one port address space. This means that if proxy A on host A
listens on port 15000, then proxy B on host B cannot listen on port 15000.
● Be sure that you review Apigee port requirements to ensure no collisions
occur.

You can customize the ports that are used by the proxies to suit your particular
configuration.

Generating a report on port usage


When customizing proxy port ranges, it may be useful to generate a report on the
current port assignments. To do so, enter the following command:

apigee-service apigee-mtls report -f silent.conf > port_report.json

This generates a JSON file named port_report.json that contains information
about current port usage for each host. You can name the file whatever you wish.

Report structure

Below is a sample showing the structure of the generated report.

{
  "192.168.1.1": {
    "datacenter_member": "dc-1",
    "daemons": {
      "zookeeper-ingress": {
        "ingress": true,
        "name": "zk-2888-192-168-1-1",
        "listeners": [
          {
            "purpose": "terminate service mesh for zk port 2888",
            "ip_address": "192.168.1.1",
            "port": 10001
          }
        ]
      },
      "consul-server": {
        ...
      }
    }
  },
  "192.168.1.2": { ... }
}

In the example above, the ZooKeeper ingress listener named "zk-2888-192-168-1-1" on host 192.168.1.1 is assigned port 10001.
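You can mine the report for its assigned ports with standard text tools. The sketch below writes a minimal report mirroring the documented structure to a temporary file and then extracts every port number with grep; jq would work equally well if installed.

```shell
#!/bin/bash
# Extract every assigned proxy port from a port report. The sample file
# mirrors the structure shown above.
cat > /tmp/port_report.json <<'EOF'
{
  "192.168.1.1": {
    "datacenter_member": "dc-1",
    "daemons": {
      "zookeeper-ingress": {
        "ingress": true,
        "name": "zk-2888-192-168-1-1",
        "listeners": [
          {
            "purpose": "terminate service mesh for zk port 2888",
            "ip_address": "192.168.1.1",
            "port": 10001
          }
        ]
      }
    }
  }
}
EOF

# One port number per line:
grep -o '"port": [0-9]\+' /tmp/port_report.json | grep -o '[0-9]\+$'   # → 10001
```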

Customizing ports used by Apigee mTLS


To customize the ports used by Apigee mTLS:

1. Uninstall apigee-mtls if it is already installed, as shown below:

   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls uninstall

   For more information, see Uninstall Apigee mTLS.
2. On each node, open the silent configuration file. For more general information about this file, see Creating a configuration file.
   If you wish, you can run the command shown in Generating a report on port usage before the apigee-mtls setup is complete, to see what your silent configuration file will generate.
3. Add or change the values of the properties that set the ports. The following list gives the default port range for each component and the properties that you use to customize the ports used with Apigee mTLS:

   Apigee mTLS (default range: 10700 to 10799)
     Each host with an apigee-mtls installation requires a single port in the specified range. You define the port by setting the minimum and maximum port number to the same value with the following properties:
     SMI_PROXY_MINIMUM_EGRESS_PROXY_PORT
     SMI_PROXY_MAXIMUM_EGRESS_PROXY_PORT

   Cassandra (default range: 10100 to 10199)
     Each host with an apigee-cassandra installation requires two ports in the specified range. You define a custom range by setting the minimum and maximum port numbers with the following properties:
     SMI_PROXY_MINIMUM_CASSANDRA_PROXY_PORT
     SMI_PROXY_MAXIMUM_CASSANDRA_PROXY_PORT

   Message Processor (default range: 10500 to 10599)
     Each host with an apigee-message-processor installation requires two ports in the specified range. You define a custom range by setting the minimum and maximum port numbers with the following properties:
     SMI_PROXY_MINIMUM_MESSAGEPROCESSOR_PROXY_PORT
     SMI_PROXY_MAXIMUM_MESSAGEPROCESSOR_PROXY_PORT

   OpenLDAP (default range: 10200 to 10299)
     Each host with an apigee-ldap installation requires one port in the specified range. You define the port by setting the minimum and maximum port number to the same value with the following properties:
     SMI_PROXY_MINIMUM_LDAP_PROXY_PORT
     SMI_PROXY_MAXIMUM_LDAP_PROXY_PORT

   Postgres (default range: 10300 to 10399)
     Each host with an apigee-postgres installation requires three ports in the specified range. You define a custom range by setting the minimum and maximum port numbers with the following properties:
     SMI_PROXY_MINIMUM_POSTGRES_PROXY_PORT
     SMI_PROXY_MAXIMUM_POSTGRES_PROXY_PORT

   Qpid (default range: 10400 to 10499)
     Each host with an apigee-qpid installation requires two ports in the specified range. You define a custom range by setting the minimum and maximum port numbers with the following properties:
     SMI_PROXY_MINIMUM_QPID_PROXY_PORT
     SMI_PROXY_MAXIMUM_QPID_PROXY_PORT

   Router (default range: 10600 to 10699)
     Each host with an apigee-router installation requires two ports in the specified range. You define a custom range by setting the minimum and maximum port numbers with the following properties:
     RT_PROXY_PORT_MIN
     RT_PROXY_PORT_MAX

   ZooKeeper (default range: 10001 to 10099)
     Each host with an apigee-zookeeper installation requires three ports in the specified range. You define a custom range by setting the minimum and maximum port numbers with the following properties:
     SMI_PROXY_MINIMUM_ZOOKEEPER_PROXY_PORT
     SMI_PROXY_MAXIMUM_ZOOKEEPER_PROXY_PORT

   The following example defines custom values for the Cassandra ports:

   SMI_PROXY_MINIMUM_CASSANDRA_PROXY_PORT=10142
   SMI_PROXY_MAXIMUM_CASSANDRA_PROXY_PORT=10143
4. Save the configuration file.
5. Install apigee-mtls as described in Install Apigee mTLS.
6. Configure the apigee-mtls component by using the following command:

   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls setup -f config_file
7. Repeat these steps for each node in your cluster so that all configuration files are the same across all nodes.
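Before running setup, you can sanity-check a custom port pair locally. The following sketch uses the example Cassandra values; the bounds are the Cassandra defaults from the list above, and whether Apigee enforces staying inside the default range is an assumption here, so treat this as a local consistency check only.

```shell
# Default Cassandra proxy range (from the list above).
RANGE_LOW=10100
RANGE_HIGH=10199

# Example custom values.
SMI_PROXY_MINIMUM_CASSANDRA_PROXY_PORT=10142
SMI_PROXY_MAXIMUM_CASSANDRA_PROXY_PORT=10143

# Check that min <= max and that both sit inside the default range.
if [ "$SMI_PROXY_MINIMUM_CASSANDRA_PROXY_PORT" -ge "$RANGE_LOW" ] &&
   [ "$SMI_PROXY_MAXIMUM_CASSANDRA_PROXY_PORT" -le "$RANGE_HIGH" ] &&
   [ "$SMI_PROXY_MINIMUM_CASSANDRA_PROXY_PORT" -le "$SMI_PROXY_MAXIMUM_CASSANDRA_PROXY_PORT" ]; then
  echo "cassandra proxy port range OK"
else
  echo "cassandra proxy port range INVALID"
fi
```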

Configure apigee-monit with mTLS


The apigee-monit tool periodically polls Edge services; if a service is unavailable,
then apigee-monit attempts to restart it. With Apigee mTLS, apigee-monit
checks the status of consul_server, consul_egress, and the ingress proxies in
addition to the usual Edge services.

Before you can use apigee-monit with mTLS, you must change the owner of the
tool to root. To do this, execute the following commands on each node in the
cluster:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit stop


/opt/apigee/apigee-service/bin/apigee-service apigee-monit chown root root
/opt/apigee/apigee-service/bin/apigee-service apigee-monit start

For additional information about using monit (the tool on which apigee-monit is
based) with Consul, see:

● https://mmonit.com/monit/documentation/monit.html
● https://mmonit.com/wiki/
● Consul Checks
● Consul Server Health API

Configure multiple data centers for Apigee mTLS

Apigee mTLS supports multiple data centers so that you can scale your
configuration to include more complex topologies such as a 12-node clustered
installation.

TERMINOLOGY: When configuring mTLS for multiple data centers, each data center is
considered a region.

The installation process for mTLS on a multi-data center topology is the same as it is
for simpler topologies. However, you must ensure that your installation meets the
prerequisites and that you change your configuration files as described in the
sections that follow.

Prerequisites

To use Apigee mTLS with multiple data centers, you must:

● Uninstall apigee-mtls and reinstall it with the multiple data center configuration. You cannot modify an existing configuration. For more information, see Change an existing apigee-mtls configuration.
● Open port 8302 on every host that is running mTLS.
● Ensure that all mTLS cluster members have unique IP addresses, which are
consistent for all members of the cluster.
● When specifying configuration files, use absolute paths in your commands
where ambiguity might exist.
● Add multi-data center configuration properties, as described in Configuration
files for multiple data centers.

Configuration files for multiple data centers

To use Apigee mTLS with multiple data centers, you create a separate configuration
file for each data center.

In each of the configuration files:

1. Change the value of the ALL_IP configuration property to include all host IP
addresses in all regions.
2. Ensure that the value of the REGION property is the name of the current region
or data center. For example, "dc-1".
3. Add the following properties:

APIGEE_MTLS_MULTI_DC_ENABLE
  Whether or not you are using a multi-data center configuration. Set to "y" if you are configuring multiple data centers. Otherwise, omit it or set it to "n". By default, the property is omitted.

MTLS_LOCAL_REGION_IP
  A space-delimited list of all IP addresses used by the current region that you are configuring. For example, "10.0.0.1 10.0.0.2 10.0.0.3".
  For the second region in the configuration, use the MTLS_REMOTE_REGION_1_IP property.

MTLS_REMOTE_REGION_1_NAME
  The name of the second region in a multi-data center configuration. For example, "dc-2".
  In the second region's configuration file, you'll use "dc-2" for REGION and "dc-1" for MTLS_REMOTE_REGION_1_NAME.

MTLS_REMOTE_REGION_1_IP
  A space-delimited list of all IP addresses used by the second region in a multi-data center configuration. For example, "10.0.0.4 10.0.0.5 10.0.0.6".

The following example shows the configuration file for the first data center ("dc-1"). Properties that are specific to a multi-data center configuration appear at the end of the file:

DC-1 configuration file:


ALL_IP="10.126.0.114 10.126.0.113 10.126.0.96 10.126.0.132 10.126.0.133 10.126.0.104 10.126.0.106 10.126.0.105 10.126.0.95 10.126.0.102 10.126.0.100 10.126.0.112"
LDAP_MTLS_HOSTS="10.126.0.114 10.126.0.106"
ZK_MTLS_HOSTS="10.126.0.114 10.126.0.113 10.126.0.96 10.126.0.106 10.126.0.105 10.126.0.95"
CASS_MTLS_HOSTS="10.126.0.114 10.126.0.113 10.126.0.96 10.126.0.106 10.126.0.105 10.126.0.95"
PG_MTLS_HOSTS="10.126.0.104 10.126.0.112"
RT_MTLS_HOSTS="10.126.0.113 10.126.0.96 10.126.0.105 10.126.0.95"
MS_MTLS_HOSTS="10.126.0.114 10.126.0.106"
MP_MTLS_HOSTS="10.126.0.113 10.126.0.96 10.126.0.105 10.126.0.95"
QP_MTLS_HOSTS="10.126.0.132 10.126.0.133 10.126.0.102 10.126.0.100"
ENABLE_SIDECAR_PROXY="y"
ENCRYPT_DATA="zRNQ9lhRySNTfegiLLLfIQ=="
PATH_TO_CA_CERT="/opt/consul-agent-ca.pem"
PATH_TO_CA_KEY="/opt/consul-agent-ca-key.pem"
APIGEE_MTLS_MULTI_DC_ENABLE="y"
REGION="dc-1"
MTLS_LOCAL_REGION_IP="10.126.0.114 10.126.0.113 10.126.0.96 10.126.0.132 10.126.0.133 10.126.0.104"
MTLS_REMOTE_REGION_1_NAME="dc-2"
MTLS_REMOTE_REGION_1_IP="10.126.0.106 10.126.0.105 10.126.0.95 10.126.0.102 10.126.0.100 10.126.0.112"
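The companion file for the second data center can be derived mechanically from the DC-1 file: per the property descriptions earlier in this section, the two regions swap their local and remote roles. The following is a sketch of only the region-specific lines; every other property is identical to the DC-1 file.

```shell
# DC-2 configuration file (sketch): identical to the DC-1 file above except for
# these region-specific properties, which swap the local and remote values.
REGION="dc-2"
MTLS_LOCAL_REGION_IP="10.126.0.106 10.126.0.105 10.126.0.95 10.126.0.102 10.126.0.100 10.126.0.112"
MTLS_REMOTE_REGION_1_NAME="dc-1"
MTLS_REMOTE_REGION_1_IP="10.126.0.114 10.126.0.113 10.126.0.96 10.126.0.132 10.126.0.133 10.126.0.104"
```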

For information about the standard configuration properties, see Step 1: Update your
configuration file.

Test a multi-data center configuration


The consul operator raft list-peers command displays a list of IP addresses that are defined in MTLS_LOCAL_REGION_IP, meaning that they are located within the same data center.

The following examples show sample output from a raft list-peers command:

[ec2-user]# consul operator raft list-peers

Node ID Address State Voter RaftProtocol

prc-test-1-2119 d1361917-b244-42 10.126.0.151:8300 leader true 3

prc-test-0-2119 fad66fc3-22a0-43 10.126.0.155:8300 follower true 3

prc-test-2-2119 78847b12-dd83-44 10.126.0.159:8300 follower true 3

prc-test-6-2119 60bb50ac-37b6-52 10.126.0.152:8300 leader true 3

prc-test-7-2119 515bbdfd-e968-53 10.126.0.147:8300 follower true 3

prc-test-8-2119 d869c9a5-b4f6-54 10.126.0.158:8300 follower true 3

Apigee mTLS has been tested on two data centers. You can, however, specify
configurations of up to 17 data centers by using the following properties:

● MTLS_REMOTE_REGION_[1-17]_IP
● MTLS_REMOTE_REGION_[1-17]_NAME
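You can also compare saved list-peers output against your region IP lists with plain shell tools. A sketch, using a hypothetical saved-output file and the sample addresses from the output above:

```shell
# IPs expected in this data center (sample values).
MTLS_LOCAL_REGION_IP="10.126.0.151 10.126.0.155 10.126.0.159"

# Saved output of `consul operator raft list-peers` (abbreviated sample).
cat > /tmp/peers.txt <<'EOF'
prc-test-1-2119 d1361917-b244-42 10.126.0.151:8300 leader true 3
prc-test-0-2119 fad66fc3-22a0-43 10.126.0.155:8300 follower true 3
prc-test-2-2119 78847b12-dd83-44 10.126.0.159:8300 follower true 3
EOF

# Flag any peer address that is not in the local region list.
for addr in $(grep -o '[0-9.]*:8300' /tmp/peers.txt | cut -d: -f1); do
  case " $MTLS_LOCAL_REGION_IP " in
    *" $addr "*) ;;                      # peer belongs to this data center
    *) echo "unexpected peer: $addr" ;;
  esac
done
echo "peer check done"
```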

Uninstall Apigee mTLS


You can remove Apigee mTLS at any time. This section describes how to remove it
and to verify that it has been removed.

To roll back the Apigee mTLS installation:

1. Log in to a node in your cluster. The order in which you process the nodes does not matter.
2. Stop all components on the node, as the following example shows:

   /opt/apigee/apigee-service/bin/apigee-all stop
3. Uninstall the apigee-mtls service by executing the following command:

   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls uninstall
4. Start all components on the node in the start order, as the following example shows:

   /opt/apigee/apigee-service/bin/apigee-service component_name start
5. Repeat this process for each node in the cluster.

To verify that the uninstallation was successful, you can do the following (in any
order):

1. On each node that is running ZooKeeper, check that the Consul services are not in the /usr/lib/systemd/system directory:
   a. Change to the /usr/lib/systemd/system directory:

      cd /usr/lib/systemd/system
   b. Ensure that the following files are not in that directory:
      ● consul_egress.service
      ● consul_server.service
   c. If either of these files is in the /usr/lib/systemd/system directory, delete it.
2. On each node that is running ZooKeeper, check to see if the apigee-mtls and apigee-mtls-consul directories exist:
   a. Change to the Apigee root directory:

      cd ${APIGEE_ROOT:-/opt/apigee}
   b. Check the contents of the directory:

      ls
   c. Ensure that the following directories do not exist in this directory:
      ● apigee-mtls-version
      ● apigee-mtls-consul-version
   d. If either of these directories exists, delete it.
3. In the same directory, ensure that symlinks to the following have been removed:
   ● apigee-mtls
   ● apigee-mtls-consul

   To do this, use the find -L option, as the following example shows:

   find -L ./

   If symbolic links to these directories remain, you can remove them with either the rm or unlink command.
4. On each node that is running ZooKeeper, check that Consul has been removed by using the which command:

   which consul

   This command should respond with a message similar to the following:

   "/usr/bin/which: no consul in (...:/opt/apigee/apigee-adminapi-version/bin:...)"
5. Execute the following command as root or with sudo:

   iptables -t nat -L OUTPUT

   This command should display column headings but no data in the columns, as the following example shows:

   target prot opt source destination
6. Use yum to determine if the Apigee mTLS packages are installed:

   yum list installed

   This command should not display any packages matching the following:
   ● apigee-mtls-version
   ● apigee-mtls-consul-version
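The leftover-file portion of these checks can be scripted. A minimal sketch covering the Consul unit files; SYSTEMD_DIR is parameterized (an assumption made here for testing) so you can point it at another directory:

```shell
# Check a systemd directory for leftover Consul unit files after uninstalling
# apigee-mtls. Defaults to the directory named in the checks above.
SYSTEMD_DIR="${SYSTEMD_DIR:-/usr/lib/systemd/system}"
leftover=0
for f in consul_egress.service consul_server.service; do
  if [ -e "$SYSTEMD_DIR/$f" ]; then
    echo "still present: $SYSTEMD_DIR/$f"
    leftover=1
  fi
done
[ "$leftover" -eq 0 ] && echo "no consul unit files found"
```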

Use a custom certificate


Apigee mTLS requires that a certificate and key exist on each node in the cluster.

To generate the certificate, choose one of the following options:

● If you have your own, established Certificate Authority (CA): Use it to


generate your certificate and key, as described in this section.
● If you do not have a CA: Apigee recommends that you install Consul and use
it to generate the certificate/key pair. For more information, see Step 2: Install
Consul and generate credentials.
NOTE: While each node will have its own unique local key/cert pair, all nodes use the same CA's
key/cert pair. Both the local key/cert pair and the CA's key/cert pair must be present on each
node.

This process consists of the following steps, which you must perform on each node:

1. Create the private key for the node. Each node must have a unique private
key.
2. Create the signature config for the node. Each node must have its own
signature configuration file.
3. Build the request by converting the signature configuration file into a
signature request file.
4. Sign the request so that you can get a local key/cert pair for the node.
5. Integrate all the key/cert pairs with your nodes.

Apigee recommends that you perform steps 1 through 4 for all nodes, and then
perform step 5 for all nodes, rather than walking through all 5 steps for each node,
one at a time.


Step 1: Create the local private key


Each node must have its own version of a local private key.

To create a private key for a node, execute the following command on each node:

openssl genrsa -out KEY_FILE KEY_SIZE


Where:

● KEY_FILE is the path to the key file you want to create.


● KEY_SIZE is the size of the key in bits. Apigee recommends that you use either 4096 or 8192.

For example:

openssl genrsa -out local_key.pem 8192

This command creates an RSA private key file named local_key.pem.
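You can confirm that the generated key is well formed before moving on. A worked example of this step; 2048 bits is used here only to keep the example fast, so follow the 4096/8192-bit recommendation above for real installations:

```shell
# Work in a scratch directory so no files are left behind.
cd "$(mktemp -d)"

# Step 1: generate the node's private key (small key size for speed only).
openssl genrsa -out local_key.pem 2048

# Sanity-check the key; prints "RSA key ok" on success.
openssl rsa -in local_key.pem -check -noout
```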


Step 2: Create the local signature config file

After you create the local private key for a node, create its signature configuration
file. Each node must have its own version of the signature configuration file.

The following example shows the syntax for a signature configuration file:

[req]
distinguished_name = req_distinguished_name
attributes = req_attributes
prompt = no

[ req_distinguished_name ]
C=COUNTRY_NAME
ST=STATE_NAME
L=CITY_NAME
O=ORG_OR_BUSINESS_NAME
OU=ORG_UNIT
CN=ORG_DEPARTMENT

[ req_attributes ]

[ cert_ext ]
subjectKeyIdentifier=hash
keyUsage=critical,keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names

[alt_names]
DNS.1=localhost
DNS.2=ipv4-localhost
DNS.3=ipv6-localhost
DNS.4=cli.dc-1.consul
DNS.5=client.dc-1.consul
DNS.6=server.dc-1.consul
DNS.7=FQDN
# ADDITIONAL definitions, as needed:
DNS.8=ALT_FQDN_1
DNS.9=ALT_FQDN_2

# REQUIRED (at least 1 IP address plus localhost definitions)
IP.1=IP_ADDRESS
IP.2=0.0.0.0
IP.3=127.0.0.1
IP.4=::1
# ADDITIONAL definitions, as needed:
IP.5=ALT_IP_ADDRESS_1
IP.6=ALT_IP_ADDRESS_2
...

The following table describes the properties in the signature configuration file:

C
  A two-letter code for the country in which the server is running.

ST
  The state/province in which the server is running.

L
  The city in which the server is running.

O
  The name of the business running the server.

OU
  The organizational unit (sub-division) within the business.

CN
  The department within the business; used as the certificate's Common Name.

DNS.[1...]
  DNS names used by Consul. You must set DNS.1 through DNS.7.

  Use dc-1 for the data center, as the following example shows:

  ...
  [alt_names]
  DNS.1=localhost
  DNS.2=ipv4-localhost
  DNS.3=ipv6-localhost
  DNS.4=cli.dc-1.consul
  DNS.5=client.dc-1.consul
  DNS.6=server.dc-1.consul
  DNS.7=FQDN
  ...

  FQDN is the Fully Qualified Domain Name of the network server that will use this certificate. For example, nickdanger.la.corp.example.com.

  To get the FQDN on a Linux server, use the following command:

  hostname --fqdn

IP.[1...]
  Set IP.1 to the valid IPv4 address that every member of the cluster (including cross-data center traffic) observes this node as.

  In addition, Apigee requires that you include the following localhost definitions:

  # REQUIRED (at least 1 IP address plus localhost definitions)
  IP.1=216.3.128.12
  IP.2=0.0.0.0
  IP.3=127.0.0.1
  IP.4=::1

  If the node uses more than one IP address to communicate with other nodes, then specify additional IP addresses, each on a separate line; for example:

  # REQUIRED (at least 1 IP address plus localhost definitions)
  IP.1=216.3.128.12
  IP.2=0.0.0.0
  IP.3=127.0.0.1
  IP.4=::1
  # ADDITIONAL definitions, as needed:
  IP.5=192.0.2.0
  IP.6=192.0.2.1
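Because each node needs its own signature config, it can be convenient to stamp per-node files out of a shared template. A sketch; the template is abbreviated to just the per-node values, and the node names, IPs, and __PLACEHOLDER__ markers are hypothetical:

```shell
# Work in a scratch directory so no files are left behind.
cd "$(mktemp -d)"

# Abbreviated template: only the per-node fields (use the full template above).
cat > sig_template.cnf <<'EOF'
[alt_names]
DNS.7=__FQDN__
IP.1=__IP_ADDRESS__
EOF

# Stamp out one config per node (hypothetical "FQDN IP" pairs).
for node in "node1.example.com 192.0.2.10" "node2.example.com 192.0.2.11"; do
  set -- $node
  sed -e "s/__FQDN__/$1/" -e "s/__IP_ADDRESS__/$2/" sig_template.cnf > "sig_$2.cnf"
done

grep '^DNS.7=' sig_192.0.2.10.cnf   # DNS.7=node1.example.com
```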


Step 3: Build the signature request


After you create a signature configuration file for your local private key, you can
convert this configuration into a signature request file.

To create a signature request file, execute the following command:

openssl req \
-key LOCAL_PRIVATE_KEY \
-config SIGNATURE_CONFIG_FILE \
-new \
-nodes \
-days NUMBER_OF_DAYS \

-out SIG_REQUEST_FILE

Where:

● LOCAL_PRIVATE_KEY is the path to the node's local private key that you
created in Step 1: Create the local private key.
● SIGNATURE_CONFIG_FILE is the path to the node's file that you created in
Step 2: Create the local signature config file.
● NUMBER_OF_DAYS is the number of days for which the certificate is good.
The default is 30.
● SIG_REQUEST_FILE is the output location for the signature request file that
this command creates.

This command creates a *.csr file with the name you specified.


Step 4: Sign the signature request


After you generate a signature request file, you must sign the request.

To sign the *.csr file, execute the following command:

openssl x509 -req \


-CA CA_PUBLIC_CERT \
-CAkey CA_PRIVATE_KEY \
-extensions cert_ext \
-set_serial 1 \
-extfile SIGNATURE_CONFIGURATION \
-in SIGNATURE_REQUEST \
-out LOCAL_CERTIFICATE_OUTPUT

Where:

● CA_PUBLIC_CERT is the path to your Certificate Authority's public key.


● CA_PRIVATE_KEY is the path to your Certificate Authority's private key.
● SIGNATURE_CONFIGURATION is the path to the file that you created in Step 2:
Create the local signature config file.
● SIGNATURE_REQUEST is the path to the file that you created in Build the
signature request.
● LOCAL_CERTIFICATE_OUTPUT is the path at which this command creates the
node's certificate.

This command generates the node's certificate (local_cert.pem in the example below); together with the local_key.pem file from Step 1, it forms the node's local key/cert pair. You can use these files on a single node only in the Apigee mTLS installation. Each node must have its own key/cert pair.

The following example shows a successful response for this command:

user@host:~/certificate_example$ openssl x509 -req \


-CA certificate.pem \
-CAkey key.pem \
-extensions cert_ext \
-set_serial 1 \
-extfile request_for_sig \
-in temp_request.csr \
-out local_cert.pem

Signature ok
subject=C = US, ST = CA, L = San Jose, O = Google, OU = Google-Cloud, CN =
Apigee
Getting CA Private Key

user@host:~/certificate_example$ ls

certificate.pem key.pem local_cert.pem local_key.pem request_for_sig


temp_request.csr

Your custom certificate/key pair is good for 365 days by default. You can configure
the number of days by using the APIGEE_MTLS_NUM_DAYS_CERT_VALID_FOR
property, as described in Step 1: Update your configuration file.


Step 5: Integrate the custom certificate

A standard Apigee mTLS installation consists of the following general steps:

/opt/apigee/apigee-service apigee-mtls install


/opt/apigee/apigee-service apigee-mtls setup -f /opt/silent.conf

/opt/apigee/apigee-service apigee-mtls start

For custom certificate installation, you must perform additional steps, as described
in this section.
To integrate your custom certificates with Apigee mTLS, you copy the following files
to both the /certs and /source directories on each node in the cluster. You do
this during the installation:

● Generated local_key.pem (unique to each node)


● Generated local_cert.pem (unique to each node)
● The Certificate Authority's certificate.pem
● The Certificate Authority's key.pem

For example, the installation steps for Apigee mTLS with a custom certificate are as
follows:

/opt/apigee/apigee-service apigee-mtls install


/opt/apigee/apigee-service apigee-mtls setup -f /opt/silent.conf

# Copy the locally generated certificate
cp PATH_TO_LOCAL_CERT /opt/apigee/apigee-mtls/certs/local_cert.pem
cp PATH_TO_LOCAL_CERT /opt/apigee/apigee-mtls/source/certs/local_cert.pem

# Copy the locally generated key
cp PATH_TO_LOCAL_KEY /opt/apigee/apigee-mtls/certs/local_key.pem
cp PATH_TO_LOCAL_KEY /opt/apigee/apigee-mtls/source/certs/local_key.pem

# Copy the CA's certificate
cp PATH_TO_CA_CERT /opt/apigee/apigee-mtls/certs/ca_cert.pem
cp PATH_TO_CA_CERT /opt/apigee/apigee-mtls/source/certs/ca_cert.pem

# Copy the CA's key
cp PATH_TO_CA_KEY /opt/apigee/apigee-mtls/certs/ca_key.pem
cp PATH_TO_CA_KEY /opt/apigee/apigee-mtls/source/certs/ca_key.pem

/opt/apigee/apigee-service apigee-mtls start

NOTE: Be sure to mimic the permissions and ownership of the original files you are replacing. For example, if you replace a file owned by the apigee:apigee user, then set the same ownership on the new file.

This process overrides the certificates that were generated during the initial setup.
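The permission part of that note can be scripted by reading the mode of the file being overwritten before copying. A sketch; ownership still has to be handled separately with chown, and `stat -c` assumes GNU coreutils:

```shell
# Copy src over dst while keeping dst's original permission bits.
replace_keeping_mode() {
  src=$1
  dst=$2
  mode=$(stat -c '%a' "$dst")   # mode of the file being replaced
  cp "$src" "$dst"
  chmod "$mode" "$dst"
}
```

For example, `replace_keeping_mode local_cert.pem /opt/apigee/apigee-mtls/certs/local_cert.pem` keeps whatever mode the generated certificate file had.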

After you complete the integration of the new certificates, you can verify that they are
valid by using the instructions in Verify the certificate.
Verify the certificate

You can inspect the certificates on each node to ensure that the names you added
are present in the final output.

To inspect a certificate, execute the following command:

openssl x509 -in PATH_TO_CERT -ext subjectAltName -noout

Where PATH_TO_CERT is the certificate that you want to verify.

This command returns output similar to the following:

openssl x509 -in local_cert.pem -ext subjectAltName -noout

X509v3 Subject Alternative Name:


DNS:localhost, DNS:ipv4-localhost, DNS:ipv6-localhost, DNS:cli.dc-1.consul,
DNS:client.dc-1.consul,
DNS:server.dc-1.consul, DNS:prc-test-0-1337.edge-continious-integration.us-west-
1a.google.com,

IP Address:0.0.0.0, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, IP


Address:10.126.0.117

The list of names and IP addresses should match those that you added in Step 2:
Create the local signature config file.

Troubleshoot Apigee mTLS


This section describes some common errors and resolutions that you might
encounter when installing Apigee mTLS.

Certificate errors
Hosts running ZooKeeper might hang (block) when executing the following commands:

● consul operator raft list-peers


● systemctl consul_server state returns green
If this happens, try executing the following command:

journalctl -u consul_server | grep "x509: certificate signed by unknown authority"

If you find instances of the x509 error shown above in the response, then multiple CAs are being used and the Consul servers are failing to verify one another. In this case, Apigee recommends that you uninstall apigee-mtls and then reinstall it on all nodes in the cluster. For more information, see Uninstall Apigee mTLS.

SCALING
Adding a Router or Message Processor node
You can add a Router or Message Processor node to an existing installation. For a
list of the system requirements for a Router or Message Processor, see Installation
Requirements.

Add a Router
After you install Edge on the node, use the following procedure to add the Router:

1. Install Edge on the node using the internet or non-internet procedure as described in the Edge Installation manual.
2. At the command prompt, run the apigee-setup.sh script:

   /opt/apigee/apigee-setup/bin/setup.sh -p r -f configFile

   The -p r option specifies to install the Router. See Install Edge components on a node for information on creating a configFile.
3. When the installation completes, the script displays the UUID of the Router. If you need to determine the UUID later, use the following cURL command on the host where you installed the Router:

   curl http://router_IP:8081/v1/servers/self
4. If you are using Cassandra authentication, enable the Router to connect to Cassandra:

   /opt/apigee/apigee-service/bin/apigee-service edge-router store_cassandra_credentials -u username -p password

   For more information, see Enable Cassandra authentication.
5. To check the configuration, you can run the following cURL command:

   curl -v -u adminEmail:pword "http://ms_IP:8080/v1/servers?pod=pod_name"

   Where pod_name is gateway or your custom pod name. You should see the UUIDs of all Routers, including the Router that you just added.
   If the Router UUID does not appear in the output, run the following cURL command to add it:

   curl -v -u adminEmail:pword \
   -X POST http://ms_IP:8080/v1/regions/region_name/pods/pod_name/servers \
   -d "action=add&uuid=router_UUID&type=router"

   Replace ms_IP with the IP address of the Management Server, region_name with the default region name of dc-1 or your custom region name, and pod_name with gateway or your custom pod name.
6. To test the Router, make requests to your APIs through the IP address or DNS name of the Router. For example:

   http://newRouter_IP:port/v1/apiPath

   For example, if you completed the first tutorial where you created the weather API:

   http://newRouter_IP:port/v1/weather/forecastrss?w=12797282

Add a Message Processor


After you install Edge on the node, use the following procedure to add a Message
Processor:

1. Install Edge on the node using the internet or non-internet procedure as described in the Edge Installation manual.
2. At the command prompt, run the apigee-setup.sh script:

   /opt/apigee/apigee-setup/bin/setup.sh -p mp -f configFile

   The -p mp option specifies to install the Message Processor. See Install Edge components on a node for information on creating a configFile.
3. When the installation completes, the script displays the UUID of the Message Processor. Note that UUID, as you need it to complete the configuration process. If you need to determine the UUID, use the following cURL command on the host where you installed the Message Processor:

   curl http://mp_IP:8082/v1/servers/self
4. For each environment in each organization in your installation, use the following cURL command to associate the Message Processor with the environment:

   curl -v -u adminEmail:pword \
   -H "Content-Type: application/x-www-form-urlencoded" \
   -X POST "http://ms_IP:8080/v1/o/org_name/e/env_name/servers" \
   -d "action=add&uuid=mp_UUID"

   Replace ms_IP with the IP address of the Management Server and org_name and env_name with the organization and environment associated with the Message Processor.
5. To check the configuration, you can run the following cURL command:

   curl -v -u adminEmail:pword "http://ms_IP:8080/v1/o/org_name/e/env_name/servers"

   Where org_name is the name of your organization and env_name is the environment. You should see the UUIDs of all Message Processors associated with the organization and environment, including the Message Processor that you just added.
6. If you are using Cassandra authentication, enable the Message Processor to connect to Cassandra:

   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor store_cassandra_credentials -u username -p password

   For more information, see Enable Cassandra authentication.

Add both a Router and a Message Processor


After you install Edge on the node, use the following procedure to add a router and
Message Processor at the same time:

1. At the command prompt, run the apigee-setup script:

   /opt/apigee/apigee-setup/bin/setup.sh -p rmp -f configFile

   The -p rmp option specifies to install the Router and Message Processor. See Install Edge components on a node for information on creating a configFile.
2. Follow the procedures above to configure the Router and Message Processor.

Remove a server (Management Server/Message Processor/Router)

This section describes how to remove the following components from a Private
Cloud installation:

● Management Server
● Message Processor
● Router

For information on how to remove a Qpid server, see Remove a Qpid server.

To remove a Message Processor, Management Server, or Router:

1. (Message Processor only) Deregister the Message Processor from the organization's environments:

   curl -u adminEmail:pWord \
   -X POST http://management_IP:8080/v1/o/orgName/e/envName/servers \
   -d "uuid=uuid&region=regionName&pod=podName&action=remove"
2. Deregister the server's type:

   curl -u adminEmail:pWord \
   -X POST http://management_IP:8080/v1/servers \
   -d "type=[message-processor|router|management-server]&region=regionName&pod=podName&uuid=uuid&action=remove"
3. Delete the server:

   curl -u adminEmail:pWord -X DELETE http://management_IP:8080/v1/servers/uuid
4. (Message Processor only) After deregistering a Message Processor, restart the Routers:

   /opt/apigee/apigee-service/bin/apigee-service edge-router restart

Adding Cassandra nodes


This document describes how to add three new Cassandra nodes to an existing
Edge for Private Cloud installation.

While you can add one or two Cassandra nodes to an existing Edge installation,
Apigee recommends that you add three nodes at a time.

For a list of the system requirements for a Cassandra node, see Installation
requirements.

Existing Edge configuration


All the supported Edge topologies for a production system specify to use three
Cassandra nodes. The three nodes are specified to the CASS_HOSTS property in the
config file as shown below:

IP1=10.10.0.1
IP2=10.10.0.2
IP3=10.10.0.3
HOSTIP=$(hostname -i)
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=Secret123
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=1
APIGEE_LDAPPW=secret
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1"
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
SMTPPASSWORD=smtppwd

Note that the REGION property specifies the region name as "dc-1". You need that
information when adding the new Cassandra nodes.

Modifying the config file to add the three new Cassandra


nodes
In this example, the three new Cassandra nodes are at the following IP addresses:

● 10.10.0.14
● 10.10.0.15
● 10.10.0.16

You must first update Edge configuration file to add the new nodes:

IP1=10.10.0.1
IP2=10.10.0.2
IP3=10.10.0.3
# Add the new node IP addresses.
IP14=10.10.0.14
IP15=10.10.0.15
IP16=10.10.0.16
HOSTIP=$(hostname -i)
ADMIN_EMAIL=opdk@google.com
...
# Update CASS_HOSTS to add each new node after an existing node.
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1:1,1 $IP14:1,1 $IP2:1,1 $IP15:1,1 $IP3:1,1 $IP16:1,1"
Note: Add each new Cassandra node to CASS_HOSTS after an existing node.

This ensures that the existing nodes retain their initial token settings, and that the initial token of each new node falls between the token values of the existing nodes.
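Because the interleaving rule is mechanical, you can generate the new CASS_HOSTS value rather than hand-edit it. A sketch that pairs each new node with an existing one; it assumes equal-length lists and the :1,1 suffix used in this example:

```shell
EXISTING="10.10.0.1 10.10.0.2 10.10.0.3"
NEW="10.10.0.14 10.10.0.15 10.10.0.16"

# Interleave: place each new node immediately after an existing node.
set -- $NEW
hosts=""
for old in $EXISTING; do
  hosts="$hosts $old:1,1 $1:1,1"
  shift
done
hosts=${hosts# }
echo "CASS_HOSTS=\"$hosts\""
```

For the lists above, this prints the same CASS_HOSTS line shown in the updated config file.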
Configure Edge
After editing the config file, you must:

● Reconfigure the existing Cassandra nodes


● Install Cassandra on the new nodes
● Reconfigure the Management Server

Reconfigure the existing Cassandra nodes

On the existing Cassandra nodes:

1. Rerun the setup.sh script with the "-p c" profile and the new config file:

   /opt/apigee/apigee-setup/bin/setup.sh -p c -f updatedConfigFile

Install Cassandra on the new nodes

Use the procedure below to install Cassandra on the new nodes.

Note: Cassandra uses the fixed data-center names dc-1, dc-2, and so on, irrespective of the
region names specified in the REGION property of the configuration file for the regions in which
you install the new nodes. In step 2 of the following procedure, you must use Cassandra's data
center names (dc-1, dc-2, etc.) in the commands.

On each new Cassandra node:

1. Install Cassandra on the three nodes:
   a. Install apigee-setup on the first node as described in Install the Edge apigee-setup utility.
   b. Install Cassandra on the first node by using the updated config file:

      /opt/apigee/apigee-setup/bin/setup.sh -p c -f updatedConfigFile
   c. Repeat these two steps for the remaining new Cassandra nodes.
2. Rebuild the three new Cassandra nodes, specifying the region name to be the data center in which you are adding the node (dc-1, dc-2, and so on). In this example, it is dc-1:
   a. On the first node, run:

      /opt/apigee/apigee-cassandra/bin/nodetool [-u username -pw password] -h nodeIP rebuild dc-1

      Where nodeIP is the IP address of the Cassandra node. You only need to pass your username and password if you enabled JMX authentication for Cassandra.
   b. Repeat this step on the remaining new Cassandra nodes.

Reconfigure the Management Server

On a Management-Server node

1. Rerun setup.sh to update the Management Server for the newly added
Cassandra nodes:
/opt/apigee/apigee-setup/bin/setup.sh -p ms -f updatedConfigFile

Restart all Routers and Message Processors


1. On all Routers:
/opt/apigee/apigee-service/bin/apigee-service edge-router restart
2. On all Message Processors:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

Free disk space on existing Cassandra nodes

After you add a new node, you can use the nodetool cleanup command on the
pre-existing nodes to free up disk space. This command clears configuration tokens
that are no longer owned by the pre-existing Cassandra node.

To free up disk space on pre-existing Cassandra nodes after adding a new node,
execute the following command:

/opt/apigee/apigee-cassandra/bin/nodetool [-u username -pw password] -h cassandraIP cleanup

You only need to pass your username and password if you enabled JMX
authentication for Cassandra.

Verify rebuild
Use the following commands to verify that the rebuild was successful:

nodetool [-u username -pw password] -h nodeIP netstats

This command should indicate Mode: NORMAL when the node is up and the indexes
are built.

nodetool [-u username -pw password] -h nodeIP statusthrift

Should indicate that the thrift server is running, which allows Cassandra to accept
new client requests.

nodetool [-u username -pw password] -h nodeIP statusbinary

Should indicate that the native transport (or binary protocol) is running.

nodetool [-u username -pw password] -h nodeIP describecluster

Should show new nodes are using the same schema version as the older nodes.
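In a scripted health check, these outputs can be grepped rather than read by eye. A minimal sketch, using canned sample text in place of a live nodetool call:

```shell
# Sample output standing in for `nodetool -h nodeIP netstats`;
# in practice, capture the real command's output instead.
netstats_output="Mode: NORMAL
Not sending any streams."

# The node is ready when the Mode line reports NORMAL.
if echo "$netstats_output" | grep -qi "mode: normal"; then
  node_state="ready"
else
  node_state="not-ready"
fi
echo "$node_state"
```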

For more information on using nodetool, see the nodetool usage documentation.
AWS best practices
This section summarizes our best practices and provides our recommendations for
using OPDK with AWS cloud.

Cassandra is used as the backend datastore for almost all policies and is a
critical part of the Apigee Edge runtime environment. This document focuses on
optimizing Cassandra for the AWS environment.

Storage and I/O requirements


Most Cassandra I/O is sequential, but there are cases where you require random I/O.
An example is when reading Sorted String Tables during read operations. SSD is the
recommended storage mechanism for Cassandra, because it provides extremely
low-latency response times for random read operations while supplying enough
sequential write performance for compaction operations. Replication is also taken
into consideration here.

Many AWS EC2 instance types come with local instance storage, in which the drives
are physically attached to the hardware hosting the EC2 instance. Apigee
recommends leveraging SSD-backed instance stores when running Cassandra in
production. When you use an instance type with more than one SSD, you can combine
them with RAID0 to get more throughput and storage capacity.
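As a sketch, on an instance type with two local NVMe SSDs, a RAID0 array might be assembled as follows. The device names and mount point are hypothetical and vary by instance type; treat this as an illustration, not a prescribed layout:

```shell
# Stripe two instance-store SSDs into one RAID0 array (hypothetical device names).
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1

# Create a filesystem and mount it where Cassandra stores its data.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /opt/apigee/data
sudo mount /dev/md0 /opt/apigee/data
```

Remember that instance-store data does not survive an instance stop, so Cassandra replication across nodes remains the durability mechanism.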

Network requirements
Cassandra uses the Gossip protocol to exchange network topology information with
other nodes. Gossip traffic, plus the distributed nature of Cassandra (read and
write operations involve multiple nodes), results in a lot of data transfer
through the network. Apigee recommends using an instance type with at least
1 Gbps of network bandwidth, and more than 1 Gbps for production systems.

Use a VPC with a CIDR of /16. Because subnets in AWS cannot span more than one
Availability Zone (AZ), Apigee recommends the following:

● Create one subnet per AZ.
● Use three private subnets for your Apigee installation, with one Cassandra node in
each AZ. The three subnets should have enough CIDR blocks to accommodate
horizontal expansion of the Cassandra cluster.
● Configure three public subnets with dedicated NAT so that Cassandra can reach
the internet for software downloads and security updates.
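The one-subnet-per-AZ layout can be sketched mechanically. In this illustration the AZ names and the /24 carve-up of the /16 are hypothetical; in practice you would create the subnets through your provisioning tooling:

```shell
# Hypothetical AZs for the region; one private subnet per AZ.
AZS="us-east-1a us-east-1b us-east-1c"

# Carve one /24 out of a 10.0.0.0/16 VPC for each AZ.
i=0
plan=""
for az in $AZS; do
  plan="${plan:+$plan }10.0.$i.0/24:$az"
  i=$((i+1))
done
echo "$plan"
```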

Unlike legacy master-slave architectures, Cassandra has a master-less architecture
in which all nodes play an identical role, so there is no single point of failure.
Consider spreading your Cassandra nodes across multiple AZs to enable high
availability. By spreading nodes across AZs, you can maintain availability and
uptime in the case of a disaster.
Choosing an instance family
When looking at Cassandra CPU requirements, it's useful to note that insert-heavy
workloads in Cassandra are CPU-bound before becoming I/O-bound. In other words,
all write operations go to the commit log, but Cassandra is so efficient at writing
that the CPU becomes the limiting factor. Cassandra is highly concurrent and uses
as many CPU cores as are available.

Apigee recommends using an instance family that has a balance of both CPU and
memory. Specifically, we recommend using C5 family instances if available in your
AWS region, with C3 as a fallback option. In both families, the 4xlarge size is
often the optimal instance, providing the best price/performance.

Apigee also recommends using default tenancy for Cassandra instances. With
dedicated tenancy, when you scale to more than one instance per AZ, your Cassandra
instances are likely to be placed on the same underlying hardware; if that hardware
fails, you will likely lose all your instances in that AZ.

Recommendation summary
The following table summarizes the Apigee recommendations for using AWS with
Apigee Edge for Private Cloud:

Instance Family C5d (preferred) or C3

Instance Type C(x).4xlarge

Instance Store SSD (local storage) with RAID0

Tenancy Type default

Node Placement 1 Cassandra node per AZ

VPC and Subnet 1 subnet per AZ and a VPC per region

For more information, see Amazon instance types.

Adding ZooKeeper nodes


This document describes how to add three new ZooKeeper nodes to an existing
Edge for Private Cloud installation.

You can add one or more ZooKeeper nodes to an existing Edge installation; however,
you must make sure that you always have an odd number of ZooKeeper voter nodes,
as described below.

Existing Edge configuration


All the supported Edge topologies for a production system specify three
ZooKeeper nodes. The three nodes are specified by the ZK_HOSTS and
ZK_CLIENT_HOSTS properties in the config file, as shown below:

IP1=10.10.0.1
IP2=10.10.0.2
IP3=10.10.0.3
HOSTIP=$(hostname -i)
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=Secret123
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=1
APIGEE_LDAPPW=secret
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1"
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
SMTPPASSWORD=smtppwd

Where:

● ZK_HOSTS specifies the IP addresses or DNS names of the ZooKeeper nodes.


The IP addresses or DNS names must be listed in the same order on all
ZooKeeper nodes. In a multi-data center environment, list all ZooKeeper
nodes from both data centers.
● ZK_CLIENT_HOSTS specifies the IP addresses or DNS names of the
ZooKeeper nodes used by this data center. The IP addresses or DNS names
must be listed in the same order on all ZooKeeper nodes.
In a single data center installation, these are the same nodes as specified by
ZK_HOSTS. In a multi-data center environment, list only the ZooKeeper nodes
in this data center.
Modifying the config file to add the three new ZooKeeper
nodes
In this example, the three new ZooKeeper nodes are at the following IP addresses:

● 10.10.0.14
● 10.10.0.15
● 10.10.0.16

You must first update the Edge configuration file to add the new nodes:

IP1=10.10.0.1
IP2=10.10.0.2
IP3=10.10.0.3
# Add the new node IP addresses.
IP14=10.10.0.14
IP15=10.10.0.15
IP16=10.10.0.16
HOSTIP=$(hostname -i)
ADMIN_EMAIL=opdk@google.com
...
# Update ZK_HOSTS to add each new node after the existing nodes.
ZK_HOSTS="$IP1 $IP2 $IP3 $IP14 $IP15 $IP16:observer"
# Update ZK_CLIENT_HOSTS to add each new node after the existing nodes.
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3 $IP14 $IP15 $IP16"

Mark the last node in ZK_HOSTS with the :observer modifier. Nodes without
the :observer modifier are called "voters". You must have an odd number of
"voters" in your configuration. Therefore, in this configuration, you have five
ZooKeeper voters and one observer.

Note: While you can configure three observers and three voters, Apigee recommends that you
use five voters.

Make sure to add the nodes to both ZK_HOSTS and ZK_CLIENT_HOSTS in the same
order. However, omit the :observer modifier when setting ZK_CLIENT_HOSTS.
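The voter-count rule can be checked mechanically. A small sketch that derives ZK_CLIENT_HOSTS from ZK_HOSTS by stripping the :observer modifier, and counts the voters:

```shell
ZK_HOSTS="10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.14 10.10.0.15 10.10.0.16:observer"

# ZK_CLIENT_HOSTS is the same list with the :observer modifier removed.
ZK_CLIENT_HOSTS=$(echo "$ZK_HOSTS" | sed 's/:observer//g')

# Count voters: every host without the :observer modifier.
voters=0
for host in $ZK_HOSTS; do
  case "$host" in
    *:observer) ;;                # observer: does not vote
    *) voters=$((voters + 1)) ;;  # voter
  esac
done
echo "voters=$voters"
```

With this configuration the script reports five voters, which satisfies the odd-number requirement.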

Configure Edge
After editing the config file, you must perform all of the following tasks.

Install ZooKeeper on the new nodes


1. Install apigee-setup on the first node as described in Install the Edge
apigee-setup utility.
2. Install ZooKeeper on the first node by using the following commands:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper install
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper setup -f updatedConfigFile
3. Repeat steps 1 and 2 for the remaining new ZooKeeper nodes.

Reconfigure the existing ZooKeeper nodes

On the existing ZooKeeper nodes:

1. Rerun the setup command with the new config file:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper setup -f updatedConfigFile

Restart all ZooKeeper nodes

On all ZooKeeper nodes:

1. Restart the node:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart

You must restart all ZooKeeper nodes, but the order of restart does not matter.

Reconfigure the Management Server node

On the Management Server node:

1. Run the setup command:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server setup -f updatedConfigFile
2. Restart the Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart

Reconfigure all the Routers

On all Router nodes:

1. Run the setup command:
/opt/apigee/apigee-service/bin/apigee-service edge-router setup -f updatedConfigFile
2. Restart the Router:
/opt/apigee/apigee-service/bin/apigee-service edge-router restart

Reconfigure all the Message Processors

On all Message Processor nodes:

1. Run the setup command:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor setup -f updatedConfigFile
2. Restart the Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

Reconfigure all Qpid nodes

On all Qpid nodes:

1. Run the setup command:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server setup -f updatedConfigFile
2. Restart Qpid:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart

Reconfigure all Postgres nodes

On all Postgres nodes:

1. Run the setup command:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server setup -f updatedConfigFile
2. Restart Postgres:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart

Validate the installation


You can validate the installation of the new ZooKeeper nodes by sending commands
to port 2181 using netcat (nc) or telnet. For more info on ZooKeeper commands, see:
http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_zkCommands.

To validate:

1. If it is not installed on the ZooKeeper node, install nc:
sudo yum install nc
2. Run the following nc command:
echo stat | nc localhost 2181
3. Repeat steps 1 and 2 on each ZooKeeper node. In the Mode line of the output
for the nodes, one node should be designated as observer, one node as
leader, and the rest as followers.
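When checking many nodes, the Mode line can be extracted from the stat output programmatically. A sketch using canned text in place of the live `echo stat | nc nodeIP 2181` call:

```shell
# Sample `stat` output for one node; in practice, capture
# `echo stat | nc nodeIP 2181` instead.
stat_output="Zookeeper version: 3.4.5
Latency min/avg/max: 0/0/0
Mode: follower
Node count: 4"

# Pull the value of the Mode line (observer, leader, or follower).
mode=$(echo "$stat_output" | awk -F': ' '/^Mode:/ {print $2}')
echo "mode=$mode"
```

Running this against each node in turn lets you assert the expected one-observer, one-leader split in a script.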

Add or remove Qpid nodes


This document describes how to add and remove a Qpid server in an existing Edge
installation.
For information on how to remove a Management Server, Message Processor, or
Router, see Remove a server.

Add a Qpid server


To add a Qpid server:

1. On the Management Server, determine the name of the analytics and
consumer groups. Many of the commands below require that information.
By default, the name of the analytics group is axgroup-001, and the name of
the consumer group is consumer-group-001. In the silent config file for a
region, you can set the name of the analytics group by using the AXGROUP property.
If you are unsure of the names of the analytics and consumer groups, use the
following command to display them:
apigee-adminapi.sh analytics groups list --admin adminEmail --pwd adminPword --host localhost
This command returns the analytics group name in the name field, and the
consumer group name in the consumer-groups field.
2. Install the Edge apigee-setup utility on the node using the internet or
non-internet procedure as described in Install the Edge apigee-setup utility.
3. Use apigee-setup.sh to install Qpid on the node:
/opt/apigee/apigee-setup/bin/setup.sh -p qs -f configFile
The "-p qs" option specifies to install Qpid. See Install Edge components on a
node for information on creating a configFile.
When the installation completes, the script displays the UUID of the Qpid
server. If you need to determine the UUID later, use the following cURL
command on the host where you installed Qpid:
curl http://qpid_IP:8083/v1/servers/self
4. Add Qpid to the analytics group:
curl -u adminEmail:pword -H "Content-Type: application/json" \
  -X POST "http://ms_IP:8080/v1/analytics/groups/ax/AX_GROUP/servers?uuid=QPID_UUID&type=qpid-server"
In the output, you see the UUID of the Qpid node added to the qpid-server
property under axgroup-001:
{
  "name" : "axgroup-001",
  "properties" : {},
  "scopes" : [ "VALIDATE~test", "sgilson~prod" ],
  "uuids" : {
    "qpid-server" : [
      "d6d0480f-8393-465d-a2a1-b4a16a033c55",
      "8398a95c-3640-4bd9-bf7e-1eb89155810a"
    ]
  }
}
5. Add Qpid to the consumer group:
curl -u adminEmail:pword -H "Content-Type: application/json" \
  -X POST "http://ms_IP:8080/v1/analytics/groups/ax/AX_GROUP/consumer-groups/CONSUMER_GROUP/consumers?uuid=QPID_UUID"
In the output, you see the UUID of the Qpid node added to the consumer-groups
property under consumer-group-001:
"consumer-groups" : [ {
  "name" : "consumer-group-001",
  "consumers" : [
    "d6d0480f-8393-465d-a2a1-b4a16a033c55",
    "8398a95c-3640-4bd9-bf7e-1eb89155810a"
  ]
} ]
6. Restart all edge-qpid-server components on all nodes to make sure the
change is picked up by those components:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready

The installation is complete.
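When scripting these steps, the Qpid UUIDs can be pulled out of a saved API response instead of copying them by hand. A sketch, using an abbreviated version of the response shown above:

```shell
# Abbreviated response from the analytics group API.
response='"qpid-server" : [
  "d6d0480f-8393-465d-a2a1-b4a16a033c55",
  "8398a95c-3640-4bd9-bf7e-1eb89155810a"
]'

# Match UUID-shaped tokens: 8-4-4-4-12 hex digits.
uuids=$(echo "$response" | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}')
echo "$uuids"
```

In practice you would pipe the output of the curl call into the grep rather than using a saved string.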

Remove a Qpid server


To remove a Qpid node:

1. On the Management Server, determine the name of the analytics and
consumer groups. Many of the commands below require that information.
By default, the name of the analytics group is axgroup-001, and the name of
the consumer group is consumer-group-001. In the silent config file for a
region, you can set the name of the analytics group by using the AXGROUP property.
If you are unsure of the names of the analytics and consumer groups, use the
following command to display them:
apigee-adminapi.sh analytics groups list --admin adminEmail --pwd adminPword --host localhost
This command returns the analytics group name in the name field, and the
consumer group name in the consumer-groups field.
2. Remove Qpid from the consumer group:
curl -u adminEmail:pword -H "Content-Type: application/json" \
  -X DELETE "http://ms_IP:8080/v1/analytics/groups/ax/AX_GROUP/consumer-groups/CONSUMER_GROUP/consumers/QPID_UUID"
3. Remove Qpid from the analytics group:
curl -v -u adminEmail:pword \
  -X DELETE "http://ms_IP:8080/v1/analytics/groups/ax/AX_GROUP/servers?uuid=QPID_UUID&type=qpid-server"
4. Deregister the Qpid server from the Edge installation:
curl -u adminEmail:pword \
  -X POST http://ms_IP:8080/v1/servers -d "type=qpid-server&region=dc-1&pod=central&uuid=QPID_UUID&action=remove"
5. Remove the Qpid server from the Edge installation:
curl -u adminEmail:pword -X DELETE http://ms_IP:8080/v1/servers/QPID_UUID
6. Restart all edge-qpid-server components on all nodes to make sure the
change is picked up by those components:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready
7. Uninstall Qpid as described in Uninstalling Edge.

Adding a data center


This document describes how to add a data center (also called a region) to an
existing data center.

Note: If you require three or more data centers, please contact your Sales Rep or Account
Executive to engage with Apigee.

Caution: If you need to update Apigee Edge as well as add a data center, you must perform
these actions in separate procedures. Complete one procedure before starting the other.

Considerations before adding a data center


Before you add a data center, you must understand how to configure
OpenLDAP, ZooKeeper, Cassandra, and Postgres servers across the data centers.
You must also ensure that the necessary ports are open between the nodes in the
two data centers.

● OpenLDAP
Each data center has its own OpenLDAP server configured with replication
enabled. When you install the new data center, you must configure OpenLDAP
to use replication, and you must reconfigure the OpenLDAP server in the
existing data center to use replication.
● ZooKeeper
For the ZK_HOSTS property for both data centers, specify the IP addresses or
DNS names of all ZooKeeper nodes from both data centers, in the same order,
and mark any observer nodes with the :observer modifier. Nodes without the
:observer modifier are called "voters". You must have an odd number of
"voters" in your configuration.
In this topology, the ZooKeeper host on host 9 is the observer:

In the example configuration file shown below, node 9 is tagged with the
:observer modifier so that you have five voters: Nodes 1, 2, 3, 7, and 8.
For the ZK_CLIENT_HOSTS property for each data center, specify the IP
addresses or DNS names of only the ZooKeeper nodes in the data center, in
the same order, for all ZooKeeper nodes in the data center.
● Cassandra
All data centers must have the same number of Cassandra nodes.
For CASS_HOSTS for each data center, ensure that you specify all Cassandra
IP addresses (not DNS names) for both data centers. For data center 1, list
the Cassandra nodes in that data center first. For data center 2, list the
Cassandra nodes in that data center first. List the Cassandra nodes in the
same order for all Cassandra nodes in the data center.
All Cassandra nodes must have a suffix ':d,r', where d is the data center
number and r is the rack/availability zone; for example, 'ip:1,1' means data
center 1 and rack/availability zone 1, and 'ip:2,1' means data center 2 and
rack/availability zone 1.
For example, "192.168.124.201:1,1 192.168.124.202:1,1 192.168.124.203:1,1
192.168.124.204:2,1 192.168.124.205:2,1 192.168.124.206:2,1"
The first node in rack/availability zone 1 of each data center is used as
the seed server. In this deployment model, the Cassandra setup looks like this:

● Postgres
By default, Edge installs all Postgres nodes in master mode. However, when
you have multiple data centers, you configure the Postgres nodes to use
master-standby replication so that if the master node fails, the standby node
can continue to serve traffic. Typically, you configure the master Postgres
server in one data center, and the standby server in the second data center.
If the existing data center is already configured to have two Postgres nodes
running in master/standby mode, then as part of this procedure, deregister the
existing standby node and replace it with a standby node in the new data
center.
The following table shows the before and after Postgres configuration for
both scenarios:
Before: Single Master Postgres node in dc-1
After:  Master Postgres node in dc-1
        Standby Postgres node in dc-2

Before: Master Postgres node in dc-1
        Standby Postgres node in dc-1
After:  Master Postgres node in dc-1
        Standby Postgres node in dc-2
        Deregister old Standby Postgres node in dc-1

● Port requirements
You must ensure that the necessary ports are open between the nodes in the
two data centers. For a port diagram, see Port requirements.
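As a sanity check on the ':d,r' suffix convention described in the Cassandra section above, the per-data-center node counts can be computed directly from CASS_HOSTS; a sketch using the example value:

```shell
CASS_HOSTS="192.168.124.201:1,1 192.168.124.202:1,1 192.168.124.203:1,1 192.168.124.204:2,1 192.168.124.205:2,1 192.168.124.206:2,1"

# Count nodes per data center using the :d,r suffix
# (d = data center number, r = rack/availability zone).
dc1=0
dc2=0
for host in $CASS_HOSTS; do
  case "$host" in
    *:1,*) dc1=$((dc1 + 1)) ;;
    *:2,*) dc2=$((dc2 + 1)) ;;
  esac
done
echo "dc1=$dc1 dc2=$dc2"
```

Equal counts confirm the requirement that all data centers have the same number of Cassandra nodes.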

Updating the existing data center


Adding a data center requires you to perform the steps to install and configure the
new data center nodes, but it also requires you to update nodes in the original data
center. These modifications are necessary because you are adding new Cassandra
and ZooKeeper nodes in the new data center that have to be accessible to the
existing data center, and you have to reconfigure OpenLDAP to use replication.

Creating the configuration files


Shown below are the silent configuration files for the two data centers, where each
data center has 6 nodes as shown in Installation topologies. Notice that the config
file for dc-1 adds additional settings to:

● Configure OpenLDAP with replication across two OpenLDAP nodes.
● Add the new Cassandra and ZooKeeper nodes from dc-2 to the config file for dc-1.

# Datacenter 1
IP1=IPorDNSnameOfNode1
IP2=IPorDNSnameOfNode2
IP3=IPorDNSnameOfNode3
IP7=IPorDNSnameOfNode7
IP8=IPorDNSnameOfNode8
IP9=IPorDNSnameOfNode9
HOSTIP=$(hostname -i)
MSIP=$IP1
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=Secret123
LICENSE_FILE=/tmp/license.txt
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=2
LDAP_SID=1
LDAP_PEER=$IP7
APIGEE_LDAPPW=secret
MP_POD=gateway-1
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3 $IP7 $IP8 $IP9:observer"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1 $IP7:2,1 $IP8:2,1 $IP9:2,1"
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
SMTPPASSWORD=smtppwd
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

# Datacenter 2
IP1=IPorDNSnameOfNode1
IP2=IPorDNSnameOfNode2
IP3=IPorDNSnameOfNode3
IP7=IPorDNSnameOfNode7
IP8=IPorDNSnameOfNode8
IP9=IPorDNSnameOfNode9
HOSTIP=$(hostname -i)
MSIP=$IP7
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=Secret123
LICENSE_FILE=/tmp/license.txt
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=2
LDAP_SID=2
LDAP_PEER=$IP1
APIGEE_LDAPPW=secret
MP_POD=gateway-2
REGION=dc-2
ZK_HOSTS="$IP1 $IP2 $IP3 $IP7 $IP8 $IP9:observer"
ZK_CLIENT_HOSTS="$IP7 $IP8 $IP9"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP7:2,1 $IP8:2,1 $IP9:2,1 $IP1:1,1 $IP2:1,1 $IP3:1,1"
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
SMTPPASSWORD=smtppwd
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

Add a new data center


Use the procedure below to install a new data center.

Note: Cassandra uses the fixed data-center names dc-1, dc-2, and so on, irrespective of the
region names specified in the REGION property of the configuration file for the regions in which
you install the new nodes. In the following procedure, you must use Cassandra's data center
names (dc-1, dc-2, etc.) in the commands.

In the procedure, the data centers have the following names:

● dc-1: the existing data center


● dc-2: the new data center

To add a new data center:


1. On dc-1, rerun setup.sh on the original Cassandra nodes with the new dc-1
config file that includes the Cassandra nodes from dc-2:
2. /opt/apigee/apigee-setup/bin/setup.sh -p ds -f configFile1
3. On dc-1, rerun setup.sh on the Management Server node:
4. /opt/apigee/apigee-setup/bin/setup.sh -p ms -f configFile1
5. On dc-2, install apigee-setup on all nodes. See Install the Edge apigee-
setup utility for more info.
6. On dc-2, install Cassandra and ZooKeeper on the appropriate nodes:
7. /opt/apigee/apigee-setup/bin/setup.sh -p ds -f configFile2
8. On dc-2, run the rebuild command on all Cassandra nodes, specifying the
region name of dc-1:
9. /opt/apigee/apigee-cassandra/bin/nodetool [-u username -pw password] -h
cassIP rebuild dc-1
10. You only need to pass your username and password if you enabled JMX
authentication for Cassandra.
11. On dc-2, install the Management Server on the appropriate node:
12. /opt/apigee/apigee-setup/bin/setup.sh -p ms -f configFile2
13. On the Management Server node in dc-2, install apigee-provision, which
installs the apigee-adminapi.sh utility:
14. /opt/apigee/apigee-service/bin/apigee-service apigee-provision install
15. On dc-2, install the Routes and Message Processors on the appropriate
nodes:
16. /opt/apigee/apigee-setup/bin/setup.sh -p rmp -f configFile2
17. On dc-2, install Qpid on the appropriate nodes:
18. /opt/apigee/apigee-setup/bin/setup.sh -p qs -f configFile2
19. On dc-2, install Postgres on the appropriate node:
20. /opt/apigee/apigee-setup/bin/setup.sh -p ps -f configFile2
21. Setup Postgres master/standby for the Postgres nodes. The Postgres node in
dc-1 is the master, and the Postgres node in dc-2 is the standby server. Note: If
dc-1 is already configured to have two Postgres nodes running in master/standby mode,
then as part of this procedure, use the existing master Postgres node in dc-1 as the
master, and the Postgres node in dc-2 as the standby server. Later in this procedure, you
will deregister the existing Postgres standby server in dc-1.
a. On the master node in dc-1, edit the config file to set:

PG_MASTER=IPorDNSofDC1Master

b. PG_STANDBY=IPorDNSofDC2Standby
c. Enable replication on the new master:
d. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql
setup-replication-on-master -f configFIle
e. On the standby node in dc-2, edit the config file to set:

PG_MASTER=IPorDNSofDC1Master

f. PG_STANDBY=IPorDNSofDC2Standby
g. On the standby node in dc-2, stop the server and then delete any
existing Postgres data:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop

h. rm -rf /opt/apigee/data/apigee-postgresql/
i. If necessary, you can backup this data before deleting it.
j. Configure the standby node in dc-2:
k. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql
setup-replication-on-standby -f configFile
22. On dc-1, update the analytics configuration and configure the organizations.
a. On the Management Server node of dc-1, get the UUID of the Postgres
node:

apigee-adminapi.sh servers list -r dc-1 -p analytics -t postgres-server \

b. --admin adminEmail --pwd adminPword --host localhost


c. The UUID appears at the end of the returned data. Save that value.
Note: If dc-1 is configured to have two Postgres nodes running in master/standby
mode, you see two IP addresses and UUIDs in the output. Save both UUIDs. From
the IPs, you should be able to determine which UUID is for the master and which
is for the standby node.
d. On the Management Server node of dc-2, get the UUID of the Postgres
node as shown in the previous step. Save that value.
e. On the Management Server node of dc-1, determine the name of the
analytics and consumer groups. Many of the commands below require
that information.
By default, the name of the analytics group is "axgroup-001", and the
name of the consumer group is "consumer-group-001". In the silent
config file for a region, you can set the name of the analytics group by
using the AXGROUP property.
If you are unsure of the names of the analytics and consumer groups,
use the following command to display them:

apigee-adminapi.sh analytics groups list \

f. --admin adminEmail --pwd adminPword --host localhost


g. This command returns the analytics group name in the name field, and
the consumer group name in the consumer-groups field.
h. On the Management Server node of dc-1, remove the existing Postgres
server from the analytics group:
i. Remove the Postgres node from the consumer-group:

apigee-adminapi.sh analytics groups consumer_groups datastores remove \


-g axgroup-001 -c consumer-group-001 -u UUID \

ii. -Y --admin adminEmail --pwd adminPword --host localhost


iii. If dc-1 is configured to have two Postgres nodes running in
master/standby mode, remove both:
apigee-adminapi.sh analytics groups consumer_groups datastores remove \
-g axgroup-001 -c consumer-group-001 -u "UUID_1,UUID_2" \

iv. -Y --admin adminEmail --pwd adminPword --host localhost


v. Remove the Postgres node from the analytics group:

apigee-adminapi.sh analytics groups postgres_server remove \


-g axgroup-001 -u UUID -Y --admin adminEmail \

vi. --pwd adminPword --host localhost


vii. If dc-1 is configured to have two Postgres nodes running in
master/standby mode, remove both:

apigee-adminapi.sh analytics groups postgres_server \


remove -g axgroup-001 -u UUID1,UUID2 -Y --admin adminEmail \

viii. --pwd adminPword --host localhost


i. On the Management Server node of dc-1, add the new master/standby
Postgres servers to the analytics group:
i. Add both Postgres servers to the analytics group:

apigee-adminapi.sh analytics groups postgres_server \


add -g axgroup-001 -u "UUID_1,UUID_2" --admin adminEmail \

ii. --pwd adminPword --host localhost


iii. Ehere UUID_1 corresponds to the master Postgres node in dc-1,
and UUID_2 corresponds to the standby Postgres node in dc-2.
iv. Add the PG servers to the consumer-group as master/standby:

apigee-adminapi.sh analytics groups consumer_groups datastores \


add -g axgroup-001 -c consumer-group-001 -u "UUID_1,UUID_2" --admin
adminEmail \

v. --pwd adminPword --host localhost


j. Add the Qpid servers from dc-2 to the analytics group:
i. On the Management Server node of dc-1, get the UUIDs of the
Qpid nodes in dc-2:
apigee-adminapi.sh servers list -r dc-2 -p central -t qpid-server \

ii. --admin adminEmail --pwd adminPword --host localhost


iii. The UUIDs appear at the end of the returned data. Save those
values.
iv. On the Management Server node of dc-1, add the Qpid nodes to
the analytics group (run both commands):

apigee-adminapi.sh analytics groups qpid_server \


add -g axgroup-001 -u "UUID_1" --admin adminEmail \
--pwd adminPword --host localhost

apigee-adminapi.sh analytics groups qpid_server \


add -g axgroup-001 -u "UUID_2" --admin adminEmail \

v. --pwd adminPword --host localhost


vi. On the Management Server node of dc-1, add the Qpid nodes to
the consumer group (run both commands):

apigee-adminapi.sh analytics groups consumer_groups consumers \


add -g axgroup-001 -c consumer-group-001 -u "UUID_1" \
--admin adminEmail --pwd adminPword --host localhost

apigee-adminapi.sh analytics groups consumer_groups consumers \


add -g axgroup-001 -c consumer-group-001 -u "UUID_2" \

vii. --admin adminEmail --pwd adminPword --host localhost


k. Deregister and delete the old Postgres standby server from dc-1:
i. Deregister the existing dc-1 Postgres standby server:

apigee-adminapi.sh servers deregister -u UUID -r dc-1 \


-p analytics -t postgres-server -Y --admin adminEmail \

ii. --pwd adminPword --host localhost


iii. Where UUID is the old standby Postgres node in dc-1.
iv. Delete the existing dc-1 Postgres standby server:Note: This
command does not uninstall the Postgres server node. It only removes it
from the list of Edge nodes. You can later uninstall Postgres from the
node, if necessary.

apigee-adminapi.sh servers delete -u UUID \


v. --admin adminEmail --pwd adminPword --host localhost
23. Update Cassandra keyspaces with correct replication factor for the two data
centers. You only have to run this step once on any Cassandra server in either
data center:Note: The commands below all set the replication factor to "3", indicating
three Cassandra nodes in the cluster. Modify this value as necessary for your installation.
a. Start the Cassandra cqlsh utility:
b. /opt/apigee/apigee-cassandra/bin/cqlsh cassandraIP
c. Execute the following CQL commands at the "cqlsh>" prompt to set the
replication levels for Cassandra keyspaces:
i. ALTER KEYSPACE "identityzone" WITH replication = { 'class':
'NetworkTopologyStrategy', 'dc-1': '3','dc-2': '3' };
ii. ALTER KEYSPACE "system_traces" WITH replication = { 'class':
'NetworkTopologyStrategy', 'dc-1': '3','dc-2': '3' };
iii. View the keyspaces by using the command:
iv. select * from system.schema_keyspaces;
v. Exit cqlsh:
vi. exit
24. Run the following nodetool command on all Cassandra nodes in dc-1 to free
    memory:

    /opt/apigee/apigee-cassandra/bin/nodetool [-u username -pw password] -h cassandraIP cleanup

    You only need to pass your username and password if you enabled JMX
    authentication for Cassandra.
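If you manage several Cassandra nodes, you can generate the per-node cleanup commands for review first and then run them on each host (for example over ssh). A small sketch; the node IPs are placeholders:

```shell
# Emit the cleanup command for each dc-1 Cassandra node so the commands can be
# reviewed before being run on each host.
gen_cleanup_commands() {
  cass_nodes="10.0.0.1 10.0.0.2 10.0.0.3"   # dc-1 Cassandra node IPs (placeholders)
  for ip in $cass_nodes; do
    echo "/opt/apigee/apigee-cassandra/bin/nodetool -h $ip cleanup"
  done
}

gen_cleanup_commands
```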
27. For each organization and for each environment that you want to support
    across data centers:
    a. On the Management Server node of dc-1, add the new MP_POD to the
       organization:

       apigee-adminapi.sh orgs pods add -o orgName -r dc-2 -p gateway-2 \
         --admin adminEmail --pwd adminPword --host localhost

       Where gateway-2 is the name of the gateway pod as defined by the
       MP_POD property in the dc-2 config file.
    b. Add the new Message Processors to the org and environment:
       i. On the Management Server node of dc-2, get the UUIDs of the
          Message Processor nodes in dc-2:

          apigee-adminapi.sh servers list -r dc-2 -p gateway-2 \
            -t message-processor --admin adminEmail --pwd adminPword \
            --host localhost

          The UUIDs appear at the end of the returned data. Save those
          values.
       ii. On the Management Server node of dc-1, for each Message
          Processor in dc-2, add the Message Processor to an environment
          for the org:

          apigee-adminapi.sh orgs envs servers add -o orgName -e envName \
            -u UUID --admin adminEmail --pwd adminPword --host localhost

    c. On the Management Server node of dc-1, check the organization:

       apigee-adminapi.sh orgs apis deployments -o orgName -a apiProxyName \
         --admin adminEmail --pwd adminPword --host localhost

       Where apiProxyName is the name of an API proxy deployed in the
       organization.
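The per-environment binding step above repeats one command per Message Processor UUID; when there are several, a short loop can emit the commands for review first. A sketch; the org, environment, and UUID values are placeholders:

```shell
# Emit one "orgs envs servers add" command per dc-2 Message Processor UUID.
gen_mp_bind_commands() {
  org="myorg"; env="prod"            # placeholder org/env names
  mp_uuids="uuid-mp-1 uuid-mp-2"     # placeholder dc-2 Message Processor UUIDs
  for u in $mp_uuids; do
    echo "apigee-adminapi.sh orgs envs servers add -o $org -e $env -u $u --admin adminEmail --pwd adminPword --host localhost"
  done
}

gen_mp_bind_commands
```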

Decommissioning a data center


At times, you might need to decommission a data center. For example, if you are
upgrading your operating system, you need to install the new operating system in a
new data center and then decommission the old data center. The following sections
present an example of decommissioning a data center, in which there are two data
centers, dc-1 and dc-2, on a 12-node clustered installation:

● dc-1 is the data center to be decommissioned.


● dc-2 is a second data center, which is used in the decommissioning
procedure.

If you are upgrading your operating system, dc-2 could be the data center in which
you have installed the new version of the operating system (OS). However, installing
a new OS isn't required to decommission a data center.

Note: The decommissioning procedure described below is for two data centers. If you have more
than two data centers, Apigee recommends that you work with a Customer Success partner to
decommission a data center.

Considerations before decommissioning a data center


Keep the following considerations in mind when you decommission a data center:

● Block all runtime and management traffic to the data center being
decommissioned and redirect it to other data centers.
● After decommissioning the data center, you will have reduced capacity in your
Apigee cluster. To make up for it, consider increasing capacity in the
remaining data centers or adding data centers after decommissioning.
● During the decommission process, there is a potential for analytics data loss,
depending on which analytics components are installed in the data center
being decommissioned. You can find more details in Add or remove Qpid
nodes.
● Before you decommission a data center, you should understand how all the
components are configured across all data centers, especially the OpenLDAP,
ZooKeeper, Cassandra, and Postgres servers. You should also take backups
of all components and their configurations.

Before you start


Caution:

● If you restart a component after deregistering it, without uninstalling the component
completely, the component will register itself with Edge again. You must uninstall the
component completely, or isolate the node's network, once it is decommissioned.
● Uninstalling a component will not remove data from /opt/apigee/data.
● During the decommissioning process you will be required to reconfigure multiple Edge
components. While an Edge component is being reconfigured, that specific component
on that specific node will be unavailable.
● Management Server: All the decommission steps are highly dependent on the
Management Server. If you have only one Management Server available, we
recommend that you install a new Management Server component on a data
center other than dc-1 before decommissioning the Management Server on
dc-1, and make sure that one of the Management Servers is always available.
● Router: Before decommissioning a Router, disable reachability of the Routers by
blocking port 15999. Ensure no runtime traffic is being directed at the Routers
being decommissioned.
● Cassandra and ZooKeeper: The sections below describe how to
decommission dc-1 in a two data center setup. If you have more than two
data centers, make sure to remove all references to the node being
decommissioned (dc-1 in this case) from all silent configuration files across
all the remaining data centers. For Cassandra nodes that are to be
decommissioned, drop those hosts from CASS_HOSTS. The remaining
Cassandra nodes should remain in the original ordering of CASS_HOSTS.
● Postgres: If you decommission the Postgres master, make sure to promote one
of the available standby nodes to be the new Postgres master. Although the Qpid
server keeps a buffer in its queue, if the Postgres master is unavailable for an
extended time, you risk losing analytics data.
Caution: You must remove and add PostgreSQL servers from the analytics group and the
consumer group at the same time. You might lose some analytics data if there is a
substantial delay between the two steps.
Suggestion: The Message Processor has a small queue where it keeps analytics data in
memory for a brief time, which you can use to help avoid loss of data. Before you make
changes to the analytics group, block port 5672 in the data center that is taking traffic,
and make sure that you quickly re-enable this port after making the changes, as there is a
chance of data overflow.
For example, to temporarily stop analytics traffic from Message Processors to the Qpid
server:
1. Enter:

   iptables -A INPUT -p tcp --destination-port 5672 ! -s `hostname` -i eth0 -j DROP

2. Follow the axgroup change steps.
3. Remove the iptables rule after the axgroup changes:

   iptables -F

Prerequisites
● Before decommissioning any component, we recommend that you perform a
complete backup of all nodes. Use the procedure for your current version of
Edge to perform the backup. For more information on backup, see Backup
and restore.
Note: If you have multiple Cassandra or ZooKeeper nodes, back them up one
at a time, as the backup process temporarily shuts down ZooKeeper.
● Ensure that Edge is up and running before decommissioning, using the
command:
● /opt/apigee/apigee-service/bin/apigee-all status
● Make sure that no runtime traffic is currently arriving at the data center you
are decommissioning.

Order of decommissioning components

If you install Edge for Private Cloud on multiple nodes, you should decommission
Edge components on those nodes in the following order:

1. Edge UI (edge-ui)
2. Management Server (edge-management-server)
3. OpenLDAP (apigee-openldap)
4. Router (edge-router)
5. Message Processor (edge-message-processor)
6. Qpid Server and Qpidd (edge-qpid-server and apigee-qpidd)
7. Postgres and PostgreSQL database (edge-postgres-server and apigee-postgresql)
8. ZooKeeper (apigee-zookeeper)
9. Cassandra (apigee-cassandra)

The following sections explain how to decommission each component.

Edge UI
To stop and uninstall the Edge UI component of dc-1, enter the following commands:

/opt/apigee/apigee-service/bin/apigee-service edge-ui stop


/opt/apigee/apigee-service/bin/apigee-service edge-ui uninstall

Management Server
Caution: If you have only one Management Server available, we recommend that you install a
new Management Server component on a data center other than dc-1 before decommissioning
the Management Server on dc-1, and make sure that one of the Management Servers is always
available.

To decommission the Management Server on dc-1, do the following steps:

1. Stop the Management Server on dc-1:

   /opt/apigee/apigee-service/bin/apigee-service edge-management-server stop

2. Find the UUID of the Management Server registered in dc-1:

   curl -u <AdminEmailID>:'<AdminPassword>' \
     -X GET "http://{MS_IP}:8080/v1/servers?pod=central&region=dc-1&type=management-server"

3. Deregister the server's type:

   curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
     -d "type=management-server&region=dc-1&pod=central&uuid=UUID&action=remove"

4. Delete the server.
   Note: If other components are also installed on this server, deregister all
   of them first before deleting the UUID.

   curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/{UUID}

5. Uninstall the Management Server component on dc-1:

   /opt/apigee/apigee-service/bin/apigee-service edge-management-server uninstall
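The deregister calls on this page all share the same POST body shape, so a small helper can compose the request and, with a dry-run flag, print the curl command for review before anything is sent. This is a sketch; MS_IP, the admin credentials, and the UUID values are placeholders:

```shell
# Compose the server-deregistration request used throughout this page.
# With DRY_RUN=1 the function prints the curl command instead of executing it.
deregister_server() {
  # $1=type  $2=region  $3=pod  $4=uuid
  body="type=$1&region=$2&pod=$3&uuid=$4&action=remove"
  cmd="curl -u $ADMIN_EMAIL:'$ADMIN_PASSWORD' -X POST http://$MS_IP:8080/v1/servers -d \"$body\""
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    eval "$cmd"
  fi
}

# Review the management-server deregistration request without sending it
# (all values below are placeholders):
MS_IP=192.0.2.10
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=secret
DRY_RUN=1 deregister_server management-server dc-1 central UUID
```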

OpenLDAP
This section explains how to decommission OpenLDAP on dc-1.

Note: If you have more than two data centers, see Setups with more than two data
centers below.

To decommission OpenLDAP on dc-1, do the following steps:

1. Back up the dc-1 OpenLDAP node by following the steps in How to back up.
2. Break the data replication between the two data centers, dc-1 and dc-2, by
   executing the following steps in both data centers.
   Note: If your setup has more than two data centers, you need to:
   ● Identify which OpenLDAP node is replicating to the node being decommissioned.
   ● Change the replication for that node to replicate to a different node to
     complete the replication circle.
   For example, suppose you have three data centers, in which node 1 replicates to
   node 2, node 2 replicates to node 3, and node 3 replicates to node 1. If you are
   decommissioning node 1, you need to break the replication on node 1 and also
   modify the replication for node 3 to replicate to node 2.
   a. Check the present state:

      ldapsearch -H ldap://{HOST}:{PORT} -LLL -x -b "cn=config" -D \
        "cn=admin,cn=config" -w {credentials} -o ldif-wrap=no 'olcSyncRepl' | \
        grep olcSyncrepl

      The output should be similar to the following:

      olcSyncrepl: {0}rid=001 provider=ldap://{HOST}:{PORT}/
      binddn="cn=manager,dc=apigee,dc=com" bindmethod=simple
      credentials={credentials} searchbase="dc=apigee,dc=com" attrs="*,+"
      type=refreshAndPersist retry="60 1 300 12 7200 +" timeout=1

   b. Create a file break_repl.ldif containing the following commands:

      dn: olcDatabase={2}bdb,cn=config
      changetype: modify
      delete: olcSyncRepl

      dn: olcDatabase={2}bdb,cn=config
      changetype: modify
      delete: olcMirrorMode

   c. Run the ldapmodify command:

      ldapmodify -x -w {credentials} -D "cn=admin,cn=config" -H \
        "ldap://{HOST}:{PORT}/" -f path/to/file/break_repl.ldif

      The output should be similar to the following:

      modifying entry "olcDatabase={2}bdb,cn=config"
      modifying entry "olcDatabase={2}bdb,cn=config"


4. You can verify that dc-2 is no longer replicating to dc-1 by creating an entry in
   the dc-2 LDAP and ensuring that it doesn't show up in the LDAP of dc-1.
   Optionally, you can follow the steps below, which create a read-only user in
   the dc-2 OpenLDAP node and then check whether the user is replicated or not.
   The user is subsequently deleted.
   a. Create a file readonly-user.ldif in dc-2 with the following contents:

      dn: uid=readonly-user,ou=users,ou=global,dc=apigee,dc=com
      objectClass: organizationalPerson
      objectClass: person
      objectClass: inetOrgPerson
      objectClass: top
      cn: readonly-user
      sn: readonly-user
      userPassword: {testPassword}

   b. Add the user with the ldapadd command in dc-2:

      ldapadd -H ldap://{HOST}:{PORT} -w {credentials} -D \
        "cn=manager,dc=apigee,dc=com" -f path/to/file/readonly-user.ldif

      The output will be similar to:

      adding new entry "uid=readonly-user,ou=users,ou=global,dc=apigee,dc=com"

   c. Search for the user in dc-1 to make sure the user is not replicated. If
      the user is not present in dc-1, you can be sure that the two LDAPs are
      no longer replicating:

      ldapsearch -H ldap://{HOST}:{PORT} -x -w {credentials} -D \
        "cn=manager,dc=apigee,dc=com" \
        -b uid=readonly-user,ou=users,ou=global,dc=apigee,dc=com -LLL

      The output should be similar to the following:

      No such object (32)
      Matched DN: ou=users,ou=global,dc=apigee,dc=com

   d. Remove the read-only user you added previously:

      ldapdelete -v -H ldap://{HOST}:{PORT} -w {credentials} -D \
        "cn=manager,dc=apigee,dc=com" \
        "uid=readonly-user,ou=users,ou=global,dc=apigee,dc=com"

5. Stop OpenLDAP in dc-1:

   /opt/apigee/apigee-service/bin/apigee-service apigee-openldap stop

6. Uninstall the OpenLDAP component on dc-1:

   /opt/apigee/apigee-service/bin/apigee-service apigee-openldap uninstall
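After breaking replication, the ldapsearch shown earlier should return no olcSyncrepl attributes. A small filter, offered as a sketch, counts them from captured ldapsearch output; a count of 0 confirms the replication configuration was removed:

```shell
# Count olcSyncrepl attributes in an ldapsearch dump read from stdin.
# Feed it the output of the ldapsearch command shown earlier in this section.
count_syncrepl() {
  grep -c '^olcSyncrepl:'
}

# Example with a captured line of output:
printf 'olcSyncrepl: {0}rid=001 provider=ldap://host:389/\n' | count_syncrepl
```

Note that grep exits non-zero when the count is 0, so guard the call accordingly in scripts.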

Router
This section explains how to decommission a Router. See Remove a server for more
details about removing the Router.

Note: Before decommissioning the Router, disable reachability of the Routers by blocking
port 15999. Ensure no runtime traffic is being directed at the Routers being decommissioned.

The following steps decommission the Router from dc-1. If there are multiple Router
nodes configured in dc-1, perform the steps on all Router nodes, one at a time.

Note: These steps assume that the Router's health-check port 15999 is configured in
your load balancer, and that blocking port 15999 will make the Router unreachable. You
may need root access to block the port.

To decommission a Router, do the following steps:

1. Disable the reachability of the Routers by blocking port 15999, the health-check
   port. Ensure that runtime traffic is blocked on this data center:

   iptables -A INPUT -i eth0 -p tcp --dport 15999 -j REJECT

2. Verify that the Router is no longer reachable:

   curl -vvv -X GET http://{ROUTER_IP}:15999/v1/servers/self/reachable

   The output should be similar to the following:

   About to connect() to 10.126.0.160 port 15999 (#0)
   Trying 10.126.0.160...
   Connection refused
   Failed connect to 10.126.0.160:15999; Connection refused
   Closing connection 0
   curl: (7) Failed connect to 10.126.0.160:15999; Connection refused

3. Get the UUID of the Router, as described in Get UUIDs.
4. Stop the Router:

   /opt/apigee/apigee-service/bin/apigee-service edge-router stop

5. List the available gateway pods in the organization with the following
   command (see About Pods):

   curl -u <AdminEmailID>:'<AdminPassword>' -X GET \
     "http://{MS_IP}:8080/v1/organizations/{ORG}/pods"

6. Deregister the server's type:

   curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
     -d "type=router&region=dc-1&pod=gateway-1&uuid=UUID&action=remove"

7. Delete the server:

   curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/UUID

8. Uninstall edge-router (see Remove a server):

   /opt/apigee/apigee-service/bin/apigee-service edge-router uninstall

9. Flush the iptables rules to unblock port 15999:

   iptables -F
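When scripting the reachability check, the curl exit status is the reliable signal: 7 means the connection was refused (the REJECT rule is working), while a timeout usually means packets are being dropped rather than rejected. A sketch that maps the status to a verdict; ROUTER_IP is a placeholder:

```shell
# Map the curl exit status from the health-check probe to a verdict.
interpret_curl_status() {
  case "$1" in
    0)  echo "reachable: the Router is still answering health checks" ;;
    7)  echo "blocked: connection refused, the iptables rule is working" ;;
    28) echo "timeout: packets may be dropped rather than rejected" ;;
    *)  echo "unexpected curl exit status: $1" ;;
  esac
}

# Usage against a Router (ROUTER_IP is a placeholder):
#   curl -s -o /dev/null --connect-timeout 5 "http://ROUTER_IP:15999/v1/servers/self/reachable"
#   interpret_curl_status $?
```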

Message Processor
This section describes how to decommission the Message Processor from dc-1. See
Remove a server for more details about removing the Message Processor.

Since we are assuming that dc-1 has a 12-node clustered installation, there are two
Message Processor nodes configured in dc-1. Perform the following steps on both
nodes.

1. Get the UUIDs of the Message Processors, as described in Get UUIDs.
2. Stop the Message Processor:

   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor stop

3. Deregister the server's type:

   curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
     -d "type=message-processor&region=dc-1&pod=gateway-1&uuid=UUID&action=remove"

4. Disassociate the environment from the Message Processor.
   Note: You need to remove the bindings on each org/env that associates the
   Message Processor UUID.

   curl -H "Content-Type:application/x-www-form-urlencoded" -u <AdminEmailID>:'<AdminPassword>' \
     -X POST http://{MS_IP}:8080/v1/organizations/{ORG}/environments/{ENV}/servers \
     -d "action=remove&uuid=UUID"

5. Uninstall the Message Processor:

   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor uninstall

6. Delete the server:

   curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/UUID

Qpid server and Qpidd


This section explains how to decommission Qpid Server (edge-qpid-server) and
Qpidd (apigee-qpidd). There are two Qpid nodes configured in dc-1, so you must
do the following steps for both nodes:

1. Get the UUID for Qpidd, as described in Get UUIDs.
2. Stop edge-qpid-server and apigee-qpidd:

   /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server stop
   /opt/apigee/apigee-service/bin/apigee-service apigee-qpidd stop

3. Get a list of analytics groups and consumer groups:

   curl -u <AdminEmailID>:'<AdminPassword>' -X GET \
     http://{MS_IP}:8080/v1/analytics/groups/ax

4. Remove Qpid from the consumer group:

   curl -u <AdminEmailID>:'<AdminPassword>' -H "Content-Type: application/json" -X DELETE \
     "http://{MS_IP}:8080/v1/analytics/groups/ax/{ax_group}/consumer-groups/{consumer_group}/consumers/{QPID_UUID}"

5. Remove Qpid from the analytics group:

   curl -v -u <AdminEmailID>:'<AdminPassword>' \
     -X DELETE "http://{MS_IP}:8080/v1/analytics/groups/ax/{ax_group}/servers?uuid={QPID_UUID}&type=qpid-server"

6. Deregister the Qpid server from the Edge installation:

   curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
     -d "type=qpid-server&region=dc-1&pod=central&uuid={QPID_UUID}&action=remove"

7. Remove the Qpid server from the Edge installation:

   curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/{QPID_UUID}

8. Restart all edge-qpid-server components on all nodes to make sure the
   change is picked up by those components:

   /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
   /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready

9. Uninstall edge-qpid-server and apigee-qpidd:

   /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server uninstall
   /opt/apigee/apigee-service/bin/apigee-service apigee-qpidd uninstall
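The consumer-removal URL in the steps above is assembled from the group names returned by the /v1/analytics/groups/ax call; composing it in one place helps avoid typos in the long path. A sketch; all argument values are placeholders:

```shell
# Build the consumer-removal URL from the values returned by the
# /v1/analytics/groups/ax call shown above.
build_consumer_delete_url() {
  # $1=MS_IP  $2=ax_group  $3=consumer_group  $4=qpid_uuid
  echo "http://$1:8080/v1/analytics/groups/ax/$2/consumer-groups/$3/consumers/$4"
}

build_consumer_delete_url 192.0.2.10 axgroup-001 consumer-group-001 QPID_UUID
```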

Postgres and PostgreSQL


The data center you are decommissioning could have a Postgres master or a
Postgres standby. The following sections explain how to decommission them:

● Decommissioning Postgres master


● Decommissioning Postgres standby
Caution: You must remove and add PostgreSQL servers from the analytics group and the
consumer group at the same time. You might lose some analytics data if there is a substantial
delay between the two steps.

Suggestion: The Message Processor has a small queue where it keeps analytics data in memory
for a brief time, which you can use to help avoid loss of data. Before you make changes to the
analytics group, block port 5672 in the data center that is taking traffic, and make sure that you
quickly re-enable this port after making the changes, as there is a chance of data overflow.

For example, to temporarily stop analytics traffic from Message Processors to the Qpid server:

1. Enter:

   iptables -A INPUT -p tcp --destination-port 5672 ! -s `hostname` -i eth0 -j DROP

2. Follow the axgroup change steps.
3. Remove the iptables rule after the axgroup changes:

   iptables -F

Decommissioning Postgres master

Note: If you decommission the Postgres master, make sure to promote one of the
available standby nodes to be the new Postgres master. Although the Qpid queues
buffer data, if the Postgres master is unavailable for a long time, you risk losing
analytics data.

To decommission Postgres master:

1. Back up the dc-1 Postgres master node by following the instructions in the
following links:
a. Backup and restore
b. How to back up
2. Get the UUIDs of the Postgres servers, as described in Get UUIDs.
3. On dc-1, stop edge-postgres-server and apigee-postgresql on the current
   master:

   /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop

4. On the standby node in dc-2, enter the following command to make it the
   master node:

   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql promote-standby-to-master <IP of old Postgres master>

5. Note: If you have more than one standby Postgres node, you must add host
   entries on the new master and update the replication settings for all
   available Postgres standby nodes.
   To add host entries to the new Postgres master, follow the steps in the
   appropriate section below.

   If there is only one standby node remaining:
   For example, suppose that before decommissioning, there were three Postgres
   nodes configured. You decommissioned the existing master and promoted one of
   the remaining Postgres standby nodes to master. Configure the remaining
   standby node with the following steps:
   a. On the new master, edit the configuration file to set:

      PG_MASTER=IP_or_DNS_of_new_PG_MASTER
      PG_STANDBY=IP_or_DNS_of_PG_STANDBY

   b. Enable replication on the new master:

      /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-master -f configFile

   If there is more than one standby node remaining:
   a. Add the following configuration in
      /opt/apigee/customer/application/postgresql.properties:

      conf_pg_hba_replication.connection=host replication apigee standby_1_ip/32 trust \n host replication apigee standby_2_ip/32 trust

   b. Ensure that the file /opt/apigee/customer/application/postgresql.properties
      is owned by the apigee user:

      chown apigee:apigee /opt/apigee/customer/application/postgresql.properties

   c. Restart apigee-postgresql:

      /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restart

   d. To update the replication settings on a standby node:
      i. Modify the configuration file /opt/silent.conf and update the
         PG_MASTER field with the IP address of the new Postgres master.
      ii. Remove any old Postgres data with the following command:

          rm -rf /opt/apigee/data/apigee-postgresql/

      iii. Set up replication on the standby node:

          /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-standby -f configFile

6. Verify that the Postgres master is set up correctly by entering the following
   command in dc-2:

   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-master

7. Remove and add PostgreSQL servers from the analytics group and the consumer
   group.
   Caution: Perform the next two steps together, as there may be a loss of
   analytics data between the two steps.
   a. Remove the old Postgres server from the analytics group following the
      instructions in Remove a Postgres server from an analytics group.
   b. Add the new Postgres server to the analytics group following the
      instructions in Add an existing Postgres server to an analytics group.
8. Deregister the old Postgres server from dc-1:

   curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
     -d "type=postgres-server&region=dc-1&pod=analytics&uuid=UUID&action=remove"

9. Delete the old Postgres server from dc-1:

   curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/UUID

10. The old Postgres master is now safe to decommission. Uninstall
    edge-postgres-server and apigee-postgresql:

    /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server uninstall
    /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql uninstall
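For the more-than-one-standby case above, the conf_pg_hba_replication.connection property concatenates one host entry per standby IP with a literal "\n" separator. A sketch that generates the line for any number of standbys; the IPs are placeholders:

```shell
# Generate the conf_pg_hba_replication.connection property for N standby IPs.
# The " \n " separator is written literally, as expected by the properties file.
gen_pg_hba_line() {
  line="conf_pg_hba_replication.connection="
  sep=""
  for ip in "$@"; do
    line="${line}${sep}host replication apigee ${ip}/32 trust"
    sep=' \n '
  done
  printf '%s\n' "$line"
}

gen_pg_hba_line 10.0.0.5 10.0.0.6
```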
Decommissioning Postgres standby

Note: The documentation for a 12-node clustered installation shows the dc-1
PostgreSQL node as the master, but for convenience, in this section it is assumed
that the dc-1 PostgreSQL node is the standby and the dc-2 PostgreSQL node is the
master.

To decommission the Postgres standby, do the following steps:

1. Get the UUIDs of the Postgres servers, following the instructions in Get
   UUIDs.
2. Stop edge-postgres-server and apigee-postgresql on the current standby node
   in dc-1:

   /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop

3. Remove and add PostgreSQL servers from the analytics group and the consumer
   group.
   Caution: Perform the next two steps together, as there may be a loss of
   analytics data between the two steps.
   a. Remove the old Postgres server from the analytics group following the
      instructions in Remove a Postgres server from an analytics group.
   b. Add the new Postgres server to the analytics group following the
      instructions in Add an existing Postgres server to an analytics group.
4. Deregister the old Postgres server from dc-1:

   curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
     -d "type=postgres-server&region=dc-1&pod=analytics&uuid=UUID&action=remove"

5. Delete the old Postgres server from dc-1:

   curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/UUID

6. The old Postgres standby is now safe to decommission. Uninstall
   edge-postgres-server and apigee-postgresql:

   /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server uninstall
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql uninstall
ZooKeeper and Cassandra

This section explains how to decommission the ZooKeeper and Cassandra servers
in a two data center setup.

If you have more than two data centers, make sure to remove all references to the
node being decommissioned (dc-1 in this case) from all silent configuration files
across all the remaining data centers. For Cassandra nodes that are to be
decommissioned, drop those hosts from CASS_HOSTS. The remaining Cassandra nodes
should remain in the original ordering of CASS_HOSTS.

Note on ZooKeeper: You must maintain a quorum of voter nodes while modifying the
ZK_HOSTS property in the configuration file, to ensure that the ZooKeeper
ensemble stays functional. You must have an odd number of voter nodes in your
configuration. For more information, see Apache ZooKeeper maintenance tasks.

Caution: If you decommission a data center that has a leader ZooKeeper node, there will
be downtime.

To decommission ZooKeeper and Cassandra servers:

1. Back up the dc-1 Cassandra and ZooKeeper nodes by following the instructions
   in the following links:
   ● Backup and restore
   ● How to back up
   Note: If you have multiple ZooKeeper nodes, back them up one at a time, as the
   backup process temporarily shuts down ZooKeeper.
2. List the UUIDs of the ZooKeeper and Cassandra servers in the data center
   where the Cassandra nodes are about to be decommissioned:

   apigee-adminapi.sh servers list -r dc-1 -p central -t application-datastore \
     --admin <AdminEmailID> --pwd '<AdminPassword>' --host localhost

3. Deregister the server's types:

   curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://MS_IP:8080/v1/servers \
     -d "type=cache-datastore&type=user-settings-datastore&type=scheduler-datastore&type=audit-datastore&type=apimodel-datastore&type=application-datastore&type=edgenotification-datastore&type=identityzone-datastore&type=auth-datastore&region=dc-1&pod=central&uuid=UUID&action=remove"

4. Delete the server:

   curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://MS_IP:8080/v1/servers/UUID

5. Update the configuration file, removing the IPs of the decommissioned nodes
   from ZK_HOSTS and CASS_HOSTS.
   Notes:
   ● You must maintain a quorum of voter nodes while modifying the ZK_HOSTS
     property in the configuration file, to ensure that the ZooKeeper ensemble
     stays functional. You must have an odd number of voter nodes in your
     configuration. See About ZooKeeper and Edge.
   ● For Cassandra nodes that are to be decommissioned, drop those hosts from
     CASS_HOSTS. The remaining Cassandra nodes should remain in the original
     ordering of CASS_HOSTS.
   ● ZooKeeper observer nodes do not participate in the voting and are not
     counted when determining a quorum.
   Example: Suppose you have the IPs $IP1 $IP2 $IP3 in dc-1 and $IP4 $IP5 $IP6
   in dc-2, and you are decommissioning dc-1. Then you should remove the IPs
   $IP1 $IP2 $IP3 from the configuration files.

   Existing configuration file entries:

   ZK_HOSTS="$IP1 $IP2 $IP3 $IP4 $IP5 $IP6"
   CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1 $IP4:2,1 $IP5:2,1 $IP6:2,1"

   New configuration file entries:

   ZK_HOSTS="$IP4 $IP5 $IP6"
   CASS_HOSTS="$IP4:2,1 $IP5:2,1 $IP6:2,1"

6. Using the configuration file updated in the previous step, run the
   Management Server profile on all nodes hosting Management Servers:

   /opt/apigee/apigee-setup/bin/setup.sh -p ms -f updated_config_file

7. Using the updated configuration file, run the appropriate profile on all
   Router and Message Processor nodes:
   ● If the Edge Router and Message Processor are configured on the same node,
     enter:

     /opt/apigee/apigee-setup/bin/setup.sh -p rmp -f updated_config_file

   ● If the Edge Router and Message Processor are configured on separate nodes,
     enter the following.
     For the Router:

     /opt/apigee/apigee-setup/bin/setup.sh -p r -f updated_config_file

     For the Message Processor:

     /opt/apigee/apigee-setup/bin/setup.sh -p mp -f updated_config_file

8. Reconfigure all Qpid nodes, with the IPs of the decommissioned nodes removed
   from the response file:

   /opt/apigee/apigee-setup/bin/setup.sh -p qs -f updated_config_file

9. Reconfigure all Postgres nodes, with the IPs of the decommissioned nodes
   removed from the response file:

   /opt/apigee/apigee-setup/bin/setup.sh -p ps -f updated_config_file

10. Alter the system_auth keyspace. If you have Cassandra auth enabled, update
    the replication factor of the system_auth keyspace by running the following
    command on an existing Cassandra node:

    ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'dc-2': '3'};

    This command sets the replication factor to '3', indicating three Cassandra
    nodes in the cluster. Modify this value as necessary.
    After completing this step, the Cassandra topology should not have dc-1 in
    any of the keyspaces.

11. Decommission the Cassandra nodes in dc-1, one by one.
    Note: The decommissioned nodes still have Cassandra with the seed
    information of the original cluster on them. If by chance the Cassandra
    service should come up, the decommissioned nodes will rejoin the cluster
    ring. To avoid this, make sure you uninstall the Cassandra component after
    decommissioning and do not allow any decommissioned nodes to come up and
    rejoin the cluster.
    To decommission the Cassandra nodes, enter the following command:

    /opt/apigee/apigee-cassandra/bin/nodetool -h cassIP -u cassandra -pw '<AdminPassword>' decommission

    Note: If a node becomes stuck in a decommissioning state, you can use the
    nodetool removenode command to remove it. For example:

    /opt/apigee/apigee-cassandra/bin/nodetool removenode UUID

12. Check the state of the Cassandra nodes from dc-1 using one of the following
    commands:

    /opt/apigee/apigee-cassandra/bin/cqlsh cassIP 9042 -u cassandra -p '<AdminPassword>'

    Or run the following secondary verification command on the decommissioned
    node:

    /opt/apigee/apigee-cassandra/bin/nodetool netstats

    The above command should return:

    Mode: DECOMMISSIONED

13. Run the DS profile for all Cassandra and ZooKeeper nodes in dc-2:

    /opt/apigee/apigee-setup/bin/setup.sh -p ds -f updated_config_file

14. Stop apigee-cassandra and apigee-zookeeper in dc-1:

    apigee-service apigee-cassandra stop
    apigee-service apigee-zookeeper stop

15. Uninstall apigee-cassandra and apigee-zookeeper in dc-1:

    apigee-service apigee-cassandra uninstall
    apigee-service apigee-zookeeper uninstall
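When editing ZK_HOSTS by hand, it is easy to end up with an even number of voters, so a small helper that prunes the decommissioned IPs and warns on an even count can be useful. A sketch; the IPs are placeholders, and it assumes plain IPs without observer suffixes:

```shell
# Remove decommissioned IPs from a ZK_HOSTS value and warn when the remaining
# voter count is even (a ZooKeeper ensemble needs an odd number of voters).
prune_zk_hosts() {
  hosts="$1"; shift          # $1 = current ZK_HOSTS value; rest = IPs to drop
  out=""
  for h in $hosts; do
    keep=1
    for drop in "$@"; do
      if [ "$h" = "$drop" ]; then keep=0; fi
    done
    if [ "$keep" -eq 1 ]; then out="$out $h"; fi
  done
  out=${out# }               # strip the leading space
  n=$(echo "$out" | wc -w)
  if [ $((n % 2)) -eq 0 ]; then
    echo "WARNING: $n ZooKeeper voters remain; use an odd number" >&2
  fi
  echo "$out"
}

# Drop the three dc-1 IPs from a six-node ZK_HOSTS value (placeholder IPs):
prune_zk_hosts "192.0.2.1 192.0.2.2 192.0.2.3 192.0.2.4 192.0.2.5 192.0.2.6" \
  192.0.2.1 192.0.2.2 192.0.2.3
```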


11. Delete the bindings from dc-1
To delete the bindings from dc-1, do the following steps:
a. Delete the bindings from dc-1:
i. List all the available pods under the organization:
curl -v -u <AdminEmailID>:<AdminPassword> -X GET "http://MS_IP:8080/v1/o/ORG/pods"
ii. To check whether all the bindings have been removed, get the UUIDs of the servers associated with the pods:
curl -v -u <AdminEmailID>:<AdminPassword> -X GET "http://MS_IP:8080/v1/regions/dc-1/pods/gateway-1/servers"
If this command doesn't return any UUIDs, the previous steps have removed all the bindings and you can skip the next step. Otherwise, perform the next step.
iii. Remove all the server bindings for the UUIDs obtained in the previous step:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://MS_IP:8080/v1/servers/UUID
iv. Disassociate the organization from the pod:
curl -v -u <AdminEmailID>:<AdminPassword> "http://MS_IP:8080/v1/o/ORG/pods" -d "action=remove&region=dc-1&pod=gateway-1" -H "Content-Type: application/x-www-form-urlencoded" -X POST
b. Delete the pods:
curl -v -u <AdminEmailID>:<AdminPassword> "http://MS_IP:8080/v1/regions/dc-1/pods/gateway-1" -X DELETE
c. Delete the region:
curl -v -u <AdminEmailID>:<AdminPassword> "http://MS_IP:8080/v1/regions/dc-1" -X DELETE
Note: If any server deletions were missed, the steps above will return an error message stating that a particular server in the pod still exists. In that case, delete those servers by following the troubleshooting steps in the Appendix below, adjusting the types in the curl command as needed.
At this point you have completed the decommissioning of dc-1.
Appendix
Troubleshooting
If after performing the previous steps, there are still servers in some pods, do
following steps to deregister and delete the servers. Note: Change the types
and pod as necessary.
a. Get the UUIDs using the following command:
apigee-adminapi.sh servers list -r dc-1 -p POD -t TYPE --admin <AdminEmailID> --pwd '<AdminPassword>' --host localhost
b. Deregister the server's type:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://MP_IP:8080/v1/servers -d "type=TYPE&region=dc-1&pod=POD&uuid=UUID&action=remove"
c. Delete the servers one by one:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://MP_IP:8080/v1/servers/UUID
Validation
You can validate the decommissioning using the following commands.
Management Server
Run the following commands from the Management Servers in all the regions:
curl -v -u <AdminEmailID>:'<AdminPassword>' "http://MS_IP:8080/v1/servers?pod=central&region=dc-1"
curl -v -u <AdminEmailID>:'<AdminPassword>' "http://MS_IP:8080/v1/servers?pod=gateway&region=dc-1"
curl -v -u <AdminEmailID>:'<AdminPassword>' "http://MS_IP:8080/v1/servers?pod=analytics&region=dc-1"
Run the following command on all components to check port requirements for all the management ports:
curl -v http://MS_IP:8080/v1/servers/self
Check the analytics group:
curl -v -u <AdminEmailID>:'<AdminPassword>' "http://MS_IP:8080/v1/o/ORG/e/ENV/provisioning/axstatus"
curl -v -u <AdminEmailID>:'<AdminPassword>' http://MS_IP:8080/v1/analytics/groups/ax
Cassandra/ZooKeeper nodes
On all Cassandra nodes, enter:
/opt/apigee/apigee-cassandra/bin/nodetool -h <host> statusthrift
This will return a running or not running status for that particular node.
On one node, enter:
/opt/apigee/apigee-cassandra/bin/nodetool -h <host> ring
/opt/apigee/apigee-cassandra/bin/nodetool -h <host> status
The above commands will return active data center information.
On ZooKeeper nodes, first enter:
echo ruok | nc <host> 2181
This command will return imok.
Then enter:
echo stat | nc <host> 2181 | grep Mode
The value of Mode returned by the above command will be one of the following: observer, leader, or follower.
On one ZooKeeper node, enter:
/opt/apigee/apigee-zookeeper/contrib/zk-tree.sh >> /tmp/zk-tree.out.txt
Postgres nodes
On the Postgres master node, run:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-master
Validate that the response says the node is the master.
On the standby node, run:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-standby
Validate that the response says the node is the standby.
Log in to the PostgreSQL database using the command:
psql -h localhost -d apigee -U postgres
When prompted, enter the password for the 'postgres' user (the default is 'postgres'), then run:
select max(client_received_start_timestamp) from analytics."$org.$env.fact" limit 1;
Logs
Check the logs on the components to make sure there are no errors.

OPERATIONS
Starting, stopping, restarting, and checking the status of Apigee Edge
Stop/start order
The order of stopping and starting the subsystems is important. Start and stop
scripts are provided that take care of that for you for Edge components running on
the same node.
Stop order

If you install Edge on multiple nodes, then you should stop Edge components on
those nodes in the following stop order:

1. Management Server (edge-management-server)
2. Message Processor (edge-message-processor)
3. Postgres Server (edge-postgres-server)
4. Qpid Server (edge-qpid-server)
5. Router (edge-router)
6. Edge UI: edge-ui (classic) or edge-management-ui (new)
7. Cassandra (apigee-cassandra)
8. OpenLDAP (apigee-openldap)
9. PostgreSQL database (apigee-postgresql)
10. Qpidd (apigee-qpidd)
11. ZooKeeper (apigee-zookeeper)
12. Apigee SSO (apigee-sso)

Start order

If you install Edge on multiple nodes, then you should start Edge components on
those nodes in the following start order:

1. Cassandra (apigee-cassandra)
2. OpenLDAP (apigee-openldap)
3. PostgreSQL database (apigee-postgresql)
4. Qpidd (apigee-qpidd)
5. ZooKeeper (apigee-zookeeper)
6. Management Server (edge-management-server)
7. Message Processor (edge-message-processor)
8. Postgres Server (edge-postgres-server)
9. Qpid Server (edge-qpid-server)
10. Router (edge-router)
11. Edge UI: edge-ui (classic) or edge-management-ui (new)
12. Edge SSO (apigee-sso)
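For nodes that host several components, the start order above can also be scripted. The loop below is only an illustrative sketch, not an Apigee-provided tool: it assumes the standard /opt/apigee layout and treats a component as installed if its directory exists. (apigee-all already performs ordered startup for a single node; use a loop like this only if you need custom per-component handling.)

```shell
# Illustrative sketch: start the Edge components present on this node in the
# documented start order, skipping components that are not installed here.
START_ORDER="apigee-cassandra apigee-openldap apigee-postgresql apigee-qpidd \
apigee-zookeeper edge-management-server edge-message-processor \
edge-postgres-server edge-qpid-server edge-router edge-ui apigee-sso"

for comp in $START_ORDER; do
  if [ -d "/opt/apigee/$comp" ]; then
    # Component is installed on this node: start it and wait for the start
    # command to return before moving on to the next component.
    /opt/apigee/apigee-service/bin/apigee-service "$comp" start
  else
    echo "skipping $comp (not installed on this node)"
  fi
done
```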

Start/stop/check all components
The following scripts detect the Apigee components configured to run on the system
on which the script is executed, and will start or stop only those components in the
correct order for that node.

● To stop all Apigee components:
● /opt/apigee/apigee-service/bin/apigee-all stop
● To start all Apigee components:
● /opt/apigee/apigee-service/bin/apigee-all start
● To restart all Apigee components:
● /opt/apigee/apigee-service/bin/apigee-all restart
● To check which components are running:
● /opt/apigee/apigee-service/bin/apigee-all status

Start/stop/restart individual components
You can use the apigee-service tool to start/stop/restart or check the status of
an individual Apigee component on any specific server.

/opt/apigee/apigee-service/bin/apigee-service component_name [start|stop|restart|status]

Where component_name identifies the component. Possible values of component_name include (in alphabetical order):

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)

For example, to start, stop, or restart the Management Server, run the following
commands:

/opt/apigee/apigee-service/bin/apigee-service edge-management-server start
/opt/apigee/apigee-service/bin/apigee-service edge-management-server stop
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart

You can also check the status of an individual Apigee component by using the
following command:

/opt/apigee/apigee-service/bin/apigee-service component_name status

For example:

/opt/apigee/apigee-service/bin/apigee-service edge-management-server status
Setting server autostart
An on-premises installation of Edge Private Cloud does not restart automatically
during a reboot. You can use the following commands to enable/disable autostart on
any node.

To enable all components on the node:

/opt/apigee/apigee-service/bin/apigee-all enable_autostart

To disable all components on the node:

/opt/apigee/apigee-service/bin/apigee-all disable_autostart

To enable or disable autostart for a specific component on the node:

/opt/apigee/apigee-service/bin/apigee-service component_name enable_autostart
/opt/apigee/apigee-service/bin/apigee-service component_name disable_autostart

Where component_name identifies the component. Possible values include:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)

The script only affects the node on which you run it. If you want to configure all
nodes for autostart, run the script on all nodes.

Note: If you run into issues where the OpenLDAP server does not start automatically on reboot,
try disabling SELinux or setting it to permissive mode.

Note that the order of starting the components is very important:

1. First start ZooKeeper, Cassandra, LDAP (OpenLDAP)
If ZooKeeper and Cassandra are installed as cluster, the complete cluster
must be up and running before starting any other Apigee component.
2. Then, any Apigee component (Management Server, Router, UI, etc.). For
Postgres Server first start postgresql and for Qpid Server first start qpidd.

Implications:

● For a complete restart of an Apigee Edge environment, the nodes with ZooKeeper and Cassandra need to be booted completely prior to any other node.
● If any other Apigee component is running on one or more ZooKeeper and
Cassandra nodes, it is not recommended to use autostart. Instead, start the
components in the order described in Starting, stopping, restarting, and
checking the status of Apigee Edge.

Troubleshooting autostart
If you configure autostart, and Edge encounters issues with starting the OpenLDAP
server, you can try disabling SELinux or setting it to permissive mode on all nodes.
To configure SELinux:

1. Edit the /etc/sysconfig/selinux file:
2. sudo vi /etc/sysconfig/selinux
3. Set SELINUX=disabled or SELINUX=permissive.
4. Save your edits.
5. Restart the machine and then restart Edge:
6. /opt/apigee/apigee-service/bin/apigee-all restart
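After the reboot, you can confirm which SELinux mode is actually in effect with the standard getenforce tool (shipped in libselinux-utils on RHEL/CentOS). This sketch just guards against the tool being absent:

```shell
# Print the current SELinux mode: Enforcing, Permissive, or Disabled.
if command -v getenforce >/dev/null 2>&1; then
  getenforce
else
  echo "SELinux tools not installed"
fi
```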

Replacing the Edge license file
If your license expires, use the following procedure to replace it:

1. Overwrite the existing license file with the new license file:
cp customer_license_file /opt/apigee/customer/conf/license.txt
2. Change ownership of the new license file to the "apigee" user:
chown apigee:apigee /opt/apigee/customer/conf/license.txt
Specify a different user if you are running Edge as a user other than "apigee".
3. Restart the Edge Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
Viewing pod details
You can use the following API call to view server registration details at the end of the
installation for each pod. This is a useful monitoring tool.

curl -u adminEmail:pword http://ms_IP:8080/v1/servers?pod=podName

Where ms_IP is the IP address or DNS name of the Management Server, and
podName is one of the following:

● gateway
● central
● analytics

For example, for the "gateway" pod:

curl -u adminEmail:pword http://ms_IP:8080/v1/servers?pod=gateway

You see output in the form:

[{
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "jmx.rmi.port",
"value" : "1101"
}, ... ]
},
"type" : [ "message-processor" ],
"uUID" : "276bc250-7dd0-46a5-a583-fd11eba786f8"
},
{
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ ]
},
"type" : [ "dc-datastore", "management-server", "cache-datastore", "keyvaluemap-
datastore", "counter-datastore", "kms-datastore" ],
"uUID" : "13cee956-d3a7-4577-8f0f-1694564179e4"
},
{
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "jmx.rmi.port",
"value" : "1100"
}, ... ]
},
"type" : [ "router" ],
"uUID" : "de8a0200-e405-43a3-a5f9-eabafdd990e2"
}]
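If you only need the server UUIDs from that response, a quick grep is often enough. The snippet below runs against an inline copy of the sample output shown above; in practice, pipe the real curl response into the same grep. The unusual field name uUID matches the response format above.

```shell
# Pull the server UUIDs out of a /v1/servers response. The here-document
# stands in for the real curl output.
grep -o '"uUID" : "[^"]*"' <<'EOF'
[{
"type" : [ "message-processor" ],
"uUID" : "276bc250-7dd0-46a5-a583-fd11eba786f8"
},
{
"type" : [ "router" ],
"uUID" : "de8a0200-e405-43a3-a5f9-eabafdd990e2"
}]
EOF
```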

For more information about pods, see About planets, regions, pods, organizations,
environments and virtual hosts.

Enable access to OAuth 2.0 tokens by user ID and app ID
This document describes how to enable retrieval and revocation of OAuth 2.0 access
tokens by end user ID, app ID, or both.

App IDs are automatically added to an OAuth access token. Therefore, after you use
the procedure below to enable token access for an organization, you can access
tokens by app ID.

To retrieve and revoke OAuth 2.0 access tokens by end user ID, an end user ID must be present in the access token. The procedure below describes how to add an end user ID to an existing token or to new tokens.
By default, when Edge generates an OAuth 2.0 access token, the token has the
format:

{
"issued_at" : "1421847736581",
"application_name" : "a68d01f8-b15c-4be3-b800-ceae8c456f5a",
"scope" : "READ",
"status" : "approved",
"api_product_list" : "[PremiumWeatherAPI]",
"expires_in" : "3599",
"developer.email" : "tesla@weathersample.com",
"organization_id" : "0",
"token_type" : "BearerToken",
"client_id" : "k3nJyFJIA3p62DWOkLO6OJNi87GYXFmP",
"access_token" : "7S22UqXGJDTuUADGzJzjXzXSaGJL",
"organization_name" : "myorg",
"refresh_token_expires_in" : "0",
"refresh_count" : "0"
}

Note the following:

● The application_name field contains the UUID of the app associated with
the token. If you enable retrieval and revocation of OAuth 2.0 access tokens
by app ID, this is the app ID you use.
● The access_token field contains the OAuth 2.0 access token value.

To enable retrieval and revocation of OAuth 2.0 access tokens by end user ID,
configure the OAuth 2.0 policy to include the user ID in the token, as described in the
procedure below.

The end user ID is the string that Edge uses as the developer ID, not the developer's
email address. You can determine the developer's ID from the developer's email
address by using the Get Developer API call.

After you configure Edge to include the end user ID in the token, it is included as the
app_enduser field, as shown below:

{
"issued_at" : "1421847736581",
"application_name" : "a68d01f8-b15c-4be3-b800-ceae8c456f5a",
"scope" : "READ",
"app_enduser" : "6ZG094fgnjNf02EK",
"status" : "approved",
"api_product_list" : "[PremiumWeatherAPI]",
"expires_in" : "3599",
"developer.email" : "tesla@weathersample.com",
"organization_id" : "0",
"token_type" : "BearerToken",
"client_id" : "k3nJyFJIA3p62DWOkLO6OJNi87GYXFmP",
"access_token" : "7S22UqXGJDTuUADGzJzjXzXSaGJL",
"organization_name" : "myorg",
"refresh_token_expires_in" : "0",
"refresh_count" : "0"
}

APIs to retrieve and revoke OAuth 2.0 access tokens by user ID and app ID
Use the following APIs to access OAuth tokens by user ID, app ID, or both:

● Get OAuth 2.0 Access Token by End User ID or App ID
● Revoke OAuth 2.0 Access Token by End User ID or App ID

Procedure for enabling token access
Use the following procedure to enable retrieval and revocation of OAuth 2.0 access
tokens by end user ID and app ID.

Step 1: Enable token access support for an organization

You must enable token access for each organization separately. Call the API below for each organization on which you want to enable the retrieval and revocation of OAuth 2.0 access tokens by end user ID or app ID.

The user making the following call must be in the role orgadmin or opsadmin for the
organization. Replace the values with your org-specific values:

curl -H "Content-type:text/xml" -X POST \
https://management_server_IP:8080/v1/organizations/org_name \
-d '<Organization name="org_name">
<Properties>
<Property name="features.isOAuthRevokeEnabled">true</Property>
<Property name="features.isOAuth2TokenSearchEnabled">true</Property>
</Properties>
</Organization>' \
-u USER_EMAIL:PASSWORD

Step 2: Set permissions for opsadmin role in the organization
Only the orgadmin and opsadmin roles in an organization should be given
permissions to retrieve (HTTP GET) and revoke (HTTP PUT) OAuth 2.0 tokens based
on user ID or app ID. To control access, set get and put permissions on the /oauth2
resource for an organization. That resource has a URL in the form:

https://management_server_IP:8080/v1/organizations/org_name/oauth2

The orgadmin role should already have the necessary permissions. For the
opsadmin role for the /oauth2 resource, the permissions should look like this:

<ResourcePermission path="/oauth2">
<Permissions>
<Permission>get</Permission>
<Permission>put</Permission>
</Permissions>
</ResourcePermission>

You can use the Get Permission for a Single Resource API call to see which roles
have permissions for the /oauth2 resource.

Based on the response, you can use the Add Permissions for Resource to a Role and
Delete Permission for Resource API calls to make any necessary adjustments to
the /oauth2 resource permissions.

Use the following curl command to give the opsadmin role get and put
permissions for the /oauth2 resource. Replace the values with your org-specific
values:

curl -X POST -H 'Content-type:application/xml' \
http://management_server_IP:8080/v1/organizations/org_name/userroles/opsadmin/permissions \
-d '<ResourcePermission path="/oauth2">
<Permissions>
<Permission>get</Permission>
<Permission>put</Permission>
</Permissions>
</ResourcePermission>' \
-u USEREMAIL:PASSWORD

Use the following curl command to revoke get and put permissions for the
/oauth2 resource from roles other than orgadmin and opsadmin. Replace the
values with your org-specific values:

curl -X DELETE -H 'Content-type:application/xml' \
http://management_server_IP:8080/v1/organizations/org_name/userroles/roles/permissions \
-d '<ResourcePermission path="/oauth2">
<Permissions></Permissions>
</ResourcePermission>' \
-u USEREMAIL:PASSWORD

Step 3: Set the oauth_max_search_limit property

Ensure that the conf_keymanagement_oauth_max_search_limit property in the /opt/apigee/customer/application/management-server.properties file is set to 100:

conf_keymanagement_oauth_max_search_limit = 100

If this file does not exist, create it.

This property sets the page size used when fetching tokens. Apigee recommends a
value of 100, but you can set it as you see fit.

On a new installation, the property should be already set to 100. If you have to
change the value of this property, restart the Management Server and Message
Processor by using the commands:

/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

Step 4: Configure the OAuth 2.0 policy that generates tokens to include end
user ID
Note: This task is optional. You only need to perform it if you want to retrieve and revoke OAuth
2.0 access tokens by end user ID.

Configure the OAuth 2.0 policy used to generate access tokens to include the end
user ID in the token. By including end user IDs in the access token, you can retrieve
and revoke tokens by ID.

To configure the policy to include an end user ID in an access token, the request that
creates the access token must include the end user ID and you must specify the
input variable that contains the end user ID.

The OAuth 2.0 policy below, named GenerateAccessTokenClient, generates an OAuth 2.0 access token. Note the addition of the <AppEndUser> tag, which specifies the variable containing the end user ID:

<OAuthV2 async="false" continueOnError="false" enabled="true" name="GenerateAccessTokenClient">
<DisplayName>OAuth 2.0.0 1</DisplayName>
<ExternalAuthorization>false</ExternalAuthorization>
<Operation>GenerateAccessToken</Operation>
<SupportedGrantTypes>
<GrantType>client_credentials</GrantType>
</SupportedGrantTypes>
<GenerateResponse enabled="true"/>
<GrantType>request.queryparam.grant_type</GrantType>
<AppEndUser>request.header.appuserID</AppEndUser>
<ExpiresIn>960000</ExpiresIn>
</OAuthV2>

You can then use the following curl command to generate the OAuth 2.0 access
token, passing the user ID as the appuserID header:

curl -H "appuserID:6ZG094fgnjNf02EK" \
https://myorg-test.apigee.net/oauth/client_credential/accesstoken?
grant_type=client_credentials \
-X POST -d
'client_id=k3nJyFJIA3p62TKIkLO6OJNXFmP&client_secret=gk5K5lIp943AY4'

In this example, the appuserID is passed as a request header. You can pass
information as part of a request in many ways. For example, as an alternative, you
can:

● Use a form parameter variable: request.formparam.appuserID
● Use a flow variable providing the end user ID
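Whichever mechanism you use to pass the ID, you can sanity-check that it actually landed in the generated token by grepping the token response for app_enduser. The JSON in the here-document is the sample response shown earlier on this page; in practice, pipe the real response from your token endpoint instead.

```shell
# Confirm the app_enduser field is present in a generated token response.
# The here-document is the sample token from earlier on this page.
grep -o '"app_enduser" : "[^"]*"' <<'EOF'
{
"scope" : "READ",
"app_enduser" : "6ZG094fgnjNf02EK",
"access_token" : "7S22UqXGJDTuUADGzJzjXzXSaGJL"
}
EOF
```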

Enable HTTP deployment
By default, Edge uses RPC to deploy API proxies. While this mode works very well for
most installations, larger topologies with many MPs might experience timeouts
when a large number of concurrent calls are made via RPC. Apigee plans to
deprecate this implementation in the future.

As a result, Apigee recommends that larger deployments use HTTP rather than RPC
for deployment.

In addition to potentially providing greater reliability, enabling HTTP deployment also improves the content and format of exceptions that might be thrown during the deployment process.

This section describes how to enable HTTP for deployment.
Update your organization
To enable HTTP deployment, send a PUT request to the Update organization
properties API. Set the following properties in the body of the request:

Property: allow.deployment.over.http
Determines whether Edge can deploy API proxies via HTTP (in addition to RPC). Set to true to allow HTTP deployment; otherwise, false. The default is false. To enable HTTP deployments, you must set this property to true.

Property: use.http.for.configuration
Specifies which method to use for configuration events. Possible values are:
● never: All config events use RPC. This is the default.
● retry: All config events use RPC first; if an event fails via RPC, Edge tries HTTP. This can cause delays if you should be using HTTP.
● always: All config events use HTTP.
To enable HTTP deployments, Apigee recommends setting this property to always.

In addition to setting these properties in the body of the message, you must set the
Content-Type header to application/json or application/xml.

The following example calls the Update organization properties API with a JSON
message body.

Note: When updating properties, you must pass all existing properties to the API, even if they are
not being changed. If you omit existing properties from the payload in this API call, the properties
are removed. To get the current list of properties for the environment, use the Get organization
API.
curl -u admin_email:admin_password \
"http://management_server_IP:8080/v1/organizations/org_name" \
-X POST -H "Content-Type: application/json" -d \
'{
"properties" : {
"property" : [
{
"name" : "allow.deployment.over.http",
"value" : "true"
},
{
"name" : "use.http.for.configuration",
"value" : "always"
}]
}
}'
Note: After enabling HTTP deployment, wait about 10 minutes for the changes to take effect.
Before the changes take effect, all deployment events continue to use RPC. After that,
deployment events should use HTTP.

To enable HTTP deployment on all API proxies across all your organizations, you
must update each organization as described above.

Test the update
To test that your update was successful, trigger a deployment event on an API proxy
in the updated organization and then look at the Message Processor's log files. The
log entry for the deployment events should contain mode:API.

For more information, see Log files.
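The check can be as simple as grepping the Message Processor log for deployment entries. The log line below is fabricated for illustration, and the system.log path is an assumption that may differ in your installation:

```shell
# Look for deployment events that used HTTP (mode:API). In a real check, point
# grep at the Message Processor log, for example:
#   grep "mode:API" /opt/apigee/var/log/edge-message-processor/logs/system.log
# The here-document below is a made-up log line for illustration only.
grep "mode:API" <<'EOF'
2021-06-01 10:00:00,123 INFO DEPLOYMENT - deployment event completed mode:API
EOF
```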

Recurring Edge Services maintenance tasks
To ensure optimal day-to-day operation of the Apigee system, certain tasks should
be performed when the system is originally installed and/or on a periodic basis.

The following tools are used to communicate with, or maintain various components
of, the Apigee system.

● nodetool (Apache Cassandra maintenance): /opt/apigee/apigee-cassandra/bin
● cassandra-cli (Apache Cassandra command line): /opt/apigee/apigee-cassandra/bin
● zkCli.sh (Apache ZooKeeper command line utility): /opt/apigee/apigee-zookeeper/bin
● nc (arbitrary TCP/IP and UDP commands; invocation of ZooKeeper "four-letter commands"): /usr/bin/nc, or another location depending on your operating system

In situations where the nc or telnet commands might be considered a security risk, the following Python script can be used (invoke it as, for example, python3 script.py <host> 2181 ruok):

import socket
import sys
import time

# Send a four-letter command to the given host/port and print the response.
if len(sys.argv) != 4:
    print("Usage: %s address port 4-letter-cmd" % sys.argv[0])
else:
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect((sys.argv[1], int(sys.argv[2])))
    c.send(sys.argv[3].encode())
    time.sleep(0.1)
    print(c.recv(512).decode())
    c.close()

About Cassandra replication factor and consistency level
About the Cassandra replication factor
Cassandra stores data replicas on multiple nodes to ensure reliability and fault
tolerance. The replication strategy for each Edge keyspace determines the nodes
where replicas are placed.

The total number of replicas for a keyspace across a Cassandra cluster is referred to
as the keyspace's replication factor. A replication factor of one means that there is
only one copy of each row in the Cassandra cluster. A replication factor of two
means there are two copies of each row, where each copy is on a different node. All
replicas are equally important; there is no primary or master replica.

NOTE
Apigee does not support changing the replication factor because it:

● May or may not increase overall availability when more than one node fails
● Might result in adverse effects on the overall ecosystem with increased latencies

In a production system with three or more Cassandra nodes in each data center, the
default replication factor for an Edge keyspace is three. As a general rule, the
replication factor should not exceed the number of Cassandra nodes in the cluster.

Use the following procedure to view the Cassandra schema, which shows the
replication factor for each Edge keyspace:

1. Log in to a Cassandra node.
2. Run the following command:
/opt/apigee/apigee-cassandra/bin/cassandra-cli -h $(hostname -i) <<< "show schema;"
Where $(hostname -i) resolves to the IP address of the Cassandra node. Or you can replace $(hostname -i) with the IP address of the node.

For each keyspace, you will see output in the form:

create keyspace kms
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {dc-1 : 3}
  and durable_writes = true;

You can see that for data center 1, dc-1, the default replication factor for the kms
keyspace is three for an installation with three Cassandra nodes.

Note: For a smaller Cassandra cluster, such as a single node Cassandra cluster used in an Edge
all-in-one installation, the replication factor is one. However, an Edge production cluster should
always have three or more Cassandra nodes.

If you add additional Cassandra nodes to the cluster, the default replication factor is
not affected.

For example, if you increase the number of Cassandra nodes to six, but leave the
replication factor at three, you do not ensure that all Cassandra nodes have a copy of
all the data. If a node goes down, a higher replication factor means a higher
probability that the data on the node exists on one of the remaining nodes. The
downside of a higher replication factor is an increased latency on data writes.

About the Cassandra consistency level
The Cassandra consistency level is defined as the minimum number of Cassandra
nodes that must acknowledge a read or write operation before the operation can be
considered successful. Different consistency levels can be assigned to different
Edge keyspaces.

When connecting to Cassandra for read and write operations, Message Processor
and Management Server nodes typically use the Cassandra value of LOCAL_QUORUM
to specify the consistency level for a keyspace. However, some keyspaces are
defined to use a consistency level of one.

The calculation of the value of LOCAL_QUORUM for a data center is:

LOCAL_QUORUM = (replication_factor/2) + 1

As described above, the default replication factor for an Edge production environment with three Cassandra nodes is three. Therefore, the default value of LOCAL_QUORUM = (3/2) + 1 = 2 (the value is rounded down to an integer).

With LOCAL_QUORUM = 2, at least two of the three Cassandra nodes in the data
center must respond to a read/write operation for the operation to succeed. For a
three node Cassandra cluster, the cluster could therefore tolerate one node being
down per data center.
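The arithmetic generalizes: with integer division, LOCAL_QUORUM = (RF / 2) + 1, and a data center tolerates RF minus that many nodes being down. A quick shell check of the formula:

```shell
# Apply the LOCAL_QUORUM formula for a few replication factors. Shell
# arithmetic is integer division, which matches the round-down rule above.
for rf in 1 2 3 5; do
  quorum=$(( rf / 2 + 1 ))
  echo "RF=$rf LOCAL_QUORUM=$quorum tolerates $(( rf - quorum )) node(s) down"
done
# The RF=3 line prints: RF=3 LOCAL_QUORUM=2 tolerates 1 node(s) down
```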

By specifying the consistency level as LOCAL_QUORUM, Edge avoids the latency required by validating operations across multiple data centers. If a keyspace used the Cassandra QUORUM value as the consistency level, read/write operations would have to be validated across all data centers.

To see the consistency level used by the Edge Message Processor or Management
Server nodes:

1. Log in to a Message Processor node.
2. Change to the /opt/apigee/edge-message-processor/conf directory:
cd /opt/apigee/edge-message-processor/conf
3. For write consistency:
grep -ri "write.consistencylevel" *
4. For read consistency:
grep -ri "read.consistencylevel" *
5. Log in to the Management Server node.
6. Change to the /opt/apigee/edge-management-server/conf directory:
cd /opt/apigee/edge-management-server/conf
7. Repeat steps 3 and 4.

If you add additional Cassandra nodes to the cluster, the consistency level is not
affected.
Apache Cassandra maintenance tasks
This section describes periodic maintenance tasks for Cassandra.

Anti-entropy maintenance
The Apache Cassandra ring nodes require periodic maintenance to ensure
consistency across all nodes. To perform this maintenance, use the following
command:

apigee-service apigee-cassandra apigee_repair -pr

Apigee recommends the following when running this command:

● Run on every Cassandra node (across all regions or data centers).
● Run on one node at a time to ensure consistency across all nodes in the ring. Running repair jobs on multiple nodes at the same time can impair the health of Cassandra.
● Ensure that 50% disk space is available on each Cassandra node.
To check if a repair job on a node has completed successfully, look in the node's system.log file for an entry with the latest repair session's UUID and the phrase "session completed successfully." Here is a sample log entry:
INFO [AntiEntropySessions:1] 2015-03-01 10:02:56,245 RepairSession.java (line 282) [repair #2e7009b0-c03d-11e4-9012-99a64119c9d8] session completed successfully

● Ref: https://support.datastax.com/hc/en-us/articles/204226329-How-to-
check-if-a-scheduled-nodetool-repair-ran-successfully
● Run during periods of relatively low workload (the tool imposes a significant
load on the system).
● Run at least every seven days in order to eliminate problems related to
Cassandra's "forgotten deletes".
● Run on different nodes on different days, or schedule it so that there are
several hours between running it on each node.
● Use the -pr option (partitioner range) to specify the primary partitioner range
of the node only.
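One way to satisfy the run-weekly, one-node-at-a-time guidance is a staggered cron entry on each node, using a different weekday or hour per node. The entry below is only a sketch: the 02:00 Sunday slot, the log path, and the invoking user are choices you make for your own environment, not Apigee recommendations.

```
# Example crontab entry (install with crontab -e as a user permitted to run
# apigee-service): run repair at 02:00 every Sunday on this node. Stagger the
# day/hour across nodes so only one node repairs at a time.
0 2 * * 0 /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra apigee_repair -pr >> /tmp/apigee_repair.log 2>&1
```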

If you enabled JMX authentication for Cassandra, you must include the username
and password when you invoke nodetool. For example:

apigee-service apigee-cassandra apigee_repair -u username -pw password -pr

You can also run the following command to check the supported options of
apigee_repair:

apigee-service apigee-cassandra apigee_repair -h

Note: apigee_repair is a wrapper around Cassandra's nodetool repair, which performs additional checks before performing Cassandra's repair.
For more information, see the following resources:

● Forgotten deletes
● Data consistency
● nodetool repair
NOTE: Apigee does not recommend adding, moving or removing Cassandra nodes without
contacting Apigee Edge Support. The Apigee system tracks Cassandra nodes using their IP
address, and performing ring maintenance without performing corresponding updates on the
Apigee environment metadata will cause undesirable results.

Log file maintenance


Cassandra logs are stored in the /opt/apigee/var/log/cassandra directory on
each node. By default, a maximum of 50 log files, each with a maximum size of 20
MB, can be created; once this limit is reached older logs are deleted when newer logs
are created.

If you should find that Cassandra log files are taking up excessive space, you can
modify the amount of space allocated for log files by editing the log4j settings.

1. Edit /opt/apigee/customer/application/cassandra.properties
to set the following properties. If that file does not exist, create it:
conf_logback_maxfilesize=20MB # max file size
conf_logback_maxbackupindex=50 # max number of backup files
2. Restart Cassandra by using the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
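With the settings above, the worst-case log footprint per node is bounded by the product of the two values. A quick sanity check (values copied from the properties above; substitute your own):

```shell
# Upper bound on Cassandra log disk usage implied by the logback settings.
max_file_size_mb=20    # conf_logback_maxfilesize
max_backup_index=50    # conf_logback_maxbackupindex
echo "$(( max_file_size_mb * max_backup_index )) MB"   # prints "1000 MB"
```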

Disk space maintenance


You should monitor Cassandra disk utilization regularly to ensure at least 50 percent
of each disk is free. If disk utilization climbs above 50 percent, we recommend that
you add more disk space to reduce the percentage that is in use.
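A minimal monitoring sketch for the 50 percent threshold; the data directory shown is the default Edge location and may differ in your installation:

```shell
# Warn when the filesystem holding Cassandra data is more than 50% full.
check_disk() {
  local data_dir="${1:-/opt/apigee/data/apigee-cassandra/data}"
  local used_pct
  # Column 5 of `df -P` is the use percentage, e.g. "42%".
  used_pct=$(df -P "$data_dir" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
  if [ "$used_pct" -gt 50 ]; then
    echo "WARNING: ${data_dir} is ${used_pct}% full; add disk capacity"
    return 1
  fi
  echo "OK: ${data_dir} is ${used_pct}% full"
}
```

Schedule a call to this function from cron on each Cassandra node and route the warning to your alerting system.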

Cassandra automatically performs the following operations to reduce its own disk
utilization:

● Authentication token deletion when tokens expire. However, it may take a
couple of weeks to free up the disk space the tokens were using, depending
on your configuration. If automatic deletion is not adequate to maintain
sufficient disk space, contact support to learn about manually deleting tokens
to recover space. Note: Manual deletion will delete all tokens, after which all clients will
need to re-authenticate.
● Note on data compaction: Starting with Edge for Private Cloud 4.51.00, new
installations of Apigee Cassandra will create keyspaces with the Leveled
Compaction Strategy.
Installations of older versions of Edge for Private Cloud that have been
upgraded to Private Cloud 4.51.00 will continue to retain the prior compaction
strategy. If the existing compaction strategy is
SizeTieredCompactionStrategy, we recommend changing to
LeveledCompactionStrategy, which provides better disk utilization.

Note: When Cassandra performs data compactions, it can take a considerable
amount of CPU cycles and memory, but resource utilization should return to normal
once compactions are complete. You can run the nodetool compactionstats
command on each node to check if compaction is running. The output of
compactionstats tells you whether there are pending compactions to be executed
and the estimated time for completion.
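The pending-compactions check can be automated by parsing that report. This sketch reads the compactionstats output on stdin, so it can be exercised without a live cluster; the helper name no_pending is illustrative:

```shell
# Succeed only when a `nodetool compactionstats` report shows zero pending tasks.
# Usage: nodetool compactionstats | no_pending && echo "safe to proceed"
no_pending() {
  awk '/pending tasks:/ { found = 1; exit ($3 == 0) ? 0 : 1 }
       END { if (!found) exit 1 }'
}
```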

Change the Cassandra compaction strategy
Apigee uses Cassandra databases to store most of its data, including data for
proxies, caches, and tokens. Compaction is a standard process to reduce the size of
the data stored in databases, which is vital to keeping databases running efficiently.
Cassandra supports various strategies for compaction. Apigee recommends that all
Edge for Private Cloud customers operate their Cassandra clusters with the strategy
LeveledCompactionStrategy, rather than the default strategy
SizeTieredCompactionStrategy, for all column families.
LeveledCompactionStrategy offers better performance, better disk utilization,
more efficient compactions, and requires less free space than
SizeTieredCompactionStrategy.

All new installations of Apigee 4.51.00 or later will automatically set up Cassandra
with LeveledCompactionStrategy. However, if you are using an older version of
Apigee or have upgraded to Apigee 4.51.00 from an older version, your version may
still be using Cassandra with SizeTieredCompactionStrategy. To find out
which compaction strategy your version of Cassandra is using, see the section
Check compaction strategy.

This page explains how to change the compaction strategy to
LeveledCompactionStrategy.

Note: Changing compaction strategy to LeveledCompactionStrategy is a prerequisite for
upgrading to Apigee 4.52.00 or later.

Preparation
Check the existing compaction strategy
To check the existing compaction strategy on column families, follow the
instructions in Check compaction strategy. If the compaction strategy is already
LeveledCompactionStrategy, you don't need to follow the remaining
instructions on this page.

Backup

Since changing the compaction strategy triggers a full compaction cycle on the Cassandra nodes, it
might introduce some latency due to the load of compactions and simultaneous
application traffic. Rolling back this change may require restoring Cassandra
nodes from backups. See How to backup to learn how to back up your data before
changing the compaction strategy.

Compaction throughput

After the compaction strategy is changed to LeveledCompactionStrategy,
compactions may run for a long time; the runtime varies with the size of the data
being compacted. During the compaction cycle, Cassandra may use more system
resources. To ensure compaction does not take up so many system resources that it
disrupts API runtime requests, we recommend setting limits for compaction
throughput.

Run the following nodetool command on each of the nodes to cap compaction
throughput at 128 MB/s on all Cassandra nodes:

nodetool setcompactionthroughput 128

Sizing VMs for compactions

Ensure that the Cassandra nodes have enough CPU and memory resources before
executing this change, and that no Cassandra node is operating at more than 25%
CPU load before executing this change.

After a compaction strategy change, a full compaction cycle is expected to run, so it
is recommended to change the compaction strategy during low-traffic periods.

Staggered runs

You might not be able to complete the change of all nodes within a day, especially if
you operate large Cassandra clusters, as indices need to be rebuilt on each node one
by one. You could change the compaction strategy of one schema or one column
family (table) at a time. For this, alter the column family to change its compaction
strategy and then rebuild all indices on the table (if any) on all the nodes. Then repeat
the above procedure for each table or keyspace. Such runs for one table or one
keyspace can be broken down to be run across different days.

For example, to change the compaction strategy of the oauth_20_access_tokens
column family in the kms schema, you can do the following:

1. Alter the table to change its compaction strategy:
ALTER TABLE kms.oauth_20_access_tokens WITH compaction = {'class' :
'LeveledCompactionStrategy'};
2. Rebuild all indices of just this table:
nodetool rebuild_index kms oauth_20_access_tokens
oauth_20_access_tokens_app_id_idx
nodetool rebuild_index kms oauth_20_access_tokens
oauth_20_access_tokens_client_id_idx
nodetool rebuild_index kms oauth_20_access_tokens
oauth_20_access_tokens_refresh_token_idx
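Staggered runs over many tables are easier to drive from a small script. The sketch below only generates the CQL text (the helper name alter_cql and the example table list are illustrative); pipe its output into cqlsh to apply it:

```shell
# Generate ALTER TABLE statements that switch tables to LeveledCompactionStrategy.
alter_cql() {
  printf "ALTER TABLE %s.%s WITH compaction = {'class' : 'LeveledCompactionStrategy'};\n" "$1" "$2"
}

# Example: the tables scheduled for today's staggered run.
for table in oauth_20_access_tokens oauth_20_authorization_codes; do
  alter_cql kms "$table"
done
```

Keeping the statement generation separate from execution lets you review the exact CQL before running it against a node.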

Changing compaction strategy


At a high level, changing compaction strategy is a two-step process:

1. Modify the compaction strategy of every table.
2. Rebuild all indices, on each node one by one.

Alter Table to set new compaction strategy

Run the following Cassandra Query Language (CQL) commands on any one
Cassandra node, changing the strategy for one keyspace at a time. You can run the
CQL statements at the cqlsh prompt. To invoke the cqlsh prompt:

/opt/apigee/apigee-cassandra/bin/cqlsh `hostname -i`

You will see a response like the following:

Connected to apigee at XX.XX.XX.XX:9042.


[cqlsh 5.0.1 | Cassandra 2.1.16 | CQL spec 3.2.1 | Native
protocol v3]
Use HELP for help.
cqlsh>

Run the following CQLs to change the compaction strategy:

● CQLs to change compaction strategy for keyspace kms:


ALTER TABLE kms.organizations WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.maps WITH compaction = {'class' : 'LeveledCompactionStrategy'};
ALTER TABLE kms.apps WITH compaction = {'class' : 'LeveledCompactionStrategy'};
ALTER TABLE kms.app_credentials WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.api_products WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.apiproducts_appslist WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.api_resources WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.app_end_user WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.oauth_20_authorization_codes WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.oauth_20_access_tokens WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.oauth_10_request_tokens WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.oauth_10_access_tokens WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE kms.oauth_10_verifiers WITH compaction = {'class' :
'LeveledCompactionStrategy'};

ALTER TABLE kms.app_enduser_tokens WITH compaction = {'class' :
'LeveledCompactionStrategy'};

● CQLs to change compaction strategy for keyspace user_settings:
ALTER TABLE user_settings.user_settings WITH compaction = {'class' :
'LeveledCompactionStrategy'};

● CQLs to change compaction strategy for keyspace keyvaluemap:
ALTER TABLE keyvaluemap.keyvaluemaps_r21 WITH compaction = {'class' :
'LeveledCompactionStrategy'};

● CQLs to change compaction strategy for keyspace devconnect:
ALTER TABLE devconnect.developers WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE devconnect.companies WITH compaction = {'class' :
'LeveledCompactionStrategy'};

ALTER TABLE devconnect.company_developers WITH compaction = {'class' :
'LeveledCompactionStrategy'};
● CQLs to change compaction strategy for keyspace counter:
ALTER TABLE counter.counters_current_version WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE counter.counters_with_expiry WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE counter.counters WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE counter.key_timestamp_count WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE counter.timestamp_key WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE counter.period_timestamp WITH compaction = {'class' :
'LeveledCompactionStrategy'};

ALTER TABLE counter.gateway_quota WITH compaction = {'class' :
'LeveledCompactionStrategy'};
● CQLs to change compaction strategy for keyspace cache:
ALTER TABLE cache.cache_entries WITH compaction = {'class' :
'LeveledCompactionStrategy'};

ALTER TABLE cache.cache_sequence_id_r24 WITH compaction = {'class' :
'LeveledCompactionStrategy'};
● CQLs to change compaction strategy for keyspace
ax_custom_report_model:
ALTER TABLE ax_custom_report_model.report_description WITH compaction =
{'class' : 'LeveledCompactionStrategy'};
ALTER TABLE ax_custom_report_model.report_id_lookup WITH compaction =
{'class' : 'LeveledCompactionStrategy'};
ALTER TABLE ax_custom_report_model.org_metadata WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE ax_custom_report_model.org_report_lookup WITH compaction =
{'class' : 'LeveledCompactionStrategy'};
ALTER TABLE ax_custom_report_model.report_created_view WITH compaction =
{'class' : 'LeveledCompactionStrategy'};

ALTER TABLE ax_custom_report_model.report_viewed_view WITH compaction =
{'class' : 'LeveledCompactionStrategy'};

● CQLs to change compaction strategy for keyspace auth:
ALTER TABLE auth.totp WITH compaction = {'class' :
'LeveledCompactionStrategy'};
● CQLs to change compaction strategy for keyspace audit:
ALTER TABLE audit.audits WITH compaction = {'class' :
'LeveledCompactionStrategy'};

ALTER TABLE audit.audits_ref WITH compaction = {'class' :
'LeveledCompactionStrategy'};
● CQLs to change compaction strategy for keyspace apprepo:
ALTER TABLE apprepo.organizations WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apprepo.environments WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apprepo.apiproxies WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apprepo.apiproxy_revisions WITH compaction = {'class' :
'LeveledCompactionStrategy'};

ALTER TABLE apprepo.api_proxy_revisions_r21 WITH compaction = {'class' :
'LeveledCompactionStrategy'};
● CQLs to change compaction strategy for keyspace apimodel:
ALTER TABLE apimodel.apis WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.apis_revision WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.resource WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.method WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.revision_counters WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.template_counters WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.template WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.credentials WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.credentialsv2 WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.schemas WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE apimodel.security WITH compaction = {'class' :
'LeveledCompactionStrategy'};

ALTER TABLE apimodel.template_auth WITH compaction = {'class' :
'LeveledCompactionStrategy'};
● CQLs to change compaction strategy for keyspace identityzone:
ALTER TABLE identityzone.IdentityZones WITH compaction = {'class' :
'LeveledCompactionStrategy'};

ALTER TABLE identityzone.OrgToIdentityZone WITH compaction = {'class' :
'LeveledCompactionStrategy'};

● CQLs to change compaction strategy for keyspace dek:
ALTER TABLE dek.keys WITH compaction = {'class' :
'LeveledCompactionStrategy'};
● CQLs to change compaction strategy for keyspace analytics:
ALTER TABLE analytics.custom_aggregates_defn WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE analytics.custom_rules_defn WITH compaction = {'class' :
'LeveledCompactionStrategy'};
ALTER TABLE analytics.custom_aggregates_defn_updates WITH compaction =
{'class' : 'LeveledCompactionStrategy'};

ALTER TABLE analytics.custom_rules_defn_updates WITH compaction =
{'class' : 'LeveledCompactionStrategy'};

Rebuild indices

You need to execute this step after the compaction strategy change. Run the
following nodetool commands on each Cassandra node, one node at a time.

Notes:

● The nodetool command may be located at
/opt/apigee/apigee-cassandra/bin/nodetool. You might need to pass the
entire path to nodetool if it cannot be located by your operating system.
● Rebuilding indices of the kms and audit keyspaces may take a while to complete.

Run the following nodetool commands to rebuild indices:

● Rebuild indices for keyspace kms:


nodetool rebuild_index kms maps maps_organization_name_idx
nodetool rebuild_index kms apps apps_app_family_idx
nodetool rebuild_index kms apps apps_app_id_idx
nodetool rebuild_index kms apps apps_app_type_idx
nodetool rebuild_index kms apps apps_name_idx
nodetool rebuild_index kms apps apps_organization_name_idx
nodetool rebuild_index kms apps apps_parent_id_idx
nodetool rebuild_index kms apps apps_parent_status_idx
nodetool rebuild_index kms apps apps_status_idx
nodetool rebuild_index kms app_credentials app_credentials_api_products_idx
nodetool rebuild_index kms app_credentials
app_credentials_organization_app_id_idx
nodetool rebuild_index kms app_credentials app_credentials_organization_name_idx
nodetool rebuild_index kms api_products api_products_organization_name_idx
nodetool rebuild_index kms app_end_user app_end_user_app_id_idx
nodetool rebuild_index kms oauth_20_authorization_codes
oauth_20_authorization_codes_client_id_idx
nodetool rebuild_index kms oauth_20_authorization_codes
oauth_20_authorization_codes_organization_name_idx
nodetool rebuild_index kms oauth_20_access_tokens
oauth_20_access_tokens_app_id_idx
nodetool rebuild_index kms oauth_20_access_tokens
oauth_20_access_tokens_client_id_idx
nodetool rebuild_index kms oauth_20_access_tokens
oauth_20_access_tokens_refresh_token_idx
nodetool rebuild_index kms oauth_10_request_tokens
oauth_10_request_tokens_consumer_key_idx
nodetool rebuild_index kms oauth_10_request_tokens
oauth_10_request_tokens_organization_name_idx
nodetool rebuild_index kms oauth_10_access_tokens
oauth_10_access_tokens_app_id_idx
nodetool rebuild_index kms oauth_10_access_tokens
oauth_10_access_tokens_consumer_key_idx
nodetool rebuild_index kms oauth_10_access_tokens
oauth_10_access_tokens_organization_name_idx
nodetool rebuild_index kms oauth_10_access_tokens
oauth_10_access_tokens_status_idx
nodetool rebuild_index kms oauth_10_verifiers
oauth_10_verifiers_organization_name_idx

nodetool rebuild_index kms oauth_10_verifiers
oauth_10_verifiers_request_token_idx
● Rebuild indices for keyspace devconnect:
nodetool rebuild_index devconnect companies companies_name_idx
nodetool rebuild_index devconnect companies companies_organization_name_idx
nodetool rebuild_index devconnect companies companies_status_idx
nodetool rebuild_index devconnect company_developers
company_developers_company_name_idx
nodetool rebuild_index devconnect company_developers
company_developers_developer_email_idx
nodetool rebuild_index devconnect company_developers
company_developers_organization_name_idx
nodetool rebuild_index devconnect developers developers_email_idx
nodetool rebuild_index devconnect developers developers_organization_name_idx

nodetool rebuild_index devconnect developers developers_status_idx

● Rebuild indices for keyspace cache:
nodetool rebuild_index cache cache_entries cache_entries_cache_name_idx

● Rebuild indices for keyspace audit:
nodetool rebuild_index audit audits audits_operation_idx
nodetool rebuild_index audit audits audits_requesturi_idx
nodetool rebuild_index audit audits audits_responsecode_idx
nodetool rebuild_index audit audits audits_timestamp_idx

nodetool rebuild_index audit audits audits_user_idx

● Rebuild indices for keyspace apprepo:
nodetool rebuild_index apprepo environments
environments_organization_name_idx

● Rebuild indices for keyspace apimodel:
nodetool rebuild_index apimodel apis a_name
nodetool rebuild_index apimodel apis a_org_name
nodetool rebuild_index apimodel apis_revision ar_a_name
nodetool rebuild_index apimodel apis_revision ar_a_uuid
nodetool rebuild_index apimodel apis_revision ar_base_url
nodetool rebuild_index apimodel apis_revision ar_is_active
nodetool rebuild_index apimodel apis_revision ar_is_latest
nodetool rebuild_index apimodel apis_revision ar_org_name
nodetool rebuild_index apimodel apis_revision ar_rel_ver
nodetool rebuild_index apimodel apis_revision ar_rev_num
nodetool rebuild_index apimodel resource r_a_name
nodetool rebuild_index apimodel resource r_api_uuid
nodetool rebuild_index apimodel resource r_ar_uuid
nodetool rebuild_index apimodel resource r_base_url
nodetool rebuild_index apimodel resource r_name
nodetool rebuild_index apimodel resource r_org_name
nodetool rebuild_index apimodel resource r_res_path
nodetool rebuild_index apimodel resource r_rev_num
nodetool rebuild_index apimodel method m_a_name
nodetool rebuild_index apimodel method m_api_uuid
nodetool rebuild_index apimodel method m_ar_uuid
nodetool rebuild_index apimodel method m_base_url
nodetool rebuild_index apimodel method m_name
nodetool rebuild_index apimodel method m_org_name
nodetool rebuild_index apimodel method m_r_name
nodetool rebuild_index apimodel method m_r_uuid
nodetool rebuild_index apimodel method m_res_path
nodetool rebuild_index apimodel method m_rev_num
nodetool rebuild_index apimodel method m_verb
nodetool rebuild_index apimodel template t_a_name
nodetool rebuild_index apimodel template t_a_uuid
nodetool rebuild_index apimodel template t_entity
nodetool rebuild_index apimodel template t_name
nodetool rebuild_index apimodel template t_org_name
nodetool rebuild_index apimodel schemas s_api_uuid
nodetool rebuild_index apimodel schemas s_ar_uuid
nodetool rebuild_index apimodel security sa_api_uuid
nodetool rebuild_index apimodel security sa_ar_uuid

nodetool rebuild_index apimodel template_auth au_api_uuid

● Rebuild indices for keyspace dek:
nodetool rebuild_index dek keys usecase_index

After rebuilding the indices:

1. Check that the compaction strategy change on the schema has taken effect by
following the instructions in Check compaction strategy.
2. Verify compaction has run successfully and data is compacted after the
strategy change:
a. On each Cassandra node, run the following nodetool command to
see if all compactions are completed and nothing is pending:
nodetool compactionstats
b. Once the command above shows that no compactions are pending,
check that the last modified timestamp of the data files
(under /opt/apigee/data/apigee-cassandra/data/) is later than the
timestamp when the compaction strategy change CQL was executed.
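The timestamp check in step 2 can be sketched as follows; the marker file and data directory are illustrative, and you would create the marker just before running the ALTER TABLE statements:

```shell
# Count data files modified after a marker timestamp (e.g. a file touched
# just before the compaction strategy change was applied).
newer_than() {
  local marker="$1" data_dir="$2"
  find "$data_dir" -type f -newer "$marker" | wc -l
}
```

If the count is still zero after all compactions have finished, the SSTables were not rewritten and the change should be investigated.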

Roll back
In case you need to roll back, you can follow one of the options below:

Option 1: Revert change

Roll back the compaction strategy to SizeTieredCompactionStrategy.

Note: Changing the compaction strategy back to SizeTieredCompactionStrategy will cause
another set of compactions to run, so roll back only if absolutely necessary, and make sure to let
those compactions run.

Run the following CQLs on any one Cassandra node, changing the strategy for one
keyspace at a time. You can run the CQL statements at the cqlsh prompt. To invoke
the cqlsh prompt:

/opt/apigee/apigee-cassandra/bin/cqlsh `hostname -i`

You will see a response like the following:

Connected to apigee at XX.XX.XX.XX:9042.


[cqlsh 5.0.1 | Cassandra 2.1.16 | CQL spec 3.2.1 | Native
protocol v3]
Use HELP for help.
cqlsh>

Run the following CQLs to change the compaction strategy:

● CQLs to change compaction strategy for keyspace kms:


ALTER TABLE kms.organizations WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.maps WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.apps WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.app_credentials WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.api_products WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.apiproducts_appslist WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.api_resources WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.app_end_user WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.oauth_20_authorization_codes WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.oauth_20_access_tokens WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.oauth_10_request_tokens WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.oauth_10_access_tokens WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE kms.oauth_10_verifiers WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};

ALTER TABLE kms.app_enduser_tokens WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace user_settings:
ALTER TABLE user_settings.user_settings WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace keyvaluemap:
ALTER TABLE keyvaluemap.keyvaluemaps_r21 WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace devconnect:
ALTER TABLE devconnect.developers WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE devconnect.companies WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};

ALTER TABLE devconnect.company_developers WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace counter:
ALTER TABLE counter.counters_current_version WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE counter.counters_with_expiry WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE counter.counters WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE counter.key_timestamp_count WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE counter.timestamp_key WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE counter.period_timestamp WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};

ALTER TABLE counter.gateway_quota WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace cache:
ALTER TABLE cache.cache_entries WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};

ALTER TABLE cache.cache_sequence_id_r24 WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace
ax_custom_report_model:
ALTER TABLE ax_custom_report_model.report_description WITH compaction =
{'class' : 'SizeTieredCompactionStrategy'};
ALTER TABLE ax_custom_report_model.report_id_lookup WITH compaction =
{'class' : 'SizeTieredCompactionStrategy'};
ALTER TABLE ax_custom_report_model.org_metadata WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE ax_custom_report_model.org_report_lookup WITH compaction =
{'class' : 'SizeTieredCompactionStrategy'};
ALTER TABLE ax_custom_report_model.report_created_view WITH compaction =
{'class' : 'SizeTieredCompactionStrategy'};

ALTER TABLE ax_custom_report_model.report_viewed_view WITH compaction =
{'class' : 'SizeTieredCompactionStrategy'};

● CQLs to change compaction strategy for keyspace auth:
ALTER TABLE auth.totp WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace audit:
ALTER TABLE audit.audits WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};

ALTER TABLE audit.audits_ref WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace apprepo:
ALTER TABLE apprepo.organizations WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apprepo.environments WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apprepo.apiproxies WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apprepo.apiproxy_revisions WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};

ALTER TABLE apprepo.api_proxy_revisions_r21 WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace apimodel:
ALTER TABLE apimodel.apis WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.apis_revision WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.resource WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.method WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.revision_counters WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.template_counters WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.template WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.credentials WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.credentialsv2 WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.schemas WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.security WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE apimodel.template_auth WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace identityzone:
ALTER TABLE identityzone.IdentityZones WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};

ALTER TABLE identityzone.OrgToIdentityZone WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};

● CQLs to change compaction strategy for keyspace dek:
ALTER TABLE dek.keys WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
● CQLs to change compaction strategy for keyspace analytics:
ALTER TABLE analytics.custom_aggregates_defn WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE analytics.custom_rules_defn WITH compaction = {'class' :
'SizeTieredCompactionStrategy'};
ALTER TABLE analytics.custom_aggregates_defn_updates WITH compaction =
{'class' : 'SizeTieredCompactionStrategy'};

ALTER TABLE analytics.custom_rules_defn_updates WITH compaction =
{'class' : 'SizeTieredCompactionStrategy'};

After all the column families have been altered, if you have already rebuilt indices
while changing compaction strategy to LeveledCompactionStrategy, you will
need to rebuild indices again. Follow the same steps as earlier for rebuilding all
indices. If you didn’t rebuild indices earlier, you don’t need to rebuild indices during
the rollback.

Option 2 - Full data restore from backup


Note: This option restores a backup of data from all keyspaces, which includes all runtime
keyspaces as well as management keyspaces. Also, restoring from backup may result in data
loss, in particular any data that was persisted to Cassandra between the time the backup
was taken and the time the restore is performed. So, use this restore option only when it is
absolutely necessary.

To perform a full data restore, see the instructions in Restore from backup
(https://docs.apigee.com/private-cloud/v4.51.00/how-perform-restore).

Check compaction strategy


Compaction strategies are set at a column family (table) level in Cassandra. You can
use the CQL queries below to check compaction strategy for each column family.

You can run the CQL statements at the cqlsh prompt. To invoke the cqlsh prompt:

/opt/apigee/apigee-cassandra/bin/cqlsh `hostname -i`

You will see a response like the following:

Connected to apigee at XX.XX.XX.XX:9042.


[cqlsh 5.0.1 | Cassandra 2.1.16 | CQL spec 3.2.1 | Native
protocol v3]
Use HELP for help.
cqlsh>

You can determine the current compaction strategy as follows:

● If the compaction strategy is set to SizeTieredCompactionStrategy, the
output of the queries below will be
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.
● If the compaction strategy is set to LeveledCompactionStrategy, the
output of the queries below will be
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.

Run the following CQLs to verify the compaction strategy:

● CQLs to verify compaction strategy for keyspace kms:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'organizations';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'maps';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'apps';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'app_credentials';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'api_products';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'apiproducts_appslist';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'api_resources';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'app_end_user';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'oauth_20_authorization_codes';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'oauth_20_access_tokens';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'oauth_10_request_tokens';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'oauth_10_access_tokens';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'oauth_10_verifiers';

SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name ='kms' and columnfamily_name = 'app_enduser_tokens';
● CQLs to verify compaction strategy for keyspace user_settings:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'user_settings' and columnfamily_name = 'user_settings';
● CQLs to verify compaction strategy for keyspace keyvaluemap:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'keyvaluemap' and columnfamily_name = 'keyvaluemaps_r21';
● CQLs to verify compaction strategy for keyspace devconnect:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'devconnect' and columnfamily_name = 'developers';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'devconnect' and columnfamily_name = 'companies';

SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'devconnect' and columnfamily_name = 'company_developers';
● CQLs to verify compaction strategy for keyspace counter:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'counter' and columnfamily_name = 'counters_current_version';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'counter' and columnfamily_name = 'counters_with_expiry';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'counter' and columnfamily_name = 'counters';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'counter' and columnfamily_name = 'key_timestamp_count';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'counter' and columnfamily_name = 'timestamp_key';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'counter' and columnfamily_name = 'period_timestamp';

SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'counter' and columnfamily_name = 'gateway_quota';
● CQLs to verify compaction strategy for keyspace cache:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'cache' and columnfamily_name = 'cache_entries';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'cache' and columnfamily_name = 'cache_sequence_id_r24';
● CQLs to verify compaction strategy for keyspace
ax_custom_report_model:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'ax_custom_report_model' and columnfamily_name =
'report_description';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'ax_custom_report_model' and columnfamily_name =
'report_id_lookup';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'ax_custom_report_model' and columnfamily_name =
'org_metadata';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'ax_custom_report_model' and columnfamily_name =
'org_report_lookup';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'ax_custom_report_model' and columnfamily_name =
'report_created_view';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'ax_custom_report_model' and columnfamily_name =
'report_viewed_view';
● CQLs to verify compaction strategy for keyspace auth:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'auth' and columnfamily_name = 'totp';
● CQLs to verify compaction strategy for keyspace audit:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'audit' and columnfamily_name = 'audits';

SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'audit' and columnfamily_name = 'audits_ref';
● CQLs to verify compaction strategy for keyspace apprepo:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apprepo' and columnfamily_name = 'organizations';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apprepo' and columnfamily_name = 'environments';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apprepo' and columnfamily_name = 'apiproxies';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apprepo' and columnfamily_name = 'apiproxy_revisions';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apprepo' and columnfamily_name = 'api_proxy_revisions_r21';
● CQLs to verify compaction strategy for keyspace apimodel:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'apis';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'apis_revision';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'resource';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'method';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'revision_counters';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'template_counters';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'template';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'credentials';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'credentialsv2';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'schemas';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'security';

SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'apimodel' and columnfamily_name = 'template_auth';
● CQLs to verify compaction strategy for keyspace identityzone:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'identityzone' and columnfamily_name = 'identityzones';

SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'identityzone' and columnfamily_name = 'orgtoidentityzone';
● CQLs to verify compaction strategy for keyspace dek:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'dek' and columnfamily_name = 'keys';
● CQLs to verify compaction strategy for keyspace analytics:
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'analytics' and columnfamily_name = 'custom_aggregates_defn';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'analytics' and columnfamily_name = 'custom_rules_defn';
SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'analytics' and columnfamily_name =
'custom_aggregates_defn_updates';

SELECT compaction_strategy_class from system.schema_columnfamilies where
keyspace_name = 'analytics' and columnfamily_name =
'custom_rules_defn_updates';
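Rather than maintaining each statement by hand, the checks above can be generated from a keyspace:table list and piped to cqlsh. This is a minimal sketch, not part of the product; the cqlsh host and port in the usage comment are assumptions for your environment.

```shell
# Generate the compaction-strategy check CQL from keyspace:table pairs.
generate_compaction_checks() {
  for pair in "$@"; do
    ks="${pair%%:*}"; cf="${pair##*:}"
    echo "SELECT compaction_strategy_class from system.schema_columnfamilies where keyspace_name = '$ks' and columnfamily_name = '$cf';"
  done
}

# Example subset; extend with the remaining tables listed in this section.
generate_compaction_checks kms:organizations kms:maps devconnect:developers

# Hypothetical usage against a live node (host/port are assumptions):
#   generate_compaction_checks kms:organizations kms:maps | cqlsh 192.168.0.1 9042
```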

About ZooKeeper maintenance


ZooKeeper ensembles are designed to remain functional, with no data loss, despite
the loss of one or more ZooKeeper nodes. This resilience can be used effectively to
perform maintenance on ZooKeeper nodes with no system downtime.

About ZooKeeper and Edge

In Edge, ZooKeeper nodes contain configuration data about the location and
configuration of the various Edge components, and notifies the different
components of configuration changes. All supported Edge topologies for a
production system use at least three ZooKeeper nodes.

Note: Three ZooKeeper nodes is the minimum number required. You can add additional
ZooKeeper nodes as necessary based on your system requirements.

Use the ZK_HOSTS and ZK_CLIENT_HOSTS properties in the Edge config file to
specify the ZooKeeper nodes. For example:

ZK_HOSTS="$IP1 $IP2 $IP3"

ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"

Where:

● ZK_HOSTS specifies the IP addresses of the ZooKeeper nodes. The IP
addresses must be listed in the same order on all ZooKeeper nodes.
In a multi-Data Center environment, list all ZooKeeper nodes from all Data
Centers.
● ZK_CLIENT_HOSTS specifies the IP addresses of the ZooKeeper nodes used
by this Data Center only. The IP addresses must be listed in the same order on
all ZooKeeper nodes in the Data Center.
In a single Data Center installation, these are the same nodes as specified by
ZK_HOSTS. In a multi-Data Center environment, the Edge config file for each
Data Center should list only the ZooKeeper nodes for that Data Center.

By default, all ZooKeeper nodes are designated as voter nodes. That means the
nodes all participate in electing the ZooKeeper leader. You can include the
:observer modifier with ZK_HOSTS to signify that the node is an observer node,
and not a voter. An observer node does not participate in the election of the leader.

You typically specify the :observer modifier when creating multiple Edge Data
Centers, or when a single Data Center has a large number of ZooKeeper nodes. For
example, in a 12-host Edge installation with two Data Centers, ZooKeeper on node 9
in Data Center 2 is the observer.

You then use the following settings in your config file for Data Center 1:

ZK_HOSTS="$IP1 $IP2 $IP3 $IP7 $IP8 $IP9:observer"

ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"

And for Data Center 2:

ZK_HOSTS="$IP1 $IP2 $IP3 $IP7 $IP8 $IP9:observer"

ZK_CLIENT_HOSTS="$IP7 $IP8 $IP9"

The following sections contain more details about leader, voter, and observer nodes
and describe the considerations of adding voter and observer nodes.

About leaders, followers, voters, and observers

In a multi-node ZooKeeper installation, one of the nodes is designated as the
leader. All other ZooKeeper nodes are designated as followers. While reads can
happen from any ZooKeeper node, all write requests are forwarded to the leader.
For example, when a new Message Processor is added to Edge, that information is
written to the ZooKeeper leader, and all followers then replicate the data.

At Edge installation time, you designate each ZooKeeper node as a voter or as an
observer. The leader is then elected by all voter ZooKeeper nodes. The one
requirement for the election of a leader is that there must be a quorum of functioning
ZooKeeper voter nodes available. A quorum means more than half of all voter
ZooKeeper nodes, across all Data Centers, are functional.

If there is not a quorum of voter nodes available, no leader can be elected. In this
scenario, ZooKeeper cannot serve requests. That means you cannot make requests
to the Edge Management Server, process Management API requests, or log in to the
Edge UI until the quorum is restored.

Note: ZooKeeper observer nodes do not participate in the voting and are not counted when
determining a quorum.

For example, in a single Data Center installation:

● You installed three ZooKeeper nodes
● All ZooKeeper nodes are voters
● The quorum is two functioning voter nodes
● If only one voter node is available, the ZooKeeper ensemble cannot function

In an installation with two Data Centers:

● You installed three ZooKeeper nodes per Data Center, for a total of six nodes
● Data Center 1 has three voter nodes
● Data Center 2 has two voter nodes and one observer node
● The quorum is based on the five voters across both Data Centers, and is
therefore three functioning voter nodes
● If two or fewer voter nodes are available, the ZooKeeper ensemble cannot
function
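The quorum arithmetic behind both examples is the same strict-majority formula, floor(voters / 2) + 1, sketched here for reference:

```shell
# Quorum size and tolerated voter failures for a given number of voter nodes.
zk_quorum() { echo $(( $1 / 2 + 1 )); }
zk_tolerated_failures() { echo $(( $1 - ($1 / 2 + 1) )); }

zk_quorum 3               # single Data Center example: 3 voters -> quorum of 2
zk_quorum 5               # two Data Center example: 5 voters -> quorum of 3
zk_tolerated_failures 5   # the 5-voter ensemble tolerates 2 failed voters
```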

Considerations for adding nodes as voters or observers

Your system requirements might require that you add additional ZooKeeper nodes to
your Edge installation. The Adding ZooKeeper nodes documentation describes how
to add additional ZooKeeper nodes to Edge. When adding ZooKeeper nodes, you
must take into consideration the type of nodes to add: voter or observer.

You want to make sure you have enough voter nodes so that if one or more voter
nodes are down, the ZooKeeper ensemble can still function, meaning there is still a
quorum of voter nodes available. By adding voter nodes, you increase the size of the
quorum, and therefore you can tolerate more voter nodes being down.
However, adding voter nodes can negatively affect write performance, because
write operations require agreement among the quorum, and the time it takes to
reach agreement increases as you add more voter nodes. Therefore, you do not
want to make all the nodes voters.

Rather than adding voter nodes, you can add observer nodes. Adding observer nodes
increases the overall system read performance without adding to the overhead of
electing a leader because observer nodes do not vote and do not affect the quorum
size. Therefore, if an observer node goes down, it does not affect the ability of the
ensemble to elect a leader. However, losing observer nodes can cause degradation
in the read performance of the ZooKeeper ensemble because there are fewer nodes
available to service data requests.

In a single Data Center, Apigee recommends that you have no more than five voters
regardless of the number of observer nodes. In two Data Centers, Apigee
recommends that you have no more than nine voters (five in one Data Center and
four in the other). You can then add as many observer nodes as necessary for
your system requirements.

Remove a ZooKeeper node

There are many reasons you might want to remove a ZooKeeper node; for example, a
node got corrupted or it was added to the wrong environment.

This section describes how to remove a ZooKeeper node when the node is down and
not reachable.

To remove a ZooKeeper node:

1. Edit your silent configuration file and remove the IP address of the ZooKeeper
node that you want to remove.
2. Re-run the setup command for ZooKeeper to reconfigure the remaining
ZooKeeper nodes:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper setup -f
updated_config_file
3. Restart all ZooKeeper nodes:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart
4. Reconfigure and restart the Management Server node:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server setup -f
updated_config_file
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
5. Reconfigure and restart all the Routers:
/opt/apigee/apigee-service/bin/apigee-service edge-router setup -f
updated_config_file
/opt/apigee/apigee-service/bin/apigee-service edge-router restart
6. Reconfigure and restart all the Message Processors:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor setup -f
updated_config_file
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
7. Reconfigure and restart all Qpid nodes:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server setup -f
updated_config_file
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
8. Reconfigure and restart all Postgres nodes:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server setup -f
updated_config_file
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart
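Because every component is reconfigured with the same setup-and-restart pattern, the per-component steps above can be scripted as a loop. A sketch under stated assumptions: SERVICE_CMD defaults to the standard apigee-service path and CONFIG_FILE to the placeholder name used above; run it on the nodes hosting each component.

```shell
# Reconfigure and restart each Edge component against the updated config file.
SERVICE_CMD="${SERVICE_CMD:-/opt/apigee/apigee-service/bin/apigee-service}"
CONFIG_FILE="${CONFIG_FILE:-updated_config_file}"

reconfigure_components() {
  for comp in edge-management-server edge-router edge-message-processor \
              edge-qpid-server edge-postgres-server; do
    "$SERVICE_CMD" "$comp" setup -f "$CONFIG_FILE"
    "$SERVICE_CMD" "$comp" restart
  done
}

# Invoke on the relevant node(s) after restarting ZooKeeper:
#   reconfigure_components
```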

Maintenance considerations

You can perform ZooKeeper maintenance on a fully functioning ensemble with no
downtime if you perform it on a single node at a time. By making sure that only
one ZooKeeper node is down at any one time, you ensure that there is always a
quorum of voter nodes available to elect a leader.

Maintenance across multiple data centers

When working with multiple Data Centers, remember that ZooKeeper does not
distinguish between Data Centers: it views all of the ZooKeeper nodes across all
Data Centers as a single ensemble.

The location of the voter nodes in a given Data Center is not a factor when
ZooKeeper performs quorum calculations. Individual nodes can go down across
Data Centers, but as long as a quorum is preserved across the entire ensemble,
ZooKeeper remains functional.

Maintenance implications
At various times, you will have to take a ZooKeeper node down for maintenance,
either a voter node or an observer node. For example, you might have to upgrade the
version of Edge on the node, the machine hosting ZooKeeper might fail, or the node
might become unavailable for some other reason such as a network error.

If the node that goes down is an observer node, you can expect a slight degradation
in the performance of the ZooKeeper ensemble until the node is restored. If the node
is a voter node, it can impact the viability of the ZooKeeper ensemble due to the loss
of a node that participates in the leader election process. Regardless of the reason
for the voter node going down, it is important to maintain a quorum of available voter
nodes.

Maintenance procedure

Perform maintenance procedures only after ensuring that the ZooKeeper ensemble is
functional. This assumes that observer nodes are functional and that there are
enough voter nodes available during maintenance to retain a quorum.

When these conditions are met, a ZooKeeper ensemble of arbitrary size can tolerate
the loss of a single node at any point with no loss of data or meaningful impact to
performance. This means you are free to perform maintenance on any node in the
ensemble as long as it is on one node at a time.

As part of performing maintenance, use the following procedure to determine the
type of a ZooKeeper node (leader, voter, or observer):

1. If nc is not installed on the ZooKeeper node, install it:
sudo yum install nc
2. Run the following nc command on the node, where 2181 is the ZooKeeper
port:
echo stat | nc localhost 2181
You should see output of the form:
Zookeeper version: 3.4.5-1392090,
built on 09/30/2012 17:52 GMT
Clients: /a.b.c.d:xxxx[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0xc00000044
Mode: follower
Node count: 653
3. In the Mode line of the output, you should see observer, leader, or
follower (meaning a voter that is not the leader), depending on the node
configuration.
Note: In a standalone installation of Edge with a single ZooKeeper node, the Mode is set
to standalone.
4. Repeat steps 1 and 2 on each ZooKeeper node.
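The per-node check can be scripted by extracting the Mode line from the stat output of each node. A sketch; ZK_NODES is an assumed, space-separated list of your ZooKeeper hosts:

```shell
# Print leader/follower/observer (or standalone) parsed from "stat" output.
zk_mode() {
  awk -F': ' '/^Mode:/ { print $2 }'
}

# Query each configured node; does nothing if ZK_NODES is unset.
for host in ${ZK_NODES:-}; do
  printf '%s: %s\n' "$host" "$(echo stat | nc "$host" 2181 | zk_mode)"
done
```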

Summary

The best way to perform maintenance on a ZooKeeper ensemble is to perform it one
node at a time. Remember:

● You must maintain a quorum of voter nodes during maintenance to ensure
the ZooKeeper ensemble stays functional.
● Taking down an observer node does not affect the quorum or the ability to
elect a leader.
● The quorum is calculated across all ZooKeeper nodes in all Data Centers.
● Proceed with maintenance on the next server only after the prior server is
operational again.
● Use the nc command to inspect the ZooKeeper node.

Apache ZooKeeper maintenance tasks


Four-letter commands
Apache ZooKeeper has a number of "four-letter commands" that can be helpful in
determining the current status of ZooKeeper voter and observer nodes. These
commands can be invoked using nc, telnet or another utility that has the ability to
send commands to a specific port. Details on the four-letter commands can be
found in the Apache ZooKeeper commands reference.

Removing old snapshot files


Apache ZooKeeper automatically performs periodic maintenance to remove old
snapshot files which accumulate as updates to the system are made. The following
settings in /opt/apigee/apigee-zookeeper/conf/zoo.cfg control this
process:

## The number of snapshots to retain in dataDir:
autopurge.snapRetainCount=5
# Purge task interval in hours.
# Set to "0" to disable auto purge feature.
autopurge.purgeInterval=120

To set these properties to different values:

1. Edit /opt/apigee/customer/application/zookeeper.properties
to set the following properties. If that file does not exist, create it.
2. Set the following properties in zookeeper.properties:
# Set the snapshot count. In this example, set it to 10:
conf_zoo_autopurge.snapretaincount=10
# Set the purge interval. In this example, set it to 240 hours:
conf_zoo_autopurge.purgeinterval=240
3. Make sure the file is owned by the "apigee" user:
chown apigee:apigee
/opt/apigee/customer/application/zookeeper.properties
4. Restart ZooKeeper:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart
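The property edits above can be captured in a small script. A sketch; the values (10 snapshots, 240 hours) are the same examples used above, PROPS defaults to the documented file path, and write_purge_overrides is a hypothetical helper name:

```shell
PROPS="${PROPS:-/opt/apigee/customer/application/zookeeper.properties}"

write_purge_overrides() {
  # Append the autopurge overrides; creates the file if it does not exist.
  cat >> "$PROPS" <<'EOF'
conf_zoo_autopurge.snapretaincount=10
conf_zoo_autopurge.purgeinterval=240
EOF
  # Ownership fix from the procedure above; ignored if the user cannot chown.
  chown apigee:apigee "$PROPS" 2>/dev/null || true
}

# After running write_purge_overrides, restart ZooKeeper:
#   /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart
```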

Log file maintenance


Apache ZooKeeper log files are kept in /opt/apigee/var/log/apache-zookeeper.
Normally, log file maintenance should not be required, but if you find that there
are an excessive number of ZooKeeper logs or that the logs are very large, you can
modify ZooKeeper's log4j properties to set the maximum file size and file count.

1. Edit /opt/apigee/customer/application/zookeeper.properties
to set the following properties. If that file does not exist, create it.
2. Set the following properties in zookeeper.properties:
conf_log4j_log4j.appender.rollingfile.maxfilesize=10MB # max file size
conf_log4j_log4j.appender.rollingfile.maxbackupindex=50 # max number of backup files
3. Make sure the file is owned by the "apigee" user:
chown apigee:apigee
/opt/apigee/customer/application/zookeeper.properties
4. Restart ZooKeeper:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart

OpenLDAP maintenance tasks


Log file location
OpenLDAP log files are contained in the directory /opt/apigee/var/log. These
files can be periodically archived and removed in order to ensure that they do not
take up excessive disk space. Information on maintaining, archiving and removing
OpenLDAP logs can be found in Section 19.2 of the OpenLDAP manual at
http://www.openldap.org/doc/admin24/maintenance.html.

Manually set a user's password


Users can request a new Edge password in the Edge UI. The user then receives an
email with information about setting a password. However, if your SMTP server is
down, or the user cannot receive email for any reason, you can manually set the
user's password by using OpenLDAP commands.

To set a user's password:

1. Use ldapsearch to download user information:
ldapsearch -w ldapAdminPWord -D "cn=manager,dc=apigee,dc=com" -b
"dc=apigee,dc=com" -LLL -h LDAP_IP -p 10389 > ldap.txt
2. Search the ldap.txt file for the user's email address. You should see a block in
the form:
dn: uid=29383a67-9279-4aa8-a75b-cfbf901578fc,ou=users,ou=global,dc=apigee,dc=com
mail: foo@bar.com
userPassword:: e1NTSEF9a01UUDdSd01BYXRuUURXdXN5OWNPRzBEWWlYZFBRTm14MHlNVWc9PQ==
uid: 29383a67-9279-4aa8-a75b-cfbf901578fc
3. Use ldappasswd to set the user's password based on the user's uid. You are
prompted for the OpenLDAP admin password:
ldappasswd -h LDAP_IP -p 10389 -D "cn=manager,dc=apigee,dc=com" -W -s
newPassWord \
"uid=29383a67-9279-4aa8-a75b-cfbf901578fc,ou=users,ou=global,dc=apigee,dc=com"
The user can now log in by using newPassWord.
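The search-and-set steps can be combined by extracting the user's DN from the ldapsearch dump programmatically. A sketch; dn_for_mail is a hypothetical helper, and the usage comment reuses the placeholders (LDAP_IP, newPassWord) from above:

```shell
# Print the dn of the LDIF record whose mail attribute matches the argument.
dn_for_mail() {
  awk -v m="mail: $1" 'BEGIN { RS=""; FS="\n" }
    index($0, m) {
      for (i = 1; i <= NF; i++)
        if ($i ~ /^dn: /) { sub(/^dn: /, "", $i); print $i; exit }
    }'
}

# Hypothetical usage against the dump from step 1 (not run here):
#   DN=$(dn_for_mail foo@bar.com < ldap.txt)
#   ldappasswd -h LDAP_IP -p 10389 -D "cn=manager,dc=apigee,dc=com" \
#     -W -s newPassWord "$DN"
```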

Manually set OpenLDAP system password


Resetting Edge passwords describes how to change the OpenLDAP system
password but requires that you know the existing password. If you have lost that
password, you can use the following procedure to reset it.

1. Use slappasswd to create the SSHA-encrypted hash for a new password:
slappasswd -h {SSHA} -s newPassWord
This command returns a string in the form:
{SSHA}+DOup9d6l+czfWzkIvajwYPArjPurhS6
2. Open the
/opt/apigee/data/apigee-openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif
file in an editor:
vi /opt/apigee/data/apigee-openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif
3. Find the line in the form:
olcRootPW:: OldPasswordString
4. Replace OldPasswordString with the string returned from slappasswd. If
there are two colons after olcRootPW, remove one and ensure there is a space
after the colon:
olcRootPW: {SSHA}RGon+bLCe+Sk+HyHholFBj8ONQfabrhw
5. Restart OpenLDAP:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap restart
6. Use ldapsearch to check that your new password works. You are prompted for
the OpenLDAP admin password:
ldapsearch -W -D "cn=manager,dc=apigee,dc=com" -b "dc=apigee,dc=com" -LLL
-h LDAP_IP -p 10389
7. Repeat these steps on any other OpenLDAP servers that are being used for
replication.
8. Update the Management Server to use the new password:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server
store_ldap_credentials -p newPassWord
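The olcRootPW edit can also be done non-interactively with sed. A sketch; set_olc_root_pw is a hypothetical helper, LDIF defaults to the documented path, and you should back up the file before editing:

```shell
LDIF="${LDIF:-/opt/apigee/data/apigee-openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif}"

set_olc_root_pw() {
  # $1: hash printed by: slappasswd -h {SSHA} -s newPassWord
  # Normalizes "olcRootPW::" (base64 form) to "olcRootPW:" with the new hash.
  sed -i "s|^olcRootPW:* .*|olcRootPW: $1|" "$LDIF"
}

# Hypothetical usage (not run here):
#   set_olc_root_pw "$(slappasswd -h '{SSHA}' -s newPassWord)"
#   /opt/apigee/apigee-service/bin/apigee-service apigee-openldap restart
```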

Manually set Edge admin password


Resetting Edge Passwords describes how to change the Edge system password but
requires that you know the existing password. If you have lost the Edge system
password, you can use the following procedure to reset it.

1. On the UI node, stop the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui stop
2. Use ldappasswd to set the Edge sys admin password. You are prompted for the
OpenLDAP admin password:
ldappasswd -h localhost -p 10389 -D "cn=manager,dc=apigee,dc=com" -W -s
newPassWord \
"uid=admin,ou=users,ou=global,dc=apigee,dc=com"
3. Update the config file that you used to install the Edge UI with the new Edge
system password:
APIGEE_ADMINPW=newPassWord
4. Configure and restart the Edge UI:
/opt/apigee/apigee-setup/bin/setup.sh -p ui -f configFile
5. (Only if TLS is enabled on the UI) Re-enable TLS on the Edge UI as described
in Configuring TLS for the management UI.

Delete SLAPD lock file


If you get an error when trying to start OpenLDAP that the slapd.pid lock file
exists, you can delete the file.

The file is located in /opt/apigee/apigee-openldap/var/run/slapd.pid.

Delete the file and try to restart OpenLDAP:

/opt/apigee/apigee-service/bin/apigee-service apigee-openldap restart

If OpenLDAP does not start, try starting it in debug mode and check for errors:

slapd -h ldap://:10389/ -u apigee -d 255 -F /opt/apigee/data/apigee-openldap/slapd.d

Errors may point to resource, memory, or CPU utilization issues.

Modifying OpenLDAP replication


This section explains how to modify OpenLDAP replication.

Perform the steps in the following procedure on the OpenLDAP replicator node,
which replicates its data to the other OpenLDAP node. For example, if you are setting
replication from node1 to node2, run the commands on node1.

1. Check the present state:
ldapsearch -H ldap://{HOST}:{PORT} -LLL -x -b "cn=config" -D
"cn=admin,cn=config" -w {PASSWORD} -o ldif-wrap=no 'olcSyncRepl' | grep
olcSyncrepl
The output should be similar to the following:
olcSyncrepl: {0}rid=001 provider=ldap://{HOST}:{PORT}/
binddn="cn=manager,dc=apigee,dc=com" bindmethod=simple
credentials={PASSWORD} searchbase="dc=apigee,dc=com" attrs="*,+"
type=refreshAndPersist retry="60 1 300 12 7200 +" timeout=1
2. Create a file repl.ldif and paste the following commands into the file:
dn: olcDatabase={2}bdb,cn=config
changetype: modify
replace: olcSyncRepl
olcSyncRepl: rid=001
provider=ldap://{NEW_HOST}:{PORT}/
binddn="cn=manager,dc=apigee,dc=com"
bindmethod=simple
credentials={PASSWORD}
searchbase="dc=apigee,dc=com"
attrs="*,+"
type=refreshAndPersist
retry="60 1 300 12 7200 +"
timeout=1
3. Make sure you replace the following placeholders with appropriate values:
● {NEW_HOST}: The new OpenLDAP host to which you plan to replicate.
● {PORT}: The OpenLDAP port. The default port is 10389.
● {PASSWORD}: The OpenLDAP password.
4. Run the ldapmodify command:
ldapmodify -x -w {PASSWORD} -D "cn=admin,cn=config" -H "ldap://{HOST}:{PORT}/" -f
repl.ldif
The output should be similar to the following:
modifying entry "olcDatabase={2}bdb,cn=config"
5. Verify replication:
ldapsearch -H ldap://{HOST}:{PORT} -LLL -x -b "cn=config" -D
"cn=admin,cn=config" -w {PASSWORD} -o ldif-wrap=no 'olcSyncRepl' | grep
olcSyncrepl
The output should be similar to the following:
olcSyncrepl: {0}rid=001 provider=ldap://{NEW_HOST}:{PORT}/
binddn="cn=manager,dc=apigee,dc=com" bindmethod=simple
credentials={PASSWORD} searchbase="dc=apigee,dc=com" attrs="*,+"
type=refreshAndPersist retry="60 1 300 12 7200 +" timeout=1
6. You can verify that replication is working correctly by reading the contextCSN
value from each server and ensuring that the values match:
ldapsearch -w {PASSWORD} -D "cn=manager,dc=apigee,dc=com" -b
"dc=apigee,dc=com" -LLL -h localhost -p 10389 contextCSN | grep contextCSN
Troubleshooting OpenLDAP replication problems


If your installation uses multiple OpenLDAP servers, you can check the replication
settings to ensure that the servers are functioning properly.

1. Ensure that ldapsearch returns data from each OpenLDAP server. You are
prompted for the OpenLDAP admin password:
ldapsearch -W -D "cn=manager,dc=apigee,dc=com" -b "dc=apigee,dc=com" -LLL
-h LDAP_IP -p 10389
2. Check the replication configuration by examining the
/opt/apigee/conf/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif file.
3. Make sure the system password is the same on each OpenLDAP server.
4. Check iptables and TCP wrapper settings.

Managing LDAP resources


When using the LDAP policy for authentication or DN (Distinguished Name) queries,
the policy uses an Apigee-side LDAP resource that contains the connection details
to your LDAP provider. This section describes how to create and manage LDAP
resources via the API.

Create an LDAP resource


Following is the API for creating an LDAP resource:

/v1/organizations/org_name/environments/environment_name/ldapresources

Following is an annotated XML payload that describes the LDAP resource
configuration you'll send to create the resource:

<LdapResource name="ldap1">
<Connection>
<Hosts>
<Host port="636">foo.com</Host> <!-- port is optional: defaults to 389 for ldap://
and 636 for ldaps:// -->
</Hosts>
<SSLEnabled>false</SSLEnabled> <!-- optional, defaults to false -->
<Version>3</Version> <!-- optional, defaults to 3-->
<Authentication>simple</Authentication> <!-- optional, only simple supported -->
<ConnectionProvider>jndi|unboundid</ConnectionProvider> <!-- required -->
<ServerSetType>single|round robin|failover</ServerSetType> <!-- not applicable for
jndi -->
<LdapConnectorClass>com.custom.ldap.MyProvider</LdapConnectorClass> <!-- If
using a custom LDAP provider, the fully qualified class -->
</Connection>
<ConnectPool enabled="true"> <!-- enabled is optional, defaults to true -->
<Timeout>30000</Timeout> <!-- optional, in milliseconds; if not set, no timeout -->
<Maxsize>50</Maxsize> <!-- optional; if not set, no max connections -->
<Prefsize>30</Prefsize> <!-- optional; if not set, no pref size -->
<Initsize></Initsize> <!-- optional; if not set, defaults to 1 -->
<Protocol></Protocol> <!-- optional; if not set, defaults to 'ssl plain' -->
</ConnectPool>
<Admin>
<DN>cn=admin,dc=apigee,dc=com</DN>
<Password>secret</Password>
</Admin>
</LdapResource>

Example

The following example creates an LDAP resource named ldap1:

curl -X POST -H "Content-Type: application/xml" \
  https://api.enterprise.apigee.com/v1/organizations/myorg/environments/test/ldapresources \
  -u apigee_email:password -d \
  '<LdapResource name="ldap1">
    <Connection>
      <Hosts>
        <Host>foo.com</Host>
      </Hosts>
      <SSLEnabled>false</SSLEnabled>
      <Version>3</Version>
      <Authentication>simple</Authentication>
      <ConnectionProvider>unboundid</ConnectionProvider>
      <ServerSetType>round robin</ServerSetType>
    </Connection>
    <ConnectPool enabled="true">
      <Timeout>30000</Timeout>
      <Maxsize>50</Maxsize>
      <Prefsize>30</Prefsize>
      <Initsize></Initsize>
      <Protocol></Protocol>
    </ConnectPool>
    <Admin>
      <DN>cn=admin,dc=apigee,dc=com</DN>
      <Password>secret</Password>
    </Admin>
  </LdapResource>'

List all LDAP Resources

curl https://api.enterprise.apigee.com/v1/organizations/myorg/environments/test/ldapresources \
  -u apigee_email:password

Get the Details of an LDAP Resource

curl https://api.enterprise.apigee.com/v1/organizations/myorg/environments/test/ldapresources/ldap1 \
  -u apigee_email:password

Update an LDAP resource

curl -X POST -H "Content-Type: application/xml" \
  https://api.enterprise.apigee.com/v1/organizations/myorg/environments/test/ldapresources/ldap1 \
  -u apigee_email:password -d \
  '<LdapResource name="ldap1">
    <Connection>
      <Hosts>
        <Host>foo.com</Host>
      </Hosts>
      <SSLEnabled>false</SSLEnabled>
      <Version>3</Version>
      <Authentication>simple</Authentication>
      <ConnectionProvider>unboundid</ConnectionProvider>
      <ServerSetType>round robin</ServerSetType>
    </Connection>
    <ConnectPool enabled="true">
      <Timeout>50000</Timeout>
      <Maxsize>50</Maxsize>
      <Prefsize>30</Prefsize>
      <Initsize></Initsize>
      <Protocol></Protocol>
    </ConnectPool>
    <Admin>
      <DN>cn=admin,dc=apigee,dc=com</DN>
      <Password>secret</Password>
    </Admin>
  </LdapResource>'

Delete an LDAP Resource

curl -X DELETE \
  https://api.enterprise.apigee.com/v1/organizations/myorg/environments/test/ldapresources/ldap1 \
  -u apigee_email:password

Apigee mTLS Maintenance

This page describes Apigee mTLS maintenance tasks that need to be performed
regularly.

Rotating local certificates


Local certificates, which are installed on each Apigee host, need to be replaced with
new ones annually. This is called certificate rotation. There are two ways to rotate
certificates, depending on whether you are using a custom certificate authority, or a
certificate installed by Consul.

Rotating local certificates without a custom certificate authority (CA)

The simplest way to rotate certificates without a custom CA is to uninstall and
reinstall apigee-mtls. This removes all of the old certificates and generates fresh
certificates locally. You can do this with minimal downtime by performing the
following commands on each host, one at a time:

Note: This assumes the same silent.conf file that was used for the initial
installation is present.

1. Stop all core Apigee components:
   /opt/apigee/apigee-service/bin/apigee-all stop
   See Start/stop/check all components.
2. Stop apigee-mtls:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls stop
3. Uninstall apigee-mtls:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls uninstall
4. Reinstall apigee-mtls:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls install
5. Run apigee-mtls setup:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls setup -f /opt/silent.conf
6. Restart apigee-mtls:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls start
7. Restart all core Apigee components:
   /opt/apigee/apigee-service/bin/apigee-all start
   See Start/stop/check all components.
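For review, the per-host sequence above can be sketched as a dry-run script that prints each command in order rather than executing it. This is only a sketch; it assumes the silent.conf path from the note above (/opt/silent.conf), so adjust as needed before removing the echoes.

```shell
# Dry-run of the certificate-rotation sequence: prints each command in
# order instead of executing it, so you can review before running on a host.
SVC=/opt/apigee/apigee-service/bin/apigee-service
ALL=/opt/apigee/apigee-service/bin/apigee-all

rotation_steps() {
  echo "$ALL stop"                                    # 1. stop core components
  echo "$SVC apigee-mtls stop"                        # 2. stop apigee-mtls
  echo "$SVC apigee-mtls uninstall"                   # 3. remove old certificates
  echo "$SVC apigee-mtls install"                     # 4. reinstall apigee-mtls
  echo "$SVC apigee-mtls setup -f /opt/silent.conf"   # 5. regenerate certificates
  echo "$SVC apigee-mtls start"                       # 6. restart apigee-mtls
  echo "$ALL start"                                   # 7. restart core components
}

rotation_steps
```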

Rotating local certificates with a custom certificate authority (CA)

To rotate local certificates with a custom CA, do the following steps:


1. Follow the steps in Use a custom certificate to generate the new certificates
   you'll be using.
2. Stop all core Apigee components:
   /opt/apigee/apigee-service/bin/apigee-all stop
   See Start/stop/check all components.
3. Stop apigee-mtls:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls stop
4. Remove the old local cert files and mTLS data:
   rm -f /opt/apigee/apigee-mtls/certs/local_cert.pem
   rm -f /opt/apigee/apigee-mtls/certs/local_key.pem
   rm -f /opt/apigee/apigee-mtls/source/certs/local_cert.pem
   rm -f /opt/apigee/apigee-mtls/source/certs/local_key.pem
   rm -rf /opt/apigee/data/apigee-mtls
5. Copy the new cert/key pair generated in the first step into the following
   locations, and update permissions:
   cp ${new_cert} /opt/apigee/apigee-mtls/certs/local_cert.pem

   chmod \
     --reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
     /opt/apigee/apigee-mtls/certs/local_cert.pem

   chown \
     --reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
     /opt/apigee/apigee-mtls/certs/local_cert.pem

   cp ${new_cert} /opt/apigee/apigee-mtls/source/certs/local_cert.pem

   chmod \
     --reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
     /opt/apigee/apigee-mtls/source/certs/local_cert.pem

   chown \
     --reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
     /opt/apigee/apigee-mtls/source/certs/local_cert.pem

   cp ${new_key} /opt/apigee/apigee-mtls/certs/local_key.pem

   chmod \
     --reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
     /opt/apigee/apigee-mtls/certs/local_key.pem

   chown \
     --reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
     /opt/apigee/apigee-mtls/certs/local_key.pem

   cp ${new_key} /opt/apigee/apigee-mtls/source/certs/local_key.pem

   chmod \
     --reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
     /opt/apigee/apigee-mtls/source/certs/local_key.pem

   chown \
     --reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
     /opt/apigee/apigee-mtls/source/certs/local_key.pem
6. Restart apigee-mtls:
   /opt/apigee/apigee-service/bin/apigee-service apigee-mtls start
7. Restart all core Apigee components:
   /opt/apigee/apigee-service/bin/apigee-all start
   See Start/stop/check all components.

Recurring analytics services maintenance tasks

Many Apigee Analytics Services tasks can be performed using standard Postgres
utilities. The routine maintenance tasks you would perform on the Analytics
database, such as database reorganization using VACUUM, reindexing, and log file
maintenance, are the same as those you would perform on any PostgreSQL
database. Information on routine Postgres maintenance can be found at
http://www.postgresql.org/docs/9.1/static/maintenance.html.

NOTE: Postgres autovacuum is enabled by default, so you do not have to explicitly run the
VACUUM command. Autovacuum reclaims storage by removing obsolete data from the
PostgreSQL database.

NOTE: Apigee does not recommend moving the PostgreSQL database without contacting
Apigee Customer Success. The Apigee system uses the PostgreSQL database server's IP
address. As a result, moving the database or changing its IP address without performing
corresponding updates on the Apigee environment metadata will cause undesirable results.

NOTE: No pruning is required for Cassandra. Expired tokens are automatically purged after 180
days.

Pruning Analytics Data

As the amount of analytics data available within the Apigee repository increases, you
may find it desirable to "prune" the data beyond your required retention interval. To
prune data for a specific organization and environment, run the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-data-purge
org_name env_name number_of_days_to_retain [Delete-from-parent-fact - N/Y]
[Confirm-delete-from-parent-fact - N/Y]

The script has the following options:

● Delete-from-parent-fact. Default: No. Also deletes data older than the retention
  period from the parent fact table.
● Confirm-delete-from-parent-fact. Default: No. If No, the script prompts for
  confirmation before deleting data from the parent fact table. Set to Yes if the
  purge script is automated.

This command interrogates the "childfactables" table in the "analytics" schema to
determine which raw data partitions cover the dates for which data pruning is to be
performed, and then drops those tables. Once the tables are dropped, the entries in
"childfactables" related to those partitions are deleted.

Childfactables are daily-partitioned fact data. New partitions are created every day,
and data is ingested into the daily partitioned tables. Later, when the old fact data is
no longer required, you can purge the corresponding childfactables.

These options are available since version 4.51.00.00.

NOTE: If you have configured master-standby replication between Postgres servers, then you
only need to run this script on the Postgres master.
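As a concrete illustration, the following sketch assembles the purge invocation for a hypothetical organization and environment ("myorg", "prod") with a 90-day retention. The argument values are examples only; the helper prints the command rather than running it.

```shell
# Builds (but does not run) the pg-data-purge command line for a given
# org, environment, and retention window. All argument values are examples.
purge_cmd() {
  local org="$1" env="$2" days="$3"
  echo "/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-data-purge $org $env $days"
}

purge_cmd myorg prod 90
```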

Moving Apigee servers


Moving components from one machine to another may cause a configuration
mismatch if you do not keep the IP addresses in your component configuration files
in sync.

This section describes how to diagnose and fix configuration mismatches.

IP addresses versus host names


You should use IP addresses rather than host names in your component
configuration files.

While some component configuration files allow you to use host names rather than
IP addresses, using host names can complicate troubleshooting. For example, host
names can be the source of issues related to DNS server connectivity, lookup
failures, and synchronization.

As a result, Apigee strongly recommends using IP addresses for all component
configurations. In some cases, such as with Cassandra, you must use IP addresses
and cannot use host names. Most examples in the documentation use IP addresses
for component configuration.

For host names and IP addresses, consider the implications of the following
scenarios when moving Apigee servers:

Scenario: Change in IP address
Impact on moving servers: Update all related files that reference the original IP address

Scenario: Hostname change without change in IP address
Impact on moving servers: No impact

Scenario: Hostname change with change in IP address
Impact on moving servers: Same as a change in IP address

Changing the IP Address of a Cassandra Node


To change the IP address of a Cassandra node, perform the following steps:

For configurations with a single Cassandra node


1. Edit /opt/apigee/customer/application/cassandra.properties
   on the system being modified. If the file does not exist, create it.
2. Change the following parameters:
   ● Set the conf_cassandra_seeds and conf_cassandra_listen_address
     parameters to specify the system's new IP address.
   ● Change the conf_cassandra_rpc_address to use either the new IP
     address or 0.0.0.0 (which allows Cassandra Thrift to listen on all
     interfaces).
3. Open /opt/apigee/apigee-cassandra/conf/cassandra-topology.properties
   in an editor. You should see the old IP address and default setting in the form:
   192.168.56.101=dc-1:ra-1
   default=dc-1:ra-1
4. Save that information.
5. Edit /opt/apigee/customer/application/cassandra.properties
   to change the old IP address to the new IP address:
   conf_cassandra-topology_topology=192.168.56.103=dc-1:ra-1\ndefault=dc-1:ra-1\n
6. Ensure that you insert "\n" after the IP address, and specify the same default
   settings as you found in Step 3.
7. Restart Cassandra:
   /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
8. If necessary, also repair ZooKeeper (see below); otherwise restart every Apigee
   platform component, starting with the Management Server.

For configurations with multiple Cassandra nodes (ring)


1. If the node being changed is a seed node, edit the
   /opt/apigee/customer/application/cassandra.properties file
   on each system in the ring, and change the conf_cassandra_seeds
   parameter to include the modified system's new IP address. If the
   cassandra.properties file does not exist, create it.
2. Edit /opt/apigee/customer/application/cassandra.properties
   on the system being modified, and change the following parameters:
   ● Set conf_cassandra_listen_address to use the new IP address.
   ● Set conf_cassandra_rpc_address to use either the new IP
     address or "0.0.0.0" (which allows Cassandra Thrift to listen on all
     interfaces).
3. Open /opt/apigee/apigee-cassandra/conf/cassandra-topology.properties
   in an editor. You should see all Cassandra IP addresses and the default setting
   in the form:
   192.168.56.101=dc-1:ra-1
   192.168.56.102=dc-1:ra-1
   192.168.56.103=dc-1:ra-1
   default=dc-1:ra-1
4. Save that information.
5. Edit /opt/apigee/customer/application/cassandra.properties
   to change the old IP address to the new IP address:
   conf_cassandra-topology_topology=192.168.56.101=dc-1:ra-1\n192.168.56.102=dc-1:ra-1\n192.168.56.104=dc-1:ra-1\ndefault=dc-1:ra-1\n
6. Ensure that you insert "\n" after each IP address, and use the same default
   settings as you recorded in Step 3.
7. Restart Cassandra on the modified system. If the modified system is a seed
   node, also restart each system that used the modified seed node:
   /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
8. Run the nodetool ring command on the modified node to ensure that the
   ring is complete. The utility can be found at /opt/apigee/apigee-cassandra/bin:
   nodetool [-u username -pw password] -h localhost ring
   You only need to pass your username and password if you enabled JMX
   authentication for Cassandra.
9. Run nodetool repair on the modified node. This process may take some
   time, so it is highly recommended that it not be done during peak API traffic
   hours:
   nodetool [-u username -pw password] -h localhost repair -pr
10. If necessary, repair ZooKeeper (see below), then restart every Apigee platform
    component, beginning with the Management Server.
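Because the topology string is easy to mistype, a small helper can generate the conf_cassandra-topology_topology value from a list of node IPs. This is a sketch: the IPs are the examples from Step 5, and it assumes the default dc-1/ra-1 placement shown above.

```shell
# Emits the conf_cassandra-topology_topology property value for a list of
# node IPs, inserting a literal "\n" after every entry as the steps require.
build_topology() {
  local out="" ip
  for ip in "$@"; do
    out="${out}${ip}=dc-1:ra-1\n"        # literal backslash-n, not a newline
  done
  # Append the default entry, also terminated by a literal "\n"
  printf 'conf_cassandra-topology_topology=%sdefault=dc-1:ra-1\\n\n' "$out"
}

build_topology 192.168.56.101 192.168.56.102 192.168.56.104
```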

Update datastore registrations


1. Find the UUIDs of datastore registrations that specify the old IP address by
   using the commands below. Take note of the "type" and "UUID" parameters:
   ● curl -u ADMINEMAIL:PW "http://$MSIP:$port/v1/servers?pod=central&region=DC" | egrep -i 'type|internalip|uuid|region'
   ● curl -u ADMINEMAIL:PW "http://$MSIP:$port/v1/servers?pod=gateway&region=DC" | egrep -i 'type|internalip|uuid|region'
   ● curl -u ADMINEMAIL:PW "http://$MSIP:$port/v1/servers?pod=analytics&region=DC" | egrep -i 'type|internalip|uuid|region'
   ● Where DC is the data center name. In a single data center installation,
     the value is typically "dc-1".
2. Register the new IP addresses using one of the commands below. The
   command needed depends on the type of the changed node. Note: The
   REGION parameter below refers to the datacenter that the cluster is in. For example, for
   high availability you would generally have a cluster in dc-1 (Data Center 1) and a cluster
   in dc-2 (Data Center 2). This parameter is defined at installation time. The default value is
   dc-1.
   ● For type="application-datastore":
     curl -u ADMINEMAIL:PW "http://MSIP:port/v1/servers" -d \
     "Type=application-datastore&Type=audit-datastore&InternalIP=NEWIP&region=REGION&pod=central" \
     -H 'content-type: application/x-www-form-urlencoded' -X POST
   ● For type="kms-datastore":
     curl -u ADMINEMAIL:PW "http://MSIP:port/v1/servers" -d \
     "Type=kms-datastore&Type=dc-datastore&Type=keyvaluemap-datastore&Type=counter-datastore&Type=cache-datastore&InternalIP=NEWIP&region=REGION&pod=GATEWAY_POD" \
     -H 'content-type: application/x-www-form-urlencoded' -X POST
   ● For type="reportcrud-datastore":
     curl -u ADMINEMAIL:PW "http://MSIP:port/v1/servers" -d \
     "Type=reportcrud-datastore&InternalIP=NEW_IP&region=REGION&pod=analytics" \
     -H 'content-type: application/x-www-form-urlencoded' -X POST
3. Delete the old registrations for the UUID of the system whose IP address
   changed. For each of these UUIDs, issue:
   curl -u ADMINEMAIL:PW "http://MSIP:port/v1/servers/OLD_UUID" -X DELETE

Changing the IP address of a ZooKeeper node


Follow the steps below to change the IP address of a ZooKeeper node:

Change the IP address and restart the ZooKeeper ensemble (for multi-node
ensemble configurations only)

1. Open /opt/apigee/apigee-zookeeper/conf/zoo.cfg in an editor.
   You should see all ZooKeeper IP addresses and default settings in the form:
   server.1=192.168.56.101:2888:3888
   server.2=192.168.56.102:2888:3888
   server.3=192.168.56.103:2888:3888
2. Save that information.
3. On each ZooKeeper node, edit the file
   /opt/apigee/customer/application/zookeeper.properties to
   set the conf_zoo_quorum property to the correct IP addresses. If the file
   does not exist, create it:
   conf_zoo_quorum=server.1=192.168.56.101:2888:3888\nserver.2=192.168.56.102:2888:3888\nserver.3=192.168.56.104:2888:3888\n
4. Ensure that you insert "\n" after each IP address and that entries are in the
   same order on every node.
5. Find the leader of the ZooKeeper ensemble by using the following command
   (replace node with the IP address of the ZooKeeper machine):
   echo srvr | nc node 2181
   The Mode line in the output should say "leader".
6. Restart the ZooKeeper nodes one after the other, starting with the leader and
   ending with the node on which the IP address was changed. If more than one
   ZooKeeper node changed IP addresses, it may be necessary to restart all nodes:
   /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart
7. Use the echo command described above to verify each ZooKeeper node.
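To spot the leader without reading the full srvr output by eye, you can filter for the Mode line. The sketch below runs against a canned sample for demonstration; on a live node you would instead pipe `echo srvr | nc node 2181` into zk_mode.

```shell
# Extracts the Mode value ("leader", "follower", or "standalone") from
# ZooKeeper srvr output read on stdin.
zk_mode() {
  awk -F': ' '/^Mode/ { print $2 }'
}

# Canned sample of srvr output, for demonstration only
zk_mode <<'EOF'
Zookeeper version: 3.4.14, built on 03/06/2019
Latency min/avg/max: 0/1/12
Mode: leader
Node count: 120
EOF
```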

Inform the Apigee nodes of the changed configuration


1. On each Router node, edit the file
/opt/apigee/customer/application/router.properties as
follows. If the file does not exist, create it.
● Change the conf_zookeeper_connection.string parameter to
include the new IP address
● Change the conf_zookeeper_zk1.host parameter to include the
new IP address
2. On every Message Processor node, edit the file
/opt/apigee/customer/application/message-
processor.properties as follows. If the file does not exist, create it.
● Change the conf_zookeeper_connection.string parameter to
include the new IP address
● Change the conf_zookeeper_zk1.host parameter to include the
new IP address
3. On the Management Server node, edit the file
/opt/apigee/customer/application/management-
server.properties as follows. If the file does not exist, create it.
● Change the conf_zookeeper_connection.string parameter to
include the new IP address
● Change the conf_zookeeper_zk1.host parameter to include the
new IP address
4. Restart all Apigee platform components by running the following command on
   each node:
   /opt/apigee/apigee-service/bin/apigee-all restart
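For instance, after moving a ZooKeeper node from 192.168.56.103 to 192.168.56.104, each of the files above would carry entries along these lines. This is an illustrative sketch only: the IPs are examples, and the exact property values present in your files may differ from this minimal fragment.

```
# /opt/apigee/customer/application/router.properties (the same idea applies
# to message-processor.properties and management-server.properties)
conf_zookeeper_connection.string=192.168.56.101:2181,192.168.56.102:2181,192.168.56.104:2181
conf_zookeeper_zk1.host=192.168.56.104
```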

Changing the IP address of an LDAP server (OpenLDAP)


To change the IP address of an OpenLDAP node, do the following:

1. On the Management Server node, edit the file
   /opt/apigee/customer/application/management-server.properties.
   If the file does not exist, create it.
2. In the management-server.properties file, set the
   conf_security_ldap.server.host parameter to the new IP address.
3. Restart the Management Server:
   /opt/apigee/apigee-service/bin/apigee-service edge-management-server restart

Changing the IP address of other Apigee node types


To change the IP address of any of these node types (Router, Message Processor,
Postgres Server (not postgresql), and Qpid Server (not qpidd)):

1. Use the following curl commands to register the new external and internal IP
   addresses:
   curl -u ADMINEMAIL:PW -X PUT \
     http://MSIP:8080/v1/servers/uuid -d ExternalIP=ip
   curl -u ADMINEMAIL:PW -X PUT \
     http://MSIP:8080/v1/servers/uuid -d InternalIP=ip
   Where uuid is the UUID of the node.
   For information on how to get a component's UUID, see Get UUIDs.

Handling a PostgreSQL database failover

Perform the following steps during a PostgreSQL database failover:

1. Stop apigee-postgresql on the current master if it is still running:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop
2. Go to the standby node and invoke the following command to make it the
   master:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql promote-standby-to-master IPorDNSofOldMaster

If the old master is restored at some point in the future, make it a standby node:

1. On the current master, edit the config file to set:
   PG_MASTER=IPorDNSofNewMaster
   PG_STANDBY=IPorDNSofOldMaster
   Note: Apigee strongly recommends that you use IP addresses rather than host names for
   the PG_MASTER and PG_STANDBY properties in your silent configuration file. In addition,
   you should be consistent on both nodes. If you use host names rather than IP addresses,
   you must be sure that the host names properly resolve using DNS.
2. Enable replication on the new master:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-master -f configFile
3. On the old master, edit the config file to set:
   PG_MASTER=IPorDNSofNewMaster
   PG_STANDBY=IPorDNSofOldMaster
4. Stop apigee-postgresql on the old master:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop
5. On the old master, clean out any old Postgres data:
   rm -rf /opt/apigee/data/apigee-postgresql/
   Note: If necessary, you can back up this data before deleting it.
6. Configure the old master as a standby:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-standby -f configFile
7. On completion of replication, verify the replication status by issuing the
   following commands on both servers. The system should display identical
   results on both servers to ensure a successful replication:
   a. On the master node, run:
      /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-master
      Verify that it says it is the master.
   b. On the standby node, run:
      /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-standby
      Verify that it says it is the standby.

Change Postgres database settings for monetization

In Edge for Private Cloud, you might need to change the Postgres database settings,
such as the host, username, or password (for example, to implement a Postgres
database failover). If you have Monetization Services enabled, you also need to
modify the Monetization system settings to reflect this change.

Use the following procedure to change Postgres database connection configurations
for Monetization Services.

Note: You can only use this procedure if you have monetization enabled.

1. Create a configuration file with the settings below. The file should be
   readable by the apigee user.
   # IP address of a ZooKeeper node
   ZK_HOSTS=XX.XX.XX.XX

   # Postgres admin user and password
   PG_USER=apigee
   PG_PWD=postgres

   # Postgres host IP
   MO_PG_HOST=XX.XX.XX.XX

   # Postgres user and password for monetization
   MO_PG_USER=postgre
   MO_PG_PASSWD=changeme

   # [OPTIONAL] Comma-separated list of org names for which monetization is enabled
   MO_ORG_NAMES=org1,org2
2. Run the following command on one of the Management Server nodes:
   $APIGEE_ROOT/apigee-service/bin/apigee-service edge-mint-management-server mint-pg-registration -f <path-to-config-file>
3. Restart all Management Server and Message Processor nodes:
   $APIGEE_ROOT/apigee-service/bin/apigee-service edge-management-server restart
   $APIGEE_ROOT/apigee-service/bin/apigee-service edge-message-processor restart

Organization and environment maintenance

This section covers various administrative operations, such as the creation,
management, and removal of Apigee organizations, environments, and virtual hosts
in an Apigee Edge for Private Cloud installation.

For an introduction to organizations, environments, and virtual hosts, see About
planets, regions, pods, organizations, environments and virtual hosts.

Checking Status of Users, Organization and Environment

Management Server plays a vital role in holding all other components together in an
on-premises installation of Edge Private Cloud. You can check user, organization,
and deployment status on the Management Server by issuing the following curl
commands:

curl -u adminEmail:admin_passwd http://localhost:8080/v1/users

curl -u adminEmail:admin_passwd http://localhost:8080/v1/organizations

curl -u adminEmail:admin_passwd http://localhost:8080/v1/organizations/orgname/deployments

The system should display a 200 HTTP status for all calls. If these fail, do the
following:

1. Check the Management Server logs at
   /opt/apigee/var/log/apigee/management-server for any errors.
2. Make a call against the Management Server to check whether it is functioning
   properly.
3. Remove the server from the ELB, then restart the Management Server:
   /opt/apigee/bin/apigee-service management-server restart

About using config files

The commands shown below take a config file as input. For example, you pass a
config file to the setup-org command to define all the properties of the organization,
including the environment and virtual host.

For a complete config file, and information on the properties that you can set in the
config file, see Onboard an organization.

About setting up a virtual host

A virtual host on Edge defines the domains and Edge Router ports on which an API
proxy is exposed, and, by extension, the URL that apps use to access an API proxy. A
virtual host also defines whether the API proxy is accessed by using the HTTP
protocol, or by the encrypted HTTPS protocol.

Use the scripts and API calls shown below to create a virtual host. When you create
the virtual host, you must specify the following information:

● The name of the virtual host that you use to reference it in your API proxies.
● The port on the Router for the virtual host. Typically these ports start at 9001
and increment by one for every new virtual host.
● The host alias of the virtual host. Typically the DNS name of the virtual host.
The Edge Router compares the Host header of the incoming request to the list
of host aliases as part of determining the API proxy that handles the request.
When making a request through a virtual host, either specify a domain name
that matches the host alias of a virtual host, or specify the IP address of the
Router and the Host header containing the host alias.

For example, if you created a virtual host with a host alias of myapis.apigee.net on
port 9001, then a curl request to an API through that virtual host could use one of
the following forms:

● If you have a DNS entry for myapis.apigee.net:
  curl http://myapis.apigee.net:9001/proxy-base-path/resource-path
● If you do not have a DNS entry for myapis.apigee.net:
  curl http://routerIP:9001/proxy-base-path/resource-path -H 'Host: myapis.apigee.net'
  In the second form, you specify the IP address of the Router, and pass the
  host alias in the Host header.
  Note: The curl command, most browsers, and many other utilities automatically append
  the Host header with the domain as part of the request, so you can actually use a curl
  command in the form:
  curl http://routerIP:9001/proxy-base-path/resource-path

Options when you do not have a DNS entry for the virtual host

One option when you do not have a DNS entry is to set the host alias to the IP
address of the Router and port of the virtual host, as routerIP:port. For example:

192.168.1.31:9001

Then you make a curl command in the form below:

curl http://routerIP:9001/proxy-base-path/resource-path

This option is preferred because it works well with the Edge UI.

If you have multiple Routers, add a host alias for each Router, specifying the IP
address of each Router and port of the virtual host.

Alternatively, you can set the host alias to a value, such as temp.hostalias.com.
Then, you have to pass the Host header on every request:

curl -v http://routerIP:9001/proxy-base-path/resource-path -H 'Host: temp.hostalias.com'

Or, add the host alias to your /etc/hosts file. For example, add this line to
/etc/hosts:

192.168.1.31 temp.hostalias.com

Then you can make a request as if you had a DNS entry:

curl -v http://temp.hostalias.com:9001/proxy-base-path/resource-path

Creating an organization, environment, and virtual host
You can create an organization, environment, and virtual host on the command line
in a single command, or you can create each one separately. In addition, you can use
the Edge API to perform many of these actions.

Video: Watch a short video for an overview of Apigee Organization setup and
configuration.

Creating an organization, environment, and virtual host at the same time

Before you create an API proxy on Apigee Edge, you must create at least one
organization and, within each organization, one or more environments and virtual
hosts.

NOTE: When creating a virtual host, you specify the Router port used by the virtual host.

For example, port 9001. By default, the Router runs as the user "apigee" which does not have
access to privileged ports, which are typically ports 1024 and below. To create a virtual host that
binds the Router to a protected port, you must configure the Router to run as a user with access
to those ports.

For more information, see Setting up a virtual host.

Typically, organizations and environments are created together. To simplify the
process, use the apigee-provision utility. Invoke it from the command line on
the Edge Management Server:

/opt/apigee/apigee-service/bin/apigee-service apigee-provision setup-org -f configFile

Where configFile is the path to a configuration file that looks similar to the following:

# Set Edge sys admin credentials.
ADMIN_EMAIL=your@email.com
APIGEE_ADMINPW=admin_password # If omitted, you are prompted for it.
NEW_USER="y"
USER_NAME=orgAdmin@myCo.com
FIRST_NAME=foo
LAST_NAME=bar
USER_PWD="userPword"
ORG_NAME=example # lowercase only, no spaces, underscores, or periods.
ENV_NAME=prod # lowercase only
VHOST_PORT=9001
VHOST_NAME=default
VHOST_ALIAS="$IP1:9001"
# Optionally configure TLS/SSL for virtual host.
# VHOST_SSL=y # Set to "y" to enable TLS/SSL on the virtual host.
# KEYSTORE_JAR= # JAR file containing the cert and private key.
# KEYSTORE_NAME= # Name of the keystore.
# KEYSTORE_ALIAS= # The key alias.
# KEY_PASSWORD= # The key password, if it has one.
# Optionally set the base URL displayed by the Edge UI for an
# API proxy deployed to the virtual host.
# VHOST_BASEURL="http://myCo.com"
# AXGROUP=axgroup-001 # Default value is axgroup-001
NOTE: For TLS/SSL configuration, see Keystores and Truststores and Configuring TLS access to
an API for the Private Cloud for more information on creating the JAR file, and other aspects of
configuring TLS/SSL.

When setting up an organization, the setup-org script does the following:

● Creates the organization. NOTE: You cannot rename an organization after you create
  it.
● Associates the organization with the "gateway" pod. You cannot change this.
● Adds the specified user as the organization administrator. If the user does
  not exist, you can create one.
● Creates one or more environments.
● Creates one or more virtual hosts for each environment.
● Associates the environment with all Message Processors.
● Enables analytics.

By default, the maximum length of the organization name and environment name is
20 characters when using the apigee-provision utility. This limit does not apply
if you use the Edge API directly to create the organization or environment. Both the
org name and environment name must be lower case.

NOTE: Rather than using the apigee-provision utility, you can use a set of additional scripts,
or the Edge API itself, to create and configure an organization. For example, use these scripts to
add an organization and associate it with multiple pods. The following sections describe those
scripts and APIs in more detail.

Create an organization

Use the create-org command to create an organization, as the following example
shows:

/opt/apigee/apigee-service/bin/apigee-service apigee-provision create-org -f configFile

This script creates the organization, but does not add or configure the environments
and virtual hosts required by the organization to handle API calls.

NOTE: You cannot rename an organization after you create it.

The configuration file contains the name of the organization and the email address
of the organization admin. The characters you can use in the name attribute are
restricted to a-z0-9\-$%. Do not use spaces, periods or upper-case letters in the
name:

APIGEE_ADMINPW=admin_password # If omitted, you are prompted for it.


ORG_NAME=example # lowercase only, no spaces, underscores, or periods.
ORG_ADMIN=orgAdmin@myCo.com
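Before running create-org, it can save a failed run to check the name locally against these rules. A minimal sketch — this helper is an illustration, not part of the Edge tooling — combining the character restriction above with the 20-character apigee-provision limit:

```shell
# Validate a prospective org name: allowed characters a-z, 0-9, -, $, %
# and, for the apigee-provision utility, at most 20 characters.
ORG_NAME=example
if [ ${#ORG_NAME} -le 20 ] && printf '%s' "$ORG_NAME" | grep -Eq '^[a-z0-9$%-]+$'; then
  result=valid
else
  result=invalid
fi
echo "$ORG_NAME: $result"
```

A name with uppercase letters or periods fails the same check.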

The create-org script:

● Creates the organization.
● Associates the organization with the "gateway" pod.
● Adds the specified user as the organization administrator. The user must
already exist; otherwise the script issues an error.
NOTE: You cannot create two organizations with the same name. In that case, the second create
will fail, like this:
<Error>
<Code>organizations.OrganizationAlreadyExists</Code>
<Message>Organization : test already exists</Message>
<Contexts/>
</Error>

Create an organization by using API calls

You can use the following API calls to create an organization. The first call creates
the organization:

curl -H "Content-Type:application/xml" -u sysAdminEmail:adminPasswd \
  -X POST http://management_server_IP:8080/v1/organizations \
  -d '<Organization name="org_name" type="paid"/>'

The next call associates the organization with a pod:

curl -H "Content-Type:application/x-www-form-urlencoded" \
-u sysAdminEmail:adminPasswd -X POST \
http://management_server_IP:8080/v1/organizations/org_name/pods \
-d "region=default&pod=gateway"

The final call adds an existing user as the org admin for the org:

curl -H "Content-Type:application/xml" -u sysAdminEmail:adminPasswd \
  -X POST http://management_server_IP:8080/v1/organizations/org_name/users/user_email/userroles/ \
  -d '<Roles><Role name="orgadmin"/></Roles>'
NOTE: Currently, Apigee Edge for Private Cloud supports the roles orgadmin, opsadmin, user,
and businessuser, all having a default permission of full access to entities (APIs, API products,
apps, developers, and reports) in an Apigee organization. Depending on the customer's needs,
you can customize the predefined roles or configure more complex roles and permissions using
the RBAC API. See http://apigee.com/docs/api/user-roles.

Note that the readonlyadmin role is available in the Cloud only.

If the user does not exist, you can use the following call to create the user as
described in Adding a user.

Create an environment
Use the add-env script to create an environment in an existing organization:

/opt/apigee/apigee-service/bin/apigee-service apigee-provision add-env -f configFile

This config file contains the information necessary to create the environment and
virtual host:

APIGEE_ADMINPW=admin_password # If omitted, you are prompted for it.


ORG_NAME=example # lowercase only, no spaces, underscores, or periods.
ENV_NAME=prod # lowercase only
VHOST_PORT=9001
VHOST_NAME=default
VHOST_ALIAS="$IP1:9001"
# Optionally configure TLS/SSL for virtual host.
# VHOST_SSL=y # Set to "y" to enable TLS/SSL on the virtual host.
# KEYSTORE_JAR= # JAR file containing the cert and private key.
# KEYSTORE_NAME= # Name of the keystore.
# KEYSTORE_ALIAS= # The key alias.
# KEY_PASSWORD= # The key password, if it has one.
# Optionally set the base URL displayed by the Edge UI for an
# API proxy deployed to the virtual host.
# VHOST_BASEURL="http://myCo.com"
# AXGROUP=axgroup-001 # Default value is axgroup-001
NOTE: For TLS/SSL configuration, see Keystores and Truststores and Configuring TLS access to
an API for the Private Cloud for more information on creating the JAR file, and other aspects of
configuring TLS/SSL.

The add-env command:

● Creates the environment.
● Creates a single virtual host for the environment.
● Associates the environment with all Message Processors in the pod
associated with the organization containing the environment.
● Enables analytics. NOTE: If you enable analytics for one environment in an organization,
you must enable analytics for all environments in the organization.

Create an environment by using API calls

Alternatively, you can use the following API calls to create an environment. The first
call creates the environment:

curl -H "Content-Type:application/xml" -u sysAdminEmail:adminPasswd \
  -X POST http://management_server_IP:8080/v1/organizations/org_name/environments \
  -d '<Environment name="env_name"/>'

NOTE: The characters you can use in the name attribute are restricted to: a-zA-Z0-9._\-$%.

The next call associates the environment with a Message Processor. Make this call
for each Message Processor that you want to associate with the environment:

curl -H "Content-Type:application/x-www-form-urlencoded" \
  -u sysAdminEmail:adminPasswd -X POST \
  "http://management_server_IP:8080/v1/organizations/org_name/environments/env_name/servers" \
  -d "action=add&uuid=uuid"

Where uuid is the UUID of the Message Processor. You can obtain the UUID by using the
command:

curl http://Message_Processor_IP:8082/v1/servers/self

Where Message_Processor_IP is the IP address of the Message Processor.

The next API call enables analytics for a given environment. It validates the
existence of Qpid and Postgres servers in the pods of all data centers, and then starts
the analytics onboarding for the given organization and environment.

NOTE: Rather than use the API call directly, you can use the following command to enable analytics:
/opt/apigee/apigee-service/bin/apigee-service apigee-provision enable-ax -f configFile

This config file contains:

ORG_NAME=orgName # lowercase only, no spaces, underscores, or periods.
ENV_NAME=envName # lowercase only

NOTE: If you enable analytics for one environment in an organization, you must enable analytics
for all environments in the organization.

The API call to enable analytics is:

curl -H "Content-Type:application/json" -u sysAdminEmail:adminPasswd \
  -X POST "http://management_server_IP:8080/v1/organizations/org_name/environments/env_name/analytics/admin" \
  -d "@sample.json"

Where sample.json contains the following:

{
"properties" : {
"samplingAlgo" : "reservoir_sampler",
"samplingTables" : "10=ten;1=one;",
"aggregationinterval" : "300000",
"samplingInterval" : "300000",
"useSampling" : "100",
"samplingThreshold" : "100000"
},
"servers" : {
"postgres-server" : [ "1acff3a5-8a6a-4097-8d26-d0886853239c", "f93367f7-edc8-
4d55-92c1-2fba61ccc4ab" ],
"qpid-server" : [ "d3c5acf0-f88a-478e-948d-6f3094f12e3b", "74f67bf2-86b2-44b7-
a3d9-41ff117475dd"]
}
}

The postgres-server property contains a comma-separated list of the Postgres
UUIDs, and the qpid-server property contains the Qpid UUIDs. If you need to
obtain these UUIDs, use the following commands.

For Qpid, run the following command:

curl -u sysAdminEmail:password "http://management_server_IP:8080/v1/servers?pod=central"

The output of this command is a JSON object. For each Qpid server, you will see
output in the form:

"type" : [ "qpid-server" ],
"uUID" : "d3c5acf0-f88a-478e-948d-6f3094f12e3b"

For Postgres, run the following command:

curl -u sysAdminEmail:admin_password "http://management_server_IP:8080/v1/servers?pod=analytics"

For each Postgres server, you will see output in the form:

"type" : [ "postgres-server" ],
"uUID" : "d3c5acf0-f88a-478e-948d-6f3094f12e3b"
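A sketch of building those comma-separated UUID lists for sample.json from the /v1/servers output, assuming python3 is available on the node; the inline JSON here is a trimmed stand-in for the curl response:

```shell
# Filter the server list by type and join the matching UUIDs with commas.
servers='[
  {"type":["qpid-server"],"uUID":"d3c5acf0-f88a-478e-948d-6f3094f12e3b"},
  {"type":["management-server"],"uUID":"11111111-2222-3333-4444-555555555555"},
  {"type":["qpid-server"],"uUID":"74f67bf2-86b2-44b7-a3d9-41ff117475dd"}
]'
qpid_uuids=$(printf '%s' "$servers" | python3 -c '
import json, sys
servers = json.load(sys.stdin)
print(",".join(s["uUID"] for s in servers if "qpid-server" in s.get("type", [])))
')
echo "$qpid_uuids"
```

Change "qpid-server" to "postgres-server" to build the other list.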
Create a virtual host
You can create a virtual host in an existing environment in an organization. Often an
environment supports multiple virtual hosts. For example, one virtual host might
support the HTTP protocol, while another virtual host in the same environment
supports the encrypted HTTPS protocol.

Use the following API call to create additional virtual hosts, or to create a virtual host
for an environment with no virtual host:

curl -H "Content-Type:application/xml" -u sysAdminEmail:adminPasswd \
  -X POST "http://management_server_IP:8080/v1/organizations/org_name/environments/env_name/virtualhosts" \
  -d '<VirtualHost name="default">
        <HostAliases>
          <HostAlias>myorg-test.apigee.net</HostAlias>
        </HostAliases>
        <Interfaces/>
        <Port>443</Port>
      </VirtualHost>'

NOTE: The characters you can use in the name attribute are restricted to a-zA-Z0-9._\-$%.

For a complete description of creating a virtual host, including creating a secure
virtual host that uses TLS/SSL over HTTPS, see Configuring TLS access to an API for
the Private Cloud.

Deleting a virtual host/environment/organization
This section shows how to remove organizations, environments, and virtual hosts.
The order of API calls is very important; for example, the step to remove an
organization can only be executed after you remove all associated environments in
the organization.
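The required order can be sketched as a dry run; the host, names, and analytics group below are placeholders, and each line stands for one of the calls described in this section:

```shell
# Teardown order: virtual hosts, Message Processor associations, analytics,
# the environment itself, pod associations, and finally the organization.
ms="http://ms_IP:8080/v1"
org=myorg; env=prod; vhost=default
plan=$(cat <<EOF
DELETE $ms/organizations/$org/environments/$env/virtualhosts/$vhost
POST   $ms/organizations/$org/environments/$env/servers (action=remove&uuid=...)
DELETE $ms/analytics/groups/ax/analytics-001/scopes?org=$org&env=$env
DELETE $ms/organizations/$org/environments/$env
POST   $ms/organizations/$org/pods (action=remove&region=...&pod=...)
DELETE $ms/organizations/$org
EOF
)
echo "$plan"
```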

Delete a virtual host


Before you can delete a virtual host from an environment, you must update any API
proxies that reference the virtual host to remove the reference. See Virtual hosts for
more.

Use the following API to delete a virtual host:


curl -u ADMIN_EMAIL:ADMIN_PASSWORD -X DELETE \
  "http://ms_IP:8080/v1/organizations/org_name/environments/env_name/virtualhosts/virtualhost_name"

Delete an environment
You can only delete an environment after you have:

1. Deleted all virtual hosts in the environment, as described above.
2. Disassociated the environment from all Message Processors.
3. Cleaned up analytics.

Disassociate an environment from Message Processor

Use the following API to remove an association of an environment with a Message
Processor. If you want to delete the environment, you must disassociate it from all
Message Processors:

curl -H "Content-Type: application/x-www-form-urlencoded" \
  -u ADMIN_USERNAME:ADMIN_PASSWORD -X POST \
  "http://ms_IP:8080/v1/organizations/org_name/environments/env_name/servers" \
  -d "action=remove&uuid=uuid"

Where uuid is the UUID of the Message Processor.

NOTE: To retrieve the UUID of the Message Processor, execute the following command:
curl "http://mp_IP:8082/v1/servers/self"

Where mp_IP is the IP address of the Message Processor.

Clean up analytics

To remove analytics information about the organization:

curl -u ADMIN_EMAIL:ADMIN_PASSWORD -X DELETE \
  "http://ms_IP:8080/v1/analytics/groups/ax/analytics_group/scopes?org=org_name&env=env_name"

Where analytics_group defaults to "analytics-001".

If you are unsure of the name of the analytics group, use the following command to
display it:

apigee-adminapi.sh analytics groups list --admin ADMIN_EMAIL --pwd ADMIN_PASSWORD --host localhost

This command returns the analytics group name in the name field.
Drop fact and aggregate tables for a specific organization and environment

Warning: This step is irreversible. Create a backup before proceeding.

To delete fact and aggregate tables:

/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-drop-tables org_name env_name [confirm_drop-N/Y]

Where confirm_drop is an optional parameter with a default value of N (which prompts
for confirmation).

Delete the environment

To delete an environment:

curl -u ADMIN_EMAIL:ADMIN_PASSWORD -X DELETE \
  "http://ms_IP:8080/v1/organizations/org_name/environments/env_name"

Delete an organization
You can only delete an organization after you have:

1. Deleted all virtual hosts in all environments in the organization, as described above.
2. Deleted all environments in the organization, as described above.
3. Disassociated the organization from all pods.

Disassociate an organization from a pod

Use the following API to disassociate an organization from a pod:

curl -H "Content-Type: application/x-www-form-urlencoded" \
  -u ADMIN_EMAIL:ADMIN_PASSWORD -X POST \
  "http://ms_IP:8080/v1/organizations/org_name/pods" \
  -d "action=remove&region=region_name&pod=pod_name"
NOTE: To find out what region and pods an organization is associated with, run the following
command:
curl -u ADMIN_EMAIL:ADMIN_PASSWORD "http://ms_IP:8080/v1/organizations/org_name/pods"

Delete the organization

Use the following API to delete an organization:

curl -u ADMIN_EMAIL:ADMIN_PASSWORD -X DELETE \
  "http://ms_IP:8080/v1/organizations/org_name"

Adding a new analytics group
When you install Edge for the Private Cloud, by default the installer creates a single
analytics group named "axgroup-001". At install time, you can change the default
name of the analytics group by including the AXGROUP property in the installation
config file:

# Specify the analytics group.
# AXGROUP=axgroup-001 # Default name is axgroup-001.

See Install Edge components on a node for more.

If you later want to add a new analytics group to your installation:

1. Create and configure the new analytics group:
   a. Create the analytics group, named axgroupNew:

      curl -u sysAdminEmail:passWord -H "Content-Type: application/json" \
        -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew"

   b. Add a consumer group to the new analytics group, named consumer-group-new.
      Consumer group names are unique within the context of each analytics group:

      curl -u sysAdminEmail:passWord -X POST -H 'Accept:application/json' \
        -H 'Content-Type:application/json' \
        "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/consumer-groups?name=consumer-group-new"

   c. Set the consumer type of the analytics group to "ax":

      curl -u sysAdminEmail:passWord -X POST -H "Content-Type:application/json" \
        "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/properties?propName=consumer-type&propValue=ax"

   d. Add the data center name. By default, you install Edge with a data
      center named "dc-1". However, if you have multiple data centers, they
      each have a unique name. This call is optional if you only have a single
      data center, and recommended if you have multiple data centers:

      curl -u sysAdminEmail:passWord -X POST -H "Content-Type:application/json" \
        "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/properties?propName=region&propValue=dc-1"
2. Add the UUIDs of the Postgres servers to the new analytics group. If you have
   configured two Postgres servers to function as a master/standby pair, specify
   both as a comma-separated list of UUIDs.
   a. To get the UUIDs of the Postgres servers, run the following cURL
      command on every Postgres server node:

      curl -u sysAdminEmail:passWord https://PG_IP:8084/v1/servers/self

   b. If you only have a single Postgres server, add it to the analytics group:

      curl -u sysAdminEmail:passWord -H "Content-Type: application/json" \
        -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/servers?uuid=UUID&type=postgres-server&force=true"

   c. If you have multiple Postgres servers configured as a master/standby
      pair, then add them by specifying a comma-separated list of UUIDs:

      curl -u sysAdminEmail:passWord -H "Content-Type: application/json" \
        -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/servers?uuid=UUID_Master,UUID_standby&type=postgres-server&force=true"

   d. This command returns information about the analytics group,
      including the UUID of the Postgres server in the postgres-server
      property under uuids:

      {
        "name" : "axgroupNew",
        "properties" : {
          "region" : "dc-1",
          "consumer-type" : "ax"
        },
        "scopes" : [ ],
        "uuids" : {
          "qpid-server" : [ ],
          "postgres-server" : [ "2cb7211f-eca3-4eaf-9146-66363684e220" ]
        },
        "consumer-groups" : [ {
          "name" : "consumer-group-new",
          "consumers" : [ ],
          "datastores" : [ ],
          "properties" : {
          }
        } ],
        "data-processors" : {
        }
      }

   e. Add the Postgres server to the data store of the consumer group. This
      call is required to route analytics messages from the Qpid servers to
      the Postgres servers:

      curl -u sysAdminEmail:passWord -X POST -H 'Accept:application/json' \
        -H 'Content-Type:application/json' \
        "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/consumer-groups/consumer-group-new/datastores?uuid=UUID"

   f. If multiple Postgres servers are configured as a master/standby pair,
      then add them by specifying a comma-separated list of UUIDs:

      curl -u sysAdminEmail:passWord -X POST -H 'Accept:application/json' \
        -H 'Content-Type:application/json' \
        "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/consumer-groups/consumer-group-new/datastores?uuid=UUID_Master,UUID_standby"

      The UUID appears in the datastores property of the consumer-groups in the output.
3. Add the UUIDs of all Qpid servers to the new analytics group. You must
   perform this step for all Qpid servers.
   a. To get the UUIDs of the Qpid servers, run the following cURL command
      on every Qpid server node:

      curl -u sysAdminEmail:passWord https://QP_IP:8083/v1/servers/self

   b. Add the Qpid server to the analytics group:

      curl -u sysAdminEmail:passWord -H "Content-Type: application/json" \
        -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/servers?uuid=UUID&type=qpid-server"

   c. Add the Qpid server to the consumer group:

      curl -u sysAdminEmail:passWord -X POST -H 'Accept:application/json' \
        -H 'Content-Type:application/json' \
        "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/consumer-groups/consumer-group-new/consumers?uuid=UUID"

   d. This call returns the following, where you can see the UUID of the Qpid
      server added to the qpid-server property under uuids, and to the
      consumers property under consumer-groups:

      {
        "name" : "axgroupNew",
        "properties" : {
          "region" : "dc-1",
          "consumer-type" : "ax"
        },
        "scopes" : [ ],
        "uuids" : {
          "qpid-server" : [ "fb6455c3-f5ce-433a-b98a-bdd016acd5af" ],
          "postgres-server" : [ "2cb7211f-eca3-4eaf-9146-66363684e220" ]
        },
        "consumer-groups" : [ {
          "name" : "consumer-group-new",
          "consumers" : [ "fb6455c3-f5ce-433a-b98a-bdd016acd5af" ],
          "datastores" : [ "2cb7211f-eca3-4eaf-9146-66363684e220" ],
          "properties" : {
          }
        } ],
        "data-processors" : {
        }
      }
4. Provision an organization and environment for the new analytics group:

   curl -u sysAdminEmail:passWord -H "Content-Type: application/json" \
     -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/scopes?org=org_name&env=env_name"
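After provisioning, a quick sanity check is to confirm that every consumer-group lists both a consumer (Qpid) and a datastore (Postgres). A sketch, assuming python3 is available; the inline JSON is a trimmed stand-in for the group object returned by the calls above:

```shell
# "wired" means every consumer-group has at least one consumer and one datastore.
group='{"name":"axgroupNew","consumer-groups":[{"name":"consumer-group-new","consumers":["fb6455c3-f5ce-433a-b98a-bdd016acd5af"],"datastores":["2cb7211f-eca3-4eaf-9146-66363684e220"]}]}'
status=$(printf '%s' "$group" | python3 -c '
import json, sys
g = json.load(sys.stdin)
ok = all(cg["consumers"] and cg["datastores"] for cg in g["consumer-groups"])
print("wired" if ok else "incomplete")
')
echo "$status"
```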

Managing users, roles, and permissions


The Apigee documentation site has extensive information on managing user roles
and permissions. Users can be managed using both the Edge UI and the
Management API; roles and permissions can be managed only with the Management
API.

For information on users and creating users, see:

● About global users
● Creating global users

Many of the operations that you perform to manage users require system
administrator privileges. In a Cloud-based installation of Edge, Apigee functions in
the role of system administrator. In an Edge for the Private Cloud installation, your
system administrator must perform these tasks as described below.

Adding a user
You can create a user by using the Edge API, the Edge UI, or Edge commands.
This section describes how to use the Edge API and Edge commands. For information
on creating users in the Edge UI, see Creating global users.

After you create the user in an organization, you must assign a role to the user. Roles
determine the access rights of the user on Edge.

Note: The user cannot log in to the Edge UI, and does not appear in the list of users in the Edge
UI, until you assign it to a role in an organization.

Use the following command to create a user with the Edge API:

curl -H "Content-Type:application/xml" \
  -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
  -X POST http://ms_IP:8080/v1/users \
  -d '<User>
        <FirstName>New</FirstName>
        <LastName>User</LastName>
        <Password>NEW_USER_PASSWORD</Password>
        <EmailId>foo@bar.com</EmailId>
      </User>'

Or use the following Edge command to create a user:

/opt/apigee/apigee-service/bin/apigee-service apigee-provision create-user -f configFile

Where configFile contains the information needed to create the user, as the following example shows:

APIGEE_ADMINPW=SYS_ADMIN_PASSWORD # If omitted, you will be prompted.


USER_NAME=foo@bar.com
FIRST_NAME=New
LAST_NAME=User
USER_PWD="NEW_USER_PASSWORD"
ORG_NAME=myorg
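When scripting user creation, you can generate that config file from shell variables with a heredoc; the file path here is arbitrary:

```shell
# Write the create-user config file, then pass it to apigee-provision with -f.
cfg=/tmp/new-user.conf
cat > "$cfg" <<EOF
APIGEE_ADMINPW=SYS_ADMIN_PASSWORD
USER_NAME=foo@bar.com
FIRST_NAME=New
LAST_NAME=User
USER_PWD="NEW_USER_PASSWORD"
ORG_NAME=myorg
EOF
echo "wrote $cfg"
```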

You can then use this call to view information about the user:

curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD
http://ms_IP:8080/v1/users/foo@bar.com

Assigning the user to a role in an organization


Before a new user can do anything, they have to be assigned to a role in an
organization. You can assign the user to different roles, including: orgadmin,
businessuser, opsadmin, user, or to a custom role defined in the organization.
Assigning a user to a role in an organization automatically adds that user to the
organization. Assign a user to multiple organizations by assigning them to a role in
each organization.
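Assigning a user to several organizations therefore reduces to one role-assignment call per organization, which is easy to loop over. A dry-run sketch, with echo standing in for the curl call; the org list and role are illustrative:

```shell
# One role-assignment request per organization for the same user.
user=foo@bar.com
role=user
out=$(for org in org-a org-b; do
  echo "POST http://ms_IP:8080/v1/o/$org/userroles/$role/users?id=$user"
done)
echo "$out"
```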

Use the following command to assign the user to a role in an organization:

curl -X POST -H "Content-Type:application/x-www-form-urlencoded" \
  "http://ms_IP:8080/v1/o/org_name/userroles/role/users?id=foo@bar.com" \
  -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD

This call displays all the roles assigned to the user. If you want to add the user, but
display only the new role, use the following call:

curl -X POST -H "Content-Type: application/xml" \
  http://ms_IP:8080/v1/o/org_name/users/foo@bar.com/userroles \
  -d '<Roles><Role name="role"/></Roles>' \
  -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD

You can view the user's roles by using the following command:

curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD
http://ms_IP:8080/v1/users/foo@bar.com/userroles

To remove a user from an organization, remove all roles in that organization from the
user. Use the following command to remove a role from a user:

curl -X DELETE -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
  http://ms_IP:8080/v1/o/org_name/userroles/role/users/foo@bar.com

Adding a system administrator


A system administrator can:

● Create orgs
● Add Routers, Message Processors, and other components to an Edge
installation
● Configure TLS/SSL
● Create additional system administrators
● Perform all Edge administrative tasks

Although a single user is designated as the default system administrator, there can be
more than one system administrator. Any user who is a member of the "sysadmin"
role has full permissions to all resources.

Note: The "sysadmin" role is unique in that a user assigned to that role does not have to be part
of an organization. However, you typically assign it to an organization, otherwise that user cannot
log in to the Edge UI.
You can create the user for the system administrator in either the Edge UI or API.
However, you must use the Edge API to assign the user to the role of "sysadmin".
Assigning a user to the "sysadmin" role cannot be done in the Edge UI.

To add a system administrator:

1. Create a user in the Edge UI or API.
2. Add the user to the "sysadmin" role:

   curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
     -X POST http://ms_IP:8080/v1/userroles/sysadmin/users -d 'id=foo@bar.com'

3. Ensure that the new user is in the "sysadmin" role:

   curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
     http://ms_IP:8080/v1/userroles/sysadmin/users

   This call returns the user's email address:

   [ "foo@bar.com" ]

4. Check the permissions of the new user:

   curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
     http://ms_IP:8080/v1/users/foo@bar.com/permissions

   This call returns:

   {
     "resourcePermission" : [ {
       "path" : "/",
       "permissions" : [ "get", "put", "delete" ]
     } ]
   }

5. After you add the new system administrator, you can add the user to any orgs.
   Note: The new system administrator user cannot log in to the Edge UI until you add the
   user to at least one org.
6. If you later want to remove the user from the system administrator role, you
   can use the following API:

   curl -X DELETE -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
     http://ms_IP:8080/v1/userroles/sysadmin/users/foo@bar.com

   Note that this call only removes the user from the role; it does not delete the user.

Changing the default system administrator user


At the time you install Edge, you specify the email address of the system
administrator. Edge creates a user with that email address, and sets that user to be
the default system administrator. You can later add additional system administrators
as described above.

This section describes how to change the default system administrator to be a
different user, and how to change the email address of the user account for the
current default system administrator.
To see the list of users currently configured as system administrators, use the
following API call:

curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD
http://ms_IP:8080/v1/userroles/sysadmin/users

To determine the current default system administrator, view the
/opt/apigee/customer/defaults.sh file. The file contains the following line
showing the email address of the current default system administrator:

ADMIN_EMAIL=foo@bar.com

To change the default system administrator to be a different user:

Note: As part of changing the default system administrator, you have to update the Edge UI.

1. Create a new system administrator as described above, or ensure that the
   user account of the new system administrator is already configured as a
   system administrator.
2. Edit /opt/apigee/customer/defaults.sh to set ADMIN_EMAIL to the
   email address of the new system administrator.
3. Edit the silent config file that you used to install the Edge UI to set the
   following properties:

   ADMIN_EMAIL=NEW_SYS_ADMIN_EMAIL
   APIGEE_ADMINPW=NEW_SYS_ADMIN_PASSWORD
   SMTPHOST=smtp.gmail.com
   SMTPPORT=465
   SMTPUSER=foo@gmail.com
   SMTPPASSWORD=bar
   SMTPSSL=y

   Note that you must include the SMTP properties because all properties on the
   UI are reset.
4. Reconfigure the Edge UI:

   /opt/apigee/apigee-service/bin/apigee-service edge-ui stop
   /opt/apigee/apigee-service/bin/apigee-service edge-ui setup -f configFile
   /opt/apigee/apigee-service/bin/apigee-service edge-ui start

If you just want to change the email address of the user account for the current
default system administrator, you first update the user account to set the new email
address, then change the default system administrator email address:

1. Update the user account of the current default system administrator user with
   a new email address:

   curl -H content-type:application/json -X PUT \
     -u CURRENT_SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
     http://ms_IP:8080/v1/users/CURRENT_SYS_ADMIN_EMAIL \
     -d '{"emailId": "NEW_SYS_ADMIN_EMAIL", "lastName": "admin", "firstName": "admin"}'

2. Repeat steps 2, 3, and 4 from the previous procedure to update the
   /opt/apigee/customer/defaults.sh file and to update the Edge UI.

Specifying the email domain of a system administrator


As an extra level of security, you can specify the required email domain of an Edge
system administrator. When adding a system administrator, if the user's email
address is not in the specified domain, then adding the user to the "sysadmin" role
fails.

By default, the required domain is empty, meaning you can add any email address to
the "sysadmin" role.

To set the email domain:

1. Open the management-server.properties file in an editor:

   vi /opt/apigee/customer/application/management-server.properties

   If this file does not exist, create it.
2. Set the conf_security_rbac.global.roles.allowed.domains
   property to the comma-separated list of allowed domains. For example:

   conf_security_rbac.global.roles.allowed.domains=myCo.com,yourCo.com

3. Save your changes.
4. Restart the Edge Management Server:

   /opt/apigee/apigee-service/bin/apigee-service edge-management-server restart

If you now attempt to add a user to the "sysadmin" role, and the email address
of the user is not in one of the specified domains, the add fails.
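The effect of the property can be illustrated with a small check; this mirrors the behavior described above and is not Edge's actual implementation:

```shell
# Accept an email only if its domain appears in the comma-separated list.
allowed="myCo.com,yourCo.com"
email="admin@myCo.com"
domain="${email##*@}"
if printf ',%s,' "$allowed" | grep -q ",$domain,"; then
  verdict=accepted
else
  verdict=rejected
fi
echo "$email -> $verdict"
```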

Deleting a user
You can create a user either by using the Edge API or the Edge UI. However, you can
only delete a user by using the API.

To see the list of current users, including email address, use the following curl
command:

curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD http://ms_IP:8080/v1/users

Use the following curl command to delete a user:

curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD -X DELETE \
  http://ms_IP:8080/v1/users/USER_EMAIL
Get UUIDs
A UUID (Universally Unique IDentifier) is a unique ID for a component in your system.
Some maintenance and configuration tasks for Private Cloud require you to use the
UUID of a component.

This section shows multiple methods you can use to get UUIDs of Private Cloud
components.

Use the management API


To get the UUID for Private Cloud components with the management API, use the
following API calls:

Component            API call
Router               curl http://router_IP:8081/v1/servers/self
Message Processor    curl http://mp_IP:8082/v1/servers/self
Qpid                 curl http://qp_IP:8083/v1/servers/self
Postgres             curl http://pg_IP:8084/v1/servers/self

Note that the port numbers are different, depending on which component you call.

If you call the API from the machine itself, then you do not need to specify a
username and password. If you call the API remotely, you must specify the Edge
administrator's username and password, as the following example shows:

curl http://10.1.1.0:8081/v1/servers/self -u user@example.com:abcd1234

Each of these calls returns a JSON object that contains details about the service.
The uUID property specifies the service's UUID, as the following example shows:

{
"buildInfo" : {
...
},
...
"tags" : {
...
},
"type" : [ "router" ],
"uUID" : "71ad42fb-abd1-4242-b795-3ef29342fc42"
}
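If you only need the UUID itself, you can extract the uUID field from the response on the command line. A sketch assuming python3 is available; the inline JSON stands in for the curl output:

```shell
# Pull just the uUID value out of a /v1/servers/self response.
response='{"type":["router"],"uUID":"71ad42fb-abd1-4242-b795-3ef29342fc42"}'
uuid=$(printf '%s' "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["uUID"])')
echo "$uuid"
```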

You can optionally set the Accept header to application/xml to return XML rather
than JSON. For example:

curl http://10.1.1.0:8081/v1/servers/self -u user@example.com:abcd1234 \
  -H "Accept:application/xml"

Use apigee-adminapi.sh
You can get the UUIDs of some components by using the servers list option of
the apigee-adminapi.sh utility. To get UUIDs with apigee-adminapi.sh, use
the following syntax:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh servers list \
  --admin admin_email_address --pwd admin_password --host edge_server

Where:

● admin_email_address is the email address of the Edge administrator.
● admin_password is the Edge administrator's password.
● edge_server is the IP address of the server from which you want a list. If you
are logged into the server, you can use localhost.

For example:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh servers list \
  --admin user@example.com --pwd abcd1234 --host localhost

This command returns a complex JSON object that specifies the same properties for
each service as the management API calls.

As with the management API calls, you can optionally set the Accept header to
application/xml to instruct apigee-adminapi.sh to return XML rather than
JSON. For example:

/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh servers list \
  --admin user@example.com --pwd abcd1234 --host localhost -H "Accept:application/xml"
Submitting API traffic statistics to Apigee
All Edge for Private Cloud customers are required to submit statistics about API
proxy traffic to Apigee. Apigee recommends that you upload that information once a
day, possibly by creating a cron job.
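One way to drive the daily upload from cron is a small wrapper that computes yesterday's 24-hour window in the MM/DD/YYYY format the stats API expects. A sketch; the script path and schedule in the comment are examples only:

```shell
#!/bin/sh
# Example cron entry (path and schedule are illustrative):
#   0 2 * * * /opt/apigee/customer/collect-traffic-stats.sh
# Compute the window bounds with GNU date, falling back to BSD date syntax.
end=$(date -u +%m/%d/%Y)
start=$(date -u -d yesterday +%m/%d/%Y 2>/dev/null || date -u -v-1d +%m/%d/%Y)
echo "timeRange=$start 00:00~$end 00:00"
```

The printed timeRange value still needs URL encoding before it goes into the stats call.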

You must submit the statistics for your production API deployments, but not for APIs
in development or testing deployments. In most Edge installations, you will define
specific organizations or environments for your production APIs. The statistics that
you submit are only for those production organizations and environments.

Submit your API traffic statistics to Apigee

To submit your statistics to Apigee:

1. Collect the data using the Edge management API.
2. Send the data through email to: edge.apiops@google.com

You must repeat this process for every production organization and environment in
your Edge installation.

Collect the data

Use the following curl command to gather traffic data for a specific organization
and environment for a specified time interval:

curl -X GET -u apigee_mgmt_api_email:apigee_mgmt_api_password \
  "http://ms_IP:8080/v1/organizations/org_name/environments/env_name/stats/apiproxy?select=sum(message_count)&timeRange=MM/DD/YYYY%20HH:MM~MM/DD/YYYY%20HH:MM&timeUnit=hour"

This command uses the Edge Get API message count API. In this command:

● apigee_mgmt_api_email:apigee_mgmt_api_password specifies the email
address of an account with access to the Edge /stats APIs.
● ms_IP is the IP address or DNS name of the Edge Management Server.
● org_name and env_name specify the org and environment.
● apiproxy is the dimension that groups metrics by API proxies.
● MM/DD/YYYY%20HH:MM~MM/DD/YYYY%20HH:MM&timeUnit=hour specifies
the time range, divided into time units, of the metrics to gather. Notice that the
curl command uses the hex code %20 for spaces in the time range.
For example, to gather API proxy message counts hour by hour over a 24 hour
period, use the following API call.

curl -X GET -u apigee_mgmt_api_email:apigee_mgmt_api_password \
"http://192.168.56.103:8080/v1/organizations/myOrg/environments/prod/stats/apiproxy?select=sum(message_count)&timeRange=01%2F01%2F2018%2000%3A00~01%2F02%2F2018%2000%3A00&timeUnit=hour"

(Note that timeRange contains URL-encoded characters.)
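If you script this call, the timeRange value must be percent-encoded before it goes into the query string. As a minimal sketch (the url_encode_timerange helper is illustrative, not part of any Apigee tooling), a sed pipeline can apply the same encoding used in the example above:

```shell
#!/bin/bash
# Hypothetical helper: percent-encode the characters the stats API example
# escapes in a timeRange value (slash, colon, and space).
url_encode_timerange() {
  printf '%s' "$1" | sed -e 's,/,%2F,g' -e 's/:/%3A/g' -e 's/ /%20/g'
}

RAW='01/01/2018 00:00~01/02/2018 00:00'
url_encode_timerange "$RAW"
# prints 01%2F01%2F2018%2000%3A00~01%2F02%2F2018%2000%3A00
```

The encoded value can then be substituted into the curl command's timeRange parameter.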

You should see a response in the form:

{
"environments" : [ {
"dimensions" : [ {
"metrics" : [ {
"name" : "sum(message_count)",
"values": [
{
"timestamp": 1514847600000,
"value": "35.0"
},
{
"timestamp": 1514844000000,
"value": "19.0"
},
{
"timestamp": 1514840400000,
"value": "58.0"
},
{
"timestamp": 1514836800000,
"value": "28.0"
},
{
"timestamp": 1514833200000,
"value": "29.0"
},
{
"timestamp": 1514829600000,
"value": "33.0"
},
{
"timestamp": 1514826000000,
"value": "26.0"
},
{
"timestamp": 1514822400000,
"value": "57.0"
},
{
"timestamp": 1514818800000,
"value": "41.0"
},
{
"timestamp": 1514815200000,
"value": "27.0"
},
{
"timestamp": 1514811600000,
"value": "47.0"
},
{
"timestamp": 1514808000000,
"value": "66.0"
},
{
"timestamp": 1514804400000,
"value": "50.0"
},
{
"timestamp": 1514800800000,
"value": "41.0"
},
{
"timestamp": 1514797200000,
"value": "49.0"
},
{
"timestamp": 1514793600000,
"value": "35.0"
},
{
"timestamp": 1514790000000,
"value": "89.0"
},
{
"timestamp": 1514786400000,
"value": "42.0"
},
{
"timestamp": 1514782800000,
"value": "47.0"
},
{
"timestamp": 1514779200000,
"value": "21.0"
},
{
"timestamp": 1514775600000,
"value": "27.0"
},
{
"timestamp": 1514772000000,
"value": "20.0"
},
{
"timestamp": 1514768400000,
"value": "12.0"
},
{
"timestamp": 1514764800000,
"value": "7.0"
}
]
}
],
"name" : "proxy1"
} ],
"name" : "prod"
} ],
"metaData" : {
"errors" : [ ],
"notices" : [ "query served by:53dab80c-e811-4ba6-a3e7-b96f53433baa", "source
pg:6b7bab33-e732-405c-a5dd-4782647ce096", "Table used: myorg.prod.agg_api" ]
}
}
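Because Apigee recommends uploading the data once a day, the collection step is a natural fit for a cron job. The following sketch only assembles and prints the curl command for yesterday's 24-hour window; the Management Server address, organization, environment, and credentials are placeholder values, and GNU date is assumed. Remove the echo to actually run the call and capture the JSON.

```shell
#!/bin/bash
# Sketch of a daily stats-collection job. All values below are placeholders.
MS_IP="192.168.56.103"
ORG="myOrg"
ENV="prod"
CREDS="apigee_mgmt_api_email:apigee_mgmt_api_password"

# Yesterday 00:00 through today 00:00, with %20 encoding the space
# between the date and the time (GNU date syntax).
START="$(date -u -d 'yesterday' +%m/%d/%Y)"
END="$(date -u +%m/%d/%Y)"
TIME_RANGE="${START}%2000:00~${END}%2000:00"

URL="http://${MS_IP}:8080/v1/organizations/${ORG}/environments/${ENV}/stats/apiproxy?select=sum(message_count)&timeRange=${TIME_RANGE}&timeUnit=hour"

# Dry run: print the command a real cron job would execute.
echo curl -s -X GET -u "$CREDS" "$URL"
```

A crontab entry such as `0 1 * * * /opt/scripts/collect-apigee-stats.sh` (path hypothetical) would then run it shortly after midnight; repeat per production organization and environment.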
Disabling RPMs for patch releases
When you perform an OS patch update using the sudo yum update command, the update pulls down new Apigee RPMs if they have changed since their last update. The next time components are restarted, change deltas are reported due to the RPM changes, which you might not want.

To prevent Apigee components from getting the new RPMs when an OS patch update is done through the sudo yum update command, you can disable the Apigee repos from getting automated upgrades. Later, when you need to apply Apigee upgrades, you can re-enable those repos.

To disable the Apigee repos, enter the following commands:

yum-config-manager --disable apigee-release

yum-config-manager --disable apigee-thirdparty

When upgrades to Apigee are needed, re-enable the repos with these commands:

yum-config-manager --enable apigee-release

yum-config-manager --enable apigee-thirdparty

BACKUP AND RESTORE


Backup and restore
This section describes the backup and restore tasks in an on-premises installation of
Apigee Edge. You should back up Apigee Edge components (that is, their configuration
and data) at regular intervals so that you can recover in the event of a system
failure. Backup and restore procedures enable you to restore the state of an entire
system (including all components) without affecting other parts of the system.
What to back up
In an on-premises deployment of Apigee Edge, you must back up the following Edge
components:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
Note: In a Postgres master/standby configuration, you only back up the master. You do not have
to back up the standby node.

Recovery time objective (RTO) vs. recovery point objective (RPO)
The RTO is the duration of time and a service level within which a business process
must be restored after a disaster (or disruption) in order to avoid unacceptable
consequences associated with a break in business continuity.

An RPO is the maximum tolerable period in which data might be lost from an IT
service due to a major incident. Both objectives must be taken into consideration
before you implement a backup plan for your recovery strategy.

Before you start: useful facts


Installation data is distributed across several systems; for example, organizations
are stored in LDAP, ZooKeeper, and Cassandra. Keep the following notes in mind when
you back up and restore:

● If you have multiple Cassandra nodes, back them up one at a time.


● If you have multiple ZooKeeper nodes, back them up one at a time. The
backup process temporarily shuts down ZooKeeper.
● If you have multiple Postgres nodes, back them up one at a time.
● You can back up all other Edge components at the same time on all nodes by
using tools such as Ansible or Chef.
● If you restore one of the ZooKeeper, Cassandra, or LDAP nodes, it is recommended
that you restore all three in order to achieve consistency (especially when
organizations or environments have been created since the backup was taken).
Note: This does not affect restoration of a single Cassandra or ZooKeeper node in a datastore
cluster, since no backup is used in that case.
● If LDAP or global administrator passwords are lost or corrupted, a complete
backup is required in order to restore the credentials that were in place at the
time of the last backup.
● The backup utility writes the generated backup file to
/opt/apigee/backup/comp where comp is the name of the component.
Because you can generate many backup files, and because these files can get
large, you can mount a separate disk at /opt/apigee/backup just for
backup files.
● All backup files, except for PostgreSQL, are named in the form:
backup-year.month.day,hour.min.seconds.tar.gz
For example:
backup-2018.05.29,11.13.41.tar.gz
● PostgreSQL backup files are named:
year.month.day,hour.min.seconds.dump
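Because the timestamp fields in the file name are zero-padded, a plain lexical sort orders backup files chronologically, which makes it easy to pick the newest one when many accumulate. A small sketch (using a temporary directory with fabricated file names in place of a real /opt/apigee/backup/component directory):

```shell
#!/bin/bash
# Find the newest backup file for a component: the
# backup-year.month.day,hour.min.seconds.tar.gz pattern sorts chronologically.
backup_dir="$(mktemp -d)"   # stand-in for /opt/apigee/backup/apigee-zookeeper
touch "$backup_dir/backup-2017.12.31,23.59.59.tar.gz"
touch "$backup_dir/backup-2018.05.29,11.13.41.tar.gz"
touch "$backup_dir/backup-2018.06.01,09.02.03.tar.gz"

latest="$(ls "$backup_dir" | sort | tail -n 1)"
echo "$latest"   # prints backup-2018.06.01,09.02.03.tar.gz
```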

How to back up
Note: When backing up a complete Edge installation, you must back up all components. See
"What to Backup" at Backup and restore for more.

Use the following command to perform a backup:

/opt/apigee/apigee-service/bin/apigee-service component_name backup

Where component_name is the name of the component. Possible values include:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
For example:

/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra backup


Warning: The backup utility writes the generated backup file to
/opt/apigee/backup/component_name, where component_name is the name of the
component. Because you can generate many backup files, and because these files can get large,
you can mount a separate disk at /opt/apigee/backup just for backup files.

Note: For components not associated with a data store, such as the Router or Message Processor, the
backup command still creates a backup file containing configuration and other information.

Note: For PostgreSQL only, you can optionally create a network storage snapshot as an alternative to
using the Apigee backup command. This snapshot can be useful because the Apigee backup
command takes time to run, and PostgreSQL data generated while the backup runs could be
omitted from the resulting .dump file.

To perform a backup:

1. Stop the component (except for PostgreSQL and Cassandra, which must be running to perform a backup):
/opt/apigee/apigee-service/bin/apigee-service component_name stop
2. Run the backup command:
/opt/apigee/apigee-service/bin/apigee-service component_name backup
The backup command then:
● Creates a tar file of the following directories and files, where
component_name is the name of the component:
a. Directories
● /opt/apigee/data/component_name
● /opt/apigee/etc/component_name.d
b. Files, if they exist
● /opt/apigee/token/application/component_name.properties
● /opt/apigee/customer/application/component_name.properties
● /opt/apigee/customer/defaults.sh
● /opt/apigee/customer/conf/license.txt
● Creates a .tar.gz file in the /opt/apigee/backup/component_name
directory. The file name has the following form:
backup-year.month.day,hour.min.seconds.tar.gz
For example:
backup-2018.05.29,11.13.42.tar.gz
For PostgreSQL, the file name has the following form:
year.month.day,hour.min.seconds.dump
3. Start the component (not needed for PostgreSQL and Cassandra, which were not stopped):
/opt/apigee/apigee-service/bin/apigee-service component_name start

If you have multiple Edge components installed on the same node, you can back
them all up with a single command:
/opt/apigee/apigee-service/bin/apigee-all backup
NOTE: Ensure that you stop the components, except for PostgreSQL and Cassandra, before
doing the backup. Then start the components after the backup completes.

This command creates a backup file for each component on the node.
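The stop/backup/start sequence can also be wrapped in a script when you prefer per-component control over apigee-all. The sketch below is a dry run that only prints the commands it would execute, and it hard-codes an example component list; note that PostgreSQL and Cassandra are never stopped, per the note above.

```shell
#!/bin/bash
# Dry-run sketch of a per-component backup sequence on one node.
# PostgreSQL and Cassandra must stay running while they are backed up.
APIGEE_SERVICE="/opt/apigee/apigee-service/bin/apigee-service"
COMPONENTS="apigee-cassandra apigee-postgresql edge-router edge-message-processor"

needs_stop() {
  case "$1" in
    apigee-cassandra|apigee-postgresql) return 1 ;;  # keep running
    *) return 0 ;;
  esac
}

for comp in $COMPONENTS; do
  if needs_stop "$comp"; then
    echo "$APIGEE_SERVICE $comp stop"
  fi
  echo "$APIGEE_SERVICE $comp backup"
  if needs_stop "$comp"; then
    echo "$APIGEE_SERVICE $comp start"
  fi
done
```

Removing the echo statements turns the dry run into the real sequence.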

Restore from a backup


You can restore a component from the file you created when backing up that
component. You do this with the restore command.

Note that the restore command:

● Uses the specified backup file or gets the latest backup file, if a filename was
not specified.
● Ensures that the component's data directories are empty.
● Stops the component. You must explicitly restart the component after a
restore.

This section describes how to use the restore command.

To restore a component from a backup file:

1. Ensure that the following directories are empty:
/opt/apigee/data/component_name
/opt/apigee/etc/component_name.d
If they are not empty, delete their contents using commands like the following:
rm -r /opt/apigee/data/component_name
rm -r /opt/apigee/etc/component_name.d
2. Restore the previous configuration and data by using the following command:
/opt/apigee/apigee-service/bin/apigee-service component_name restore backup_file
Where:
● component_name is the name of the component. Possible values include:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
● backup_file is the name of the file you created when you backed up that
component; this value does not include the path, but does include the
"backup-" prefix and file extensions. For example,
backup-2019.03.17,14.40.41.tar.gz.
Note: For apigee-postgresql, backup_file should not include the path or the
database-name prefix (for example, apigee-), nor the .dump file extension. For
example, if the file name is apigee-2019.03.17,14.40.41.dump, specify only the
datetime part: 2019.03.17,14.40.41.
For example:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restore backup-2019.03.17,14.40.41.tar.gz
Specifying backup_file is optional. If omitted, Apigee uses the most recent file
in /opt/apigee/backup/component_name.
The restore command re-applies the backed-up configuration and restores
the data from when the backup took place.
3. Restart the component, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service component_name start

NOTE: There is no "restore all" command. You must restore each component individually.

Restore a component to an existing environment

This document covers restoration of any Edge component to an existing environment
without having to re-install the component. This means the node where you are
restoring the component has the same IP address or DNS name as when you performed
the backup.

If you have to re-install the component, see How to Reinstall and Restore
Components.

Apache ZooKeeper
Restore one standalone node
1. Remove old ZooKeeper directories:
/opt/apigee/data/apigee-zookeeper
/opt/apigee/etc/apigee-zookeeper.d
2. Restore ZooKeeper data from the backup file:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restore backup-2016.03.17,14.40.41.tar.gz
3. Restart all components to establish synchronization with the newly restored
ZooKeeper.

Restore one cluster node

1. If a single ZooKeeper node that is part of an ensemble fails, you can create a
new node with the same hostname/IP address (follow the re-install steps in
How to Reinstall and Restore Components). When the new node joins the
ZooKeeper ensemble, it gets the latest snapshots from the Leader and starts
to serve clients. You do not need to restore data in this case.

Restore a complete cluster

1. Stop the complete cluster.


2. Restore all ZooKeeper nodes from the backup file.
3. Start the ZooKeeper cluster.
4. Restart all components.

Apache Cassandra
Restore one standalone node

1. Remove the old Cassandra directory:
/opt/apigee/data/apigee-cassandra
2. Restore the Cassandra node from the backup file:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restore backup-2016.03.17,14.40.41.tar.gz
3. Restart all components.

Restore one cluster node

1. If a single Cassandra node that is part of an ensemble fails, you can create a
new node with the same hostname/IP address (follow the re-install steps in
How to Reinstall and Restore Components). You only need to re-install
Cassandra; you do not need to restore the data.
When performing a restore on a non-seed node, ensure that at least one
Cassandra seed node is up.
2. After installing Cassandra, and once the node is up (given that RF>=2 for all
keyspaces), execute the following nodetool command to initialize the node:
/opt/apigee/apigee-cassandra/bin/nodetool [-u username -pw password] -h localhost repair -pr
You only need to pass your username and password if you enabled JMX
authentication for Cassandra.
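Because the bracketed -u/-pw portion is optional, a wrapper script can build the argument list conditionally. A sketch (the command is printed rather than executed, and the JMX_USER/JMX_PASS variables are illustrative):

```shell
#!/bin/bash
# Build the nodetool repair command, adding JMX credentials only when set.
NODETOOL="/opt/apigee/apigee-cassandra/bin/nodetool"
JMX_USER=""   # set these only if JMX authentication is enabled
JMX_PASS=""

args=()
if [ -n "$JMX_USER" ]; then
  args+=(-u "$JMX_USER" -pw "$JMX_PASS")
fi

# Dry run: print the command that would be executed.
echo "$NODETOOL" "${args[@]}" -h localhost repair -pr
```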

Restore a complete cluster

1. Stop the complete cluster.


2. Restore all Cassandra nodes from the backup file.
3. Start the Cassandra cluster.
4. Restart all components.

PostgreSQL database
PostgreSQL running standalone or as Master

1. Stop the Management Server, Edge SSO, Qpid Server, and Postgres Server on all
nodes:
Note: Your system can still handle requests to API proxies while these components are stopped.
/opt/apigee/apigee-service/bin/apigee-service edge-management-server stop
/opt/apigee/apigee-service/bin/apigee-service apigee-sso stop
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server stop
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
2. Make sure the PostgreSQL database is running:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql status
3. Restore the PostgreSQL database from the backup file:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restore 2016.03.17,14.40.41.dump
4. Start the Management Server, Edge SSO, Qpid Server, and Postgres Server on all nodes:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server start
/opt/apigee/apigee-service/bin/apigee-service apigee-sso start
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server start
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server start

PostgreSQL running as Standby

1. Reconfigure the PostgreSQL database using the same config file you used to
install it:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup -f configFile
2. Start PostgreSQL:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql start

Postgres Server

1. Remove old Postgres Server directories:
/opt/apigee/data/edge-postgres-server
/opt/apigee/etc/edge-postgres-server.d
2. Restore Postgres Server from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restore backup-2016.03.17,14.40.41.tar.gz
3. Start Postgres Server:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server start

Qpidd database

1. Remove old Qpidd directories:
/opt/apigee/data/apigee-qpidd
/opt/apigee/etc/apigee-qpidd.d
2. Restore Qpidd:
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd restore backup-2016.03.17,14.40.41.tar.gz
3. Start Qpidd:
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd start

Qpid Server

1. Remove old Qpid Server directories:
/opt/apigee/data/edge-qpid-server
/opt/apigee/etc/edge-qpid-server.d
2. Restore Qpid Server from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restore backup-2016.03.17,14.40.41.tar.gz
3. Start Qpid Server:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server start

OpenLDAP
1. Remove old OpenLDAP directories:
/opt/apigee/data/apigee-openldap
/opt/apigee/etc/apigee-openldap.d
2. Restore OpenLDAP from the backup file:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap restore 2016.03.17,14.40.41
3. Restart OpenLDAP:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap start

Management Server

1. Remove old Management Server directories:
/opt/apigee/data/edge-management-server
/opt/apigee/etc/edge-management-server.d
2. Restore Management Server from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restore backup-2016.03.17,14.40.41.tar.gz
3. Restart the Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server start

Message Processor

1. Remove old Message Processor directories:
/opt/apigee/data/edge-message-processor
/opt/apigee/etc/edge-message-processor.d
2. Restore Message Processor from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restore backup-2016.03.17,14.40.41.tar.gz
3. Restart Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor start

Router

1. Remove old Router directories:
/opt/apigee/data/edge-router
/opt/apigee/etc/edge-router.d
2. Restore Router from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-router restore backup-2016.03.17,14.40.41.tar.gz
3. Restart Router:
/opt/apigee/apigee-service/bin/apigee-service edge-router start

Edge UI

1. Remove old UI directories:
/opt/apigee/data/edge-ui
/opt/apigee/etc/edge-ui.d
2. Restore UI from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restore backup-2016.03.17,14.40.41.tar.gz
3. Restart UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui start

Reinstall and restore components


This document covers re-installation and restoration of an Edge component. Use this
procedure if you have to re-install the Edge component before you restore the
backup.

NOTE: These procedures all assume that you are re-installing the component onto a node with
the same IP address or DNS name as the node that failed. For Cassandra, you can only use IP
addresses.

Apache ZooKeeper
Restore one standalone node
1. Stop ZooKeeper:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper stop
2. Remove old ZooKeeper directories:
/opt/apigee/data/apigee-zookeeper
/opt/apigee/etc/apigee-zookeeper.d
3. Re-install ZooKeeper:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper install
4. Restore ZooKeeper:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restore 2019.03.17,14.40.41
Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
5. Restart all components:
/opt/apigee/apigee-service/bin/apigee-all restart
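Since only the date/time portion of the file name is passed to restore, deriving the argument from a backup file name is a matter of stripping the path, prefix, and extension. A sketch using shell parameter expansion (the to_restore_arg helper is illustrative):

```shell
#!/bin/bash
# Derive the restore argument (the date/time part) from a backup file name by
# stripping the directory path, any "backup-" or "apigee-" prefix, and the
# .tar.gz or .dump extension.
to_restore_arg() {
  local name="${1##*/}"       # drop any directory path
  name="${name#backup-}"      # drop the "backup-" prefix, if present
  name="${name#apigee-}"      # drop a PostgreSQL database-name prefix
  name="${name%.tar.gz}"      # drop the .tar.gz extension
  name="${name%.dump}"        # or the PostgreSQL .dump extension
  printf '%s\n' "$name"
}

to_restore_arg "/opt/apigee/backup/apigee-zookeeper/backup-2019.03.17,14.40.41.tar.gz"
# prints 2019.03.17,14.40.41
```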

Restore one cluster node

If a single ZooKeeper node that is part of an ensemble fails, you can create a new
node with the same hostname/IP address and re-install ZooKeeper. When the new
ZooKeeper node joins the ensemble, it gets the latest snapshots from the Leader and
starts to serve clients. You do not need to restore data in this case.

1. Re-install ZooKeeper:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper install
2. Run setup on the ZooKeeper node using the same config file used when
installing the original node:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper setup -f configFile
3. Start ZooKeeper:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper start

Restore a complete cluster


1. Stop the complete cluster.
2. Restore all ZooKeeper nodes from the backup file as described above for a
single node.
3. Start the ZooKeeper cluster.
4. Restart all components.

Apache Cassandra
Restore one standalone node
1. Stop Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra stop
2. Remove the old Cassandra directory:
/opt/apigee/data/apigee-cassandra
3. Re-install Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra install
4. Restore Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restore 2019.03.17,14.40.41
Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
5. Restart all components:
/opt/apigee/apigee-service/bin/apigee-all restart

Restore one cluster node

If a single Cassandra node that is part of an ensemble fails, you can create a new
node with the same hostname/IP address. You only need to re-install Cassandra; you
do not need to restore the data.

NOTE: When performing a re-install on a non-seed node, ensure that at least one Cassandra seed
node is up.

1. Re-install Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra install
2. Run setup on the Cassandra node using the same config file used when
installing the original node:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra setup -f configFile
3. Start Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra start
4. After installing Cassandra, and once the node is up (given that RF>=2 for all
keyspaces), execute the following nodetool command to initialize the node:
/opt/apigee/apigee-cassandra/bin/nodetool [-u username -pw password] -h localhost repair -pr
You only need to pass your username and password if you enabled JMX
authentication for Cassandra.

Restore a complete cluster


1. Stop the complete cluster.
2. Restore all Cassandra nodes from the backup file.
3. Start the Cassandra cluster.
4. Restart all components.

PostgreSQL database
PostgreSQL running standalone or as Master
1. Stop the Management Server, Qpid Server, and Postgres Server on all
nodes:
NOTE: Your system can still handle requests to API proxies while these components are stopped.
/opt/apigee/apigee-service/bin/apigee-service edge-management-server stop
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server stop
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
2. Re-install the PostgreSQL database:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql install
3. Start PostgreSQL:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql start
4. Restore the PostgreSQL database from the backup file:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restore 2019.03.17,14.40.41
Note that when restoring the PostgreSQL component, you do not specify the
directory path to the backup file, nor do you specify the ".dump" suffix. You
only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
5. Start the Management Server, Qpid Server, and Postgres Server on all nodes:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server start
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server start
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server start

PostgreSQL running as Standby

1. Re-install the PostgreSQL database:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql install
2. Reconfigure the PostgreSQL database using the same config file you used to
install it:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup -f configFile
3. Start PostgreSQL:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql start

Postgres Server
1. Stop Postgres Server on all master and standby nodes:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
2. Remove old Postgres Server directories:
/opt/apigee/data/edge-postgres-server
/opt/apigee/etc/edge-postgres-server.d
3. Re-install Postgres Server:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server install
4. Restore Postgres Server from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restore 2019.03.17,14.40.41
Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
5. Start Postgres Server on all master and standby nodes:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server start
Qpid Server and Qpidd
1. Stop Qpid Server, Postgres Server, and Qpidd on all nodes:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server stop
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd stop
2. Remove old Qpid Server and Qpidd directories:
/opt/apigee/data/edge-qpid-server
/opt/apigee/etc/edge-qpid-server.d
/opt/apigee/data/apigee-qpidd
/opt/apigee/etc/apigee-qpidd.d
3. Re-install Qpidd:
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd install
4. Restore Qpidd:
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd restore 2019.03.17,14.40.41
Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
5. Start Qpidd:
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd start
6. Re-install Qpid Server:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server install
7. Restore Qpid Server:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restore 2019.03.17,14.40.41
As above, you only specify the date/time part of the backup file's name, and
you can optionally omit it to use the most recent backup file.
8. Restart Qpidd, Qpid Server, and Postgres Server on all nodes:
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart

OpenLDAP
1. Stop OpenLDAP:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap stop
2. Re-install OpenLDAP:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap install
3. Remove old OpenLDAP directories:
/opt/apigee/data/apigee-openldap
/opt/apigee/etc/apigee-openldap.d
4. Restore OpenLDAP:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap restore 2019.03.17,14.40.41
Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
5. Restart OpenLDAP:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap start
6. Restart all Management Servers:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart

Management Server
1. Stop Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server stop
2. Remove old Management Server directories:
/opt/apigee/data/edge-management-server
/opt/apigee/etc/edge-management-server.d
3. Re-install Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server install
4. Restore Management Server from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restore 2019.03.17,14.40.41
Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
5. Restart Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server start

Message Processor
1. Stop Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor stop
2. Remove old Message Processor directories:
/opt/apigee/data/edge-message-processor
/opt/apigee/etc/edge-message-processor.d
3. Re-install Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor install
4. Restore Message Processor from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restore 2019.03.17,14.40.41
Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
5. Restart Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor start

Router
1. Stop Router:
/opt/apigee/apigee-service/bin/apigee-service edge-router stop
2. Remove old Router directories:
/opt/apigee/data/edge-router
/opt/apigee/etc/edge-router.d
3. Re-install Router:
/opt/apigee/apigee-service/bin/apigee-service edge-router install
4. Restore Router from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-router restore 2019.03.17,14.40.41
Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
5. Restart Router:
/opt/apigee/apigee-service/bin/apigee-service edge-router start

Edge UI
1. Stop UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui stop
2. Remove the old UI directories:
/opt/apigee/data/edge-ui
/opt/apigee/etc/edge-ui.d
3. Re-install UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui install
4. Restore UI from the backup file:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restore 2019.03.17,14.40.41
Note that when restoring a component, you do not specify the directory path
to the backup file, the "backup-" prefix, or the ".tar.gz" suffix; you specify
only the date/time part of the backup file's name. If you omit the date/time,
Edge restores from the most recent backup file in the component's backup directory.
5. Restart UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui start

Complete Site Recovery

1. Stop all component nodes. The order in which you stop the subsystems is
important: first stop all Edge nodes, and then stop all datastore nodes.
2. Restore all components as described above.
3. Start all components in the following order. The order in which you start
the subsystems is also important:
a. Start the ZooKeeper cluster
b. Start the Cassandra cluster
c. Ensure that OpenLDAP is up and running
d. Start the Qpid daemon (qpidd)
e. Ensure that the PostgreSQL database is up and running
f. Start Management Server
g. Start Routers and Message Processors
h. Start Qpid Server
i. Start Postgres Server
j. Start the Edge UI
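The start sequence above can be scripted. The sketch below assumes the stock Edge service names and the standard apigee-service path, and only prints the commands unless DRY_RUN=0:

```shell
#!/bin/sh
# Start all components in the recovery order listed above.
# Assumptions: stock Edge service names, standard apigee-service path.
DRY_RUN=${DRY_RUN:-1}   # default: print the commands instead of running them
START_ORDER="apigee-zookeeper apigee-cassandra apigee-openldap apigee-qpidd \
apigee-postgresql edge-management-server edge-router edge-message-processor \
edge-qpid-server edge-postgres-server edge-ui"
for c in $START_ORDER; do
  cmd="/opt/apigee/apigee-service/bin/apigee-service $c start"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    $cmd || exit 1      # stop at the first component that fails to start
  fi
done
```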

MONITORING
What to monitor
In a production deployment of Apigee Edge for Private Cloud, you should enable
monitoring mechanisms that warn the network administrators (or operators) of an
error or a failure. Every error generated is reported as an alert in Apigee Edge.
For more information on alerts, see Monitoring Best Practices.

For ease, Apigee components are classified mainly into two categories:
● Apigee-specific Java Server Services: These include Management Server,
Message Processor, Qpid Server, and Postgres Server.
● Third-party Services: These include NGINX Router, Apache Cassandra,
Apache ZooKeeper, OpenLDAP, the PostgreSQL database, and Qpid.

In an on-premise deployment of Apigee Edge, you can monitor the following kinds
of parameters for the components listed above (not every kind applies to every
component): system checks, process-level stats, API-level checks, message flow
checks, and component-specific checks. These are described in the sections that
follow.
In general, after Apigee Edge has been installed, you can perform the following
monitoring tasks to track the performance of an Apigee Edge for Private Cloud
installation.

System health checks

It is very important to measure system health parameters such as CPU
utilization, memory utilization, and port connectivity at a high level. You can
monitor the following parameters to get the basics of system health:

● CPU Utilization: Specifies basic statistics (User/System/IO Wait/Idle)
about CPU utilization. For example, total CPU used by the system.
● Free/Used Memory: Specifies system memory utilization in bytes. For
example, physical memory used by the system.
● Disk Space Usage: Specifies file system information based on current
disk usage. For example, hard disk space used by the system.
● Load Average: Specifies the number of processes waiting to run.
● Network Statistics: Network packets and/or bytes transmitted and received,
along with transmission errors, for a specified component.
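These basics can be snapshotted with standard Linux tools; a minimal sketch with no Apigee-specific assumptions:

```shell
# One-shot snapshot of basic host health: load average, root-disk usage,
# and memory usage, using standard Linux tools.
load=$(cut -d' ' -f1-3 /proc/loadavg)
echo "load average: $load"
disk=$(df -P / | awk 'NR==2 {print $5}')
echo "root disk used: $disk"
free -m 2>/dev/null | awk 'NR==2 {print "mem used (MB): " $3 " of " $2}'
```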

Processes/application checks

At the process level, you can view important information about all running
processes, such as the memory and CPU usage statistics that a process or
application consumes. For processes like qpidd, postgres postmaster, java, and
so on, you can monitor the following:

● Process identification: Identify a particular Apigee process. For example,
you can monitor for the existence of an Apigee server java process.
● Thread statistics: View the underlying threading patterns a process uses.
For example, you can monitor peak thread count and thread count for all
processes.
● Memory utilization: View memory usage for all Apigee processes. For
example, you can monitor parameters such as heap and non-heap memory
usage for a process.
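A minimal existence check for an Edge process can be built on pgrep; the component name below is illustrative and can be swapped for any Edge service name:

```shell
# Process-level existence check for an Edge component using pgrep.
comp=${comp:-edge-management-server}
state=$(pgrep -f "$comp" >/dev/null 2>&1 && echo up || echo down)
echo "$comp: $state"
```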

API-level checks

At the API level, you can monitor whether a server is up and running for
frequently used API calls proxied by Apigee. For example, you can perform an API
check on the Management Server, Router, and Message Processor by invoking the
following curl command:

curl http://host:port/v1/servers/self/up

Where host is the IP address of the Apigee Edge component, and port is the
Management API port, which is specific to each Edge component. For example:

● Management Server: 8080
● Router: 8081
● Message Processor: 8082
● Qpid Server: 8083
● Postgres Server: 8084

See the individual sections below for information on running this command for
each component.

This call returns either true or false. For best results, you can also issue
API calls directly against the backend (with which the Apigee software
interacts) to quickly determine whether an error exists within the Apigee
software environment or on the backend.
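The /up check can be looped over the Management API ports listed above. In this sketch, HOST is an assumption; substitute the IP address of each component:

```shell
# Poll /v1/servers/self/up on each component's Management API port.
# Ports per the list above: 8080 Management Server, 8081 Router,
# 8082 Message Processor.
HOST=${HOST:-localhost}
overall=ok
for p in 8080 8081 8082; do
  body=$(curl -s --max-time 5 "http://$HOST:$p/v1/servers/self/up" 2>/dev/null)
  case "$body" in
    *true*) echo "port $p: up" ;;
    *)      echo "port $p: DOWN or unreachable"; overall=down ;;
  esac
done
echo "overall: $overall"
```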

Message flow checks

You can collect data from Routers and Message Processors about message flow
patterns and statistics. This allows you to monitor the following:
● Number of active clients
● Number of responses (10X, 20X, 30X, 40X, and 50X)
● Connect failures

This data can drive dashboards for the API message flow. For more, see How to
monitor.

Router health check of the Message Processor

The Router implements a health check mechanism to determine which Message
Processors are working as expected. If a Message Processor is detected as down
or slow, the Router can automatically take the Message Processor out of
rotation. If that happens, the Router writes a "Mark Down" message to the
Router log file at /opt/apigee/var/log/edge-router/logs/system.log.

You can monitor the Router log file for these messages. For example, if the
Router takes a Message Processor out of rotation, it writes a message to the
log in the form:

2014-05-06 15:51:52,159 org: env: RPCClientClientProtocolChildGroup-RPC-0 INFO CLUSTER - ServerState.setState() : State of 2a8a0e0c-3619-416f-b037-8a42e7ad4577 is now DISCONNECTED. handle = MP_IP at 1399409512159

2014-04-17 12:54:48,512 org: env: nioEventLoopGroup-2-2 INFO HEARTBEAT - HBTracker.gotResponse() : No HeartBeat detected from /MP_IP:PORT Mark Down

Where MP_IP:PORT is the IP address and port number of the Message Processor.

If a later health check determines that the Message Processor is functioning
properly, the Router automatically puts the Message Processor back into
rotation. The Router also writes a "Mark Up" message to the log in the form:

2014-05-06 16:07:29,054 org: env: RPCClientClientProtocolChildGroup-RPC-0 INFO CLUSTER - ServerState.setState() : State of 2a8a0e0c-3619-416f-b037-8a42e7ad4577 is now CONNECTED. handle = IP at 1399410449054

2014-04-17 12:55:06,064 org: env: nioEventLoopGroup-4-1 INFO HEARTBEAT - HBTracker.updateHB() : HeartBeat detected from IP:PORT Mark Up
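Monitoring scripts can key off the trailing "Mark Down"/"Mark Up" tokens in these lines. A sketch against a sample line (the IP and port below are placeholders, not values from an install; in production, read lines from the Router system.log):

```shell
# Classify a Router health-check log line by its trailing marker.
# The sample mirrors the heartbeat format shown above.
line='2014-04-17 12:54:48,512 org: env: nioEventLoopGroup-2-2 INFO HEARTBEAT - HBTracker.gotResponse() : No HeartBeat detected from /10.0.0.5:8998 Mark Down'
case "$line" in
  *"Mark Down"*) event=markdown ;;
  *"Mark Up"*)   event=markup ;;
  *)             event=other ;;
esac
echo "event: $event"
```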
How to monitor
This document describes techniques for monitoring the components supported by
an on-premise deployment of Apigee Edge for Private Cloud.

Overview
Edge supports several ways of getting details about services as well as
checking their statuses. For the eligible components (Management Server,
Message Processor, Router, Qpid, and Postgres), the available types of checks
are the following (not every check applies to every component):

● Memory usage [JMX*]
● Management API checks: service check, user/organization/deployment status,
axstatus, and database check
● apigee-service status
● apigee-monit status**

* Before you can use JMX, you must enable it, as described in Enable JMX.

** The apigee-monit service checks if a component is up and will attempt to restart it if it isn't. For more information, see Self healing with apigee-monit.

Monitoring ports and configuration files

Each component supports Java Management Extensions (JMX) and Management API
monitoring calls on different ports. The following table lists the JMX and
Management API ports for each type of server, along with configuration file
locations.
Warning: Apigee recommends that JMX ports be open for internal servers only. JMX ports
should be closed to external access.

Component           JMX Port   Management API Port   Configuration file location
Management Server   1099       8080                  $APIGEE_ROOT/customer/application/management-server.properties
Message Processor   1101       8082                  $APIGEE_ROOT/customer/application/message-processor.properties
Router              1100       8081                  $APIGEE_ROOT/customer/application/router.properties
Qpid                1102       8083                  $APIGEE_ROOT/customer/application/qpid-server.properties
Postgres            1103       8084                  $APIGEE_ROOT/customer/application/postgres-server.properties

Use JMX to monitor components

The following sections explain how to use JMX to monitor Edge components.

Enable JMX

To enable JMX without authentication or SSL-based communication, perform the
steps below. Note: In production systems, both encrypted authentication and SSL
should be enabled for security.
1. Edit the appropriate configuration file (see Configuration file reference).
Create the configuration file if it doesn't exist, and add:
conf_system_jmxremote_enable=true
2. Save the configuration file and make sure it is owned by apigee:apigee.
3. Restart the appropriate Edge component. For example:
apigee-service edge-management-server restart

To disable JMX, either remove the property conf_system_jmxremote_enable or
change its value to false, and then restart the appropriate Edge component.
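The edit in step 1 can be done non-interactively. In this sketch, CFG defaults to a local stand-in file so it can run anywhere; on an Edge node you would point it at the component's real properties file from the table above:

```shell
# Append conf_system_jmxremote_enable=true to the component's properties
# file if it is not already set.
CFG=${CFG:-./management-server.properties}
touch "$CFG"
grep -q '^conf_system_jmxremote_enable=' "$CFG" || \
  echo 'conf_system_jmxremote_enable=true' >> "$CFG"
grep '^conf_system_jmxremote_enable=' "$CFG"
# then restart the component: apigee-service edge-management-server restart
```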

Authentication in JMX

Edge for Private Cloud supports password-based authentication using details
stored in files. You can store passwords as a hash for added security.

1. To enable JMX authentication in an edge-* component, edit the appropriate
configuration file (see Configuration file reference). Create the configuration
file if it doesn't exist, and add:
conf_system_jmxremote_enable=true
conf_system_jmxremote_authenticate=true
conf_system_jmxremote_encrypted_auth=true
conf_system_jmxremote_access_file=/opt/apigee/customer/application/management-server/jmxremote.access
conf_system_jmxremote_password_file=/opt/apigee/customer/application/management-server/jmxremote.password
2. Save the configuration file and make sure it is owned by apigee:apigee.
3. Create a SHA256 hash of the password, where PASSWORD is the JMX password:
echo -n 'PASSWORD' | openssl dgst -sha256
4. Create a jmxremote.password file with JMX user credentials:
a. Copy the following file from your $JAVA_HOME directory to the
directory /opt/apigee/customer/application/<component>/:
cp ${JAVA_HOME}/lib/management/jmxremote.password.template $APIGEE_ROOT/customer/application/management-server/jmxremote.password
b. Edit the file and add your JMX username and password using the
following syntax:
USERNAME <HASH-PASSWORD>
c. Make sure the file is owned by apigee and that the file mode is 400:

chown apigee:apigee
$APIGEE_ROOT/customer/application/management-server/jmxremote.password

chmod 400 $APIGEE_ROOT/customer/application/management-server/jmxremote.password
5. Create a jmxremote.access file with JMX user permissions:
a. Copy the following file from your $JAVA_HOME directory to the
directory /opt/apigee/customer/application/<component>/:
cp ${JAVA_HOME}/lib/management/jmxremote.access $APIGEE_ROOT/customer/application/management-server/jmxremote.access
b. Edit the file and add your JMX username followed by a permission
(READONLY/READWRITE):
USERNAME READONLY
c. Make sure the file is owned by apigee and that the file mode is 400:

chown apigee:apigee
$APIGEE_ROOT/customer/application/management-server/jmxremote.access

chmod 400 $APIGEE_ROOT/customer/application/management-server/jmxremote.access
6. Restart the appropriate Edge component:
apigee-service edge-management-server restart

To disable JMX authentication, either remove the property
conf_system_jmxremote_authenticate or change its value to false, and then
restart the appropriate Edge component.
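The hash placed in jmxremote.password comes from the openssl command in the steps above. A sketch with a placeholder password (not a value from any install); awk keeps only the hex digest from openssl's output:

```shell
# Generate the SHA-256 hash for jmxremote.password. 'MyJmxPass' is a
# placeholder; the resulting file line format is: USERNAME <HASH-PASSWORD>
hash=$(printf '%s' 'MyJmxPass' | openssl dgst -sha256 | awk '{print $NF}')
echo "jmxuser $hash"
```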

SSL in JMX

To enable SSL-based JMX in an edge-* component:

1. Edit the appropriate configuration file (see Configuration file reference).
Create the configuration file if it doesn't exist, and add:
conf_system_jmxremote_enable=true
conf_system_jmxremote_ssl=true
conf_system_javax_net_ssl_keystore=/opt/apigee/customer/application/management-server/jmxremote.keystore
conf_system_javax_net_ssl_keystorepassword=<keystore-password>
2. Save the configuration file and make sure it is owned by apigee:apigee.
3. Prepare a keystore containing the server key and place it at the path
provided in the conf_system_javax_net_ssl_keystore configuration above.
Ensure the keystore file is readable by apigee:apigee.
4. Restart the appropriate Edge component:
apigee-service edge-management-server restart

To disable SSL-based JMX, either remove the property
conf_system_jmxremote_ssl or change its value to false, and then restart the
appropriate Edge component.

Monitor with JConsole

Use JConsole (a JMX-compliant tool) to manage and monitor health checks and
process statistics. With JConsole, you can consume the JMX statistics exposed
by your servers and display them in a graphical interface. For more
information, see Using JConsole.

If SSL is enabled for JMX, you need to start JConsole with the truststore and
truststore password. See Using JConsole.

JConsole uses the following service URL to monitor the JMX attributes (MBeans)
offered via JMX:

service:jmx:rmi:///jndi/rmi://IP_address:port_number/jmxrmi

Where:

● IP_address is the IP address of the server you want to monitor.
● port_number is the JMX port number of the server you want to monitor.

For example, to monitor the Management Server, issue a command like the following
(assuming the server's IP address is 216.3.128.12):

service:jmx:rmi:///jndi/rmi://216.3.128.12:1099/jmxrmi

Note that this example specifies port 1099, which is the Management Server JMX
port. For other ports, see Monitoring ports and configuration files.

The following table shows the generic JMX statistics:

JMX MBean: Memory
JMX Attributes: HeapMemoryUsage, NonHeapMemoryUsage

NOTE
Each attribute is reported as four values: committed, init, max, and used.

Configuration file reference

The following sections describe changes you might need to make to Edge
component configuration files for JMX-related configuration. See Monitoring
ports and configuration files for more information.

JMX configuration to be added to the appropriate component's configuration file:

● Enable the JMX agent on the Edge component. False by default.
● conf_system_jmxremote_enable=true

Configurations for password-based authentication

● Enable password-based authentication. False by default.
● conf_system_jmxremote_authenticate=true
● Path to the access file. Should be owned and readable by the Apigee user only.
● conf_system_jmxremote_access_file=/opt/apigee/customer/application/management-server/jmxremote.access
● Path to the password file. Should be owned and readable by the Apigee user only.
● conf_system_jmxremote_password_file=/opt/apigee/customer/application/management-server/jmxremote.password
● Enable storing the password in encrypted format. False by default.
● conf_system_jmxremote_encrypted_auth=true

Configurations for SSL-based JMX

● Enable SSL for JMX communication. False by default.
● conf_system_jmxremote_ssl=true
● Path to the keystore. Should be owned and readable by the Apigee user only.
● conf_system_javax_net_ssl_keystore=/opt/apigee/customer/application/management-server/jmxremote.keystore
● Keystore password:
● conf_system_javax_net_ssl_keystorepassword=changeme

Optional JMX configurations

Values listed are default values and can be changed.

● JMX port. Default values are listed in Monitoring ports and configuration files.
● conf_system_jmxremote_port=
● JMX RMI port. By default, the Java process picks a random port.
● conf_system_jmxremote_rmi_port=
● The hostname for remote stubs. Defaults to the IP address of localhost.
● conf_system_java_rmi_server_hostname=
● Protect the JMX registry with SSL. False by default. Only applicable if SSL is enabled.
● conf_system_jmxremote_registry_ssl=false

Monitor with the Management API

Edge includes several APIs that you can use to perform service checks on your
servers as well as check your users, organizations, and deployments. This
section describes these APIs.

Perform service checks

The Management API provides several endpoints for monitoring and diagnosing
issues with your services. These endpoints include:

Endpoint: /v1/servers/self/up
Checks to see if a service is running. This API call does not require you to
authenticate. If the service is running, this endpoint returns the following
response:

<ServerField>
<Up>true</Up>
</ServerField>

If the service is not running, you will get a response similar to the
following (depending on which service it is and how you checked it):

curl: Failed connect to localhost:port_number; Connection refused

Endpoint: /v1/servers/self
Returns information about the service, including:
● Configuration properties
● Start time and up time
● Build, RPM, and UUID information
● Internal and external hostname and IP address
● Region and pod
● <isUp> property, indicating whether the service is running
This API call requires you to authenticate with your Apigee admin credentials.
To use these endpoints, invoke a utility such as curl with commands that use the
following syntax:

curl http://host:port_number/v1/servers/self/up -H "Accept: [application/json|application/xml]"
curl http://host:port_number/v1/servers/self -u username:password -H "Accept: [application/json|application/xml]"

Where:

● host is the IP address of the server you want to check. If you are logged
in to the server, you can use "localhost"; otherwise, specify the IP address
of the server as well as the username and password.
● port_number is the Management API port for the server you want to check.
This is a different port for each type of component. For example, the
Management Server's Management API port is 8080. For a list of Management
API port numbers to use, see Monitoring ports and configuration files.

To change the format of the response, you can specify the Accept header as
"application/json" or "application/xml".

The following example gets the status of the Router on localhost (port 8081):

curl http://localhost:8081/v1/servers/self/up -H "Accept: application/xml"

The following example gets information about the Message Processor at
216.3.128.12 (port 8082):

curl http://216.3.128.12:8082/v1/servers/self -u sysAdminEmail:password -H "Accept: application/xml"

Monitor user, organization, and deployment status

You can use the Management API to monitor user, organization, and deployment
status of your proxies on Management Servers and Message Processors by issuing
the following commands:

curl http://host:port_number/v1/users -u sysAdminEmail:password
curl http://host:port_number/v1/organizations -u sysAdminEmail:password
curl http://host:port_number/v1/organizations/orgname/deployments -u sysAdminEmail:password

Where port_number is either 8080 for the Management Server or 8082 for the
Message Processor.

This call requires you to authenticate with your system administration username and
password.

The server should return a "deployed" status for all calls. If these calls
fail, do the following:

1. Check the server logs for any errors. The logs are located at:
● Management Server: /opt/apigee/var/log/edge-management-server
● Message Processor: /opt/apigee/var/log/edge-message-processor
2. Make a call against the server to check whether it is functioning properly.
3. Remove the server from the ELB and then restart it:
/opt/apigee/apigee-service/bin/apigee-service service_name restart
Where service_name is:
● edge-management-server
● edge-message-processor

Check status with the apigee-service command

You can troubleshoot your Edge services by using the apigee-service command
when you are logged in to the server running the service.

To check the status of a service with apigee-service:

1. Log in to the server and run the following command:
/opt/apigee/apigee-service/bin/apigee-service service_name status
Where service_name is one of the following:
● Management Server: edge-management-server
● Message Processor: edge-message-processor
● Postgres: edge-postgres-server
● Qpid: edge-qpid-server
● Router: edge-router
For example:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor status
2. If the service is not running, start it:
/opt/apigee/apigee-service/bin/apigee-service service_name start
3. After restarting the service, check that it is functioning, either by using
the apigee-service status command you used previously or by using the
Management API described in Monitor with the Management API. For example:
curl -v http://localhost:port_number/v1/servers/self/up
Where port_number is the Management API port for the service.
This example assumes you are logged in to the server and can use "localhost"
as the hostname. To check the status remotely with the Management API, you
must specify the IP address of the server and include the system
administrator username and password in your API call.
NOTE
Ensure that you specify the correct port when checking a service. Each service uses a different
port for Management API monitoring, as described in Monitoring ports and configuration files.

Postgres monitoring
Postgres supports several utilities that you can use to check its status. These
utilities are described in the sections that follow.

Check organizations and environments on Postgres

You can check for organization and environment names that are onboarded on the
Postgres Server by issuing the following curl command:

curl -v http://postgres_IP:8084/v1/servers/self/organizations

The system should display the organization and environment names.

Verify analytics status

You can verify the status of the Postgres and Qpid analytics servers by issuing the
following curl command:

curl -u userEmail:password
http://host:port_number/v1/organizations/orgname/environments/envname/
provisioning/axstatus

The system should display a success status for all analytics servers, as the following
example shows:

{
"environments" : [ {
"components" : [ {
"message" : "success at Thu Feb 28 10:27:38 CET 2013",
"name" : "pg",
"status" : "SUCCESS",
"uuid" : "[c678d16c-7990-4a5a-ae19-a99f925fcb93]"
}, {
"message" : "success at Thu Feb 28 10:29:03 CET 2013",
"name" : "qs",
"status" : "SUCCESS",
"uuid" : "[ee9f0db7-a9d3-4d21-96c5-1a15b0bf0adf]"
} ],
"message" : "",
"name" : "prod"
} ],
"organization" : "acme",
"status" : "SUCCESS"
}

PostgreSQL database

This section describes techniques that you can use specifically to monitor the
PostgreSQL database.

Use the check_postgres.pl script

To monitor the PostgreSQL database, you can use the standard monitoring script
check_postgres.pl. For more information, see
http://bucardo.org/wiki/Check_postgres.

Before you run the script:

1. Install the check_postgres.pl script on each Postgres node.
2. Ensure that you have installed perl-Time-HiRes.x86_64, a Perl module
that implements high-resolution alarm, sleep, gettimeofday, and interval
timers. For example, you can install it with:
yum install perl-Time-HiRes.x86_64
3. CentOS 7: Before using check_postgres.pl on CentOS 7, install the
perl-Data-Dumper.x86_64 RPM.

check_postgres.pl output

The default output of API calls using check_postgres.pl is Nagios-compatible.
After you install the script, perform the following checks:

1. Check the database size:
check_postgres.pl -H 10.176.218.202 -db apigee -u apigee -dbpass postgres -include=apigee -action database_size --warning='800 GB' --critical='900 GB'
2. Check the number of incoming connections to the database and compare it
with the maximum allowed connections:
check_postgres.pl -H 10.176.218.202 -db apigee -u apigee -dbpass postgres -action backends
3. Check whether the database is running and available:
check_postgres.pl -H 10.176.218.202 -db apigee -u apigee -dbpass postgres -action connection
4. Check the disk space:
check_postgres.pl -H 10.176.218.202 -db apigee -u apigee -dbpass postgres -action disk_space --warning='80%' --critical='90%'
5. Check the number of organizations and environments onboarded in a Postgres
node:
check_postgres.pl -H 10.176.218.202 -db apigee -u apigee -dbpass postgres -action=custom_query --query="select count(*) as result from pg_tables where schemaname='analytics' and tablename like '%fact'" --warning='80' --critical='90' --valtype=integer
NOTE
Refer to http://bucardo.org/check_postgres/check_postgres.pl.html if you need help
using the above commands.

Run database checks

You can verify that the proper tables are created in the PostgreSQL database.
Log in to the PostgreSQL database using the following command:

psql -h /opt/apigee/var/run/apigee-postgresql/ -U apigee -d apigee

Then run:

\d analytics."org.env.fact"

Check the health status of the Postgres process

You can perform API checks on the Postgres machine by invoking the following
curl command:

curl -v http://postgres_IP:8084/v1/servers/self/health
NOTE
Ensure that you use port 8084.

This command returns ACTIVE when the Postgres process is active. If the
Postgres process is not up and running, it returns INACTIVE.

Postgres resources

For additional information about monitoring the Postgres service, see the following:

● http://www.postgresql.org/docs/9.0/static/monitoring.html
● http://www.postgresql.org/docs/9.0/static/diskusage.html
● http://bucardo.org/check_postgres/check_postgres.pl.html

Apache Cassandra
JMX is enabled by default for Cassandra, and remote JMX access to Cassandra
does not require a password.

Enable JMX authentication for Cassandra

You can enable JMX authentication for Cassandra. After doing so, you must pass
a username and password with all calls to the nodetool utility.

To enable JMX authentication for Cassandra:

1. Create and edit the cassandra.properties file:
a. Edit the /opt/apigee/customer/application/cassandra.properties file.
If the file does not exist, create it.
b. Add the following to the file:
conf_cassandra_env_com.sun.management.jmxremote.authenticate=true
conf_cassandra_env_com.sun.management.jmxremote.password.file=${APIGEE_ROOT}/customer/application/apigee-cassandra/jmxremote.password
conf_cassandra_env_com.sun.management.jmxremote.access.file=${APIGEE_ROOT}/customer/application/apigee-cassandra/jmxremote.access
c. Save the cassandra.properties file.
d. Change the owner of the file to apigee:apigee, as the following
example shows:
chown apigee:apigee /opt/apigee/customer/application/cassandra.properties
For more information on using properties files to set tokens, see How to
configure Edge.
2. Create and edit jmx_auth.sh:
a. Create a file at the following location if it does not exist:
/opt/apigee/customer/application/jmx_auth.sh
b. Add the following properties to the file:
export CASS_JMX_USERNAME=JMX_USERNAME
export CASS_JMX_PASSWORD=JMX_PASSWORD
c. Save the jmx_auth.sh file.
d. Source the file:
source /opt/apigee/customer/application/jmx_auth.sh
3. Copy and edit the jmxremote.password file:
a. Copy the following file from your $JAVA_HOME directory to
/opt/apigee/customer/application/apigee-cassandra/:
cp ${JAVA_HOME}/lib/management/jmxremote.password.template $APIGEE_ROOT/customer/application/apigee-cassandra/jmxremote.password
b. Edit the jmxremote.password file and add your JMX username and
password using the following syntax:
JMX_USERNAME JMX_PASSWORD
Where JMX_USERNAME and JMX_PASSWORD are the JMX username
and password you set previously.
c. Make sure the file is owned by "apigee" and that the file mode is 400:

chown apigee:apigee
/opt/apigee/customer/application/apigee-cassandra/jmxremote.password

chmod 400
/opt/apigee/customer/application/apigee-cassandra/jmxremote.password
4. Copy and edit the jmxremote.access file:
a. Copy the following file from your $JAVA_HOME directory to
/opt/apigee/customer/application/apigee-cassandra/:

cp ${JAVA_HOME}/lib/management/jmxremote.access
$APIGEE_ROOT/customer/application/apigee-cassandra/jmxremote.access
b. Edit the jmxremote.access file and add the following role:
JMX_USERNAME readwrite
c. Make sure the file is owned by "apigee" and that the file mode is 400:

chown apigee:apigee
/opt/apigee/customer/application/apigee-cassandra/jmxremote.access

chmod 400
/opt/apigee/customer/application/apigee-cassandra/jmxremote.access
5. Run configure on Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra configure
6. Restart Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
7. Repeat this process on all other Cassandra nodes.

Enable JMX password encryption

To enable JMX password encryption, perform the following steps:

1. Open the file source/conf/cassandra-env.sh.
2. Create and edit the cassandra.properties file:
a. Edit the /opt/apigee/customer/application/cassandra.properties file.
If the file does not exist, create it.
b. Add the following to the file:
conf_cassandra_env_com.sun.management.jmxremote.encrypted.authenticate=true
c. Save the cassandra.properties file.
d. Change the owner of the file to apigee:apigee, as the following example
shows:
chown apigee:apigee /opt/apigee/customer/application/cassandra.properties
3. On the command line, generate SHA1 hash(es) of the desired password(s) by
entering:
echo -n 'Secret' | openssl dgst -sha1
4. Set the password(s) against the username in
$APIGEE_ROOT/customer/application/apigee-cassandra/jmxremote.password
(created in the previous section).
5. Run configure on Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra configure
6. Restart Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
7. Repeat this process on all other Cassandra nodes.

Enable JMX with SSL for Cassandra

Enabling JMX with SSL provides additional security and encryption for JMX-based
communication with Cassandra. To enable JMX with SSL, you need to provide a key
and a certificate to Cassandra to accept SSL-based JMX connections. You also need
to configure nodetool (and any other tools that communicate with Cassandra over
JMX) for SSL.

SSL-enabled JMX supports both plaintext and encrypted JMX passwords.

To enable JMX with SSL for Cassandra, use the following procedure:

1. Enable JMX. Enable password encryption if required.
2. Enable JMX authentication for Cassandra, as described above. Ensure nodetool is working with the configured username and password:
/opt/apigee/apigee-cassandra/bin/nodetool -u <JMX_USER> -pw <JMX_PASS> ring
3. Prepare the keystore and truststore:
● The keystore should contain a key and certificate, and is used to configure the Cassandra server. If the keystore contains multiple key pairs, Cassandra uses the first key pair to enable SSL.
Note that the passwords for the keystore and the key should be the same (the default when you generate the key using keytool).
● The truststore should contain the certificate only, and is used by clients (apigee-service based commands or nodetool) to connect over JMX.
4. After verifying the above requirements:
● Place the keystore file in /opt/apigee/customer/application/apigee-cassandra/.
● Ensure the keystore file is readable only by the apigee user by entering:
chown apigee:apigee /opt/apigee/customer/application/apigee-cassandra/keystore.node1
chmod 400 /opt/apigee/customer/application/apigee-cassandra/keystore.node1
5. Configure Cassandra for JMX with SSL by doing the following steps:
● Stop the Cassandra node by entering:
apigee-service apigee-cassandra stop
● Enable SSL in Cassandra by opening the file /opt/apigee/customer/application/cassandra.properties and adding the following lines:

conf_cassandra_env_com.sun.management.jmxremote.ssl=true
conf_cassandra_env_javax.net.ssl.keyStore=/opt/apigee/customer/application/apigee-cassandra/keystore.node1
conf_cassandra_env_javax.net.ssl.keyStorePassword=keystore-password

● Change the owner of the file to apigee:apigee, as the following example shows:
chown apigee:apigee /opt/apigee/customer/application/cassandra.properties
● Run configure on Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra configure
● Start the Cassandra node by entering:
apigee-service apigee-cassandra start
● Repeat this process on all other Cassandra nodes.
6. Configure the apigee-service Cassandra commands. You need to set certain environment variables while running apigee-service commands, including the ones below:
apigee-service apigee-cassandra stop
apigee-service apigee-cassandra wait_for_ready
apigee-service apigee-cassandra ring
apigee-service apigee-cassandra backup
7. There are several options for configuring apigee-service for JMX authentication and SSL. Choose an option based on usability and your security practices:
● Option 1 (SSL arguments stored in a file)
● Option 2 (SSL arguments stored in environment variables)
● Option 3 (SSL arguments passed directly to apigee-service)
Option 1 (SSL arguments stored in a file)

Set the following environment variables:
export CASS_JMX_USERNAME=ADMIN
# Provide the encrypted password here if you have set up JMX password encryption
export CASS_JMX_PASSWORD=PASSWORD
export CASS_JMX_SSL=Y

Create the file $HOME/.cassandra/nodetool-ssl.properties in the apigee user's home directory (/opt/apigee), then edit the file and add the following lines:
-Djavax.net.ssl.trustStore=<path-to-truststore.node1>
-Djavax.net.ssl.trustStorePassword=<truststore-password>
-Dcom.sun.management.jmxremote.registry.ssl=true

Make sure the truststore file is readable by the apigee user. Then run the following apigee-service command. If it runs without error, your configuration is correct:
apigee-service apigee-cassandra ring
Option 2 (SSL arguments stored in environment variables)

Set the following environment variables:
export CASS_JMX_USERNAME=ADMIN
# Provide the encrypted password here if you have set up JMX password encryption
export CASS_JMX_PASSWORD=PASSWORD
export CASS_JMX_SSL=Y
# Ensure the truststore file is accessible by the apigee user.
export CASS_JMX_TRUSTSTORE=<path-to-truststore.node1>
export CASS_JMX_TRUSTSTORE_PASSWORD=<truststore-password>

Run the following apigee-service command. If it runs without error, your configuration is correct:
apigee-service apigee-cassandra ring
Option 3 (SSL arguments passed directly to apigee-service)

Run any apigee-service command like the one below. You don't need to configure any environment variables:
CASS_JMX_USERNAME=ADMIN CASS_JMX_PASSWORD=PASSWORD CASS_JMX_SSL=Y CASS_JMX_TRUSTSTORE=<path-to-truststore.node1> CASS_JMX_TRUSTSTORE_PASSWORD=<truststore-password> /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra ring
Set up nodetool. Nodetool requires JMX parameters to be passed to it. There are two ways you can configure nodetool to run with SSL-enabled JMX, as described in the configuration options below:
● Configuration Option 1
● Configuration Option 2

The options differ in the way SSL-related configurations are passed to nodetool. In both cases, the user running nodetool should have READ permission on the truststore file. Choose an appropriate option based on usability and your security practices. To learn more about nodetool parameters, see the DataStax documentation.

Configuration Option 1

Create the file $HOME/.cassandra/nodetool-ssl.properties in the home directory of the user running nodetool, and add the following lines to it:
-Djavax.net.ssl.trustStore=<path-to-truststore.node1>
-Djavax.net.ssl.trustStorePassword=<truststore-password>
-Dcom.sun.management.jmxremote.registry.ssl=true

The truststore path specified above should be accessible by any user running nodetool. Then run nodetool with the --ssl option:
/opt/apigee/apigee-cassandra/bin/nodetool --ssl -u <jmx-user-name> -pw <jmx-user-password> -h localhost ring

Configuration Option 2

Run nodetool as a single command with the extra parameters listed below:
/opt/apigee/apigee-cassandra/bin/nodetool -Djavax.net.ssl.trustStore=<path-to-truststore.node1> -Djavax.net.ssl.trustStorePassword=<truststore-password> -Dcom.sun.management.jmxremote.registry.ssl=true -Dssl.enable=true -u <jmx-user-name> -pw <jmx-user-password> -h localhost ring
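For repeated use, the Configuration Option 1 invocation can be wrapped in a small function. This is a sketch only; the credential values in the example are placeholders:

```shell
# Wrapper for nodetool with SSL-enabled JMX (relies on the
# $HOME/.cassandra/nodetool-ssl.properties file from Configuration Option 1).
nodetool_ssl() {
  local user=$1 pass=$2; shift 2
  /opt/apigee/apigee-cassandra/bin/nodetool --ssl \
    -u "$user" -pw "$pass" -h localhost "$@"
}
# Example: nodetool_ssl ADMIN PASSWORD ring
```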

Revert SSL configurations

If you need to revert the SSL configurations described in the procedure above, do the
following steps:

1. Stop apigee-cassandra by entering:
apigee-service apigee-cassandra stop
2. Remove the line conf_cassandra-env_com.sun.management.jmxremote.ssl=true from the file /opt/apigee/customer/application/cassandra.properties.
3. Comment out the following lines in /opt/apigee/apigee-cassandra/source/conf/cassandra-env.sh:
# JVM_OPTS="$JVM_OPTS -Djavax.net.ssl.keyStore=/opt/apigee/data/apigee-cassandra/keystore.node0"
# JVM_OPTS="$JVM_OPTS -Djavax.net.ssl.keyStorePassword=keypass"
# JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.registry.ssl=true"
4. Start apigee-cassandra by entering:
apigee-service apigee-cassandra start
5. Remove the environment variable CASS_JMX_SSL if it was set:
unset CASS_JMX_SSL
6. Check that apigee-service based commands like ring, stop, backup, and so on, are working.
7. Stop using the --ssl switch with nodetool.

Disable JMX authentication for Cassandra

To disable JMX authentication for Cassandra:


1. Edit /opt/apigee/customer/application/cassandra.properties.
2. Remove the following line from the file:
conf_cassandra-env_com.sun.management.jmxremote.authenticate=true
3. Run configure on Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra configure
4. Restart Cassandra:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
5. Repeat this process on all other Cassandra nodes.

Use JConsole: Monitor task statistics

Use JConsole and the following service URL to monitor the JMX attributes (MBeans)
offered via JMX:

service:jmx:rmi:///jndi/rmi://IP_address:7199/jmxrmi

Where IP_address is the IP of the Cassandra server.

Cassandra JMX statistics

JMX MBeans:

● ColumnFamilies/apprepo/environments
● ColumnFamilies/apprepo/organizations
● ColumnFamilies/apprepo/apiproxy_revisions
● ColumnFamilies/apprepo/apiproxies
● ColumnFamilies/audit/audits
● ColumnFamilies/audit/audits_ref

JMX Attributes (available on the MBeans above):

● PendingTasks
● MemtableColumnsCount
● MemtableDataSize
● ReadCount
● RecentReadLatencyMicros
● TotalReadLatencyMicros
● WriteCount
● RecentWriteLatencyMicros
● TotalWriteLatencyMicros
● TotalDiskSpaceUsed
● LiveDiskSpaceUsed
● LiveSSTableCount
● BloomFilterFalsePositives
● RecentBloomFilterFalseRatio
● BloomFilterFalseRatio

Use nodetool to manage cluster nodes

The nodetool utility is a command line interface for Cassandra that manages cluster
nodes. The utility can be found at /opt/apigee/apigee-cassandra/bin.

The following calls can be made on all Cassandra cluster nodes:

1. General ring info (also possible for a single Cassandra node). Look for "Up" and "Normal" for all nodes:
nodetool [-u username -pw password] -h localhost ring
You only need to pass your username and password if you enabled JMX authentication for Cassandra. The output of the above command looks as shown below:
Datacenter: dc-1
==========
Address          Rack  Status  State   Load     Owns    Token
192.168.124.201  ra1   Up      Normal  1.67 MB  33,33%  0
192.168.124.202  ra1   Up      Normal  1.68 MB  33,33%  5671...5242
192.168.124.203  ra1   Up      Normal  1.67 MB  33,33%  1134...0484
2. General info about nodes (call per node):
nodetool [-u username -pw password] -h localhost info
The output of the above command looks like the following:
ID                     : e2e42793-4242-4e82-bcf0-oicu812
Gossip active          : true
Thrift active          : true
Native Transport active: true
Load                   : 273.71 KB
Generation No          : 1234567890
Uptime (seconds)       : 687194
Heap Memory (MB)       : 314.62 / 3680.00
Off Heap Memory (MB)   : 0.14
Data Center            : dc-1
Rack                   : ra-1
Exceptions             : 0
Key Cache              : entries 150, size 13.52 KB, capacity 100 MB, 1520781 hits, 1520923 requests, 1.000 recent hit rate, 14400 save period in seconds
Row Cache              : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds
Counter Cache          : entries 0, size 0 bytes, capacity 50 MB, 0 hits, 0 requests, NaN recent hit rate, 7200 save period in seconds
Token                  : 0
3. Status of the Thrift server (serving the client API):
nodetool [-u username -pw password] -h localhost statusthrift
The output of the above command looks like the following:
running
4. Status of data streaming operations (observe traffic for Cassandra nodes):
nodetool [-u username -pw password] -h localhost netstats
The output of the above command looks like the following:
Mode: NORMAL
Not sending any streams.
Read Repair Statistics:
Attempted: 151612
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name   Active  Pending  Completed  Dropped
Commands    n/a     0        0          0
Responses   n/a     0        0          n/a

For more info on nodetool, see About the nodetool utility.

Cassandra resource

Refer to the following URL:

http://www.datastax.com/docs/1.0/operations/monitoring

Apache ZooKeeper
Check ZooKeeper status
1. Ensure the ZooKeeper process is running. ZooKeeper writes a PID file to
/opt/apigee/var/run/apigee-zookeeper/apigee-zookeeper.pid.
2. Test the ZooKeeper ports to ensure that you can establish a TCP connection to ports 2181 and 3888 on every ZooKeeper server.
3. Ensure that you can read values from the ZooKeeper database. Connect using a ZooKeeper client library (or /opt/apigee/apigee-zookeeper/bin/zkCli.sh) and read a value from the database.
4. Check the status:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper status
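The TCP check in step 2 can be scripted without any ZooKeeper client. This is a sketch using bash's built-in /dev/tcp; localhost is an example host:

```shell
# Probe the ZooKeeper client (2181) and election (3888) ports and report
# whether each accepts a TCP connection.
for port in 2181 3888; do
  if timeout 3 bash -c "exec 3<>/dev/tcp/localhost/$port" 2>/dev/null; then
    echo "port $port: reachable"
  else
    echo "port $port: NOT reachable"
  fi
done
```

Run the same loop against every ZooKeeper server in the ensemble.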

Use ZooKeeper four-letter words

ZooKeeper can be monitored via a small set of commands (four-letter words) that
are sent to the port 2181 using netcat (nc) or telnet.
For more info on ZooKeeper commands, see: Apache ZooKeeper command
reference.

For example:

● srvr: Lists full details for the server.


● stat: Lists brief details for the server and connected clients.

The following commands can be issued to the ZooKeeper port:

1. Run the four-letter command ruok to test if the server is running in a non-error state. A successful response returns "imok":
echo ruok | nc host 2181
Returns:
imok
2. Run the four-letter command stat to list server performance and connected-client statistics:
echo stat | nc host 2181
Returns:
Zookeeper version: 3.4.5-1392090, built on 09/30/2012 17:52 GMT
Clients:
/0:0:0:0:0:0:0:1:33467[0](queued=0,recved=1,sent=0)
/192.168.124.201:42388[1](queued=0,recved=8433,sent=8433)
/192.168.124.202:42185[1](queued=0,recved=1339,sent=1347)
/192.168.124.204:39296[1](queued=0,recved=7688,sent=7692)
Latency min/avg/max: 0/0/128
Received: 26144
Sent: 26160
Connections: 4
Outstanding: 0
Zxid: 0x2000002c2
Mode: follower
Node count: 283
Note: It is sometimes important to see whether a ZooKeeper node is in Mode: leader, follower, or observer.
3. If netcat (nc) is not available, you can use Python as an alternative. Create a file named zookeeper.py that contains the following (the script uses Python 2 syntax):
import time, socket, sys
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect((sys.argv[1], 2181))
c.send(sys.argv[2])
time.sleep(0.1)
print c.recv(512)
4. Now run the following Python lines:
python zookeeper.py 192.168.124.201 ruok
python zookeeper.py 192.168.124.201 stat
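If neither nc nor Python is available, bash's built-in /dev/tcp can serve the same purpose. This is a sketch; the host is passed as an argument:

```shell
# Send a ZooKeeper four-letter word over a raw TCP connection to port 2181
# and print the server's response.
zk4lw() {
  local host=$1 word=$2
  exec 3<>"/dev/tcp/$host/2181" || return 1
  printf '%s' "$word" >&3     # send the four-letter command
  cat <&3                     # print the server's response
  exec 3<&- 3>&-              # close the connection
}
# Example: zk4lw 192.168.124.201 ruok
```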


LDAP level test
You can monitor OpenLDAP to see whether the specific requests are served
properly. In other words, check for a specific search that returns the right result.

1. Use ldapsearch (yum install openldap-clients) to query the entry of the system admin. This entry is used to authenticate all API calls:
ldapsearch -b "uid=admin,ou=users,ou=global,dc=apigee,dc=com" -x -W -D "cn=manager,dc=apigee,dc=com" -H ldap://localhost:10389 -LLL
2. You are then prompted for the LDAP admin password:
Enter LDAP Password:
3. After entering the password, you see a response in the form:
dn: uid=admin,ou=users,ou=global,dc=apigee,dc=com
objectClass: organizationalPerson
objectClass: person
objectClass: inetOrgPerson
objectClass: top
uid: admin
cn: admin
sn: admin
userPassword:: e1NTSEF9bS9xbS9RbVNXSFFtUWVsU1F0c3BGL3BQMkhObFp2eDFKUytmZVE9PQ==
mail: opdk@google.com
4. Check whether the Management Server is still connected to LDAP with the following command:
curl -u userEMail:password http://localhost:8080/v1/users/ADMIN
Returns:
{
  "emailId" : "ADMIN",
  "firstName" : "admin",
  "lastName" : "admin"
}

You can also monitor the OpenLDAP caches, which help reduce the number of disk accesses and hence improve the performance of the system. Monitoring and then tuning the cache size in the OpenLDAP server can heavily impact the performance of the directory server. You can view the log files (/opt/apigee/var/log) to obtain information about the cache.

Self healing with apigee-monit


Apigee Edge for Private Cloud includes apigee-monit, a tool based on the open
source monit utility. apigee-monit periodically polls Edge services; if a service is
unavailable, then apigee-monit attempts to restart it.

NOTE: apigee-monit is not supported on all platforms. Before continuing, check the list of
compatible platforms.

To use apigee-monit, you must install it manually. It is not part of the standard
installation.

WARNING

apigee-monit attempts to restart a component every time the component fails a check. A
component fails a check when it is disabled, but also when you purposefully stop it, such as
during an upgrade, installation, or any task that requires you to stop a component.

As a result, you must stop monitoring the component before executing any procedure that
requires you to install, upgrade, stop, or restart a component.

By default, apigee-monit checks the status of Edge services every 60 seconds.

Quick start
This section shows you how to quickly get up and running with apigee-monit.

If you are using Amazon Linux, first install the monit package from the Fedora project repository by entering the command below. Otherwise, skip this step.

sudo yum install -y https://kojipkgs.fedoraproject.org/packages/monit/5.25.1/1.el6/x86_64/monit-5.25.1-1.el6.x86_64.rpm

To install apigee-monit, do the following steps:

Install apigee-monit

/opt/apigee/apigee-service/bin/apigee-service apigee-monit install
/opt/apigee/apigee-service/bin/apigee-service apigee-monit configure
/opt/apigee/apigee-service/bin/apigee-service apigee-monit start

This installs apigee-monit and starts monitoring all components on the node by default.

Stop monitoring components

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c component_name
/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c all

Start monitoring components

/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c component_name
/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c all

Get summary status information

/opt/apigee/apigee-service/bin/apigee-service apigee-monit report
/opt/apigee/apigee-service/bin/apigee-service apigee-monit summary

Look at the apigee-monit log files

cat /opt/apigee/var/log/apigee-monit/apigee-monit.log

Each of these topics and others are described in detail in the sections that follow.

About apigee-monit
apigee-monit helps ensure that all components on a node stay up and running. It
does this by providing a variety of services, including:

● Restarting failed services


● Displaying summary information
● Logging monitoring status
● Sending notifications
● Monitoring non-Edge services

Apigee recommends that you monitor apigee-monit to ensure that it is running.


For more information, see Monitor apigee-monit.

apigee-monit architecture

During your Apigee Edge for Private Cloud installation and configuration, you
optionally install a separate instance of apigee-monit on each node in your
cluster. These separate apigee-monit instances operate independently of one
another: they do not communicate the status of their components to the other
nodes, nor do they communicate failures of the monitoring utility itself to any central
service.

The following image shows the apigee-monit architecture in a 5-node cluster:

Figure 1: A separate instance of apigee-monit runs in isolation on each node in a cluster

Supported platforms

apigee-monit supports the following platforms for your Private Cloud cluster. (The
supported OS for apigee-monit depends on the version of Private Cloud.)

Operating System                 v4.19.01                  v4.19.06       v4.50.00

CentOS                           6.9*, 7.4, 7.5, 7.6, 7.7  7.5, 7.6, 7.7  7.5, 7.6, 7.7, 7.8, 7.9
RedHat Enterprise Linux (RHEL)   6.9*, 7.4, 7.5, 7.6, 7.7  7.5, 7.6, 7.7  7.5, 7.6, 7.7, 7.8, 7.9
Oracle Linux                     6.9*, 7.4, 7.5, 7.6, 7.7  7.5, 7.6, 7.7  7.5, 7.6, 7.7, 7.8

* While not technically supported, you can install and use apigee-monit on CentOS/RHEL/Oracle Linux version 6.9 for Apigee Edge for Private Cloud version 4.19.01.

Component configurations

apigee-monit uses component configurations to determine which components to


monitor, which aspects of the component to check, and what action to take in the
event of a failure.

TERMINOLOGY

A component configuration specifies the service to check and the action to take in the event of a
failure. Component configurations can also define how often to check a service and the number
of times to retry a service before triggering a failure response.

By default, apigee-monit monitors all Edge components on a node using their pre-
defined component configurations. To view the default settings, you can look at the
apigee-monit component configuration files. You cannot change the default
component configurations.

apigee-monit checks different aspects of a component, depending on which component it is checking. The following lists show what apigee-monit checks for each component and where the component configuration is located. Note that some components are defined in a single configuration file, while others have their own configurations.

Components with their own configuration file:

● Management Server: /opt/apigee/edge-management-server/monit/default.conf
● Message Processor: /opt/apigee/edge-message-processor/monit/default.conf
● Postgres Server: /opt/apigee/edge-postgres-server/monit/default.conf
● Qpid Server: /opt/apigee/edge-qpid-server/monit/default.conf
● Router: /opt/apigee/edge-router/monit/default.conf

For these components, apigee-monit checks:

● Specified port(s) are open and accepting requests
● Specified protocol(s) are supported
● Status of the response

In addition, for these components apigee-monit:

● Requires multiple failures within a given number of cycles before taking action
● Sets a custom request path

Components defined in the single configuration file /opt/apigee/data/apigee-monit/monit.conf:

● Cassandra
● Edge UI
● OpenLDAP
● Postgres
● Qpid
● ZooKeeper

For these components, apigee-monit checks that the service is running.

The following example shows the default component configuration for the edge-
router component:

check host edge-router with address localhost


restart program = "/opt/apigee/apigee-service/bin/apigee-service edge-router
monitrestart"
if failed host 10.1.1.0 port 8081 and protocol http
and request "/v1/servers/self/uuid"
with timeout 15 seconds
for 2 times within 3 cycles
then restart

if failed port 15999 and protocol http


and request "/v1/servers/self"
and status < 600
with timeout 15 seconds
for 2 times within 3 cycles
then restart

The following example shows the default configuration for the Classic UI (edge-ui)
component:

check process edge-ui


with pidfile /opt/apigee/var/run/edge-ui/edge-ui.pid
start program = "/opt/apigee/apigee-service/bin/apigee-service edge-ui start" with
timeout 55 seconds
stop program = "/opt/apigee/apigee-service/bin/apigee-service edge-ui stop"

This applies to the Classic UI, not the new Edge UI whose component name is edge-
management-ui.

You cannot change the default component configurations for any Apigee Edge for
Private Cloud component. You can, however, add your own component
configurations for external services, such as your target endpoint or the httpd
service. For more information, see Non-Apigee component configurations.
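For illustration only, a configuration for an external httpd service could follow the same monit syntax shown in the edge-router example above. Every name, port, and path here is an assumption, not a value from your installation:

```
check host my-httpd with address localhost
restart program = "/usr/bin/systemctl restart httpd"
if failed port 80 and protocol http
and request "/status"
with timeout 10 seconds
for 2 times within 3 cycles
then restart
```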

By default, apigee-monit monitors all components on a node on which it is


running. You can enable or disable it for all components or for individual
components. For more information, see:

● Stop and start monitoring a component


● Stop, start, and disable apigee-monit

Install apigee-monit
apigee-monit is not installed by default; you can install it manually after upgrading
or installing version 4.19.01 or later of Apigee Edge for Private Cloud.

This section describes how to install apigee-monit on the supported platforms.

For information on uninstalling apigee-monit, see Uninstall apigee-monit.

Install apigee-monit on a supported platform

This section describes how to install apigee-monit on a supported platform.

To install apigee-monit on a supported platform:

1. Install apigee-monit with the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-monit install
2. Configure apigee-monit with the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-monit configure
3. Start apigee-monit with the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-monit start
4. Repeat this procedure on each node in your cluster.

Stop and start monitoring components


When a service stops for any reason, apigee-monit attempts to restart the
service.

This can cause a problem if you want to purposefully stop a component. For
example, you might want to stop a component when you need to back it up or
upgrade it. If apigee-monit restarts the service during the backup or upgrade, your
maintenance procedure can be disrupted, possibly causing it to fail.

TIP

To prevent apigee-monit from causing failures during a planned outage, stop monitoring the
component with the stop-component option when you perform the following operations:

● Stop a component
● Install or upgrade a component
● Restart a component
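The tip above can be wrapped in a small helper so monitoring is always resumed after maintenance. This is a sketch; the component name and maintenance command in the example are placeholders:

```shell
# Run a maintenance task with apigee-monit monitoring suspended for one
# component, then resume monitoring. Assumes the standard apigee-service
# path used throughout this guide.
with_unmonitored() {
  local component=$1; shift
  local svc=/opt/apigee/apigee-service/bin/apigee-service
  "$svc" apigee-monit stop-component -c "$component"   # stop and unmonitor
  "$@"                                                 # run the maintenance task
  "$svc" apigee-monit start-component -c "$component"  # restart and re-monitor
}
# Example: with_unmonitored edge-router /opt/apigee/my-backup-task.sh
```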

The following sections show the options for stopping the monitoring of components.

Stop a component and unmonitor it

To stop a component and unmonitor it, execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit stop-component -c component_name

component_name can be one of the following:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
Note that "all" is not a valid option for stop-component. You can stop and
unmonitor only one component at a time with stop-component.

To re-start the component and resume monitoring, execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit start-component -c component_name

Note that "all" is not a valid option for start-component.

For instructions on how to stop and unmonitor all components, see Stop all
components and unmonitor them.

Unmonitor a component (but don't stop it)

To unmonitor a component (but not stop it), execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c component_name

component_name can be one of the following:

● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)

To resume monitoring the component, execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c component_name

Unmonitor all components (but don't stop them)

To unmonitor all components (but don't stop them), execute the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c all

To resume monitoring all components, execute the following command:


/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c all

Stop all components and unmonitor them

To stop all components and unmonitor them, execute the following commands:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c all


/opt/apigee/apigee-service/bin/apigee-all stop

To re-start all components and resume monitoring, execute the following


commands:

/opt/apigee/apigee-service/bin/apigee-all start
/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c all

To stop monitoring all components, you can also disable apigee-monit, as


described in Stop, start, and disable apigee-monit.

Stop, start, and disable apigee-monit


As with any service, you can stop and start apigee-monit using the apigee-
service command. In addition, apigee-monit supports the unmonitor
command, which lets you temporarily stop monitoring components.

Stop apigee-monit
NOTE: If you use cron to watch apigee-monit, you must disable it before stopping apigee-monit, as described in Monitor apigee-monit.

To stop apigee-monit, use the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit stop

Start apigee-monit

To start apigee-monit, use the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit start

Disable apigee-monit

You can suspend monitoring all components on the node by using the following
command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c all

Alternatively, you can permanently disable apigee-monit by uninstalling it from the


node, as described in Uninstall apigee-monit.
Uninstall apigee-monit

To uninstall apigee-monit:

1. If you set up a cron job to monitor apigee-monit, remove the cron job before uninstalling apigee-monit:
sudo rm /etc/cron.d/apigee-monit.cron
2. Stop apigee-monit with the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-monit stop
3. Uninstall apigee-monit with the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-monit uninstall
4. Repeat this procedure on each node in your cluster.

Monitor a newly installed component


If you install a new component on a node that is running apigee-monit, you can
begin monitoring it by executing apigee-monit's restart command. This
generates a new monit.conf file that will include the new component in its
component configurations.

The following example restarts apigee-monit:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit restart

Customize apigee-monit
You can customize various apigee-monit settings, including:

1. Default apigee-monit control settings


2. Global configuration settings
3. Non-Apigee component configurations

Default apigee-monit control settings

You can customize the default apigee-monit control settings such as the
frequency of status checks and the locations of the apigee-monit files. You do
this by editing a properties file using the code with config technique. Properties files
will persist even after you upgrade Apigee Edge for Private Cloud.

The following table describes the default apigee-monit control settings that you
can customize:

Property                      Description

conf_monit_httpd_port         The httpd daemon's port. apigee-monit uses httpd for its dashboard app and to enable reports/summaries. The default value is 2812.

conf_monit_httpd_allow        Constraints on requests to the httpd daemon. apigee-monit uses httpd to run its dashboard app and enable reports/summaries. This value must point to the localhost (the host that httpd is running on).
                              To require that requests include a username and password, use the following syntax:
                              conf_monit_httpd_allow=allow username:"password"\nallow 127.0.0.1
                              When adding a username and password, insert a "\n" between each constraint. Do not insert actual newlines or carriage returns in the value.

conf_monit_monit_datadir      The directory in which event details are stored.

conf_monit_monit_delay_time   The amount of time that apigee-monit waits after it is first loaded into memory before it runs. This affects only the first process check.

conf_monit_monit_logdir       The location of the apigee-monit log file.

conf_monit_monit_retry_time   The frequency at which apigee-monit attempts to check each process; the default is 60 seconds.

conf_monit_monit_rundir       The location of the PID and state files, which apigee-monit uses for checking processes.

To customize the default apigee-monit control settings:

1. Edit the following file:
/opt/apigee/customer/application/monit.properties
If the file does not exist, create it and set the owner to the "apigee" user:
chown apigee:apigee /opt/apigee/customer/application/monit.properties
Note that if the file already exists, there may be additional configuration properties defined in it beyond what is listed in the table above. You should not modify properties other than those listed above.
2. Set or replace property values with your new values.
For example, to change the location of the log file to /tmp, add or edit the following property:
conf_monit_monit_logdir=/tmp/apigee-monit.log
3. Save your changes to the monit.properties file.
4. Re-configure apigee-monit with the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-monit configure
5. Reload apigee-monit with the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-monit reload
If you cannot restart apigee-monit, check the log file for errors as described in Access apigee-monit log files.
6. Repeat this procedure for each node in your cluster.
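The property edit in step 2 can be scripted. The helper below takes the properties path as an argument so the pattern can be tried outside an Edge node; the value 120 in the example is an assumption:

```shell
# Append a key=value override to a monit-style properties file. On an Edge
# node the path would be /opt/apigee/customer/application/monit.properties.
set_monit_property() {
  local props=$1 key=$2 value=$3
  printf '%s=%s\n' "$key" "$value" >> "$props"
}
# Example: set_monit_property /opt/apigee/customer/application/monit.properties \
#          conf_monit_monit_retry_time 120
```

After writing the property, run configure and reload as in steps 4 and 5.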

Global configuration settings

You can define global configuration settings for apigee-monit; for example, you
can add email notifications for alerts. You do this by creating a configuration file in
the /opt/apigee/data/apigee-monit directory and then restarting apigee-
monit.

To define global configuration settings for apigee-monit:

1. Create a new component configuration file in the following location:
   /opt/apigee/data/apigee-monit/filename.conf
   Where filename can be any valid file name, except "monit".
2. Change the owner of the new configuration file to the "apigee" user, as the
   following example shows:
   chown apigee:apigee /opt/apigee/data/apigee-monit/my-mail-config.conf
3. Add your global configuration settings to the new file. The following example
   configures a mail server and sets the alert recipients:
   SET MAILSERVER smtp.gmail.com PORT 465
       USERNAME "example-admin@gmail.com" PASSWORD "PASSWORD"
       USING SSL, WITH TIMEOUT 15 SECONDS

   SET MAIL-FORMAT {
     from: edge-alerts@example.com
     subject: Monit Alert -- Service: $SERVICE $EVENT on $HOST
   }
   SET ALERT fred@example.com
   SET ALERT nancy@example.com
   For a complete list of global configuration options, see the monit
   documentation.
4. Save your changes to the component configuration file.
5. Reload apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit reload
   If apigee-monit does not restart, check the log file for errors as described
   in Access apigee-monit log files.
6. Repeat this procedure for each node in your cluster.
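As a sketch of the file-creation steps above, the following stages the mail-alert configuration in a scratch directory; on a real node you would write it to /opt/apigee/data/apigee-monit/, chown it to apigee:apigee, and reload apigee-monit. The mail server and addresses are the illustrative values from the example above, not defaults.

```shell
# Stage a global mail-alert configuration for apigee-monit.
workdir=$(mktemp -d)
cat > "$workdir/my-mail-config.conf" <<'EOF'
SET MAILSERVER smtp.gmail.com PORT 465
    USERNAME "example-admin@gmail.com" PASSWORD "PASSWORD"
    USING SSL, WITH TIMEOUT 15 SECONDS

SET MAIL-FORMAT {
  from: edge-alerts@example.com
  subject: Monit Alert -- Service: $SERVICE $EVENT on $HOST
}
SET ALERT fred@example.com
SET ALERT nancy@example.com
EOF

# Show the staged file; on a real node, follow with:
#   /opt/apigee/apigee-service/bin/apigee-service apigee-monit reload
cat "$workdir/my-mail-config.conf"
```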

Non-Apigee component configurations

You can add your own configurations to apigee-monit so that it will check
services that are not part of Apigee Edge for Private Cloud. For example, you can use
apigee-monit to check that your APIs are running by sending requests to your
target endpoint.

To add a non-Apigee component configuration:

1. Create a new component configuration file in the following location:
   /opt/apigee/data/apigee-monit/filename.conf
   Where filename can be any valid file name, except "monit".
   You can create as many component configuration files as necessary. For
   example, you can create a separate configuration file for each non-Apigee
   component that you want to monitor on the node.
2. Change the owner of the new configuration file to the "apigee" user, as the
   following example shows:
   chown apigee:apigee /opt/apigee/data/apigee-monit/my-config.conf
3. Add your custom configurations to the new file. The following example checks
   the target endpoint on the local server:
   CHECK HOST localhost_validate_test WITH ADDRESS localhost
       IF FAILED
           PORT 15999
           PROTOCOL http
           REQUEST "/validate__test"
           CONTENT = "Server Ready"
       FOR 2 times WITHIN 3 cycles
       THEN alert
   For a complete list of possible configuration settings, see the monit
   documentation.
4. Save your changes to the configuration file.
5. Reload apigee-monit with the following command:
   /opt/apigee/apigee-service/bin/apigee-service apigee-monit reload
   If apigee-monit does not restart, check the log file for errors as described
   in Access apigee-monit log files.
6. Repeat this procedure for each node in your cluster.
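The same staging approach works for a custom health check. The sketch below writes the example CHECK HOST stanza from above to a scratch directory; on a real node you would place it in /opt/apigee/data/apigee-monit/, chown it to apigee:apigee, and reload apigee-monit.

```shell
# Stage a non-Apigee health-check configuration for apigee-monit.
workdir=$(mktemp -d)
cat > "$workdir/my-config.conf" <<'EOF'
CHECK HOST localhost_validate_test WITH ADDRESS localhost
    IF FAILED
        PORT 15999
        PROTOCOL http
        REQUEST "/validate__test"
        CONTENT = "Server Ready"
    FOR 2 times WITHIN 3 cycles
    THEN alert
EOF

# Show the staged file; on a real node, follow with:
#   /opt/apigee/apigee-service/bin/apigee-service apigee-monit reload
cat "$workdir/my-config.conf"
```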

Note that this is for non-Edge components only. You cannot customize the
component configurations for Edge components.

Access apigee-monit log files

apigee-monit logs all activity, including events, restarts, configuration changes, and alerts, to a log file.

The default location of the log file is:

/opt/apigee/var/log/apigee-monit/apigee-monit.log

You can change the default location by customizing the apigee-monit control
settings.

Log file entries have the following form:

[UTC Dec 14 16:20:42] info : 'edge-message-processor' trying to restart
'edge-message-processor' restart: '/opt/apigee/apigee-service/bin/apigee-service
edge-message-processor monitrestart'

You cannot customize the format of the apigee-monit log file entries.

View aggregated status with apigee-monit

apigee-monit includes the following commands that give you aggregated status
information about the components on a node:

Command Usage

report /opt/apigee/apigee-service/bin/apigee-service apigee-monit report

summary /opt/apigee/apigee-service/bin/apigee-service apigee-monit summary

Each of these commands is explained in more detail in the sections that follow.

report

The report command gives you a rolled-up summary of how many components are
up, down, currently being initialized, or are currently unmonitored on a node. The
following example invokes the report command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit report

The following example shows report output on an AIO (all-in-one) configuration:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit report

up: 11 (100.0%)
down: 0 (0.0%)
initialising: 0 (0.0%)
unmonitored: 1 (8.3%)
total: 12 services

In this example, 11 of the 12 services are reported by apigee-monit as being up.
One service is not currently being monitored.

You may get a Connection refused error when you first execute the report
command. In this case, wait for the duration of the
conf_monit_monit_delay_time property, and then try again.

summary

The summary command lists each component and provides its status. The following
example invokes the summary command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit summary

The following example shows summary output on an AIO (all-in-one) configuration:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit summary

Monit 5.25.1 uptime: 4h 20m
Service Name Status Type
host_name OK System
apigee-zookeeper OK Process
apigee-cassandra OK Process
apigee-openldap OK Process
apigee-qpidd OK Process
apigee-postgresql OK Process
edge-ui OK Process
edge-qpid-server OK Remote Host
edge-postgres-server OK Remote Host
edge-management-server OK Remote Host
edge-router OK Remote Host
edge-message-processor OK Remote Host

If you get a Connection refused error when you first execute the summary
command, try waiting the duration of the conf_monit_monit_delay_time
property, and then try again.
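Because the summary output is line-oriented, you can post-process it with standard text tools, for example to count services that are not "OK". The sketch below runs against an embedded sample; the service names and the "Failed" status are illustrative, and on a real node you would capture the actual command output (which also includes the Monit version line, so adjust the header skip accordingly).

```shell
# Count services whose status column is not "OK" in saved summary output.
# In practice, capture the real output first, e.g.:
#   summary=$(/opt/apigee/apigee-service/bin/apigee-service apigee-monit summary)
summary='Service Name Status Type
host_name OK System
apigee-zookeeper OK Process
apigee-cassandra Failed Process'

# Skip the header line, then count rows whose second field is not OK.
printf '%s\n' "$summary" | awk 'NR > 1 && $2 != "OK" { bad++ } END { print bad+0 }'
```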

Monitor apigee-monit

It is best practice to regularly check that apigee-monit is running on each node.
To check that apigee-monit is running, use the following command:

/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor_monit

Apigee recommends that you issue this command periodically on each node that is
running apigee-monit. One way to do this is with a utility such as cron that
executes scheduled tasks at pre-defined intervals.

To use cron to monitor apigee-monit:

1. Add cron support by copying the apigee-monit.cron file to the /etc/cron.d
   directory, as the following example shows:
   cp /opt/apigee/apigee-monit/cron/apigee-monit.cron /etc/cron.d/
2. Open the apigee-monit.cron file to edit it.
   The apigee-monit.cron file defines the cron job to execute as well as the
   frequency at which to execute that job. The following example shows the
   default values:
   # Cron entry to check if monit process is running. If not start it
   */2 * * * * root /opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor_monit
3. This file uses the following syntax, in which the first five fields define the
   time at which apigee-monit executes its action:
   min hour day_of_month month day_of_week task_to_execute
   For example, the default execution time is */2 * * * *, which instructs
   cron to check the apigee-monit process every 2 minutes.
   You cannot execute a cron job more frequently than once per minute.
   For more information on using cron, see your server OS's documentation or
   man pages.
4. Change the cron settings to match your organization's policies. For example,
   to change the execution frequency to every 5 minutes, set the job definition to
   the following:
   */5 * * * * root /opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor_monit
5. Save the apigee-monit.cron file.
6. Repeat this procedure for each node in your cluster.
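The edit described above can also be staged in a script. The sketch below writes a cron entry with a 5-minute frequency to a scratch file; on a real node you would copy the result to /etc/cron.d/apigee-monit.cron. Note the trailing blank line, which cron requires (see the checklist below).

```shell
# Stage a cron entry that checks apigee-monit every 5 minutes.
cronfile=$(mktemp)
cat > "$cronfile" <<'EOF'
# Cron entry to check if monit process is running. If not start it
*/5 * * * * root /opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor_monit

EOF

# Show the staged entry; on a real node:  cp "$cronfile" /etc/cron.d/apigee-monit.cron
cat "$cronfile"
```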

If cron does not begin watching apigee-monit, check that:

● There is a blank line after the cron job definition.
● There is only one cron job defined in the file. (Commented lines do not
  count.)

If you want to stop or temporarily disable apigee-monit, you must disable this
cron job, too, otherwise cron will restart apigee-monit.

To disable cron, do one of the following:


● Delete the /etc/cron.d/apigee-monit.cron file:
● sudo rm /etc/cron.d/apigee-monit.cron
● You will have to re-copy it if you later want to re-enable cron to watch
apigee-monit.
OR
● Edit the /etc/cron.d/apigee-monit.cron file and comment out the job
definition by adding a "#" to the beginning of the line; for example:
● # 10 * * * * root /opt/apigee/apigee-service/bin/apigee-service apigee-monit
monitor_monit

Monitoring best practices


Monitoring alerts

Apigee Edge allows you to forward alerts to syslog or to external monitoring
systems and tools when an error or failure event occurs. These alerts can be
system-level or application-level alerts/events. Application-level alerts are
mostly custom alerts that are created based on generated events. The network
administrator usually configures the custom conditions. For more information on
alerts, contact Apigee Support.

Setting alert thresholds

Set the threshold after which an alert should be generated. The appropriate
values depend on your hardware configuration, so set thresholds in relation to
your capacity; for example, a threshold suited to a large installation might be
too low if your system has only 6GB of capacity. You can assign a threshold with
an equal-to (=) or greater-than (>) criterion. You can also specify a time
interval between two consecutive alert generations, using hours, minutes, or
seconds.

Criteria for Setting System-level Alerts

The following table describes the criteria:

Alert             Suggested Threshold     Description

Low memory        500MB                   Memory is too low to start a component.

Low disk space    8GB                     Disk space has fallen too low.
(/var/log)

High load         3+                      Processes waiting to run have increased
                                          unexpectedly.

Process stopped   N/A, a Boolean value    An Apigee Java process in the system
                  of true or false        has stopped.

Note: For disk space, monitor all mounted partitions.

Checking on Apigee-specific and Third-party Ports

Monitor the following Apigee-specific ports to make sure they're active:

● Ports 4526, 4527, and 4528 on the Management Server, Router, and Message
  Processor
● Ports 1099, 1100, and 1101 on the Management Server, Router, and Message
  Processor
● Ports 8081 and 15999 on Routers
● Ports 8082 and 8998 on Message Processors
● Port 8080 on the Management Server

Check the following third-party ports to make sure they’re active:

● Qpid port 5672
● Postgres port 5432
● Cassandra ports 7000, 7199, 9042, and 9160
● ZooKeeper port 2181
● OpenLDAP port 10389

In order to determine which port each Apigee component is listening for API calls on,
issue the following API calls to the Management Server (which is generally on port
8080):

curl -v -u username:password "http://host:port/v1/servers?pod=gateway&region=dc-1"

curl -v -u username:password "http://host:port/v1/servers?pod=central&region=dc-1"

curl -v -u username:password "http://host:port/v1/servers?pod=analytics&region=dc-1"


The output of these commands will contain sections similar to that shown below.
The http.management.port section gives the port number for the specified
component.

"externalHostName" : "localhost",

"externalIP" : "111.222.333.444",

"internalHostName" : "localhost",

"internalIP" : "111.222.333.444",

"isUp" : true,

"pod" : "gateway",

"reachable" : true,

"region" : "default",

"tags" : {

"property" : [ {

"name" : "Profile",

"value" : "Router"

}, {

"name" : "rpc.port",

"value" : "4527"

}, {

"name" : "http.management.port",

"value" : "8081"

}, {

"name" : "jmx.rmi.port",

"value" : "1100"

}]

},

"type" : [ "router" ],
"uUID" : "2d4ec885-e20a-4173-ae87-10be38b35750"

Viewing Logs

Log files keep track of messages regarding the events and operation of the
system. Messages appear in the log when processes begin and complete, or when an
error condition occurs. By viewing log files, you can obtain information about
system components (for example, CPU, memory, disk, load, processes, and so on)
before and after reaching a failed state. This helps you identify and diagnose
the source of current system problems, and predict potential ones.

For example, a typical system log of a component contains entries such as the
following:

TimeStamp = 25/01/13 19:25 ; NextDelay = 30

Memory

HeapMemoryUsage = {used = 29086176}{max = 64880640} ;

NonHeapMemoryUsage = {init = 24313856}{committed = 57278464} ;

Threading

PeakThreadCount = 53 ; ThreadCount = 53 ;

OperatingSystem

SystemLoadAverage = 0.25 ;

You can edit the /opt/apigee/conf/logback.xml file to control the logging
mechanism without having to restart a server. The logback.xml file contains the
following property, which sets the frequency at which the logging mechanism
checks the logback.xml file for configuration changes:

<configuration scan="true" scanPeriod="30 seconds" >

By default, the logging mechanism checks for changes every minute. If you omit
the time units from the scanPeriod value, it defaults to milliseconds.

Note: Setting the scanPeriod attribute to a short interval might affect system performance.
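If you prefer to script the scanPeriod change rather than edit the file by hand, a sed substitution works. The sketch below runs against a staged one-line copy of the element; on a real node you would point the sed command at /opt/apigee/conf/logback.xml, and the "5 minutes" value is only an example.

```shell
# Stage a copy of the <configuration> element for illustration.
conf=$(mktemp)
echo '<configuration scan="true" scanPeriod="30 seconds" >' > "$conf"

# Lengthen the scan period to reduce the overhead of configuration checks.
sed -i 's/scanPeriod="[^"]*"/scanPeriod="5 minutes"/' "$conf"
cat "$conf"
```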

The following table lists the log file locations of the Apigee Edge for Private
Cloud components.

Component                Location

Management Server        /opt/apigee/var/log/edge-management-server

Router                   /opt/apigee/var/log/edge-router

Message Processor        /opt/apigee/var/log/edge-message-processor

Qpid Server              /opt/apigee/var/log/edge-qpid-server

Apigee Postgres Server   /opt/apigee/var/log/edge-postgres-server

Edge UI                  /opt/apigee/var/log/edge-ui

ZooKeeper                /opt/apigee/var/log/apigee-zookeeper

OpenLDAP                 /opt/apigee/var/log/apigee-openldap

Cassandra                /opt/apigee/var/log/apigee-cassandra

Qpidd                    /opt/apigee/var/log/apigee-qpidd

PostgreSQL database      /opt/apigee/var/log/apigee-postgresql

Enabling debug logs for the Message Processor and Edge UI

To enable debug logs for the Message Processor:

1. On the Message Processor node, edit
   /opt/apigee/customer/application/message-processor.properties. If that file
   does not exist, create it.
2. Add the following property to the file:
   conf_system_log.level=DEBUG
3. Restart the Message Processor:
   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

To enable debug logs for the Edge UI:

1. On the Edge UI node, edit /opt/apigee/customer/application/ui.properties.
   If that file does not exist, create it.
2. Add the following property to the file:
   conf_application_logger.application=DEBUG
3. Restart the Edge UI:
   /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

apigee-monit best practices

When using apigee-monit, Apigee recommends that you:

● Stop monitoring a component before you perform any operation that starts or
  stops it, such as a backup or an upgrade.
● Monitor apigee-monit by using a tool such as cron. For more information,
see Monitor apigee-monit.

Monitoring Tools

Monitoring tools such as Nagios, Collectd, Graphite, Splunk, Sumologic, and Monit
can help you monitor your entire enterprise environment and business processes.
PORTAL
Portal overview
Support for Apigee modules for Drupal 7-based developer portals will end on January 2, 2023.
While Drupal 7-based portals will continue to work, Apigee will not issue security patches for its
Drupal 7 modules or send PHP or Drupal 7 updates to OPDK customers after January 2, 2023.

Because Apigee Drupal 7 modules depend on PHP 7.4, you may also be
affected by the impending end of security patches for PHP 7.4 at the end of
November, 2022. Drupal 7 will reach its end-of-life date on November 23, 2023.

We encourage you to upgrade to Drupal 9. Please check with your team and
evaluate your upgrade options to avoid any potential security risks. For more
information on moving to Drupal 9, see Build your portal using Drupal 9.

Apigee Developer Services portal (or simply, the portal) is a template portal
for content and community management. The on-premises version is based on the
open source Drupal project. The default portal setup provides the following
services:

● Content management: Use the portal to create and manage:
  ● API documentation
  ● Forums
  ● Blog postings
● Testing: Use the portal to test APIs in real time using a built-in test
  console
● Community management: The portal manages:
  ● Manual or automatic user registration
  ● Moderation of user comments

The portal's Role-Based Access Control (RBAC) model controls access to features
on the portal. For example, you can enable controls to allow registered users to
create forum posts, use test consoles, and so on.

This version of this document has details specific to version 4.51.00. Any
references that are specific to previous versions are oversights and should be
reported as bugs.

For more information, see What is a developer portal?

Supported network topologies


NOTE
Do not install the portal on the same servers as Apigee Edge.

The portal components can be installed in the following configurations, or
topologies:

● 1 node: All portal components (Drupal, NGINX, PHP, Solr) installed on a
  single machine with Postgres.
● 2 nodes: All portal components on one machine; Postgres on the second
  machine.

The following images show the supported topologies:

Figure 1 shows a 1-node portal topology in which all portal components are on a
single machine:

Figure 1: 1-node portal topology

Note that:

● These topologies are the only topologies supported by Apigee. If you use a
  different network topology, Apigee will not be able to support it.
● On a new install of 4.51.00, the installation script installs
Postgres and NGINX.
● On an update to 4.51.00 from an install that uses Postgres
and NGINX, the installation script updates Postgres and
NGINX.
● On an update to 4.51.00 from an install that uses
MySQL/MariaDB/Apache, you must first convert your
installation to Postgres/NGINX before you can update to
4.51.00. For more information, see Convert a tar-based portal
to an RPM-based portal.
● You can leverage features of Drupal to ensure high availability
in large and custom topologies. For information on setting up
and maintaining these configurations, Apigee recommends
that you engage with the Drupal community.
In this figure, the Public core contains the components that are
publicly accessible. The Private core contains components that are
not publicly accessible.

Component: ELB
Description: An Enterprise Load Balancer (ELB). For example, both Amazon and
Rackspace provide enterprise load balancers for use with their instances.
Installed by: Your network provider.

Component: NGINX 1.10.1
Description: The NGINX web server used for installations of 4.51.00.
Installed by: Apigee.

Component: Postgres 9.6
Description: The database used by Drupal for new installations of 4.51.00. If
you want to connect to a remote Postgres installation, it must be version 9.6.
Installed by: Apigee, or connect to an existing installation.

Component: Drupal shared storage
Description: The shared storage area used by Drupal for uploaded files, static
scripts, and other information.
Installed by: Apigee.

Component: Drush 6.2
Description: The Drupal command line interface.
Installed by: Apigee.

Component: PHP 7.0
Description: Server-side scripting engine.
Installed by: Apigee.

Component: Apache Solr
Description: The Drupal search server. Apache Solr uses the Apache Lucene
search library.
Installed by: Apigee, but it is not enabled by default. Only enable it if you
have a large amount of data on the portal. See Install the portal for
instructions on enabling it.

Access the Apigee community for your questions

The Apigee Community is a free resource where you can contact Apigee as well as
other Apigee customers with questions, tips, and other issues. Before posting
to the community, be sure to first search existing posts to see if your
question has already been answered.

Portal requirements
Note: Do not install the portal on the same servers as Apigee Edge.

Apigee Developer Services portal (or simply, the portal) requires the
following minimal hardware and software:

Hardware Requirement

Operating system These installation instructions and related


installation files have been tested on the
operating systems listed in Supported
software and supported versions.

CPU 2-core

RAM 4 GB

Hard disk 120 GB

Java                You must have Java 1.8 installed on each Postgres machine
                    prior to the installation. Supported JDKs are listed at
                    Supported software and supported versions.

Network interface   Active internet connection required.

                    As part of the installation process, the installer
                    downloads resources from the web. If your environment is
                    set up to proxy outgoing HTTP and HTTPS requests, then
                    your proxy must be configured to correctly handle
                    redirected requests that might occur during a download.

                    For example, a request to https://drupal.org/ returns an
                    HTTP 301 status code and redirects to
                    https://www.drupal.org/. Your proxy should be configured
                    to return an HTTP 200 status code with the requested
                    content from the redirect.

                    For SAP installations, if your environment is set up to
                    proxy outgoing HTTPS requests, then your proxy must
                    support TLSv1.0. OpenSSL 0.9.8 does not support TLSv1.1 or
                    TLSv1.2, only TLSv1.0.

Red Hat Enterprise Linux requirements

Red Hat Enterprise Linux (RHEL) has extra requirements due to a subscription
needed to access software downloads from Red Hat. The server must be able to
connect to the Internet to download RPMs via yum. If using RHEL, the server
must be registered on the Red Hat Network (RHN) and registered to the server
optional channel.

The Red Hat requirements are checked during the install and the
portal installer prompts you if RHEL is not already registered. If you
already have Red Hat login credentials, you can use the following
command to register RHEL before beginning the installation process:
subscription-manager register --username=username --
password=password --auto-attach

Where username and password are your Red Hat credentials.

If you have a trial version of RHEL, you can obtain a 30-day trial
license. See https://access.redhat.com/solutions/32790 for more
information.

SMTP requirements

Apigee recommends, but does not require, that you configure an SMTP server to
send email messages from the portal. If configured, you must ensure that Drupal
can access the necessary port on the SMTP server. For non-TLS SMTP, the port
number is typically 25. For TLS-enabled SMTP, it is often 465, but check with
your SMTP provider.

Additional requirements

To perform the installation, you must have root access.

Deployment architecture requirements

The portal has a single interface with the Apigee Management Server
via a REST API in order to store and retrieve information about a
user's applications. The portal must be able to connect to the
Management Server via HTTP or HTTPS, depending on your
installation.
Information required before you start the
install

Before starting the install, you must have the following information
available:

1. Which platform are you configuring: Red Hat or CentOS? If this is a Red Hat
   install, the machine must be registered on the Red Hat Network to download
   RPMs.
2. Do you plan on installing Postgres on the local machine? If you want a
   simple install with everything on the same machine, then install Postgres
   locally.
3. If you intend to access a remote Postgres server, get the following
   information about the Postgres server:
   a. hostname
   b. port
   c. database name
   d. username
   e. password
   The remote Postgres server should already be configured before you start
   the installation.
4. What is the fully-qualified domain name of the web server? (This information
   will be added to /etc/hosts.) This should be an IP address or hostname, such
   as portalserver.example.com. The default value is localhost.
5. Three pieces of information allow your portal to communicate with the Apigee
   Edge management server:
   a. URL of the Apigee Management API endpoint: This will be either a hostname
      or an IP address. This is the REST endpoint to which all calls are made
      to create apps and register developers for app keys. The default endpoint
      is https://api.enterprise.apigee.com/v1.
      For an Edge for Private Cloud installation, the URL is in the form:
      http://Edge_IP:8080/v1
      or:
      https://Edge_IP:SSL_port/v1
      Where Edge_IP is the IP address of the Edge management server and
      SSL_port is the SSL port for the Edge management API. For example, 8443.
   b. Apigee organization name: There is a relationship between portals and
      Apigee Edge organizations. You will set up the default organization when
      you set up the Management API endpoint. The default value is my-org.
   c. Username and password for the management API endpoint: The calls from the
      portal to Edge must be performed by an administrator for your
      organization. This username/password is for an administrator on your
      organization and should be used only for connecting to Edge from the
      portal. For example, if you specify the credentials of a user, and that
      user is ever deleted on Edge, then the portal will no longer be able to
      connect to Edge. Therefore, create an administrator on your organization
      just for this connection. For example:
      dc_devportal+ORGNAME@apigee.com:MyP@ssw0rd

Install the portal


NOTE

Do not install the portal on the same servers as Apigee Edge.

Before you install Apigee Developer Services portal (or simply, the
portal), ensure that:

1. You install Postgres before installing the portal. You can either
install Postgres as part of installing Edge, or install Postgres
standalone for use by the portal.
● If you install Postgres standalone, it can be on the
same node as the portal.
● If you are connecting to Postgres installed as part of
Edge, and Postgres is configured in master/standby
mode, specify the IP address of the master Postgres
server.
2. You are performing the install on the 64-bit version of a
supported version of Red Hat Enterprise Linux, CentOS, or
Oracle. See the list of supported versions at Supported
software and supported versions.
3. Yum is installed.

The installer only includes Drupal-contributed modules that are required by the
Apigee Developer Services portal (or simply, the portal). For information about
installing other contributed modules, see Extending Drupal 7.

Installation overview
NOTE
Do not install the portal on the same servers as Apigee Edge.

To install the portal, you will perform the following steps. Each of
these steps is described in more detail in the sections that follow.

1. Test your connection


2. Remove pre-7.0 versions of PHP
3. Install Postgres
4. Install the portal
5. Ensure Update manager is enabled
6. (Optional) Configure Apache Solr
7. (Optional) Install SmartDocs
8. (Optional) Configure JQuery

Deprecation of the SMTPSSL property

In previous releases, you used the SMTPSSL property to set the protocol used by
the SMTP server connected to the portal. That property has been deprecated.

You now use the SMTP_PROTOCOL property, instead of the SMTPSSL property, to set
the protocol used by the SMTP server connected to the portal. The valid values
are: "standard", "ssl", or "tls".

Create a portal configuration file


Shown below is an example silent configuration file for a portal
installation. Edit this file as necessary for your configuration. Use the
-f option to setup.sh to include this file.

NOTE

Do not set DEVPORTAL_ADMIN_USERNAME=admin. The admin user account


is created by default. If specified, an error message is returned prompting you
to configure a different username.
IP1=IPorDNSnameOfNode

# Must resolve to IP address or DNS name of host - not to 127.0.0.1 or localhost.
HOSTIP=$(hostname -i)

# Specify the name of the portal database in Postgres.
PG_NAME=devportal

# Specify the Postgres admin credentials.
# The portal connects to Postgres by using the 'apigee' user.
# If you changed the Postgres password from the default of 'postgres'
# then set PG_PWD accordingly.
# If connecting to a Postgres node installed with Edge,
# contact the Edge sys admin to get these credentials.
PG_USER=apigee
PG_PWD=postgres

# The IP address of the Postgres server.
# If it is installed on the same node as the portal, specify that IP.
# If connecting to a remote Postgres server, specify its IP address.
PG_HOST=$IP1

# The Postgres user credentials used by the portal
# to access the Postgres database.
# This account is created if it does not already exist.
DRUPAL_PG_USER=drupaladmin
DRUPAL_PG_PASS=portalSecret

# Specify 'postgres' as the database.
DEFAULT_DB=postgres

# Specify the Drupal admin account details.
# DO NOT set DEVPORTAL_ADMIN_USERNAME=admin.
# The installer creates this user on the portal.
DEVPORTAL_ADMIN_FIRSTNAME=firstName
DEVPORTAL_ADMIN_LASTNAME=lastName
DEVPORTAL_ADMIN_USERNAME=userName
DEVPORTAL_ADMIN_PWD=PORTAL_ADMIN_PASSWORD
DEVPORTAL_ADMIN_EMAIL=foo@bar.com

# Edge connection details.
# If omitted, you can set them in the portal UI.
# Specify the Edge organization associated with the portal.
EDGE_ORG=edgeOrgName

# Specify the URL of the Edge management API.
# For a Cloud based installation of Edge, the URL is:
# https://api.enterprise.apigee.com/v1
# For a Private Cloud installation, it is in the form:
# http://ms_IP_or_DNS:8080/v1 or
# https://ms_IP_or_DNS:TLSport/v1
MGMT_URL=https://api.enterprise.apigee.com/v1

# The org admin credentials for the Edge organization in the form
# of Edge emailAddress:pword.
# The portal uses this information to connect to Edge.
DEVADMIN_USER=orgAdmin@myCorp.com
DEVADMIN_PWD=ORG_ADMIN_PASSWORD

# The PHP port.
# If omitted, it defaults to 8888.
PHP_FPM_PORT=8888

# Optionally configure the SMTP server used by the portal.
# If you do, the properties SMTPHOST and SMTPPORT are required.
# The others are optional with a default value as notated below.
# SMTP hostname. For example, for the Gmail server, use smtp.gmail.com.
SMTPHOST=smtp.gmail.com

# Set the SMTP protocol as "standard", "ssl", or "tls",
# where "standard" corresponds to HTTP.
# Note that in previous releases, this setting was controlled by the
# SMTPSSL property. That property has been deprecated.
SMTP_PROTOCOL="standard"

# SMTP port (usually 25).
# The value can be different based on the selected encryption protocol.
# For example, for Gmail, the port is 465 when using SSL and 587 for TLS.
SMTPPORT=25

# Username used for SMTP authentication, default is blank.
SMTPUSER=your@email.com

# Password used for SMTP authentication, default is blank.
SMTPPASSWORD=YOUR_EMAIL_PASSWORD
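Before passing a configuration file to setup.sh with the -f option, it can be worth sanity-checking that required properties are present. The following is a minimal sketch: the file is staged in a temp location for illustration, and the list of keys checked is illustrative rather than the installer's full requirement.

```shell
# Stage a fragment of the portal configuration file; point $cfg at your
# real file instead.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
PG_NAME=devportal
PG_USER=apigee
DEVPORTAL_ADMIN_USERNAME=userName
EOF

# Report whether each expected key is defined in the file.
for key in PG_NAME PG_USER DEVPORTAL_ADMIN_USERNAME; do
  if grep -q "^$key=" "$cfg"; then
    echo "$key: present"
  else
    echo "$key: MISSING"
  fi
done
```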

1. Test your connection to Apigee Edge

Test your connection between the server you're going to install the portal on
and the Edge management server by executing the following curl command on the
portal server:

curl -u EMAIL:PASSWORD http://ms_IP_or_DNS:8080/v1/organizations/ORGNAME

or:

curl -u EMAIL:PASSWORD https://ms_IP_or_DNS:TLSPort/v1/organizations/ORGNAME

Where EMAIL and PASSWORD are the email address and password of the
administrator for ORGNAME.

Be sure to specify the hostname and port number specific to your installation
of Edge. Port 8080 is the default port used by Edge. If you are connecting to
an organization in the cloud, then the request URL is:
https://api.enterprise.apigee.com/v1/organizations/ORGNAME.

If successful, curl returns a response similar to the following:

{
"createdAt" : 1348689232699,
"createdBy" : "USERNAME",
"displayName" : "cg",
"environments" : [ "test", "prod" ],
"lastModifiedAt" : 1348689232699,
"lastModifiedBy" : "foo@bar.com",
"name" : "cg",
"properties" : {
"property" : [ ]
},
"type" : "trial"
}

2. Remove pre-7.0 versions of PHP

The install script checks for pre-7.0 versions of PHP on the system before
starting the installation. If pre-7.0 versions of PHP exist, the following
warning message displays:

The following packages present on your system conflict with software we are
about to install. You will need to manually remove each one, then re-run this install script.

php
php-cli
php-common
php-gd
php-mbstring
php-mysql
php-pdo
php-pear
php-pecl-apc
php-process
php-xml

Remove the PHP packages using the following command:

yum remove package_name

Note: To remove PHP on CentOS 6.6, you might also have to remove Ganglia, if it is installed.

If you are not sure if PHP is installed on your server, use the following
command:

rpm -qa | grep -i php

Note that the portal uses PHP version 4.18.01-0.0.49. This is not
intended to match the version number of Apigee Edge for Private
Cloud.

3. Install Postgres
NOTE

Do not install the portal on the same servers as Apigee Edge.

The portal requires Postgres to be installed before you can install the
portal. You can either install Postgres as part of installing Edge, or
install Postgres standalone for use by the portal.

● If you are connecting to Postgres installed as part of Edge,
and Postgres is configured in master/standby mode, specify
the IP address of the master Postgres server.
● If you install Postgres standalone, it can be on the same node
as the portal.

For information on installing Postgres as part of installing Edge, see
Install Edge components on a node.

To install Postgres standalone:

1. Install the Edge apigee-setup utility on the node using the
internet or non-internet procedure. See Install the Edge apigee-
setup utility for more.
2. Create a Postgres configuration file, as the following example
shows:

# Must resolve to IP address or DNS name of host - not to
# 127.0.0.1 or localhost
HOSTIP=$(hostname -i)

# The pod and region of Postgres. Use the default values
# shown below.
MP_POD=gateway
REGION=dc-1

# Set the Postgres password. The default value is 'postgres'.
PG_PWD=postgres

3. At the command prompt, run the setup script to install
Postgres:

/opt/apigee/apigee-setup/bin/setup.sh -p pdb -f postgres_config_file

The -p pdb option specifies to install Postgres. The
configuration file must be accessible or readable by the
"apigee" user.
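The configuration file described above can be generated in one pass for review before handing it to setup.sh. This is a sketch using this guide's default values; the 10.0.0.1 fallback is a placeholder only, since HOSTIP must resolve to the node's real address, not 127.0.0.1 or localhost.

```shell
# Write the standalone Postgres config file to /tmp for review before
# passing it to setup.sh with -p pdb. Values are this guide's defaults;
# the 10.0.0.1 fallback is a placeholder, not a real address.
CFG=/tmp/postgres_config_file
cat > "$CFG" <<EOF
HOSTIP=$(hostname -i 2>/dev/null || echo 10.0.0.1)
MP_POD=gateway
REGION=dc-1
PG_PWD=postgres
EOF
grep -c '=' "$CFG"
```

Review the generated file, then run the setup script against it as shown in step 3.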

4. Install the portal


Before you can install the portal, be sure that you have done the
following as described in 3. Install Postgres:

1. Install the Edge apigee-setup utility on the portal's node.
2. Install Postgres, either standalone or as part of installing
Edge.

To install the portal:

1. At the command prompt, run the setup script:

/opt/apigee/apigee-setup/bin/setup.sh -p dp -f configFile

Where:
● configFile is the portal configuration file as described in
Create a portal configuration file. The configuration file
must be accessible or readable by the "apigee" user.
● -p dp instructs the setup script to install the portal.

To verify that the portal installation was successful:

1. Navigate to the portal home page at
http://localhost:8079 or at the DNS name of your
portal.
2. Log in to the portal using the administrator credentials that
you set in the portal configuration file.
3. Select Reports > Status Report in the Drupal menu to ensure
that you can see the current status of the portal.
4. Be sure that the Management Server connection was
successful. If it was not:
a. Navigate to the portal Connection Configuration page
(for example,
http://portal_IP:8079/admin/config/devconnect).
b. Click the Test Connection button. If the connection is
successful, you are done. If the connection fails,
continue.
c. Check the endpoint and authentication settings:
● Management API endpoint URL: Check that the
protocol (HTTP or HTTPS), IP or DNS name, and
port number are correct; for example:
● http://10.10.10.10:8080/v1
● Endpoint authenticated user: The organization
admin's username.
● Authenticated user's password: The
organization admin's password.
d. The default values reflect the settings in your portal
configuration file that you created during the
installation process.
These values should match the ms_IP_or_DNS, email,
and password values you used in step 1: Test your
connection to Apigee Edge. The username and
password should also match the values of the
USER_NAME and USER_PWD properties in the
onboarding configuration file, or the credentials of any
user whose role is Organization Administrator.
e. After you successfully connect to the Management
Server, click the Save configuration button at the
bottom of the page to save your changes.

5. Ensure that the Update manager module is enabled


To receive notifications of Drupal updates, ensure that the Drupal
Update manager module is enabled. From the Drupal menu, select
Modules and scroll down to the Update manager module. If it is not
enabled, enable it.

Once enabled, you can see the available updates by using the
Reports > Available Updates menu item. You can also use the
following Drush command:

drush pm-info update

You have to run this command from the root directory of the site. By
default, the portal is installed at
/opt/apigee/apigee-drupal/wwwroot. Therefore, you should
first change directory to /opt/apigee/apigee-drupal/wwwroot
before running the command. If you did not install the portal in the
default directory, change to your installation directory.

Use the Reports > Available Updates > Settings menu item to
configure the module to email you when updates are available and to
set the frequency for checking for updates.
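Because Drush commands fail when run outside a Drupal root, a small guard around the update check avoids confusing errors. This sketch assumes the default portal root and that drush is on your PATH.

```shell
# Run the Drush update check only from the portal's web root
# (default path assumed; drush must be on your PATH).
DRUPAL_ROOT=${DRUPAL_ROOT:-/opt/apigee/apigee-drupal/wwwroot}
if [ -d "$DRUPAL_ROOT" ]; then
  cd "$DRUPAL_ROOT" && drush pm-info update
else
  echo "Drupal root not found: $DRUPAL_ROOT"
fi
```

If you installed the portal elsewhere, set DRUPAL_ROOT to your installation directory before running this.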

6. Configure the Apache Solr search engine (Optional)


By default, the Drupal modules that connect to the Apache Solr
search engine are disabled when you install the portal. Most portals
use the internal Drupal search engine, and therefore do not require
the Drupal Solr modules.

If you decide to use Solr as your search engine, you must install Solr
locally on your server and then enable and configure the Drupal Solr
modules on the portal.

To enable the Drupal Solr modules:

1. Log in to your portal as a user with admin or content creation
privileges.
2. Select Modules in the Drupal menu.
3. Enable the Apache Solr Framework module and the Apache
Solr Search module.
4. Save your changes.
5. Configure Solr as described at
https://drupal.org/node/1999280.

7. Install SmartDocs (Optional)


SmartDocs lets you document your APIs on the portal in a way that
makes the API documentation fully interactive. However, to use
SmartDocs with the portal, you must first install SmartDocs on Edge.

● If you are connecting the portal to an Edge Cloud installation,
SmartDocs is already installed and no further configuration is
necessary.
● If you are connecting the portal to an Edge for Private Cloud
installation, you must ensure that SmartDocs is installed on
Edge. For more on installing Edge and SmartDocs, see Install
SmartDocs.

You must also enable SmartDocs on the portal. For more information
on SmartDocs, see Using SmartDocs to document APIs.

8. Configure the JQuery Update module for non-internet installations


If you install and use the JQuery Update module in a non-internet
installation, you need to configure the module to use the local
version of JQuery. If you configure the module to use a CDN for a
non-internet installation, it will attempt to access the CDN and cause
delays with page loading. For more information about the JQuery
Update module see https://www.drupal.org/project/jquery_update.

To configure the JQuery Update module to use the local version of
JQuery:

1. Log in to your portal as a user with admin or content creation
privileges.
2. Select Configuration > Development > JQuery Update in the
Drupal menu.
3. Click Performance in the left navigation.
4. In the JQuery and JQuery UI CDN drop-downs, select None.
5. Click Save configuration.

9. Next steps
The following list describes some of the most common tasks that you
perform after installation, and includes links to the Apigee
documentation where you can find more information:

● Customize the theme: The theme defines the appearance of the
portal, including colors, styling, and other visual aspects.

● Customize the appearance: The home page includes the main menu,
welcome message, header, footer, and title.

● Add and manage user accounts: The registration process controls
how new developers register an account on the portal. For example,
do new developers get immediate access to the portal, or do they
have to be verified by an administrator? This process also controls
how a portal administrator is notified when a new account is created.

● Configure email: The portal sends emails in response to certain
events. For example, when a new developer registers on the portal
and when a developer loses their password.

● Add and manage user accounts: Add a Terms & Conditions page that
developers must accept before being allowed to access the portal.

● Add and manage user accounts: The portal implements a role-based
authorization model. Before allowing developers to register, define
the permissions and roles used by the portal.

● Add blog and forum posts: The portal has built-in support for blogs
and threaded forums. Define the permissions required to view, add,
edit, and delete blog and forum posts.

● Ensure you are doing database backups: Ensure that you are backing
up the Drupal database. Note that because every installation is
different, it is up to you to determine how best to back up the
database.
Note: The Backup and Migrate module is not compatible with
Postgres databases.
See also How to Perform a Backup.

● Set up a hostname: If you do not set up a hostname in your DNS
server, you can always access the site via the server's IP address. If
you want to use a hostname, you can configure DNS for the server,
which should work correctly without any other configuration on a
basic setup.

If you set up a load balancer or are getting incorrect URLs on your
site for some other reason, you can set $base_url for Drupal by
following these steps:

1. Create the directory
/opt/apigee/data/apigee-drupal-devportal/sites/default/includes
if it does not exist.
2. Create a file named settings.php in that directory.
3. Add the following to the settings.php file:

/**
 * Base URL (optional).
 *
 * If Drupal is generating incorrect URLs on your site, which could
 * be in HTML headers (links to CSS and JS files) or visible links
 * on pages (such as in menus), uncomment the Base URL statement
 * below (remove the leading hash sign) and fill in the absolute URL
 * to your Drupal installation.
 *
 * You might also want to force users to use a given domain.
 * See the .htaccess file for more information.
 *
 * Examples:
 *   $base_url = 'http://www.example.com';
 *   $base_url = 'http://www.example.com:8888';
 *   $base_url = 'http://www.example.com/drupal';
 *   $base_url = 'https://www.example.com:8888/drupal';
 *
 * It is not allowed to have a trailing slash; Drupal will add it
 * for you.
 */
# $base_url = 'http://www.example.com'; // NO trailing slash!
$base_url = 'http://www.example.com';

4. Change the last $base_url line to be the hostname of your site.
5. Save the file.

Note that you can put any other settings from
/opt/apigee/data/apigee-drupal-devportal/sites/default/default.settings.php
in this file.

For more information about the $base_url property, see the
following:

● https://codeburst.io/security-for-drupal-and-related-server-software-3c979b041595
● https://www.drupal.org/node/1992030

● Custom development: You may also want to extend your portal's
capabilities with custom code outside of your theme. To do this,
create your own Drupal module as described in Drupal's module
development topics, and put the module in the
/sites/all/modules directory.

Uninstall the portal


To uninstall Apigee Developer Services portal (or simply, the portal)
completely from your system, perform the steps described in
Uninstalling Edge.

Alternatively, you can uninstall the following Developer Services


portal components individually:

● apigee-drupal
● apigee-drupal-contrib
● apigee-drupal-devportal
● apigee-drush
● apigee-php

For complete details, see Uninstalling Edge.

USE THE PORTAL


Start, stop, or restart the portal
To start, stop, or restart Apigee Developer Services portal (or simply,
the portal) components, use the apigee-service command:

/opt/apigee/apigee-service/bin/apigee-service component_name [ stop | start | restart ]

Where component_name can be one of the following:


● apigee-drupal-devportal
● apigee-drupal-contrib
● apigee-drupal
● apigee-drush
● apigee-php

The following scripts detect the Apigee components configured to
run on the system on which the script is executed, and will start or
stop only those components in the correct order for that node.

● To stop all Apigee components:
/opt/apigee/apigee-service/bin/apigee-all stop
● To start all Apigee components:
/opt/apigee/apigee-service/bin/apigee-all start
● To check which components are running:
/opt/apigee/apigee-service/bin/apigee-all status
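When you need to restart only the portal components rather than everything apigee-all manages, you can iterate over the component list above. This sketch only prints the commands it would run (drop the echo to execute them); the order shown is the list order from this guide, not a verified dependency order.

```shell
# Print the apigee-service restart command for each portal component.
# Remove the echo (or pipe the output to bash) to actually run them.
CMDS=$(for c in apigee-drupal-devportal apigee-drupal-contrib \
               apigee-drupal apigee-drush apigee-php; do
  echo "/opt/apigee/apigee-service/bin/apigee-service $c restart"
done)
echo "$CMDS"
```

For routine operations, prefer apigee-all, which already knows the correct start/stop order for the node.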

Set the HTTP port


By default, the NGINX server installed with Apigee Developer
Services portal (or simply, the portal) listens for requests on port
8079. Use the procedure below to configure NGINX to use a different
port:

1. Ensure that the desired port number is open on the Edge node.
2. Open /opt/apigee/customer/application/drupal-
devportal.properties in an editor. If the file and directory
do not exist, create them.
3. Set the following property in drupal-devportal.properties:

conf_devportal_nginx_listen_port=PORT

Where PORT is the new port number.
4. Save the file.
5. Restart the portal:

/opt/apigee/apigee-service/bin/apigee-service apigee-drupal-devportal restart
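Setting the listen-port property can be sketched as below. A temporary file stands in for the real properties file so the steps can be shown end to end; on a real node, point PROPS at /opt/apigee/customer/application/drupal-devportal.properties and then restart the portal.

```shell
# Append the NGINX listen-port property and confirm it was written.
# A temp file stands in for drupal-devportal.properties in this sketch.
PROPS=$(mktemp)
NEW_PORT=8080
echo "conf_devportal_nginx_listen_port=$NEW_PORT" >> "$PROPS"
grep "^conf_devportal_nginx_listen_port=" "$PROPS"
```

Remember that the property only takes effect after the apigee-drupal-devportal restart shown above.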
Common Drush commands
Drush is the Drupal command line interface. You can use it to
perform many tasks as a Drupal administrator. Drush is installed for
you when you install Apigee Developer Services portal (or simply, the
portal).

How to run Drush commands

You must run Drush commands from the root directory of the portal
site. By default, the portal is installed at:

● /opt/apigee/apigee-drupal/wwwroot (NGINX)

Therefore, you should first change directory to the correct root
directory before running Drush commands.

Example Drush Commands

The following list shows common Drush commands:

drush status
    Check Drupal status.

drush archive-dump --destination=../tmp/dc.tar
    Back up Drupal to a specific location.

drush dc-getorg
    Return the Edge organization associated with the portal.

drush dc-setorg org_name
    Set the Edge organization associated with the portal.

drush dc-getauth
    Get the Edge organization administrator username (email
    address) and password associated with the portal.

drush dc-setauth org_admin_email
    Set the Edge organization administrator username (email
    address). You will be prompted to set the password.

drush dc-getend
    Get the Edge endpoint associated with the portal.

drush dc-setend http://server_endpoint:8080/v1
    Set the Edge endpoint associated with the portal.

drush dc-test
    Test the connection from the portal to the Edge organization
    using the organization administrator's credentials.

Use HTTPS with the portal


All Apigee recommended Private Cloud installations of Apigee
Developer Services portal (or simply, the portal) require that the
portal be behind a load balancer. Therefore, you configure TLS on the
load balancer itself, and not on the portal. The procedure that you
use to configure TLS is therefore dependent on the load balancer.

However, if necessary, you can configure TLS on the web server that
hosts the portal.

See Using TLS on the portal for an overview of using TLS on the
portal.

For the portal running on NGINX


By default, a portal using the NGINX web server listens for HTTP
requests on port 8079. If you enable TLS, then the portal listens only
for HTTPS requests on 8079. That is, you can either configure the
portal to listen for HTTP requests or HTTPS requests, but not both.

You can also change the port number as described in Set the HTTP
port used by the portal.

To configure TLS:

1. Obtain your TLS key and certificate. For this example, the cert
is in a file named server.crt and the key is in server.key.
2. Upload your cert and key to the portal server to
/opt/apigee/customer/nginx/ssl.
If the directory does not exist, create it and change the owner
to the "apigee" user:

mkdir /opt/apigee/customer/nginx/ssl
chown apigee:apigee /opt/apigee/customer/nginx/ssl

3. Change the owner of the cert and key to the "apigee" user:

chown apigee:apigee /opt/apigee/customer/nginx/ssl/server.crt
chown apigee:apigee /opt/apigee/customer/nginx/ssl/server.key

4. Open /opt/apigee/customer/application/drupal-
devportal.properties in an editor. If the file and directory
do not exist, create them.
5. Set the following properties in drupal-devportal.properties:

conf_devportal_ssl_block=ssl on; ssl_certificate
/opt/apigee/customer/nginx/ssl/server.crt;
ssl_certificate_key
/opt/apigee/customer/nginx/ssl/server.key;
conf_devportal_http_https_redirect=
conf_devportal_fastcgi_https=fastcgi_param HTTPS on;
fastcgi_param HTTP_SCHEME https;

Set conf_devportal_ssl_block to the paths to the cert
and key files. You are not required to modify the other
properties.
6. Save the file.
7. Restart the portal:

/opt/apigee/apigee-service/bin/apigee-service apigee-drupal-devportal restart

You should be able to access the portal over TLS.
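A quick handshake test confirms the portal now answers TLS on its port. This is a sketch; PORTAL_HOST and PORTAL_PORT are placeholders, and the openssl command-line tool must be installed.

```shell
# Confirm the portal answers TLS on its port after the restart.
# Host and port below are placeholders for your own values.
PORTAL_HOST=${PORTAL_HOST:-localhost}
PORTAL_PORT=${PORTAL_PORT:-8079}
if echo | timeout 5 openssl s_client \
    -connect "$PORTAL_HOST:$PORTAL_PORT" >/dev/null 2>&1; then
  TLS_MSG="TLS handshake OK"
else
  TLS_MSG="TLS handshake failed; check the cert paths and restart the portal"
fi
echo "$TLS_MSG"
```

A failed handshake after this change most often means a wrong path in conf_devportal_ssl_block or a portal that was not restarted.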

Back up the portal


This section describes how to back up and restore an on-premises
installation of Apigee Developer Services portal (or simply, the portal)
using the Postgres pg_dump and pg_restore commands.

Before you back up

Before you can back up the portal, you must know the name of the
portal's database.

The PG_NAME property in the portal installation config file specifies
the name of the portal's database. The example configuration file in
the portal installation instructions uses the name "devportal". If you
are unsure of the database name, check the config file, or use the
following psql command to show the list of databases:

psql -h localhost -d apigee -U postgres -l

Where -U specifies the Postgres username used by the portal to
access the database. This is the value of the DRUPAL_PG_USER
property in the portal installation config file. You will be prompted for
the database password.

This command displays the following list of databases:

     Name    | Owner  | Encoding |   Collate   |    Ctype    |   Access privileges
-------------+--------+----------+-------------+-------------+---------------------
 apigee      | apigee | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/apigee          +
             |        |          |             |             | apigee=CTc/apigee   +
             |        |          |             |             | postgres=CTc/apigee
 devportal   | apigee | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 newportaldb | apigee | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres    | apigee | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0   | apigee | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/apigee           +
             |        |          |             |             | apigee=CTc/apigee
 template1   | apigee | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/apigee           +
             |        |          |             |             | apigee=CTc/apigee

Back up the portal

To back up the portal:

1. Change to the Drupal directory, /opt/apigee/apigee-drupal
by default:

cd /opt/apigee/apigee-drupal

2. Back up your Drupal database instance with the pg_dump
command:

pg_dump --dbname=portal_db --host=host_IP_address --username=drupaladmin --password --format=c > /tmp/portal.bak

Where:
● portal_db is the database name. This is the PG_NAME
property in the portal installation configuration file. If
you are unsure of the database name, see Before you
back up.
● host_IP_address is the IP address of the portal node.
● drupaladmin is the Postgres username used by the
portal to access the database. You defined this with the
DRUPAL_PG_USER property in the portal installation
configuration file.
When pg_dump prompts you for the Postgres user password,
use the password that you specified with the
DRUPAL_PG_PASS property in the portal installation
configuration file.
The pg_dump command creates a copy of the database.
3. Make a backup of your entire Drupal web root directory. The
default web root location is /opt/apigee/apigee-drupal/wwwroot.
4. Make a backup of the public files. By default, these files are
located in
/opt/apigee/apigee-drupal/wwwroot/sites/default/files.
If that is the correct location, they are backed up
in step 3. You must explicitly back them up if you moved them
from the default location.
5. Make a backup of the private files in
/opt/apigee/data/apigee-drupal-devportal/private.
If you are unsure of the location of this directory, use the
drush status command to determine the location of the
private file system.
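The file-level backups above can be combined into a single archive. This is a sketch assuming the default locations; -h dereferences symlinks so the archive contains the real files rather than the links.

```shell
# Archive the web root (which includes the default public files
# location) and the private files in one dereferencing pass.
# Default paths assumed; adjust for your installation.
BACKUP="/tmp/portal-files-$(date +%Y%m%d).tar.gz"
tar -hczf "$BACKUP" \
  /opt/apigee/apigee-drupal/wwwroot \
  /opt/apigee/data/apigee-drupal-devportal/private 2>/dev/null || true
echo "wrote $BACKUP"
```

Pair this archive with the pg_dump output so the database and file system can be restored to the same point in time.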

Restore the portal

After you have backed up the portal, you can restore from your
backup using the pg_restore command.

To restore from the backup to an existing database, use the
following command:

pg_restore --clean --dbname=portal_db --host=localhost --username=apigee < /tmp/portal.bak

To restore from the backup and create a new database, use the
following command:

pg_restore --clean --create --dbname=portal_db --host=localhost --username=apigee < /tmp/portal.bak

You can also restore the backup files to the Drupal web root directory
and the private files.
Upgrade the portal
This procedure describes how to upgrade an existing Apigee
Developer Services portal (or simply, the portal) on-premise
installation.

NOTE: All new installations of the portal from version 4.17.01 onward use Postgres as the database and NGINX as the web server.
However, previous releases of the portal used MySQL/MariaDB as the database and Apache as the web server. That means you
might be using one of the following configurations:

● NGINX/Postgres for all new 4.17.0x, 4.18.0x, and 4.51.00 installations
● Apache/MySQL 5.0.15 or later for all 6.x operating systems previous to
4.17.01
● Apache/MariaDB 5.1.38 or later for all 7.x operating systems previous
to 4.17.01

Before you start the update process, you must determine your current
configuration and then perform the correct update procedure.

WARNING: You cannot update a portal using MySQL/MariaDB and Apache. You must instead convert the installation to use Postgres as
the database and NGINX as the web server. As part of this conversion, you also update your portal to 4.51.00. See Convert a tar-based
portal to an RPM-based portal for more.

NOTE: This procedure performs a complete upgrade of Drupal and of the portal software that
ships with the portal. You might want to upgrade just Drupal itself, possibly because a new version of Drupal is available. A new Drupal
version can mean a Drupal feature release, patch, security update, or other type of Drupal update. If you only want to upgrade your version
of Drupal, see Upgrade Drupal.

Determine the correct update procedure


The procedure that you use to update the portal is based on your
current installation:

● If your installation uses NGINX/Postgres, then use Upgrade
a portal using RPMs below.
● If your installation uses Apache/MySQL or Apache/MariaDB,
then see Convert a tar-based portal to an RPM-based portal.

Determine your current installation type

If you are unsure about your current installation type, use the
following commands to determine it:

● ls /opt
If you are using NGINX/Postgres, you will see the following
directories: /opt/apigee and /opt/nginx.
If you are using Apache/MySQL or Apache/MariaDB, then
these directories should not be present.
● /opt/apigee/apigee-service/bin/apigee-all status
If you are using NGINX/Postgres, you will see the following
output:

+ apigee-service apigee-drupal-devportal status
OK: apigee-drupal-devportal is up and running
+ apigee-service apigee-lb status
apigee-service: apigee-lb: OK
+ apigee-service apigee-postgresql status
apigee-service: apigee-postgresql: OK

● apachectl -S
If you are using Apache/MySQL or Apache/MariaDB, then this
command should return the web root directory of the portal, in
the form:

*:80
192.168.56.102 (/etc/httpd/conf/vhosts/devportal.conf:1)
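The manual checks above can be combined into one detection sketch. The paths and commands are the ones this guide describes; treat the result as a hint to confirm manually, not a definitive answer.

```shell
# Guess the portal installation type from the checks described above.
if [ -d /opt/apigee ] && [ -d /opt/nginx ]; then
  STACK="NGINX/Postgres (RPM-based)"
elif command -v apachectl >/dev/null 2>&1; then
  STACK="Apache/MySQL or Apache/MariaDB (tar-based)"
else
  STACK="not detected"
fi
echo "portal stack: $STACK"
```

If the result is "not detected", run the individual commands above on the portal node itself before choosing an upgrade path.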

Default installation directory


The upgrade process assumes that the portal was installed at:

● 4.17.05 and later: /opt/apigee/apigee-drupal/wwwroot
● Prior to 4.17.05: /opt/apigee/apigee-drupal (NGINX) or
/var/www/html (Apache)

If you did not install the portal in the default directory, modify the
paths in the procedure below to use your installation directory.

Supported upgrade versions


This upgrade procedure is supported on portal versions OPDK-17-01.x
and later.

To determine your portal version, open the following URL in a
browser:

http://yourportal.com/buildInfo

NOTE: In 4.17.05 this link was removed. To determine the version, open the Reports > Status report page; the
version appears in the table in the row named Install profile.

NOTE: If nothing is displayed, or a version other than those listed above is displayed, then you cannot use this upgrade process. For
information on upgrading from any other portal version, contact Apigee Support.

NOTE: If for some reason you removed the buildInfo file, then you can determine the version from the
profiles/apigee/apigee.info file. The default install location is /var/www/html/profiles/apigee/apigee.info, but you
might have changed it at install time if you installed the portal in a directory other than the default. The file
contains a line in the form below containing the version:
version = "OPDK-4.17.05"
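Reading the version line from apigee.info is a one-line grep. In this sketch a sample file is created in /tmp so the command can be shown end to end; on a real node, grep the installed profiles/apigee/apigee.info instead.

```shell
# Read the version line from apigee.info. The sample file below is
# created only for illustration; use your real install path on a node.
INFO=/tmp/apigee.info
echo 'version = "OPDK-4.17.05"' > "$INFO"
grep '^version' "$INFO"
```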

Before you update


For existing installations, if you have modified any code in Drupal
core or in any non-custom modules, your modifications will be
overwritten. This includes, among other things, any changes you may
have made to .htaccess. You should assume that anything outside
the /sites directory is owned by Drupal. An exception to this rule is
robots.txt; if this file exists in the web root, it will be preserved for
you.

Before proceeding with the installation, make a backup of your entire
Drupal web root directory. After performing the installation steps
described below, you can restore your customizations from the
backup.

Upgrade a portal using RPMs


To update the portal RPM on a node:

1. Change to the Drupal directory, /opt/apigee/apigee-


drupal by default:
2. cd /opt/apigee/apigee-drupal
3. Back up your Drupal database instance. The pg_dump
command creates a copy of the database:

pg_dump --dbname=devportal --host=host_IP_address --username=drupaladmin


4. --password --format=c > /tmp/portal.dmp
5. Where:
a. devportal is the database name as specified by the
PG_NAME property in the portal installation
configuration file.
b. host_IP_address is the IP address of the portal node.
c. drupaladmin is the Postgres username used by the
portal to access the database as specified by the
DRUPAL_PG_USER property in the portal installation
configuration file.
6. You are prompted for the Postgres user password as defined
by the DRUPAL_PG_PASS property in the portal installation
configuration file.
If you later want to restore from the backup, use the following
command:
7. pg_restore --clean --dbname=devportal --host=localhost --
username=apigee < /tmp/portal.dmp
8. Make a backup of your entire Drupal web root directory. The
default install location is /opt/apigee/apigee-drupal,
but you might have changed it.
If you are unsure of the location of this directory, use the
drush status command or the Configuration > Media > File
entry in the Drupal menu to determine the location of the
public file system and private file system path (for the next
step).
TIP: Before backing up, check if the target of your backup is a soft link
(or symbolic link) to the actual files. To do this, look for the file type l
when you list the contents of a directory. In the following example, the
second and fourth directories are symbolic links (the file type is
indicated at the beginning of the permissions list):

ls -l /opt/apigee

drwxr-xr-x. 16 apigee apigee 195 Jan 29 15:49 edge-qpid-server-4.19.06-0.0.20013

lrwxrwxrwx. 1 apigee apigee 41 Jan 12 08:18 edge-router -> /opt/apigee/edge-router-4.19.06-0.0.20013

drwxr-xr-x. 15 apigee apigee 195 Jan 29 15:50 edge-router-4.19.06-0.0.20013

9. lrwxrwxrwx. 1 apigee apigee 37 Jan 9 17:13 edge-ui ->


/opt/apigee/edge-ui-4.19.06-0.0.20115
10. If the target you want to back up is a soft link, then you should
dereference the links to back up the actual content. How you do this
depends on the tool that you use to perform the back up. For example,
if you use tar, use the -h option, as the following example shows:
11. tar -hcfz /etc/my-drupal-backup.tar.gz /opt/apigee/apigee-drupal
12.
13. Make a backup of the files in /opt/apigee/data/apigee-
drupal-devportal/private.
14. Set Drupal to maintenance mode:
a. Select Configuration in the Drupal menu.
b. On the Configuration page, select Maintenance mode
under Development.
c. Select the Put site into maintenance mode box.
d. Enter a message users see during maintenance.
e. Select Save configuration.
15. Disable SELinux as described in Install the Edge apigee-setup
utility.
16. Change to the /opt directory:
17. cd /opt
18. For an upgrade on a server with an Internet connection:
a. Download the Edge 4.51.00 bootstrap_4.51.00.sh
file to /tmp/bootstrap_4.51.00.sh:
b. curl https://software.apigee.com/bootstrap_4.51.00.sh
-o /tmp/bootstrap_4.51.00.sh
c. Install the Edge 4.51.00 apigee-service utility and
dependencies:
d. sudo bash /tmp/bootstrap_4.51.00.sh
apigeeuser=uName apigeepassword=pWord
e. Where uName and pWord are the username and
password you received from Apigee. If you omit pWord,
you will be prompted to enter it.
By default, the installer checks to see that you have
Java 1.8 installed. You can use the "C" option to
continue without installing Java.
19. For an upgrade on a server with no Internet connection:
a. Create a local 4.51.00 repo as described in Create a
local Apigee repository.NOTE: If you already have an
existing repo, you can add the 4.51.00 repo to it as described in
"Update a local Apigee repository" at Install the Edge apigee-
setup utility.
b. To install apigee-service from a .tar file:
i. On the node with the local repo, use the
following command to package the local repo
into a single .tar file named
/opt/apigee/data/apigee-mirror/apige
e-4.51.00.tar.gz:
ii. /opt/apigee/apigee-service/bin/apigee-service
apigee-mirror package
iii. Copy the .tar file to the node where you want to
update Edge. For example, copy it to the /tmp
directory on the new node.
iv. On the new node, untar the file to the /tmp
directory:
v. tar -xzf apigee-4.51.00.tar.gz
vi. This command creates a new directory, named
repos, in the directory containing the .tar file. For
example /tmp/repos.
vii. Install the Edge apigee-service utility and
dependencies from /tmp/repos:
viii.sudo bash /tmp/repos/bootstrap_4.51.00.sh
apigeeprotocol="file://"
apigeerepobasepath=/tmp/repos
ix. Notice that you include the path to the repos
directory in this command.
c. To install apigee-service using the NGINX webserver:
i. Configure the NGINX web server as described in
"Install from the repo using the NGINX
webserver" at Install the Edge apigee-setup
utility.
ii. On the remote node, download the Edge
bootstrap_4.51.00.sh file to
/tmp/bootstrap_4.51.00.sh:

/usr/bin/curl http://uName:pWord@remoteRepo:3939/bootstrap_4.51.00.sh

iii. -o /tmp/bootstrap_4.51.00.sh
iv. Where uName and pWord are the username and
password you set above for the repo, and
remoteRepo is the IP address or DNS name of
the repo node.
v. On the remote node, install the Edge apigee-
service utility and dependencies:

sudo bash /tmp/bootstrap_4.51.00.sh apigeerepohost=remoteRepo:3939

vi. apigeeuser=uName apigeepassword=pWord


apigeeprotocol=http://
vii. Where uName and pWord are the repo username
and password.
20. Use apigee-service to update the apigee-setup utility:
21. /opt/apigee/apigee-service/bin/apigee-service apigee-setup
update
22. Run the update utility on your Postgres node:
23. /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
24. Where configFile is the configuration file that you used to
install the Postgres database. The only requirement on the
config file is that the configuration file must be accessible or
readable by the "apigee" user.
25. Remove the PHP RPMs but not the Apigee Drupal Devportal
RPM dependencies by executing the following command:
26. rpm -ev --nodeps $(rpm -qa | grep php | awk '{printf "%s ", $1}')
27. This command does the following:
a. rpm -ev --nodeps removes the RPMs but not their
dependencies.
b. rpm -qa builds the list of RPMs to remove.
c. grep php searches for all PHP RPMs.
d. awk '{printf "%s ", $1}' prints out the RPM
names.
29. TIP: To validate which RPMs will be removed, you can execute just the
following portion of the command:
30. rpm -qa | grep php | awk '{printf "%s ", $1}'
31. To improve the appearance of the output, you can change the
formatting string to print out a newline after each RPM, as the
following example shows:
32. rpm -qa | grep php | awk '{printf "%s\n", $1}'
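To see what the pipeline does without touching a real system, you can run it against a hypothetical package list (these names are made up for illustration; real input comes from rpm -qa):

```shell
# Illustration only: a made-up package list standing in for `rpm -qa` output.
pkgs='php-cli-5.4.16
httpd-2.4.6
php-common-5.4.16'

# Same pipeline as above, printing a newline after each RPM name.
to_remove=$(printf '%s\n' "$pkgs" | grep php | awk '{printf "%s\n", $1}')
echo "$to_remove"
```

Only the two PHP packages survive the grep; httpd is left alone.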
34. Run the update utility on your node to update the portal:
35. /opt/apigee/apigee-setup/bin/update.sh -c dp -f configFile
36. Where configFile is the configuration file that you used to
install the portal. The only requirement on the config file is
that the configuration file must be accessible or readable by
the "apigee" user.
37. Run Drupal's update.php script by opening the following
URL in a browser window:
38. http://portal_IP_DNS:8079/update.php
39. Disable maintenance mode:
a. Select Configuration in the Drupal menu.
b. On the Configuration page, select Maintenance mode
under Development.
c. Deselect the Put site into maintenance mode box.
d. Select Save configuration.

Note that the root directory after the update is:

/opt/apigee/apigee-drupal/wwwroot

The upgrade is now complete. If the Apigee update utility


downgraded your version of Drupal, you might need to re-run the
Drupal upgrade utility. For more information, see Re-run the Drupal
upgrade.

Re-run the Drupal upgrade


If running the Apigee update utility to upgrade Edge for Private
Cloud actually results in a downgrade of your Drupal version, reinstall
the Drupal upgrade. This might be the case if you upgraded only
Drupal in between Private Cloud updates.

For example:

1. You were running version 4.18.05 of Edge for Private Cloud,


which included Drupal 7.59.
2. You upgraded Drupal to 7.64 due to a required security
update.
3. You are now upgrading Private Cloud to 4.19.01, which
includes Drupal 7.61.

As this case illustrates, the Drupal version used by the Apigee


update utility might not reference the latest Drupal upgrade. As a
result, you must now re-run your Drupal upgrade to return your Drupal
installation to the later version.

NOTE: This applies to Private Cloud installations of Drupal only. For customers using a cloud deployment of the portal, Apigee
automatically applies all Drupal security updates.

Upgrade Drupal
NOTE: This upgrade is for your Private Cloud installation of Drupal only. For customers using a cloud deployment of the portal, Apigee
will automatically apply all Drupal security updates.

In an Edge for Private Cloud installation of Apigee Developer


Services portal (or simply, the portal), you might get a notification
that a new version of Drupal is available. A new version can mean a
Drupal feature release, patch, security update, or other type of Drupal
update. In the case of a security update, you want to upgrade your
installation of Drupal as soon as possible to ensure that your site
remains secure.

Upgrade Drupal core


The procedure below describes how to update a Private Cloud
installation of Drupal 7.x.y to another minor version (for example,
Drupal 7.54 to 7.59).

Please note the following:

● This procedure only updates your installation of Drupal. It
does not update the Apigee software that ships as part of the
portal. For information on upgrading the Apigee portal
software, see Upgrade the portal.
● If during a Private Cloud upgrade (for example, from 4.18.05
to 4.19.01), the Apigee update utility actually downgrades your
version of Drupal, you might need to re-run the Drupal upgrade
utility. For more information, see Re-run the Drupal upgrade.
● You must execute the Drush (Drupal Shell) commands from
the root directory of the portal site. By default, the portal is
installed at:
● /opt/apigee/apigee-drupal/wwwroot (NGINX)
● /var/www/html (Apache)
● The procedure below assumes an NGINX server installation at
the default location above.

Determine your current Drupal version

Before you start the Drupal update, you can determine your current
Drupal version by running the following command from the Drupal
installation folder. By default, Drupal is installed in
/opt/apigee/apigee-drupal/wwwroot:

cd /opt/apigee/apigee-drupal/wwwroot
drush status | grep 'Drupal version'

You should see output in the form:

Drupal version : 7.54

If you installed Drupal in a directory other than


/opt/apigee/apigee-drupal/wwwroot, make sure to change
to that directory before running the drush command.
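If a script needs just the bare version number, you can trim the label and whitespace from that output. The sample line below is a stand-in for the real drush output:

```shell
# Sketch: extract the version number from the drush status line.
# The sample stands in for `drush status | grep 'Drupal version'`.
sample='Drupal version                :  7.54'
version=$(echo "$sample" | awk -F: '{gsub(/ /, "", $2); print $2}')
echo "$version"
```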

Update the Drupal version

This section describes how to use Drush commands from a


command line to update your Drupal version. See also Updating
Drupal Using Drush (Drupal.org).

NOTE: If your portal is on a server with no external internet connection, then:

1. Download the Drupal installation to a server with internet access.


2. Copy the drupal-x.y directory to your portal server.
3. Follow the steps described in the README.txt file in the Drupal
documentation.

During the upgrade of Drupal core, the instructions tell you to remove all core
files and directories. However, do not delete the /profiles/apigee
directory because this is where the Apigee Drupal Devportal distribution is
located.

To update your Drupal installation:

1. Change to the /opt/apigee/apigee-drupal/wwwroot


directory, or the directory where you installed the portal.
2. Make a full backup of all files, directories, and databases.
Save the backup in a location outside of the Drupal
installation. For complete instructions, see Back up the portal.
If you made modifications to files such as .htaccess,
robots.txt, or defaults.settings.php (in the sites
directory), you will have to reapply the changes after the
update. You will also need to reapply any customizations
made in the sites/all directory.
3. Put your site into maintenance mode:
4. drush vset --exact maintenance_mode 1
5. drush cache-clear all
6. Install the desired version of Drupal by using the following
command:
7. drush pm-update drupal-version
8. Where version is the desired version.
Alternatively, you can run drush pm-update drupal to
update to the latest Drupal core version. You can run drush
pm-updatestatus to list available minor updates to Drupal
core and contrib projects.
9. Reapply any changes made to .htaccess, robots.txt, or
defaults.settings.php (in the sites directory).
10. Reapply any changes made to the sites/all directory.
11. Take your site out of maintenance mode:

12. drush vset --exact maintenance_mode 0


13. drush cache-clear all
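The Drush steps above can be sketched as a single dry-run script. The run() wrapper only echoes each command here; replacing its body with "$@" would execute the commands for real (assuming Drush is installed and you are in the portal root):

```shell
# Dry-run sketch of the update sequence above.
run() { echo "+ $*"; }   # swap the body for "$@" to actually execute

plan=$(
  run drush vset --exact maintenance_mode 1
  run drush cache-clear all
  run drush pm-update drupal
  run drush vset --exact maintenance_mode 0
  run drush cache-clear all
)
echo "$plan"
```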

Upgrade PHP and Drupal contrib modules


When you upgrade Drupal using the above instructions in this
section, modules used by Drupal such as contrib and PHP are also
upgraded. However, you should keep up with the latest Drupal
modules in between Private Cloud releases.

TIP: To be notified about Drupal security releases, subscribe to Drupal security announcements

Note that if the module is in


/profiles/apigee/modules/contrib, you can replace it with a
newer version of that module by storing the newer version in
/sites/all/modules/contrib. Edge for Private Cloud uses the
newer version in /sites/all/modules/contrib rather than the
older version in /profiles/apigee/modules/contrib. For
more information, see Updating modules (Drupal.org).

If you install a new Private Cloud version that includes a more recent
version of the module previously stored in
/sites/all/modules/contrib, remove the module from
/sites/all/modules/contrib. For more information, see
Moving modules and themes (Drupal.org).

What if I encounter an issue during the update?


Restore your site to its previous state using the backup files that you
created. Contact Apigee Edge Support and provide any error
messages that were reported during the update.
Convert a tar-based portal to an RPM-based portal
The 4.18.05 release of Apigee Developer Services portal (or simply,
the portal) does not let you update a previous tar-based version of
the portal. You can only directly update an RPM-based version of the
portal to 4.18.05.

However, you can convert a tar-based version of the portal to an


4.18.05 RPM-based instance of the portal. As part of this process,
you migrate the MySQL/MariaDB of the existing portal to a Postgres
database. Once converted, your portal remains as an RPM-based
portal.

You can migrate many previous versions of the tar-based portal to an


RPM-based portal, including versions 4.16.09 and 4.17.01, not just
version 4.17.05. The only requirement is that the portal is running
Drupal 7 or later. To check your version of Drupal, select Reports >
Status Reports in the Drupal menu. The version of Drupal appears in
the first row of the output.

The high-level steps that you use to migrate from a tar-based portal
to an RPM-based portal are:

● Install the RPM-based 4.18.05 version of the portal on a new


node.
● Create a new Postgres database on the RPM-based portal.
● Migrate the portal database from the tar-based portal.
● Copy all accessory files from the tar-based portal to the RPM-
based portal.
● Update DNS entries to point to the new RPM-based portal.
Note that the RPM-based version of the portal uses port 8079
by default, while the tar-based version uses port 80. Make
sure you use the correct port number in your DNS entry. See
Set the HTTP port used by the portal for information on using
a different port.

New default installation directory after conversion

After updating an installation that now uses NGINX/Postgres, the


root directory changed from:

/opt/apigee/apigee-drupal

to:

/opt/apigee/apigee-drupal/wwwroot

Portal conversion procedure

To convert the portal to an RPM-based installation:

1. Install the RPM-based 4.18.05 version of the portal on a


different node from your tar-based portal.
2. On the RPM-based portal, create a new Postgres database.
Later, you migrate the database from the tar-based portal to
this new database:
a. Log in to psql:
b. psql -h localhost -p 5432 -U apigee
c. Enter your Postgres password as defined by the
PG_PWD property in the portal config file.
d. Create a new Postgres database:
e. CREATE DATABASE newportaldb;
f. Exit psql:
g. \q
3. On the tar-based portal, remove old modules that are no
longer used:

cd /var/www/html

drush sql-query --db-prefix "DELETE from {system} where name =


'apigee_account' AND type = 'module';"

drush sql-query --db-prefix "DELETE from {system} where name =


'apigee_checklist' AND type = 'module';"

4. drush sql-query --db-prefix "DELETE from {system} where name =


'apigee_sso_ui' AND type = 'module';"
5. On the tar-based portal, Install and configure the Migrator
Drupal module:
a. cd /tmp
b. wget
https://ftp.drupal.org/files/projects/dbtng_migrator-
7.x-1.4.tar.gz
c. gunzip /tmp/dbtng_migrator-7.x-1.4.tar.gz
d. tar -xvf /tmp/dbtng_migrator-7.x-1.4.tar --directory
/var/www/html/sites/all/modules
e. Log in to the portal as an admin.
f. Select Modules in the Drupal menu.
g. Enable the DBTNG Migrator module.
h. Save the configuration.
6. On the tar-based portal, edit
/var/www/html/sites/default/settings.php to add
a second database configuration pointing to the newly created
database on the RPM-based portal. The current database
configuration is named "default". Name your new
configuration "custom", as the following example shows:

$databases = array (
  'default' =>
  array (
    'default' =>
    array (
      'database' => 'devportal',
      'username' => 'devportal',
      'password' => 'devportal',
      'host' => 'localhost',
      'port' => '',
      'driver' => 'mysql',
      'prefix' => '',
    ),
  ),
  'custom' =>
  array (
    'default' =>
    array (
      'database' => 'newportaldb',
      'username' => 'apigee',
      'password' => 'postgres',
      'host' => '192.168.168.100',
      'port' => '5432',
      'driver' => 'pgsql',
      'prefix' => '',
    ),
  ),
);
8. Where host and port specify the IP address and port of the
Postgres server. Postgres uses port 5432 for connections.
9. On the tar-based portal, install the Postgres driver:
a. Use Yum to install the driver:
b. yum install php-pdo_pgsql
c. Edit /etc/php.ini to add the following line anywhere
in the file:
d. extension=pgsql.so
e. Restart Apache:
f. service httpd restart
10. On the tar-based portal, migrate the portal database to the
RPM-based portal:
a. Log in to the portal as an admin.
b. Select Structure->Migrator in the Drupal menu.
c. Choose your origin database on the tar-based portal,
default, and the destination database, custom,
based on settings.php file shown above.
d. Click Migrate. The tar-based database is migrated to
the RPM-based database.
11. Copy the sites directory from the tar-based server to the
RPM-based server. The paths shown in the following steps are
based on default paths. Modify them as necessary for your
installation.
a. On the tar-based portal, bundle the
/var/www/html/sites directory:

cd /var/www/html/sites

b. tar -cvzf /tmp/sites.tar.gz .


c. Copy /tmp/sites.tar.gz to
/opt/apigee/apigee-drupal/wwwroot/sites
on the RPM-based server.
d. Unbundle the sites directory, but do not overwrite
important files:
i. Back up the settings.php file:

sudo cp /opt/apigee/apigee-drupal/wwwroot/sites/default/settings.php /opt/apigee/apigee-drupal/wwwroot/sites/default/settings.bak.php

ii. Back up the existing files directory:

sudo mv /opt/apigee/apigee-drupal/wwwroot/sites/default/files /opt/apigee/apigee-drupal/wwwroot/sites/default/files_old

iii. Back up the existing sites directory:

tar -cvzf /tmp/sites_old.tar.gz /opt/apigee/apigee-drupal/wwwroot/sites

iv. Unzip and untar the sites directory from the
tar-based server:

gunzip /opt/apigee/apigee-drupal/wwwroot/sites/sites.tar.gz
tar -xvf /opt/apigee/apigee-drupal/wwwroot/sites/sites.tar

v. Make sure that the copied files have proper
ownership:

chown -R apigee:apigee /opt/apigee/apigee-drupal/wwwroot/sites/

vi. Restore the settings.php file:

sudo cp /opt/apigee/apigee-drupal/wwwroot/sites/default/settings.bak.php /opt/apigee/apigee-drupal/wwwroot/sites/default/settings.php

vii. Move private files to the new location:

cp -r /opt/apigee/apigee-drupal/wwwroot/sites/default/files/private/* /opt/apigee/data/apigee-drupal-devportal/private
rm -rf /opt/apigee/apigee-drupal/wwwroot/sites/default/files/private
chown -R apigee:apigee /opt/apigee/data/apigee-drupal-devportal/private
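The bundle-and-unbundle workflow above can be exercised safely in temporary directories before touching the real portal paths. This round trip is illustration only; the paths and file content are made up:

```shell
# Minimal tar round trip mirroring the bundle/unbundle steps above,
# using temporary directories instead of the real portal paths.
src=$(mktemp -d); dst=$(mktemp -d)
echo 'site data' > "$src/settings.php"
tar -czf "$dst/sites_demo.tar.gz" -C "$src" .   # bundle the source tree
mkdir "$dst/restore"
tar -xzf "$dst/sites_demo.tar.gz" -C "$dst/restore"   # unbundle elsewhere
restored=$(cat "$dst/restore/settings.php")
echo "$restored"
rm -rf "$src" "$dst"
```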
12. On the tar-based portal, only if you changed the path to the
web root directory on the tar-based portal from the default
path of /var/www/html: run drush status and look at
files path and private files path:

cd /var/www/html

13. drush status


14. If files/private files are not under the sites directory, copy
them to the RPM-based server as shown above.
15. On the RPM-based portal, update /opt/apigee/apigee-
drupal/wwwroot/sites/default/settings.php to set
the properties of the default database:
16. vi
/opt/apigee/apigee-drupal/wwwroot/sites/default/settings.ph
p
17. Set the default database description in settings.php:

$databases = array (
  'default' =>
  array (
    'default' =>
    array (
      'database' => 'newportaldb',
      'username' => 'apigee',
      'password' => 'postgres',
      'host' => 'localhost',
      'port' => '5432',
      'driver' => 'pgsql',
      'prefix' => '',
    ),
  ),
);
19. Where database specifies the new database you created,
username and password are as defined for the custom
database on the tar-based portal, and prefix is empty.
20. On the RPM-based portal, the RPM-based version of the
portal contains fewer Drupal modules than the tar-based
version. After you migrate to the RPM-based portal, you must
check for any missing modules and install them as necessary.
a. Install the Drupal missing_module used to detect
missing modules:

cd /opt/apigee/apigee-drupal/wwwroot

drush dl missing_module

b. drush en missing_module
c. Log in to the RPM-based portal as an admin.
d. Select Reports > Status reports in the Drupal menu and
check for any missing modules.
e. Use that report to install any missing modules, or use
the following commands:

cd /opt/apigee/apigee-drupal/wwwroot

drush dl <moduleA> <moduleB> ...

f. drush en <moduleA> <moduleB> ...


g. After you enable all the modules, make sure the files
are owned by the apigee user:
h. chown -LR apigee:apigee
/opt/apigee/apigee-drupal/wwwroot
i. For more on file permissions, see
https://www.drupal.org/node/244924.
21. On the RPM-based portal, run update.php in a browser to
remove any errors on missing modules:
a. Log in to the RPM-based portal as an admin.
b. In the browser, navigate to the following URL:
c. http://portal_IP_or_DNS:8079/update.php
d. Where portal_IP_or_DNS is the IP address or domain
name of the RPM-based portal.
e. Follow the screen prompts.
22. Update DNS entries to point to your new RPM-based portal.

Note that the RPM-based version of the portal uses port 8079
by default, while the tar-based version uses port 80. Make
sure you use the correct port number in your DNS entry. See
Set the HTTP port used by the portal for information on using
a different port.

The conversion is complete.

When the contents of the file settings.php for a new RPM-based portal are
different from those for a tar-based portal, you need to update the file

/opt/apigee/apigee-drupal-devportal-opdk_version/source/conf/settings.php

with the new contents to prevent it from being restored to the old settings.

APPENDICES
How-to guides
The how-to guides provide detailed instructions to help Apigee Edge
Public Cloud and Private Cloud users accomplish the following
tasks:

● Configure various Apigee Edge resources and properties.


● Verify that the resources and properties have been configured
properly.
● Share best practices (where applicable) that can be followed
while configuring the resources and properties.

The following sections describe the current how-to guides:

Configuration

HTTP properties

● Configuring ignore allow header for 405 property in Message Processors: How to configure the ignore allow header for 405 property in Apigee Edge's Message Processors. (Edge Private Cloud users only)
● Configuring Message Processors to allow duplicate headers: How to eliminate sending duplicate headers and how to configure Message Processors to allow duplicate headers. (Edge Private Cloud users only)

JVM parameters

● Configuring heap memory size on the Message Processors: How to configure the heap memory size on Apigee Edge's Message Processors. (Edge Private Cloud users only)
● Configuring heap memory size on the Qpid servers: How to configure the heap memory size on Apigee Edge's Qpid servers. (Edge Private Cloud users only)
● Enabling G1GC on the Message Processors: How to enable Garbage First Garbage Collector (G1GC) on Apigee Edge's Message Processors. (Edge Private Cloud users only)
● Enabling String Deduplication on the Message Processors: How to enable String Deduplication on Apigee Edge's Message Processors. (Edge Private Cloud users only)

Log rotation

● Enable log rotation for edge-message-processor.log: How to enable log rotation for /opt/apigee/var/log/edge-message-processor/edge-message-processor.log logs on the Edge Message Processors. (Edge Private Cloud users only)
● Enable log rotation for edge-router.log: How to enable log rotation for /opt/apigee/var/log/edge-router/edge-router.log logs on the Edge Routers. (Edge Private Cloud users only)

SNI

● Configuring SNI between Edge Message Processor and the backend server: How to enable and disable SNI on Message Processors. (Edge Private Cloud users only)

Timeout properties

● Best practices for configuring I/O timeout: Describes best practices for configuring the I/O timeout property on various components through which the API requests flow in Apigee Edge. (Edge Public and Private Cloud users)
● Configuring I/O timeout on Message Processors: How to configure the I/O timeout property in API Proxy and Apigee Edge's Message Processor component. (Edge Public and Private Cloud users)
● Configuring I/O timeout on Routers: How to configure the I/O timeout property in Virtual Host and Apigee Edge's Router component. (Edge Public and Private Cloud users)
● Configuring connection timeout on Message Processors: How to configure the connection timeout in API Proxy and Apigee Edge's Message Processor component. (Edge Public and Private Cloud users)
● Configuring keep alive timeout on Message Processors: How to configure the keep alive timeout in API Proxy and Apigee Edge's Message Processor component. (Edge Public and Private Cloud users)

TLS certificate

● Converting certificates to supported format: How to convert the TLS certificate and associated private key to the PEM or the PFX (PKCS #12) formats. (Edge Public and Private Cloud users)
● Validating certificate chain: How to validate a certificate chain before you upload the certificate to a keystore or a truststore in Apigee Edge. (Edge Public and Private Cloud users)
● Validating certificate purpose: How to validate a certificate's purpose before you upload the certificate to a keystore or a truststore in Apigee Edge. (Edge Public and Private Cloud users)
● Validating client certificate against truststore: How to verify that the correct client certificates have been uploaded to Apigee Edge Routers. (Edge Private Cloud users only)

Virtual host

● Configuring cipher suites on virtual hosts and Routers: How to configure cipher suites on virtual hosts and Routers in Apigee Edge. (Edge Public and Private Cloud users)

Operations

● Restarting Routers and Message Processors without traffic impact: How to restart Routers and Message Processors (MPs) without impacting incoming API traffic. (Edge Private Cloud users only)
● Downgrading Apigee Components and NGINX: How to check whether you need to downgrade, and how to downgrade Apigee components, if necessary. (Edge Private Cloud users only)

Configuring ignore allow header for 405 property in Message Processors

Note: This document is applicable for Edge Private Cloud users only.

In client-server communication, a server responds with
HTTP status code 405 Method Not Allowed if the HTTP request
method presented by the client is known to the server but is not
supported by the target resource. Similarly, in Apigee Edge, the
backend server can respond with the HTTP status code 405
Method Not Allowed.

Apigee Edge expects the backend server to send 405 Method Not
Allowed responses with the list of allowed methods in the Allow
header, as per specification RFC 7231, section 6.5.5: 405 Method Not
Allowed.

The Allow header must be sent in the following format:

Allow: HTTP_METHODS

For example, if your backend server allows GET, POST and HEAD
methods, you need to ensure that the Allow header contains them
as follows:

Allow: GET, POST, HEAD
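A quick way to see whether a backend's 405 response meets this expectation is to inspect its headers for an Allow line. The response text below is a hypothetical sample, such as you might capture with curl -si:

```shell
# Sketch: check sample 405 response headers for the Allow header
# Apigee expects. The here-string is made-up illustrative data.
response='HTTP/1.1 405 Method Not Allowed
Allow: GET, POST, HEAD
Content-Length: 0'

if printf '%s\n' "$response" | grep -qi '^Allow:'; then
  check="Allow header present"
else
  check="Allow header missing"
fi
echo "$check"
```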

If the backend server does not send the Allow header with the HTTP
status code 405 Method Not Allowed, then Apigee returns
HTTP status code 502 Bad Gateway with error code
protocol.http.Response405WithoutAllowHeader to the
client application. The recommended solution to address this error is
to fix the backend server to adhere to the specification RFC 7231,
section 6.5.5: 405 Method Not Allowed or use the fault handling to
respond back with HTTP status code 405 Method Not Allowed
including the Allow header as explained in the troubleshooting
playbook 502 Bad Gateway - Response 405 without Allow header.

However, in some exceptional cases, it may not be possible to fix


your backend or modify your API Proxy to address this issue
immediately.

In such cases, you can set the ignore allow header for the 405
property HTTP.ignore.allow_header.for.405 at the Message
Processor level temporarily. Setting this property to true prevents
Apigee from returning the 502 Bad Gateway response to client
applications even if the backend server sends HTTP status code 405
Method Not Allowed without the Allow header.

Once you are in a position to fix your backend server to send HTTP
status code 405 Method Not Allowed with the Allow header,
you can revert the property
HTTP.ignore.allow_header.for.405 to its default value
false.

Note: Since setting the property HTTP.ignore.allow_header.for.405 to


true at the Message Processor level is not a recommended solution, it is
advised to use this solution only if you have ascertained that you are unable to
use the recommended solutions to solve the error 502 Bad Gateway with
error code protocol.http.Response405WithoutAllowHeader
described in 502 Bad Gateway - response 405 without Allow header.

Before you begin


Before you use the steps in this document, be sure you understand
the following topics:

● Read the Playbook - 502 Bad Gateway - response 405 without


Allow header.
● If you aren’t familiar with configuring properties for Edge on
Private Cloud, read How to configure Edge.

Configuring ignore allow header for 405 property to true on Message Processors

In Apigee Edge, the property


HTTP.ignore.allow_header.for.405 is set to false by
default. This enables Apigee Edge to return 502 Bad Gateway
with error code
protocol.http.Response405WithoutAllowHeader to the
client applications if the backend server sends HTTP status code
405 Method Not Allowed without the Allow header. If you want
to prevent Apigee Edge from sending 502 Bad Gateway to client
applications, then you need to set the value of the property
HTTP.ignore.allow_header.for.405 to true on the Message
Processors.

This section explains how to configure the property


HTTP.ignore.allow_header.for.405 to true on the Message
Processors, using the token as per the syntax described in How to
configure Edge.

Note: If you modify the property HTTP.ignore.allow_header.for.405 to


true at the Message Processor level, then it is important to note that it affects
all the API proxies associated with the organizations and environments served
by the specific Message Processors.

1. On the Message Processor machine, open the following file in


an editor. If it does not already exist, then create it.
2. /opt/apigee/customer/application/message-
processor.properties
3. For example, to open the file using vi, enter the following:
4. vi /opt/apigee/customer/application/message-
processor.properties
5. Add a line in the following format to the properties file:
6. conf_http_HTTP.ignore.allow_header.for.405=true
7. Save your changes.
8. Ensure the properties file is owned by the apigee user as
shown below:

chown apigee:apigee /opt/apigee/customer/application/message-


processor.properties

9.
10. Restart the Message Processor as shown below:

/opt/apigee/apigee-service/bin/apigee-service edge-message-
processor restart

11.
12. If you have more than one Message Processor, repeat the
above steps on all the Message Processors.
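The edit the steps above make to the properties file can be previewed against a temporary file before touching the real /opt/apigee/customer/application/message-processor.properties (the chown and restart steps still apply to the real file):

```shell
# Sketch: apply the property edit to a temporary stand-in file and
# confirm the line landed as expected.
props=$(mktemp)
echo 'conf_http_HTTP.ignore.allow_header.for.405=true' >> "$props"
line=$(grep 'HTTP.ignore.allow_header.for.405' "$props")
echo "$line"
rm -f "$props"
```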

Verifying ignore allow header for 405 property is set to true on Message Processors

This section explains how to verify that the property


HTTP.ignore.allow_header.for.405 has been successfully
updated to true on the Message Processors.

Even though you use the token


conf_http_HTTP.ignore.allow_header.for.405 to update
the value of the property on the Message Processor, you need to
verify if the actual property
HTTP.ignore.allow_header.for.405 has been set to true.

1. On the Message Processor machine, search for the property


HTTP.ignore.allow_header.for.405 in the
/opt/apigee/edge-message-processor/conf directory
and check to see if it has been set to true as shown below:

grep -ri "HTTP.ignore.allow_header.for.405" /opt/apigee/edge-


message-processor/conf

2.
3. If the property is successfully updated on the Message
Processor, then the above command should show the value of
the property HTTP.ignore.allow_header.for.405 as
true in the http.properties file as shown below:
4. /opt/apigee/edge-message-processor/conf/
http.properties:HTTP.ignore.allow_header.for.405=true
5. If you still see the value for the property
HTTP.ignore.allow_header.for.405 as false then
verify that you have followed all the steps outlined in
Configuring ignore allow header for 405 property to true in
Message Processors correctly. If you have missed any step,
repeat all the steps again correctly.
6. If you are still not able to modify the property
HTTP.ignore.allow_header.for.405, then contact
Apigee Edge Support.

Configuring ignore allow header for 405 property to false on Message Processors

This section explains how to configure the property


HTTP.ignore.allow_header.for.405 to its default value
false on the Message Processor, using the token as per the syntax
described in How to configure Edge.

1. Verify if the property


HTTP.ignore.allow_header.for.405 is modified to
true. You can do this by searching for this property in the
/opt/apigee/edge-message-processor/conf directory
and checking its value using the following command:

grep -ri "HTTP.ignore.allow_header.for.405" /opt/apigee/edge-


message-processor/conf

2.
3. If the property is set to true on the Message Processor, then
the above command should show the value of the property
HTTP.ignore.allow_header.for.405 as true in the
http.properties file as shown below:
4. /opt/apigee/edge-message-processor/conf/
http.properties:HTTP.ignore.allow_header.for.405=true
5. If the above command shows that the property
HTTP.ignore.allow_header.for.405 is set to false
(default value), then you don’t have to do anything else. That
is, skip the following steps.
6. If the property HTTP.ignore.allow_header.for.405 is
set to true, then perform the following steps to revert to its
default value false.
7. On the Message Processor machine, open the following file in
an editor:
8. /opt/apigee/customer/application/message-
processor.properties
9. For example, to open the file using vi, enter the following:

vi /opt/apigee/customer/application/message-processor.properties

10.
11. Remove the following line from the properties file:
12. conf_http_HTTP.ignore.allow_header.for.405=true
13. Save your changes.
14. Ensure the properties file is owned by the apigee user as
shown below:

chown apigee:apigee /opt/apigee/customer/application/message-


processor.properties

15.
16. Restart the Message Processor as shown below:

/opt/apigee/apigee-service/bin/apigee-service edge-message-
processor restart

17.
18. If you have more than one Message Processor, repeat the
above steps on all the Message Processors.

Verifying ignore allow header for 405 property is set to false on Message Processors

This section explains how to verify that the property


HTTP.ignore.allow_header.for.405 has been successfully
updated to false on the Message Processors.

Even though you use the token


conf_http_HTTP.ignore.allow_header.for.405 to update
the value on the Message Processor, you need to verify if the actual
property HTTP.ignore.allow_header.for.405 has been set to
false.
1. On the Message Processor machine, search for the property
HTTP.ignore.allow_header.for.405 in the
/opt/apigee/edge-message-processor/conf
directory and check to see if it has been set to false as
shown below:

grep -ri "HTTP.ignore.allow_header.for.405" /opt/apigee/edge-


message-processor/conf

2.
3. If the property is successfully updated on the Message
Processor, then the above command should show the value of
the property HTTP.ignore.allow_header.for.405 as
false in the http.properties file as shown below:
4. /opt/apigee/edge-message-processor/conf/
http.properties:HTTP.ignore.allow_header.for.405=false
5. If you still see the value for the property
HTTP.ignore.allow_header.for.405 as true, then
verify that you have followed all the steps outlined in
Configuring ignore allow header for 405 property to false on
Message Processors correctly. If you have missed any step,
repeat all the steps again correctly.
6. If you are still not able to modify the property
HTTP.ignore.allow_header.for.405, then contact
Apigee Edge Support.


Configuring Message Processors to allow duplicate headers
Note: This document is applicable for Edge Private Cloud users only.

As per the HTTP Specification RFC 7230, section 3.2.2: Field Order,
Apigee Edge expects that the HTTP request from the client or HTTP
response from the backend server do not contain the same header
being passed more than once with the same or different values,
unless the specific header has an exception and is allowed to have
duplicates.

By default, Apigee Edge allows duplicates and multiple values to be passed for most HTTP headers. However, it does not allow this for certain headers, which are listed in Headers that are not allowed to have duplicates and multiple values. Therefore:

● You will get 400 Bad Request with error code protocol.http.DuplicateHeader if the client sends an HTTP request with a particular header more than once, or with multiple values, for the HTTP headers that are not allowed to have duplicates or multiple values in Apigee Edge.
● Similarly, you will get 502 Bad Gateway with error code protocol.http.DuplicateHeader if the backend server sends an HTTP response with a particular header more than once, or with multiple values, for the HTTP headers that are not allowed to have duplicates or multiple values in Apigee Edge.

The recommended solution to address these errors is to fix the client application and backend server to not send duplicate headers, and to adhere to the specification RFC 7230, section 3.2.2: Field Order, as explained in the following troubleshooting playbooks:

● 400 Bad Request - Duplicate Header


● 502 Bad Gateway - Duplicate Header

However, in some cases you may want to add an exception to allow duplicates and multiple values for some HTTP headers. In such situations, you can allow duplicates and multiple values for a specific HTTP header by setting the property HTTPHeader.HEADER_NAME at the Message Processor level.

This document provides information about this property, explains how to enable it to avoid the above-mentioned errors, and shares related best practices.

HTTP header properties to allow duplicates and multiple values

Apigee Edge provides the following two properties to control whether duplicates and multiple values are allowed for HTTP headers. Note that these can be configured only on the Message Processors, using the token syntax explained in How to configure Edge.

Property name: HTTPHeader.ANY

Description: This property indicates whether duplicates or multiple values are allowed for all HTTP headers, including the custom headers sent as part of the HTTP request made by the client or the HTTP response sent by the backend server to Apigee Edge.

Default value: multiValued, allowDuplicate

Note: It is recommended NOT to modify this property.

Values allowed:

1. blank: Duplicates and multiple values for HTTP headers are not allowed.
2. multiValued: Split the multi-valued header into multiple headers. Multiple values are allowed for HTTP headers, but duplicates are not allowed. For example, with multiValued enabled, test-header=a,b is converted to test-header=a and test-header=b.
3. allowDuplicate: Allows multiple (duplicate) HTTP headers with the same name.
4. multiValued, allowDuplicate: Both multiple values and duplicates are allowed for HTTP headers.

Property name: HTTPHeader.HEADER_NAME

Description: This property is used to override the behaviour of a specific header from what is specified by HTTPHeader.ANY.

Note: You can modify this property as explained in this document.

Values allowed: Same as above.

Headers that are not allowed to have duplicates and multiple values

As explained earlier, Apigee Edge allows duplicates and multiple values for most HTTP headers by default. This is because the property HTTPHeader.ANY is configured with the value multiValued, allowDuplicate.

Configuration overwritten

For some specific headers, the default configuration is overwritten using one of the following methods:

● HTTPHeader.HEADER_NAME=multiValued, allowDuplicate
This configuration does not change the default behaviour. That is, the specific header is allowed to have duplicates and multiple values.
● HTTPHeader.HEADER_NAME=
This configuration changes the default behaviour. That is, the specific header is not allowed to have duplicates and multiple values.

Note: There are some headers which are already overwritten using one of the
above methods. Such headers are referred to as headers with pre-existing
config hereafter in this document.

Determining headers that are not allowed to have duplicates and multiple values

This section describes how to identify the following:

● The specific headers that are not allowed to have duplicates and multiple values on your Apigee Edge Private Cloud setup
● The specific headers with pre-existing config

1. On the Message Processor machine, search for the property HTTPHeader. in the /opt/apigee/edge-message-processor/conf directory as shown below:

grep -ri "HTTPHeader." /opt/apigee/edge-message-processor/conf


3. Sample output:

# grep -ri "HTTPHeader" /opt/apigee/edge-message-processor/conf
/opt/apigee/edge-message-processor/conf/http.properties:HTTPHeader.ANY=allowDuplicates, multiValued
/opt/apigee/edge-message-processor/conf/http.properties:HTTPHeader.Connection=allowDuplicates, multiValued
… <snipped>
/opt/apigee/edge-message-processor/conf/http.properties:HTTPHeader.Host=
/opt/apigee/edge-message-processor/conf/http.properties:HTTPHeader.Expires=
/opt/apigee/edge-message-processor/conf/http.properties:HTTPHeader.Date=allowDuplicates
<snipped>
7. As explained in the Configuration overwritten section, note the following information in the sample output above:
a. The HTTP header Connection is overwritten, but is allowed to have duplicates and multiple values.
b. The HTTP headers Host and Expires are overwritten and are not allowed to have duplicates and multiple values.
c. The HTTP header Date is overwritten and is allowed to have duplicates, but not multiple values.
d. All the headers that appear here (Connection, Host, Expires, and Date in the above sample) are referred to as headers with pre-existing config in this document.

Note: Using the above steps, you can determine which headers are not allowed to have duplicates and multiple values in your Private Cloud setup.
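The classification rules above can be sketched as a small shell helper that maps an HTTPHeader.* property value to its effect. The helper is illustrative and not part of Apigee; the sample values fed to it are taken from the grep output shown above.

```shell
# classify prints, for a given HTTPHeader.* property value, whether
# duplicates and multiple values are allowed for that header.
classify() {
  case "$1" in
    *allowDuplicate*multiValued*|*multiValued*allowDuplicate*)
      echo "duplicates and multiple values allowed" ;;
    *allowDuplicate*)
      echo "duplicates allowed, multiple values not split" ;;
    *multiValued*)
      echo "multiple values split, duplicates not allowed" ;;
    "")
      echo "duplicates and multiple values NOT allowed" ;;
  esac
}

# Values as seen in the sample grep output above:
classify "allowDuplicates, multiValued"   # HTTPHeader.Connection
classify ""                               # HTTPHeader.Host, HTTPHeader.Expires
classify "allowDuplicates"                # HTTPHeader.Date
```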

Behaviour of Apigee Edge

The following table describes the behaviour of Apigee Edge when headers are sent as duplicates or with multiple values, depending on how the HTTPHeader properties are configured on the Message Processors, using test-header as an example HTTP header.

Outgoing headers based on the value of HTTPHeader.test-header in conf/http.properties:

Incoming request: test-header=a,b (a single header with multiple values)

● HTTPHeader.test-header= (blank): The outgoing request contains test-header=a,b.
● HTTPHeader.test-header=multiValued: protocol.http.DuplicateHeader. Internally, test-header=a,b is split into test-header=a and test-header=b, and then the DuplicateHeader error is thrown because duplicates are not allowed.
● HTTPHeader.test-header=allowDuplicate, multiValued (DEFAULT): The outgoing request contains test-header=a,b. Internally, test-header=a,b is split into test-header=a and test-header=b, but the original form is sent to the target.
● HTTPHeader.test-header=allowDuplicate: The outgoing request contains test-header=a,b.

Incoming request: test-header=a and test-header=b (duplicate headers)

● HTTPHeader.test-header= (blank): protocol.http.DuplicateHeader.
● HTTPHeader.test-header=multiValued: protocol.http.DuplicateHeader.
● HTTPHeader.test-header=allowDuplicate, multiValued (DEFAULT): The outgoing request contains test-header=a and test-header=b.
● HTTPHeader.test-header=allowDuplicate: The outgoing request contains test-header=a and test-header=b.

Note: The property should be enabled only after careful consideration. To troubleshoot the protocol.http.DuplicateHeader error, refer to the following playbooks:

● 400 Bad Request - Duplicate Header
● 502 Bad Gateway - Duplicate Header

This guide should be followed only as an alternative approach. Wherever possible, it's advisable to follow the recommended approach in the above playbooks.

Before you begin

Before you use the steps in this document, be sure you understand how to configure properties for Edge on Private Cloud, as described in How to configure Edge.

Configuring allowDuplicates and multiple values for headers

Note: Make sure to test the steps in your non-production organizations and environments before applying the changes in production.

As explained in HTTP header properties to allow duplicates and multiple values, the value HTTPHeader.ANY=allowDuplicates, multivalued implies that all headers are allowed to have duplicates and multiple values in Apigee Edge. However, there are certain headers whose values are explicitly overwritten to not allow duplicate headers or multiple values, using the property HTTPHeader.HEADER_NAME.

This section explains how to configure the property HTTPHeader.HEADER_NAME to allow duplicates and multiple values for any such HTTP headers on the Message Processors, using the corresponding token as per the syntax described in How to configure Edge.

Note: If you modify the default value of the property HTTPHeader.HEADER_NAME on the Message Processors, it is important to note that this affects all the API proxies associated with the organizations and environments served by the specific Message Processors.

In this section, we will use Expires (and myheader) as example headers for which we want to allow duplicates and multiple values, as explained below:

1. Determine the current value of the property HTTPHeader.HEADER_NAME to make sure it is not already enabled to allow duplicates and multiple values, using the following command:

grep -ri "HTTPHeader.HEADER_NAME" /opt/apigee/edge-message-processor/conf
3. For example, if you are trying to set the property for the Expires header, then check the current value of the HTTPHeader.Expires property on the Message Processor:

grep -ri "HTTPHeader.Expires" /opt/apigee/edge-message-processor/conf
5. The output of the above command results in one of the following:
a. The property is set to blank. This implies that the value is overwritten (and this is a header with pre-existing config) to NOT allow duplicate headers and multiple values. That is, you are not allowed to send the Expires header more than once as part of the HTTP request or HTTP response to Apigee.
b. There are no hits for the specific property. This means that the value is not overwritten (and this is NOT a header with pre-existing config), so the specific header can be sent more than once (duplicates are allowed) as part of the HTTP request or HTTP response to Apigee Edge.
c. The property is set with the value allowDuplicates, multivalued. This means that the value is explicitly overwritten (and this is a header with pre-existing config), so the specific header can be sent more than once (duplicates are allowed) as part of the HTTP request or HTTP response to Apigee.
6. Sample output of the search command:

/opt/apigee/edge-message-processor/conf/http.properties:HTTPHeader.Expires=

8. The above sample output shows that the property HTTPHeader.Expires is set to blank. This means that the property is overwritten to not allow duplicate or multiple values for the header Expires.
9. If you notice that the property corresponding to the specific header is explicitly overwritten to not allow duplicate or multiple values, as in the example output above, only then perform the following steps. If it is not explicitly overwritten, skip the rest of the steps in this section.
10. Edit the following file. If it does not exist, you can create it:

/opt/apigee/customer/application/message-processor.properties

12. For example, to open the file using vi, enter the following:

vi /opt/apigee/customer/application/message-processor.properties

14. Add a line in the following format:

conf_http_HTTPHeader.Expires=allowDuplicates, multiValued

16. Save your changes.
17. Ensure that the properties file is owned by the apigee user. If it is not, execute the following command:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

19. Restart the Message Processor:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

21. To restart without traffic impact, see Rolling restart of Message Processors without traffic impact.
22. If you have more than one Message Processor, repeat the above steps on all the Message Processors.
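The steps above can be condensed into a short script. To keep the sketch runnable anywhere, it writes to a scratch directory; on a real Message Processor you would target /opt/apigee/customer/application/message-processor.properties, run the chown, and restart the component as shown in the steps. The Expires header and the /tmp path are just examples.

```shell
# Scratch location standing in for /opt/apigee/customer/application.
DIR="/tmp/apigee-demo"
PROPS="$DIR/message-processor.properties"
mkdir -p "$DIR"
touch "$PROPS"

TOKEN="conf_http_HTTPHeader.Expires"
VALUE="allowDuplicates, multiValued"

# Append the token only if it is not already present, so the script
# stays idempotent when run more than once.
grep -q "^${TOKEN}=" "$PROPS" || echo "${TOKEN}=${VALUE}" >> "$PROPS"

# Show the resulting line, much as the verification section's grep would.
grep "^${TOKEN}=" "$PROPS"
```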

Verifying the header is configured to have duplicates and multiple values

Note: The instructions in this section are for Edge for Private Cloud users only.

This section explains how to verify that the property HTTPHeader.HEADER_NAME for a specific header has been updated successfully to allow duplicates on the Message Processors.

We will use Expires as an example header and check whether the corresponding property HTTPHeader.Expires has been updated.

Even though you use the token conf_http_HTTPHeader.Expires to update the value on the Message Processor, you need to verify that the actual property HTTPHeader.Expires has been set with the new value.

1. On the Message Processor machine, search for the property HTTPHeader.HEADER_NAME in the /opt/apigee/edge-message-processor/conf directory and check whether it has been set with the new value as shown below:

grep -ri "HTTPHeader.HEADER_NAME" /opt/apigee/edge-message-processor/conf

3. For example, if you want to check that the property HTTPHeader.Expires is set with the new value, run the following command:

grep -ri "HTTPHeader.Expires" /opt/apigee/edge-message-processor/conf
5. If the new value is successfully set for HTTPHeader.HEADER_NAME on the Message Processor, then the above command shows the new value in the http.properties file.
6. The sample result from the above command after you have configured allowDuplicates and multiValued is as follows:

/opt/apigee/edge-message-processor/conf/http.properties:HTTPHeader.Expires=allowDuplicates, multiValued

8. In the example output above, note that the property HTTPHeader.Expires has been set with the new value allowDuplicates, multiValued in http.properties. This indicates that allowing duplicates and multiple values for this HTTP header is successfully configured on the Message Processor.
9. If you still see the old value for the property HTTPHeader.HEADER_NAME, then verify that you have followed all the steps outlined in Configuring allowDuplicates and multiple values for headers correctly. If you have missed any step, repeat all the steps again correctly.

Note: Once the above changes are complete and reflected in each Message Processor, you should not see the error protocol.http.DuplicateHeader even if duplicate headers are passed to Apigee Edge, as described in Behaviour of Apigee Edge. Make sure your proxies are working as expected, especially if there is functional logic to get and set the headers in the proxy.

10. If you are still not able to modify the property, then contact Apigee Edge Support.

Disabling allowDuplicates for headers

This section explains how to configure the property HTTPHeader.HEADER_NAME to not allow duplicates and multiple values for a specific HTTP header on the Message Processors, using the corresponding token according to the syntax described in How to configure Edge.

Note: If you modify the default value of the property HTTPHeader.HEADER_NAME on the Message Processors, it is important to note that this affects all the API proxies associated with the organizations and environments served by the specific Message Processors.

In this section, we will use Expires (and myheader) as example headers for which we do not want to allow duplicates, as explained below:

1. Determine the current value of the property HTTPHeader.HEADER_NAME to make sure it is not already disabled from allowing duplicates and multiple values, using the following command:

grep -ri "HTTPHeader.HEADER_NAME" /opt/apigee/edge-message-processor/conf

3. For example, if you are trying to set the property for the Expires header, then check the current value of the HTTPHeader.Expires property on the Message Processor:

grep -ri "HTTPHeader.Expires" /opt/apigee/edge-message-processor/conf
4.
5. The output of the above command results in one of the following:
a. The property is set to blank. This implies that the value is overwritten to NOT allow duplicate headers and multiple values. That is, you are not allowed to send the Expires header more than once as part of the HTTP request or HTTP response to Apigee.
b. There are no hits for the specific property. This means that the value is not overwritten (and this is NOT a header with pre-existing config), so the specific header can be sent more than once (duplicates are allowed) as part of the HTTP request or HTTP response to Apigee Edge.
c. The property is set with the value allowDuplicates, multivalued. This means that the value is explicitly overwritten (and this is a header with pre-existing config), so the specific header can be sent more than once (duplicates are allowed) as part of the HTTP request or HTTP response to Apigee.
8. Sample output of the search command:

/opt/apigee/edge-message-processor/conf/http.properties:HTTPHeader.Expires=allowDuplicates, multiValued

10. The sample output shows that the property HTTPHeader.Expires is set to allowDuplicates, multiValued. This means that the property is overwritten to allow duplicate or multiple values for the header Expires.
11. If you notice one of the following, perform the rest of the steps in this section:
a. The property corresponding to the specific header is overwritten to allow duplicates and multiple values, as in the sample output above (a header with pre-existing config)
b. There are no hits for the property corresponding to the specific header (not a header with pre-existing config)
12. Otherwise, skip the rest of the steps in this section.
13. Edit the following file. If it does not exist, you can create it:

/opt/apigee/customer/application/message-processor.properties

15. For example, to open the file using vi, enter the following:

vi /opt/apigee/customer/application/message-processor.properties

17. Add a line in the following format to the properties file. Setting the token to blank disables duplicates and multiple values for the header.

Scenario #1: Header with pre-existing config (Expires):

conf_http_HTTPHeader.Expires=

Scenario #2: Header with no pre-existing config (myheader):

conf_http_HTTPHeader.myheader=

22. Save your changes.
23. Ensure that the properties file is owned by the apigee user. If it is not, execute the following:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

25. Restart the Message Processor:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

27. To restart without traffic impact, see Rolling restart of Message Processors without traffic impact.
28. If you have more than one Message Processor, repeat the above steps on all the Message Processors.

Verifying the header is configured to not allow duplicates and multiple values

This section explains how to verify that the property HTTPHeader.HEADER_NAME for a specific header has been updated successfully to not allow duplicates on the Message Processors.

We will use Expires (and myheader) as example headers and check whether the corresponding properties HTTPHeader.Expires (and HTTPHeader.myheader) have been updated.

Note: Even though you use the token conf_http_HTTPHeader.Expires to update the value on the Message Processor, you need to verify that the actual property HTTPHeader.Expires has been set with the new value.

1. On the Message Processor machine, search for the property HTTPHeader.HEADER_NAME in the /opt/apigee/edge-message-processor/conf directory and check whether it has been set with the new value as shown below:

grep -ri "HTTPHeader.HEADER_NAME" /opt/apigee/edge-message-processor/conf

3. For example, if you want to check that the property HTTPHeader.Expires is set with the new value, run the following command (for myheader, grep for HTTPHeader.myheader instead):

grep -ri "HTTPHeader.Expires" /opt/apigee/edge-message-processor/conf

7. If the new value is successfully set for HTTPHeader.HEADER_NAME on the Message Processor, then the above command shows the new value in the http.properties file.
8. The sample result from the above command after you have disabled allowDuplicates is as follows.

Scenario #1: Expires header (header with pre-existing config):

/opt/apigee/edge-message-processor/conf/http.properties:HTTPHeader.Expires=

13. In the example output above, note that the property HTTPHeader.Expires (and HTTPHeader.myheader) has been set with the new blank value in http.properties. This indicates that allowing duplicates and multiple values for the specific HTTP header Expires (and myheader) is successfully disabled on the Message Processor.
14. If you still see the old value for the property HTTPHeader.Expires (or HTTPHeader.myheader), then verify that you have followed all the steps outlined in Disabling allowDuplicates for headers correctly. If you have missed any step, repeat all the steps again correctly.

Note: Once the above changes are done and reflected in each Message Processor, you should see the error protocol.http.DuplicateHeader if duplicate headers are passed to Apigee Edge, as described in Behaviour of Apigee Edge. Make sure your proxies are working as expected, especially if there is functional logic to get and set the headers in the proxy.

15. If you are still not able to modify the property, then contact Apigee Edge Support.
Configuring heap memory size
on the Message Processors
Note: The instructions in this document are applicable for Edge Private Cloud
users only.

Apigee Edge’s Message Processor is a Java-based component and uses a default heap memory size of 512 MB. However, the default heap memory size may not be sufficient for all use cases on Apigee Edge. You may need to tune the heap memory size for your Message Processors depending on your traffic and processing requirements, or to address any memory-related issues.

The heap memory size of a Java application is controlled through the Java command line parameters -Xms (minimum heap size) and -Xmx (maximum heap size). On the Apigee Edge Message Processors, these are controlled through the properties bin_setenv_min_mem and bin_setenv_max_mem. You can read more about these properties in Modifying Java memory settings.

This document explains how to configure the heap memory size on Apigee Edge’s Message Processors.

Before you begin


● If you aren’t familiar with configuring properties on Edge for
Private Cloud, read How to configure Edge.
● For default and recommended Java memory settings, read
Modifying Java memory settings.

Changing heap memory size on the Message Processors

This section explains how to change heap memory size on the Message Processors. Minimum and maximum heap memory can be configured through the properties bin_setenv_min_mem and bin_setenv_max_mem on the Message Processor component.

Required permissions: To perform this task, you must have the following permissions:

● Access to the specific Message Processors where you would like to configure the heap memory.
● Access to create and edit the Message Processor component properties file in the /opt/apigee/customer/application folder.

To change heap memory size on the Message Processors, perform the following steps:

1. Open the /opt/apigee/customer/application/message-processor.properties file on the Message Processor machine in an editor. If the file does not already exist, then create it. For example:

vi /opt/apigee/customer/application/message-processor.properties

2. Add the following lines to this file:

bin_setenv_min_mem=minimum_heap_in_megabytes
bin_setenv_max_mem=maximum_heap_in_megabytes

3. For example, if you want to change the minimum and maximum heap on the Message Processor to 1 GB and 2 GB respectively, then add the following lines to this file:

bin_setenv_min_mem=1024m
bin_setenv_max_mem=2048m

4. Save your changes.
5. Ensure this properties file is owned by the apigee user. For example:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

6. Restart the Message Processor using the following command:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

7. If you have more than one Message Processor, repeat these steps on all the Message Processors.

Verifying heap memory size on the Message Processors

This section explains how to verify that the heap memory changes have been successfully applied on the Message Processors.

Even though you used the properties bin_setenv_min_mem and bin_setenv_max_mem to change the heap memory size on the Message Processor, you need to verify that the actual Java command line parameters -Xms and -Xmx have been set with the new values as follows:

1. Check whether the command line parameters -Xms and -Xmx have been set with the new values for the Message Processor using the following command:

ps -ef | grep message-processor | egrep -o 'Xms[0-9a-z]+|Xmx[0-9a-z]+' | tr '\r' ' '

2. If the minimum and maximum heap memory have been changed on the Message Processor, then the previous command shows the new values listed for -Xms and -Xmx. The sample result from the previous command, after you have changed the minimum and maximum heap on the Message Processor, is as follows:

Xms1024m
Xmx2048m

3. In the example output, note that the new values for minimum and maximum heap have been set.
4. If you still see the old values for -Xms and -Xmx, then verify that you have followed all the steps outlined in Changing heap memory size on the Message Processors correctly. If you have missed any step, repeat all the steps again correctly.
5. If you are still not able to change heap memory, contact Apigee Edge Support.
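The -Xms/-Xmx extraction used in the verification step can be tried on a canned line of ps output, so you can see what the pipeline matches without a running Message Processor. The java command line below is illustrative.

```shell
# A canned ps -ef line for a Message Processor java process (illustrative).
SAMPLE='apigee 1234 1 0 10:00 ? 00:01:02 /usr/bin/java -Xms1024m -Xmx2048m com.apigee.kernel.MicroKernel'

# Same extraction as the verification command, minus the ps itself:
# egrep -o prints only the matching Xms.../Xmx... fragments, one per line.
echo "$SAMPLE" | egrep -o 'Xms[0-9a-z]+|Xmx[0-9a-z]+'
```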

What’s next?
● Configuring heap memory size on the Qpid servers
● Enabling G1GC on the Message Processors
● Enabling String Deduplication on the Message Processors

Configuring heap memory size on the Qpid servers

Note: The instructions in this document are applicable for Edge Private Cloud users only.

Apigee Edge’s Qpid server is a Java-based component and uses a default heap memory size of 512 MB. However, the default heap memory size may not be sufficient for all use cases on Apigee Edge. You may need to tune the heap memory size for your Qpid servers depending on your traffic and processing requirements, or to address any memory-related issues.

The heap memory size of a Java application is controlled through the Java command line parameters -Xms (minimum heap size) and -Xmx (maximum heap size). On the Apigee Edge Qpid servers, these are controlled through the properties bin_setenv_min_mem and bin_setenv_max_mem. You can read more about these properties in Modifying Java memory settings.

This document explains how to configure the heap memory size on Apigee Edge’s Qpid servers.

Before you begin

● If you aren’t familiar with configuring properties on Edge for Private Cloud, read How to configure Edge.
● For default and recommended Java memory settings, read Modifying Java memory settings.

Changing heap memory on the Qpid servers

This section explains how to change heap memory size on the Qpid servers. Minimum and maximum heap memory can be configured through the properties bin_setenv_min_mem and bin_setenv_max_mem on the Qpid server component.

Required permissions: To perform this task, you must have the following permissions:

● Access to the specific Qpid servers where you would like to configure the heap memory.
● Access to create and edit the Qpid server component properties file in the /opt/apigee/customer/application folder.

To change heap memory size on the Qpid servers, perform the following steps:

1. Open the /opt/apigee/customer/application/qpid-server.properties file on the Qpid server machine in an editor. If the file does not already exist, then create it. For example:

vi /opt/apigee/customer/application/qpid-server.properties

2. Add the following lines to this file:

bin_setenv_min_mem=minimum_heap_in_megabytes
bin_setenv_max_mem=maximum_heap_in_megabytes

3. For example, if you want to change the minimum and maximum heap on the Qpid server to 1 GB and 2 GB respectively, then add the following lines to this file:

bin_setenv_min_mem=1024m
bin_setenv_max_mem=2048m

4. Save your changes.
5. Ensure this properties file is owned by the apigee user. For example:

chown apigee:apigee /opt/apigee/customer/application/qpid-server.properties

6. Restart the Qpid server using the following command:

/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart

7. If you have more than one Qpid server, repeat these steps on all the Qpid servers.

Verifying heap memory configuration on the Qpid servers

This section explains how to verify that the heap memory changes have been successfully applied on the Qpid servers.

Even though you used the properties bin_setenv_min_mem and bin_setenv_max_mem to change the heap memory size on the Qpid server, you need to verify that the actual Java command line parameters -Xms and -Xmx have been set with the new values as follows:

1. Check whether the command line parameters -Xms and -Xmx have been set with the new values for the Qpid server using the following command:

ps -ef | grep qpid-server | egrep -o 'Xms[0-9a-z]+|Xmx[0-9a-z]+' | tr '\r' ' '

2. If the minimum and maximum heap memory have been changed on the Qpid server, then the previous command shows the new values listed for -Xms and -Xmx. The sample result from the previous command, after you have changed the minimum and maximum heap on the Qpid server, is as follows:

Xms1024m
Xmx2048m

3. In the example output, note that the new values for minimum and maximum heap have been set.
4. If you still see the old values for -Xms and -Xmx, then verify that you have followed all the steps outlined in Changing heap memory on the Qpid servers correctly. If you have missed any step, repeat all the steps again correctly.
5. If you are still not able to change heap memory, contact Apigee Edge Support.

What’s next?
Configuring heap memory size on Message Processors

Enabling G1GC on the Message Processors

Note: The instructions in this document are applicable for Edge Private Cloud users only.

This document explains how to enable the Garbage First Garbage Collector (G1GC) on Apigee Edge’s Message Processors.

Apigee Edge’s Message Processor runs on the Java Virtual Machine (JVM) and uses the default Garbage Collector (serial or parallel, depending on the hardware and operating system configurations). Under certain circumstances and based on your needs, you may want to change the Garbage Collector type used on the Message Processor.

G1GC is the low-pause, server-style generational garbage collector for the Java HotSpot VM, which improves the overall performance of the Message Processor. It is designed for applications with medium-sized to large-sized data sets in which response time is more important than overall throughput. For example, you can consider using G1GC if the heap size is big (greater than 3 GB).

It is generally recommended to set another JVM parameter, UseStringDeduplication, along with G1GC. This parameter optimizes the Java heap memory usage by making duplicate or identical String values share the same character array.

Before you begin

● If you aren't familiar with garbage collection and the different types of Garbage Collectors in Java, read Java Garbage Collection Basics.
● If you aren’t familiar with G1GC, read Getting started with the G1 Garbage Collector.
● If you aren’t familiar with configuring properties for Edge on Private Cloud, read How to configure Edge.

Enabling G1GC on the Message Processors

This section explains how to enable G1GC on the Edge Message
Processors. G1GC is enabled through the property useG1GC on
the Message Processor component. By default, this property is set to
false on the Message Processors. To configure any property on the
Message Processor, use the corresponding token according to the
syntax described in How to configure Edge.

To enable G1GC on the Message Processors, perform the following
steps:

1. Locate the token for the useG1GC property.
2. Enable G1GC on the Message Processors.

Required permissions: To perform this task, you must have the following
permissions:

● Access to the specific Message Processors where you would like to
configure G1GC.
● Access to create and edit the Message Processor component
properties file in the /opt/apigee/customer/application folder.

Locate token for useG1GC property

The following steps describe how to locate the token for the useG1GC
property:

1. Search for the useG1GC property in the Message Processor
source directory /opt/apigee/edge-message-processor/source
using the following command:

grep -ri "useG1GC" /opt/apigee/edge-message-processor/source

2. The output of this command shows the token for the Message
Processor's useG1GC property as follows:

/opt/apigee/edge-message-processor/source/conf/system.properties:useG1GC={T}conf_system_useG1GC{/T}

The string between the {T} and {/T} tags is the name of the token
that you can set in the Message Processor's .properties file.
Thus, the token for the property useG1GC is:

conf_system_useG1GC
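The token name between the {T} and {/T} tags can also be extracted mechanically. A minimal sketch, assuming a GNU/BSD-compatible sed; the sample line below mirrors the shape of the grep output above:

```shell
# Sample line in the shape of the grep output shown above
line='system.properties:useG1GC={T}conf_system_useG1GC{/T}'

# Extract the token name between the {T} and {/T} tags
token=$(printf '%s\n' "$line" | sed -n 's/.*{T}\(.*\){\/T}.*/\1/p')
echo "$token"   # → conf_system_useG1GC
```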

Enable G1GC on the Message Processors

The following steps describe how to enable G1GC on the Apigee
Message Processors:

1. Open the
/opt/apigee/customer/application/message-processor.properties
file on the Message Processor machine in an editor. If the file does
not already exist, create it. For example:

vi /opt/apigee/customer/application/message-processor.properties

2. Add the following line to the file:

conf_system_useG1GC=true

3. Save your changes.
4. Ensure the properties file is owned by the apigee user. For
example:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

5. Restart the Message Processor using the following command:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

6. If you have more than one Message Processor, repeat these
steps on all the Message Processors.
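The file edit above can be scripted so it is idempotent across repeated runs. A hedged sketch, assuming GNU sed; the temporary path stands in for /opt/apigee/customer/application/message-processor.properties, and the chown and restart steps are intentionally omitted:

```shell
# Set conf_system_useG1GC=true idempotently.  A temp file stands in
# for /opt/apigee/customer/application/message-processor.properties;
# the chown and restart steps from the procedure are omitted here.
PROPS="$(mktemp -d)/message-processor.properties"
touch "$PROPS"
if grep -q '^conf_system_useG1GC=' "$PROPS"; then
  # Property already present: rewrite it in place
  sed -i 's/^conf_system_useG1GC=.*/conf_system_useG1GC=true/' "$PROPS"
else
  # Property absent: append it
  echo 'conf_system_useG1GC=true' >> "$PROPS"
fi
grep '^conf_system_useG1GC=' "$PROPS"   # → conf_system_useG1GC=true
```

Running the script a second time leaves a single, unchanged line in the file rather than appending a duplicate.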

Verifying G1GC Configuration on the Message Processors

This section explains how to verify that the G1GC configuration has
been successfully modified on the Message Processors.

Even though you use the token conf_system_useG1GC to enable
G1GC on the Message Processor, you need to verify that the actual
property useG1GC has been set to the new value, as follows:

1. Search for the property useG1GC in the
/opt/apigee/edge-message-processor/conf directory
and check that it has been set to the new value. For example:

grep -ri "useG1GC" /opt/apigee/edge-message-processor/conf

2. If G1GC was enabled successfully on the Message Processor,
the previous command shows the new value in the
system.properties file. For example:

/opt/apigee/edge-message-processor/conf/system.properties:useG1GC=true

Note that the property useG1GC has been set to the new value
true in system.properties. This indicates that G1GC is
successfully enabled on the Message Processor.

3. If you still see the old value for the property useG1GC, verify
that you have followed all the steps outlined in Enabling G1GC
on the Message Processors correctly. If you missed any step,
repeat all the steps.

4. If you are still not able to enable G1GC, contact Apigee Edge
Support.

What’s next?

Enabling String Deduplication on Message Processors

Enabling String Deduplication on the Message Processors

Note: The instructions in this document are applicable for Edge Private Cloud
users only.

This document explains how to enable String Deduplication on
Apigee Edge Message Processors.

String Deduplication is a Java feature that helps you save memory
occupied by duplicate String objects in Java applications. It reduces
the memory footprint of String objects in Java heap memory by
making duplicate or identical String values share the same
character array.

The Apigee Edge Message Processor is a Java-based component.
Using String Deduplication on a Message Processor can improve the
performance of your API proxies by reducing memory usage,
especially if the API proxies make heavy use of Strings.

The String Deduplication feature can be used only with the G1
Garbage Collector (G1GC) in Java applications. If you want to enable
this feature on the Message Processor, you need to already have
G1GC enabled, or enable both G1GC and String Deduplication
together on the Message Processor.

Before you begin

● If you aren't familiar with G1GC, read Getting started with the
G1 Garbage Collector.
● If you aren't familiar with String Deduplication, read String
Deduplication of G1 Garbage Collector.
● If you aren't familiar with enabling G1GC on Edge Message
Processors, read Enable G1GC on the Message Processors.
● If you aren't familiar with configuring properties for Edge on
Private Cloud, read How to configure Edge.

Enabling String Deduplication on the Message Processors

This section explains how to enable the String Deduplication feature
on the Edge Message Processors. String Deduplication is enabled
through the property useStringDeduplication on the Message
Processor component. By default, this property is set to false on the
Message Processors. To configure any property on the Message
Processor, use the corresponding token according to the syntax
described in How to configure Edge.

To enable String Deduplication on the Message Processors, perform
the following steps:

1. Locate the token for the useStringDeduplication property.
2. Enable String Deduplication on the Message Processors.

Required permissions: To perform this task, you must have the following
permissions:

● Access to the specific Message Processors where you would like to
configure String Deduplication.
● Access to create and edit the Message Processor component
properties file in the /opt/apigee/customer/application folder.

Locate token for useStringDeduplication property

The following steps describe how to locate the token for
useStringDeduplication:

1. Search for the useStringDeduplication property in the
Message Processor source directory
/opt/apigee/edge-message-processor/source using the following
command:

grep -ri "useStringDeduplication" /opt/apigee/edge-message-processor/source

2. The output of this command shows the token for the Message
Processor's useStringDeduplication property as follows:

/opt/apigee/edge-message-processor/source/conf/system.properties:useStringDeduplication={T}conf_system_useStringDeduplication{/T}

The string between the {T} and {/T} tags is the name of the token
that you can set in the Message Processor's .properties file.
Thus, the token for the property useStringDeduplication is:

conf_system_useStringDeduplication

Enable String Deduplication on the Message Processors

Note: The String Deduplication feature can be used only with the G1 Garbage
Collector (G1GC) in Java applications. To enable String Deduplication, you
need to have G1GC already enabled, or enable both G1GC and String
Deduplication together on the Message Processor.

The following steps describe how to enable String Deduplication on
the Apigee Message Processors:

1. Open the
/opt/apigee/customer/application/message-processor.properties
file on the Message Processor machine in an editor. If the file does
not already exist, create it. For example:

vi /opt/apigee/customer/application/message-processor.properties

2. Add the following line to the file:

conf_system_useStringDeduplication=true

Note: If you would like to have both G1GC and String Deduplication
enabled together, add the following lines to
/opt/apigee/customer/application/message-processor.properties:

conf_system_useG1GC=true
conf_system_useStringDeduplication=true

3. Save your changes.
4. Ensure the properties file is owned by the apigee user. For
example:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

5. Restart the Message Processor using the following command:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

6. If you have more than one Message Processor, repeat these
steps on all the Message Processors.
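Because String Deduplication requires G1GC, both tokens are usually written in one step. A minimal sketch; the temporary file stands in for the real properties file under /opt/apigee/customer/application:

```shell
# Append both tokens in one step (G1GC must accompany String
# Deduplication).  A temp file stands in for the real properties file.
PROPS="$(mktemp -d)/message-processor.properties"
cat >> "$PROPS" <<'EOF'
conf_system_useG1GC=true
conf_system_useStringDeduplication=true
EOF
grep -c '=true$' "$PROPS"   # → 2
```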

Verifying String Deduplication on the Message Processors

This section explains how to verify that String Deduplication has
been successfully enabled on the Message Processors.

Even though you use the token
conf_system_useStringDeduplication to enable String
Deduplication on the Message Processor, you need to verify that the
actual property useStringDeduplication has been set to the new
value, as follows:

1. Search for the property useStringDeduplication in the
/opt/apigee/edge-message-processor/conf directory
and check that it has been set to the new value. For example:

grep -ri "useStringDeduplication" /opt/apigee/edge-message-processor/conf

2. If String Deduplication was enabled successfully on the
Message Processor, the previous command shows the new value
in the system.properties file. For example:

/opt/apigee/edge-message-processor/conf/system.properties:useStringDeduplication=true

Note that the property useStringDeduplication has been set to
the new value true in system.properties. This indicates that
String Deduplication is successfully enabled on the Message
Processor.

3. If you still see the old value for the property
useStringDeduplication, verify that you have followed all the
steps outlined in Enabling String Deduplication on the Message
Processors correctly. If you missed any step, repeat all the
steps.

4. If you are still not able to enable String Deduplication, contact
Apigee Support.

What’s next?
Enable G1GC on the Message Processors

Enable log rotation for edge-message-processor.log

Log rotation, the process of rotating multiple log files in and out of
use, simplifies administration of systems that generate large
numbers of log files. Log rotation enables automatic rotation,
compression, removal, and mailing of log files.

In Edge for Private Cloud, some of the main log files on each Apigee
component are configured with a default rotation mechanism. For
example, on the Message Processor component, the following files
are configured with a default rotation mechanism using logback:

● /opt/apigee/var/log/edge-message-processor/logs/system.log
● /opt/apigee/var/log/edge-message-processor/logs/events.log
● /opt/apigee/var/log/edge-message-processor/logs/startupruntimeerrors.log
● /opt/apigee/var/log/edge-message-processor/logs/configurations.log
● /opt/apigee/var/log/edge-message-processor/logs/transactions.log

Similar files exist for other edge-* components (whose names begin
with edge-), such as edge-management-server, edge-router,
edge-postgres-server, and edge-qpid-server.

Each of these edge-* components also generates an additional log
file that is the redirected output of the respective component's
console. In the case of the Message Processor component, this file
is /opt/apigee/var/log/edge-message-processor/edge-message-processor.log.
Other edge-* components generate a similar file. These files are
rotated not by the logback library, but by logrotate and crontab.


Before you begin
● If you aren't familiar with logrotate configurations, read the
logrotate manual.
● If you aren't familiar with crontab configurations, read the
crontab manual.

Enable log rotation

This section applies to Edge for Private Cloud versions 4.50.00 and
4.51.00.

Log rotation is a mechanism designed to ease administration of
systems that generate large numbers of log files. It allows automatic
rotation, compression, removal, and mailing of log files.

By default, some of the main log files on each of the Apigee
components are configured with a default rotation mechanism. For
example, on the Message Processor component, the following files
are configured with a default rotation mechanism:

● /opt/apigee/var/log/edge-message-processor/logs/system.log
● /opt/apigee/var/log/edge-message-processor/logs/events.log
● /opt/apigee/var/log/edge-message-processor/logs/startupruntimeerrors.log
● /opt/apigee/var/log/edge-message-processor/logs/configurations.log
● /opt/apigee/var/log/edge-message-processor/logs/transactions.log

However, other log files in Apigee components are not configured
with default rotation. For example, log rotation is not configured by
default for the Message Processor file edge-message-processor.log.

Log rotation can be enabled using different utilities and frameworks
such as logrotate, logback, or log4j. This section explains how to
configure log rotation for the
/opt/apigee/var/log/edge-message-processor/edge-message-processor.log
file using logrotate and crontab.

Enabling log rotation for edge-message-processor.log on Message Processors

This section explains how to enable log rotation for the
/opt/apigee/var/log/edge-message-processor/edge-message-processor.log
file on the Edge Message Processors.

Required permissions: To perform this task, you must have the following
permissions:

● Access to the specific Message Processors where you would like to
enable log rotation.
● Access to create the logrotate conf files in the Message Processor
component /opt/apigee/edge-message-processor/logrotate/ folder.
● Access to create a cron job using crontab.

The following steps describe how to enable log rotation for the
edge-message-processor.log file:

1. Open the
/opt/apigee/edge-message-processor/logrotate/logrotate.conf
file on the Message Processor machine in an editor. If the file does
not exist, create it. For example:

vi /opt/apigee/edge-message-processor/logrotate/logrotate.conf

2. Add a snippet to the file similar to the one shown below:

/opt/apigee/var/log/edge-message-processor/edge-message-processor.log {
    missingok
    copytruncate
    rotate 5
    size 10M
    compress
    delaycompress
    notifempty
    nocreate
    sharedscripts
}

Note: The sample configuration shown above rotates the file after 10
MB, and log files are rotated five times before being removed. You can
modify the configuration depending on your log rotation requirements.

3. Save your changes.
4. Open the apigee user's crontab using the following command:

sudo crontab -u apigee -e

5. Add the following cron job to the apigee user's crontab:

0 0 * * * nice -n 19 ionice -c3 /usr/sbin/logrotate -s /opt/apigee/var/run/edge-message-processor/logrotate.status -f /opt/apigee/edge-message-processor/logrotate/logrotate.conf

Note: The sample cron job shown above is scheduled to run at 00:00
every day. If you want this cron job to run at a different time, modify
the schedule as needed.

6. Save the crontab and monitor log rotation during the next
run of the cron job.
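The steps above can be sketched as a script that writes the stanza and sanity-checks it, without invoking logrotate itself. The temporary path stands in for the real conf file:

```shell
# Write the logrotate stanza from step 2 to a temporary path and
# sanity-check it.  The real file belongs at
# /opt/apigee/edge-message-processor/logrotate/logrotate.conf.
CONF="$(mktemp -d)/logrotate.conf"
cat > "$CONF" <<'EOF'
/opt/apigee/var/log/edge-message-processor/edge-message-processor.log {
    missingok
    copytruncate
    rotate 5
    size 10M
    compress
    delaycompress
    notifempty
    nocreate
    sharedscripts
}
EOF
# Confirm the directives that control retention and size made it in
grep -q 'rotate 5' "$CONF" && grep -q 'size 10M' "$CONF" && echo "conf OK"
```

On the real host you could additionally dry-run the config with `/usr/sbin/logrotate -d` before scheduling the cron job.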

Verifying log rotation for edge-message-processor.log on the Message Processor

1. Once the scheduled cron job runs, the log file will be rotated.
In the above example, the cron job is scheduled to run every day
at 00:00 to rotate the edge-message-processor.log file.

2. Navigate to the /opt/apigee/var/log/edge-message-processor/
directory and verify that the edge-message-processor.log file is
rotated. Sample listing of log files:

ls -ltrh | grep 'edge-message-processor'

-rw-r--r--. 1 apigee apigee 17K Feb 7 00:00 edge-message-processor.log.1.gz
-rw-r--r--. 1 apigee apigee 5.3K Feb 7 09:12 edge-message-processor.log

The above output indicates that the edge-message-processor.log
files are rotated and saved as GZ files.

3. If you don't see the edge-message-processor.log files being
rotated, verify that you have followed all the steps outlined in
Enabling log rotation for edge-message-processor.log on Message
Processors correctly. If you missed any step, repeat all the steps.

4. If you are still not able to get log rotation working, contact
Apigee Edge Support.

Enable log rotation for edge-router.log

Note: The instructions in this page are applicable to Edge Private Cloud
versions 4.50.00 and 4.51.00. For enabling log rotation on version 4.52.00, see
Enable log rotation for Edge components.

Log rotation is a mechanism designed to ease administration of
systems that generate large numbers of log files. It allows automatic
rotation, compression, removal, and mailing of log files.

In Edge for Private Cloud, some of the main log files on each of the
Apigee components are configured with a default rotation
mechanism.

For example, on the Router component, the following files are
configured with a default rotation mechanism:

● /opt/apigee/var/log/edge-router/logs/system.log
● /opt/apigee/var/log/edge-router/logs/events.log
● /opt/apigee/var/log/edge-router/logs/startupruntimeerrors.log
● /opt/apigee/var/log/edge-router/logs/configurations.log
● /opt/apigee/var/log/edge-router/logs/transactions.log

However, certain log files in Apigee components are not configured
with default rotation. On the Router component, the edge-router.log
file is one of those files for which log rotation is not configured by
default.

Log rotation can be enabled using different utilities and frameworks
such as logrotate, logback, or log4j. This document explains how to
configure log rotation for the
/opt/apigee/var/log/edge-router/edge-router.log file using
logrotate and crontab.

Before you begin


● If you aren't familiar with logrotate configurations, read the
logrotate manual.
● If you aren't familiar with crontab configurations, read the
crontab manual.

Enabling log rotation for edge-router.log on the Router

This section explains how to enable log rotation for the
/opt/apigee/var/log/edge-router/edge-router.log file on the
Edge Routers.

Required permissions:

To perform this task, you must have the following permissions:

● Access to the specific Routers, where you would like to rotate logs.
● Access to create the logrotate conf files in the Edge Router
component /opt/apigee/edge-router/logrotate/ folder.
● Access to create a cron job using crontab.

The following steps describe how to enable log rotation for the
edge-router.log file:

1. Open the
/opt/apigee/edge-router/logrotate/logrotate.conf
file on the Router machine in an editor. If the file does not exist,
create it. For example:

vi /opt/apigee/edge-router/logrotate/logrotate.conf

2. Add a snippet to the file similar to the one shown below:

/opt/apigee/var/log/edge-router/edge-router.log {
    missingok
    copytruncate
    rotate 5
    size 10M
    compress
    delaycompress
    notifempty
    nocreate
    sharedscripts
}

Note: The sample configuration shown above rotates the file after 10
MB, and log files are rotated five times before being removed. You can
modify the configuration depending on your log rotation requirements.

3. Save your changes.
4. Open the apigee user's crontab using the following command:

sudo crontab -u apigee -e

5. Add the following cron job to the apigee user's crontab:

0 0 * * * nice -n 19 ionice -c3 /usr/sbin/logrotate -f /opt/apigee/edge-router/logrotate/logrotate.conf

Note: The sample cron job shown above is scheduled to run at 00:00
every day. If you want this cron job to run at a different time, modify
the schedule as needed.

6. Save the crontab and monitor log rotation during the next
run of the cron job.
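The Router stanza differs from the Message Processor one only in the component name, so the conf file can be generated for any edge-* component. A sketch; write_logrotate_conf is a hypothetical helper, not an Apigee-provided tool, and a temporary directory stands in for /opt/apigee/edge-router/logrotate/:

```shell
# Generate the logrotate stanza for any edge-* component.
# write_logrotate_conf is a hypothetical helper for illustration.
write_logrotate_conf() {
  component="$1"; outdir="$2"
  cat > "$outdir/logrotate.conf" <<EOF
/opt/apigee/var/log/$component/$component.log {
    missingok
    copytruncate
    rotate 5
    size 10M
    compress
    delaycompress
    notifempty
    nocreate
    sharedscripts
}
EOF
}

d="$(mktemp -d)"
write_logrotate_conf edge-router "$d"
grep 'edge-router.log' "$d/logrotate.conf"
```

The same call with edge-management-server, edge-postgres-server, or edge-qpid-server would produce the matching stanza for those components.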

Verifying log rotation for edge-router.log on the Router

1. Once the scheduled cron job runs, the log file will be rotated.
In the above example, the cron job is scheduled to run every day
at 00:00 to rotate the edge-router.log file.

2. Navigate to the /opt/apigee/var/log/edge-router/ directory
and verify that the edge-router.log file is rotated. Sample
listing of log files:

ls -ltrh | grep 'edge-router'

-rw-r--r--. 1 apigee apigee 6.0K Feb 16 00:00 edge-router.log.1.gz
-rw-r--r--. 1 apigee apigee 3.0K Feb 16 01:23 edge-router.log

The above output indicates that the edge-router.log files are
rotated and saved as GZ files.

3. If you don't see the edge-router.log files being rotated, verify
that you have followed all the steps outlined in Enabling log
rotation for edge-router.log on the Router correctly. If you
missed any step, repeat all the steps.

4. If you are still not able to get log rotation working, contact
Apigee Edge Support.

Configuring SNI between Edge Message Processor and the backend server

Note: The instructions in this document are applicable for Edge Private Cloud
users only.
Server Name Indication (SNI), an extension of the TLS protocol,
allows multiple HTTPS backend servers to be served from the same
IP address and port without requiring those backend servers to use
the same TLS certificate. When SNI is enabled on a client, the client
passes the hostname of the backend server as part of the initial TLS
handshake. This allows the TLS server to determine which TLS
certificate should be used to validate the request from the client.

By default, SNI is disabled on the Message Processor component in
Edge for Private Cloud to ensure backward compatibility with
existing backend servers. If your backend server is configured to
support SNI, you need to enable SNI on the Message Processor
component. Otherwise, API requests going through Apigee Edge will
fail with TLS handshake failures.

This document explains how to do the following:

● Identify whether or not a backend server is SNI-enabled
● Enable SNI on the Message Processors in order to communicate
with backend servers that support SNI
● Disable SNI on the Message Processors if needed
● Verify that the SNI configuration has been successfully updated
on the Message Processors

Before you begin

● If you aren't familiar with SNI, read Using SNI with Edge.
● If you aren't familiar with configuring Edge on Private Cloud,
read How to configure Edge.

Identifying an SNI-enabled server

This section describes how to identify whether or not a backend
server is SNI-enabled.

1. Execute the openssl command and try to connect to the relevant
server hostname (Edge Router or backend server) without passing
the server name, as shown below:

openssl s_client -connect hostname:port

You may get the certificates, or you may observe a handshake
failure in the openssl command, as shown below:

CONNECTED(00000003) 9362:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-64.50.6/src/ssl/s23_clnt.c:593

2. Execute the openssl command and try to connect to the relevant
server hostname (Edge Router or backend server) by passing the
server name, as shown below:

openssl s_client -connect hostname:port -servername hostname

3. If you get a handshake failure in step 1, or get different
certificates in step 1 and step 2, the specified server is
SNI-enabled.

4. If you want to verify this for more than one backend server,
repeat the above steps for each backend server.

If you find that you have one or more backend servers that are
SNI-enabled, you need to enable SNI on the Message Processor
component as explained below. Otherwise, API requests going
through Apigee Edge will fail with TLS handshake failures.
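The two openssl checks above can be wrapped in a single comparison. A hedged sketch: it fetches the served certificate's fingerprint with and without -servername and compares them; it assumes the openssl CLI is installed, and the host and port are placeholders for your own backend server (a real check needs network access to it):

```shell
# Compare the certificate served with and without SNI.  If the
# fingerprints differ, or the no-SNI handshake fails entirely, the
# server is most likely SNI-enabled.
check_sni() {
  host="$1"; port="${2:-443}"
  no_sni=$(openssl s_client -connect "$host:$port" </dev/null 2>/dev/null \
             | openssl x509 -noout -fingerprint 2>/dev/null)
  with_sni=$(openssl s_client -connect "$host:$port" -servername "$host" </dev/null 2>/dev/null \
             | openssl x509 -noout -fingerprint 2>/dev/null)
  if [ -z "$no_sni" ] || [ "$no_sni" != "$with_sni" ]; then
    echo "SNI-enabled (or no-SNI handshake failed)"
  else
    echo "same certificate with and without SNI"
  fi
}

# Example (requires network access to the backend):
# check_sni backend.example.com 443
```

Note that a failed connection also takes the first branch, so treat the result as a hint to investigate rather than a definitive answer.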

Enable SNI between Edge Message Processors and backend server

This section explains how to enable SNI between the Edge Message
Processor and the backend server. SNI is enabled through the
property jsse.enableSNIExtension on the Message Processor
component. To configure any property on the Message Processor,
use the token according to the syntax described in How to configure
Edge.

Note: When SNI is enabled on a client (Message Processor), the client passes
the hostname of the backend server as part of the initial TLS handshake for all
API proxies.

To enable SNI on the Message Processors, perform the following
steps:

1. Locate the token for the jsse.enableSNIExtension property.
2. Enable SNI on the Message Processors.

Required permissions: To perform this task, you must have the following
permissions:

● Access to the specific Message Processors where you would like to
enable SNI.
● Access to create and edit the Message Processor component
properties file in the /opt/apigee/customer/application folder.

Locate token for jsse.enableSNIExtension property

The following steps describe how to locate the token for the
jsse.enableSNIExtension property:

1. Search for the jsse.enableSNIExtension property in the
Message Processor source directory
/opt/apigee/edge-message-processor/source using the following
command:

grep -ri "jsse.enableSNIExtension" /opt/apigee/edge-message-processor/source

2. The output of this command shows the token for the Message
Processor's jsse.enableSNIExtension property as follows:

/opt/apigee/edge-message-processor/source/conf/system.properties:jsse.enableSNIExtension={T}conf_system_jsse.enableSNIExtension{/T}

The string between the {T} and {/T} tags is the name of the token
that you can set in the Message Processor's .properties file.
Thus, the token for the property jsse.enableSNIExtension is:

conf_system_jsse.enableSNIExtension

Enable SNI on the Message Processor

The following steps describe how to enable SNI on the Apigee
Message Processor component:

1. On the Message Processor machine, open the following file in an
editor. If it does not already exist, create it:

/opt/apigee/customer/application/message-processor.properties

For example, to open the file using vi, enter the following
command:

vi /opt/apigee/customer/application/message-processor.properties

2. Add a line in the following format to the properties file:

conf_system_jsse.enableSNIExtension=true

3. Save your changes.
4. Ensure the properties file is owned by the apigee user, as shown
below:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

5. Restart the Message Processor, as shown below:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

6. Verify that the SNI configuration is updated on the Message
Processor.

7. If you have more than one Message Processor, repeat the above
steps on all the Message Processors.
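Because enabling and disabling SNI differ only in the value written for the token, the edit can be expressed as one idempotent function. A sketch assuming GNU sed; the temporary path stands in for /opt/apigee/customer/application/message-processor.properties, and the chown and restart steps are omitted:

```shell
# Toggle the SNI token idempotently; the same function covers both
# enabling (true) and disabling (false).
set_sni() {
  props="$1"; value="$2"
  if grep -q '^conf_system_jsse.enableSNIExtension=' "$props" 2>/dev/null; then
    # Token already present: rewrite the existing line
    sed -i "s/^conf_system_jsse.enableSNIExtension=.*/conf_system_jsse.enableSNIExtension=$value/" "$props"
  else
    # Token absent: append it
    echo "conf_system_jsse.enableSNIExtension=$value" >> "$props"
  fi
}

PROPS="$(mktemp -d)/message-processor.properties"
touch "$PROPS"
set_sni "$PROPS" true     # enable
set_sni "$PROPS" false    # disabling later just rewrites the same line
grep 'conf_system_jsse' "$PROPS"   # → conf_system_jsse.enableSNIExtension=false
```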

Disable SNI between Edge Message Processors and backend server

Generally you should not see any issues after enabling SNI. However,
if you observe any connectivity issues between the Edge Message
Processor and the backend server after enabling SNI, you can
disable SNI by performing the following steps.

SNI can be disabled by setting the property
jsse.enableSNIExtension back to false on the Message Processor
component.

Note: When SNI is disabled on a client (Message Processor), the client does
not pass the hostname of the backend server as part of the initial TLS
handshake for any API proxies.

Required permissions: To perform this task, you must have the following
permissions:

● Access to the specific Message Processors where you would like to
disable SNI.
● Access to create and edit the Message Processor component
properties file in the /opt/apigee/customer/application folder.

Disable SNI on the Message Processors

The following steps describe how to disable SNI on the Apigee
Message Processors:

1. On the Message Processor machine, open the following file in an
editor. If it does not already exist, create it:

/opt/apigee/customer/application/message-processor.properties

For example, to open the file using vi, enter the following
command:

vi /opt/apigee/customer/application/message-processor.properties

2. If the line conf_system_jsse.enableSNIExtension=true exists
in /opt/apigee/customer/application/message-processor.properties,
modify it as follows:

conf_system_jsse.enableSNIExtension=false

3. Save your changes.
4. Ensure the properties file is owned by the apigee user, as shown
below:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

5. Restart the Message Processor, as shown below:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

6. Verify that the SNI configuration is updated on the Message
Processor.

7. If you have more than one Message Processor, repeat the above
steps on all the Message Processors.

Verifying SNI Configuration on the Message Processors

This section explains how to verify that the SNI configuration has
been successfully updated on the Message Processors.

Even though you use the token
conf_system_jsse.enableSNIExtension to configure SNI on the
Message Processor, you need to verify that the actual property
jsse.enableSNIExtension has been set to the new value:

1. On the Message Processor machine, search for the property
jsse.enableSNIExtension in the
/opt/apigee/edge-message-processor/conf directory and check
that it has been set to the new value, as shown below:

grep -ri "jsse.enableSNIExtension" /opt/apigee/edge-message-processor/conf

2. If the SNI configuration was updated successfully on the
Message Processor, the command above shows the new value in the
system.properties file. The sample result after you have enabled
SNI on the Message Processor is as follows:

/opt/apigee/edge-message-processor/conf/system.properties:jsse.enableSNIExtension=true

Similarly, the sample result after you have disabled SNI on the
Message Processor is as follows:

/opt/apigee/edge-message-processor/conf/system.properties:jsse.enableSNIExtension=false

In the example output, note that the property
jsse.enableSNIExtension has been updated to the new value true
or false in system.properties. This indicates that SNI is
successfully enabled or disabled on the Message Processor.

3. If you still see the old value for the property
jsse.enableSNIExtension, verify that you have followed all the
steps outlined in the appropriate section to enable or disable SNI
correctly. If you missed any step, repeat all the steps.

4. If you are still not able to enable or disable SNI, contact
Apigee Edge Support.

Best practices for configuring I/O timeout

Note: This document is applicable for Edge Public and Private Cloud users.

API requests made by client applications flow through various
components in Apigee Edge before they reach the backend services.
Most client applications expect the responses to these requests to
be received in a timely manner.

To achieve timely responses, I/O timeout values are set in each of
the components through which the API requests flow. If any
component in the flow takes more time than its preceding component
allows, the preceding component times out and responds with a
504 Gateway Timeout error.

The timeout values should therefore be configured in each of the
components with care; otherwise, misconfiguration can lead to
504 Gateway Timeout errors.

This document describes best practices for configuring the I/O
timeout on the various components through which API requests flow
in Apigee Edge.

Best practices for configuring I/O timeout


Consider the following best practices when configuring the I/O
timeout:

● First component: Always use the highest timeout on the first


component in the API request flow, which is the Client
Application in Apigee Edge.
● Last component: Always use the lowest timeout on the last
component in the API request flow, which is the Backend
Service in Apigee Edge.
● Between components: Ensure that there is a difference of at
least 2-3 seconds in the timeout value configured in each
component between the first component and the last
component in the flow.
● Router: It is always a good practice to configure (modify) the
I/O timeout value for a specific virtual host as opposed to
configuring it on the Router. This ensures that the new timeout
value affects only those API Proxies which are using the
specific virtual host and not all the API Proxies being served
by the Router.
Configure (modify) the I/O timeout on the Router only when
you are absolutely sure that the new I/O timeout value is
required or applicable for all the API Proxies running on the
Router.
● Message Processor: It is always a good practice to configure
(modify) the I/O timeout value for a specific API proxy as
opposed to configuring it on the Message Processor. This
ensures that the new timeout value affects only the specific
API proxy and not all the API proxies being served by the
Message Processor.
Configure (modify) the I/O timeout on the Message Processor
only when you are absolutely sure that the new I/O timeout
value is required or applicable for all the API Proxies running
on the Message Processor.
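The descending-timeout rule described above can be sanity-checked with a small script. The component names and values below are illustrative assumptions, except that 57 and 55 seconds are the documented Router and Message Processor defaults:

```shell
# Illustrative timeout chain (seconds): each component's timeout should
# exceed the next component's by at least 2 seconds, from the first
# component (client) down to the last (backend).
chain="client:60 router:57 message_processor:55 backend:52"

prev_name=""; prev_val=""; ok=yes
for entry in $chain; do
  name=${entry%%:*}; val=${entry##*:}
  # Compare each component against the one before it in the flow
  if [ -n "$prev_val" ] && [ $((prev_val - val)) -lt 2 ]; then
    echo "gap too small between $prev_name and $name"
    ok=no
  fi
  prev_name=$name; prev_val=$val
done
[ "$ok" = yes ] && echo "timeout chain OK"
```

Any real deployment should substitute its own measured values; the point is only that the numbers decrease along the request path.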

Example scenarios
The scenarios in this section can help you understand how to
correctly set the I/O timeout values.

Scenario 1: Requests to Apigee Edge from client applications directly

This section describes the best practices to follow while setting up
the timeout values in an Apigee Edge setup where there are no
intermediate components between the client application and Apigee
Edge, or between Apigee Edge and your backend server.

Sample Apigee setup with no intermediate components

If Apigee Edge is set up as shown in the diagram above, with no
intermediate components, then use the following best practices:

1. The client application is the first component in the flow. The
highest timeout value should be set on the client.
2. The backend server is the last component in the flow. The
lowest timeout value should be set on the backend server.
3. Configure the timeout values on each of the components in
the following order:

The following example shows timeout values set on the various
components as per the guidelines given above to avoid any issues:
Scenario 2: Requests to Apigee Edge from client applications
through intermediate components

This section describes the best practices to follow while setting up
the timeout values in an Apigee Edge setup where there are one or
more intermediate components between the client application and
Apigee Edge, and also between Apigee Edge and your backend
server.

The intermediate components can be a load balancer, content
delivery network (CDN), NGINX, and so on.

Sample Apigee setup with one intermediate component between
Client and Apigee Edge and between Apigee Edge and backend server

If Apigee Edge is set up as shown in the diagram above, with one or
more intermediate components, then use the following best
practices:

1. The client application is the first component in the flow. The
highest timeout value should be set on the client.
2. The backend server is the last component in the flow. The
lowest timeout value should be set on the backend server.
3. Configure the timeout values on each of the components,
including the intermediate components, in the following order:

The following example shows timeout values set on the various
components as per the guidelines given above to avoid any issues:
Note: There could be more than one intermediate component between the
client application and Apigee Edge and between Apigee Edge and your
backend server. Work with the team owning these intermediate components to
ensure the right I/O timeout values are configured on them.

Configuring I/O timeout on Message Processors
Note: This document is applicable for Edge Public and Private Cloud users.

This document explains how to configure the I/O timeout for the Apigee Edge
Message Processors.

The I/O timeout on the Message Processor represents the time for which the
Message Processor waits either to receive a response from the backend server or
for the socket to be ready to write a request to the backend server, before it times
out.

The Message Processor I/O timeout default value is 55 seconds. This timeout
period is applicable to the backend servers configured in the target endpoint
configuration and in the ServiceCallout policy of your API proxy.

The I/O timeout for Message Processors can be increased or decreased from the
default value of 55 seconds based on your needs. It can be configured in the
following places:
● In the API proxy:
    ● Target endpoint
    ● ServiceCallout policy
● On the Message Processor

The following properties control the I/O timeout on the Message Processors:

Property name: io.timeout.millis
Location: API proxy (target endpoint or ServiceCallout policy)
Description: This is the maximum time for which the Message Processor does the
following:

● Waits to receive a response from the backend server, after establishing the
connection and sending the request to the backend server, OR
● Waits for the socket to be ready for the Message Processor to send the
request to the backend server.

If there is no response from the backend server within this timeout period, then
the Message Processor times out.

By default, this property takes the value set for the
HTTPTransport.io.timeout.millis property on the Message Processor.
The default value is 55 seconds.

If this property is modified with a new timeout value for a specific API proxy,
then only that API proxy is affected.

Property name: HTTPTransport.io.timeout.millis
Location: Message Processor
Description: This is the maximum time for which the Message Processor does the
following:

● Waits to receive a response from the backend server, after establishing the
connection and sending the request to the backend server, OR
● Waits for the socket to be ready for the Message Processor to send the
request to the backend server.

If there is no response from the backend server within this timeout period, then
the Message Processor times out.

This property is used for all the API proxies running on this Message Processor.
The default value of this property is 55 seconds.

You can either modify this property as explained in Configuring I/O timeout on
Message Processors, or you can overwrite this value by setting the
io.timeout.millis property at the API proxy level.
Note:

● If the timeout happens on the Message Processor while reading the HTTP response, then
504 Gateway Timeout is returned.
● If the timeout happens on the Message Processor while writing the HTTP request, then
408 Request Timeout is returned.

Before you begin


Before you use the steps in this document, be sure you understand the following
topics:

● If you aren’t familiar with I/O timeout, see the io.timeout.millis property
description in TargetEndpoint Transport Property Specification.
● If you aren’t familiar with configuring properties for Edge for Private Cloud,
read How to configure Edge.
● Ensure that you follow the recommendations in Best practices for configuring
I/O timeout.
Note:

Increasing I/O timeout for an API Proxy:

● If you want to increase the I/O timeout value above the default value for a specific API
Proxy, then you need to increase it in the API proxy (as explained in Configuring I/O
timeout in API proxy). In addition, you need to increase the I/O timeout values on the
Router, client and backend server appropriately as documented in Best practices for
configuring I/O timeout.
● Increasing the I/O timeout value on the Message Processor only, or configuring
inappropriate I/O timeout values on Edge components can lead to 504 Gateway Timeout
errors.

Decreasing I/O timeout for an API Proxy:

If you want to decrease the I/O timeout value for a specific API proxy, then you need to decrease
it in the API proxy (as explained in Configuring I/O timeout in API proxy). If the timeouts on the
other components (Routers, client, and backend server) are configured appropriately as per the
recommendations in Best practices for configuring I/O timeout, then you don't have to make any
changes in the other components. Otherwise, make the changes appropriately.

Configuring I/O timeout in API proxy


The I/O timeout can be configured in the following places in the API proxy:

● Target endpoint
● ServiceCallout policy

Configuring I/O timeout in target endpoint of API proxy

This section explains how to configure I/O timeout in the target endpoint of your API
proxy. The I/O timeout can be configured through the property
io.timeout.millis, which represents the I/O timeout value in milliseconds.

Note: Configuring the I/O timeout in the target endpoint of your API proxy overrides the default
value (55 seconds) with the new value and affects only the specific API proxy where it is
configured. The rest of the API proxies continue to use the default I/O timeout value of 55
seconds.
Required permissions: To perform this task, you must have edit access permissions to
the specific API proxy in which you would like to configure the I/O timeout.

1. In the Edge UI, select the specific API proxy in which you would like to
configure the new I/O timeout value.
2. Select the specific target endpoint that you want to modify.
3. Add the property io.timeout.millis with an appropriate value under the
<HTTPTargetConnection> element in the TargetEndpoint configuration.
For example, to change the I/O timeout to 120 seconds, add the following
block of code:

<Properties>
    <Property name="io.timeout.millis">120000</Property>
</Properties>

Since the io.timeout.millis property is in milliseconds, the value for 120
seconds is 120000.
The following examples show how to configure the I/O timeout in the target
endpoint configuration of your API proxy:

Example target endpoint configuration using URL for backend server

<TargetEndpoint name="default">
    <HTTPTargetConnection>
        <URL>https://mocktarget.apigee.net/json</URL>
        <Properties>
            <Property name="io.timeout.millis">120000</Property>
        </Properties>
    </HTTPTargetConnection>
</TargetEndpoint>

Example target endpoint configuration using target server

<TargetEndpoint name="default">
    <HTTPTargetConnection>
        <LoadBalancer>
            <Server name="target1" />
            <Server name="target2" />
        </LoadBalancer>
        <Properties>
            <Property name="io.timeout.millis">120000</Property>
        </Properties>
        <Path>/test</Path>
    </HTTPTargetConnection>
</TargetEndpoint>

Note: Property values can be only literals. You cannot set values using variables.
4. Save the changes made to your API proxy.

Configuring I/O timeout in ServiceCallout policy of API proxy

This section explains how to configure the I/O timeout in the ServiceCallout policy of
your API proxy. The I/O timeout can be configured through either the <Timeout>
element or the io.timeout.millis property. Both the <Timeout> element and
the io.timeout.millis property represent the I/O timeout values in milliseconds.

Note: Configuring the I/O timeout in the ServiceCallout policy of the API proxy overrides the
default value (55 seconds) with the new value and affects only the specific ServiceCallout
policy where it is configured. The rest of the ServiceCallout policies and API proxies continue to
use the default I/O timeout value of 55 seconds.
Required permissions: To perform this task, you must have edit access permissions to the
specific API proxy in which you would like to configure the I/O timeout for the ServiceCallout
policy.

You can configure the I/O timeout in the ServiceCallout policy using one of the
following methods:

● <Timeout> element.
● io.timeout.millis property.
To configure the I/O timeout in the ServiceCallout policy using the <Timeout>
element, do the following:

1. In the Edge UI, select the specific API proxy in which you would like to
configure the new I/O timeout value for the ServiceCallout policy.
2. Select the specific ServiceCallout policy that you want to modify.
3. Add the <Timeout> element with an appropriate value under the
<ServiceCallout> configuration.
For example, to change the I/O timeout to 120 seconds, add the following line
of code:

<Timeout>120000</Timeout>

Since the <Timeout> element is in milliseconds, the value for 120 seconds is
120000.
The following example shows how to configure the I/O timeout in the
ServiceCallout policy using the <Timeout> element:

Example ServiceCallout policy configuration using URL for backend server

<ServiceCallout name="Service-Callout-1">
    <DisplayName>ServiceCallout-1</DisplayName>
    <Timeout>120000</Timeout>
    <HTTPTargetConnection>
        <Properties/>
        <URL>https://mocktarget.apigee.net/json</URL>
    </HTTPTargetConnection>
</ServiceCallout>

4. Save the changes made to your API proxy.

Configuring I/O timeout on Message Processors


Note: The instructions in this section are for Edge for Private Cloud users only.

This section explains how to configure the I/O timeout on the Message Processors.
The I/O timeout can be configured through the property
HTTPTransport.io.timeout.millis, which represents the I/O timeout value in
milliseconds on the Message Processor component, using the token as per the
syntax described in How to configure Edge.

Note: If a change is made at the Message Processor level, it is important to note that it affects
the I/O timeout of all the API proxies associated with the organizations and environments served
by the specific Message Processors.
Required permissions: To perform this task, you must have the following permissions:

1. Access to the specific Message Processor where you would like to configure the I/O
timeout.
2. Access to create or edit the Message Processor component properties file in the
/opt/apigee/customer/application folder.

To configure the I/O timeout on the Message Processors, do the following:

1. On the Message Processor machine, open the following file in an editor. If it
does not already exist, then create it:

/opt/apigee/customer/application/message-processor.properties

For example, to open the file using vi, enter the following command:

vi /opt/apigee/customer/application/message-processor.properties

2. Add a line in the following format to the properties file, substituting a value for
TIME_IN_MILLISECONDS:

conf_http_HTTPTransport.io.timeout.millis=TIME_IN_MILLISECONDS

For example, to change the I/O timeout on the Message Processor to 120
seconds, add the following line:

conf_http_HTTPTransport.io.timeout.millis=120000

3. Save your changes.
4. Ensure the properties file is owned by the apigee user as shown below:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

5. Restart the Message Processor as shown below:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

6. If you have more than one Message Processor, repeat the above steps on all
the Message Processors.
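The edit-and-verify part of the procedure above can be rehearsed safely in a temporary directory. This is an illustrative dry run, not the real operation: on an actual Message Processor the file lives under /opt/apigee/customer/application, and you would also chown it to the apigee user and restart the component.

```shell
# Dry run of steps 1-2 above in a throwaway directory.
workdir=$(mktemp -d)
props="$workdir/message-processor.properties"

# Append the token that sets the I/O timeout to 120 seconds (120000 ms)
echo "conf_http_HTTPTransport.io.timeout.millis=120000" >> "$props"

# Confirm the line landed in the file, mirroring the grep used in the
# verification section below
grep "HTTPTransport.io.timeout.millis" "$props"
```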

Verifying I/O timeout on Message Processors


Note: The instructions in this section are for Edge for Private Cloud users only.

This section explains how to verify that the I/O timeout has been successfully
modified on the Message Processors.

Even though you use the token
conf_http_HTTPTransport.io.timeout.millis to set the I/O timeout on the
Message Processor, you need to verify that the actual property
HTTPTransport.io.timeout.millis has been set with the new value.

1. On the Message Processor machine, search for the property
HTTPTransport.io.timeout.millis in the /opt/apigee/edge-message-processor/conf
directory and check to see if it has been set with the new value as shown below:

grep -ri "HTTPTransport.io.timeout.millis" /opt/apigee/edge-message-processor/conf

If the new I/O timeout value is successfully set on the Message Processor,
then the above command shows the new value in the http.properties file.
The following is the sample result from the above command after you have
configured the I/O timeout to 120 seconds:

/opt/apigee/edge-message-processor/conf/http.properties:HTTPTransport.io.timeout.millis=120000

In the example output above, notice that the property
HTTPTransport.io.timeout.millis has been set with the new value
120000 in http.properties. This indicates that the I/O timeout is
successfully configured to 120 seconds on the Message Processor.
2. If you still see the old value for the property
HTTPTransport.io.timeout.millis, then verify that you have followed
all the steps outlined in Configuring I/O timeout on Message Processors
correctly. If you have missed any step, repeat all the steps again correctly.
3. If you are still not able to modify the I/O timeout, then contact Apigee
Edge Support.

What’s next?
Learn about Configuring I/O timeout on Routers

Configuring I/O timeout on Routers


Note: This document is applicable for Edge Public and Private Cloud users.

This document explains how to configure the I/O timeout on Apigee Edge’s Routers.

The I/O timeout on the Router represents the time for which the Router waits to
receive a response from the Message Processor, after establishing the connection
and sending the request to the Message Processor. The default value of the I/O
timeout on the Router is 57 seconds.

The I/O timeout for Routers can be increased or decreased from the default value of
57 seconds based on your needs. It can be configured in the following ways:

● In a virtual host
● On the Router

The following properties control the I/O timeout on the Routers:

Property name: proxy_read_timeout
Location: Virtual host
Description: Specifies the maximum time for which the Router waits to receive a
response from the Message Processor, after establishing the connection and
sending the request to the Message Processor.

If there is no response from the Message Processor within this timeout period,
then the Router times out.

By default, this property takes the value set for the
conf_load_balancing_load.balancing.driver.proxy.read.timeout
property on the Router. The default value is 57 seconds.

If this property is modified with a new timeout value for a specific virtual host,
then only the API proxies using that specific virtual host are affected.

Property name: conf_load_balancing_load.balancing.driver.proxy.read.timeout
Location: Router
Description: Specifies the maximum time for which the Router waits to receive a
response from the Message Processor, after establishing the connection and
sending the request to the Message Processor.

If there is no response from the Message Processor within this timeout period,
then the Router times out.

This property is used for all the virtual hosts on this Router. The default value of
this property is 57 seconds.

You can either modify this property as explained in Configuring I/O timeout on
Routers below, or you can overwrite this value by setting the
proxy_read_timeout property at the virtual host level.

You can set the time interval for this property in units other than seconds using
the following notation:

ms: milliseconds
s: seconds (default)
m: minutes
h: hours
d: days
w: weeks
M: months (length of 30 days)
y: years (length of 365 days)

Property name: conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout
Location: Router
Description: Specifies the total time the Router waits to receive a response from
all the Message Processors, after establishing the connection and sending the
request to each Message Processor.

This is applicable when your Edge installation has multiple Message Processors
and retry is enabled upon occurrence of errors. It takes one of the following
values:

● The current value of
conf_load_balancing_load.balancing.driver.proxy.read.timeout
● The default value of 57 seconds

As with the
conf_load_balancing_load.balancing.driver.proxy.read.timeout
property, you can specify time intervals in units other than the default (seconds).

Before you begin


Before you use the steps in this document, be sure you understand the following
topics:

● If you aren’t familiar with virtual host properties, read Virtual host property
reference.
● If you aren’t familiar with configuring properties for Edge on Private Cloud,
read How to configure Edge.
● Ensure that you follow the Best practices for configuring I/O timeout
recommendations.
Note: Configuring inappropriate I/O timeout values on Edge components can lead to 504
Gateway Timeout Errors.

Configuring I/O timeout in virtual host


This section explains how to configure I/O timeout in the virtual host associated with
an organization and environment. The I/O timeout can be configured in the virtual
host through the property proxy_read_timeout, which represents the I/O timeout
value in seconds.

Note: Configuring the I/O timeout in a virtual host overrides the default value (57 seconds) with
the new value and affects only the specific API proxies where the virtual host is used. The rest of
the API proxies continue to use the default I/O timeout value of 57 seconds.
Required permissions: To perform this task, you must have the following permissions:

● You need to be an orgadmin of the specific organization, or
● You should be in a custom role with permission to modify the specific virtual host in
which you would like to configure the I/O timeout.

You can configure the virtual host using one of the following methods:

● Edge UI
● Edge API
Note: The virtual host cannot be edited using classic UI. If you don't have access to the new UI,
use the Edge API. For more information on the transition to the new UI, see Transition from
Classic Edge UI.

To configure the virtual host using the Edge UI, do the following:

1. Log in to the Edge UI.
2. Navigate to Admin > Virtual Hosts.
3. Select the specific environment where you want to make this change.
4. Select the specific virtual host for which you would like to configure the new
I/O timeout value.
5. Under Properties, update the Proxy Read Timeout value in seconds.
For example, if you want to change the timeout to 120 seconds, type 120.
6. Save the change.
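For installations where the new UI is not available, the same change can be made through the management API. The following is a hedged sketch rather than a verbatim procedure: the host, port, organization, environment, virtual host, and user values are placeholders, and it assumes the standard update pattern of fetching the virtual host definition, editing it, and PUTting it back.

```shell
# Sketch: update proxy_read_timeout on a virtual host via the management
# API. MS_HOST, ORG, ENV, VHOST, and USER are placeholders.

# 1. Fetch the current virtual host definition and save it locally:
curl -v -X GET "http://MS_HOST:8080/v1/organizations/ORG/environments/ENV/virtualhosts/VHOST" \
  -u USER > vhost.json

# 2. Edit vhost.json so that properties.property contains:
#      { "name": "proxy_read_timeout", "value": "120" }

# 3. PUT the full, edited definition back:
curl -v -X PUT "http://MS_HOST:8080/v1/organizations/ORG/environments/ENV/virtualhosts/VHOST" \
  -H "Content-Type: application/json" -u USER -d @vhost.json
```

After the update, use the verification procedure in the next section to confirm the new value.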


Verifying I/O timeout on virtual hosts
This section explains how to verify that the I/O timeout has been successfully
modified on the virtual host using the Edge API.

Note: Use this procedure to verify that the I/O timeout has been successfully modified on the
virtual host whether you used the Edge UI or the Edge API to modify the I/O timeout value.

1. Execute the Get virtual host API to get the virtual host configuration as
shown below:

Public Cloud user

curl -v -X GET https://api.enterprise.apigee.com/v1/organizations/{organization-name}/environments/{environment-name}/virtualhosts/{virtualhost-name} -u <username>

Private Cloud user

curl -v -X GET http://<management-server-host>:<port>/v1/organizations/{organization-name}/environments/{environment-name}/virtualhosts/{virtualhost-name} -u <username>

Where:
{organization-name} is the name of the organization
{environment-name} is the name of the environment
{virtualhost-name} is the name of the virtual host

2. Verify that the property proxy_read_timeout has been set to the new
value.

Sample updated virtual host configuration

{
  "hostAliases": [
    "api.myCompany.com"
  ],
  "interfaces": [],
  "listenOptions": [],
  "name": "secure",
  "port": "443",
  "retryOptions": [],
  "properties": {
    "property": [
      {
        "name": "proxy_read_timeout",
        "value": "120"
      }
    ]
  },
  "sSLInfo": {
    "ciphers": [],
    "clientAuthEnabled": "false",
    "enabled": "true",
    "ignoreValidationErrors": false,
    "keyAlias": "myCompanyKeyAlias",
    "keyStore": "ref://myCompanyKeystoreref",
    "protocols": []
  },
  "useBuiltInFreeTrialCert": false
}

3. In the example above, note that proxy_read_timeout has been set with
the new value of 120 seconds.
4. If you still see the old value for proxy_read_timeout, then verify that you
have followed all the steps outlined in Configuring I/O timeout in virtual host
correctly. If you have missed any step, repeat all the steps again correctly.
5. If you are still not able to modify the I/O timeout, then contact Apigee Edge
Support.
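As a quick programmatic check, the proxy_read_timeout value can be pulled out of the saved API response with a one-line filter. The abbreviated JSON below is an assumption standing in for the full Get virtual host output; in practice you would redirect the API response into the file instead.

```shell
# Save an abbreviated virtual host response (stand-in for the real
# Get virtual host API output).
vh=$(mktemp)
cat > "$vh" <<'EOF'
{
  "name": "secure",
  "properties": {
    "property": [
      { "name": "proxy_read_timeout", "value": "120" }
    ]
  }
}
EOF

# Extract the configured proxy_read_timeout value from the JSON
timeout_value=$(sed -n 's/.*"value": "\([0-9]*\)".*/\1/p' "$vh")
echo "$timeout_value"
rm -f "$vh"
```

The sed pattern assumes the property layout shown in the sample; a JSON-aware tool would be more robust against formatting differences.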

Configuring I/O timeout on Routers


Note: The instructions in this section are for Edge Private Cloud users only.

This section explains how to configure I/O timeout on the Routers. The I/O timeout
can be configured through the Router property
conf_load_balancing_load.balancing.driver.proxy.read.timeout,
which represents the I/O timeout value in seconds.

Note: Since this change is made at the Router level, it is important to note that it affects the I/O
timeout of all the API proxies associated with the organizations served by the specific Routers.
Required permissions: To perform this task, you must have the following permissions:

● Access to the specific Router machine, where you would like to configure the I/O timeout.
● Access to create/edit the Router component properties file in the
/opt/apigee/customer/application folder.

To configure I/O timeout on the Routers, do the following:

1. On the Router machine, open the following file in an editor. If it does not
already exist, then create it:

/opt/apigee/customer/application/router.properties

For example, to open the file with vi, enter the following command:

vi /opt/apigee/customer/application/router.properties

2. Add a line in the following format to the properties file, substituting a value
for time_in_seconds:

conf_load_balancing_load.balancing.driver.proxy.read.timeout=time_in_seconds

For example, to change the I/O timeout on the Router to 120 seconds, add the
following line:

conf_load_balancing_load.balancing.driver.proxy.read.timeout=120

You can also specify the I/O timeout in units other than seconds. For example,
to change the timeout to two minutes, add the following line:

conf_load_balancing_load.balancing.driver.proxy.read.timeout=2m

Note: The property
conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout
automatically takes the new value set for the property
conf_load_balancing_load.balancing.driver.proxy.read.timeout.
However, if you want to set a different value for
conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout, then
you need to add a line such as the following to the
/opt/apigee/customer/application/router.properties file, and then continue with
the remaining steps:

conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout=122

3. Save your changes.
4. Ensure this properties file is owned by the apigee user as shown below:

chown apigee:apigee /opt/apigee/customer/application/router.properties

5. Restart the Router as shown below:

/opt/apigee/apigee-service/bin/apigee-service edge-router restart

6. If you have more than one Router, repeat the above steps on all the Routers.
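As with the Message Processor procedure, the edit step can be rehearsed in a temporary directory before touching a live Router. This is an illustrative dry run only: on a real Router the file lives under /opt/apigee/customer/application, and you would also chown it to the apigee user and restart the component.

```shell
# Dry run of the Router properties edit in a throwaway directory.
workdir=$(mktemp -d)
props="$workdir/router.properties"

# Set the Router I/O timeout to 120 seconds
echo "conf_load_balancing_load.balancing.driver.proxy.read.timeout=120" >> "$props"

# Confirm the line landed in the file
grep "proxy.read.timeout" "$props"
```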

Verifying I/O timeout on the Routers


Note: The instructions in this section are for Edge Private Cloud users only.

This section explains how to verify that the I/O timeout has been successfully
modified on the Routers.

Even though you use the token
conf_load_balancing_load.balancing.driver.proxy.read.timeout to
set the I/O timeout on the Router, you need to verify that the actual property
proxy_read_timeout has been set with the new value.

1. On the Router machine, search for the property proxy_read_timeout in the
/opt/nginx/conf.d directory and check to see if it has been set with the new
value as follows:

grep -ri "proxy_read_timeout" /opt/nginx/conf.d

If the new I/O timeout value is successfully set on the Router, then the above
command shows the new value in all the virtual host configuration files.
The following is the sample result from the grep command above when the
I/O timeout is 120 seconds:

/opt/nginx/conf.d/0-default.conf:proxy_read_timeout 120;
/opt/nginx/conf.d/0-edge-health.conf:proxy_read_timeout 1s;

In the example output above, notice that the property proxy_read_timeout
has been set with the new value 120 in 0-default.conf, which is the
configuration file for the default virtual host. This indicates that the I/O
timeout is successfully configured to 120 seconds on the Router.

Note: The file /opt/nginx/conf.d/0-edge-health.conf is the health check
configuration file, and the property proxy_read_timeout for health checks is
controlled through a different token.

2. If you still see the old value for the property proxy_read_timeout, then
verify that you have followed all the steps outlined in Configuring I/O timeout
on Routers correctly. If you have missed any step, repeat all the steps again
correctly.
3. If you are still not able to modify the I/O timeout, then contact Apigee Edge
Support.

What’s next?
Learn about Configuring I/O timeout on Message Processors

Configuring connection timeout on Message Processors
Note: This document is applicable for Edge Public and Private Cloud users.

This document explains how to configure the connection timeout for the Apigee
Edge Message Processors.

The connection timeout represents the time for which the Message Processor waits
to establish a connection with the target server. The default value of the connection
timeout property on the Message Processor is 3 seconds. This timeout period is
applicable to the backend servers configured in the target endpoint configuration
and in the ServiceCallout policy of your API proxy.

The connection timeout for Message Processors can be increased or decreased
from the default value of 3 seconds based on your needs. It can be configured in the
following ways:

● In the API proxy:
    ● In the target endpoint
    ● In the ServiceCallout policy
● On the Message Processor
Note:

1. The default value of connection timeout property (3 seconds) has been found to be
largely adequate for most backend servers. You should increase the value of this
property only if you have deemed it necessary after performing all the possible
checks/optimizations on the network and your backend server.
2. Ensure that you choose an appropriate value for the connection timeout property based
on your needs. Don’t set arbitrarily high values for this property.

The following properties control the connection timeout on the Message Processors:

Property name: connect.timeout.millis
Location: API proxy (target endpoint or ServiceCallout policy)
Description: This is the maximum time for which the Message Processor waits to
connect to the target server.

By default, this property takes the value set for the
HTTPClient.connect.timeout.millis property on the Message Processor,
whose default value is 3 seconds.

If this property is modified with a new timeout value for the target server
associated with an API proxy, then the connection time only for that target server
is affected.

Property name: HTTPClient.connect.timeout.millis
Location: Message Processor
Description: This is the maximum time for which the Message Processor waits to
connect to the target server.

This property is used for all the API proxies running on this Message Processor.
The default value of this property is 3 seconds.

You can either modify this property as explained in Configuring connection
timeout on Message Processors below, or you can overwrite this value by setting
the connect.timeout.millis property at the API proxy level.

Before you begin


Before you use the steps in this document, be sure you understand the following
topics:

● If you aren’t familiar with connection timeout, see the
connect.timeout.millis property description in TargetEndpoint
Transport Property Specification.
● If you aren’t familiar with configuring properties for Edge for Private Cloud,
read How to configure Edge.
Configuring connection timeout in API proxy
The connection timeout can be configured in the API proxy in the following places:

● Target endpoint
● ServiceCallout policy

Configuring connection timeout in target endpoint of API proxy

This section explains how to configure connection timeout in the target endpoint of
your API proxy. The connection timeout can be configured through the property
connect.timeout.millis, which represents the connection timeout value in
milliseconds.

Note: Configuring the connection timeout in the target endpoint of your API proxy overrides the default
value (3 seconds) with the new value and affects only the specific API proxy where it is
configured. The rest of the API proxies continue to use the default connection timeout value of 3
seconds.

Required permissions: To perform this task, you must have edit access permissions to
the specific API proxy in which you would like to configure the connection timeout.

1. In the Edge UI, select the specific API proxy in which you would like to
configure the new connection timeout value.
2. Select the specific target endpoint that you want to modify.
3. Add the property connect.timeout.millis with an appropriate value
under the <HTTPTargetConnection> element in the TargetEndpoint
configuration. For example, to change the connection timeout to 5 seconds,
add the following block of code:

<Properties>
  <Property name="connect.timeout.millis">5000</Property>
</Properties>

Since the connect.timeout.millis property is in milliseconds, the value
for 5 seconds is 5000.

The following examples show how to configure the connection timeout in the
target endpoint configuration of your API proxy.

Example target endpoint configuration using URL for backend server:

<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <URL>https://mocktarget.apigee.net/json</URL>
    <Properties>
      <Property name="connect.timeout.millis">5000</Property>
    </Properties>
  </HTTPTargetConnection>
</TargetEndpoint>

Example target endpoint configuration using target server:

<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <LoadBalancer>
      <Server name="target1" />
      <Server name="target2" />
    </LoadBalancer>
    <Properties>
      <Property name="connect.timeout.millis">5000</Property>
    </Properties>
    <Path>/test</Path>
  </HTTPTargetConnection>
</TargetEndpoint>

Note: Property values can be only literals. You cannot set values using variables.

4. Save the changes made to your API proxy.

Configuring connection timeout in ServiceCallout policy of API proxy

This section explains how to configure the connection timeout in the


ServiceCallout policy of your API proxy. The connection timeout can be
configured through the connect.timeout.millis property, which represents the
connect time value in milliseconds.

Note: Configuring the connection timeout in the ServiceCallout policy of the API proxy
overrides the default value (3 seconds) with the new value and affects only the specific
ServiceCallout policy where it is configured. The rest of the ServiceCallout policies and
API proxies continue to use the default connection timeout value of 3 seconds.

Required permissions: To perform this task, you must have edit access permissions to the specific API
proxy in which you would like to configure the connection timeout for the ServiceCallout
policy.

To configure the connection timeout in the ServiceCallout policy using the


connect.timeout.millis property:

1. In the Edge UI, select the specific API proxy in which you would like to
configure the new connection timeout value for the ServiceCallout policy.
2. Select the specific ServiceCallout policy that you want to modify.
3. Add the property connect.timeout.millis with an appropriate value
under the <HTTPTargetConnection> element in the ServiceCallout policy
configuration. For example, to change the connection timeout to 5 seconds,
add the following block of code:

<Properties>
  <Property name="connect.timeout.millis">5000</Property>
</Properties>

Since the connect.timeout.millis property is in milliseconds, the value
for 5 seconds is 5000.

The following examples show how to configure the connection timeout in the
ServiceCallout policy of your API proxy.

Example ServiceCallout policy configuration using URL for backend server:

<ServiceCallout name="Service-Callout-1">
  <DisplayName>Service Callout-1</DisplayName>
  <HTTPTargetConnection>
    <Properties>
      <Property name="connect.timeout.millis">5000</Property>
    </Properties>
    <URL>https://mocktarget.apigee.net/json</URL>
  </HTTPTargetConnection>
</ServiceCallout>

Example ServiceCallout policy configuration using target server:

<ServiceCallout enabled="true" name="Service-Callout-1">
  <DisplayName>Service Callout-1</DisplayName>
  <Response>calloutResponse</Response>
  <HTTPTargetConnection>
    <LoadBalancer>
      <Server name="target1" />
      <Server name="target2" />
    </LoadBalancer>
    <Properties>
      <Property name="connect.timeout.millis">5000</Property>
    </Properties>
    <Path>/test</Path>
  </HTTPTargetConnection>
</ServiceCallout>

Note: Property values can be only literals. You cannot set values using variables.

4. Save the changes made to your API proxy.

Configuring connection timeout on Message Processors


This section explains how to configure the connection timeout on the Message
Processors. The connection timeout can be configured through the property
conf_http_HTTPClient.connect.timeout.millis, which represents the
connection timeout value in milliseconds on the Message Processor component,
using the token according to the syntax described in How to configure Edge.

Note: A change made at the Message Processor level affects the connection timeout of
all the API proxies associated with the organizations and environments served by the
specific Message Processors.

Required permissions: To perform this task, you must have the following permissions:

● Access to the specific Message Processor, where you would like to configure the
connect timeout.
● Access to create or edit the Message Processor component properties file in the
/opt/apigee/customer/application folder.

To configure the connection timeout on the Message Processors, do the following:

1. On the Message Processor machine, open the following file in an editor. If it
does not already exist, then create it:

/opt/apigee/customer/application/message-processor.properties

For example, to open the file using vi, enter the following:

vi /opt/apigee/customer/application/message-processor.properties

2. Add a line in the following format to the properties file, substituting a value for
TIME_IN_MILLISECONDS:

conf_http_HTTPClient.connect.timeout.millis=TIME_IN_MILLISECONDS

For example, to change the connection timeout on the Message Processor to
5 seconds, add the following line:

conf_http_HTTPClient.connect.timeout.millis=5000

3. Save your changes.
4. Ensure the properties file is owned by the apigee user as shown below:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

5. Restart the Message Processor as shown below:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

6. If you have more than one Message Processor, repeat the above steps on all
the Message Processors.
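The Message Processor steps above can be consolidated into a small shell script. This is a minimal sketch rather than an official Apigee utility: by default it writes message-processor.properties into the current directory so you can dry-run it anywhere; on a real Message Processor host, set PROPS_FILE to /opt/apigee/customer/application/message-processor.properties and run the chown and restart commands shown in the comments.

```shell
#!/bin/bash
# Set the connection timeout token for the Message Processor, idempotently.
set -euo pipefail

TIMEOUT_MS="${1:-5000}"   # desired connection timeout in milliseconds
PROPS_FILE="${PROPS_FILE:-$PWD/message-processor.properties}"
TOKEN="conf_http_HTTPClient.connect.timeout.millis"

# Create the properties file if it does not exist yet.
touch "$PROPS_FILE"

# Update an existing setting in place, or append a new one.
if grep -q "^${TOKEN}=" "$PROPS_FILE"; then
  sed -i "s|^${TOKEN}=.*|${TOKEN}=${TIMEOUT_MS}|" "$PROPS_FILE"
else
  echo "${TOKEN}=${TIMEOUT_MS}" >> "$PROPS_FILE"
fi

grep "^${TOKEN}=" "$PROPS_FILE"

# On a real Message Processor host, finish with:
#   chown apigee:apigee "$PROPS_FILE"
#   /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
```

Rerunning the script with a new value rewrites the existing line instead of appending a duplicate, which keeps the properties file clean across repeated changes.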

Verifying connection timeout on Message Processors


This section explains how to verify that the connection timeout has been
successfully modified on the Message Processors.

Even though you use the token
conf_http_HTTPClient.connect.timeout.millis to set the connection
timeout on the Message Processor, you need to verify that the actual property
HTTPClient.connect.timeout.millis has been set with the new value.

1. On the Message Processor machine, search for the property
HTTPClient.connect.timeout.millis in the
/opt/apigee/edge-message-processor/conf directory and check to
see if it has been set with the new value as shown below:

grep -ri "HTTPClient.connect.timeout.millis" /opt/apigee/edge-message-processor/conf

2. If the new connection timeout value is successfully set on the Message
Processor, then the above command shows the new value in the
http.properties file. The sample result from the above command after you
have configured the connection timeout to 5 seconds is as follows:

/opt/apigee/edge-message-processor/conf/http.properties:HTTPClient.connect.timeout.millis=5000

3. In the example output above, notice that the property
HTTPClient.connect.timeout.millis has been set with the new value
5000 in http.properties. This indicates that the connection timeout is
successfully configured to 5 seconds on the Message Processor.
4. If you still see the old value for the property
HTTPClient.connect.timeout.millis, then verify that you have
followed all the steps outlined in Configuring connection timeout on Message
Processors correctly. If you have missed any step, repeat all the steps again.
5. If you are still not able to modify the connection timeout, contact Google
Cloud Apigee Edge Support.

Configuring keep alive timeout on Message Processors
Note: This document is applicable for Edge Public and Private Cloud users.

This document explains how to configure the keep alive timeout for the Apigee Edge
Message Processors.

The keep alive timeout on the Message Processor allows a single TCP connection to
send and receive multiple HTTP requests/responses from/to the backend server,
instead of opening a new connection for every request/response pair.

The default value of the keep alive timeout property on the Message Processor is 60
seconds. This timeout period is applicable to the backend servers configured in the
target endpoint configuration and in the ServiceCallout policy of your API proxy.

The keep alive timeout for Message Processors can be increased or decreased from
the default value of 60 seconds based on your needs. It can be configured in the
following ways:

● In the API proxy


● In the target endpoint
● In the ServiceCallout policy
● On the Message Processor

The following properties control the keep alive timeout on the Message Processors:
Property: keepalive.timeout.millis
Location: API proxy (target endpoint and ServiceCallout policy)
Description: This is the maximum idle time for which the Message Processor allows a single TCP connection to send and receive multiple HTTP requests/responses, instead of opening a new connection for every request/response pair. By default, this property takes the value set for the HTTPClient.keepalive.timeout.millis property on the Message Processor, where the default value is 60 seconds. If this property is modified with a new timeout value for the target server used in the target endpoint or the ServiceCallout policy in the specific API proxy, then the keep alive timeout only for that specific target server is affected.

Property: HTTPClient.keepalive.timeout.millis
Location: Message Processor
Description: This is the maximum idle time for which the Message Processor allows a single TCP connection to send and receive multiple HTTP requests/responses, instead of opening a new connection for every request/response pair. This property applies to all the API proxies running on this Message Processor. The default value of this property is 60 seconds. You can either modify this property as explained in Configuring keep alive timeout on Message Processors below, or you can overwrite this value by setting the keepalive.timeout.millis property at the API proxy level.

Before you begin


Before you use the steps in this document, be sure you understand the following
topics:
● If you aren’t familiar with keep alive timeout, see the
keepalive.timeout.millis property description in TargetEndpoint
Transport Property Specification.
● If you aren’t familiar with configuring properties for Edge on Private Cloud,
read How to configure Edge.
Note: Ensure that you set an appropriate value for the keep alive timeout property based on your
needs. Do not set very low values. For example, setting a value of zero (0) results in disabling
keep alive, which is an antipattern as described in Antipattern: Disable HTTP persistent (reusable
keep alive) connections.

Configuring keep alive timeout in API proxy


The keep alive timeout can be configured in the API proxy in the following places:

● Target endpoint
● ServiceCallout policy

Configuring keep alive timeout in target endpoint of API proxy

This section explains how to configure keep alive timeout in the target endpoint of
your API proxy. The keep alive timeout can be configured through the property
keepalive.timeout.millis, which represents the keep alive timeout value in
milliseconds.

Note: Configuring the keep alive timeout in the target endpoint of your API proxy overrides the
default value (60 seconds) with the new value and affects only the specific API proxy where it is
configured. The rest of the API proxies continue to use the default keep alive timeout value of 60
seconds.

Required permissions: To perform this task, you must have edit access permissions to
the specific API proxy in which you would like to configure the keep alive timeout.

1. In the Edge UI, select the specific API proxy in which you would like to
configure the new keep alive timeout value.
2. Select the specific target endpoint that you want to modify.
3. Add the property keepalive.timeout.millis with an appropriate value
under the <HTTPTargetConnection> element in the TargetEndpoint
configuration. For example, to change the keep alive timeout to 30 seconds,
add the following block of code:

<Properties>
  <Property name="keepalive.timeout.millis">30000</Property>
</Properties>

Since the keepalive.timeout.millis property is in milliseconds, the
value for 30 seconds is 30000.

The following examples show how to configure the keep alive timeout in the
target endpoint configuration of your API proxy.

Example target endpoint configuration using URL for backend server:

<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <URL>https://mocktarget.apigee.net/json</URL>
    <Properties>
      <Property name="keepalive.timeout.millis">30000</Property>
    </Properties>
  </HTTPTargetConnection>
</TargetEndpoint>

Example target endpoint configuration using target server:

<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <LoadBalancer>
      <Server name="target1" />
      <Server name="target2" />
    </LoadBalancer>
    <Properties>
      <Property name="keepalive.timeout.millis">30000</Property>
    </Properties>
    <Path>/test</Path>
  </HTTPTargetConnection>
</TargetEndpoint>

Note: Property values can be only literals. You cannot set values using variables.

4. Save the changes made to your API proxy.

Configuring keep alive timeout in ServiceCallout policy of API proxy

This section explains how to configure the keep alive timeout in the
ServiceCallout policy of your API proxy. The keep alive timeout can be
configured through the keepalive.timeout.millis property, which
represents the keep alive timeout value in milliseconds.

Note: Configuring the keep alive timeout in the ServiceCallout policy of the API proxy
overrides the default value (60 seconds) with the new value and affects only the specific
ServiceCallout policy where it is configured. The rest of the ServiceCallout policies and
API proxies continue to use the default keep alive timeout value of 60 seconds.

Required permissions: To perform this task, you must have edit access permissions to the specific API
proxy in which you would like to configure the keep alive timeout for the ServiceCallout
policy.

To configure the keep alive timeout in the ServiceCallout policy using the
keepalive.timeout.millis property:

1. In the Edge UI, select the specific API proxy in which you would like to
configure the new keep alive timeout value for the ServiceCallout policy.
2. Select the specific ServiceCallout policy that you want to modify.
3. Add the property keepalive.timeout.millis with an appropriate value
under the <HTTPTargetConnection> element in the ServiceCallout policy
configuration. For example, to change the keep alive timeout to 30 seconds,
add the following block of code:

<Properties>
  <Property name="keepalive.timeout.millis">30000</Property>
</Properties>

Since the keepalive.timeout.millis property is in milliseconds, the
value for 30 seconds is 30000.

The following examples show how to configure the keep alive timeout in the
ServiceCallout policy of your API proxy.

Example ServiceCallout policy configuration using URL for backend server:

<ServiceCallout name="Service-Callout-1">
  <DisplayName>Service Callout-1</DisplayName>
  <HTTPTargetConnection>
    <Properties>
      <Property name="keepalive.timeout.millis">30000</Property>
    </Properties>
    <URL>https://mocktarget.apigee.net/json</URL>
  </HTTPTargetConnection>
</ServiceCallout>

Example ServiceCallout policy configuration using target server:

<ServiceCallout enabled="true" name="Service-Callout-1">
  <DisplayName>Service Callout-1</DisplayName>
  <Response>calloutResponse</Response>
  <HTTPTargetConnection>
    <LoadBalancer>
      <Server name="target1" />
      <Server name="target2" />
    </LoadBalancer>
    <Properties>
      <Property name="keepalive.timeout.millis">30000</Property>
    </Properties>
    <Path>/test</Path>
  </HTTPTargetConnection>
</ServiceCallout>

Note: Property values can be only literals. You cannot set values using variables.

4. Save the changes made to your API proxy.

Configuring keep alive timeout on Message Processors


This section explains how to configure the keep alive timeout on the Message
Processors. The keep alive timeout can be configured through the property
HTTPClient.keepalive.timeout.millis, which represents the keep alive
timeout value in milliseconds on the Message Processor component. Since this
property is commented out on the Message Processor, you need to use the special
syntax conf/http.properties+HTTPClient.keepalive.timeout.millis
as described in the section Set a token that is currently commented out in How to
configure Edge.
Note: If a change is made at the Message Processor level, it is important to note that it affects
the keep alive timeout of all the API proxies associated with the organizations and environments
served by the specific Message Processors.
Required permissions: To perform this task, you must have the following permissions:

● Access to the specific Message Processor, where you would like to configure the keep
alive timeout.
● Access to create or edit the Message Processor component properties file in the
/opt/apigee/customer/application folder.

To configure the keep alive timeout on the Message Processors, do the following:

1. On the Message Processor machine, open the following file in an editor. If it
does not already exist, then create it:

/opt/apigee/customer/application/message-processor.properties

For example, to open the file using vi, enter the following:

vi /opt/apigee/customer/application/message-processor.properties

2. Add a line in the following format to the properties file, substituting a value for
TIME_IN_MILLISECONDS:

conf/http.properties+HTTPClient.keepalive.timeout.millis=TIME_IN_MILLISECONDS

For example, to change the keep alive timeout on the Message Processor to
30 seconds, add the following line:

conf/http.properties+HTTPClient.keepalive.timeout.millis=30000

3. Save your changes.
4. Ensure the properties file is owned by the apigee user as shown below:

chown apigee:apigee /opt/apigee/customer/application/message-processor.properties

5. Restart the Message Processor as shown below:

/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

6. If you have more than one Message Processor, repeat the above steps on all
the Message Processors.

Verifying keep alive timeout on Message Processors


This section explains how to verify that the keep alive timeout has been successfully
modified on the Message Processors.

Even though you use the special syntax
conf/http.properties+HTTPClient.keepalive.timeout.millis to set
the keep alive timeout on the Message Processor, you need to verify that the actual
property HTTPClient.keepalive.timeout.millis has been set with the new
value.

1. On the Message Processor machine, search for the property
HTTPClient.keepalive.timeout.millis in the
/opt/apigee/edge-message-processor/conf directory and check to
see if it has been set with the new value as shown below:

grep -ri "HTTPClient.keepalive.timeout.millis" /opt/apigee/edge-message-processor/conf

2. If the new keep alive timeout value is successfully set on the Message
Processor, then the above command shows the new value in the
http.properties file. The sample result from the above command after you
have configured the keep alive timeout to 30 seconds is as follows:

/opt/apigee/edge-message-processor/conf/http.properties:HTTPClient.keepalive.timeout.millis=30000

3. In the example output above, notice that the property
HTTPClient.keepalive.timeout.millis has been set with the new
value 30000 in http.properties. This indicates that the keep alive timeout
is successfully configured to 30 seconds on the Message Processor.
4. If you still see the old value for the property
HTTPClient.keepalive.timeout.millis, then verify that you have
followed all the steps outlined in Configuring keep alive timeout on Message
Processors correctly. If you have missed any step, repeat all the steps again.
5. If you are still not able to modify the keep alive timeout, contact Google
Cloud Apigee Edge Support.

Converting certificates to supported format
Note: This document is applicable for Edge Public and Private Cloud users.

This document explains how to convert a TLS certificate and its associated private
key to the PEM or PFX (PKCS #12) format.

Apigee Edge supports storing only PEM or PFX format certificates in keystores and
truststores. The steps used to convert certificates from any existing format to PEM
or PFX rely on the OpenSSL toolkit, and are applicable in any environment
where OpenSSL is available.

Before you begin

Before you use the steps in this document, be sure you understand the following
topics:

● If you aren’t familiar with the PEM or PFX format, read About TLS/SSL.
● If you aren’t familiar with certificate formats, read SSL Certificate formats.
● If you aren’t familiar with the OpenSSL library, read OpenSSL.
● If you want to use the command-line examples in this guide, install or update
to the latest version of the OpenSSL client.

Converting certificate from DER format to PEM format

This section describes how to convert a certificate and associated private key from
the DER format to the PEM format.

1. Transfer the file containing the complete certificate chain
(certificate.der) and the associated private key (private_key.der) that
you want to convert to PEM format to a machine where OpenSSL is installed,
using scp, sftp, or any other utility. For example, use the scp command to
transfer the files to the /tmp directory on the server containing OpenSSL as
follows:

scp certificate.der servername:/tmp
scp private_key.der servername:/tmp

Where servername is the name of the server containing OpenSSL.

2. Log in to the machine where OpenSSL is installed.
3. From the directory where the certificates are located, run the following
commands to convert the certificate and associated private key from DER
format to PEM format:

openssl x509 -inform DER -in certificate.der -outform PEM -out certificate.pem
openssl rsa -inform DER -in private_key.der -outform PEM -out private.key

4. Verify that the certificate is converted to PEM format.
Converting certificate from P7B format to PEM format

This section describes how to convert certificates from the P7B format to the PEM
format.
Note: A P7B file contains only certificates and chain certificates (Intermediate CAs), not the
private key.

1. Transfer the file containing the complete certificate chain
(certificate.p7b) that you want to convert to PEM format to a machine
where OpenSSL is installed, using scp, sftp, or any other utility. For example,
use the scp command to transfer the file to the /tmp directory on the server
containing OpenSSL as follows:

scp certificate.p7b servername:/tmp

Where servername is the name of the server containing OpenSSL.

2. Log in to the machine where OpenSSL is installed.
3. From the directory where the certificates are located, run the following
command to convert the certificate from P7B format to PEM format:

openssl pkcs7 -print_certs -in certificate.p7b -out certificate.pem

4. Verify that the certificate is converted to PEM format.
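Likewise, you can fabricate a sample P7B from a throwaway self-signed certificate and unpack it with the pkcs7 command from this section. A minimal sketch; crl2pkcs7 with -nocrl is used here only to build the sample P7B container, and all file names are arbitrary.

```shell
# Build a throwaway P7B from a self-signed certificate, then unpack it back
# to PEM with the pkcs7 command from this section.
set -e

openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout demo_key.pem -out demo_cert.pem -days 1 2>/dev/null

# Wrap the certificate in a PKCS #7 (P7B) container; no private key inside.
openssl crl2pkcs7 -nocrl -certfile demo_cert.pem -out certificate.p7b

# Extract the certificates back out as PEM.
openssl pkcs7 -print_certs -in certificate.p7b -out certificate.pem

grep -q "BEGIN CERTIFICATE" certificate.pem && echo "P7B unpacked to PEM"
```

Note that the extracted PEM never contains a private key, matching the note above that a P7B file carries only certificates and chain certificates.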

Converting certificate from PFX format to PEM format

This section describes how to convert TLS certificates from the PFX format to the
PEM format.

When converting a PFX file to PEM format, OpenSSL puts all the certificates and the
private key into a single file. If you need them as separate files, open the output in a
text editor and copy each certificate and the private key (including the BEGIN/END
lines) into individual text files, saving them, for example, as certificate.pem,
intermediate.pem (if applicable), CACert.pem, and privateKey.key respectively.

Apigee does support the PFX/PKCS #12 format; however, the PEM format is
convenient for many reasons, including validation.

1. Transfer the file containing the certificates and private key
(certificate.pfx) that you want to convert to PEM format to a machine
where OpenSSL is installed, using scp, sftp, or any other utility. For example,
use the scp command to transfer the file to the /tmp directory on the server
containing OpenSSL as follows:

scp certificate.pfx servername:/tmp

Where servername is the name of the server containing OpenSSL.

2. Log in to the machine where OpenSSL is installed.
3. From the directory where the certificates are located, run the following
command to convert the certificate from PFX format to PEM format:

openssl pkcs12 -in certificate.pfx -out certificate.pem -nodes

4. Verify that the certificate is converted to PEM format.
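The PFX-to-PEM round trip can also be rehearsed with a throwaway self-signed pair. A minimal sketch; the pass:changeit password and all file names are arbitrary, and -nodes leaves the extracted private key unencrypted as in the command above.

```shell
# Build a throwaway PFX (PKCS #12) bundle from a self-signed pair, then
# convert it to PEM with the pkcs12 command from this section.
set -e

openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout demo_key.pem -out demo_cert.pem -days 1 2>/dev/null

# Bundle key + certificate into a password-protected PFX file.
openssl pkcs12 -export -in demo_cert.pem -inkey demo_key.pem \
  -out certificate.pfx -passout pass:changeit

# Convert back to PEM; -nodes leaves the extracted key unencrypted.
openssl pkcs12 -in certificate.pfx -out certificate.pem -nodes \
  -passin pass:changeit

# The single output file holds both the certificate and the private key.
grep -q "BEGIN CERTIFICATE" certificate.pem
grep -q "PRIVATE KEY" certificate.pem && echo "PFX converted to PEM"
```

The two greps confirm what the text above describes: the PEM output is one file containing every certificate plus the private key, ready to be split by hand if needed.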
Converting certificate from P7B format to PFX format

This section describes how to convert TLS certificates from the P7B format to the
PFX format.

To convert to the PFX format, you need to get the private key as well.

Note: A P7B file contains all the certificates and chain certificates (Intermediate CAs) in a single
file.

1. Transfer the certificate (certificate.p7b) and the associated private key
(private_key.key) that you want to convert to PFX format to a machine
where OpenSSL is installed, using scp, sftp, or any other utility. For example,
use the scp command to transfer the files to the /tmp directory on the server
containing OpenSSL as follows:

scp certificate.p7b servername:/tmp
scp private_key.key servername:/tmp

Where servername is the name of the server containing OpenSSL.

2. Log in to the machine where OpenSSL is installed.
3. From the directory where the certificates are located, run the following
commands to convert the certificate from P7B to PFX format and export the
entity and intermediate CA certificates into separate files:

openssl pkcs7 -print_certs -in certificate.p7b -out certificate.cer

openssl pkcs12 -export -in certificate.cer -inkey private_key.key -out certificate.pfx -certfile CACert.cer

Verifying certificate is in PEM format

This section describes how to verify that the certificate is in PEM format.

1. To view a certificate that is in PEM format, run the following command:

openssl x509 -in certificate.pem -text -noout

2. If you are able to view the contents of the certificate in a human-readable
format without any errors, then you can confirm that the certificate is in PEM
format.
3. If the certificate is in any other format, then you will see errors like the
following:

unable to load certificate
12626:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:647:Expecting: TRUSTED CERTIFICATE


Validating certificate chain
Note: This document is applicable for Edge Public and Private Cloud users.

This document explains how to validate a certificate chain before you upload the
certificate to a keystore or a truststore in Apigee Edge. The process relies on the
OpenSSL toolkit to validate the certificate chain and is applicable on any
environment where OpenSSL is available.

Before you begin


Before you use the steps in this document, be sure you understand the following
topics:

● If you aren’t familiar with a certificate chain, read Chain of trust.
● If you aren’t familiar with the OpenSSL library, read OpenSSL.
● If you want to use the command-line examples in this guide, install or update
to the latest version of the OpenSSL client.
● Ensure the certificates are in PEM format. If the certificates are not in PEM
format, use the instructions in Converting certificates to supported format to
convert them into PEM format.

Validating the certificate subject and issuer for the complete chain
To validate the certificate chain using OpenSSL commands, complete the steps
described in the following sections:

● Splitting the certificate chain
● Verifying the certificate subject and issuer
● Verifying the certificate subject and issuer hash
● Verifying the certificate expiry

Splitting the certificate chain

Before validating the certificate, you need to split the certificate chain into separate
certificates using the following steps:

1. Log in to the server where the OpenSSL client exists.
2. Split the certificate chain into the following certificates (if not already done):
● Entity certificate: entity.pem
● Intermediate certificate: intermediate.pem
● Root certificate: root.pem

Note: In some certificate chains, there could be multiple intermediate certificates between the
entity and root certificates. In such cases, name the intermediate certificates
intermediate1.pem, intermediate2.pem, and so on.
The following figure shows an example certificate chain:

Verifying the certificate subject and issuer

This section describes how to get the subject and issuer of the certificates and verify
that you have a valid certificate chain.

1. Run the following OpenSSL command to get the Subject and Issuer for
each certificate in the chain, from entity to root, and verify that they form a
proper certificate chain:

openssl x509 -text -in certificate | grep -E '(Subject|Issuer):'

Where certificate is the name of the certificate.

2. Verify that the certificates in the chain adhere to the following guidelines:
● The Subject of each certificate matches the Issuer of the preceding
certificate in the chain (except for the entity certificate).
● The Subject and Issuer are the same for the root certificate.
3. If the certificates in the chain adhere to these guidelines, then the certificate
chain is considered to be complete and valid.
Sample certificate chain validation

The following example shows the output of the OpenSSL commands for a sample
certificate chain containing three certificates:

Entity certificate:

openssl x509 -text -in entity.pem | grep -E '(Subject|Issuer):'

Issuer: C = US, O = Google Trust Services, CN = GTS CA 1O1
Subject: C = US, ST = California, L = Mountain View, O = Google LLC, CN = *.enterprise.apigee.com

Intermediate certificate:

openssl x509 -text -in intermediate.pem | grep -E '(Subject|Issuer):'

Issuer: OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign
Subject: C = US, O = Google Trust Services, CN = GTS CA 1O1

Root certificate:

openssl x509 -text -in root.pem | grep -E '(Subject|Issuer):'

Issuer: OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign
Subject: OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign

In the example shown above, notice the following:

● The Subject of the intermediate certificate matches the Issuer of the entity certificate.
● The Subject of the root certificate matches the Issuer of the intermediate certificate.
● The Subject and Issuer are the same in the root certificate.

From the example above, you can confirm that the sample certificate chain is valid.
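The manual Subject/Issuer comparison can also be automated with openssl verify, which walks the chain for you. The sketch below builds a throwaway root -> intermediate -> entity chain (all subjects and file names are arbitrary) and then validates it in a single command; with a real chain split as described above, only the last command is needed.

```shell
# Build a throwaway three-level chain, then validate it in one command
# instead of comparing Subject/Issuer pairs by hand.
set -e

# Self-signed root CA.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-root" \
  -keyout root.key -out root.pem -days 2 2>/dev/null

# Intermediate CA signed by the root (CA:TRUE so it may issue certificates).
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=demo-inter" \
  -keyout inter.key -out inter.csr 2>/dev/null
printf "basicConstraints=critical,CA:TRUE" > ca_ext.cnf
openssl x509 -req -in inter.csr -CA root.pem -CAkey root.key \
  -CAcreateserial -extfile ca_ext.cnf -out intermediate.pem -days 2 2>/dev/null

# Entity (leaf) certificate signed by the intermediate.
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=demo-entity" \
  -keyout entity.key -out entity.csr 2>/dev/null
openssl x509 -req -in entity.csr -CA intermediate.pem -CAkey inter.key \
  -CAcreateserial -out entity.pem -days 2 2>/dev/null

# Root is the trust anchor; the intermediate is supplied as untrusted chain
# material. Prints "entity.pem: OK" when the chain is complete and valid.
openssl verify -CAfile root.pem -untrusted intermediate.pem entity.pem
```

This check complements the hash comparison in the next section: openssl verify confirms the cryptographic signatures as well as the subject/issuer linkage.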

Verifying the certificate subject and issuer hash

This section explains how to get the hash of the subject and issuer of the certificates
and verify that you have a valid certificate chain.

It is always a good practice to verify the hash sequence of certificates as it can help
in identifying issues such as the Common Name (CN) of the certificate having
unwanted space or special characters.

1. Run the following OpenSSL command to get the hash sequence for each
certificate in the chain, from entity to root, and verify that they form a
proper certificate chain:

openssl x509 -hash -issuer_hash -noout -in certificate

Where certificate is the name of the certificate. The first line of output is the
subject hash and the second line is the issuer hash.
2. Verify that the certificates in the chain adhere to the following guidelines:
● The subject hash of each certificate matches the issuer hash of the
preceding certificate in the chain (except for the entity certificate).
● The subject and issuer hash are the same for the root certificate.
3. If the certificates in the chain adhere to these guidelines, then the certificate
chain is considered to be complete and valid.

Sample certificate chain validation through hash sequence

The following example shows the output of the OpenSSL commands for a sample
certificate chain containing three certificates:

openssl x509 -in entity.pem -hash -issuer_hash -noout
c54c66ba #this is the subject hash
99bdd351 #this is the issuer hash

openssl x509 -in intermediate.pem -hash -issuer_hash -noout
99bdd351
4a6481c9

openssl x509 -in root.pem -hash -issuer_hash -noout
4a6481c9
4a6481c9

In the example shown above, notice the following:

● The subject hash of the intermediate certificate matches the
issuer hash of the entity certificate.
● The subject hash of the root certificate matches the issuer hash
of the intermediate certificate.
● The subject and issuer hash are the same in the root certificate.

From the example above, you can confirm that the sample certificate chain is
valid.
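One property worth noting from the example above: for a self-signed (root) certificate, -hash and -issuer_hash print the same value. The sketch below demonstrates this with a throwaway certificate generated on the fly (the subject name is illustrative).

```shell
#!/bin/sh
# Sketch: a self-signed root certificate has identical subject and issuer
# hashes, because its Subject and Issuer are the same DN.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=Demo Root" \
  -keyout "$dir/k.pem" -out "$dir/root.pem" 2>/dev/null

subject_hash=$(openssl x509 -hash -noout -in "$dir/root.pem")
issuer_hash=$(openssl x509 -issuer_hash -noout -in "$dir/root.pem")
echo "subject hash: $subject_hash"
echo "issuer hash:  $issuer_hash"
[ "$subject_hash" = "$issuer_hash" ] && echo "root certificate: hashes match"
```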

Verifying the certificate expiry

This section explains how to verify that none of the certificates in the chain are
expired, using one of the following methods:

● Get the start and end date of the certificate.
● Get the expiry status.

START AND END DATE

Run the following OpenSSL command to get the start and end date for each
certificate in the chain, from entity to root, and verify that all the certificates in the
chain are in force (the start date is before today) and are not expired.

Sample certificate expiry validation through start and end dates

openssl x509 -startdate -enddate -noout -in entity.pem
notBefore=Feb 6 21:57:21 2020 GMT
notAfter=Feb 4 21:57:21 2021 GMT

openssl x509 -startdate -enddate -noout -in intermediate.pem
notBefore=Jun 15 00:00:42 2017 GMT
notAfter=Dec 15 00:00:42 2021 GMT

openssl x509 -startdate -enddate -noout -in root.pem
notBefore=Apr 13 10:00:00 2011 GMT
notAfter=Apr 13 10:00:00 2022 GMT

Note: If the certificate chain is found to be invalid or contains expired certificates based on the
methods described in this document, work with your Certificate Authority to procure a valid
certificate chain and ensure the certificates in the chain are not expired.
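The second method, getting the expiry status, can be sketched with OpenSSL's -checkend flag, which exits with status 0 if the certificate is still valid the given number of seconds from now. The certificate below is a throwaway one, valid for two days, generated just for the demo.

```shell
#!/bin/sh
# Sketch: expiry-status check with `openssl x509 -checkend N`.
# Exit status 0 means the certificate will NOT expire within N seconds.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=Demo" \
  -keyout "$dir/k.pem" -out "$dir/cert.pem" 2>/dev/null

# 86400 seconds = 1 day. Repeat this check for each certificate in the chain.
if openssl x509 -checkend 86400 -noout -in "$dir/cert.pem" >/dev/null; then
  echo "certificate valid for at least one more day"
else
  echo "certificate expires within a day"
fi
```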

Validating certificate purpose


Note: This document is applicable for Edge Public and Private Cloud users.

This document explains how to validate a certificate’s purpose before you upload the
certificate to a keystore or a truststore. The process relies on OpenSSL for validation
and is applicable on any environment where OpenSSL is available.

TLS certificates are generally issued with one or more purposes for which they
can be used. Typically this is done to restrict the operations for which the
public key contained in the certificate can be used. The purpose of a certificate is
available in the following certificate extensions:

● Key usage
● Extended key usage

Key usage

The key usage extension defines the purpose (for example, encipherment, signature,
or certificate signing) of the key contained in the certificate. If the public key is used
for entity authentication, then the certificate extension should have the key usage
Digital signature.

The different key usage extensions available for a TLS certificate created using the
Certificate Authority (CA) process are as follows:

● Digital signature
● Non-repudiation
● Key encipherment
● Data encipherment
● Key agreement
● Certificate signing
● CRL signing
● Encipher only
● Decipher only

For more information on these key usage extensions, see RFC5280, Key Usage.

Extended key usage

This extension indicates one or more purposes for which the certified public key may
be used, in addition to or in place of the basic purposes indicated in the key usage
extension. In general, this extension will appear only in end entity certificates.
Some common extended key usage extensions are as follows:

● TLS Web server authentication


● TLS Web client authentication
● anyExtendedKeyUsage

An extended key can be either critical or non-critical.

● If the extension is critical, the certificate must be used only for the indicated
purpose or purposes. If the certificate is used for another purpose, it is in
violation of the CA's policy.
● If the extension is non-critical, it indicates the intended purpose or purposes of
the key is informational and does not imply that the CA restricts use of the key
to the purpose indicated. However, applications that use certificates may
require that a particular purpose be indicated in order for the certificate to be
acceptable.

If a certificate contains both the key usage field and the extended key usage field as
critical then both fields must be processed independently, and the certificate can be
used only for a purpose that satisfies both key usage values. However, if there is no
purpose that can satisfy both key usage values, then that certificate must not be
used for any purpose.

When you procure a certificate, ensure that it has the proper key usage defined to
satisfy the requirements for client or server certificates without which the TLS
handshake would fail.

Recommended key usage and extended key usages for certificates used in
Apigee Edge

● Server entity certificate used in Apigee Edge keystore of virtual host
Key usage (mandatory): Digital signature; Key encipherment or key agreement
Extended key usage (optional): TLS Web server authentication

● Client entity certificate used in Apigee Edge truststore of virtual host
Key usage (mandatory): Digital signature or key agreement
Extended key usage (optional): TLS Web client authentication

● Server entity certificate used in Apigee Edge truststore of target server
Key usage (mandatory): Digital signature; Key encipherment or key agreement
Extended key usage (optional): TLS Web server authentication

● Client entity certificate used in Apigee Edge keystore of target server
Key usage (mandatory): Digital signature or key agreement
Extended key usage (optional): TLS Web client authentication

● Intermediate and root certificates
Key usage (mandatory): Certificate sign; Certificate revocation list (CRL) sign
Extended key usage (optional): none

Before you begin


Before you use the steps in this document, be sure you understand the following
topics:

● If you aren't familiar with a certificate chain, read Chain of trust.
● If you aren't familiar with the OpenSSL library, read OpenSSL.
● If you want to learn more about key usage extensions and extended key
usage, read RFC5280.
● If you want to use the command-line examples in this guide, install or update
to the latest version of the OpenSSL client.
● Ensure the certificates are in PEM format; if not, convert them to
PEM format.

Validate the purpose of the certificate


This section describes the steps used to validate the purpose of the certificate.

1. Log in to the server where OpenSSL is installed.
2. To get the key usage of a certificate, run the following OpenSSL command:

openssl x509 -noout -ext keyUsage < certificate

Where certificate is the name of the certificate.
Sample output:

openssl x509 -noout -ext keyUsage < entity.pem
X509v3 Key Usage: critical
Digital Signature, Key Encipherment

openssl x509 -noout -ext keyUsage < intermediate.pem
X509v3 Key Usage: critical
Certificate Sign, CRL Sign

3. If a key usage is mandatory, then it is marked critical, as in the following
output:

openssl x509 -noout -ext keyUsage < intermediate.pem
X509v3 Key Usage: critical
Certificate Sign, CRL Sign

4. Run the following command to get the extended key usage for a certificate. If
the extended key usage is not marked critical, then it is a recommendation
and not a mandate.

openssl x509 -noout -ext extendedKeyUsage < certificate

Where certificate is the name of the certificate.
Sample output:

openssl x509 -noout -ext extendedKeyUsage < entity.pem
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication

openssl x509 -noout -ext extendedKeyUsage < intermediate.pem
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication

Note: If the certificates that you would like to use in Apigee Edge do not have the recommended
key usage and extended key usage as explained in Recommended key usage, then you need to
work with your Certificate Authority to procure certificates with the right key usage and
extended key usage.
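As a sketch of the checks above, the following script generates a demo server entity certificate with the recommended extensions and then asserts that the expected usages are present. It assumes OpenSSL 1.1.1 or later (for -addext and -ext); with a real certificate, replace the generated file with your own PEM.

```shell
#!/bin/sh
# Sketch: verify a certificate carries the recommended key usage and
# extended key usage for a server entity certificate before uploading it.
# The demo certificate, its subject name, and file paths are illustrative.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo.example.com" \
  -addext "keyUsage=critical,digitalSignature,keyEncipherment" \
  -addext "extendedKeyUsage=serverAuth" \
  -keyout "$dir/k.pem" -out "$dir/entity.pem" 2>/dev/null

ku=$(openssl x509 -noout -ext keyUsage -in "$dir/entity.pem")
eku=$(openssl x509 -noout -ext extendedKeyUsage -in "$dir/entity.pem")

# Assert the usages recommended for an Apigee virtual host server cert.
echo "$ku"  | grep -q "Digital Signature" && echo "keyUsage: Digital Signature present"
echo "$eku" | grep -q "TLS Web Server Authentication" && echo "extendedKeyUsage: server auth present"
```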

Validating client certificate against truststore

Note: The instructions in this document are applicable for Edge Private Cloud users only.

This document explains how to verify that the correct client certificates have been
uploaded to Apigee Edge Routers. The process of validating certificates relies on
OpenSSL, which is the underlying mechanism used by NGINX on Apigee Edge
Routers.

Any mismatch in the certificates sent by the client applications as part of the API
request and the certificates stored on Apigee Edge Routers will lead to 400 Bad
Request - SSL Certificate errors. Validating the certificates using the process
described in this document can help you to proactively detect these issues and
prevent any certificate errors at runtime.

Note: This is applicable only if you have configured two-way mutual TLS between the client
application and Apigee Edge.

Before you begin


Before you use the steps in this document, be sure you understand the following
topics:

● If you aren’t familiar with the OpenSSL library, read OpenSSL.


● If you want to use the command-line examples in this guide, install or update
to the latest version of OpenSSL client.
● Ensure the certificates are in PEM format and if not, convert the certificates to
PEM format.

Validating client certificates against the truststore on Apigee Routers

This section describes the steps used to verify that the client certificates are
identical to the certificates stored in the truststore on Apigee Edge Routers.

1. Log in to one of the Router machines.
2. Navigate to the /opt/nginx/conf.d folder, where the certificates in the
Apigee Edge Routers' truststore are stored.
3. Identify the truststore for which you would like to validate the client
certificates. The truststore name is in the following format:

org-env-virtualhost-client.pem

Where:
● org is your Apigee organization name
● env is your Apigee environment name
● virtualhost is your Apigee virtual host name

For example, to validate for the following:
● Organization: myorg
● Environment: test
● Virtual host: secure

The truststore name is: myorg-test-secure-client.pem
4. From your local machine, transfer the client certificate that you want to
validate to the /tmp directory on the Router, using scp, sftp, or any other
utility. For example, use the scp command as follows:

scp client_cert.pem router-host:/tmp

Where router-host is the name of the Router machine.
5. Verify the client certificate using OpenSSL as follows:

openssl verify -trusted org-env-virtualhost-client.pem /tmp/client_cert.pem

Where:
● org is your Apigee organization name
● env is your Apigee environment name
● virtualhost is your Apigee virtual host name
6. Fix any errors that are returned from the command above.
If the truststore on the Apigee Edge Router doesn't contain the correct
certificates, delete and upload the correct certificates in PEM format to the
truststore using the Upload certificate to truststore API.

Note: The steps above can also be used to validate client certificates against any NGINX-based
backend server's truststore. The location of the truststore and the naming convention of the
file on your NGINX-based backend server may differ from that of Apigee.
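The openssl verify check can be tried end to end with throwaway certificates. In this sketch, a demo CA certificate stands in for the org-env-virtualhost-client.pem truststore, and all names and paths are illustrative.

```shell
#!/bin/sh
# Sketch: reproduce the `openssl verify -trusted` check with demo certs.
# A real run would use the Router truststore PEM and the client cert in /tmp.
set -e
dir=$(mktemp -d); cd "$dir"

# Demo CA, standing in for the virtual host truststore file.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=Demo CA" \
  -keyout ca.key -out truststore.pem 2>/dev/null

# Client certificate signed by the demo CA.
openssl req -newkey rsa:2048 -nodes -subj "/CN=demo-client" \
  -keyout client.key -out client.csr 2>/dev/null
openssl x509 -req -days 1 -in client.csr -CA truststore.pem -CAkey ca.key \
  -CAcreateserial -out client_cert.pem 2>/dev/null

# Prints "client_cert.pem: OK" when the truststore validates the client.
openssl verify -trusted truststore.pem client_cert.pem
```

A client certificate not issued by any certificate in the truststore would instead produce a verification error, which at runtime would surface as a 400 Bad Request - SSL Certificate error.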

Configuring cipher suites on virtual hosts and Routers

Note: This document is applicable for Edge Public and Private Cloud users.

This document explains how to configure cipher suites on virtual hosts and Routers
in Apigee Edge.

A cipher suite is a set of algorithms that help secure a network connection that
uses TLS. The client and the server must agree on the specific cipher suite that is
going to be used in exchanging messages. If the client and server do not agree on a
cipher suite, then the requests fail with TLS Handshake failures.

In Apigee, cipher suites should be mutually agreed upon between the client
applications and Routers.

You may want to modify the cipher suites on Apigee Edge for the following reasons:

● To avoid any cipher suites mismatch between client applications and Apigee
Routers
● To use more secure cipher suites either to fix any security vulnerabilities or
for enhanced security

Cipher suites can be configured either on the virtual hosts or Apigee Routers. Note
that Apigee accepts cipher suites only in the OpenSSL cipher strings format at both
the virtual host and the Router. The OpenSSL ciphers manpage provides the SSL or
TLS cipher suites from the relevant specification and their OpenSSL equivalents.

For example:

If you want to configure the TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256


cipher suite at the virtual host or the Apigee Router, then you need to identify the
corresponding OpenSSL cipher string from the OpenSSL ciphers manpage. The
OpenSSL cipher string for the cipher suite
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 is ECDHE-RSA-AES128-GCM-
SHA256. So, you need to use the OpenSSL cipher string ECDHE-RSA-AES128-
GCM-SHA256 while configuring the cipher suite in the virtual host or on the Apigee
Router.
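You can also let OpenSSL do this mapping for you. The following sketch uses openssl ciphers -stdname (available in OpenSSL 1.1.1 and later) to look up the OpenSSL cipher string for an IANA cipher suite name; the suite name used here is the one from the example above.

```shell
#!/bin/sh
# Sketch: map an IANA cipher suite name to its OpenSSL cipher string.
# `openssl ciphers -stdname` prints both names for each supported suite:
#   <standard name> - <OpenSSL name> <protocol> ...
set -e
iana="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
ossl=$(openssl ciphers -stdname 'ALL' | awk -v n="$iana" '$1 == n { print $3 }')
echo "$iana -> $ossl"
```

The printed OpenSSL string is the value to use in the virtual host ssl_ciphers property or the Router cipher property.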

Before you begin


● To learn about the OpenSSL cipher strings for the different cipher suites,
read OpenSSL ciphers manpage.
● If you aren’t familiar with virtual host properties, read Virtual host property
reference.
● If you aren’t familiar with configuring properties for Edge on Private Cloud,
read How to configure Edge.

Configuring cipher suites on virtual hosts


This section explains how to configure cipher suites in the virtual hosts associated
with an organization and environment. Cipher suites can be configured in the virtual
host through the property ssl_ciphers, which represents the list of cipher suites
supported by the virtual host.

Required permissions: To perform this task, you must have one of the following permissions:

● You need to be an org admin of the specific organization


● You should be in a custom role with permission to modify the specific virtual host in
which you need to configure the cipher suites.

You can configure the virtual host using one of the following methods:

● Using the Edge UI


● Using the Edge API
Note:

1. If you use the cipher suite name TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, any
other form of cipher suite name, or invalid cipher suite names other than OpenSSL
cipher strings to configure the virtual host, you will see the following error:

With the Edge UI:

Invalid value TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 for ssl_ciphers. Expected
values are openssl cipher strings separated by :

With the Management API:

<Error>
  <Code>messaging.config.beans.InvalidValue</Code>
  <Message>Invalid value TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 for
ssl_ciphers. Expected values are openssl cipher strings separated by :</Message>
  <Contexts/>
</Error>

2. If you specify one or more cipher suite names or invalid ciphers, along with at least one
valid OpenSSL cipher string, then Apigee Edge will consider only the valid OpenSSL
cipher strings and ignore the rest. You will not see any errors in this case.
3. Only the supported OpenSSL cipher strings corresponding to the protocol version will
be accepted; the rest will be ignored. For example, if the TLSv1.2 protocol is
used, then old ciphers that don't support TLSv1.2 will be ignored.

USING EDGE UI
USING EDGE API
Note: The virtual host cannot be edited using the classic UI. If you don't have access to the new
UI, use the Edge API. For more information on the differences, see Transition from Classic Edge
UI.
To configure the virtual host using the Edge UI, do the following:

1. Log in to the Edge UI.
2. Navigate to Admin > Virtual Hosts.
3. Select the Environment where you want to make this change.
4. Select the virtual host for which you would like to configure the
cipher suites.
5. Under Properties, update the Ciphers value with a colon-delimited list of
OpenSSL cipher strings.
For example, if you want to allow only the cipher suites
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 and
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, then determine the
corresponding OpenSSL cipher strings from the OpenSSL ciphers manpage
as shown in the following table:

● Cipher suite: TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
OpenSSL cipher string: DHE-RSA-AES128-GCM-SHA256

● Cipher suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
OpenSSL cipher string: ECDHE-RSA-AES128-GCM-SHA256

6. Add the OpenSSL cipher strings separated by colons, for example:
DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
7. Save the change.

Verifying cipher suites on virtual hosts


This section explains how to verify that the cipher suites have been successfully
modified on the virtual host using the Edge API.

Note: Use this procedure to verify that the cipher suites have been successfully modified on the
virtual host whether you used the Edge UI or the Edge API to configure the cipher suites.
1. Execute the Get virtual host API to get the virtual host configuration, as
shown below:

Public Cloud user:

curl -v -X GET
https://api.enterprise.apigee.com/v1/organizations/{organization-name}/
environments/{environment-name}/virtualhosts/{virtualhost-name} -u
{username}

Private Cloud user:

curl -v -X GET
http://{management_server_IP}:8080/v1/organizations/{organization-
name}/environments/{environment-name}/virtualhosts/{virtualhost-name} -u
{username}

2. Verify that the property ssl_ciphers has been set to the new value.
Sample updated virtual host configuration:

{
  "hostAliases": [
    "api.myCompany.com"
  ],
  "interfaces": [],
  "listenOptions": [],
  "name": "secure",
  "port": "443",
  "retryOptions": [],
  "properties": {
    "property": [
      {
        "name": "ssl_ciphers",
        "value": "DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256"
      }
    ]
  },
  "sSLInfo": {
    "ciphers": [],
    "clientAuthEnabled": "false",
    "enabled": "true",
    "ignoreValidationErrors": false,
    "keyAlias": "myCompanyKeyAlias",
    "keyStore": "ref://myCompanyKeystoreref",
    "protocols": []
  },
  "useBuiltInFreeTrialCert": false
}

3. In the example above, note that ssl_ciphers has been set with the new
value.
4. If you still see the old value for ssl_ciphers, then verify that you have
followed all the steps outlined in Configuring cipher suites on virtual hosts
correctly. If you have missed any step, repeat all the steps again correctly.
5. If you are still not able to update or add cipher suites to the virtual host,
contact Apigee Edge Support.
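The check above can be scripted with jq (which this guide uses elsewhere). The sketch below inlines a trimmed copy of the sample virtual host response; in practice, pipe the curl output from the Get virtual host API into the same jq filter instead.

```shell
#!/bin/sh
# Sketch: extract ssl_ciphers from a virtual host configuration with jq.
# The JSON below is an inlined, trimmed sample, not a live API response.
set -e
dir=$(mktemp -d)
cat > "$dir/vhost.json" <<'EOF'
{
  "name": "secure",
  "properties": {
    "property": [
      { "name": "ssl_ciphers",
        "value": "DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256" }
    ]
  }
}
EOF
ciphers=$(jq -r '.properties.property[]
                 | select(.name == "ssl_ciphers") | .value' "$dir/vhost.json")
echo "ssl_ciphers = $ciphers"
```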

Configuring cipher suites on Routers


Note: The instructions in this section are for Edge Private Cloud users only.

This section explains how to configure cipher suites on the Routers. Cipher suites
can be configured through the Router property
conf_load_balancing_load.balancing.driver.server.ssl.ciphers,
which represents the colon-separated accepted cipher suites.

Note:

1. Since this change is made at the Router level, it is important to note that it affects all
the virtual hosts associated with the organizations served by the specific Routers.
2. If you specify one or more cipher suite names or invalid ciphers, along with at least one
valid OpenSSL cipher string, then Apigee Edge will consider only the valid OpenSSL
cipher strings and ignore the rest. You will not see any errors/issues in this case.
3. Only the supported OpenSSL Cipher strings corresponding to the protocol version will
be accepted while the rest of them will be ignored. For example: If TLSv1.2 protocol is
used, then the old ciphers that don't support TLSv1.2 will be ignored.
4. Ensure you are using all the valid OpenSSL cipher strings. If all the ciphers in the list are
invalid, then this change may cause all the secure virtual hosts to go down and may
result in runtime impact.

Required permissions: To perform this task, you must have the following permissions:

● Access to the specific Router machine where you would like to configure the Cipher
suites
● Access to create and edit the Router component properties file in the
/opt/apigee/customer/application folder

To configure cipher suites on the Routers, do the following:

1. On the Router machine, open the following file in an editor. If it does not
already exist, then create it:

/opt/apigee/customer/application/router.properties

For example, to open the file with vi, enter the following:

vi /opt/apigee/customer/application/router.properties

2. Add a line in the following format to the properties file, substituting a
value for colon_separated_cipher_suites:

conf_load_balancing_load.balancing.driver.server.ssl.ciphers=colon_separated_cipher_suites

For example, if you want to allow only the cipher suites
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 and
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, then determine the
corresponding OpenSSL cipher strings from the OpenSSL ciphers manpage
as shown in the following table:

● Cipher suite: TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
OpenSSL cipher string: DHE-RSA-AES128-GCM-SHA256

● Cipher suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
OpenSSL cipher string: ECDHE-RSA-AES128-GCM-SHA256

And then add the following line:

conf_load_balancing_load.balancing.driver.server.ssl.ciphers=DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256

3. Save your changes.
4. Ensure this properties file is owned by the apigee user, as shown below:

chown apigee:apigee /opt/apigee/customer/application/router.properties

5. Restart the Router as shown below:

/opt/apigee/apigee-service/bin/apigee-service edge-router restart

6. If you have more than one Router, repeat the above steps on all the Routers.

Verifying cipher suite on Routers


Note: The instructions in this section are for Edge Private Cloud users only.

This section explains how to verify that the cipher suites have been successfully
modified on the Routers.

On the Router, search for the property
conf_load_balancing_load.balancing.driver.server.ssl.ciphers using the Apigee
search utility from the /opt/apigee folder and check whether it has been set to
the new value:

/opt/apigee/apigee-service/bin/apigee-service edge-router configure -search
conf_load_balancing_load.balancing.driver.server.ssl.ciphers

If the new cipher suites are successfully set on the Router, then the above
command shows the new values. The following is the sample result from the
search command above when the cipher suites were updated to
DHE-RSA-AES128-GCM-SHA256 and ECDHE-RSA-AES128-GCM-SHA256:

Found key conf_load_balancing_load.balancing.driver.server.ssl.ciphers, with
value, DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256,
in /opt/apigee/customer/application/router.properties

In the example output above, notice that the property
conf_load_balancing_load.balancing.driver.server.ssl.ciphers has been set with
the new cipher suite values. This indicates that the cipher suites were
successfully updated to the OpenSSL cipher strings
DHE-RSA-AES128-GCM-SHA256 and ECDHE-RSA-AES128-GCM-SHA256 on the
Router.

If you still see the old values for
conf_load_balancing_load.balancing.driver.server.ssl.ciphers, then verify that you
have followed all the steps outlined in Configuring cipher suites on Routers
correctly. If you have missed any step, repeat all the steps again correctly.

If you are still not able to modify the cipher suites on the Routers, then contact
Apigee Edge Support.

Restarting Routers and Message Processors without traffic impact

Note: The instructions in this document are applicable for Edge Private Cloud users only.

This document explains how to restart Routers and Message Processors (MPs)
without impacting incoming API traffic. You may have to restart the Routers and
MPs under certain circumstances. Some examples are as follows:

● When a keystore that is directly referred to in the virtual host, target server
or target endpoint is updated without using references.
● When API proxies are partially deployed on a few MPs.

Before you begin


If you aren’t familiar with Routers and Message Processors, read Edge for Private
Cloud overview.

Rolling restart of Routers without traffic impact


This section describes the steps used to restart Routers without impacting the
incoming API traffic.

Required permissions: To perform this task, you must have permission to access the specific
Router machines where you would like to restart the Router process.

1. Log in to the Router that needs to be restarted.
2. Block the health check port on the Router using the following command.
This ensures that the Router is considered unhealthy and no traffic will be
routed to this Router.

sudo iptables -A INPUT -i eth0 -p tcp --dport 15999 -j REJECT

Note: 15999 is the default health check port on Apigee Routers. However, if you
configured a different port for health check on Routers, then block that port.
3. Wait for two minutes to ensure that any in-flight traffic is handled smoothly
before you restart the Router. You can do this by running the sleep
command as follows:

for i in {001..120}; do sleep 1; printf "\r ${i}"; done

Note: You can modify the wait time based on your requirements.
4. Stop the Apigee Monit service as follows:

apigee-service apigee-monit stop

5. Stop the Apigee Router service as follows:

apigee-service edge-router stop

6. Start the Apigee Router service as follows:

apigee-service edge-router start

7. Wait until the Apigee Router service is started and ready to handle the
incoming traffic by using the following command:

apigee-service edge-router wait_for_ready

8. Start the Apigee Monit service as follows:

apigee-service apigee-monit start

9. Flush the IP tables to unblock the health check port 15999 and allow the
Router to handle the traffic again by running the following commands:

sudo iptables -F
sudo iptables -L
28.
Note: You may have more than one Router that needs to be restarted and they may be located
in one or more data centers. If you want to restart all the Routers without impacting traffic, then
login to one Router at a time and run all the commands (explained in the section above). For
ease of use, all the commands are listed below, which can be copied and pasted.
sudo iptables -A INPUT -i eth0 -p tcp --dport 15999 -j REJECT
for i in {001..120}; do sleep 1; printf "\r ${i}"; done
apigee-service apigee-monit stop
apigee-service edge-router stop
apigee-service edge-router start
apigee-service edge-router wait_for_ready
apigee-service apigee-monit start
sudo iptables -F
sudo iptables -L
Rolling restart of Message Processors without traffic
impact
This section describes the steps used to restart Message Processors (MPs)
without impacting the incoming API traffic.

Required permissions: To perform this task, you must have permission to access the specific
Message Processor machines where you would like to restart the Message Processor process.

1. Log in to the Message Processor that needs to be restarted.
2. Identify the health check port of the Message Processor using the following
command:

curl 0:8082/v1/servers/self -s | jq '.tags.property' | jq '.[] |
select(.name=="http.port")'

3. Block the health check port (identified in step 2) on the Message Processor.
This ensures that the Message Processor is considered unhealthy and no
traffic will be routed to this Message Processor.

sudo iptables -A INPUT -i eth0 -p tcp --dport port -j REJECT

Where port is the port number returned from the command performed in
step 2.
4. Wait for two minutes to ensure that any in-flight traffic is handled smoothly
before you restart the Message Processor. You can do this by running the
sleep command as follows:

for i in {001..120}; do sleep 1; printf "\r ${i}"; done

Note: You can modify the wait time based on your requirements.
5. Stop the Apigee Monit service as follows:

apigee-service apigee-monit stop

6. Stop the Apigee Message Processor service as follows:

apigee-service edge-message-processor stop

7. Start the Apigee Message Processor service as follows:

apigee-service edge-message-processor start

8. Wait until the Apigee Message Processor service is started and ready to
handle the incoming traffic by using the following command:

apigee-service edge-message-processor wait_for_ready

9. Start the Apigee Monit service as follows:

apigee-service apigee-monit start

10. Flush the IP tables to unblock the health check port and allow the Message
Processor to handle the traffic again by running the following commands:

sudo iptables -F
sudo iptables -L
Note: You may have more than one Message Processor that needs to be restarted and they
may be located in one or more data centers. If you want to restart all the Message Processors
without impacting traffic, then login to one Message Processor at a time and run all the
commands (explained in the section above). For ease of use, all the commands are listed
below, which can be copied and pasted.
curl 0:8082/v1/servers/self -s | jq '.tags.property' | jq '.[] |
select(.name=="http.port")'

sudo iptables -A INPUT -i eth0 -p tcp --dport port # -j REJECT


for i in {001..120}; do sleep 1; printf "\r ${i}"; done
apigee-service apigee-monit stop
apigee-service edge-message-processor stop
apigee-service edge-message-processor start
apigee-service edge-message-processor wait_for_ready
apigee-service apigee-monit start
sudo iptables -F
sudo iptables -L

Where port # is the port number returned from the command performed in step 2.
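The jq filter used to identify the health check port can be tested locally. The sketch below inlines a sample /v1/servers/self response (the port value 8998 and the property names other than http.port are illustrative); in practice, replace the here-document with the output of curl 0:8082/v1/servers/self -s.

```shell
#!/bin/sh
# Sketch: extract the http.port value from a /v1/servers/self response.
# The JSON below is an inlined sample, not output from a live server.
set -e
dir=$(mktemp -d)
cat > "$dir/self.json" <<'EOF'
{ "tags": { "property": [
    { "name": "http.port", "value": "8998" },
    { "name": "jmx.rmi.port", "value": "1101" }
] } }
EOF
port=$(jq -r '.tags.property[]
              | select(.name == "http.port") | .value' "$dir/self.json")
echo "health check port to block: $port"
```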

Upgrading your operating system


This page describes how to upgrade major versions of the following operating
systems in a data center:

● Red Hat Enterprise Linux (RHEL): RHEL 6 to RHEL 7 and RHEL 7 to RHEL 8
● Amazon Web Services Linux: AWS Linux 1 to AWS Linux 2

See Edge for Private Cloud supported versions for information on supported
versions of these operating systems.

Note: Apigee does not support an in-place operating system (OS) upgrade. An in-place OS
upgrade may cause Apigee Edge for Private Cloud to stop working due to package and library
incompatibilities.

To upgrade your operating system (OS), you need to install Edge for Private Cloud
in a new data center containing the new version of OS, and then decommission the
old data center, as follows:

1. Set up a new Apigee data center with servers containing the new version of
the OS. See Adding a data center.
2. Install Edge for Private Cloud on the new servers, using a silent configuration
file that expands on the existing installation. Note: If you are upgrading to
RHEL 8, see Prerequisites for RHEL 8.
3. Add the servers to an existing cluster.
4. Finally, decommission the old data center, as explained in Decommissioning
a data center.

Downgrading Apigee Components and NGINX
Note: This document is applicable for Edge Private Cloud users only.

March 2021 patch release


The RPMs for the March 2021 patch release of Edge for Private Cloud, which were
pushed to the Apigee production repository, had an unintended dependency update
for apigee-nginx-1.18. As a result, we have removed the RPMs from the
repository and replaced them with correct RPMs. The invalid RPMs were in the
repository on March 25, 2021 from 08:45 AM to 03:45 PM PST. If you downloaded
and installed Edge RPMs on that date, you may need to downgrade the following
Apigee components to previous versions:

● edge-gateway
● edge-management-server
● edge-message-processor
● edge-postgres-server
● edge-qpid-server
● edge-router
● nginx

The following sections describe how to check whether you need to downgrade, and
how to downgrade Apigee components, if necessary.

Checking whether you need to downgrade


To see whether you need to downgrade Apigee components or NGINX, do one of
the following procedures, depending on whether you are using Edge for Private
Cloud 4.50.00 or 4.19.06.

Procedure for Edge 4.50.00

On each node, enter the following to find your Gateway version:

apigee-service edge-gateway version

If the version number for edge-gateway is:

● Less than 20113, you don't need to take any further action.
● Equal to 20113, you need to downgrade Apigee components and NGINX.
● Greater than 20113, find your NGINX version by entering the following:

sudo yum list installed apigee-nginx

Here is some sample output from the command:

Installed Packages
apigee-nginx.x86_64 1.18.0-1.el7 @apigee-thirdparty

If the NGINX version is apigee-nginx.x86_64 1.18.0-XXX, you only
need to downgrade NGINX.

Procedure for Edge 4.19.06

On each node, enter the following to find your Gateway version:

apigee-service edge-gateway version

If the version number for edge-gateway is:

● Less than 20114, you don't need to take any further action.
● Equal to 20114, you need to downgrade Apigee components and NGINX.
● Greater than 20114, find your NGINX version by entering the following:

sudo yum list installed apigee-nginx

Here is some sample output from the command:

Installed Packages
apigee-nginx.x86_64 1.18.0-1.el7 @apigee-thirdparty

If the NGINX version is apigee-nginx.x86_64 1.18.0-XXX, you only
need to downgrade NGINX.
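The decision logic above can be sketched as a small helper. The build-number thresholds come from this page (20113 for 4.50.00, 20114 for 4.19.06); the function name downgrade_action is illustrative, not part of the Apigee tooling.

```shell
#!/bin/bash
# Sketch: given the edge-gateway build number and the affected build for your
# Edge release (20113 for 4.50.00, 20114 for 4.19.06), report what to do.
# downgrade_action is a hypothetical helper name.
downgrade_action() {
  local build="$1" affected="$2"
  if [ "$build" -lt "$affected" ]; then
    echo "no action needed"
  elif [ "$build" -eq "$affected" ]; then
    echo "downgrade Apigee components and NGINX"
  else
    echo "check apigee-nginx; downgrade NGINX only if it is 1.18.0"
  fi
}

# Example: a 4.50.00 node whose edge-gateway build is exactly 20113.
downgrade_action 20113 20113
```

Feed it the build number reported by apigee-service edge-gateway version on each node.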
Components to downgrade

If you have installed any of the RPMs on the following lists, you need to
downgrade to the previous version of these RPMs.

Components to downgrade for Edge for Private Cloud 4.50.00

● edge-gateway-4.50.00-0.0.20113.noarch.rpm
● edge-management-server-4.50.00-0.0.20113.noarch.rpm
● edge-message-processor-4.50.00-0.0.20113.noarch.rpm
● edge-postgres-server-4.50.00-0.0.20113.noarch.rpm
● edge-qpid-server-4.50.00-0.0.20113.noarch.rpm
● edge-router-4.50.00-0.0.20113.noarch.rpm

Components to downgrade for Edge for Private Cloud 4.19.06

● edge-gateway-4.19.06-0.0.20114.noarch.rpm
● edge-management-server-4.19.06-0.0.20114.noarch.rpm
● edge-message-processor-4.19.06-0.0.20114.noarch.rpm
● edge-postgres-server-4.19.06-0.0.20114.noarch.rpm
● edge-qpid-server-4.19.06-0.0.20114.noarch.rpm
● edge-router-4.19.06-0.0.20114.noarch.rpm

To check whether these RPMs are installed, on each node where any of the
components in the appropriate list above are installed, enter the following
command for each component:

apigee-service component version

Downgrade Apigee components

To downgrade Apigee components, use the following procedure on each node that
has any of these components installed: edge-gateway, edge-management-server,
edge-message-processor, edge-postgres-server, edge-qpid-server, edge-router.

1. Stop each component by entering:

apigee-service component stop

2. Then downgrade the components:

sudo yum downgrade component ...

Here are some examples:

If edge-gateway and edge-message-processor are installed:

sudo yum downgrade edge-gateway edge-message-processor

If edge-gateway and edge-router are installed:

sudo yum downgrade edge-gateway edge-router

If you have an all-in-one (AIO) setup:

sudo yum downgrade edge-gateway edge-postgres-server edge-router
edge-management-server edge-message-processor edge-qpid-server

3. Once you are done downgrading, run configure for each component and
restart it:

apigee-service component configure
apigee-service component start

The correct versions of the RPMs that you would have after downgrading are
shown below.

Edge for Private Cloud 4.50.00

● edge-gateway-4.50.00-0.0.20110
● edge-management-server-4.50.00-0.0.20110
● edge-message-processor-4.50.00-0.0.20110
● edge-postgres-server-4.50.00-0.0.20110
● edge-qpid-server-4.50.00-0.0.20110
● edge-router-4.50.00-0.0.20110

Edge for Private Cloud 4.19.06

● edge-gateway-4.19.06-0.0.20112
● edge-management-server-4.19.06-0.0.20112
● edge-message-processor-4.19.06-0.0.20112
● edge-postgres-server-4.19.06-0.0.20112
● edge-qpid-server-4.19.06-0.0.20112
● edge-router-4.19.06-0.0.20112
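Repeating the stop, downgrade, configure, and start sequence by hand across nodes is error-prone. As a sketch, the helper below (our own illustrative emit_downgrade_plan, not an Apigee tool) prints the sequence for whichever affected components you name, so you can review it before running the commands.

```shell
#!/bin/bash
# Sketch: print the per-node downgrade sequence for the given components.
# Review the output before running it. emit_downgrade_plan is a hypothetical
# helper, not part of apigee-service.
emit_downgrade_plan() {
  local c
  # Stop every listed component first.
  for c in "$@"; do
    echo "apigee-service $c stop"
  done
  # Downgrade all listed components in one yum transaction.
  echo "sudo yum downgrade $*"
  # Reconfigure and restart each component.
  for c in "$@"; do
    echo "apigee-service $c configure"
    echo "apigee-service $c start"
  done
}

# Example: a node running a gateway and a message processor.
emit_downgrade_plan edge-gateway edge-message-processor
```

Once reviewed, the output can be executed directly or piped to sh on the node.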
Downgrade NGINX

To downgrade apigee-nginx, do the following steps for each Edge Router, one
node at a time:

1. Stop the Router:

apigee-service edge-router stop

2. Downgrade apigee-nginx:

sudo yum downgrade apigee-nginx

3. Verify the apigee-nginx version after the downgrade:

yum list installed apigee-nginx

The expected version is apigee-nginx.x86_64 1.16.1-6.el7.

4. Configure the Router:

apigee-service edge-router configure

5. Start the Router:

apigee-service edge-router start
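To confirm the result of the downgrade, you can parse the version field out of the yum list installed output. The helper name nginx_version_from_yum below is ours, not part of the Apigee tooling; the expected value on RHEL 7 is 1.16.1-6.el7.

```shell
#!/bin/bash
# Sketch: pull the version field out of "yum list installed apigee-nginx"
# output and print it, so a post-downgrade check can compare it against the
# expected 1.16.1 series. nginx_version_from_yum is a hypothetical helper.
nginx_version_from_yum() {
  # e.g. "apigee-nginx.x86_64  1.16.1-6.el7  @apigee-thirdparty" -> "1.16.1-6.el7"
  awk '/^apigee-nginx/ {print $2}'
}

# Example usage on a Router node (prints nothing if yum is unavailable):
yum list installed apigee-nginx 2>/dev/null | nginx_version_from_yum
```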

Apigee Support
Apigee Support is here to help. Learn about and access the self-service resources
that will help you diagnose and resolve common problems and optimally use the
Apigee platform.

Engage with Apigee Support


Information to help you engage with Apigee Support and Apigee Community
● Getting started with Apigee Support: Engage with Apigee Support, get
access to the Support Portal, and create and manage Support Cases.
● Best practices for Support Cases: Learn about the best practices and
Support Case templates to be used while creating Support Cases.
● Apigee Community: Ask questions and search existing solutions from
the Apigee Community.

Antipatterns and product limits


Information to help you avoid common mistakes and optimize performance with
Apigee

● Apigee antipatterns: Avoid common pitfalls and maximize the power of
your APIs.
● Apigee product limits: Learn about the product configuration limits for
Apigee features.

Tools
Tools to help you monitor, troubleshoot and resolve runtime errors, latency and
performance issues with Apigee Edge

● Trace tool: Troubleshoot and resolve live issues with API proxies running
on Apigee Edge.
● API Monitoring: Monitor, troubleshoot, and resolve API traffic, runtime,
latency, and performance issues on Apigee Edge.

Troubleshooting
Documentation and videos to help you troubleshoot and resolve problems with
Apigee Edge

● Apigee Edge playbooks, Apigee Adaptor for Envoy playbooks, and Apigee
Edge Microgateway playbooks: Troubleshoot and resolve errors and
problems encountered with Apigee Edge, Apigee Adaptor for Envoy, and
Apigee Edge Microgateway.
● Integrated portal playbooks: Troubleshoot and resolve errors associated
with the integrated portal.
● Policy error playbooks: Troubleshoot and resolve deployment and
runtime errors associated with policies.
● Apigee configuring and troubleshooting videos: Watch short videos with
demos and learn how to configure resources and troubleshoot common
Apigee errors.

Configuration and Service Requests


Documentation to help you configure Apigee Edge resources and properties, and
perform service requests

● How-to guides: Use instructions and best practices to configure
resources and properties.
● Service requests catalog: Learn about the different types of service
requests supported, and how to perform them yourself or seek
assistance from Apigee.

More support resources


Additional resources to help you find more information about Apigee Edge, Support
plans, and pricing

● Documentation: Explore documentation, installation guides, proxy
samples, tools, and tutorials about Apigee.
● Release notes: View the changelog of our latest Apigee release.
● Known issues: View the list of current known issues.
● Pricing: Get information about the pricing for Apigee.
● Apigee status: Check the status of current issues, outages, and the
Apigee release schedule.
● Support specs: Learn about the different support specifications and
plans.
Getting started with Apigee Support
If you are an existing Apigee Edge customer with a paid Apigee support plan, you
can request help from Apigee Edge Support.

The following documents explain how to access and use the Apigee Support Portal,
how to perform common Support procedures and activities such as creating, managing,
and escalating cases, and the best practices to be followed while creating cases.

Note: This topic applies to existing Apigee Edge Subscription customers with a paid Apigee
support plan.

If you are a new Google Cloud Apigee X customer, see Getting started with Apigee X Support.

If you have a free trial Apigee eval account, you can post questions and find answers in the
Apigee community, where Apigee engineers are able to respond.

● Access to Support Portal: Provides information on how you can get
access to the Apigee Support Portal, and how to add users to and
remove users from the Apigee Support Portal.
● Creating and Managing Cases: Provides information on how you can
create, view, and update Support Cases, and describes Case Priorities,
Case Status, and other important fields.
● Escalating Cases: Provides information on how and when you can
escalate cases.
● Best practices for Google Cloud Apigee support cases: Provides
information on the best practices that need to be followed while
creating a Support case, along with templates and sample cases.
● Apigee Support Portal FAQ: Some common questions and answers
related to the Apigee Support Portal.
Access to Support Portal
If you are a Google Cloud Apigee Edge customer and have any one of the following
support plans, you can request help from the Support team on any issues with
Apigee products directly by reporting Cases:

● Developer
● Starter
● Basic
● Enterprise
● Mission Critical

To be able to create and manage support cases, you need to have access to a
support portal. The documents in this section describe how to access the Apigee
Edge Support Portal.

If you do not have access to the Apigee Edge Support Portal, you are probably an
Apigee X customer and should use the Apigee X Support Portal.

Accessing the Support Portal


Note: The Apigee Support Portal login account is different from the Apigee UI login account.

Apigee provides a separate login account to Apigee Support Portal. Generally when
you are onboarded to Apigee, one of your users is granted the Support Portal admin
role. The Support Portal admin has the ability to add additional users to Apigee
Support Portal as needed.

Support Portal user roles


Within the support portal, use a combination of Role and Profile to grant the user
specific permissions.

Use one of the following Roles to define the structure of your company. Roles grant
no additional features in the support portal.

● Customer Executive
● Customer Manager
● Customer User

The Profile greatly influences how you interact with and work inside the support
portal. Descriptions for each of the available Profiles are as follows:

● Overage Customer Portal Manager - Admin


Has the ability to add and remove other users from the Support Portal as
well as view all cases opened by all the users pertaining to the specific user's
company.
● Overage Customer Portal Manager - SU
Can view all cases opened by all the users pertaining to the specific user's
company and create their own cases.
● Overage Customer Portal Manager - User
Can create and view only their own cases.

Adding more users to the support portal


If you are not an Apigee Support Portal admin, then contact one of your company's
Apigee Support Portal admins and ask them to add you.

If you are an Apigee Support Portal admin of your company, then you can add more
users according to your company's needs using the following steps:

1. Visit the Apigee Support Portal Login Page.


2. Enter your Support Portal username and password and click LOGIN.
3. To view and manage your portal users, select the Portal Contacts tab.

4. The user you want to grant Apigee Support Portal access to must exist as a
contact.
5. If the user does not exist, click New Contact. You will be redirected to enter
the user's information.

6. From the New Contact page, enter the required information: First Name, Last
Name, Title, Phone number, and Email address. All other information is
optional.

7. In addition to the mandatory fields, ensure that you choose a value for
Preferred region timezone for support.

8. After filling in all the details, click Save. You have now created the new user's
Contact details.
9. If the user already exists, or after you have created the new user, click their
name to display the Edit Contact window.
10. Click Enable Customer User. This grants the ability for the user to log in
to the Support Portal.
11. Next, select a Role and Profile for the user.
a. Choose the Role that is appropriate for the user.
b. Choose the Profile based on your needs:
● Admin: Select if you want this user to be an admin for your
company
● User: Select if you don't want this user to have admin privileges

Note: Admin privileges enable the user to create contacts and enable other contacts as
support portal users.

12. Click Save. The contact will receive an email from Apigee with their login
details.

Removing users from Support Portal


If you are an Apigee Support Portal admin of your company, then you can remove
users from Apigee Support Portal using the following steps:

1. Visit the Apigee Support Portal Login Page.


2. Enter your Support Portal username and password and click LOGIN.
3. To view and manage your portal users, select the Portal Contacts tab.
4. From the list of portal users, select the name of the user whose
permissions you would like to change (do not click Edit).
5. From the user's contact page, select Manage External User > Disable
Customer User. This revokes the user's ability to log in to the support
portal, preventing further access. You can enable the user again in the
future if needed.

Note: Users cannot be removed from the Support Portal account. They can only be
disabled. Disabling a user completely revokes their ability to log in and see any
information in your company's support portal.

Creating and managing support cases


If you are a Google Cloud Apigee customer and have any one of the following
support plans, you can create and manage cases through Apigee Support Portal as
well as post to the Apigee Community to seek additional community support.

● Developer
● Starter
● Basic
● Enterprise
● Mission Critical

To be able to create or manage Google Cloud Apigee Support Cases, you will need
to have access to Apigee Support Portal.

Users must have one of the following roles to create or modify support cases:

● Overage Customer Portal Manager - Admin
● Overage Customer Portal Manager - SU
● Overage Customer Portal Manager - User

For more information, see Access to Support Portal.

Creating Cases

To create a new support case:

1. Visit the Apigee Support Portal Login Page.


2. Enter your Support Portal username and password and click LOGIN. The
Support Portal Home Page is displayed.

3. Click Open a New Case.


4. Select a Case Record Type from the menu based on the issue for which you
need assistance from the Google Cloud Apigee Support team:
● Support Request: Choose this option if you:
● Are seeing an issue such as an error or unexpected behavior
● Want to know how to configure a resource, or
● Have a question about any feature of any of the Apigee
products
● Service Request: Choose this option if you need the Apigee Support
team's assistance to perform any of these Service requests.
5. Click Continue.
6. Enter or select the following details:
● Priority of Your Case: Select the priority that best matches the impact of
the issue described.
● Product: You may be using more than one Google Cloud Apigee
Product. Choose the right Apigee Product in which you are observing
a problem or you have a question about:
● Apigee Hybrid
● API Monitoring
● API Sense
● Connectors
● Edge Cloud
● Microgateway
● Private Cloud
● Security Reporting
● Component
● Subject: Describe the issue being observed or ask a question in a few
words
7. (Screenshot: sample case details.)

8. Click Continue.
9. Enter or select the following details:
● I need help with: Select the value that best suits the issue that you are
observing.
Note:
For information about the different types of issues and guidance on choosing
the right value that best explains the issue for which you need Support team's
assistance, see I need help with.
This field contains information about the different types of issues that may be
encountered with Apigee products or other requirements that you may have,
such as Service requests and Query. Choose the value that best matches why
you are reaching out to Apigee Support.
● Description: Include the following information:
● Precise information about the problem being observed
● Error message
● Details about your product setup
● Diagnostic logs and information
● Steps to reproduce
10. Note: Refer to Support Case Best Practices for guidelines about the key
information and artifacts that need to be provided, along with examples.
(Screenshot: sample case details.)

11. Click Submit.

After you submit, you will be redirected to the Case page where you can comment
on the case, upload file attachments, or modify case attributes. The Support team
will respond to the case based on its priority and your Support plan entitlements.

Viewing Cases

To view existing support cases:

1. Visit the Apigee Support Portal Login Page.


2. Enter your Support Portal username and password and click LOGIN.
3. Click View Cases.

4. To filter the list of cases, use Search or the options from the View menu.
5. Click the specific Case Number or Subject from the list on the Cases page to
select it.

Note: Depending on your Support Portal Account profile, you will be able to view and manage
either only the cases created by you or all of your company's cases in the Cases page. You can
find out the current status and updates from the Apigee Support team (if any) on the active
cases.

Updating Cases

To update a Support Case:

1. Click the specific Case Number or Subject from the list on the Cases page in
Support Portal to select it.
2. If the case is open, you can add comments, upload file attachments, or edit
case attributes. Note: If the case was closed less than 30 days ago, you can reopen it.
Otherwise, to reactivate a closed case, you need to create a new support case.
3. Once the specific case page is open, click Add Comment to add your
comments and respond to any questions or information requested by the
Apigee Support team.
4. Click Save after adding the comments.

Attach a file

Sometimes the Apigee Support team may ask you to share artifacts such as
screenshots, tcpdumps, and component diagnostic logs which contain more
information about the issue. You can do this by attaching a file or a Google doc.

To attach a file:

1. Go to the bottom of the specific Case page and click Attach File.
2. Click Choose File to select the file you would like to attach.
3. Click Attach File for each file added.
4. Repeat the above steps to attach multiple files.
5. Once you have added all the files, click Done.

Note:
● Skipping these steps will result in your files not being correctly attached.
● The maximum size of each file that you can attach is 25 MB. If you want to
attach files larger than 25 MB, then the Support team can provide you with a
Google Drive link where you can add more files. If this is needed, you will need
to request this in your communications with the team in the Support case.
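Because each attachment is capped at 25 MB, a quick local size check can save a failed upload. The sketch below is illustrative only; the helper name under_25mb is ours, not part of the Support Portal.

```shell
#!/bin/bash
# Sketch: return success if a file is within the 25 MB attachment limit.
# under_25mb is a hypothetical helper name; the portal enforces the real limit.
under_25mb() {
  local bytes
  bytes=$(wc -c < "$1") || return 2
  [ "$bytes" -le $((25 * 1024 * 1024)) ]
}

# Example usage with a hypothetical archive name:
# under_25mb diagnostics.tar.gz && echo "OK to attach" \
#   || echo "too large; ask Support for a Drive link"
```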

Reference
Case Priority

When creating a support case, it's important to assign it the correct priority. As per
the Apigee Support Specification, the case priority determines the initial response
time for the case.

The following table defines support case priorities.

Note: Response times vary by issue priority and which Support plan you have. For response
times, refer to the Apigee Support Specification.

● P1 (Critical):
● Business impact is critical (such as revenue loss or a potential data
integrity issue) with no workaround available.
● Your API proxies are failing with errors or having latency issues in a
production organization/environment.
● You are unable to access or manage your API proxies.
● Your production runtime infrastructure is non-responsive or is crashing.
● P2 (Major):
● Visible end-user impact with a temporary workaround available.
● Services are available in a restricted fashion.
● Your API proxies are failing with errors or having latency issues in a
non-production organization/environment.
● P3 (Minor):
● The issue has no user-visible impact.
● You have questions about product features or usage that are not
currently covered in our public docs.
● P4 (No Impact):
● Enhancement request.

I need help with

When creating a support case, select a value in I need help with that best describes
the issue that is being observed by you, or that best describes the assistance
required from the Apigee Support team. The following table describes the different
types of issues and their meanings:

● Analytics Problem: You are having issues while viewing or accessing
Apigee Analytics dashboards. For example: The Analytics dashboard is
not displaying any data, or you are getting an error while viewing the
dashboard.
● An error I'm seeing: An error encountered while using the Apigee
product. For example: 5XX or 4XX errors observed for API requests.
● Apigee Support Portal Access: This is created using the Request a
Support Portal Account page.
● Data Transfer Request: You need data associated with your org from the
Apigee datastore (Analytics/Cassandra).
● Deployment Error: You are having issues while deploying your API proxy
and/or received an error message about partial deployment. For
example: Error in deployment for environment prod. The revision is
deployed, but traffic cannot flow. Received an unknown event with
description SYNC.
● Designing and Building my API Proxies: You need help in designing API
proxies or would like to know how to achieve a specific use case with
API proxies. For example: Need help in retrieving multiple values from a
comma-separated header list.
● DevPortal Problem: You are having issues with the integrated developer
portal or the Drupal-based developer portal.
● Feature Enhancement Request: The product is working as expected but
does not fully support your use case and could be better supported with
a new feature.
● Installation / Migration Problem: You have encountered an issue during
installation or migration of an Apigee product.
● Latency Problem: Your API proxies are experiencing an unexpected
increase in latency that is not currently explained by available analytics.
● Management / UI Problem: You are experiencing an issue that involves
the Management API, Management Server, or the UI.
● Monetization Problem: You have encountered an issue with the
Monetization feature.
● Query: You have a question about a feature or policy which is not
covered in the documentation or Apigee community.
● Service Request: A service request is a situation where you may need
assistance from the Support team to perform certain tasks. You can
read more in our docs about Apigee Service Requests.

Case status
Once a Support Case is created, you can view the status of the case. The following
table describes the various statuses and their meaning:

● New: New case that is not yet assigned to a Support Engineer.
● Assigned: The case has been assigned to a Support Engineer. You'll see
a response from the Support Engineer within the response time
mentioned in the priorities table.
● Auto-Close: The case has been closed due to inactivity from you while in
Awaiting Customer Response. If you are still seeing the issue described,
you can either open this issue again or create a new case with any new
developments.
● Awaiting Customer Response: We require further information from you
before we can proceed.
● Awaiting Customer Validation: We believe we have resolved this case
and are seeking validation from you to ascertain that everything looks
as expected at your end.
● In Progress - Engineering: The issue reported in the case is being
investigated by Product Engineers. Turnaround times vary, depending on
the issue complexity and the product component in question.
● In Progress - Support: The issue reported in the case is currently being
worked on by a Support Engineer.
● Resolved: This issue has been resolved. If you have similar issues,
please consider opening a new issue and reference this existing case if
additional context is required.
● Resolved as Duplicate: Multiple cases were reported for the same issue
at or around the same time. Hence, this case has been resolved to
prevent any duplication of work. The Support team is working on
another case reported by you for the same issue.

Escalating cases
Escalate an existing case
If you have a support case that has the appropriate priority, but is not meeting your
expectations, then you can escalate the case. Escalation is a process to get higher
attention for an ongoing Google Cloud Apigee Support Case.

You might escalate a support case under the following circumstances:

● The business priority has changed and requires immediate attention on a
case
● The issue blocks developmental activities
● Go-live dates are at risk
● The issue is stuck without progress after exchanging several messages

How to escalate a Support Case


Use the following steps to escalate a Support Case:

1. Visit the Apigee Support Portal Login Page.


2. Enter your Support Portal username and password and click LOGIN.
3. Click View Cases.
4. To filter the list of cases, use Search or the options from the View menu.
5. Click the specific Case Number or Subject from the list on the Cases page,
that you would like to escalate.
6. Once you're inside the specific case, click Escalate available at both the top
and bottom of the case details.

7. A new form is displayed.
8. Ensure that you enter the following details:
● Escalated by Name
● Reason for Escalation
9. Click Save. This will escalate the case.

When a Support case is escalated, the Apigee Support team is immediately notified
and will update the case accordingly. You can find out more in our escalation
process document, titled Support Escalation Process, on the Apigee Support Portal.

Note: A Support case can be escalated only after you have not received a response within the
initial response time based on the priority.
Best practices for Google Cloud Apigee
support cases
Providing detailed and required information in the support case makes it easier for
the Google Cloud Apigee Support team to respond to you quickly and efficiently.
When your support case is missing critical details, we need to ask for more
information, which may involve going back and forth multiple times. This takes
more time and can lead to delays in resolution of issues. This Best Practices Guide
lets you know the information we need to resolve your technical support case
faster.

Describing the issue

An issue should contain information explaining the details about what happened
versus what was expected to happen, as well as when and how it happened. A good
Apigee support case should contain the following key information for each of the
Apigee products:

Note: For more information about the different Apigee products, see Compare Apigee products.

● Product: The specific Apigee product in which the problem is being
observed, including version information where applicable.
● For Apigee Edge for Private Cloud: Version
● Problem Details: A clear and detailed problem description which
outlines the issue, including the complete error message, if any.
● For Apigee Edge for Public Cloud: Error message, Trace tool
output, Steps to reproduce the problem, Complete API
request/command
● For Apigee Edge for Private Cloud: Error message, Trace tool
output, Steps to reproduce the problem, Complete API
request/command, Component diagnostic logs
● Time: The specific timestamp when the issue started and how long it
lasted.
● For both products: Date, time, and timezone of problem
occurrence; Duration of the problem
● Setup: Detailed information about where the problem is being observed.
● For Apigee Edge for Public Cloud: Org name, Env name, API proxy
name, Revision
● For Apigee Edge for Private Cloud: Network topology, Failing Edge
component
The following sections describe these concepts in greater detail.

Product

There are different Apigee products, Apigee Edge on Public Cloud and Apigee Edge
on Private Cloud, so we need specific information about which particular product is
having the issue.

The following table provides some examples showing complete information in the
DOs column, and incomplete information in the DON'Ts column:

● DO: "Deployment of API proxy OAuth2 failed on our Public Cloud org ..."
DON'T: "Deployment of API proxy failed" (We need to know the Apigee
product in which you are seeing the issue.)
● DO: "Installation failed with the following error on our Edge Private Cloud
version 4.50.00 ..."
DON'T: "Installation failed on our Private Cloud setup." (Version
information is missing.)

Problem Details

Provide precise information about the issue being observed, including the error
message (if any) and the expected and actual behavior observed.

The following table provides some examples showing complete information in the
DOs column, and incomplete information in the DON'Ts column:

● DO: "New edgemicro proxy edgemicro_auth is failing with the following
error:
{"error":"missing_authorization","error_description":"Missing
Authorization header"}"
DON'T: "New edgemicro proxy created today not working" (The proxy
name is unknown. It is not clear whether the proxy is returning an error
or any unexpected response.)
● DO: "Our clients are getting 500 errors with the following error message
while making requests to the API proxy:
{"fault":{"faultstring":"Execution of JSReadResponse failed with error:
Javascript runtime error: \"TypeError: Cannot read property \"content\"
from undefined. (JSReadResponse.js:23)","detail":
{"errorcode":"steps.javascript.ScriptExecutionFailed"}}}"
DON'T: "Our clients are getting 500 Errors while making requests to the
API proxy." (Just conveying 500 errors doesn't provide adequate
information for us to investigate the issue. We need to know the actual
error message and error code that is being observed.)
Time

Time is a very critical piece of information. It is important for the Support Engineer
to know when you first noticed this issue, how long it lasted, and if the issue is still
going on.

The Support Engineer resolving the issue may not be in your timezone, so relative
statements about time make the problem harder to diagnose. Hence, it is
recommended to use the ISO 8601 format for the date and time stamp to provide
the exact time information on when the issue was observed.
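For example, the standard date utility can produce an ISO 8601 UTC timestamp to paste into a case:

```shell
#!/bin/bash
# Print the current time in ISO 8601 form, in UTC, e.g. 2020-11-06T17:30:00Z.
# Using UTC ("Z") removes any ambiguity about the reporter's timezone.
date -u +"%Y-%m-%dT%H:%M:%SZ"
```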

The following table provides some examples showing accurate time and duration
for which the problem occurred in the DOs column, and ambiguous or unclear
information on when the problem occurred in the DON'Ts column:

● DO: "A huge number of 503s were observed yesterday between
2020-11-06 17:30 PDT and 2020-11-06 17:35 PDT ..."
DON'T: "A huge number of 503s were observed yesterday at 5:30pm for
5 mins." (We are forced to use the implied date, and it is also unclear in
which timezone this issue was observed.)
● DO: "High latencies were observed on the following API proxies from
2020-11-09 15:30 IST to 2020-11-09 18:10 IST ..."
DON'T: "High latencies were observed on some API proxies last week."
(It is unclear on which day and for what duration this issue was
observed in the last week.)

Setup

We need to know the details about where exactly you are seeing the issue.
Depending on the product you are using, we need the following information:

● If you are using Apigee Cloud, you may have more than one organization, so we need to know the specific org and other details where you are observing the issue:
  ● Organization and Environment names
  ● API proxy name and revision numbers (for API request failures)
● If you are using Private Cloud, you may be using one of the many supported installation topologies, so we need to know what topology you are using, including details such as the number of data centers and nodes.

The following table provides some examples showing complete information in the
DOs column, and incomplete information in the DON'Ts column:

DO: 401 Errors have increased on Edge Public Cloud since 2020-11-06 09:30 CST.

Edge setup details:

Details of the failing API are as follows:
Org names: myorg
Env names: test
API proxy names: myproxy
Revision numbers: 3

Error:

{"fault":{"faultstring":"Failed to resolve API Key variable request.header.X-APP-API_KEY","detail":{"errorcode":"steps.oauth.v2.FailedToResolveAPIKey"}}}

DON'T: 401 Errors have increased.
(It does not give any information on the product being used, since when the issue is being observed, or any setup details.)

DO: Unable to start Message Processor on Edge Private Cloud version 4.19.06, after adding additional gateway nodes.

Diagnostic logs: Attached the Message Processor logs.

Network topology: Attached the file network-topology.png containing the additional nodes.

DON'T: Unable to start Message Processor on Edge Private Cloud version 4.19.06, after adding additional gateway nodes.
(The Message Processor logs and the network topology are missing.)

Useful artifacts

Providing us with artifacts related to the issue will speed up the resolution, as it
helps us understand the exact behavior you are observing and get more insights
into it.

Caution: Do not share any confidential or sensitive information, such as usernames, passwords, API keys, OAuth tokens, or private keys associated with your TLS certificates, in the case description, updates, or file attachments.

This section describes some useful artifacts that are helpful for all Apigee
products:

Common artifacts for all Apigee products

The following artifacts are useful for all Apigee products: Apigee Edge on Public
Cloud and Apigee Edge on Private Cloud:

Trace tool output: The Trace tool output contains detailed information about the API requests flowing through Apigee products. This is useful for any runtime errors such as 4XX, 5XX, and latency issues.

Screenshots: Screenshots help relay the context of the actual behaviour or error being observed. This is helpful for any errors or issues observed, such as in the UI or Analytics.

HAR (HTTP Archive): A HAR file is captured by HTTP session tools for debugging any UI-related issues. It can be captured using browsers such as Chrome, Firefox, or Internet Explorer.

tcpdumps: The tcpdump tool captures TCP/IP packets transferred or received over the network. This is useful for any network-related issues such as TLS handshake failures, 502 errors, and latency issues.

Additional artifacts for Apigee Edge for Private Cloud

For Apigee Edge for Private Cloud, we may need some additional artifacts that will
facilitate faster diagnosis of issues.

Network topology: The Edge installation topology diagram describing your Private Cloud setup, including all the data centers, nodes, and components installed in each node.

Edge component diagnostic logs: The diagnostic logs related to the specific Apigee Edge component, such as Message Processor, Router, or Cassandra.

Installation configuration file: The silent configuration file that is used when installing or upgrading Apigee Edge. This file is useful to validate if all the settings are correct in cases where installation or migration issues are encountered.

Heap dumps: Heap dumps are a snapshot of the memory of a Java process. This is helpful if high memory utilization or OutOfMemory errors are seen on certain Edge components.

Thread dumps: A thread dump is a snapshot of all the threads of a running Java process. This is helpful if high CPU or load is observed on certain Edge components.
Case templates and sample cases

This section provides case templates and sample cases for different products
based on the best practices described in this document:

Apigee Edge on Public Cloud



This section provides a sample template for Apigee Edge on Public Cloud.

Problem:

<Provide detailed description of the problem or the behaviour being observed at your end. Include the product name and version where applicable.>

Error message:

<Include the complete error message observed (if any)>

Problem start time (ISO 8601 format):

Problem end time (ISO 8601 format):

Apigee setup details:


Org names:
Env names:
API proxy names:
Revision numbers:

Steps to reproduce:

<Provide steps to reproduce the issue where possible>

Diagnostic information:

<List of files attached>


Apigee Edge for Private Cloud

This section provides a sample template for Apigee Edge for Private Cloud.

Problem:

<Provide detailed description of the problem or the behaviour being observed at your end. Include the product name and version where applicable.>

Error message:

<Include the complete error message observed (if any)>

Problem start time (ISO 8601 format):

Problem end time (ISO 8601 format):

Edge Private Cloud setup details:

<Attach the network topology describing the setup of your Private Cloud including
data centers and nodes>

Steps to reproduce:

<Provide steps to reproduce the issue where possible>

Diagnostic information

<List of files attached>


Apigee Support Portal FAQ
Getting access
How can I get access to Google Cloud Apigee Support Portal?

You can sign in at the Apigee Support Portal login page. If you have forgotten your password, click Forgot your password? on the login page and follow the instructions to reset it.

Roles and permissions


What are roles?

Roles define the structure of your company and can be one of the following:

● Customer Executive
● Customer Manager
● Customer User

Read more about the Support portal roles in Support Portal user roles.

What are the different types of profiles that are available in Apigee Support Portal?
What permissions does each of these profiles have?

There are three profiles in Apigee Support Portal:

● Overage Customer Portal Manager - Admin
Has the ability to add and remove other users from the Support Portal as well as view all cases opened by all the users pertaining to the specific user's company.
● Overage Customer Portal Manager - SU
Can view all cases opened by all the users pertaining to the specific user's
company and create their own cases.
● Overage Customer Portal Manager - User
Can create and view only their own cases.

Read more about the Support portal roles in Support Portal user roles.

Why am I seeing only the Cases created by me and not the cases created by my
team?

You likely have the role Overage Customer Portal Manager - User which allows you
to see only your cases. Your portal administrator can grant you the Overage
Customer Portal Manager - SU role which allows you to see cases opened by other
people in your company.
How can I see what role/permissions I have?

The Support portal administrator of your company will be able to tell you what role
you have been assigned. You can also infer this from the permissions available to
you based on the role that you have. Available permissions and their respective
abilities are described in Support Portal user roles.

Adding and removing users


Can I add new Support Portal users?

Yes. You can find instructions on how to add new Support Portal users in Adding
more users to the support portal.

One of the employees has left the company, can we remove that user from Apigee
Support Portal?

Yes. You can find instructions to remove the user in Remove users from Support
Portal.

I am a Partner and would be helping in creating and managing cases for an Apigee
customer. Can I get access to Apigee Support Portal?

Yes. The Support Portal admin from the customer side can add the relevant Partner
team members to Apigee Support Portal.

Is there any limit on the number of Support Portal users that can be added for a
particular customer account? If the customer requests an increase in the number,
do you charge them?

We do not impose a limit on the number of support users, though we do recommend that you keep the number of contacts to those that are knowledgeable of and work on the Apigee product. There is no additional charge for additional users.

Once the customer nominates a partner to raise cases on their behalf, is there a
limit to the number of partner people who could raise cases? Again, if they want to
increase the number of partner people to have access to the support portal, is there
a cost?

We do not impose a limit on the number of support users. We recommend that you
only add users that are knowledgeable of and work on the Apigee product in your
company. Additionally, it is best to add users with knowledge of your specific
implementation. There is no additional charge for additional users.

Getting support
I have a free trial account, can I get access to Apigee Support Portal and raise
Support Cases?
Unfortunately, no. However, if you have questions about or issues with Apigee products, you can find answers and post your own questions in the Apigee community, where Apigee Engineers are able to respond.

Introduction to antipatterns
This section is about common antipatterns that are observed as part of the API
proxies deployed on the Apigee Edge platform.

The good news is that each of these antipatterns can be clearly identified and
rectified with appropriate good practices. Consequently, the APIs deployed on Edge
would serve their intended purpose and be more performant.

Summary of antipatterns
The following table lists the antipatterns in this section:

Category Antipatterns
Policy antipatterns ● Use waitForComplete() in JavaScript
code
● Set a long expiration time for OAuth
tokens
● Use greedy quantifiers in the
RegularExpressionProtection policy
● Cache error responses
● Store data greater than 512kb size in
cache
● Log data to third party servers using the
JavaScript policy
● Invoke MessageLogging multiple times
in an API proxy
● Configure non-distributed quota
● Reuse a Quota policy
● Use the RaiseFault policy under
inappropriate conditions
● Access multi-value HTTP headers
incorrectly in an API Proxy
● Use Service Callout to invoke backend
service in no target proxy
● Use of any OAuth V1 policies, which are
deprecated. See Deprecations.

Performance antipatterns ● Leave unused NodeJS API Proxies deployed
Generic antipatterns ● Invoke Management API calls from an
API Proxy
● Invoke a proxy within a proxy using
custom code or as a target
● Manage Edge resources without using
source control management
● Define multiple virtual hosts with same
host alias and port number
● Load Balance with a single target server
with MaxFailures set to a non-zero value
● Access the request/response payload
when streaming is enabled
● Define multiple ProxyEndpoints in an API
Proxy

Backend antipatterns ● Allow a slow backend
● Disable HTTP persistent (reusable keep-alive) connections

Edge for Private Cloud antipatterns ● Add custom information to Apigee-owned schema in Postgres database

Download antipatterns eBook

In addition to the links above, you can also download the antipatterns in eBook
format:

● The Book of Apigee Edge Antipatterns v2.0 (PDF)

What is an antipattern?
Wikipedia defines a software antipattern as:

In software engineering, an anti-pattern is a pattern that may be commonly used but
is ineffective and/or counterproductive in practice.

Simply put, an antipattern is something that the software allows its "user" to do, but that may have adverse functional, serviceable, or performance impact.

For example, consider the omnipotent-sounding "God Class/Object".

In object-oriented parlance, a god class is a class that controls too many classes for a given application.

For example, consider an application with the following reference tree:

Figure 1: God class

As the image illustrates, the god class uses and references too many classes.

The framework on which the application was developed does not prevent the
creation of such a class, but it has many disadvantages, the primary ones being:

● Hard to maintain
● Single point of failure when the application runs

Consequently, creation of such a class should be avoided. It is an antipattern.

Target audience
This section best serves Apigee Edge developers as they progress through the
lifecycle of designing and developing API proxies for their services. It should
ideally be used as a reference guide during the API development lifecycle and
during troubleshooting.

Antipattern: Use waitForComplete() in JavaScript code
The JavaScript policy in Apigee Edge allows you to add custom code that executes
within the context of an API proxy flow. For example, the custom code in the
JavaScript policy can be used to:

● Get and set flow variables
● Execute custom logic and perform fault handling
● Extract data from requests or responses
● Dynamically edit the backend target URL
● Dynamically add or remove headers from a request or a response
● Parse a JSON response

HTTP Client

A powerful feature of the Javascript policy is the HTTP client. The HTTP client (or
the httpClient object) can be used to make one or multiple calls to backend or
external services. The HTTP client is particularly useful when there's a need to
make calls to multiple external services and mash up the responses in a single API.

Sample JavaScript Code making a call to backend with httpClient object

var headers = {'X-SOME-HEADER' : 'some value' };
var myRequest = new Request("http://www.example.com","GET",headers);
var exchange = httpClient.send(myRequest);

The httpClient object exposes two methods, get and send (send is used in the above sample code), to make HTTP requests. Both methods are asynchronous and return an exchange object before the actual HTTP request is completed.

The HTTP requests may take a few seconds to a few minutes. After an HTTP
request is made, it's important to know when it is completed, so that the response
from the request can be processed. One of the most common ways to determine
when the HTTP request is complete is by invoking the exchange object's
waitForComplete() method.

waitForComplete()

The waitForComplete() method pauses the thread until the HTTP request
completes and a response (success/failure) is returned. Then, the response from a
backend or external service can be processed.

Sample JavaScript code with waitForComplete()

var headers = {'X-SOME-HEADER' : 'some value' };
var myRequest = new Request("http://www.example.com","GET",headers);
var exchange = httpClient.send(myRequest);
// Wait for the asynchronous GET request to finish
exchange.waitForComplete();
// Get and Process the response
if (exchange.isSuccess()) {
var responseObj = exchange.getResponse().content.asJSON;
return responseObj.access_token;
} else if (exchange.isError()) {
throw new Error(exchange.getError());
}

Antipattern
Using waitForComplete() after sending an HTTP request in JavaScript code will
have performance implications.

Consider the following JavaScript code that calls waitForComplete() after sending an HTTP request.

Code for sample.js

// Send the HTTP request
var exchangeObj = httpClient.get("http://example.com");
// Wait until the request is completed
exchangeObj.waitForComplete();
// Check if the request was successful
if (exchangeObj.isSuccess()) {
  response = exchangeObj.getResponse();
  context.setVariable('example.status', response.status);
} else {
  error = exchangeObj.getError();
  context.setVariable('example.error', 'Woops: ' + error);
}

In this example:

1. The JavaScript code sends an HTTP request to a backend API.
2. It then calls waitForComplete() to pause execution until the request completes. The waitForComplete() API causes the thread that is executing the JavaScript code to be blocked until the backend completes processing the request and responds back.

There's an upper limit on the number of threads (30%) that can concurrently
execute JavaScript code on a Message Processor at any time. After that limit is
reached, there will not be any threads available to execute the JavaScript code. So,
if there are too many concurrent requests executing the waitForComplete() API
in the JavaScript code, then subsequent requests will fail with a 500 Internal Server
Error and 'Timed out' error message even before the JavaScript policy times out.

In general, this scenario may occur if the backend takes a long time to process
requests or there is high traffic.

Impact
1. The API requests will fail with 500 Internal Server Error and with error
message 'Timed out' when the number of concurrent requests executing
waitForComplete() in the JavaScript code exceeds the predefined limit.
2. Diagnosing the cause of the issue can be tricky as the JavaScript fails with
'Timed out' error even though the time limit for the specific JavaScript policy
has not elapsed.

Best Practice
Use callbacks in the HTTP client to streamline the callout code and improve
performance and avoid using waitForComplete() in JavaScript code. This
method ensures that the thread executing JavaScript is not blocked until the HTTP
request is completed.

When a callback is used, the thread sends the HTTP requests in the JavaScript
code and returns back to the pool. Because the thread is no longer blocked, it is
available to handle other requests. After the HTTP request is completed and the
callback is ready to be executed, a task will be created and added to the task
queue. One of the threads from the pool will execute the callback based on the
priority of the task.

Sample JavaScript code using Callbacks in httpClient

function onComplete(response, error) {
  // Check if the HTTP request was successful
  if (response) {
    context.setVariable('example.status', response.status);
  } else {
    context.setVariable('example.error', 'Woops: ' + error);
  }
}
// Specify the callback function as an argument
httpClient.get("http://example.com", onComplete);

Further reading
● JavaScript policy
● JavaScript callback support in httpClient for improved callouts
● Making JavaScript callouts with httpClient

Antipattern: Set a long expiration time for OAuth tokens
Apigee Edge provides the OAuth 2.0 framework to secure APIs. OAuth2 is one of
the most popular open-standard, token-based authentication and authorization
schemes. It enables client applications to access APIs on behalf of users without
requiring users to divulge their username and password.

Apigee Edge allows developers to generate access and/or refresh tokens by implementing any one of the four OAuth2 grant types - client credentials, password, implicit, and authorization code - using the OAuthv2 policy. Client applications use access tokens to consume secure APIs. Each access token has its own expiry time, which can be set in the OAuthv2 policy.

Refresh tokens are optionally issued along with access tokens with some of the
grant types. Refresh tokens are used to obtain new, valid access tokens after the
original access token has expired or been revoked. The expiry time for refresh
tokens can also be set in the OAuthv2 policy.

Antipattern
Setting a long expiration time for an access token and/or refresh token in the
OAuthv2 policy leads to accumulation of OAuth tokens and increased disk space
use on Cassandra nodes.

The following example OAuthV2 policy shows a long expiration time of 200 days
for refresh tokens:

<OAuthV2 name="GenerateAccessToken">
<Operation>GenerateAccessToken</Operation>
<ExpiresIn>1800000</ExpiresIn> <!-- 30 minutes -->
<RefreshTokenExpiresIn>17280000000</RefreshTokenExpiresIn> <!-- 200 days
-->
<SupportedGrantTypes>
<GrantType>password</GrantType>
</SupportedGrantTypes>
<GenerateResponse enabled="true"/>
</OAuthV2>
In the above example:

● The access token is set with a reasonably lower expiration time of 30 mins.
● The refresh token is set with a very long expiration time of 200 days.
● If the traffic to this API is 10 requests/second, then it can generate as many
as 864,000 tokens in a day.
● Since the refresh tokens expire only after 200 days, they persist in the data
store (Cassandra) for a long time leading to continuous accumulation.
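The token accumulation described above can be sketched with simple arithmetic (assuming one token pair is minted per request):

```javascript
// 10 requests/second, each request minting one token pair
var requestsPerSecond = 10;
var secondsPerDay = 60 * 60 * 24; // 86,400
var tokensPerDay = requestsPerSecond * secondsPerDay;
console.log(tokensPerDay); // 864000

// With a 200-day refresh token expiration, tokens pile up before purging
console.log(tokensPerDay * 200); // 172800000
```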

Impact
● Leads to significant growth of disk space usage on the data store
(Cassandra).
● For Private Cloud users, this could increase storage costs, or in the worst
case, the disk could become full and lead to runtime errors or an outage.

Best Practice
Use an appropriate lower expiration time for OAuth access and refresh tokens
depending on your specific security requirements, so that they get purged quickly
and thereby avoid accumulation.

Set the expiration time for refresh tokens in such a way that they are valid for a little longer than the access tokens. For example, if you set 30 minutes for the access token expiration, then set 60 minutes for the refresh token expiration.
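For instance, the 30/60-minute example above corresponds to an OAuthV2 configuration like the following (a sketch based on the policy sample shown earlier; values are in milliseconds):

```xml
<OAuthV2 name="GenerateAccessToken">
  <Operation>GenerateAccessToken</Operation>
  <ExpiresIn>1800000</ExpiresIn> <!-- 30 minutes -->
  <RefreshTokenExpiresIn>3600000</RefreshTokenExpiresIn> <!-- 60 minutes -->
  <SupportedGrantTypes>
    <GrantType>password</GrantType>
  </SupportedGrantTypes>
  <GenerateResponse enabled="true"/>
</OAuthV2>
```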

This ensures that:

● There's ample time to use a refresh token to generate new access and
refresh tokens after the access token is expired.
● The refresh tokens will expire a little while later and can get purged in a
timely manner to avoid accumulation.

Further reading
● OAuth 2.0 framework
● Secure APIs with OAuth
● Edge Product Limits

Antipattern: Set no expiration time for OAuth tokens
Apigee Edge provides the OAuth 2.0 framework to secure APIs. OAuth2 is one of
the most popular open-standard, token-based authentication and authorization
schemes. It enables client applications to access APIs on behalf of users without
requiring users to divulge their username and password.

Apigee Edge allows developers to generate access and/or refresh tokens by implementing any one of the four OAuth2 grant types - client credentials, password, implicit, and authorization code - using the OAuthv2 policy. Client applications use access tokens to consume secure APIs. Each access token has its own expiration time, which can be set in the OAuthv2 policy.

Refresh tokens are optionally issued along with access tokens with some of the
grant types. Refresh tokens are used to obtain new, valid access tokens after the
original access token has expired or been revoked. The expiration time for refresh
tokens can also be set in the OAuthv2 policy.

This antipattern is related to the antipattern of setting a long expiration time for
OAuth tokens.

Antipattern
Setting no expiration time for a refresh token in the OAuthv2 policy leads to
accumulation of OAuth tokens and increased disk space use on Cassandra nodes.

The following example OAuthV2 policy shows a missing configuration for <RefreshTokenExpiresIn>:

<OAuthV2 name="GenerateAccessToken">
<Operation>GenerateAccessToken</Operation>
<ExpiresIn>1800000</ExpiresIn> <!-- 30 minutes -->
<!--<RefreshTokenExpiresIn> is missing -->
<SupportedGrantTypes>
<GrantType>password</GrantType>
</SupportedGrantTypes>
<GenerateResponse enabled="true"/>
</OAuthV2>

In the above example:

● The access token is set with a reasonably low expiration time of 30 minutes.
● The refresh token's expiration is not set.
● Refresh token persists in data store (Cassandra) forever, causing data
accumulation.
● A refresh token minted without an expiration can be used indefinitely to
generate access tokens.
● If the traffic to this API is 10 requests per second, it can generate as many as
864,000 tokens in a day.

Impact
● If the refresh token is created without an expiration, there are two major consequences:
  ● The refresh token can be used at any time in the future, possibly for years, to obtain an access token. This can have security implications.
  ● The row in Cassandra containing the refresh token will never be deleted. This will cause data to accumulate in Cassandra.
● If you don't use the refresh token to obtain a fresh access token, but instead
create a fresh refresh token and access token, the older refresh token will
remain in Cassandra. As a result, refresh tokens will continue accumulating
in Cassandra, further adding to bloat, increased disk usage, and heavier
compactions, and will eventually cause read/write latencies in Cassandra.

Best Practice
Use an appropriately low expiration time for both refresh and access tokens. See
best practice for setting expiration times of refresh and access tokens. Make sure
to specify an expiration configuration for both access and refresh token in policy.
Refer to the OauthV2 policy documentation for further details about policy
configuration.

Best practices specifically for Edge for Private Cloud customers

This section describes best practices specifically for Edge for Private Cloud customers.

Specify a default refresh token expiration

By default, if a refresh token expiration is not specified in a policy configuration, Edge creates a refresh token without any expiration. You can override this behavior by the following procedure:

1. On a Message Processor node, edit or create the configuration override file $APIGEE_ROOT/customer/application/message-processor.properties. Make sure this file is readable by the apigee user.
2. Add the following line to the file:
conf_keymanagement_oauth_refresh_token_expiry_time_in_millis=3600000
This sets the default refresh token expiration, if none is specified in a policy, to 1 hour. You can change this default value based on your business needs.
3. Restart the Message Processor service:
apigee-service edge-message-processor restart
4. Repeat the above steps on all Message Processor nodes, one by one.
Note: The above procedure will only set a refresh token expiration if none is specified in a policy configuration. Any refresh token expiration set in a policy configuration (even if it is a large value) takes precedence over this default. So when setting the expiration of both access and refresh tokens in policy, ensure that a lower value is set as allowed by your use case.

Note: This change does not retroactively change the expiration of tokens created before the change was made. For example, refresh tokens already present in your database with large or no expiration will continue to retain their expiration values. If you have already accumulated a large number of tokens in your database that are not being purged, send a message to Apigee support about the purge tool. This tool is currently in alpha stage, and you will be given specific guidance to run the tool.

Best practices in Cassandra


Try to upgrade to the latest version of Apigee available publicly. Apigee continues to release fixes and enhancements which improve and optimize management of tokens within Apigee. In Apigee, access and refresh tokens are stored in Cassandra within the "kms" keyspace. You should ensure that the compaction strategy of this keyspace is set to LeveledCompactionStrategy. You should check that the following indices are not present:

● kms.oauth_20_access_tokens.oauth_20_access_tokens_organization_name_idx
● kms.oauth_20_access_tokens.oauth_20_access_tokens_status_idx

You can also reduce gc_grace_seconds on the table kms.oauth_20_access_tokens from the default 10 days to a lower value (such as 3 days) to ensure that tombstones generated by deleted tokens are purged from the data store faster.

Further reading
● OAuth 2.0 framework
● Secure APIs with OAuth
● Edge Product Limits

Antipattern: Use greedy quantifiers in the RegularExpressionProtection policy
The RegularExpressionProtection policy defines regular expressions that are evaluated at runtime on input parameters or flow variables. You typically use this policy to protect against content threats like SQL or JavaScript injection, or to check against malformed request parameters like email addresses or URLs.

The regular expressions can be defined for request paths, query parameters, form parameters, headers, XML elements (in an XML payload defined using XPath), and JSON object attributes (in a JSON payload defined using JSONPath).

The following example RegularExpressionProtection policy protects the backend from SQL injection attacks:

<!-- /antipatterns/examples/greedy-1.xml -->


<RegularExpressionProtection async="false" continueOnError="false"
enabled="true"
name="RegexProtection">
<DisplayName>RegexProtection</DisplayName>
<Properties/>
<Source>request</Source>
<IgnoreUnresolvedVariables>false</IgnoreUnresolvedVariables>
<QueryParam name="query">
<Pattern>[\s]*(?i)((delete)|(exec)|(drop\s*table)|
(insert)|(shutdown)|(update)|(\bor\b))</Pattern>
</QueryParam>
</RegularExpressionProtection>

Antipattern
The default quantifiers (*, +, and ?) are greedy in nature: they start to match with
the longest possible sequence. When no match is found, they backtrack gradually
to try to match the pattern. If the resultant string matching the pattern is very short,
then using greedy quantifiers can take more time than necessary. This is especially
true if the payload is large (in the tens or hundreds of KBs).

The following example expression uses multiple instances of .*, which are greedy
operators:

<Pattern>.*Exception in thread.*</Pattern>

In this example, the RegularExpressionProtection policy first tries to match the longest possible sequence—the entire string. If no match is found, the policy then backtracks gradually. If the matching string is close to the start or middle of the payload, then using a greedy quantifier like .* can take a lot more time and processing power than reluctant quantifiers like .*? or (less commonly) possessive quantifiers like .*+.

Reluctant quantifiers (like X*?, X+?, X??) start by trying to match a single
character from the beginning of the payload and gradually add characters.
Possessive quantifiers (like X?+, X*+, X++) try to match the entire payload only
once.

Given the following sample text for the above pattern:

Hello this is a sample text with Exception in thread with lot of text after the Exception text.

Using the greedy .* is non-performant in this case. The pattern .*Exception in thread.* takes 141 steps to match. If you used the pattern .*?Exception in thread.* (which uses a reluctant quantifier) instead, the result would be only 55 steps.
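The difference between greedy and reluctant matching can be observed directly. A small JavaScript sketch (greedy and reluctant quantifier semantics are common to Java and JavaScript regular expression engines):

```javascript
var text = "Hello this is a sample text with Exception in thread " +
           "with lot of text after the Exception text.";

// Greedy .* consumes as much as possible before backtracking,
// so the match extends to the LAST occurrence of "Exception".
var greedy = text.match(/.*Exception/)[0];

// Reluctant .*? consumes as little as possible,
// so the match stops at the FIRST occurrence of "Exception".
var reluctant = text.match(/.*?Exception/)[0];

console.log(greedy.length > reluctant.length); // true
```

The fewer characters a quantifier has to consume and give back, the fewer steps the engine takes, which is why the reluctant form is cheaper when the match sits near the start of the payload.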

Impact
Using greedy quantifiers like wildcards (*) with the RegularExpressionProtection
policy can lead to:

● An increase in overall latency for API requests for a moderate payload size
(up to 1MB)
● Longer time to complete the execution of the RegularExpressionProtection
policy
● API requests with large payloads (>1MB) failing with 504 Gateway Timeout
Errors if the predefined timeout period elapses on the Edge Router
● High CPU utilization on Message Processors due to large amount of
processing which can further impact other API requests

Best practice
● Avoid using greedy quantifiers like .* in regular expressions with the
RegularExpressionProtection policy. Instead, use reluctant quantifiers like
.*? or possessive quantifiers like .*+ (less commonly) wherever possible.

Further reading
● Regular Expression Quantifiers
● RegularExpressionProtection policy
Antipattern: Cache error responses
Caching is a process of storing data temporarily in a storage area called cache for
future reference. Caching data brings significant performance benefits because it:

● Allows faster retrieval of data
● Reduces processing time by avoiding regeneration of data again and again
● Prevents API requests from hitting the backend servers and thereby reduces the overhead on the backend servers
● Allows better utilization of system/application resources
● Improves the response times of APIs

Whenever we have to frequently access some data that doesn’t change too often, we highly recommend using a cache to store this data.

Apigee Edge provides the ability to store data in a cache at runtime for persistence
and faster retrieval. The caching feature is made available through the
PopulateCache policy, LookupCache policy, InvalidateCache policy, and
ResponseCache policy.

In this section, let’s look at the Response Cache policy. The Response Cache policy in the Apigee Edge platform allows you to cache the responses from backend servers. If
the client applications make requests to the same backend resource repeatedly and
the resource gets updated periodically, then we can cache these responses using
this policy. The Response Cache policy helps in returning the cached responses
and consequently avoids forwarding the requests to the backend servers
unnecessarily.

The Response Cache policy:

● Reduces the number of requests reaching the backend


● Reduces network bandwidth
● Improves API performance and response times

Antipattern
The ResponseCache policy lets you cache HTTP responses with any possible
Status code, by default. This means that both success and error responses can be
cached.

Here’s a sample Response Cache policy with default configuration:

<!-- /antipatterns/examples/1-1.xml -->


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ResponseCache async="false" continueOnError="false" enabled="true"
name="TargetServerResponseCache">
<DisplayName>TargetServer ResponseCache</DisplayName>
<CacheKey>
<KeyFragment ref="request.uri" />
</CacheKey>
<Scope>Exclusive</Scope>
<ExpirySettings>
<TimeoutInSec ref="flow.variable.here">600</TimeoutInSec>
</ExpirySettings>
<CacheResource>targetCache</CacheResource>
</ResponseCache>

The Response Cache policy caches error responses in its default configuration.
However, it is not advisable to cache error responses without considerable thought
on the adverse implications because:

● Scenario 1: Failures occur for a temporary, unknown period, and we may continue to send error responses due to caching even after the problem has been fixed.
OR
● Scenario 2: Failures will be observed for a fixed period of time, and then we will have to modify the code to avoid caching responses once the problem is fixed.

Let’s examine these two scenarios in more detail.

Scenario 1: Temporary backend/resource failure


Consider that the failure in the backend server is because of one of the following
reasons:

● A temporary network glitch
● The backend server is extremely busy and unable to respond to requests for a temporary period
● The requested backend resource may be removed/unavailable for a temporary period of time
● The backend server is responding slowly due to high processing time for a temporary period

In all these cases, the failures could occur for an unknown time period and then we
may start getting successful responses. If we cache the error responses, then we
may continue to send error responses to the users even though the problem with
the backend server has been fixed.

Scenario 2: Protracted or fixed backend/resource failure


Consider that we know the failure in the backend is for a fixed period of time. For
instance, you are aware that either:

● A specific backend resource will be unavailable for 1 hour


OR
● The backend server is removed/unavailable for 24 hours due to a sudden
site failure, scaling issues, maintenance, upgrade, etc.

With this information, we can set the cache expiration time appropriately in the
Response Cache policy so that we don’t cache the error responses for a longer
time. However, once the backend server/resource is available again, we will have to
modify the policy to avoid caching error responses. This is because if there is a
temporary/one off failure from the backend server, we will cache the response and
we will end up with the problem explained in scenario 1 above.

Impact
● Caching error responses can cause error responses to be sent even after the
problem has been resolved in the backend server
● Users may spend a lot of effort troubleshooting the cause of an issue
without knowing that it is caused by caching the error responses from the
backend server

Best practice
● Don’t store the error responses in the response cache. Ensure that the
<ExcludeErrorResponse> element is set to true in the ResponseCache
policy to prevent error responses from being cached as shown in the below
code snippet. With this configuration only the responses for the default
success codes 200 to 205 (unless the success codes are modified) will be
cached.
<!-- /antipatterns/examples/1-2.xml -->

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ResponseCache async="false" continueOnError="false" enabled="true"
name="TargetServerResponseCache">
<DisplayName>TargetServerResponseCache</DisplayName>
<CacheKey>
<KeyFragment ref="request.uri" />
</CacheKey>
<Scope>Exclusive</Scope>
<ExpirySettings>
<TimeoutInSec ref="flow.variable.here">600</TimeoutInSec>
</ExpirySettings>
<CacheResource>targetCache</CacheResource>
<ExcludeErrorResponse>true</ExcludeErrorResponse>
</ResponseCache>
● If you need to cache the error responses for a specific reason, determine the maximum/exact duration of time for which the failure will be observed (if possible), and then:
● Set the expiration time appropriately to ensure that you don’t cache the error responses for longer than the failure is expected to last.
● Use the ResponseCache policy without the <ExcludeErrorResponse> element to cache the error responses.
● Do this only if you are absolutely sure that the backend server failure is not brief/temporary.
● Apigee does not recommend caching 5xx responses from backend servers.
NOTE
After the failure has been fixed, remove/disable the ResponseCache policy. Otherwise, the error
responses for temporary backend server failures may get cached.
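The effect of <ExcludeErrorResponse> can be modeled with a small Python sketch. The function and variable names here are illustrative, not Apigee internals: only success responses are stored, so a transient backend error is never replayed from cache:

```python
cache = {}

def cached_fetch(key, fetch):
    """Return a response for key, caching only success responses (200-205),
    which mirrors ExcludeErrorResponse=true in the ResponseCache policy."""
    if key in cache:
        return cache[key]
    status, body = fetch()
    if 200 <= status <= 205:          # error responses are never cached
        cache[key] = (status, body)
    return status, body

# Backend fails once, then recovers.
responses = iter([(503, "unavailable"), (200, "ok")])
fetch = lambda: next(responses)

first = cached_fetch("/resource", fetch)   # 503 passes through, not cached
second = cached_fetch("/resource", fetch)  # backend recovered: 200, now cached
third = cached_fetch("/resource", fetch)   # served from cache, backend not called
```

Had the 503 been cached, the second and third calls would have kept returning "unavailable" even after the backend recovered, which is exactly scenario 1 above.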

Antipattern: Store data greater than 512 KB in cache

Apigee Edge provides the ability to store data in a cache at runtime for persistence
and faster retrieval.

● The data is initially stored in the Message Processor’s in-memory cache, referred to as L1 cache.
● The L1 cache is limited by the amount of memory reserved for it as a
percentage of the JVM memory.
● The cached entries are later persisted in L2 cache, which is accessible to all
Message Processors. More details can be found in the section below.
● The L2 cache does not have any hard limit on the number of cache entries,
however the maximum size of the entry that can be cached is restricted to
512 KB. The cache size of 512 KB is the recommended size for optimal
performance.

Antipattern
This particular antipattern talks about the implications of exceeding the current
cache size restrictions within the Apigee Edge platform.

When data > 512 KB is cached, the consequences are as follows:

● API requests executed for the first time on each of the Message Processors
need to get the data independently from the original source (policy or a
target server), as entries > 512 KB are not available in L2 cache.
● Storing larger data (> 512 KB) in the L1 cache puts more stress on platform resources. The L1 cache memory fills up faster, leaving less space for other data, so data cannot be cached as aggressively as desired.
● Cached entries from the Message Processors will be removed when the limit
on the number of entries is reached. This causes the data to be fetched from
the original source again on the respective Message Processors.
Impact
● Data of size > 512 KB will not be stored in L2/persistent cache.
● More frequent calls to the original source (either a policy or a target server)
leads to increased latencies for the API requests.

Best practice
● It is preferred to store data of size < 512 KB in cache to get optimum
performance.
● If there’s a need to store data > 512 KB, then consider:
● Using any appropriate database for storing large data
OR
● Compressing the data
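If compression is the route you take, the caller must compress before populating the cache and decompress after lookup. Here is a hedged Python sketch of the idea; the 512 KB constant comes from the limit described above, and the function name is illustrative:

```python
import gzip

CACHE_ENTRY_LIMIT = 512 * 1024  # Edge's recommended/maximum entry size

def prepare_for_cache(payload: bytes):
    """Compress a payload that would exceed the cache entry limit.

    Returns (data, compressed_flag); store the flag with the entry so
    readers know whether to gzip.decompress() on retrieval.
    """
    if len(payload) <= CACHE_ENTRY_LIMIT:
        return payload, False
    data = gzip.compress(payload)
    if len(data) > CACHE_ENTRY_LIMIT:
        raise ValueError("payload too large to cache even after compression")
    return data, True

# A repetitive 1 MB payload compresses to well under the limit.
data, compressed = prepare_for_cache(b"x" * (1024 * 1024))
```

Note that compression only helps for compressible payloads; already-compressed content (images, archives) will not shrink meaningfully, in which case an external database remains the safer option.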

Further reading
● Cache internals
Antipattern: Log data to third party
servers using the JavaScript policy
Logging is one of the most efficient ways to debug problems. The information about
the API request such as headers, form params, query params, dynamic variables,
etc. can all be logged for later reference. The information can be logged locally on
the Message Processors (Apigee Edge for Private Cloud only) or to third party
servers such as Sumo Logic, Splunk, or Loggly.

Antipattern
In the code below, the httpClient object in a JavaScript policy is used to log the
data to a Sumo Logic server. This means that the process of logging actually takes
place during request/response processing. This approach is counterproductive
because it adds to the processing time, thereby increasing overall latencies.

LogData_JS:

<!-- /antipatterns/examples/1-4.xml -->


<Javascript async="false" continueOnError="false" enabled="true" timelimit="2000"
name="LogData_JS">
<DisplayName>LogData_JS</DisplayName>
<ResourceURL>jsc://LogData.js</ResourceURL>
</Javascript>

LogData.js:

<!-- /antipatterns/examples/1-5.xml -->


var sumoLogicURL = "...";
httpClient.send(sumoLogicURL);
waitForComplete();

The culprit here is the call to waitForComplete(), which suspends the caller's
operations until the logging process is complete. A valid pattern is to make this
asynchronous by eliminating this call.

Impact
● Executing the logging code via the JavaScript policy adds to the latency of
the API request.
● Concurrent requests can stress the resources on the Message Processor and consequently adversely affect the processing of other requests.

Best Practice
● Use the MessageLogging policy to transfer/log data to Log servers or third
party log management services such as Sumo Logic, Splunk, Loggly, etc.
● The biggest advantage of the Message Logging policy is that it can be
defined in the Post Client Flow, which gets executed after the response is
sent back to the requesting client app.
● Having this policy defined in Post Client flow helps separate the logging
activity and the request/response processing thereby avoiding latencies.
● If there is a specific requirement or a reason to do logging via JavaScript
(not the Post Client flow option mentioned above), then it needs to be done
asynchronously. In other words, if you are using a httpClient object, you
would need to ensure that the JavaScript does NOT implement
waitForComplete

Further reading
● JavaScript policy
● MessageLogging policy
● Community article: Using httpClient from within JavaScript callout, is
waitForComplete() necessary?

Antipattern: Invoke the MessageLogging policy multiple times in an API proxy
Apigee Edge's MessageLogging policy lets API Proxy developers log custom
messages to syslog or to disk (Edge for Private Cloud only). Any important
information related to the API request such as input parameters, request payload,
response code, error messages (if any), and so on, can be logged for later
reference or for debugging. While the policy uses a background process to perform
the logging, there are caveats to using the policy.

Antipattern
The MessageLogging policy provides an efficient way to get more information
about an API request and debugging any issues that are encountered with the API
request. However, using the same MessageLogging policy more than once or
having multiple MessageLogging policies log data in chunks in the same API proxy
in flows other than the PostClientFlow may have adverse implications. This is
because Apigee Edge opens a connection to an external syslog server for a
MessageLogging policy. If the policy uses TLS over TCP, there is additional
overhead of establishing a TLS connection.

Let's explain this with the help of an example API Proxy.

API proxy

In the following example, a MessageLogging policy named "LogRequestInfo" is placed in the Request flow, and another MessageLogging policy named "LogResponseInfo" is added to the Response flow. Both are in the ProxyEndpoint
PreFlow. The LogRequestInfo policy executes in the background as soon as the API
proxy receives the request, and the LogResponseInfo policy executes after the
proxy has received a response from the target server but before the proxy returns
the response to the API client. This will consume additional system resources as
two TLS connections are potentially being established.

In addition, there's a MessageLogging policy named "LogErrorInfo" that gets executed only if there's an error during API proxy execution.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>


<ProxyEndpoint name="default">
...
<FaultRules>
<FaultRule name="fault-logging">
<Step>
<Name>LogErrorInfo</Name>
</Step>
</FaultRule>
</FaultRules>
<PreFlow name="PreFlow">
<Request>
<Step>
<Name>LogRequestInfo</Name>
</Step>
</Request>
<Response>
<Step>
<Name>LogResponseInfo</Name>
</Step>
</Response>
</PreFlow>
...
</ProxyEndpoint>
Message Logging policy

In the following example policy configurations, the data is being logged to third-
party log servers using TLS over TCP. If more than one of these policies is used in
the same API proxy, the overhead of establishing and managing TLS connections
would occupy additional system memory and CPU cycles, leading to performance
issues at scale.

LogRequestInfo policy

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>


<MessageLogging name="LogRequestInfo">
<Syslog>
<Message>[3f509b58 tag="{organization.name}.{apiproxy.name}.
{environment.name}"] Weather request for WOEID
{request.queryparam.w}.</Message>
<Host>logs-01.loggly.com</Host>
<Port>6514</Port>
<Protocol>TCP</Protocol>
<FormatMessage>true</FormatMessage>
<SSLInfo>
<Enabled>true</Enabled>
</SSLInfo>
</Syslog>
<logLevel>INFO</logLevel>
</MessageLogging>

LogResponseInfo policy

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>


<MessageLogging name="LogResponseInfo">
<Syslog>
<Message>[3f509b58 tag="{organization.name}.{apiproxy.name}.
{environment.name}"] Status: {response.status.code}, Response
{response.content}.</Message>
<Host>logs-01.loggly.com</Host>
<Port>6514</Port>
<Protocol>TCP</Protocol>
<FormatMessage>true</FormatMessage>
<SSLInfo>
<Enabled>true</Enabled>
</SSLInfo>
</Syslog>
<logLevel>INFO</logLevel>
</MessageLogging>
LogErrorInfo policy

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>


<MessageLogging name="LogErrorInfo">
<Syslog>
<Message>[3f509b58 tag="{organization.name}.{apiproxy.name}.
{environment.name}"] Fault name: {fault.name}.</Message>
<Host>logs-01.loggly.com</Host>
<Port>6514</Port>
<Protocol>TCP</Protocol>
<FormatMessage>true</FormatMessage>
<SSLInfo>
<Enabled>true</Enabled>
</SSLInfo>
</Syslog>
<logLevel>ERROR</logLevel>
</MessageLogging>

Impact
● Increased networking overhead due to establishing connections to the log
servers multiple times during API Proxy flow.
● If the syslog server is slow or cannot handle the high volume caused by
multiple syslog calls, then it will cause back pressure on the message
processor, resulting in slow request processing and potentially high latency
or 504 Gateway Timeout errors.
● Increased number of concurrent file descriptors opened by the message
processor on Private Cloud setups where file logging is used.
● If the MessageLogging policy is placed in flows other than PostClient flow,
there's a possibility that the information may not be logged, as the
MessageLogging policy will not be executed if any failure occurs before the
execution of this policy.
In the previous ProxyEndpoint example, the information will not be logged
under the following circumstances:
● If any of the policies placed ahead of the LogRequestInfo policy in the
request flow fails.
or
● If the target server fails with any error (HTTP 4XX, 5XX). In this
situation, when a successful response isn't returned, the
LogResponseInfo policy won't get executed.
● In both cases, the LogErrorInfo policy will get executed and log only the
error-related information.

Best practice
● Use an ExtractVariables policy or JavaScript policy to set all the flow
variables that are to be logged, making them available for the
MessageLogging policy.
● Use a single MessageLogging policy to log all the required data in the
PostClientFlow, which gets executed unconditionally.
● Use the UDP protocol, where guaranteed delivery of messages to the syslog
server is not required and TLS/SSL is not mandatory.

The MessageLogging policy was designed to be decoupled from actual API functionality, including error handling. Therefore, invoking it in the PostClientFlow, which is outside of request/response processing, means that it will always log data regardless of whether the API failed or not.

Here's an example invoking MessageLogging Policy in PostClientFlow:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>


...
<PostClientFlow>
<Request/>
<Response>
<Step>
<Name>LogInfo</Name>
</Step>
</Response>
</PostClientFlow>
...

Here's an example of a MessageLogging policy, LogInfo, that logs all the data:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>


<MessageLogging name="LogInfo">
<Syslog>
<Message>[3f509b58 tag="{organization.name}.{apiproxy.name}.
{environment.name}"] Weather request for WOEID {woeid} Status:
{weather.response.code}, Response {weather.response}, Fault:
{fault.name:None}.</Message>
<Host>logs-01.loggly.com</Host>
<Port>6514</Port>
<Protocol>TCP</Protocol>
<FormatMessage>true</FormatMessage>
<SSLInfo>
<Enabled>true</Enabled>
</SSLInfo>
</Syslog>
<logLevel>INFO</logLevel>
</MessageLogging>
Because response variables are not available in PostClientFlow following an Error
Flow, it's important to explicitly set woeid and weather.response* variables
using ExtractVariables or JavaScript policies.

Further reading
● JavaScript policy
● ExtractVariables policy
● Having code execute after proxy processing, including when faults occur,
with the PostClientFlow

Antipattern: Configure non-distributed quota
Apigee Edge provides the ability to configure the number of allowed requests to an
API Proxy for a specific period of time using the Quota policy.

Antipattern
An API proxy request can be served by one or more distributed Edge components called Message Processors. If multiple Message Processors are configured to serve API requests, then the quota will likely be exceeded because each Message Processor keeps its own count of the requests it processes.

Let's explain this with the help of an example. Consider the following Quota policy
for an API proxy:

<!-- /antipatterns/examples/1-6.xml -->


<Quota name="CheckTrafficQuota">
<Interval>1</Interval>
<TimeUnit>hour</TimeUnit>
<Allow count="100"/>
</Quota>

The above configuration should allow a total of 100 requests per hour.

In practice, however, when multiple Message Processors are serving the API requests, the following happens:

● The quota policy is configured to allow 100 requests per hour.
● The requests to the API Proxy are being served by two Message Processors.
● Each Message Processor maintains its own quota count variable, quota_count_mp1 and quota_count_mp2, to track the number of requests it processes.
● Therefore, each Message Processor will separately allow 100 API requests. The net effect is that a total of 200 requests are processed instead of 100.
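This arithmetic can be verified with a toy simulation. The counters and round-robin routing below are purely illustrative, not Apigee's actual implementation:

```python
ALLOW = 100  # <Allow count="100"/> per interval

def serve(num_requests, distributed):
    """Count how many requests are admitted in one quota interval."""
    counters = {}
    admitted = 0
    for i in range(num_requests):
        # Non-distributed: each of two Message Processors keeps its own count.
        key = "shared" if distributed else f"mp{i % 2}"
        if counters.get(key, 0) < ALLOW:
            counters[key] = counters.get(key, 0) + 1
            admitted += 1
    return admitted

print(serve(300, distributed=False))  # 200 -- twice the intended quota
print(serve(300, distributed=True))   # 100 -- the configured limit
```

With per-processor counters, the effective quota scales with the number of Message Processors, which is rarely what the API developer intended.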

Impact
This situation defeats the purpose of the quota configuration and can have
detrimental effects on the backend servers that are serving the requests.

The backend servers can:

● Be stressed due to higher than expected incoming traffic


● Become unresponsive to newer API requests leading to 503 errors

Best Practice
Consider setting the <Distributed> element to true in the Quota policy to
ensure that a common counter is used to track the API requests across all Message
Processors. The <Distributed> element can be set as shown in the code snippet
below:

<!-- /antipatterns/examples/1-7.xml -->


<Quota name="CheckTrafficQuota">
<Interval>1</Interval>
<TimeUnit>hour</TimeUnit>
<Distributed>true</Distributed>
<Allow count="100"/>
</Quota>
Antipattern: Re-use a Quota policy
Apigee Edge provides the ability to configure the number of allowed requests for an
API Proxy for a specific period of time using the Quota policy.

Antipattern
If a Quota policy is reused, then the quota counter will be decremented each time
the Quota policy gets executed irrespective of where it is used. That is, if a Quota
policy is reused:

● Within the same flow or different flows of an API Proxy


● In different target endpoints of an API Proxy

Then the quota counter gets decremented each time the policy executes, and we will end up getting quota violation errors much earlier than expected for the specified time interval.

Let’s use the following example to explain how this works.

API Proxy

Let’s say we have an API Proxy named “TestTargetServerQuota”, which routes traffic to two different target servers based on the resource path. We would like to restrict the API traffic to 10 requests per minute for each of these target servers. Here’s a table that depicts this scenario:

Resource Path  Target Server             Quota
/target-us     target-US.somedomain.com  10 requests per minute
/target-eu     target-EU.somedomain.com  10 requests per minute

Quota policy

Since the traffic quota is the same for both target servers, we define a single Quota policy named “Quota-Minute-Target-Server”, as shown below:

<!-- /antipatterns/examples/1-8.xml -->


<Quota name="Quota-Minute-Target-Server">
<Interval>1</Interval>
<TimeUnit>minute</TimeUnit>
<Distributed>true</Distributed>
<Allow count="10"/>
</Quota>

Target endpoints

Let’s use the Quota policy “Quota-Minute-Target-Server” in the preflow of the target endpoint “Target-US”:

<!-- /antipatterns/examples/1-9.xml -->


<TargetEndpoint name="Target-US">
<PreFlow name="PreFlow">
<Request>
<Step>
<Name>Quota-Minute-Target-Server</Name>
</Step>
</Request>
</PreFlow>
<HTTPTargetConnection>
<URL>http://target-us.somedomain.com</URL>
</HTTPTargetConnection>
</TargetEndpoint>

And reuse the same Quota policy “Quota-Minute-Target-Server” in the preflow of the other target endpoint “Target-EU” as well:

<!-- /antipatterns/examples/1-10.xml -->


<TargetEndpoint name="Target-EU">
<PreFlow name="PreFlow">
<Request>
<Step>
<Name>Quota-Minute-Target-Server</Name>
</Step>
</Request>
<Response/>
</PreFlow>
<HTTPTargetConnection>
<URL>http://target-eu.somedomain.com</URL>
</HTTPTargetConnection>
</TargetEndpoint>

Incoming traffic pattern

Let’s say we get a total of 10 API requests for this API Proxy within the first 30 seconds, in the following pattern:

Resource Path  /target-us  /target-eu  All
# Requests     4           6           10

A little later, we get the 11th API request with the resource path as /target-us,
let’s say after 32 seconds.

We expect the request to go through successfully, assuming that we still have 6 API requests remaining for the target endpoint target-us under the allowed quota.

However, in reality, we get a Quota violation error.

Reason: Since we are using the same Quota policy in both the target endpoints, a
single quota counter is used to track the API requests hitting both the target
endpoints. Thus we exhaust the quota of 10 requests per minute collectively rather
than for the individual target endpoint.

Impact
This antipattern can result in a fundamental mismatch of expectations, leading to a
perception that the quota limits got exhausted ahead of time.

Best practice
● Use the <Class> or <Identifier> elements to ensure multiple, unique counters are maintained by defining a single Quota policy. Let's redefine the Quota policy “Quota-Minute-Target-Server” from the previous section, using the header target_id as the <Identifier>, as shown below:

<!-- /antipatterns/examples/1-11.xml -->

<Quota name="Quota-Minute-Target-Server">
<Interval>1</Interval>
<TimeUnit>minute</TimeUnit>
<Allow count="10"/>
<Identifier ref="request.header.target_id"/>
<Distributed>true</Distributed>
</Quota>

● We will continue to use this Quota policy in both the target endpoints
“Target-US” and “Target-EU” as before.
● Now let’s say if the header target_id has a value “US” then the
requests are routed to the target endpoint “Target-US”.
● Similarly if the header target_id has a value “EU” then the requests
are routed to the target endpoint “Target-EU”.
● So even if we use the same Quota policy in both the target endpoints,
separate quota counters are maintained based on the <Identifier>
value.
● Therefore, by using the <Identifier> element we can ensure that
each of the target endpoints get the allowed quota of 10 requests.
● Use a separate Quota policy in each of the flows/target endpoints/API proxies to ensure that you always get the allowed count of API requests. Let’s now look at the same example from the section above to see how we can achieve the allowed quota of 10 requests for each of the target endpoints.
● Define a separate Quota policy, one each for the target endpoints “Target-US” and “Target-EU”.
Quota policy for target endpoint “Target-US”:

<!-- /antipatterns/examples/1-12.xml -->

<Quota name="Quota-Minute-Target-Server-US">
<Interval>1</Interval>
<TimeUnit>minute</TimeUnit>
<Distributed>true</Distributed>
<Allow count="10"/>
</Quota>

Quota policy for target endpoint “Target-EU”:

<!-- /antipatterns/examples/1-13.xml -->

<Quota name="Quota-Minute-Target-Server-EU">
<Interval>1</Interval>
<TimeUnit>minute</TimeUnit>
<Distributed>true</Distributed>
<Allow count="10"/>
</Quota>

● Use the respective Quota policy in the definition of each target endpoint, as shown below:
Target endpoint “Target-US”:

<!-- /antipatterns/examples/1-14.xml -->

<TargetEndpoint name="Target-US">
<PreFlow name="PreFlow">
<Request>
<Step>
<Name>Quota-Minute-Target-Server-US</Name>
</Step>
</Request>
<Response/>
</PreFlow>
<HTTPTargetConnection>
<URL>http://target-us.somedomain.com</URL>
</HTTPTargetConnection>
</TargetEndpoint>

Target endpoint “Target-EU”:

<!-- /antipatterns/examples/1-15.xml -->

<TargetEndpoint name="Target-EU">
<PreFlow name="PreFlow">
<Request>
<Step>
<Name>Quota-Minute-Target-Server-EU</Name>
</Step>
</Request>
<Response/>
</PreFlow>
<HTTPTargetConnection>
<URL>http://target-eu.somedomain.com</URL>
</HTTPTargetConnection>
</TargetEndpoint>

Since we are using a separate Quota policy in each of the target endpoints “Target-US” and “Target-EU”, a separate counter will be maintained for each. This ensures that we get the allowed quota of 10 API requests per minute for each target endpoint.
● Use the <Class> or <Identifier> elements to ensure multiple, unique
counters are maintained.
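The <Identifier> behavior can be pictured as a dictionary of per-identifier counters. This is a toy model, not Apigee's implementation: one Quota policy, but a separate count per target_id value:

```python
from collections import defaultdict

ALLOW = 10  # <Allow count="10"/> per minute
counters = defaultdict(int)  # one counter per <Identifier> value

def admit(target_id):
    """Admit the request if this identifier's counter is under quota."""
    if counters[target_id] < ALLOW:
        counters[target_id] += 1
        return True
    return False

us_results = [admit("US") for _ in range(10)]  # all 10 admitted
eu_results = [admit("EU") for _ in range(10)]  # independent counter: all admitted
```

The 11th request for either identifier is rejected, while requests for the other identifier are unaffected, which is exactly the per-target-endpoint behavior the best practice aims for.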

Antipattern: Use the RaiseFault policy under inappropriate conditions
The RaiseFault policy lets API developers initiate an error flow, set error variables
in a response body message, and set appropriate response status codes. You can
also use the RaiseFault policy to set flow variables pertaining to the fault, such as
fault.name, fault.type, and fault.category. Because these variables are
visible in analytics data and Router access logs used for debugging, it's important
to identify the fault accurately.

You can use the RaiseFault policy to treat specific conditions as errors, even if an
actual error has not occurred in another policy or in the backend server of the API
proxy. For example, if you want the proxy to send a custom error message to the
client app whenever the backend response body contains the string unavailable,
then you could invoke the RaiseFault policy as shown in the code snippet below:

<!-- /antipatterns/examples/raise-fault-conditions-1.xml -->


<TargetEndpoint name="default">
...
<Response>
<Step>
<Name>RF-Service-Unavailable</Name>
<Condition>(message.content Like "*unavailable*")</Condition>
</Step>
</Response>
...

The RaiseFault policy's name is visible as the fault.name in API Monitoring and
as x_apigee_fault_policy in the Analytics and Router access logs. This helps
to diagnose the cause of the error easily.

Antipattern
Using the RaiseFault policy within FaultRules after another policy has already
thrown an error

Consider the example below, where an OAuthV2 policy in the API Proxy flow has
failed with an InvalidAccessToken error. Upon failure, Edge will set the
fault.name as InvalidAccessToken, enter into the error flow, and execute any
defined FaultRules. In the FaultRule, there is a RaiseFault policy named
RaiseFault that sends a customized error response whenever an
InvalidAccessToken error occurs. However, the use of RaiseFault policy in a
FaultRule means that the fault.name variable is overwritten and masks the true
cause of the failure.

<!-- /antipatterns/examples/raise-fault-conditions-2.xml -->


<FaultRules>
<FaultRule name="generic_raisefault">
<Step>
<Name>RaiseFault</Name>
<Condition>(fault.name equals "invalid_access_token") or (fault.name equals
"InvalidAccessToken")</Condition>
</Step>
</FaultRule>
</FaultRules>

Using RaiseFault policy in a FaultRule under all conditions

In the example below, a RaiseFault policy named RaiseFault executes if the fault.name is not RaiseFault:

<!-- /antipatterns/examples/raise-fault-conditions-3.xml -->


<FaultRules>
<FaultRule name="fault_rule">
....
<Step>
<Name>RaiseFault</Name>
<Condition>!(fault.name equals "RaiseFault")</Condition>
</Step>
</FaultRule>
</FaultRules>

As in the first scenario, the key fault variables fault.name, fault.code, and
fault.policy are overwritten with the RaiseFault policy's name. This behavior
makes it almost impossible to determine which policy actually caused the failure
without accessing a trace file showing the failure or reproducing the issue.

Using RaiseFault policy to return an HTTP 2xx response outside of the error flow

In the example below, a RaiseFault policy named HandleOptionsRequest executes when the request verb is OPTIONS:

<!-- /antipatterns/examples/raise-fault-conditions-4.xml -->


<PreFlow name="PreFlow">
<Request>
<Step>
<Name>HandleOptionsRequest</Name>
<Condition>(request.verb Equals "OPTIONS")</Condition>
</Step>
</Request>
</PreFlow>

The intent is to return the response to the API client immediately without
processing other policies. However, this will lead to misleading analytics data
because the fault variables will contain the RaiseFault policy's name, making the
proxy more difficult to debug. The correct way to implement the desired behavior is
to use Flows with special conditions, as described in Adding CORS support.

Impact
Use of the RaiseFault policy as described above results in overwriting key fault
variables with the RaiseFault policy's name instead of the failing policy's name. In
Analytics and NGINX Access logs, the x_apigee_fault_code and
x_apigee_fault_policy variables are overwritten. In API Monitoring, the
Fault Code and Fault Policy are overwritten. This behavior makes it difficult
to troubleshoot and determine which policy is the true cause of the failure.
In the screenshot below from API Monitoring, you can see that the Fault Code and
Fault policy were overwritten to generic RaiseFault values, making it impossible
to determine the root cause of the failure from the logs:

Best Practice
When an Edge policy raises a fault and you want to customize the error response
message, use the AssignMessage or JavaScript policies instead of the RaiseFault
policy.

The RaiseFault policy should be used in a non-error flow. That is, only use
RaiseFault to treat a specific condition as an error, even if an actual error has not
occurred in a policy or in the backend server of the API Proxy. For example, you
could use the RaiseFault policy to signal that mandatory input parameters are
missing or have incorrect syntax.

You can also use RaiseFault in a fault rule if you want to detect an error during the
processing of a fault. For example, your fault handler itself could cause an error
that you want to signal by using RaiseFault.

Further reading
● Handling Faults
● RaiseFault policy
● Community discussion on Fault Handling Patterns
Antipattern: Access multi-value HTTP
headers incorrectly in an API Proxy
HTTP headers are name-value pairs that allow client applications and backend
services to pass additional information about requests and responses,
respectively. Some simple examples are:

● The Authorization request header passes the user credentials to the server:
Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l
● The Content-Type header indicates the type of the request/response
content being sent:
Content-Type: application/json

HTTP headers can have one or more values, depending on the header field
definition. A multi-valued header has comma-separated values. Here are a
few examples of headers that contain multiple values:

● Cache-Control: no-cache, no-store, must-revalidate
● Accept: text/html, application/xhtml+xml,
application/xml;q=0.9, */*;q=0.8
● X-Forwarded-For: 10.125.5.30, 10.125.9.125
Note: The order of the multiple header values is important and must be preserved. For example
with X-Forwarded-For, the IP addresses are listed in order of network hops from first to last.

Apigee Edge lets developers access headers easily using flow variables in
any of the Edge policies or conditional flows. Here is the list of variables that can
be used to access a specific request or response header in Edge:

Flow variables:

● message.header.header-name
● request.header.header-name
● response.header.header-name
● message.header.header-name.N
● request.header.header-name.N
● response.header.header-name.N

Javascript objects:

● context.proxyRequest.headers.header-name
● context.targetRequest.headers.header-name
● context.proxyResponse.headers.header-name
● context.targetResponse.headers.header-name
Here's a sample AssignMessage policy showing how to read the value of a request
header and store it into a variable:

<AssignMessage continueOnError="false" enabled="true" name="assign-message-default">
  <AssignVariable>
    <Name>reqUserAgent</Name>
    <Ref>request.header.User-Agent</Ref>
  </AssignVariable>
</AssignMessage>

Antipattern
Accessing the values of HTTP headers in Edge policies in a way that returns only
the first value is incorrect and can cause issues if the specific HTTP header has
more than one value.

The following sections contain examples of header access.

Example 1: Read a multi-valued Accept header using JavaScript code

Consider that the Accept header has multiple values as shown below:

Accept: text/html, application/xhtml+xml, application/xml

Here's the JavaScript code that reads the value from Accept header:

// Read the values from the Accept header
var acceptHeaderValues = context.getVariable("request.header.Accept");

The above JavaScript code returns only the first value from the Accept header,
such as text/html.

Example 2: Read a multi-valued Access-Control-Request-Headers header in an
AssignMessage or RaiseFault policy

Consider that the Access-Control-Request-Headers request header has multiple
values, as shown below:

Access-Control-Request-Headers: content-type, authorization

Here's the part of an AssignMessage or RaiseFault policy that sets the
Access-Control-Allow-Headers header from it:

<Set>
  <Headers>
    <Header name="Access-Control-Allow-Headers">{request.header.Access-Control-Request-Headers}</Header>
  </Headers>
</Set>

The above code sets the header Access-Control-Allow-Headers with only the
first value from the request header Access-Control-Request-Headers, in this
example content-type.

Impact
1. In both the examples above, notice that only the first value from a multi-valued
header is returned. If these values are subsequently used by another
policy in the API Proxy flow or by the backend service to perform some
function or logic, then it could lead to an unexpected outcome or result.
2. When request header values are accessed and passed onto the target server,
API requests could be processed by the backend incorrectly and hence they
may give incorrect results.
3. If the client application is dependent on specific header values from the
Edge response, then it may also process incorrectly and give incorrect
results.

Best Practice
1. Use the appropriate built-in flow variables:
   request.header.header_name.values.count,
   request.header.header_name.N,
   response.header.header_name.values.count,
   response.header.header_name.N.
   Then iterate to fetch all the values from a specific header in JavaScript or
   Java callout policies.

   Example: Sample JavaScript code to read a multi-value header

   // Iterate over each value of the Accept header
   for (var i = 1; i <= context.getVariable('request.header.Accept.values.count'); i++) {
     print(context.getVariable('request.header.Accept.' + i));
   }

   Note: Edge splits the header values considering comma as the delimiter, not
   other delimiters such as semicolon. For example, application/xml;q=0.9
   will appear as one value with the above code. If the header values need to be
   split using semicolon as a delimiter, then use string.split(";") to
   separate them into values.

2. Use the substring() function on the flow variable
   request.header.header_name.values in a RaiseFault or AssignMessage
   policy to read all the values of a specific header.

   Example: Sample RaiseFault or AssignMessage policy to read a multi-value
   header

   <Set>
     <Headers>
       <Header name="Access-Control-Allow-Headers">{substring(request.header.Access-Control-Request-Headers.values,1,-1)}</Header>
     </Headers>
   </Set>
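Outside of Edge, the comma-splitting behavior described in the note above can be sketched in plain JavaScript. This is a standalone illustration, not Edge policy code; the header value is just an example:

```javascript
// Standalone sketch: how a multi-valued header string breaks into
// individual values when split on commas. Semicolon-delimited
// parameters (such as q-values) stay attached to each value.
function splitHeaderValues(headerValue) {
  return headerValue.split(',').map(function (v) {
    return v.trim();
  });
}

var accept = 'text/html, application/xhtml+xml, application/xml;q=0.9, */*;q=0.8';
var values = splitHeaderValues(accept);

console.log(values.length); // 4
console.log(values[2]);     // application/xml;q=0.9
// A further split on ';' separates the media type from its parameters:
console.log(values[2].split(';')[0]); // application/xml
```

Note that application/xml;q=0.9 stays as one value, matching what the Edge .values.count / .N variables expose.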

Further reading
● How to handle multi-value headers in Javascript?
● List of HTTP header fields
● Request and response variables
● Flow Variables

Antipattern: Use Service Callout policy to invoke a backend service in a No
Target API Proxy

An API proxy is a managed facade for backend services. A basic API proxy
configuration consists of a ProxyEndpoint (defining the URL of the API proxy) and a
TargetEndpoint (defining the URL of the backend service).

Apigee Edge offers a lot of flexibility for building sophisticated behavior on top of
this pattern. For example, you can add policies to control the way the API
processes a client request before sending it to the backend service, or manipulate
the response received from the backend service before forwarding it to the client.
You can invoke other services using service callout policies, add custom behavior
by adding Javascript code, and even create an API proxy that doesn't invoke a
backend service.

Antipattern
Using service callouts to invoke a backend service in an API proxy with no routes to
a target endpoint is technically feasible, but results in the loss of analytics data
about the performance of the external service.

An API proxy that does not contain target routes can be useful in cases where you
do not need to forward the request message to the TargetEndpoint. Instead, the
ProxyEndpoint performs all necessary processing. For example, the ProxyEndpoint
could retrieve data from a lookup to the API service's key/value store and return the
response without invoking a backend service.

You can define a null Route in an API proxy, as shown here:

<RouteRule name="noroute"/>

A proxy using a null route is a "no target" proxy, because it does not invoke a target
backend service.

It is technically possible to add a service callout to a no target proxy to invoke an
external service, as shown in the example below:

<!-- /antipatterns/examples/service-callout-no-target-1.xml -->
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ProxyEndpoint name="default">
<Description/>
<FaultRules/>
<PreFlow name="PreFlow">
<Request>
<Step>
<Name>ServiceCallout-InvokeBackend</Name>
</Step>
</Request>
<Response/>
</PreFlow>
<PostFlow name="PostFlow">
<Request/>
<Response/>
</PostFlow>
<Flows/>
<HTTPProxyConnection>
<BasePath>/no-target-proxy</BasePath>
<Properties/>
<VirtualHost>secure</VirtualHost>
</HTTPProxyConnection>
<RouteRule name="noroute"/>
</ProxyEndpoint>

However, the proxy cannot provide analytics information about the external service
behavior (such as processing time or error rates), making it difficult to assess the
performance of the external service.

Impact
● Analytics information on the interaction with the external service (error
codes, response time, target performance, etc.) is unavailable.
● Any specific logic required before or after invoking the service callout is
included as part of the overall proxy logic, making it harder to understand
and reuse.

Best Practice
If an API proxy interacts with only a single external service, the proxy should follow
the basic design pattern, where the backend service is defined as the target
endpoint of the API proxy. A proxy with no routing rules to a target endpoint should
not invoke a backend service using the ServiceCallout policy.

The following proxy configuration implements the same behavior as the example
above, but follows best practices:

<!-- /antipatterns/examples/service-callout-no-target-2.xml -->
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ProxyEndpoint name="default">
<Description/>
<FaultRules/>
<PreFlow name="PreFlow">
<Request/>
<Response/>
</PreFlow>
<PostFlow name="PostFlow">
<Request/>
<Response/>
</PostFlow>
<Flows/>
<HTTPProxyConnection>
<BasePath>/simple-proxy-with-route-to-backend</BasePath>
<Properties/>
<VirtualHost>secure</VirtualHost>
</HTTPProxyConnection>
<RouteRule name="default">
<TargetEndpoint>default</TargetEndpoint>
</RouteRule>
</ProxyEndpoint>

Use Service callouts to support mashup scenarios, where you want to invoke
external services before or after invoking the target endpoint. Service callouts are
not meant to replace target endpoint invocation.
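In such a mashup, the callout sits alongside a real target endpoint. A minimal Service Callout sketch might look like the following (the policy name, variable names, and URL are hypothetical):

```xml
<!-- Hypothetical enrichment callout, executed before or after the
     real target call rather than replacing it -->
<ServiceCallout name="SC-Enrich-Response">
  <Request clearPayload="true" variable="enrichRequest"/>
  <Response>enrichResponse</Response>
  <HTTPTargetConnection>
    <URL>https://geo.example.com/lookup</URL>
  </HTTPTargetConnection>
</ServiceCallout>
```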

Further reading
● Understanding APIs and API proxies
● How to configure Route Rules
● Null Routes
● Service Callout policy

Antipattern: Leave unused NodeJS API Proxies deployed

One of the unique and useful features of Apigee Edge is the ability to wrap a
NodeJS application in an API Proxy. This allows developers to create event-driven
server-side applications using Edge.

Antipattern

Deployment of API Proxies is the process of making them available to serve API
requests. Each of the deployed API Proxies is loaded into Message Processor’s
runtime memory to be able to serve the API requests for the specific API Proxy.
Therefore, the runtime memory usage increases with the increase in the number of
deployed API Proxies. Leaving any unused API Proxies deployed can cause
unnecessary use of runtime memory.

In the case of NodeJS API Proxies, there is a further implication.

The platform launches a “Node app” for every deployed NodeJS API Proxy. A Node
app is akin to a standalone node server instance on the Message Processor JVM
process.

In effect, for every deployed NodeJS API Proxy, Edge launches a separate node
server to process requests for the corresponding proxy. If the same NodeJS API
Proxy is deployed in multiple environments, then a corresponding Node app is
launched for each environment. In situations where there are a lot of deployed but
unused NodeJS API Proxies, multiple Node apps are launched. Unused NodeJS
proxies translate to idle Node apps which consume memory and affect start-up
times of the application process.

Used Proxies:
● # Deployed Proxies: 10
● # Environments: dev, test, prod (3)
● # Node apps Launched: 10 x 3 = 30

Unused Proxies:
● # Deployed Proxies: 12
● # Environments: dev, test, prod (3)
● # Node apps Launched: 12 x 3 = 36

In the illustration above, 36 unused Node apps are launched, which use up system
memory and adversely affect start-up times of the process.

Impact

● High memory usage and cascading effect on the application's ability to process
further requests
● Likely performance impact on the API Proxies that are actually serving traffic

Best practice

● Undeploy any unused API Proxies
● Use the Analytics Proxy Performance dashboard to determine which proxies
are not serving traffic; undeploy the ones you don't need

Further reading

● Overview of Node.js on Apigee Edge
● Getting started with Node.js on Apigee Edge
● Debugging and troubleshooting Node.js proxies

Antipattern: Invoke Management API
calls from an API Proxy
Edge has a powerful utility called “management APIs” which offers services such
as:

● Deploying or undeploying API Proxies
● Configuring virtual hosts, keystores and truststores, etc.
● Creating, deleting, and/or updating entities such as KeyValueMaps, API
Products, Developer Apps, Developers, Consumer Keys, etc.
● Retrieving information about these entities

These services are made accessible through a component called Management
Server in the Apigee Edge platform. These services can be invoked easily with the
help of simple management API calls.

Sometimes we may need to use one or more of these services from API Proxies at
runtime. This is because entities such as KeyValueMaps, OAuth Access Tokens,
API Products, Developer Apps, Developers, Consumer Keys, etc. contain useful
information in the form of key-value pairs, custom attributes, or as part of their
profiles.

For instance, you can store the following information in a KeyValueMap to make it
more secure and accessible at runtime:

● Back-end target URLs
● Environment properties
● Security credentials of backend or third-party systems

Similarly, you may want to get the list of API Products or developer’s email address
at runtime. This information will be available as part of the Developer Apps profile.

All this information can be effectively used at runtime to enable dynamic behaviour
in policies or custom code within Apigee Edge.

Antipattern
The management APIs are intended for administrative tasks and should not be
used for performing any runtime logic in API Proxy flows. This is because:

● Using management APIs to access information about entities such as
KeyValueMaps, OAuth Access Tokens, or for any other purpose from API
Proxies creates a dependency on Management Servers.
● Management Servers are not part of the Edge runtime components, and
therefore they may not be highly available.
● Management Servers also may not be provisioned within the same network
or data center and may therefore introduce network latencies at runtime.
● The entries in the Management Servers are cached for a longer period of
time, so we may not be able to see the latest data immediately in the API
Proxies if we perform writes and reads in a short period of time.
● Using management APIs at runtime increases network hops.

In the code sample below, the management API call is made via the custom
JavaScript code to retrieve the information from the KeyValueMap:

var response = httpClient.send('https://api.enterprise.apigee.com/v1/o/org_name/e/env_name/keyvaluemaps/kvm_name');

If the management server is unavailable, then the JavaScript code invoking the
management API call fails. This subsequently causes the API request to fail.

Impact
● Introduces additional dependency on Management Servers during runtime.
Any failure in Management servers will affect the API calls.
● User credentials for management APIs need to be stored either locally or in
some secure store such as Encrypted KVM.
● Performance implications owing to invoking the management service over
the network.
● May not see the updated values immediately due to longer cache expiration
in management servers.

Best practice
There are more effective ways of retrieving information from entities such as
KeyValueMaps, API Products, DeveloperApps, Developers, Consumer Keys, etc. at
runtime. Here are a few examples:

● Use a KeyValueMapOperations policy to access information from
KeyValueMaps. Here's sample code that shows how to retrieve information
from a KeyValueMap:

<!-- /antipatterns/examples/2-6.xml -->
<KeyValueMapOperations mapIdentifier="urlMap" async="false"
    continueOnError="false" enabled="true" name="GetURLKVM">
  <DisplayName>GetURLKVM</DisplayName>
  <ExpiryTimeInSecs>86400</ExpiryTimeInSecs>
  <Scope>environment</Scope>
  <Get assignTo="urlHost1" index="2">
    <Key>
      <Parameter>urlHost_1</Parameter>
    </Key>
  </Get>
</KeyValueMapOperations>
● To access information about API Products, Developer Apps, Developers,
Consumer Keys, etc. in the API Proxy, you can do either of the following:
● If your API Proxy flow has a VerifyAPIKey policy, then you can access
the information using the flow variables populated as part of this
policy. Here is sample code that shows how to retrieve the name and
created_by information of a Developer App using JavaScript:

<!-- /antipatterns/examples/2-7.xml -->
print("Application Name ", context.getVariable("verifyapikey.VerifyAPIKey.app.name"));
print("Created by:", context.getVariable("verifyapikey.VerifyAPIKey.app.created_by"));
● If your API Proxy flow doesn't have a VerifyAPIKey policy, then you
can access the profiles of API Products, Developer Apps, etc. using
the AccessEntity and ExtractVariables policies:
1. Retrieve the profile of the Developer App with the AccessEntity
policy:

<!-- /antipatterns/examples/2-8.xml -->
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<AccessEntity async="false" continueOnError="false"
    enabled="true" name="GetDeveloperApp">
  <DisplayName>GetDeveloperApp</DisplayName>
  <EntityType value="app"></EntityType>
  <EntityIdentifier ref="developer.app.name" type="appname"/>
  <SecondaryIdentifier ref="developer.id" type="developerid"/>
</AccessEntity>

2. Extract the appId from the Developer App profile with the
ExtractVariables policy:

<!-- /antipatterns/examples/2-9.xml -->
<ExtractVariables name="Extract-Developer App-Info">
  <!--
    The Source element points to the variable populated by the
    AccessEntity policy. The format is <policy-type>.<policy-name>.
    In this case, the variable contains the whole developer profile.
  -->
  <Source>AccessEntity.GetDeveloperApp</Source>
  <VariablePrefix>developerapp</VariablePrefix>
  <XMLPayload>
    <Variable name="appId" type="string">
      <!-- You parse elements from the developer profile using XPath. -->
      <XPath>/App/AppId</XPath>
    </Variable>
  </XMLPayload>
</ExtractVariables>

Further reading
● KeyValueMapOperations policy
● VerifyAPIKey policy
● AccessEntity policy

Antipattern: Invoke a proxy within a proxy using custom code or as a target
Edge lets you invoke one API Proxy from another API Proxy. This feature is useful
especially if you have an API Proxy that contains reusable code that can be used by
other API Proxies.

Antipattern
Invoking one API Proxy from another, either using HTTPTargetConnection in the
target endpoint or custom JavaScript code, results in an additional network hop.

Invoke Proxy 2 from Proxy 1 using HTTPTargetConnection

The following code sample invokes Proxy 2 from Proxy 1 using
HTTPTargetConnection:

<!-- /antipatterns/examples/2-1.xml -->
<HTTPTargetConnection>
<URL>http://myorg-test.apigee.net/proxy2</URL>
</HTTPTargetConnection>

Invoke Proxy 2 from Proxy 1 from the JavaScript code

The next code sample invokes Proxy 2 from Proxy 1 using JavaScript:

<!-- /antipatterns/examples/2-2.xml -->
var response = httpClient.send('http://myorg-test.apigee.net/proxy2');
response.waitForComplete();

Code Flow

To understand why this has an inherent disadvantage, we need to understand the
route a request takes, as illustrated by the diagram below:

Figure 1: Code Flow

As depicted in the diagram, a request traverses multiple distributed components,
including the Router and the Message Processor.

In the code samples above, invoking Proxy 2 from Proxy 1 means that the request
has to be routed through the traditional route (i.e., Router > Message Processor) at
runtime. This would be akin to invoking an API from a client, thereby making
multiple network hops that add to the latency. These hops are unnecessary
considering that the Proxy 1 request has already "reached" the MP.

Impact
Invoking one API Proxy from another API Proxy incurs unnecessary network hops;
that is, the request has to be passed from one Message Processor to another.

Best practice
● Use the proxy chaining feature for invoking one API Proxy from another.
Proxy chaining is more efficient as it uses a local connection to reference the
target endpoint (another API Proxy). The code sample shows proxy chaining
using LocalTargetConnection in your endpoint definition:

<!-- /antipatterns/examples/2-3.xml -->
<LocalTargetConnection>
  <APIProxy>proxy2</APIProxy>
  <ProxyEndpoint>default</ProxyEndpoint>
</LocalTargetConnection>

● The invoked API Proxy gets executed within the same Message Processor;
as a result, it avoids the network hop, as shown in the following figure:

Figure 2: Code Flow with Proxy Chaining

Further reading
● Proxy chaining

Antipattern: Manage Edge resources without using source control management
Apigee Edge provides many different types of resources, and each of them serves a
different purpose. There are certain resources that can be configured (i.e., created,
updated, and/or deleted) only through the Edge UI, management APIs, or tools that
use management APIs, and only by users with the prerequisite roles and permissions.
For example, only org admins belonging to a specific organization can configure
these resources. That means these resources cannot be configured by end users
through developer portals, nor by any other means. These resources include:

● API proxies
● Shared flows
● API products
● Caches
● KVMs
● Keystores and truststores
● Virtual hosts
● Target servers
● Resource files

While these resources do have restricted access, if any modifications are made to
them, even by authorized users, the historic data simply gets overwritten with the
new data, because Apigee Edge stores only the current state of these resources.
The main exceptions to this rule are API proxies and shared flows.

API Proxies and Shared Flows under Revision Control

API proxies and shared flows are managed -- in other words, created, updated, and
deployed -- through revisions. Revisions are sequentially numbered, which enables
you to add new changes and save them as a new revision, or to revert a change by
deploying a previous revision of the API proxy/shared flow. At any point in time,
there can be only one deployed revision of an API proxy/shared flow in an
environment, unless the revisions have different base paths.

Although the API proxies and shared flows are managed through revisions, if any
modifications are made to an existing revision, there is no way to roll back since
the old changes are simply overwritten.

Audits and History

Apigee Edge provides the Audits and API, Product, and organization history
features that can be helpful in troubleshooting scenarios. These features enable
you to view information like who performed specific operations (create, read,
update, delete, deploy, and undeploy) and when the operations were performed on
the Edge resources. However, if any update or delete operations are performed on
any of the Edge resources, the audits cannot provide you the older data.

Antipattern
Managing the Edge resources (listed above) directly through the Edge UI or
management APIs without using a source control system

There's a misconception that Apigee Edge will be able to restore resources to their
previous state following modifications or deletes. However, Edge Cloud does not
provide restoration of resources to their previous state. Therefore, it is the user's
responsibility to ensure that all the data related to Edge resources is managed
through source control management, so that old data can be restored back quickly
in case of accidental deletion or situations where any change needs to be rolled
back. This is particularly important for production environments where this data is
required for runtime traffic.
Let's explain this with the help of a few examples and the kind of impact that can be
caused if the data is not managed through a source control system and is
modified or deleted, knowingly or unknowingly:

Example 1: Deletion or modification of API proxy

When an API proxy is deleted, or a change is deployed on an existing revision, the
previous code won't be recoverable. If the API proxy contains Java, JavaScript,
Node.js, or Python code that is not managed in a source control management
(SCM) system outside Apigee, a lot of development work and effort could be lost.

Example 2: Determination of API proxies using specific virtual hosts

A certificate on a virtual host is expiring and that virtual host needs updating.
Identifying which API proxies use that virtual host for testing purposes may be
difficult if there are many API proxies. If the API proxies are managed in an SCM
system outside Apigee, then it would be easy to search the repository.

Example 3: Deletion of keystore/truststore

If a keystore/truststore that is used by a virtual host or target server configuration
is deleted, it will not be possible to restore it unless the configuration details
of the keystore/truststore, including certificates and/or private keys, are stored in
source control.

Impact
● If any of the Edge resources are deleted, then it's not possible to recover the
resource and its contents from Apigee Edge.
● API requests may fail with unexpected errors leading to outage until the
resource is restored back to its previous state.
● It is difficult to search for inter-dependencies between API proxies and other
resources in Apigee Edge.

Best Practice
● Use any standard SCM coupled with a continuous integration and continuous
deployment (CICD) pipeline for managing API proxies and shared flows.
● Use any standard SCM for managing the other Edge resources, including API
products, caches, KVMs, target servers, virtual hosts, and keystores.
● If there are any existing Edge resources, then use management APIs
to get the configuration details for them as a JSON/XML payload and
store them in source control management.
● Manage any new updates to these resources in source control
management.
● If there's a need to create new Edge resources or update existing Edge
resources, then use the appropriate JSON/XML payload stored in
source control management and update the configuration in Edge
using management APIs.

* Encrypted KVMs cannot be exported in plain text from the API. It is the user's
responsibility to keep a record of what values are put into encrypted KVMs.

Further reading
● Source Control for API Proxy Development
● Guide to implementing CI on Apigee Edge
● Maven Deploy Plugin for API Proxies
● Maven Config Plugin to manage Edge Resources
● Apigee Edge API (for automating backups)

Antipattern: Define multiple virtual hosts with the same host alias and port number
In Apigee Edge, a Router handles all incoming API traffic. That means all HTTP and
HTTPS requests to an Edge API proxy are first handled by an Edge Router.
Therefore, an API proxy request must be directed to the IP address and an open
port on a Router.

A virtual host lets you host multiple domain names on a single server or group of
servers. For Edge, the servers correspond to Edge Routers. By defining virtual hosts
on a Router, you can handle requests to multiple domains.

A virtual host on Edge defines a protocol (HTTP or HTTPS), along with a Router
port and a host alias. The host alias is typically a DNS domain name that maps to a
Router's IP address.

For example, the following image shows a Router with two virtual host definitions:

In this example, there are two virtual host definitions. One handles HTTPS requests
on the domain domainName1; the other handles HTTP requests on domainName2.

On a request to an API proxy, the Router compares the Host header and port
number of the incoming request to the list of host aliases defined by all virtual
hosts to determine which virtual host handles the request.

A sample configuration for a virtual host is shown below:
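A minimal virtual host definition looks like the following sketch (the host alias, keystore reference, and key alias are illustrative, not values from this installation):

```xml
<!-- Hypothetical virtual host: one host alias on port 443 with TLS -->
<VirtualHost name="secure">
  <HostAliases>
    <HostAlias>api.company.abc.com</HostAlias>
  </HostAliases>
  <Port>443</Port>
  <SSLInfo>
    <Enabled>true</Enabled>
    <KeyStore>ref://mykeystoreref</KeyStore>
    <KeyAlias>mykeyalias</KeyAlias>
  </SSLInfo>
</VirtualHost>
```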

Antipattern
Defining multiple virtual hosts with the same host alias and port number in the
same/different environments of an organization or across organizations will lead to
confusion at the time of routing API requests and may cause unexpected
errors/behaviour.

Let's use an example to explain the implications of having multiple virtual hosts
with the same host alias.
Consider two virtual hosts, sandbox and secure, defined with the same host
alias, api.company.abc.com, in an environment:

With the above setup, there could be two scenarios as described in the following
sections.

Scenario 1: An API Proxy is configured to accept requests on only one of the
virtual hosts (sandbox)
<ProxyEndpoint name="default">
...
<HTTPProxyConnection>
<BasePath>/demo</BasePath>
<VirtualHost>sandbox</VirtualHost>
</HTTPProxyConnection>
...
</ProxyEndpoint>

In this scenario, when the client applications make calls to the specific API Proxy
using the host alias api.company.abc.com, they will intermittently get 404 errors
with the message:

Unable to identify proxy for host: secure

This is because the Router sends the requests to both the sandbox and secure
virtual hosts. When the requests are routed to the sandbox virtual host, the client
applications will get a successful response. However, when the requests are routed
to the secure virtual host, the client applications will get a 404 error, as the API
Proxy is not configured to accept requests on the secure virtual host.

Scenario 2: An API Proxy is configured to accept requests on both the virtual
hosts (sandbox and secure)
<ProxyEndpoint name="default">
...
<HTTPProxyConnection>
<BasePath>/demo</BasePath>
<VirtualHost>sandbox</VirtualHost>
<VirtualHost>secure</VirtualHost>
</HTTPProxyConnection>
...
</ProxyEndpoint>

In this scenario, when the client applications make calls to the specific API Proxy
using the host alias api.company.abc.com, they will get a valid response based
on the proxy logic.

However, this causes incorrect data to be stored in Analytics as the API requests
are routed to both the virtual hosts, while the actual requests were aimed to be sent
to only one virtual host.

This can also affect the logging information and any other data that is based on the
virtual hosts.

Impact
1. 404 errors, as the API requests may get routed to a virtual host on which the
API Proxy is not configured to accept requests.
2. Incorrect Analytics data, since the API requests get routed to all the virtual
hosts having the same host alias, while the requests were intended for only
a specific virtual host.

Best Practice
● Don't define multiple virtual hosts with the same host alias and port number in
the same environment, or in different environments of an organization.
● If there's a need to define multiple virtual hosts, then use a different host
alias in each of the virtual hosts.
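For instance, two virtual hosts in the same environment can be kept distinct by giving each its own alias (the hostnames below are illustrative):

```xml
<!-- Hypothetical pair of virtual hosts with distinct host aliases -->
<VirtualHost name="sandbox">
  <HostAliases>
    <HostAlias>api-sandbox.company.abc.com</HostAlias>
  </HostAliases>
  <Port>443</Port>
</VirtualHost>

<VirtualHost name="secure">
  <HostAliases>
    <HostAlias>api.company.abc.com</HostAlias>
  </HostAliases>
  <Port>443</Port>
</VirtualHost>
```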

Further reading
● Virtual hosts
Antipattern: Load Balance with a single
target server with MaxFailures set to a
non-zero value
The TargetEndpoint configuration defines the way Apigee Edge connects to a
backend service or API. It sends requests to, and receives responses from, the
backend service. The backend service can be an HTTP/HTTPS server, NodeJS,
or a Hosted Target.

The backend service in the TargetEndpoint can be invoked in one of the following
ways:

● Direct URL to an HTTP or HTTPS server
● ScriptTarget to an Edge-hosted Node.js script
● HostedTarget to NodeJS deployed in a Hosted Target environment
● TargetServer configuration

Likewise, the Service Callout policy can be used to make a call to any external
service from the API Proxy flow. This policy supports defining HTTP/HTTPS target
URLs either directly in the policy itself or using a TargetServer configuration.

TargetServer configuration
TargetServer configuration decouples the concrete endpoint URLs from
TargetEndpoint configurations and Service Callout policies. A TargetServer is
referenced by name instead of by URL in the TargetEndpoint. The TargetServer
configuration holds the hostname of the backend service, the port number, and
other details.

Here is a sample TargetServer configuration:

<TargetServer name="target1">
<Host>www.mybackendservice.com</Host>
<Port>80</Port>
<IsEnabled>true</IsEnabled>
</TargetServer>

A TargetServer enables you to have different configurations for each
environment. A TargetEndpoint/Service Callout policy can be configured with one
or more named TargetServers using a LoadBalancer. The built-in support for load
balancing enhances the availability of the APIs and enables failover among the
configured backend server instances.

Here is a sample TargetEndpoint configuration using TargetServers:

<TargetEndpoint name="default">
<HTTPTargetConnection>
<LoadBalancer>
<Server name="target1"/>
<Server name="target2"/>
</LoadBalancer>
</HTTPTargetConnection>
</TargetEndpoint>

MaxFailures
The MaxFailures configuration specifies the maximum number of failed requests
to a target server, after which the target server is marked as down and
removed from rotation for all subsequent requests.

An example configuration with MaxFailures specified:

<TargetEndpoint name="default">
<HTTPTargetConnection>
<LoadBalancer>
<Server name="target1"/>
<Server name="target2"/>
<MaxFailures>5</MaxFailures>
</LoadBalancer>
</HTTPTargetConnection>
</TargetEndpoint>

In the above example, if five consecutive requests to "target1" fail, then
"target1" is removed from rotation and all subsequent requests are sent only to
"target2".

Antipattern
Having single TargetServer in a LoadBalancer configuration of the
TargetEndpoint or Service Callout policy with MaxFailures set to a non-zero
value is not recommended as it can have adverse implications.

Consider the following sample configuration that has a single TargetServer named
"target1" with MaxFailures set to 5 (non-zero value):

<TargetEndpoint name="default">
<HTTPTargetConnection>
<LoadBalancer>
<Algorithm>RoundRobin</Algorithm>
<Server name="target1" />
<MaxFailures>5</MaxFailures>
</LoadBalancer>
</HTTPTargetConnection>

If the requests to the TargetServer "target1" fail five times (the number
specified in MaxFailures), the TargetServer is removed from rotation. Since
there are no other TargetServers to fail over to, all subsequent requests to
any API Proxy having this configuration will fail with a 503 Service
Unavailable error.

Even if the TargetServer "target1" gets back to its normal state and is capable of
sending successful responses, the requests to the API Proxy will continue to return
503 errors. This is because Edge does not automatically put the TargetServer back
in rotation even after the target is up and running again. To address this issue, the
API Proxy must be redeployed for Edge to put the TargetServer back into rotation.

If the same configuration is used in the Service Callout policy, then the API
requests will get a 500 error after the requests to the TargetServer "target1"
fail five times.

Impact
Using a single TargetServer in a LoadBalancer configuration of TargetEndpoint or
Service Callout policy with MaxFailures set to a non-zero value causes:

● API Requests to fail with 503/500 Errors continuously (after the requests fail
for MaxFailures number of times) until the API Proxy is redeployed.
● Longer outage as it is tricky and can take more time to diagnose the cause of
this issue (without prior knowledge about this antipattern).

Best Practice
1. Have more than one TargetServer in the LoadBalancer configuration for
higher availability.
2. Always define a HealthMonitor when MaxFailures is set to a non-zero
value. A target server is removed from rotation when the number of
failures reaches the number specified in MaxFailures. Having a
HealthMonitor ensures that the TargetServer is put back into rotation as
soon as the target server becomes available again, meaning there is no need
to redeploy the proxy.
To ensure that the health check is performed on the same port number that
Edge uses to connect to the target servers, Apigee recommends that you
omit the <Port> child element under <TCPMonitor> unless it is different
from the TargetServer port. By default, <Port> is the same as the TargetServer
port.
Sample configuration with HealthMonitor:
<TargetEndpoint name="default">
<HTTPTargetConnection>
<LoadBalancer>
<Algorithm>RoundRobin</Algorithm>
<Server name="target1" />
<Server name="target2" />
<MaxFailures>5</MaxFailures>
</LoadBalancer>
<Path>/test</Path>
<HealthMonitor>
<IsEnabled>true</IsEnabled>
<IntervalInSec>5</IntervalInSec>
<TCPMonitor>
<ConnectTimeoutInSec>10</ConnectTimeoutInSec>
</TCPMonitor>
</HealthMonitor>
</HTTPTargetConnection>
</TargetEndpoint>
3. If there's a constraint such that only one TargetServer can be used and the
HealthMonitor is not used, then don't specify MaxFailures in the
LoadBalancer configuration.
The default value of MaxFailures is 0. This means that Edge always tries to
connect to the target for each request and never removes the target server
from the rotation.

Further reading
● Load Balancing across Backend Servers
● How to use Target Servers in your API Proxies

Antipattern: Access the request/response payload when streaming is enabled
In Edge, the default behavior is that HTTP request and response payloads are
stored in an in-memory buffer before they are processed by the policies in the API
Proxy.
If streaming is enabled, then request and response payloads are streamed without
modification to the client app (for responses) and to the target endpoint (for
requests). Streaming is especially useful if an application accepts or returns
large payloads, or returns data in chunks over time.
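
Streaming is enabled per proxy by setting properties on the endpoint configurations. A minimal sketch for a TargetEndpoint (the same properties also apply to the ProxyEndpoint; the target URL is the usual mock target):

```xml
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <URL>http://mocktarget.apigee.net</URL>
    <Properties>
      <!-- Stream the request and response bodies instead of buffering them -->
      <Property name="request.streaming.enabled">true</Property>
      <Property name="response.streaming.enabled">true</Property>
    </Properties>
  </HTTPTargetConnection>
</TargetEndpoint>
```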

Antipattern
Accessing the request/response payload with streaming enabled causes Edge to go
back to the default buffering mode.

Figure 1: Accessing request/response payload with streaming enabled

The illustration above shows an attempt to extract variables from the request
payload and to convert the JSON response payload to XML using the JSONToXML
policy. Either operation disables streaming in Edge.

Impact
● Streaming will be disabled, which can lead to increased latencies in
processing the data
● Increased heap memory usage or OutOfMemory errors can be observed on
Message Processors due to the use of in-memory buffers, especially for
large request/response payloads

Best practice
● Don't access the request/response payload when streaming is enabled.
Further reading
● Streaming requests and responses
● How does APIGEE Edge Streaming work?
● How to handle streaming data together with normal request/response
payload in a single API Proxy
● Best practices for API proxy design and development

Antipattern: Define multiple ProxyEndpoints in an API proxy
The ProxyEndpoint configuration defines the way client apps consume the APIs
through Apigee Edge. The ProxyEndpoint defines the URL of the API proxy and how
a proxy behaves: which policies to apply and which target endpoints to route to, and
the conditions that need to be met for these policies or route rules to be executed.

In short, the ProxyEndpoint configuration defines all that needs to be done to
implement an API.

Antipattern
An API proxy can have one or more proxy endpoints. Defining multiple
ProxyEndpoints is an easy and simple mechanism to implement multiple APIs in a
single proxy. This lets you reuse policies and/or business logic before and after the
invocation of a TargetEndpoint.

On the other hand, when defining multiple ProxyEndpoints in a single API proxy,
you end up conceptually combining many unrelated APIs into a single artifact. It
makes the API proxies harder to read, understand, debug, and maintain. This
defeats the main philosophy of API proxies: making it easy for developers to create
and maintain APIs.

Impact
Multiple ProxyEndpoints in an API proxy can:

● Make it hard for developers to understand and maintain the API proxy.
● Obfuscate analytics. By default, analytics data is aggregated at the proxy
level. There is no breakdown of metrics by proxy endpoint unless you create
custom reports.
● Make it difficult to troubleshoot problems with API proxies.
Best practice
When you are implementing a new API proxy or redesigning an existing API proxy,
use the following best practices:

1. Implement one API proxy with a single ProxyEndpoint.


2. If there are multiple APIs that share a common target server and/or require
the same logic pre- or post-invocation of the target server, consider using
shared flows to implement such logic in different API proxies.
3. If there are multiple APIs that share a common starting base path, but differ
in the suffix, use conditional flows in a single ProxyEndpoint.
4. If there exists an API proxy with multiple ProxyEndpoints and if there are no
issues with it, then there is no need to take any action.
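
As an illustration of practice 3, related paths can be handled by conditional flows inside a single ProxyEndpoint. This is a hedged sketch; the base path, flow names, and path suffixes are hypothetical:

```xml
<ProxyEndpoint name="default">
  <HTTPProxyConnection>
    <BasePath>/v1/store</BasePath>
  </HTTPProxyConnection>
  <Flows>
    <Flow name="getItems">
      <!-- Executes only for GET requests to <basepath>/items -->
      <Condition>(proxy.pathsuffix MatchesPath "/items") and (request.verb = "GET")</Condition>
    </Flow>
    <Flow name="getItem">
      <!-- Executes only for GET requests to <basepath>/items/{id} -->
      <Condition>(proxy.pathsuffix MatchesPath "/items/*") and (request.verb = "GET")</Condition>
    </Flow>
  </Flows>
  <RouteRule name="default">
    <TargetEndpoint>default</TargetEndpoint>
  </RouteRule>
</ProxyEndpoint>
```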

Using one ProxyEndpoint per API proxy leads to:

1. Simpler, easier-to-maintain proxies
2. Better information in Analytics: proxy performance and target response
time are reported separately for each API instead of being rolled up for all
ProxyEndpoints
3. Faster troubleshooting and issue resolution

Further reading
● API Proxy Configuration Reference
● Reusable shared flows

Antipattern: Allow a slow backend


Backend systems run the services that API Proxies access. In other words, they are
the fundamental reason for the very existence of APIs and the API Management
Proxy layer.

Any API request that is routed via the Edge platform traverses a typical path before
it hits the backend:

● The request originates from a client which could be anything from a browser
to an app.
● The request is then received by the Edge gateway.
● It is processed within the gateway. As a part of this processing, the request
passes onto a number of distributed components.
● The gateway then routes the request to the backend that responds to the
request.
● The response from the backend then traverses back the exact reverse path
via the Edge gateway back to the client.
In effect, the performance of API requests routed via Edge is dependent on both
Edge and the backend systems. In this antipattern, we will focus on the impact on
API requests due to badly performing backend systems.

Antipattern
Let us consider the case of a problematic backend. These are the possibilities:

● Inadequately sized backend
● Slow backend

Inadequately sized backend

The challenge in exposing the services on these backend systems via APIs is that
they are accessible to a large number of end users. From a business perspective,
this is a desirable challenge, but something that needs to be dealt with.

Often, backend systems are not prepared for this extra demand on their
services and are consequently undersized, or are not tuned for efficient
response.

The problem with an "inadequately sized" backend is that a spike in API
requests stresses resources such as CPU, load, and memory on the backend
systems. This would eventually cause API requests to fail.

Slow backend

The problem with an improperly tuned backend is that it would be very slow to
respond to any requests coming to it, thereby leading to increased latencies,
premature timeouts and a compromised customer experience.

The Edge platform offers a few tunable options to circumvent and manage the slow
backend. But these options have limitations.

Impact
● In the case of an inadequately sized backend, increase in traffic could lead to
failed requests.
● In the case of a slow backend, the latency of requests will increase.
Best practice
● Use caching to store the responses to improve the API response times and
reduce the load on the backend server.
● Resolve the underlying problem in the slow backend servers.
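
As a sketch of the caching recommendation, a ResponseCache policy can serve repeated requests from cache instead of hitting the slow backend. The policy name, cache key, and timeout below are illustrative, not prescriptive:

```xml
<ResponseCache name="Cache-Backend-Responses">
  <CacheKey>
    <!-- Cache per request URI -->
    <KeyFragment ref="request.uri" />
  </CacheKey>
  <ExpirySettings>
    <!-- Cached responses expire after 300 seconds -->
    <TimeoutInSec>300</TimeoutInSec>
  </ExpirySettings>
</ResponseCache>
```

Attach the policy to both the request flow (to serve from cache) and the response flow (to populate the cache) of the ProxyEndpoint.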

Further reading
● Edge caching internals

Antipattern: Disable HTTP persistent (reusable keep-alive) connections

An API proxy is an interface for client applications used to connect with backend
services. Apigee Edge provides multiple ways to connect to backend services
through an API proxy:

● TargetEndpoint to connect to any HTTP/HTTPS, NodeJS, or Hosted Target
services.
● ServiceCallout policy to invoke any external service pre- or post-invocation
of the target server in TargetEndpoint.
● Custom code added to the JavaScript policy or JavaCallout policy to connect
to backend services.

Persistent Connections

HTTP persistent connection, also called HTTP keep-alive or HTTP connection
reuse, is a concept that allows a single TCP connection to send and receive
multiple HTTP requests/responses, instead of opening a new connection for every
request/response pair.

Apigee Edge uses persistent connection for communicating with backend services.
A connection stays alive for 60 seconds by default. That is, if a connection is idle in
the connection pool for more than 60 seconds, then the connection closes.

The keep alive timeout period is configurable through a property named
keepalive.timeout.millis, specified in the TargetEndpoint configuration of
an API proxy. For example, the keep alive time period can be set to 30 seconds for
a specific backend service in the TargetEndpoint.

In the example below, keepalive.timeout.millis is set to 30 seconds in
the TargetEndpoint configuration:
<!-- /antipatterns/examples/disable-persistent-connections-1.xml -->
<TargetEndpoint name="default">
<HTTPTargetConnection>
<URL>http://mocktarget.apigee.net</URL>
<Properties>
<Property name="keepalive.timeout.millis">30000</Property>
</Properties>
</HTTPTargetConnection>
</TargetEndpoint>

In the example above, keepalive.timeout.millis controls the keep alive
behavior for a specific backend service in an API proxy. There is also a property
that controls keep alive behavior for all backend services in all proxies:
HTTPTransport.keepalive.timeout.millis is configurable in the Message
Processor component. This property also has a default value of 60 seconds.
Making any modifications to this property affects the keep alive connection
behavior between Apigee Edge and all the backend services in all the API proxies.
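
On Edge for Private Cloud, Message Processor properties are typically overridden in the customer properties file and applied with a restart. The following is a hedged sketch only; the token name is an assumption based on the usual conf_<file>_<property> convention and should be verified against the property reference for your Edge version:

```
# Append to /opt/apigee/customer/application/message-processor.properties
# (assumed token name -- verify for your Edge version)
conf_http_HTTPTransport.keepalive.timeout.millis=60000
```

Then restart the Message Processor for the change to take effect:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart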

Antipattern
Disabling persistent (keep alive) connections by setting the property
keepalive.timeout.millis to 0 in the TargetEndpoint configuration of a
specific API Proxy or setting the HTTPTransport.keepalive.timeout.millis
to 0 on Message Processors is not recommended as it will impact performance.

In the example below, the TargetEndpoint configuration disables persistent (keep
alive) connections for a specific backend service by setting
keepalive.timeout.millis to 0:

<!-- /antipatterns/examples/disable-persistent-connections-2.xml -->
<TargetEndpoint name="default">
<HTTPTargetConnection>
<URL>http://mocktarget.apigee.net</URL>
<Properties>
<Property name="keepalive.timeout.millis">0</Property>
</Properties>
</HTTPTargetConnection>
</TargetEndpoint>

If keep alive connections are disabled for one or more backend services, then Edge
must open a new connection for each new request to the target backend service(s).
If the backend is HTTPS, Edge will also perform an SSL handshake for each new
request, adding to the overall latency of API requests.
Impact
● Increases the overall response time of API requests as Apigee Edge must
open a new connection and perform SSL handshake for every new request.
● Connections may be exhausted under high traffic conditions, as it takes
some time to release connections back to the system.

Best Practice
● Backend services should honor and handle HTTP persistent connection in
accordance with HTTP 1.1 standards.
● Backend services should respond with a Connection:keep-alive header
if they are able to handle persistent (keep alive) connections.
● Backend services should respond with a Connection:close header if they
are unable to handle persistent connections.

Implementing this pattern will ensure that Apigee Edge can automatically handle
persistent or non-persistent connection with backend services, without requiring
changes to the API proxy.

Further reading
● HTTP persistent connections
● Keep-Alive
● Endpoint Properties Reference

Antipattern: Add custom information to


Apigee-owned schema in Postgres
database
Note: These instructions are applicable for Edge for Private Cloud only

Edge API Analytics is a very powerful built-in feature provided by Apigee Edge. It
collects and analyzes a broad spectrum of data that flows across APIs. The
captured analytics data can provide very useful insights. For instance: How is the
API traffic volume trending over a period of time? Which API is the most used?
Which APIs have high error rates?

Regular analysis of this data can inform appropriate actions, such as capacity
planning of APIs based on current usage, business and future investment
decisions, and more.

Analytics Data and its storage

API Analytics captures many different types of data such as:


● Information about an API - Request URI, Client IP address, Response Status
Codes, and so on
● API Proxy Performance - Success/Failure rate, Request and Response
Processing Time, and so on
● Target Server Performance - Success/Failure rate, Processing Time
● Error Information - Number of Errors, Fault Code, Failing Policy, Number of
Apigee and Target Server caused errors.
● Other Information - Number of requests made by Developers, Developer
Apps, and so on

All these data are stored in an analytics schema created and managed within a
Postgres database by Apigee Edge.

Typically, in a vanilla Edge installation, Postgres will have the following schemata:

The schema named analytics is used by Edge for storing all the analytics data
for each organization and environment. If monetization is installed there will be a
rkms schema. Other schemata are meant for Postgres internals.

The analytics schema will keep changing as Apigee Edge will dynamically add
new fact tables to it at runtime. The Postgres server component will aggregate the
fact data into aggregate tables which get loaded and displayed on the Edge UI.

Antipattern
Adding custom columns, tables, and/or views to any of the Apigee owned schemata
in Postgres Database on Private Cloud environments directly using SQL queries is
not advisable, as it can have adverse implications.

Let's take an example to explain this in detail.

Consider a custom table named account that has been created under the
analytics schema.
After a while, let's say there's a need to upgrade Apigee Edge from a lower version
to a higher version. Upgrading Private Cloud Apigee Edge involves upgrading
Postgres amongst many other components. If there are any custom columns,
tables, or views added to the Postgres Database, then Postgres upgrade fails with
errors referencing the custom objects as they are not created by Apigee Edge.
Thus, the Apigee Edge upgrade also fails and cannot be completed.

Similar errors can occur during Apigee Edge maintenance activities in which
backup and restore of Edge components, including the Postgres database, are
performed.

Impact
● The Apigee Edge upgrade cannot be completed because the Postgres
component upgrade fails with errors referencing custom objects not
created by Apigee Edge.
● Inconsistencies (and failures) while performing Apigee Analytics service
maintenance (backup/restore).

Best Practice
● Don't add any custom information in the form of columns, tables, views,
functions, or procedures directly to any of the Apigee-owned schemata,
such as analytics.
● If there's a need to capture custom information, it can be added as columns
(fields) to the analytics schema using the Statistics Collector policy.
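
For example, a custom value can be captured into analytics with the Statistics Collector policy instead of modifying the Postgres schema directly. The policy name, statistic name, referenced flow variable, and default value below are illustrative:

```xml
<StatisticsCollector name="Collect-Product-Stats">
  <Statistics>
    <!-- Records the value of the flow variable product.id under the
         statistic name productID; "unknown" is used when the variable
         is unresolved -->
    <Statistic name="productID" ref="product.id" type="string">unknown</Statistic>
  </Statistics>
</StatisticsCollector>
```

The collected statistic then becomes available as a dimension/metric for custom reports, without touching the Postgres schema.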
Further reading
● API Analytics overview
● Stats API

Introduction to playbooks
The act of troubleshooting is both an art and a science. The constant effort of
Apigee technical support teams has been to demystify the art and expose the
science behind problem identification and resolution.

What are playbooks?


Developed in collaboration with Apigee Technical Support teams, Apigee
troubleshooting playbooks are designed to provide quick and effective solutions to
errors or other issues that you may encounter when working with Apigee products.

To find troubleshooting playbooks, you can try searching for specific error
messages using the Search box at the top of this page, or use the TOC on the left to
navigate the playbook library.

Audience
Troubleshooting playbooks are intended for readers with a high-level
understanding of Apigee Edge and its architecture, as well as some understanding
of basic Edge concepts such as policies, analytics, and monetization.

Some problems can be diagnosed and solved only by Apigee Private Cloud users
and may require knowledge of internal Edge components such as Cassandra
and Postgres datastores, ZooKeeper, and the Edge Router.

If you are on Apigee Public Cloud, then we clearly specify when you can perform
the indicated troubleshooting steps and when you need to call Apigee Edge Support
for assistance.

Problem categories
Use this page to navigate to playbooks for specific areas where you may encounter
problems in Apigee Edge.
Problem origin and description:

● Runtime: Information and guidance on troubleshooting runtime problems.
(Videos available.)
● Monetization: Troubleshooting commonly observed monetization problems.
● Deployment: Procedures that can be followed for troubleshooting
deployment errors.
● Developer Portal: Troubleshooting common issues encountered when using
the developer portal.
● Analytics: Procedures for troubleshooting commonly observed analytics
problems.
● Edge Router: Troubleshooting problems caused by bad config files.
● OpenLDAP: Information and guidance on troubleshooting OpenLDAP
problems.
● Trace Tool: Trace is a tool for troubleshooting and monitoring API proxies
running on Apigee Edge.
● Zookeeper: Information and guidance on troubleshooting common
ZooKeeper problems.
● Edge Antipatterns: Avoid common pitfalls, maximize the power of your APIs.
● Diagnostic Tools: Information on useful diagnostic tools for troubleshooting.

Diagnostics collection introduction


The diagnostic collection guides provide the following information:

● Troubleshooting playbooks for common errors observed in Apigee Edge:
Provides error codes and error messages that can be observed in Apigee
Edge, and lists the corresponding playbooks which contain detailed steps to
troubleshoot and resolve the errors.
● Diagnostic information: Provides detailed steps to gather the diagnostic logs
and the information required for troubleshooting the most common
problems observed with Apigee Edge. Provides instructions on how to
bundle the logs and information together to share with Google Cloud Apigee
Edge Support in a Support case.

Note: Refer to the Diagnostic information section if the problem persists, even after following
the instructions provided in the playbook, or if you couldn't find the playbook for a specific
problem or error.

Prerequisites

On the machines where the Apigee Edge components are installed, ensure that you
have the following:

● Enough disk space in the /tmp directory to capture and store the diagnostic
data
● Required permissions to log in and gather the diagnostic data

Guides

The following table describes the current Private Cloud diagnostic guides:

Private Cloud diagnostic guides


● Analytics problems: Provides details about the diagnostic logs and
information required for Analytics problems such as data not showing up in
the Analytics dashboard, custom reports, Postgres disk running out of
space, and so on.
● Cassandra problems: Provides details about the diagnostic logs and
information required for Cassandra problems.
● Deployment errors: Provides details about the diagnostic logs and
information required for Deployment errors.
● Runtime issues: Provides details about the diagnostic logs and information
required for errors, latency issues, or unexpected results observed during
the execution of your API requests.
● Unable to start Edge components: Provides details about the diagnostic
logs and information required for issues encountered while starting Apigee
Edge components such as Router, Message Processor, Management Server,
and ZooKeeper.
● ZooKeeper problems: Provides details about the diagnostic logs and
information required for ZooKeeper problems.

Analytics problems
Note: This topic applies to Edge Private Cloud only.

These topics explain how to troubleshoot issues with Analytics such as data not
showing up in the Analytics dashboard, issues with custom reports, Postgres disk
running out of space, and so on.

Playbooks

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving the common Analytics problems.

● Problem: Analytics Reports timing out
Error message or description: "The report timed out: Try again with a
smaller date range or a larger aggregation interval." or "Report timed out"
Playbook: Analytics reports timing out
● Problem: Data not showing up in Analytics dashboards
Error message or description: "No traffic in the selected date range"
Playbook: Data not showing up on analytics dashboards
● Problem: Custom variable not visible in Custom Reports, or Custom
Dimensions not appearing when multiple axgroups have been configured
Error message or description: None
Playbook: Custom variable not visible to analytics custom reports
Diagnostic information

If you need assistance from Apigee Support on the Analytics problems, then gather
the following diagnostic information and share it in the support case:

● Diagnostic information: Analytics Group and Status Output
Where to gather: Management Server
How to gather:
curl -u sysadminEmail:password http://MGMT_SERVER_HOST:8080/v1/analytics/groups/ax
curl -u sysadminEmail:sysadminPwd http://MGMT_SERVER_HOST:8080/v1/organizations/ORGNAME/environments/ENVNAME/provisioning/axstatus
● Diagnostic information: Qpidd queue stats
Where to gather: Qpid Server
How to gather:
qpid-stat -q 2>&1 | tee /tmp/qpid_stat_q_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
● Diagnostic information: Qpidd and Qpid Server logs
Where to gather: Qpid Server
How to gather:
tar cvzf /tmp/qpid_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/apigee-qpidd/apigee-qpidd* /opt/apigee/var/log/edge-qpid-server/logs/system.log
● Diagnostic information: Data in Postgres DB
Where to gather: Postgres Server
How to gather: Log in to Postgres SQL, then get the latest timestamp and size of data:
psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
SELECT min(client_received_start_timestamp), max(client_received_start_timestamp) FROM analytics."ORGNAME.ENVNAME.fact";
SELECT relname as "Table", pg_size_pretty(pg_total_relation_size(relid)) As "Size", pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) as "External Size" FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC;
SELECT column_name, data_type, character_maximum_length FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name = 'ORGNAME.ENVNAME.fact';
Note: Replace ORGNAME and ENVNAME with the actual org and env names before
running the above SQL queries.
● Diagnostic information: Postgresql and Postgres Server logs
Where to gather: Postgres Server
How to gather:
tar cvzf /tmp/postgres_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/apigee-postgresql/apigee-postgresql* /opt/apigee/var/log/edge-postgres-server/logs/system*
● Diagnostic information: ZooKeeper Tree Output
Where to gather: ZooKeeper
How to gather:
sudo /opt/apigee/apigee-zookeeper/contrib/zk-tree.sh > zktree-output.txt
Note: Attach all of the above files and command outputs to the support case.

Postgres disk space issue

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving Postgres running out of disk
space.

Playbooks

● Problem: Postgres server running out of disk space
Error message or description: None
Playbook: Postgres server running out of disk space

Diagnostic information

If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:

● Diagnostic information: Disk Space
Where to gather: Postgres server
How to gather:
df -h 2>&1 | tee /tmp/postgres_df_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
free -h 2>&1 | tee /tmp/postgres_mem_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
● Diagnostic information: Data in Postgres DB
Where to gather: Postgres server
How to gather: Log in to Postgres SQL, then get the latest timestamp and size of data:
psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
SELECT min(client_received_start_timestamp), max(client_received_start_timestamp) FROM analytics."ORGNAME.ENVNAME.fact";
SELECT relname as "Table", pg_size_pretty(pg_total_relation_size(relid)) As "Size", pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) as "External Size" FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC;

Note: Attach all of the above files and command outputs to the support case.
Cassandra problems
Note: This topic applies to Edge Private Cloud only.

Diagnostic information

This section explains the diagnostic information that needs to be gathered for
Cassandra problems on the Private Cloud setup.

● Diagnostic information: Cassandra configuration files
Where to gather: Cassandra
How to gather:
cp /opt/apigee/apigee-cassandra/conf/cassandra-topology.properties /tmp/cassandra_topology_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).properties
cp /opt/apigee/apigee-cassandra/conf/cassandra-env.sh /tmp/cassandra_envsh_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).sh
cp /opt/apigee/apigee-cassandra/conf/cassandra.yaml /tmp/cassandra_conf_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).yaml
● Diagnostic information: Nodetool commands output
Where to gather: Cassandra
How to gather:
alias nodetool=/opt/apigee/apigee-cassandra/bin/nodetool
Get the current Cassandra version:
nodetool version 2>&1 | tee /tmp/nodetool_version_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
Get the current Cassandra datacenter status:
nodetool status 2>&1 | tee /tmp/nodetool_status_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
Get the current ring/token status across the datacenter:
nodetool ring 2>&1 | tee /tmp/nodetool_ring_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
Get more information about heap usage, uptime, exceptions, and so on:
nodetool info 2>&1 | tee /tmp/nodetool_info_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
Get gossip info:
nodetool gossipinfo 2>&1 | tee /tmp/nodetool_gossipinfo_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
Get status of the current Cassandra operations:
nodetool compactionstats -H 2>&1 | tee /tmp/nodetool_compactionstats_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
nodetool tpstats 2>&1 | tee /tmp/nodetool_tpstats_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
nodetool netstats -H 2>&1 | tee /tmp/nodetool_netstats_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
nodetool cfstats -H 2>&1 | tee /tmp/nodetool_cfstats_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
Get proxy histograms (shows the full request latency recorded by the coordinator):
nodetool proxyhistograms 2>&1 | tee /tmp/nodetool_proxyhistograms_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
● Diagnostic information: Cassandra logs
Where to gather: Cassandra
How to gather:
tar cvzf /tmp/cassandra_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/apigee-cassandra/system* /opt/apigee/var/log/apigee-cassandra/config*
tail -2000 /opt/apigee/var/log/apigee-cassandra/apigee-cassandra.log > /tmp/cassandra_apigee-cassandra_log_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).log
● Diagnostic information: Version and status of components
Where to gather: Cassandra
How to gather:
apigee-all version 2>&1 | tee /tmp/apigee_all_version_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
apigee-all status 2>&1 | tee /tmp/apigee_all_status_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
● Diagnostic information: Compress all the diagnostic data
Where to gather: Cassandra
How to gather:
tar -cvzf /tmp/data_CASE#_$(hostname).tar.gz /tmp/nodetool* /tmp/cassandra_* /tmp/apigee_all*
Note: Replace CASE# with the specific case number for which you are providing
these documents.

Deployment errors
Note: This topic applies to Edge Private Cloud only.

Any error that occurs during deployment of an API Proxy is called a Deployment
error. Deployment of API proxies might fail due to various reasons such as network
connectivity issues between Edge servers, issues with the Cassandra datastore,
ZooKeeper exceptions, and errors in the API proxy bundle.

Playbooks

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving deployment errors.
Error message: Error: Call timed out; either server is down or server is not reachable
Playbook: Timeout Error

Error message: Unexpected error Error while fetching children for path
Playbook: Error Fetching Children for Path

Error message: Error while accessing datastore;Please retry later
Playbook: Error Accessing Datastore

Error message: Configuration failed, associated contexts = []
Playbook: Configuration Failed

Error message: Unexpected error occurred while processing the updates,associated contexts = []
Playbook: Error Processing Updates

Diagnostic information

If you need assistance from Apigee Edge Support on the deployment error, then
gather the following diagnostic information and share it in the support case:

Deployments API output (gather on the Management Server):

curl -s 0:8080/v1/organizations/ORGNAME/environments/ENVNAME/apis/APINAME/deployments > /tmp/ms_deployments_output_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).json

Note: Gather this information on the Management Server after replacing ORGNAME, ENVNAME and APINAME with the actual values.

Management Server logs (gather on the Management Server):

tar cvzf /tmp/ms_systemlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-management-server/logs/system*
tar cvzf /tmp/ms_transactionlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-management-server/logs/transactions*

Bundle all the data on the Management Server with the following command:

tar -cvzf /tmp/ms_data_CASE#_$(hostname).tar.gz /tmp/ms_*

Note: Replace CASE# with the specific case number for which you are providing these documents. Attach this file also to the case.

Classification tree output (gather on the Message Processor):

curl -s 0:8082/v1/classification/tree > /tmp/rmp_classification_tree_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).json

Note: Gather this information on the Message Processor.

Message Processor logs (gather on the Message Processor):

tar cvzf /tmp/rmp_systemlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/system*
tar cvzf /tmp/rmp_transactionlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/transactions*
tar cvzf /tmp/rmp_system_monitor_config_mp_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/edge-message-processor* /opt/apigee/var/log/edge-message-processor/config* /opt/apigee/var/log/edge-message-processor/system-monitor*

Connectivity with Cassandra (gather on the Message Processor):

telnet CASSANDRA_IP 9042 | tee /tmp/rmp_cassandra_NODE#_connectivity_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).txt
telnet CASSANDRA_IP 9160 | tee /tmp/rmp_cassandra_NODE#_connectivity_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).txt

If you don't have telnet, then you can use the netcat command as follows:

nc -vz CASSANDRA_IP 9042 | tee /tmp/rmp_cassandra_NODE#_connectivity_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).txt
nc -vz CASSANDRA_IP 9160 | tee /tmp/rmp_cassandra_NODE#_connectivity_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).txt

Note: Replace NODE# with 1, 2 and so on. You need to repeat the above steps for each of the Cassandra nodes in each of the data centers.

Connectivity with ZooKeeper (gather on the Message Processor):

telnet ZOOKEEPER_IP 2181 | tee /tmp/rmp_zookeeper_NODE#_connectivity_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).txt

If you don't have telnet, then you can use the netcat command as follows:

nc -vz ZOOKEEPER_IP 2181 | tee /tmp/rmp_zookeeper_NODE#_connectivity_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).txt

Note: Replace NODE# with 1, 2 and so on. You need to repeat the above steps for each of the ZooKeeper nodes in each of the data centers.

Compress all the diagnostic data (gather on the Message Processor):

tar -cvzf /tmp/rmp_data_CASE#_$(hostname).tar.gz /tmp/rmp_*

Note: Replace CASE# with the specific case number for which you are providing these documents.

Cassandra logs (gather on Cassandra):

tar cvzf /tmp/cassandra_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/apigee-cassandra/system* /opt/apigee/var/log/apigee-cassandra/config*
tail -2000 /opt/apigee/var/log/apigee-cassandra/apigee-cassandra.log > /tmp/cassandra_apigee-cassandra_log_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).log

Note: Attach these files to the support case.

ZooKeeper logs and associated files (gather on ZooKeeper):

tar cvzf /tmp/zookeeper_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/apigee-zookeeper/*.log /opt/apigee/apigee-zookeeper/conf/zoo.cfg /opt/apigee/data/apigee-zookeeper/data/myid

Note: Attach this file to the support case.
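If neither telnet nor netcat is installed on the Message Processor, bash's built-in /dev/tcp device can serve as a last-resort connectivity probe for the same ports. This is an illustrative sketch, not Apigee tooling; the function names are ours, and you would substitute your own node IPs:

```shell
# Illustrative sketch: probe TCP reachability with bash's /dev/tcp, useful
# when neither telnet nor nc is available on the host.
check_port() {
  # open (and immediately close) a TCP connection; give up after 3 seconds
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

report_port() {
  if check_port "$1" "$2"; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 unreachable"
  fi
}

# Example: repeat for each Cassandra node and port in each data center
# report_port CASSANDRA_IP 9042
# report_port CASSANDRA_IP 9160
# report_port ZOOKEEPER_IP 2181
```

The output can be teed into the same /tmp/rmp_*_connectivity_* files as the telnet/nc variants above.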

Runtime issues
Note: This topic applies to Edge Private Cloud only.

Any errors, latency issues, or unexpected results observed during the execution of
your API requests are referred to as runtime issues.

4XX/5XX errors
Playbook

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving runtime 4XX and 5XX errors.

Error response/message: HTTP/1.1 500 Internal Server Error
Error code: Varies by the actual error
Playbook: 500 Internal Server Error and 500 Internal Server Error - Streaming Enabled

Error response/message: HTTP/1.1 503 Service Unavailable
Error code: messaging.adaptors.http.flow.ServiceUnavailable
Playbook: 503 Service Unavailable

Error response/message: HTTP/1.1 503 Service Unavailable
Error code: messaging.adaptors.http.flow.NoActiveTargets
Playbook: 503 Service Unavailable - NoActiveTargets

Error response/message: HTTP/1.1 503 Service Unavailable (cause is due to health check failures)
Error code: messaging.adaptors.http.flow.NoActiveTargets
Playbook: 503 Service Unavailable - NoActiveTargets Health Check Failures

Error response/message: HTTP/1.1 503 Service Unavailable (from backend server)
Error code: messaging.adaptors.http.flow.ErrorResponseCode (Note: This error code is visible in the NGINX access logs.)
Playbook: 503 Service Unavailable - Backend Server

Error response/message: HTTP/1.1 504 Gateway Timeout
Error code: messaging.adaptors.http.flow.GatewayTimeout
Playbook: 504 Gateway Timeout

Error response/message: HTTP/1.1 504 Gateway Timeout (from backend server)
Error code: messaging.adaptors.http.flow.ErrorResponseCode (Note: This error code is visible in the NGINX access logs.)
Playbook: 504 Gateway Timeout - Backend Server

Diagnostic information

If you need any assistance from Apigee Edge Support on 4XX runtime errors (such as 400, 401, 404, and 499) or 5XX errors (such as 500, 503, and 504), then gather and share the following diagnostic logs and information in the support case:

Trace tool output capturing failed API requests (gather in the Edge UI):

How to use Trace Tool

Router logs (gather on the Router):

tar cvzf /tmp/router_logs_ORGNAME_ENVNAME_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-router/nginx/ORGNAME~ENVNAME.*

Message Processor logs (gather on the Message Processor):

tar cvzf /tmp/rmp_systemlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/system*

Compress all the diagnostic data:

tar -cvzf /tmp/data_CASE#_$(hostname).tar.gz /tmp/router* /tmp/rmp_*

Note:
1. Replace CASE# with the specific case number for which you are providing these documents.
2. In addition, attach the trace files captured to the case.
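Before opening a case, it often helps to quantify how frequent each failing status code is. Assuming the Router's NGINX access logs use the standard layout in which the status code follows the quoted request line (an assumption about your log format, not something Apigee mandates), a rough frequency count can be produced like this illustrative sketch:

```shell
# Illustrative sketch: count 4XX/5XX response codes in an NGINX-style
# access log, assuming the status code follows the quoted request line,
# e.g.:  ... "GET /v1/foo HTTP/1.1" 503 154 ...
count_error_codes() {
  grep -oE '" [45][0-9][0-9] ' "$1" \
    | awk '{print $2}' \
    | sort | uniq -c | sort -rn   # most frequent code first
}

# Example:
# count_error_codes /opt/apigee/var/log/edge-router/nginx/ORGNAME~ENVNAME.access.log
```

The histogram tells you which playbook from the table above to start with.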

400 Bad Request Error - SSL Certificate Error

Playbook

This section provides information and guidance on some specific procedures that can be followed for troubleshooting and resolving the 400 Bad Request - SSL Certificate Error.

Error message:

<html>
<head>
<title>400 The SSL certificate error</title>
</head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>The SSL certificate error</center>
<hr>
<center>nginx</center>
</body>
</html>

Playbook: 400 Bad Request Error - SSL Certificate Error

Note: In this case, the error code (fault_code) is not set.

Diagnostic information

If you need any assistance from Apigee Edge Support on the 400 Bad Request -
SSL Certificate Error, then gather the following diagnostic information and
share it in the support case:

Router logs (gather on the Router):

tar cvzf /tmp/router_logs_ORGNAME_ENVNAME_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-router/nginx/ORGNAME~ENVNAME.*

Note: Gather this information on the Router after replacing ORGNAME, ENVNAME with the actual values.

Tcpdumps (gather on the Router):

Capture network packets using the tcpdump command on the Router machine:

sudo tcpdump -s 0 -i any host CLIENT_HOST_IP_ADDRESS -w /tmp/router_tcpdump_$(hostname).pcap

Compress the tcpdump:

tar cvzf /tmp/router_tcpdumps_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /tmp/router_tcpdump_$(hostname).pcap

Note: If the issue is intermittent and you have more than one Router instance, then it is better to initiate the tcpdump on each instance simultaneously and reproduce the issue.

Tcpdumps (gather on the client machine):

Capture network packets using the tcpdump command on the client machine:

sudo tcpdump -s 0 -i any host VIRTUAL_HOST_ALIAS -w /tmp/client_tcpdump_$(hostname).pcap

Compress the tcpdump:

tar cvzf /tmp/client_tcpdumps_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /tmp/client_tcpdump_$(hostname).pcap

Compress all the diagnostic data (gather on the Router):

tar -cvzf /tmp/data_CASE#_$(hostname).tar.gz /tmp/router*

Note:
1. Replace CASE# with the specific case number for which you are providing these documents.
2. In addition, attach the tcpdumps from the client machine to the case.
404 Unable to identify proxy for host error
Playbook

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving 404 Unable to identify
proxy for host error.

Error message or description:

HTTP/1.1 404 Not Found

{
  "fault":{
    "faultstring":"Unable to identify proxy for host: VIRTUAL_HOST_NAME and url: PATH",
    "detail":{
      "errorcode":"messaging.adaptors.http.flow.ApplicationNotFound"
    }
  }
}

Error code: messaging.adaptors.http.flow.ApplicationNotFound
Playbook: 404 Unable to identify proxy for host

Diagnostic information

If you need assistance from Apigee Edge Support on the 404 Unable to
identify proxy for host error, then gather the following diagnostic
information and share it in the support case:

Deployments API output (gather on the Management Server):

curl -s http://MANAGEMENT_SERVER_HOST:8080/v1/organizations/ORGNAME/environments/ENVNAME/apis/APINAME/deployments > /tmp/deployments_output_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).json

Note: Gather this information on the Management Server after replacing ORGNAME, ENVNAME and APINAME with the actual values.

API and classification tree output (gather on the Message Processor):

Get the environments loaded for a specific organization:

curl -s 0:8082/v1/runtime/organizations/ORGNAME/environments > /tmp/rmp_environments_list_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).txt

Get the revisions deployed for a specific API proxy:

curl -s 0:8082/v1/runtime/organizations/ORGNAME/environments/ENVNAME/apis/APINAME/revisions > /tmp/rmp_api_APINAME_revisions_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).txt

Get the classification tree:

curl -s 0:8082/v1/classification/tree > /tmp/rmp_classification_tree_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).json

Note: Run the above commands and gather the information on each of the Message Processor nodes after replacing ORGNAME, ENVNAME and APINAME with the actual values.

Message Processor logs (gather on the Message Processor):

tar cvzf /tmp/rmp_systemlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/system*
tar cvzf /tmp/rmp_transactionlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/transactions*
tar cvzf /tmp/rmp_configurationlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/configurations*
tar cvzf /tmp/rmp_system_monitor_config_mp_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/edge-message-processor* /opt/apigee/var/log/edge-message-processor/config* /opt/apigee/var/log/edge-message-processor/system-monitor*

Heap dumps on Message Processors (gather on the Message Processor):

Get the live heap dump:

sudo -u apigee jmap -dump:live,format=b,file=/opt/apigee/var/snapshot_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).hprof $(cat /opt/apigee/var/run/edge-message-processor/edge-message-processor.pid)

Compress the heap dump:

tar cvzf /tmp/rmp_heapdumps_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/snapshot_$(hostname)-*.hprof

Compress all the diagnostic data (gather on the Message Processor):

tar -cvzf /tmp/data_CASE#_$(hostname).tar.gz /tmp/rmp_*

Note:
1. Replace CASE# with the specific case number for which you are providing these documents.
2. In addition, attach the Deployments API output.
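A common use of the data collected above is to compare the revision the Management Server believes is deployed with the revisions each Message Processor has actually loaded. As an illustrative sketch (assuming the JSON files contain revision entries of the form "name": "<number>", which is how the deployments API typically labels revisions; the function name is ours), the numbers can be pulled out with standard tools even when jq is not installed:

```shell
# Illustrative sketch: extract revision numbers from the deployments or
# revisions JSON gathered above, without requiring jq. Assumes revisions
# appear as "name": "<number>" entries in the JSON (an assumption about
# the response shape).
list_revisions() {
  grep -oE '"name"[[:space:]]*:[[:space:]]*"[0-9]+"' "$1" \
    | grep -oE '[0-9]+' | sort -un
}

# Example: diff the Management Server view against one Message Processor
# diff <(list_revisions /tmp/deployments_output_*.json) \
#      <(list_revisions /tmp/rmp_api_APINAME_revisions_*.txt)
```

An empty diff suggests the 404 is a virtual host or basepath mismatch rather than a missing deployment.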

502 Bad Gateway - no live upstreams while connecting to upstream

Playbook

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving 502 Bad Gateway - no live
upstreams while connecting to upstream.
Problem:

HTTP/1.1 502 Bad Gateway

<html>
<head>
<title>Error</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>An error occurred.</h1>
<p>Sorry, the page you are looking for is currently unavailable.<br/>
Please try again later.</p>
</body>
</html>

Error message in logs: You will see the following error in the NGINX error logs (/opt/apigee/var/log/edge-router/nginx/ORGNAME~ENVNAME._error_log):

[error] 4796#4796: *56357443 no live upstreams while connecting to upstream, client: ROUTER_IP_ADDRESS, server: HOST_ALIAS, request: "PUT BASE_PATH HTTP/1.1", upstream: "http://LISTOFMP_IP_R_MP_PORT/BASE_PATH", host: "HOST_ALIAS"

Playbook: 502 Bad Gateway

Note: In this case the error code (fault_code) is not set.

Diagnostic information

If you need assistance from Apigee Edge Support on the 502 Bad Gateway - no
live upstreams while connecting to upstream error, then gather the following
diagnostic information and share it in the support case:

Router logs (gather on the Router):

tar cvzf /tmp/router_logs_ORGNAME_ENVNAME_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-router/nginx/ORGNAME~ENVNAME.*

Note: Gather this information on the Router after replacing ORGNAME, ENVNAME with the actual values.

Message Processor logs (gather on the Message Processor):

tar cvzf /tmp/rmp_systemlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/system*

Top output, heap dump and thread dumps (gather on the Message Processor):

Get the top command output:

top -H -bn5 > /tmp/rmp_top_output_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt

Get the heap dump:

sudo -u apigee jcmd $(cat /opt/apigee/var/run/edge-message-processor/edge-message-processor.pid) GC.heap_dump /opt/apigee/var/rmp_heapdump_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).hprof

Get the thread dump:

sudo -u apigee jcmd $(cat /opt/apigee/var/run/edge-message-processor/edge-message-processor.pid) Thread.print > /tmp/rmp_thread_print_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).tdump

Note: Gather this information on the Message Processors.

Compress all the diagnostic data:

tar -cvzf /tmp/data_CASE#_$(hostname).tar.gz /tmp/router* /tmp/rmp_* /opt/apigee/var/rmp_heapdump_*

Note: Replace CASE# with the specific case number for which you are providing these documents.
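To see whether the "no live upstreams" errors cluster around a particular time (for example a Message Processor restart), the Router's error log can be bucketed by hour. An illustrative sketch, assuming the standard NGINX error-log timestamp format (YYYY/MM/DD HH:MM:SS) at the start of each line; the function name is ours:

```shell
# Illustrative sketch: count "no live upstreams" errors per hour in an
# NGINX error log. Cutting at the first ':' leaves the YYYY/MM/DD HH
# prefix, which acts as an hourly bucket.
errors_per_hour() {
  grep 'no live upstreams' "$1" | cut -d: -f1 | sort | uniq -c
}

# Example:
# errors_per_hour /opt/apigee/var/log/edge-router/nginx/ORGNAME~ENVNAME._error_log
```

Correlating the spike hours with the Message Processor system logs collected above usually narrows the investigation quickly.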

502 Bad Gateway - Unexpected EOF At Target


Playbook

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving 502 Bad Gateway -
Unexpected EOF At Target:

Error response/message:

HTTP/1.1 502 Bad Gateway

{
  "fault": {
    "faultstring": "Unexpected EOF at target",
    "detail": {
      "errorcode": "messaging.adaptors.http.flow.UnexpectedEOFAtTarget"
    }
  }
}

Error code: messaging.adaptors.http.flow.UnexpectedEOFAtTarget
Playbook: 502 Bad Gateway Unexpected EOF

Diagnostic information

If you need assistance from Apigee Edge Support on the 502 Bad Gateway -
Unexpected EOF At Target, then gather the following diagnostic information
and share it in the support case:

Trace tool output capturing failed API requests (gather in the Edge UI):

How to use Trace Tool

Router logs (gather on the Router):

tar cvzf /tmp/router_logs_ORGNAME_ENVNAME_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-router/nginx/ORGNAME~ENVNAME.*

Message Processor logs (gather on the Message Processor):

tar cvzf /tmp/rmp_systemlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/system*

Tcpdumps (gather on the Message Processor):

sudo tcpdump -s 0 -i any host BACKENDSERVER_HOSTNAME -w /tmp/rmp_tcpdump_$(hostname).pcap
tar cvzf /tmp/rmp_tcpdumps_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /tmp/rmp_tcpdump_$(hostname).pcap

Note: Gather this information on the Message Processors.

Compress all the diagnostic data (gather on the Router/Message Processor):

tar -cvzf /tmp/data_CASE#_$(hostname).tar.gz /tmp/router* /tmp/rmp_*

Note:
1. Replace CASE# with the specific case number for which you are providing these documents.
2. In addition, attach the trace files to the case.

TLS handshake failures


Playbook

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving TLS/SSL handshake failures:

Error message: Received fatal alert: handshake_failure
Playbook: TLS/SSL Handshake Failures

Error message: Received fatal alert: bad_certificate
Playbook: SSL Handshake Failures - Bad Client Certificate

Diagnostic information

If you need assistance from Apigee Edge Support on the TLS/SSL handshake
failures, then gather the following diagnostic information and share it in the support
case:

Trace tool output capturing failed API requests (gather in the Edge UI):

How to use Trace Tool

Router logs (gather on the Router):

tar cvzf /tmp/router_logs_ORGNAME_ENVNAME_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-router/nginx/ORGNAME~ENVNAME.*

Note: Gather this information on the Router after replacing ORGNAME, ENVNAME with the actual values.

Message Processor logs (gather on the Message Processor):

tar cvzf /tmp/rmp_systemlogs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/system*

OpenSSL command output (gather on the Message Processor):

Non-SNI enabled backend server:

openssl s_client -connect BACKEND_SERVER_HOSTNAME:PORT -showcerts | tee /tmp/rmp_openssl_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt

SNI enabled backend server:

openssl s_client -connect BACKEND_SERVER_HOSTNAME:PORT -servername BACKEND_SERVER_HOSTNAME -showcerts | tee /tmp/rmp_openssl_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt

Tcpdumps (gather on the Message Processor):

sudo tcpdump -s 0 -i any host BACKEND_SERVER_HOSTNAME -w /tmp/$(hostname).pcap
tar cvzf /tmp/rmp_tcpdumps_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /tmp/$(hostname).pcap

Note: Gather this information on the Message Processors.

Certificates from the keystore and truststore of the Message Processor (gather on the Management Server):

Get the certificate details from the keystore:

curl -v http://MANAGEMENT_SERVER_HOST:PORT/v1/organizations/ORGNAME/environments/ENVNAME/keystores/KEYSTORENAME/certs/CERTNAME -u USERNAME

Get the certificate details from the truststore:

curl -v http://MANAGEMENT_SERVER_HOST:PORT/v1/organizations/ORGNAME/environments/ENVNAME/keystores/TRUSTSTORENAME/certs/CERTNAME -u USERNAME

Compress all the diagnostic data (gather on the Message Processor):

tar -cvzf /tmp/data_CASE#_$(hostname).tar.gz /tmp/router* /tmp/rmp_*

Note:
1. Replace CASE# with the specific case number for which you are providing these documents.
2. In addition, attach the trace files and the certificates from the keystore and truststore to the case.
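A frequent root cause of handshake_failure and bad_certificate is an expired or not-yet-valid certificate. Once you have a certificate in PEM form (for example from the -showcerts output above, or exported from the keystore), openssl can print its validity window; this small wrapper is an illustrative sketch (the function name is ours, and the openssl CLI is assumed to be installed):

```shell
# Illustrative sketch: print the validity window of a PEM certificate and
# exit nonzero if it has already expired. Requires the openssl CLI.
cert_validity() {
  openssl x509 -in "$1" -noout -dates
  # -checkend 0 returns nonzero if the certificate expires within
  # 0 seconds, i.e. has already expired
  openssl x509 -in "$1" -noout -checkend 0
}

# Example: cert_validity /tmp/backend_cert.pem
```

If the validity window looks fine, move on to comparing the served chain against the truststore contents gathered above.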

Unable to start Edge components


Note: This topic applies to Edge Private Cloud only.

These topics explain how to troubleshoot issues encountered while starting Apigee
Edge components such as Router, Message Processor, Management Server, and
ZooKeeper.

Unable to start Router


You may observe the following error when trying to start the Router:

Error message
apigee-service: edge-router: Not running (DEAD)
apigee-all: Error: status failed on [edge-router]

Diagnostic information

If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:

Router logs (gather on the Router):

tar cvzf /tmp/router_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-router/logs/system.log /opt/apigee/var/log/edge-router/logs/startupruntimeerror.log /opt/apigee/var/log/edge-router/logs/configurations.log

Unable to start Message Processor


You may observe the following error when trying to start the Message Processor:

Error message
apigee-service: edge-message-processor: Not running (DEAD)
apigee-all: Error: status failed on [edge-message-processor]

Diagnostic information

If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:

Message Processor logs (gather on the Message Processor):

tar cvzf /tmp/rmp_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-message-processor/logs/system* /opt/apigee/var/log/edge-message-processor/logs/startupruntimeerror.log /opt/apigee/var/log/edge-message-processor/logs/configurations.log

Unable to start Management Server
You may observe the following error when trying to start the Management Server:

Error message
apigee-service: edge-management-server: Not running (DEAD)
apigee-all: Error: status failed on [edge-management-server]

Diagnostic information

If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:

Management Server logs (gather on the Management Server):

tar cvzf /tmp/ms_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/edge-management-server/logs/system* /opt/apigee/var/log/edge-management-server/logs/startupruntimeerror.log

Unable to start ZooKeeper


Playbooks

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving issues with starting ZooKeeper
servers.

Problem: Unable to start ZooKeeper
Error message or description:

apigee-service: apigee-zookeeper: Not running (DEAD)
apigee-all: Error: status failed on [apigee-zookeeper]

Playbook: Unable to start ZooKeeper
Diagnostic information

If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:

ZooKeeper logs and associated files (gather on ZooKeeper):

tar cvzf /tmp/zookeeper_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/apigee-zookeeper/*.log /opt/apigee/apigee-zookeeper/conf/zoo.cfg /opt/apigee/data/apigee-zookeeper/data/myid

Netstat output (gather on ZooKeeper):

netstat -anlp | grep 2181 | tee /tmp/zookeeper_netstat_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt

Compress all the diagnostic data (gather on ZooKeeper):

tar -cvzf /tmp/data_CASE#_$(hostname).tar.gz /tmp/zookeeper_*

Note: Replace CASE# with the specific case number for which you are providing these documents.

ZooKeeper problems
Note: This topic applies to Edge Private Cloud only.

These topics explain how to troubleshoot issues with ZooKeeper, such as
connection loss errors.

ZooKeeper connection loss errors


Sometimes the Edge components such as Message Processors and Management
Servers may lose connectivity with ZooKeeper. This can lead to issues such as API
Proxy deployment errors, Management API failures, and so on.

Playbooks

This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving ZooKeeper connection loss
errors.

Problem: ZooKeeper connection loss errors

Error message in logs: You may see the following error in Router or Message Processor logs:

org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) ~[zookeeper-3.4.6.jar:3.4.6-1569965]

or you may see the following error while deploying an API proxy in the Edge UI:

Error Fetching Deployments Error while checking path existence for path: PATH

Playbook: ZooKeeper Connection Loss Errors

Diagnostic information

If you need assistance from Apigee Support on the ZooKeeper connection loss
errors, then gather the following diagnostic information and share it in the support
case:

ZooKeeper health check commands (gather on ZooKeeper):

echo "ruok" | nc localhost 2181 | tee /tmp/zookeeper_NODE#_ruok_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
echo srvr | nc localhost 2181 | tee /tmp/zookeeper_NODE#_srvr_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
echo mntr | nc localhost 2181 | tee /tmp/zookeeper_NODE#_mntr_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
echo stat | nc localhost 2181 | tee /tmp/zookeeper_NODE#_stat_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt
echo cons | nc localhost 2181 | tee /tmp/zookeeper_NODE#_cons_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).txt

Note: Replace NODE# with 1, 2 and so on. You need to repeat the above steps for each of the ZooKeeper nodes in the specific data center.

ZooKeeper logs and associated files (gather on ZooKeeper):

tar cvzf /tmp/zookeeper_logs_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).tar.gz /opt/apigee/var/log/apigee-zookeeper/*.log /opt/apigee/apigee-zookeeper/conf/zoo.cfg /opt/apigee/data/apigee-zookeeper/data/myid

Compress all the diagnostic data (gather on ZooKeeper):

tar -cvzf /tmp/data_CASE#_$(hostname).tar.gz /tmp/zookeeper_*

Note: Replace CASE# with the specific case number for which you are providing these documents.
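The four-letter-word checks above can also be issued without nc, using bash's /dev/tcp. This is an illustrative sketch (the zk_4lw function name is not Apigee tooling, and ZooKeeper versions that enforce a four-letter-word whitelist must have these commands enabled for data to come back):

```shell
# Illustrative sketch: send a ZooKeeper four-letter word (ruok, srvr,
# mntr, stat, cons) over a raw TCP connection using bash's /dev/tcp,
# for hosts without nc. The body runs in a subshell so that a failed
# connection only fails this call, not the surrounding shell.
zk_4lw() (
  host="$1" port="$2" word="$3"
  exec 3<>"/dev/tcp/${host}/${port}" || exit 1
  printf '%s' "$word" >&3
  cat <&3   # ZooKeeper answers and then closes the connection
) 2>/dev/null

# Example: zk_4lw localhost 2181 ruok   # a healthy node answers "imok"
```

A nonzero exit status means the node did not even accept the TCP connection, which points at the connectivity checks earlier in this topic rather than at ZooKeeper itself.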

Error message reference


This page contains links to troubleshooting playbooks for errors and other issues
that you may encounter when using Apigee Edge. Each troubleshooting playbook
explains how to diagnose and resolve each type of problem.

Note: Some types of problems can only be resolved if you are an Edge Private Cloud user.
Whenever possible, we identify which troubleshooting procedures can be performed by Private
Cloud users, Public Cloud users, or both.

Analytics problems
These topics explain how to troubleshoot cases where analytics data does not
show up in the Analytics dashboards or in custom reports.

Error message or description: The report timed out: Try again with a smaller date range or a larger aggregation interval. Or: Report timed out
Playbook: Analytics reports timing out

Error message or description: You may not observe any error message unless the disk space is completely filled on the Postgres Server.
Playbook: Postgres server running out of disk space

Error message or description: No errors are observed.
Playbook: Custom variable not visible to analytics custom reports

Error message or description: No traffic in the selected date range
Playbook: Data not showing up on analytics dashboards

Error message or description: The topic explains how to do a commonly requested task.
Playbook: Adding and deleting analytics components in analytics groups

Error message or description: Could not get data for path
Playbook: Custom Dimensions not appearing when multiple axgroups have been configured

Deployment errors
Deployment of API proxies might fail due to various reasons such as network
connectivity issues between Edge servers, issues with the Cassandra datastore,
ZooKeeper exceptions, and errors in the API proxy bundle. This section provides
information and guidance on some specific procedures that can be followed for
troubleshooting deployment errors.

Error message or description: Error: Call timed out; either server is down or server is not reachable
Playbook: Timeout Error

Error message or description: Unexpected error Error while fetching children for path
Playbook: Error Fetching Children for Path

Error message or description: Error while accessing datastore;Please retry later
Playbook: Error Accessing Datastore

Error message or description: Configuration failed, associated contexts = []
Playbook: Configuration failed

Error message or description: Unexpected error occurred while processing the updates,associated contexts = []
Playbook: Error Processing Updates

Developer Portal errors


These topics help you with issues you may encounter when using the
developer portal. Before attempting to fix developer portal problems, be sure you
have a basic understanding of how the developer portal works, as explained in
Developer Portal Troubleshooting Overview.
Error message or description: An internal error has occurred. Please retry your request.
Playbook: Developer Portal Internal Error

Error message or description: The website encountered an unexpected error. Please try again later. OR There was an error trying to create the App. Please try again later.
Playbook: Developer Portal Communication Issues

Monetization problems
The following topics will help you troubleshoot and fix common Monetization
problems.

Error message or description:

<error>
<messages>
<message>Exceeded developer limit configuration -</message>
<message>Is Developer Suspended - true</message>
</messages>
</error>

Playbook: Developer Suspended

Error message or description: You may not observe any error messages, but you will see issues as explained in the Symptom section in Monetization setup issues.
Playbook: Monetization setup issues

Edge Router problems


The Edge Router is implemented with NGINX. During the Edge upgrade process, or
when changing the configuration of the Router, you might see NGINX configuration
errors. The following topic will help you address such problems.

Error message or description: You will not see any error messages. However, you might not be able to execute your API proxies because of the bad config files.
Playbook: Bad Config Files

OpenLDAP problems
The following topics will help you troubleshoot and fix common OpenLDAP
problems.

Error message or description: Unknown username and password combination.
Playbook: SMTP is disabled and users need to reset password

Error message or description: No errors appear; the Edge UI simply does not show the list of users that should have been replicated across all OpenLDAP servers.
Playbook: LDAP is not replicating

Error message or description: SLAPD Dead But Pid File Exists
Playbook: Unable to start OpenLDAP

Error message or description: Unknown username and password combination.
Playbook: OpenLDAP data corruption

Runtime errors
The following topics will help you troubleshoot and fix common runtime problems.

Error message or description:

HTTP/1.1 500 Internal Server Error OR

{
  "fault":{
    "detail":{
      "errorcode":"steps.servicecallout.ExecutionFailed"
    },
    "faultstring":"Execution of ServiceCallout callWCSAuthServiceCallout failed. Reason: ResponseCode 400 is treated as error"
  }
}

Playbook: 500 Internal Server Error

Error message or description:

HTTP/1.1 502 Bad Gateway OR

{
  "fault": {
    "faultstring": "Unexpected EOF at target",
    "detail": {
      "errorcode": "messaging.adaptors.http.flow.UnexpectedEOFAtTarget"
    }
  }
}

Playbook: 502 Bad Gateway

Error message or description:

HTTP/1.1 503 Service Unavailable OR 503 Service Unavailable: Back-end server is at capacity OR

{
  "fault": {
    "faultstring": "The Service is temporarily unavailable",
    "detail": {
      "errorcode": "messaging.adaptors.http.flow.ServiceUnavailable"
    }
  }
}

Playbook: HTTP/1.1 503 Service Unavailable

Error message or description: HTTP/1.1 503 Service Unavailable OR Received fatal alert: handshake_failure
Playbook: SSL Handshake Failures

Error message or description:

HTTP/1.1 503 Service Unavailable OR

{
  "fault": {
    "faultstring":"The Service is temporarily unavailable",
    "detail":{
      "errorcode":"messaging.adaptors.http.flow.ServiceUnavailable"
    }
  }
}

Playbook: SSL Handshake Failures - Bad Client Certificate

Error message or description:

HTTP/1.1 504 Gateway Timeout OR

{
  "fault": {
    "faultstring": "Gateway Timeout",
    "detail": {
      "errorcode": "messaging.adaptors.http.flow.GatewayTimeout"
    }
  }
}

Playbook: 504 Gateway Timeout

Zookeeper problems
The following topics will help you troubleshoot and fix common Zookeeper
problems.

Error message or description:
org: env: main ERROR ZOOKEEPER - ZooKeeperServiceImpl.exists() : Could not
detect existence of path:
/regions/dc-1/pods/analytics/servers/abc123/reachable, reason:
KeeperErrorCode = ConnectionLoss

OR

org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss

OR

The Edge UI may display this error:
Error Fetching Deployments
Error while checking path existence for path: path

Playbook: Zookeeper Connection Loss Errors

Error message or description: Data related issues, commonly referred to as
wiring issues, can manifest as one of several issues. See Zookeeper Data
Issues for details.

Playbook: Zookeeper Data Issues

Error message or description:
+ apigee-service apigee-zookeeper status
apigee-service: apigee-zookeeper: Not running (DEAD)
apigee-all: Error: status failed on [apigee-zookeeper]

Playbook: Unable to Start Zookeeper

Diagnostic tools and logs


These topics describe tools and logs that you can use to help diagnose certain
kinds of problems that you may encounter when using Apigee Edge.

● TCP/IP packet sniffer (tcpdump) utility


The tcpdump tool is a command-line packet sniffer tool that allows you to
capture or filter TCP/IP packets that are received or transferred over a
network.
● Heap dumps
Heap dumps are a snapshot of the memory of a Java process. They contain
the information about the Java objects and classes in the heap at the
moment the heap dump is collected.
● Thread dumps
A thread dump is a snapshot of the state of all the threads of a running Java
process. The state of each thread is presented with the contents of its stack,
referred to as a stack trace.

Introduction to integrated portal playbooks
The Apigee integrated portal playbooks provide detailed troubleshooting
information for errors associated with the integrated portal.

The following playbooks are currently available:


Playbook/Problem description: Unknown Error in Try this API panel
Error message: An empty response or an Unknown Error message for API
requests in the integrated portal Try this API panel.
Playbook applicable for: Edge Public Cloud users

MOST COMMON ERRORS


Analytics data stuck in Qpidd dead
letter queue

Symptom

Analytics data is missing in the Edge UI due to the Qpidd server not transferring
analytics messages to PostgreSQL. In Edge, the edge-qpid-server component
corresponds to the Qpidd server.

Qpidd maintains two queues for each analytics group:

● ax-q-axgroup001-consumer-group-001
This queue holds analytics messages pushed from the Message Processors
and Routers. Messages get pulled from here by the edge-qpid-server
which parses messages and inserts them into PostgreSQL. Once messages
are processed successfully they are removed from the queue.
● ax-q-axgroup001-consumer-group-001-dl
This queue is the dead letter queue. It acts as the destination for messages
that edge-qpid-server failed to process and thus no longer wishes to
receive. This typically gets populated when the maximum delivery count is
exceeded, or if PostgreSQL rejected insertion of new data due to runtime
errors.

Error Message

The root cause could be due to various runtime errors from the edge-qpid-
server component. Typically if edge-qpid-server receives a runtime error
from PostgreSQL, it creates the dead letter queue if it doesn't exist already, and
then sends the following message there:

yyyy-MM-dd HH:mm:ss,SSS ax-q-axgroup001-consumer-group-001-persistpool-thread-6
WARN c.a.a.m.MessageConsumer - MessageConsumer.process() : Sending message batch to the DLQ.

Possible Causes

Cause: Messages stuck in dead letter queue of qpidd
Description: edge-qpid-server could not understand the messages that it read
from the Qpid broker, or was unable to persist messages to PostgreSQL.
Troubleshooting Instructions Applicable For: Edge Private Cloud Users

Common Diagnosis Steps

Run the following command to view Qpidd queue stats:

qpid-stat -q

The output returns the set of queues registered with the broker. If the queue with
name ending in "-dl" has messages populated, then there are messages stuck on
the dead letter queue.

Queues

  queue  dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
  ===================================================================================
  ax-q-axgroup-001-consumer-group-001     Y  0  185  185  0  13.8m  13.8m  6  2
  ax-q-axgroup-001-consumer-group-001-dl  Y  0  70   70   0  3.9m   3.9m   0  2
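When you have many queues, scanning this output by eye is error prone. The following is a sketch (not an Apigee tool) that flags dead letter queues still holding messages; it assumes the msg count is the third whitespace-separated field of each queue row, which can vary between qpid-stat versions, so verify the column layout before relying on it.

```shell
# Sketch: read `qpid-stat -q` output on stdin and print any dead letter queue
# ("-dl" suffix) that still holds messages. Assumes the msg count is the third
# whitespace-separated field (queue name, durable flag, msg) -- verify this
# against your qpid-stat version before use.
find_stuck_dlq() {
  awk '$1 ~ /-dl$/ && $3 + 0 > 0 { print $1 }'
}

# Example: qpid-stat -q | find_stuck_dlq
```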

Cause: Messages stuck in dead letter queue of qpidd


Diagnosis

This condition can happen under the following scenarios:

1. An upgrade had taken place in the past, during which time PostgreSQL was
down.
2. A temporary outage of PostgreSQL due to network issues.
3. The edge-qpid-server attempted to send a message to PostgreSQL but
PostgreSQL returned a runtime error.

Resolution

1. Note down the names of the queues from the Common Diagnosis Steps. For
   example:
   ● ax-q-axgroup-001-consumer-group-001
   ● ax-q-axgroup-001-consumer-group-001-dl
2. Run the qpid-tool command to enter an interactive qpid prompt:
   qpid-tool
   This command returns the following:
   Management Tool for QPID
   qpid:
3. Run list broker to obtain a list of active brokers:
   list broker
   This command returns the following:
   Object Summary:
   ID   Created   Destroyed  Index
   =======================================
   125  21:00:00  -          amqp-broker
   Where the ID column specifies the ID of the broker.
4. Note the ID of the broker. In the example it is 125.
5. Run the following command to move the messages from the dead letter
   queue back to the actual queue:
   call 125 queueMoveMessages ax-q-axgroup-001-consumer-group-001-dl ax-q-axgroup-001-consumer-group-001 100000 {}
   This command returns the following:
   OK (0) - {}
   If there is no output, then there was nothing to do, meaning no messages to
   move. If you do not see OK (0), then you should contact Apigee Edge
   Support.
6. Quit the qpid-tool terminal:
   quit
7. Wait 5 minutes and then run the diagnosis steps again from Common
   Diagnosis Steps. Verify that the messages on the actual queue are getting
   processed, and ensure the dead letter queue message count remains at 0.

If the problem still persists, go to the next section.

Must Gather Diagnostic Information

If the problem persists even after following the above instructions, gather the
following diagnostic information and share it with Apigee Edge Support:

● Qpidd logs: /opt/apigee/var/log/apigee-qpidd/apigee-qpidd.log
● PostgreSQL logs: /opt/apigee/var/log/apigee-postgresql/apigee-postgresql.log
● edge-qpid-server logs: /opt/apigee/var/log/edge-qpid-server/logs/system.log
● edge-postgres-server logs: /opt/apigee/var/log/edge-postgres-server/logs/system.log
● Qpidd queue stats, obtained with:
  qpid-stat -q
● The analytics group returned by the following curl command:
  curl -u sysadminEmail:password http://mgmt:8080/v1/analytics/groups/ax

Analytics reports timing out



Symptom
The Analytics dashboards (Proxy Performance, Target Performance, Custom
Reports, etc.) in the Edge UI time out.

Error messages
You see the following error message when the Analytics dashboards time out:

The report timed out: Try again with a smaller date range or a larger aggregation
interval.

Possible causes
The following table lists possible causes of this issue:

Cause: Inadequate hardware configuration
For: Edge Private Cloud users

Cause: Large amount of Analytics data in Postgres Database
For: Edge Private Cloud users

Cause: Insufficient time to fetch Analytics data
For: Edge Private and Public Cloud users

Inadequate hardware configuration


Diagnosis

If any of the Edge components are under capacity (that is, they have less CPU,
RAM, or IOPS capacity than required), then the Postgres Servers/Qpid Servers may
run slowly, causing the Analytics dashboards to time out.

Resolution

Ensure that all the Edge components adhere to the minimum hardware
requirements as described in Hardware Requirements.

Large amount of Analytics data in Postgres Database


Diagnosis
1. On the Postgres node, log in to PostgreSQL:
   psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
2. Check the duration for which the data is available in the Postgres Database
   using the following SQL query:
   select min(client_received_start_timestamp),
          max(client_received_start_timestamp)
   from analytics."orgname.envname.fact";
3. Get the sizes of all the tables in the Postgres Database:
   SELECT relname as "Table",
          pg_size_pretty(pg_total_relation_size(relid)) As "Size",
          pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) as "External Size"
   FROM pg_catalog.pg_statio_user_tables
   ORDER BY pg_total_relation_size(relid) DESC;

Based on the output obtained in step #2 and #3, if you notice that either the
duration for which the data has been stored is long (longer than your retention
interval) and/or the table sizes are very large, then it indicates that you have large
amounts of analytics data in the Postgres database. This could be causing the
Analytics dashboards to time out.

Resolution

Prune the data that is beyond your required retention interval:

1. Determine the retention interval, that is, the duration for which you want to
   retain the Analytics data in the Postgres Database.
   For example, you may want to retain 60 days worth of Analytics data.
2. Run the following command to prune data for a specific organization and
   environment:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-data-purge org env num_days_to_purge_back_from_current_date
3. For more information, see Pruning Analytics data.

If the problem persists, then proceed to Insufficient time to fetch Analytics data.

Insufficient time to fetch Analytics data


Diagnosis
1. Check if you are able to view the data in the Hour/Day Tab of Analytics
dashboard (Proxy Performance/Target Performance).
2. If you are able to view the data in the Hour tab alone or Hour and Day tabs,
but are getting report timeout errors only when attempting to view the Week
or Custom tabs, then this indicates that the volume of data that needs to be
fetched from the Postgres database is very large. This could be causing the
Edge UI to time out.

Resolution

The Edge UI has a default timeout of 120 seconds for fetching and displaying the
Analytics data. If the volume of Analytics data to be fetched is very large, then 120
seconds may not be sufficient. Increase the Edge UI timeout value to 300 seconds
by following the instructions in Set the timeout used by the Edge UI for Edge API
management calls (on-premises customers only).
Reload any of the Analytics dashboards and check if you are able to view the data
for all the tabs: Hour, Day, Week, and Custom.

If the problem persists, contact Apigee Edge Support.

Custom variable not visible to analytics custom reports

Symptom

The custom variable created using the StatisticsCollector policy is not visible under
Custom Dimensions in the Analytics Custom Reports in the Edge UI.

Error messages

No errors are observed.

Possible causes

The following table lists possible causes of this issue:

Cause: Custom Variable not adhering to the standard guidelines
For: Edge Private and Public Cloud users

Cause: No traffic on API Proxy implementing the StatisticsCollector policy
For: Edge Private and Public Cloud users

Cause: Custom variable not pushed to Postgres Server
For: Edge Private Cloud users

Click a link in the table to see possible resolutions to that cause.

Custom Variable not adhering to the standard guidelines


Diagnosis
If the custom variable name used in the StatisticsCollector policy does not adhere
to the standard guidelines (see Resolution), then it will not appear in the Custom
Reports.

The code snippet below shows that the variable name "product id" has a space, so it
will not appear under the Custom Dimension in the Custom Report.

<StatisticsCollector name="publishPurchaseDetails">
<Statistics>
<Statistic name="productID" ref="product id" type="string">999999</Statistic>
</Statistics>
</StatisticsCollector>

Resolution

Custom variable names used in the StatisticsCollector policy within the API proxy
should adhere to the following guidelines:

● Names can include [a-z][0-9] and '_'.


● Names cannot include spaces. For example, in the code sample shown
above, the variable name should be changed to "product_id".
● Case is ignored.
● Reserved keywords listed in the table at the following link are not permitted.
For example, "user" is not permitted. For more information, see SQL Key
Words.
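Applying these guidelines to the earlier snippet, the reference would use an underscore instead of a space:

```xml
<StatisticsCollector name="publishPurchaseDetails">
  <Statistics>
    <!-- ref now uses "product_id" (no space), so the variable can appear
         under Custom Dimensions in Custom Reports -->
    <Statistic name="productID" ref="product_id" type="string">999999</Statistic>
  </Statistics>
</StatisticsCollector>
```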

If the problem persists, then proceed to No traffic on API Proxy implementing the
StatisticsCollector policy.

No traffic on API Proxy implementing the StatisticsCollector policy
Diagnosis

If there is no traffic on the API proxy that implements the StatisticsCollector policy,
then the custom variable will not appear in the Custom Reports.

Resolution

Make some calls to the API proxy that implements the StatisticsCollector policy.

Wait for some time and check if the custom variable(s) appears in the custom
dimensions in Custom Report.

If the problem persists, then proceed to Custom Variable not pushed to Postgres
Server.

Custom variable not pushed to Postgres Server


NOTE: The instructions in this section can be performed by Private Cloud customers only.

Diagnosis

When a custom variable is created in the API proxy and API calls are made, the
variable first gets stored in-memory on the Message Processor. The Message
Processor then sends the information about the new variable to ZooKeeper, which
in turn sends it to the Postgres server to add it as a column in the Postgres
database.

At times, the notification from ZooKeeper may not reach the Postgres server due to
network issues. Because of this error, the custom variable might not appear in the
Custom Report.

To identify where the custom variable is missing:

1. Generate the ZooKeeper tree using the following command:
   /opt/apigee/apigee-zookeeper/contrib/zk-tree.sh > zktree-output.txt
2. Search for the custom variable in the ZooKeeper tree output.
3. If the custom variable exists in the ZooKeeper tree, then execute the
   following commands to check if the custom variable was added to the
   Postgres database:
   a. On the Postgres node, log in to PostgreSQL:
      psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
   b. Run the following SQL query:
      select column_name, data_type, character_maximum_length
      from INFORMATION_SCHEMA.COLUMNS
      where table_name = 'orgname.envname.fact';
4. Very likely you will see that the custom variable column is missing in the
   fact table, which is the reason it does not appear in the Custom Dimensions.

Resolution
Solution #1: Restart the Postgres Server

1. Restart the Postgres Server to force it to read all information relevant to
   Analytics from Zookeeper:
   /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart
2. If the problem persists, apply Solution #2.

Solution #2: Enable the property forceonboard

Enable the forceonboard property using the following steps:

1. Create the /opt/apigee/customer/application/postgres-server.properties
   file on the Postgres server machine, if it does not already exist.
2. Add the following line to this file:
   conf_pg-agent_forceonboard=true
3. Ensure this file is owned by the apigee user:
   chown apigee:apigee /opt/apigee/customer/application/postgres-server.properties
4. Restart the Postgres server:
   /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart
5. If you have more than one Postgres server, repeat the steps above on all the
   Postgres servers.
6. Undeploy and redeploy your API proxy that uses the StatisticsCollector policy.
7. Run the API calls.
8. Check if the custom variable(s) appear in the custom dimensions in Custom
   Report.
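The file edit above can be scripted so it is idempotent across repeated runs. A sketch (the path and property name come from the steps above; run it as a user that can write to /opt/apigee/customer/application, then chown the file and restart edge-postgres-server as described):

```shell
# Sketch: append conf_pg-agent_forceonboard=true to a properties file only if
# it is not already present, creating the file when needed.
enable_forceonboard() {
  local props="$1"
  # -q: quiet, -s: no error if the file does not exist yet
  grep -qs '^conf_pg-agent_forceonboard=true$' "$props" || \
    echo 'conf_pg-agent_forceonboard=true' >> "$props"
}

# Usage on the Postgres node:
# enable_forceonboard /opt/apigee/customer/application/postgres-server.properties
```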

If the problem persists, contact Apigee Edge Support.

Data not showing up on analytics dashboards

Symptom
The analytics dashboards (Proxy Performance, Target Performance, etc.) are not
showing any data in the Edge UI. All the dashboards show the following message:

No traffic in the selected date range

Error messages
This issue does not result in observable errors.

Possible causes
The following table lists possible causes of this issue:

Cause: No API Traffic for Organization-Environment
For: Edge for Private Cloud users

Cause: Data available in Postgres Database, but not displaying in UI
For: Edge for Private Cloud users

Cause: Analytics Data not being pushed to Postgres Database
For: Edge for Private Cloud users

Cause: Incorrect Analytics Deployment
For: Edge for Private Cloud users

Cause: Stale Analytics Server UUIDs
For: Edge for Private Cloud users

No API Traffic for Organization-Environment


Diagnosis
1. Check if there is traffic for the API Proxies on the specific organization-
environment for the specific duration that you are trying to view the analytics
data using one of the following methods:
a. Enable the trace for any of your APIs that is currently being used by
   your users and check if you are able to see any requests in the trace.
   NOTE: This technique is useful only if you want to check whether there is
   traffic at this point in time, rather than in the past.
b. View the NGINX access logs (/opt/apigee/var/log/edge-
router/nginx/logs/access.log) and see if there any new
entries for API Proxies for the specific duration.
c. If you log information from API Proxies to a log server such as Syslog,
Splunk, Loggly, etc., then you can check if there are any entries in
these log servers for API Proxies for the specific duration.
2. If there's no traffic (no API requests) for the specific duration, then analytics
data is not available. You will see "No traffic in the selected date range" in
the analytics dashboard.
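To make the access-log check above repeatable, you can count log entries for a given day. This is a sketch, assuming the default NGINX log format, where the timestamp appears bracketed, e.g. [10/Aug/2017:18:00:00 +0000]:

```shell
# Sketch: count requests logged on a given day in an NGINX access log.
# Assumes the default log format with a bracketed timestamp.
count_requests_on() {
  local log="$1" day="$2"   # day in dd/Mon/yyyy form, e.g. 10/Aug/2017
  grep -c "\[$day" "$log"
}

# Example:
# count_requests_on /opt/apigee/var/log/edge-router/nginx/logs/access.log 10/Aug/2017
```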

Resolution
1. Make some calls to one or more API proxies in the specific organization-
environment.
2. Wait for a few seconds, and then view the analytics dashboards in the Hour
tab and see if the data appears.
3. If the issue persists, then proceed to Data available in Postgres Database,
but not displaying in UI.

Data available in Postgres Database, but not displaying in UI

Symptom

First, determine the availability of the latest Analytics data in the Postgres
database.

To check if the latest Analytics data is available on the Postgres Master node:

1. Log in to each of the Postgres servers and run the following command to
   validate that you are on the Master Postgres node:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-master
2. On the Master Postgres node, log in to PostgreSQL:
   psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
3. Check if the table exists for your org-env using the following SQL query in
   the Postgres database:
   \d analytics."orgname.envname.fact"
   NOTE: Replace orgname and envname with your organization and environment names.
4. Check if the latest data is available in the Postgres database using the
   following SQL query:
   select max(client_received_start_timestamp) from analytics."orgname.envname.fact";
5. If the latest timestamp is very old (or null), this indicates that data is not
   available in the Postgres database. The likely cause is that the data is not
   being pushed from the Qpid Server to the Postgres database. Proceed to
   Analytics Data not being pushed to Postgres Database.
6. If the latest data is available in the Postgres database on the Master node,
   then follow the steps below to diagnose why the data is not displaying in the
   Edge UI.

Diagnosis
1. Enable Developer Tools in your Chrome browser and get the API used from
one of the analytics dashboards using the below steps:
a. Select the Network tab from the Developer Tools.
b. Start Recording.
c. Reload the Analytics dashboard.
d. On the left hand panel in the Developer Tools, select the row having
"apiproxy?_optimized...".
e. On the right hand panel in the Developer Tools, select the "Headers"
tab and note the "Request URL".
2. Here's sample output from the Developer Tools:
   [Sample output showing the API used in the Proxy Performance dashboard,
   from the Network tab of Developer Tools]

3. Run the management API call directly and check if you get the results. Here's
   a sample API call for the Day tab in the Proxy Performance dashboard:
   curl -u username:password "http://management_server_IP_address:8080/v1/organizations/org_name/environments/env_name/stats/apiproxy?limit=14400&select=sum(message_count),sum(is_error),avg(total_response_time),avg(target_response_time)&sort=DESC&sortby=sum(message_count),sum(is_error),avg(total_response_time),avg(target_response_time)&timeRange=08%2F9%2F2017+18:00:00~08%2F10%2F2017+18:00:00&timeUnit=hour&tsAscending=true"
4. If you see a successful response but without any data, it indicates that the
   Management Server is unable to fetch the data from the Postgres server,
   possibly due to network connectivity issues.
5. Check if you are able to connect to the Postgres server from the
   Management Server:
   telnet Postgres_server_IP_address 5432
6. If you are unable to connect to the Postgres server, check if there are any
   firewall restrictions on port 5432. If there are firewall restrictions, that could
   be the reason the Management Server is unable to pull the data from the
   Postgres server.

Resolution
1. If there are firewall restrictions, then remove them so that the Management
Server can communicate with the Postgres server.
2. If there are no firewall restrictions, then this issue could be due to network
glitch.
3. If there was any network glitch on the Management Server, then restarting it
might fix the issue.
4. Restart all the Management Servers one by one using the below command:
5. /opt/apigee/apigee-service/bin/apigee-service edge-management-server
restart
6. Check if you are able to see the analytics data in the Edge UI.

If you still don't see the data, contact Apigee Edge Support.

Analytics Data not being pushed to Postgres Database


Diagnosis

If the data is not pushed from the Qpid Server to the Postgres database, as
determined in Data available in Postgres Database, but not displaying in UI, then
perform the following steps:

1. Check if each of the Qpid Servers is up and running by executing the
   following command:
   /opt/apigee/bin/apigee-service edge-qpid-server status
2. If any Qpid Server is down, then restart it:
   /opt/apigee/bin/apigee-service edge-qpid-server restart
3. Wait for some time and then re-check if the latest data is available in the
   Postgres database:
   a. Log in to PostgreSQL:
      psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
   b. Run the following SQL query to check if the latest data is available:
      select max(client_received_start_timestamp) from analytics."orgname.envname.fact";
4. If the latest data is available, then skip the following steps and proceed to
   the last step in the Resolution section. If the latest data is not available, then
   proceed with the following steps.
5. Check if the messages from the Qpid server queues are being pushed to the
   Postgres database:
   a. Execute the qpid-stat -q command and check the msgIn and msgOut
      column values.
   b. If the msgIn and msgOut values are not equal, this indicates that
      messages are not being pushed from the Qpid Server to the Postgres
      database.
6. If there is a mismatch in the msgIn and msgOut columns, then check the
   Qpid Server logs (/opt/apigee/var/log/edge-qpid-server/system.log) and see
   if there are any errors.
7. You may see error messages such as "Probably PG is still down" or "FATAL:
   sorry, too many clients already", as shown below:

2017-07-28 09:56:39,896 ax-q-axgroup001-persistpool-thread-3
WARN c.a.a.d.c.ServerHandle - ServerHandle.logRetry() : Found the exception to be
retriable - . Error observed while trying to connect to
jdbc:postgresql://PG_IP_address:5432/apigee Initial referenced UUID when
execution started in this thread was a1ddf72f-ac77-49c0-a1fc-d0db6bf9991d
Probably PG is still down. PG set used - [a1ddf72f-ac77-49c0-a1fc-d0db6bf9991d]

2017-07-28 09:56:39,896 ax-q-axgroup001-persistpool-thread-3
WARN c.a.a.d.c.ServerHandle - ServerHandle.logRetry() : Could not get JDBC Connection;
nested exception is org.postgresql.util.PSQLException: FATAL: sorry, too many clients already

2017-07-28 09:56:53,617 pool-7-thread-1
WARN c.a.a.d.c.ServerHandle - ServerHandle.logRetry() : Found the exception to be
retriable - . Error observed while trying to connect to
jdbc:postgresql://PG_IP_address:5432/apigee
Initial referenced UUID when execution started in this thread was
a1ddf72f-ac77-49c0-a1fc-d0db6bf9991d Probably PG is still down. PG set used -
[a1ddf72f-ac77-49c0-a1fc-d0db6bf9991d]

2017-07-28 09:56:53,617 pool-7-thread-1
WARN c.a.a.d.c.ServerHandle - ServerHandle.logRetry() : Could not get JDBC Connection;
nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create
PoolableConnectionFactory (FATAL: sorry, too many clients already)

This could happen if the Postgres Server is running too many SQL queries, or if its
CPU usage is high, making it unable to respond to the Qpid Server.

Resolution
1. Restart the Postgres Server and PostgreSQL as shown below:
   /opt/apigee/bin/apigee-service edge-postgres-server restart
   /opt/apigee/bin/apigee-service apigee-postgresql restart
   This restart ensures that all the previous SQL queries are stopped and
   should allow new connections to the Postgres database.
2. Reload the Analytics dashboards and check if the Analytics data is being
   displayed.

If the problem persists, contact Apigee Edge Support.

Incorrect Analytics Deployment


Diagnosis
1. Get the analytics deployment status by using the following API call:
   curl -u user_email:password http://management_server_host:port/v1/organizations/orgname/environments/envname/provisioning/axstatus
2. Check the status of the Qpid and Postgres servers from the results of the
   API call.
   a. If the status of the Qpid and Postgres servers is shown as "SUCCESS",
      then it indicates the analytics servers are wired properly. Proceed to
      Stale Analytics Server UUIDs.
   b. If the status of the Qpid/Postgres servers is shown as "UNKNOWN" or
      "FAILURE", then it indicates an issue with the corresponding server.
      For example, the status of the Postgres servers may be shown as
      "UNKNOWN". This may happen if there is a failure during the
      onboarding of analytics, which prevents the messages from the
      Management Servers from reaching the Postgres servers.
Resolution

This issue can typically be resolved by restarting the servers whose status showed
"FAILURE" or "UNKNOWN".

1. Restart each of the servers whose analytics wiring status indicated
   "FAILURE" or "UNKNOWN" using the following command:
   /opt/apigee/apigee-service/bin/apigee-service component restart
2. For example:
   a. If you see the issue on the Qpid Servers, then restart the Qpid Servers:
      /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
   b. If you see the issue on the Postgres Servers (as in the example above,
      where the status was "UNKNOWN"), then restart both the Master and
      Slave Postgres Server nodes:
      /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart

Stale Analytics Server UUIDs


Diagnosis
1. Get the analytics configuration using the following API call:
   curl -u user_email:password http://management-server-host:port/v1/analytics/groups/ax
2. Here's a sample output from the above API:

[ {
  "name" : "axgroup001",
  "properties" : {
    "consumer-type" : "ax"
  },
  "scopes" : [ "myorg~prod", "myorg~test" ],
  "uuids" : {
    "aries-datastore" : [ ],
    "postgres-server" : [ "6777...2db14" ],
    "dw-server" : [ ],
    "qpid-server" : [ "774e...fb23", "29f3...8c11" ]
  },
  "consumer-groups" : [ {
    "name" : "consumer-group-001",
    "consumers" : [ "774e...8c11" ],
    "datastores" : [ "6777...db14" ],
    "properties" : {
    }
  } ],
  "data-processors" : {
  }
} ]

3. Ensure the following information in the output is correct:
   a. The org-env names listed in the "scopes" element.
   b. The UUIDs of the Postgres servers and Qpid servers.
      ● Get the Postgres Server UUIDs by running the following
        command on each of the Postgres server nodes:
        curl 0:8084/v1/servers/self/uuid
      ● Get the Qpid Server UUIDs by running the following command
        on each of the Qpid server nodes:
        curl 0:8083/v1/servers/self/uuid
4. If all the information is correct, then proceed to Analytics Data not being
   pushed to Postgres Database.
5. If the UUIDs of the Postgres and/or Qpid servers are incorrect, it is possible
   that the Management Servers are referring to stale UUIDs.
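Once you have collected both the registered UUIDs from the analytics group and the UUIDs the servers report about themselves, the comparison can be done mechanically. This is a sketch of a hypothetical helper, not an Apigee tool:

```shell
# Sketch: print UUIDs present in the first whitespace-separated list
# (registered in the analytics group) but absent from the second
# (self-reported by the servers). Any output is a candidate stale UUID.
stale_uuids() {
  local registered="$1" actual="$2" u
  for u in $registered; do
    case " $actual " in
      *" $u "*) ;;        # UUID is reported by a live server
      *) echo "$u" ;;     # not reported: candidate stale UUID
    esac
  done
}

# Example: stale_uuids "registered-uuid-list" "self-reported-uuid-list"
```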

Resolution

To remove the stale UUIDs and add the correct UUIDs of the servers, contact
Apigee Edge Support.

Postgres server running out of disk space

Symptom
The Postgres Server containing the Analytics data has run out of disk space.

In the following example, you can see that the disk /u01 has filled up to 90%
(176GB/207GB) of the disk space:

$ df -h

Filesystem                 Size  Used  Avail  Use%  Mounted on
/dev/mapper/sysvg-syslv09  207G  176G  21G    90%   /u01

Error messages
You may not observe any error message unless the disk space is completely filled
on the Postgres Server.

Possible causes
The following table lists possible causes of this issue:

Cause: Inadequate disk space
For: Edge Private Cloud users

Cause: Lack of Analytics data pruning
For: Edge Private Cloud users

Inadequate disk space


Diagnosis

One of the typical causes for disk space errors on Postgres Servers is that you
don't have adequate disk space to store the large volumes of analytics data. The
steps provided below will help you to determine if you have enough disk space or
not and take appropriate action to address the issue.

1. Determine the rate of incoming API traffic to Edge by referring to the


Analytics Proxy Performance Dashboard.
[Sample Proxy Performance dashboard showing average TPS]

2. Consider the following scenario:


a. The incoming API traffic for your org is 22 TPS (transactions per
second).
i. This means that the API traffic is 1,900,800 transactions per
day (22 * 60 * 60 * 24).
ii. Note each transaction/message in Analytics is 1.5K bytes in
size.
iii. Therefore, each day generates 2.7GB of Analytics data
(1,900,800 * 1.5 K).
b. You have a requirement to retain 30 days worth of Analytics data on
your Postgres Servers for reference.
i. The total data generated for 30 days = 81GB (2.7GB * 30)
c. Therefore, to store 30 days worth of Analytics data at traffic rate of 22
TPS, you need to have 150 GB of disk space.
i. 81GB (Analytics data) + 50GB (other data such as logs, etc) +
20GB (additional buffer space) = 150GB.
3. If you have less disk space on the system, i.e., less than 150 GB (as per the
   example scenario above), then you don't have adequate disk space to store
   the Analytics data.
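The arithmetic in the worked example above can be wrapped in a small helper so you can plug in your own traffic rate and retention window. This is a sketch that hardcodes the example's assumptions (1.5 KB per analytics record, 50 GB for logs/other data, 20 GB of buffer) and reproduces roughly the 150 GB figure:

```shell
# Sketch: estimate Postgres disk needs in GB from average TPS and retention
# days, using the assumptions from the worked example above: 1.5 KB per
# analytics record, plus 50 GB for logs/other data and 20 GB of buffer.
estimate_pg_disk_gb() {
  local tps="$1" days="$2"
  awk -v tps="$tps" -v days="$days" 'BEGIN {
    gb = tps * 86400 * days * 1.5 / 1024 / 1024   # records -> KB -> GB
    printf "%.0f\n", gb + 50 + 20
  }'
}

# Example: estimate_pg_disk_gb 22 30   (prints an estimate close to 150 GB)
```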

Resolution

Add adequate disk space to the Postgres Server machine.

Lack of Analytics data pruning


Diagnosis

With the increase in API traffic to Edge, the amount of analytics data getting stored
in the Postgres database will also increase. The amount of analytics data that can
be stored in Postgres database is limited by the amount of disk space available on
the system.

Therefore, you cannot continue to keep storing additional analytics data on the
Postgres database without taking one of the following actions:

1. Add more disk space.


This is not a scalable option, because disk space is limited and expensive
and you can't keep adding it indefinitely.
2. Prune the data beyond the required retention interval.
This is a preferred solution as you can ensure that the data that is no longer
required is being removed at regular intervals of time.

If you don't prune the data at regular intervals manually or by using a cron job, then
the amount of analytics data continually increases and can eventually lead to your
running out of disk space on the system.

Resolution

To prune the data that is beyond your required retention interval:

1. Determine the retention interval, that is, the duration for which you want to
retain the Analytics data in the Postgres database.
2. Run the following command to prune data for a specific organization and
environment:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-data-purge
org env number_of_days_to_retain [Delete-from-parent-fact - N/Y] [Skip-
confirmation-prompt - N/Y]

3. The script has the following options:

● Delete-from-parent-fact. Default: No. If set to Yes, the script also deletes
data older than the retention interval from the parent fact table.
● Skip-confirmation-prompt. Default: No. If set to No, the script prompts for
confirmation before deleting data from the parent fact table. Set to Yes if
the purge script is automated.

For more information, see Pruning Analytics data.
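If you automate the purge (for example, from a cron job), the invocation can be assembled as below. This is a minimal sketch; the org name myorg and environment prod are placeholders, and the argument order matches the command shown above (retention days, then the two Y/N flags).

```python
def purge_command(org, env, retain_days, delete_parent="N", skip_prompt="Y"):
    """Build the pg-data-purge invocation for a given org/env.

    skip_prompt defaults to Y here because an automated (cron) run
    cannot answer an interactive confirmation prompt.
    """
    return ("/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql "
            f"pg-data-purge {org} {env} {retain_days} "
            f"{delete_parent} {skip_prompt}")

# For example, put the resulting line in a nightly cron entry:
print(purge_command("myorg", "prod", 30))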

If the problem persists, contact Apigee Edge Support.

Adding and deleting analytics components in analytics groups

In an Edge for Private Cloud installation, you might have to remove Postgres and
Qpid servers from an existing analytics group, or add them to an analytics group.
This document describes how to add and remove Postgres and Qpid servers in an
existing Edge installation for a single Postgres installation and a master-standby
Postgres installation.

See Set up master-standby replication for Postgres for more.

Prerequisites
Ability to make management server API calls using the system admin credentials.

Add an existing Postgres server to an analytics group


The process for adding Postgres server components depends on whether Postgres
was installed as a single server with no replication, or as two servers with master-
standby replication enabled.

Scenario #1: One Postgres server, no Postgres replication


1. Determine the name of the analytics and consumer groups.
By default, the name of the analytics group is axgroup-001, and the name
of the consumer group is consumer-group-001. In the silent config file for
a region, you can set the name of the analytics group by using the AXGROUP
property.
If you are unsure of the names of the analytics and consumer groups, use
the following command to display them:
2. curl -u adminEmail:pword "http://ms-IP:8080/v1/analytics/groups/ax"
3. This call returns a response containing the name of the analytics group:
4. [ {
5. "name" : "axgroup-001",
6. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ]

7. In this example, the analytics group name is axgroup-001.


8. Use the following API to determine the UUID of the postgres-server
component:
9. curl http://pg-IP:8084/v1/servers/self
10.In the following API calls, replace axgroupname and UUID with the analytics
group name and UUID determined above.
11.Use the following API call to add the Postgres server UUID to the
postgres-server element:
12.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/servers?
uuid=UUID&type=postgres-server&force=true"
13.Example call and output:
14.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?
uuid=8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77&type=postgres-
server&force=true"
15.[ {
16. "name" : "axgroup-001",
17. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07"],
"postgres-server" : ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07"],
"datastores" : [ ],
"properties" : {
}
} ],
"data-processors" : {
}
18.Use the following API to add the Postgres server UUID to the consumer
group :
19.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/consumer-
groups/consumer-group-001/datastores?uuid=UUID"
20.Example call and output:
21.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-
groups/consumer-group-001/datastores?uuid=8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77"
22.[ {
23. "name" : "axgroup-001",
24. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07"],
"postgres-server" : ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07"],
"datastores" : ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77"],
"properties" : {
}
} ],
"data-processors" : {
}
25.Restart all edge-postgres-server and edge-qpid-server components
on all nodes to make sure the change is picked up by those components:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart

26. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server
wait_for_ready
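The two registration calls in this scenario differ only in their URLs. As a sketch of the URL structure (the host, group, and UUID values are placeholders to substitute), the pair can be generated like this:

```python
def add_postgres_urls(ms_host, axgroup, consumer_group, uuid):
    """Return the two URLs to POST, in order: first register the UUID
    under the postgres-server element, then under the consumer group's
    datastores element."""
    base = f"http://{ms_host}:8080/v1/analytics/groups/ax/{axgroup}"
    return [
        f"{base}/servers?uuid={uuid}&type=postgres-server&force=true",
        f"{base}/consumer-groups/{consumer_group}/datastores?uuid={uuid}",
    ]

for url in add_postgres_urls("localhost", "axgroup-001",
                             "consumer-group-001", "UUID"):
    print(url)
```

For the master-standby scenario below, the same structure applies with uuid set to masteruuid,slaveuuid.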

Scenario #2: Two Postgres servers with master-standby replication


1. Determine the name of the analytics and consumer groups.
By default, the name of the analytics group is axgroup-001, and the name
of the consumer group is consumer-group-001. In the silent config file for
a region, you can set the name of the analytics group by using the AXGROUP
property.
If you are unsure of the names of the analytics and consumer groups, use
the following command to display them:
2. curl -u adminEmail:pword "http://ms-IP:8080/v1/analytics/groups/ax"
3. This call returns a response containing the names of defined analytics
groups:
4. [ {
5. "name" : "axgroup-001",
6. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ]

7. In this example, the analytics group name is axgroup-001.


8. Use the following API call to find the UUIDs of the master postgres-server
component and the standby postgres-server component:
9. curl http://pg-IP:8084/v1/servers/self
10.In the following API calls, replace axgroupname with axgroup-001, replace
masteruuid with the UUID returned in the previous step on the master server,
and replace slaveuuid with the UUID returned for the standby server.
11.Use the following API to add the Postgres server UUIDs to the postgres-
server element:
12.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/servers?
uuid=masteruuid,slaveuuid&type=postgres-server&force=true"
13.Example call and output:
14.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?
uuid=8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77,731c8c43-8c35-4b58-
ad1a-f572b69c5f0&type=postgres-server&force=true"
15.[ {
16. "name" : "axgroup-001",
17. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : ["54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : [],
"properties" : {
}
} ],
"data-processors" : {
}

18.Use the following API to add the Postgres server UUIDs to the consumer
group:
19.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/consumer-
groups/consumer-group-001/datastores?uuid=masteruuid,slaveuuid"
20.Example call and output:
21.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-
groups/consumer-group-001/datastores?uuid=8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77,731c8c43-8c35-4b58-ad1a-f572b69c5f0"
22.[ {
23. "name" : "axgroup-001",
24. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : ["54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77:731c8c43-
8c35-4b58-ad1a-f572b69c5f0"],
"properties" : {
}
} ],
"data-processors" : {
}
25.Restart all edge-postgres-server and edge-qpid-server components
on all nodes to make sure the change is picked up by those components:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart

26. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server
wait_for_ready

Add an existing Qpid server to an analytics group


1. Find the analytics group name using the following API:
2. curl -u adminEmail:pword "http://ms-IP:8080/v1/analytics/groups/ax"
3. This should return a response containing the names of the analytics groups
and scopes:
4. [ {
5. "name" : "axgroup-001",
6. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ]

7. In this example, the analytics group name is axgroup-001.


8. Use the following API call to determine the UUID of each Qpid server
component you want to add to the analytics group:
9. curl http://qp-IP:8083/v1/servers/self
10.Use the following API call to add a single Qpid server UUID to the qpid-
server element (repeat for as many UUIDs as necessary):
11.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/servers?
uuid=qpiduuid&type=qpid-server"
12.Example call and output:
13.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?
uuid=94c96375-1ca7-412d-9eee-80fda94f6e07&type=qpid-server"
14.[ {
15. "name" : "axgroup-001",
16. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07",
"54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0" ],
"properties" : {
}
} ],
"data-processors" : {
}
17.Use the following API call to add a single Qpid server UUID to the consumers
element of the consumer group (repeat for as many UUIDs as necessary):
18.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/consumer-
groups/consumer-group-001/consumers?uuid=qpiduuid"
19.Example call and output:
20.curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-
groups/consumer-group-001/consumers?uuid=94c96375-1ca7-412d-9eee-
80fda94f6e07"
21.[ {
22. "name" : "axgroup-001",
23. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07",
"54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07","54a96375-
33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0" ],
"properties" : {
}
} ],
"data-processors" : {
}

24.Restart all edge-qpid-server components on all nodes to make sure the


change is picked up by those components:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart

25. /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready
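The Qpid flow mirrors the Postgres one, but registers under type=qpid-server and the consumer group's consumers element, and does not use force=true. A sketch of the URL pair (host, group, and UUID values are placeholders):

```python
def add_qpid_urls(ms_host, axgroup, consumer_group, uuid):
    """Return the two URLs to POST, in order: register the UUID under
    the qpid-server element, then under the consumer group's
    consumers element."""
    base = f"http://{ms_host}:8080/v1/analytics/groups/ax/{axgroup}"
    return [
        f"{base}/servers?uuid={uuid}&type=qpid-server",
        f"{base}/consumer-groups/{consumer_group}/consumers?uuid={uuid}",
    ]
```

Repeat the pair once per Qpid server UUID you want to add.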

Remove a Postgres server from an analytics group


The process for removing a Postgres server depends on whether Postgres
replication was enabled or not.

Scenario #1: One Postgres server, no replication


1. Determine the name of the analytics and consumer groups.
By default, the name of the analytics group is axgroup-001, and the name
of the consumer group is consumer-group-001. In the silent config file for
a region, you can set the name of the analytics group by using the AXGROUP
property.
If you are unsure of the names of the analytics and consumer groups, use
the following command to display them:
2. curl -u adminEmail:pword "http://ms-IP:8080/v1/analytics/groups/ax"
3. This should return a response like the following:
4. [ {
5. "name" : "axgroup-001",
6. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"datastores" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ],
"properties" : {
}
} ],
"data-processors" : {
}

7. In this example, the analytics group name is axgroup-001, the consumer
group name is consumer-group-001, and the postgres-server UUID is
8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77. Note that this ID is
associated with both the postgres-server element and the datastores
element under consumer-groups.
8. Use the analytics group name, consumer group name, and UUID obtained in
this step in the steps below.
9. Use the following API call to remove the postgres-server UUID from the
datastores element of the consumer group:
10.curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/consumer-
groups/consumergroupname/datastores/UUID"
11.Example call and output:
12.curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-
groups/consumer-group-001/datastores/8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77"
13.[ {
14. "name" : "axgroup-001",
15. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"datastores" : [ ],
"properties" : {
}
} ],
"data-processors" : {
}
16.Use the following API to remove the postgres-server UUID from the
postgres-server element:
17.curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/servers?
uuid=UUID&type=postgres-server"
18.Example call and output:
19.curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?
uuid=8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77&type=postgres-server"
20.[ {
21. "name" : "axgroup-001",
22. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"postgres-server" : [ ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"datastores" : [ ],
"properties" : {
}
} ],
"data-processors" : {
}
23.Depending on whether you are replacing or deleting the Postgres server:
● If you are replacing the Postgres server, see Add a Postgres server for
the steps to add a Postgres server.
● If you are deleting a Postgres server, restart all edge-postgres-
server and edge-qpid-server components on all nodes to make
sure the change is picked up by those components by running the
following commands:

/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart


/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart

● /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server
wait_for_ready
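Removal mirrors the addition flow but in reverse order: first detach the UUID from the consumer group's datastores, then delete it from the postgres-server element. A sketch of the two DELETE URLs (all values are placeholders; for master-standby, uuid becomes masteruuid,slaveuuid in the second URL and the path segment in the first):

```python
def remove_postgres_urls(ms_host, axgroup, consumer_group, uuid):
    """Return the two URLs to DELETE, in order: the consumer group's
    datastores entry first, then the postgres-server registration."""
    base = f"http://{ms_host}:8080/v1/analytics/groups/ax/{axgroup}"
    return [
        f"{base}/consumer-groups/{consumer_group}/datastores/{uuid}",
        f"{base}/servers?uuid={uuid}&type=postgres-server",
    ]
```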

Scenario #2: Two Postgres servers with master-standby replication


1. Find the analytics group name and Postgres server UUIDs that are currently
registered using the following API:
2. curl -u adminEmail:pword "http://ms-IP:8080/v1/analytics/groups/ax"
3. This call returns a response like the following:
4. [ {
5. "name" : "axgroup-001",
6. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07",
"54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07", "54a96375-
33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77:731c8c43-
8c35-4b58-ad1a-f572b69c5f0" ],
"properties" : {
}
} ],
"data-processors" : {
}

7. In this example, the analytics group name is axgroup-001, and the
postgres-server UUIDs are 8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77
and 731c8c43-8c35-4b58-ad1a-f572b69c5f0. Note that the
postgres-server and datastores elements have the same value.
8. Use the analytics group name, consumer group name, and UUIDs obtained in
this step in the steps below.
9. Use the following API to remove the postgres-server UUIDs from the
datastores element of the consumer group (note that the master and
slave UUIDs are separated by a comma in the API but will be separated by a
colon in the output of the analytics group call mentioned above):
10.curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/consumer-
groups/consumergroupname/datastores/masteruuid,slaveuuid"
11.Example call and output:
12.curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-
groups/consumer-group-001/datastores/8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77,731c8c43-8c35-4b58-ad1a-f572b69c5f0"
13.[ {
14. "name" : "axgroup-001",
15. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07",
"54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07", "54a96375-
33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : [ ],
"properties" : {
}
} ],
"data-processors" : {
}
16.Use the following API to remove the postgres-server UUIDs from the
postgres-server element:
17.curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/servers?
uuid=masteruuid,slaveuuid&type=postgres-server"
18.Example call and output:
19.curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?
uuid=8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77,731c8c43-8c35-4b58-
ad1a-f572b69c5f0&type=postgres-server"
20.[ {
21. "name" : "axgroup-001",
22. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07", "54a96375-
33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07", "54a96375-
33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : [ ],
"properties" : {
}
} ],
"data-processors" : {
}
23.Depending on whether you are replacing or deleting the Postgres server:
a. If you are replacing the Postgres server, see Add a Postgres server for
the steps to add a Postgres server.
b. If you are deleting a Postgres server, restart all edge-postgres-
server and edge-qpid-server components on all nodes to make
sure the change is picked up by those components by running the
following commands:

/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart


/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart
c. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server
wait_for_ready

Remove a Qpid server from an analytics group
a. Find the Qpid server UUIDs that are currently registered using the
following API:
b. curl -u adminEmail:pword "http://ms-IP:8080/v1/analytics/groups/ax"
c. This should return a response in the form:
d. [ {
e. "name" : "axgroup-001",
f. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07",
"54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07",
"54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0" ],
"properties" : {
}
} ],
"data-processors" : {
}

g. In this example, the analytics group name is axgroup-001, and the
Qpid server UUIDs are 94c96375-1ca7-412d-9eee-80fda94f6e07
and 54a96375-33a7-4fba-6bfa-80fda94f6e07. Note that the
qpid-server and consumers elements have the same values.
Use the analytics group name, consumer group name, and UUIDs
obtained in this step in the steps below.
h. Use the following API call to remove a single qpid-server UUID
from the consumers element of the consumer group (repeat for as
many UUIDs as necessary):
i. curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/consumer-
groups/consumer-group-001/consumers/qpiduuid"
j. Example call and output:
k. curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-
groups/consumer-group-001/consumers/94c96375-1ca7-412d-9eee-
80fda94f6e07"
l. [ {
m. "name" : "axgroup-001",
n. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07",
"54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0" ],
"properties" : {
}
} ],
"data-processors" : {
}
o. Use the following API call to remove a single qpid-server UUID
from the qpid-server element (repeat for as many UUIDs as
necessary):
p. curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/servers?
uuid=qpiduuid&type=qpid-server"
q. Example call and output:
r. curl -v -u adminEmail:pword -X DELETE -H 'Accept: application/json'
"http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?
uuid=94c96375-1ca7-412d-9eee-80fda94f6e07&type=qpid-server"
s. [ {
t. "name" : "axgroup-001",
u. "properties" : {
},
"scopes" : [ "example~prod", "example~test" ],
"uuids" : {
"qpid-server" : ["54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0"]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
"datastores" : ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0" ],
"properties" : {
}
} ],
"data-processors" : {
}
v. Restart all edge-qpid-server components on all nodes to make
sure the change is picked up by those components:

/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart

w. /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready

Custom Dimensions not appearing when multiple axgroups have been
configured
Symptom
The custom variable created using the Statistics Collector policy is not visible
under Custom Dimensions in the Analytics Custom Reports in the Edge UI.

Error Message
The edge-postgres-server system log will have the following error after a restart for
each organization/environment combination where the path is missing:

KeeperErrorCode = NoNode for
/organizations/<ORG>/environments/<ENV>/properties
2018-03-29 16:14:48,980 pool-10-thread-2 ERROR ZOOKEEPER -
ZooKeeperServiceImpl.getData() : Could not get data for path:
/organizations/<ORG>/environments/<ENV>/properties, reason:
KeeperErrorCode = NoNode for
/organizations/<ORG>/environments/<ENV>/properties

Possible Causes

Cause: ZooKeeper properties path is missing for organization and environment
combination.
Description: When the ZooKeeper path
/organizations/<ORG>/environments/<ENV>/properties is missing,
custom dimensions cannot be created.
Applicable for: Edge Private Cloud users.

Cause: ZooKeeper properties path is missing for organization and environment
combination
Diagnosis

Analytics is enabled for an organization-environment combination through the
Enable analytics API, which populates the
/organizations/<ORG>/environments/<ENV>/properties path in the
ZooKeeper tree. When this API call fails, the edge-postgres-server component
cannot create custom dimensions, and the following error appears in its
system.log:

KeeperErrorCode = NoNode for
/organizations/example/environments/prod/properties
2018-03-29 16:14:48,980 pool-10-thread-2 ERROR ZOOKEEPER -
ZooKeeperServiceImpl.getData() : Could not get data for path:
/organizations/example/environments/prod/properties, reason:
KeeperErrorCode = NoNode for
/organizations/example/environments/prod/properties

Typically this happens when an organization was on-boarded at a time when more
than one analytics group existed that used the same postgres-server UUIDs. To
check how many analytics groups exist, run the following API call:

curl -u username:password "http://management-server-


host:8080/v1/analytics/groups/ax"

If this API returns multiple groups, that is most likely the cause of the issue. In
the following example, notice that two groups exist, differentiated only by a
hyphen: axgroup-001 and axgroup001.

[{
"name" : "axgroup-001",
"properties" : {
},
"scopes" : [ "VALIDATE~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"datastores" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ],
"properties" : {
}
} ],
"data-processors" : {
}
}, {
"name" : "axgroup001",
"properties" : {
},
"scopes" : [ "example~prod" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"datastores" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ],
"properties" : {
}
} ],
"data-processors" : {
}

Specifically, the issue is caused if the postgres-server UUID that is configured for
each axgroup is the same. A quick way to determine this is to run the following:

curl -u username:password "http://management-server-


host:8080/v1/analytics/groups/ax"|grep -B7 "postgres-server"

The following sample output lets you quickly compare the postgres-server UUIDs
and confirm that they are the same:

{ "name" : "axgroup-001", "properties" : { }, "scopes" : [ "VALIDATE~test" ],


"uuids" : { "qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ]--
"name" : "axgroup001", "properties" : { }, "scopes" : [ "myorg~prod" ],
"uuids" : { "qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"aries-datastore" : [ ], "postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-
1b0ec0ec8d77" ],
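Instead of eyeballing the grep output, a script can flag any Postgres UUID claimed by more than one group. This is a sketch that parses the /v1/analytics/groups/ax response (fetched separately with curl); the trimmed sample below mirrors the two-group output above:

```python
import json
from collections import defaultdict

def shared_postgres_uuids(groups_json):
    """Return {uuid: [axgroup names]} for every postgres-server UUID
    that appears in more than one analytics group."""
    owners = defaultdict(list)
    for group in json.loads(groups_json):
        for uuid in group["uuids"].get("postgres-server", []):
            owners[uuid].append(group["name"])
    return {u: names for u, names in owners.items() if len(names) > 1}

sample = '''[
  {"name": "axgroup-001",
   "uuids": {"postgres-server": ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77"]}},
  {"name": "axgroup001",
   "uuids": {"postgres-server": ["8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77"]}}
]'''
print(shared_postgres_uuids(sample))
```

An empty result means no UUID is shared and this cause can be ruled out.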
Resolution

The same set of PostgreSQL servers cannot be assigned to two axgroups. The
quickest path to resolution is to delete the Postgres server information from one
axgroup and assign all scopes (organization and environment combinations) to
the other group. To do this, follow these steps:

1. Remove the postgres server components from one of the two axgroups
using the instructions at Adding and deleting analytics components in
analytics groups. If you are seeing problems only with non-production
scopes, then choose the axgroup which does NOT have any production
scope for this exercise. Otherwise, choose the axgroup with the least
number of scopes.
2. Remove all scopes from the axgroup that no longer has any Postgres
servers, including any scopes for which you are seeing the custom
dimension issue. Assuming axgroup-001 from the example above is the
group you want to keep, you would use the following call to remove the
scope for the myorg organization and the prod environment from axgroup001:
curl -u username:password -X DELETE 'http://management-server-
host:8080/v1/analytics/groups/ax/axgroup001/scopes?org=myorg&env=prod'

3. Enable analytics using the Enable analytics API.
4. Restart the edge-postgres-server component on the Postgres master node,
and all edge-qpid-server components you may have installed, using the
following commands:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
5. Verify that the error shown in the Error Message section above no longer
occurs in the Postgres server's log file /opt/apigee/var/log/edge-
postgres-server/logs/system.log.
6. Confirm that the custom dimensions show up in the Edge UI.

If the problem persists, go to Must Gather Diagnostic Information.

Must Gather Diagnostic Information


If the problem persists even after following the previous instructions, gather the
following diagnostic information and share it with Apigee Edge Support:

1. The edge-postgres-server log /opt/apigee/var/log/edge-postgres-
server/logs/system.log from the time of the latest restart.
2. The output of the following Management API call:
curl -u username:password "http://management-server-
host:8080/v1/analytics/groups/ax"
3. The output of the ZooKeeper tree obtained using the following command:
/opt/apigee/apigee-zookeeper/contrib/zk-tree.sh > zktree-output.txt

Report Timed Out

Symptom

When you use the apigee-provision script to create a new organization,
sometimes the script exits with an error message. Following this error, if you
attempt to view the Edge UI Dashboard or any analytics dashboards, you will
observe the error message "The report timed out" for the newly created
organization.

Error Message

When you run the apigee-provision script to create a new organization, you may
see the following error message:

!!!! Error !!!!
HTTP STATUS CODE: 400
"code" : "dataapi.service.PGFoundInMultipleGroups",
"message" : "dataapi.service.PGFoundInMultipleGroups",
"contexts" : [ ]

Even if you get this error, you can operate on the newly created organization after
the provisioning script exits. However, when you attempt to view the Edge UI
Dashboard, you will observe the following error message for the newly created
organization:

The report timed out
Try again with a smaller date range or a larger aggregation interval.

Possible Causes

Cause: Multiple AX Groups setup
Description: Multiple analytics groups have been created with the same set of Postgres servers.
Applicable For: Edge Private Cloud Users
Cause: Multiple AX Groups setup
Diagnosis

1. Execute the following Analytics Groups management API and determine if the output shows more than one analytics group defined. For example:
curl -u adminEmail:adminPwd http://<ms_ip>:8080/v1/analytics/groups/ax

Sample Output showing two analytics groups:
[ {
"name":"axgroup-001",
"properties":{
},
"scopes":[
],
"uuids":{
"qpid-server":[
"5c1e9690-7b58-499a-a4bb-
d54454474b8f",
"7794c428-e553-4ed2-843d-
69f93bbec8a3"
],
"postgres-server":[
"3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-
4dfb-8c74-548010e95e5e"
]
},
"consumer-groups":[
{
"name":"consumer-group-001",
"consumers":[
"5c1e9690-7b58-499a-a4bb-
d54454474b8f",
"7794c428-e553-4ed2-843d-
69f93bbec8a3"
],
"datastores":[
"3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-
4dfb-8c74-548010e95e5e"
],
"properties":{
}
}
],
"data-processors":{
}
},
{
"name":"axgroup001",
"properties":{
"consumer-type":"ax"
},
"scopes":[
"017pdspoint~dev",
"010test~dev",
"019charmo~dev",
"009gcisearch1~dev",
"000fj~trial-fjwan",
"009dekura~dev",
"008pisa~dev",
"004fjadrms~dev",
"018k5billing~dev",
"004study14~dev",
"001teama~dev",
"005specdb~dev",
"test~dev",
"000fj~prod-fjwan",
"012pjweb~dev",
"020workflow~dev",
"007ikou~prod-fjwan",
"003asano~dev",
"013mims~dev",
"006studyhas~dev",
"006efocus~dev",
"002wfproto~dev",
"008murahata~dev",
"016mediaapi~dev",
"015skillnet~dev",
"014aclmanager~dev",
"010fjppei~dev",
"000fj~trial",
"003esupport~dev",
"000fj~prod",
"005ooi~dev",
"test~elb1",
"007fjauth~dev",
"011osp~dev",
"002study~dev",
"999test~dev"
],
"uuids":{
"qpid-server":[
"5c1e9690-7b58-499a-a4bb-
d54454474b8f",
"7794c428-e553-4ed2-843d-
69f93bbec8a3"
],
"aries-datastore":[
],
"postgres-server":[
"3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-
4dfb-8c74-548010e95e5e"
],
"dw-server":[
]
},
"consumer-groups":[
{
"name":"consumer-group-001",
"consumers":[
"5c1e9690-7b58-499a-a4bb-
d54454474b8f",
"7794c428-e553-4ed2-843d-
69f93bbec8a3"
],
"datastores":[
"3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-
4dfb-8c74-548010e95e5e"
],
"properties":{
}
}
],
"data-processors":{
}
} ]
2. This output shows that there are two analytics groups: axgroup-001 and axgroup001.
3. Check to make sure all of the analytics groups have scopes defined.
In the sample output shown above, the analytics group axgroup-001 does not have any scopes defined, but it still has Postgres servers defined as datastores.
4. Execute the following Qpid queue stats command on the Qpid servers and validate that there are no messages coming in for the analytics group identified in the previous step:
qpid-stat -q
Sample Qpid queue stats:
The following Qpid queue stats indicate that there are no messages coming in for the analytics group queue from the example cited above (axgroup-001):

queue                                      dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
=======================================================================================================================
140995fe-71a7-4000-a1f4-71b7a951da7f:0.0        Y        Y     0    0      0       0      0        0         1     2
ax-q-axgroup-001-consumer-group-001        Y                   0    0      0       0      0        0         1     2
ax-q-axgroup-001-consumer-group-001-dl     Y                   0    0      0       0      0        0         0     2
ax-q-axgroup001-consumer-group-001         Y                   0    2      2       0      2        2         1     2
ax-q-axgroup001-consumer-group-001-dl      Y                   3    3      0       5      5        0         0     2

5. Because there are no messages/traffic coming in for the analytics group axgroup-001, you observe the error "The report timed out" in the Edge UI dashboard or in the analytics dashboards.
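The scope check in the diagnosis above can be scripted. The following sketch (an illustration, not official Apigee tooling) pairs each group's name with the scopes line that follows it in the pretty-printed output of the analytics groups API; a simplified stub response stands in for the real curl output so the filter can be exercised safely.

```shell
# Sketch: list each analytics group and whether its "scopes" list is empty.
# In practice, feed it the output of:
#   curl -u adminEmail:adminPwd http://<ms_ip>:8080/v1/analytics/groups/ax
# A simplified stub response is used here in place of the live API.
cat > /tmp/ax-groups.json <<'EOF'
[ {
  "name" : "axgroup-001",
  "scopes" : [ ]
}, {
  "name" : "axgroup001",
  "scopes" : [ "test~dev" ]
} ]
EOF

# Relies on the Management Server's pretty-printed format, where an empty
# scopes list appears as `"scopes" : [ ]` on a single line.
awk -F'"' '
  /"name"/   { name = $4 }
  /"scopes"/ {
    if ($0 ~ /\[ *\]/) print name ": NO SCOPES (candidate for deletion)"
    else               print name ": has scopes"
  }
' /tmp/ax-groups.json
```

With the stub above, the filter flags axgroup-001 as having no scopes, matching the manual diagnosis.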
Resolution

To resolve this issue, delete the axgroup that doesn't have any scopes and doesn't get any traffic.

Follow the procedure below to delete the axgroup:

Step 1: Delete the consumers for the particular axgroup

1. Use the following management API to remove each of the consumers from the axgroup:
curl -v -u admin@email.com:password -X DELETE -H 'Accept:application/json' -H 'Content-Type:application/json' 'http://{mgmt-server-host}:8080/v1/analytics/groups/ax/{axgroup-name}/consumer-groups/{consumer-group-name}/consumers/{uuid-of-the-consumer}'

Repeat the same API call if there are multiple consumers, mentioning the UUID of each consumer in a separate API call.
2. For the example shown above, the following API removes the consumer with UUID 5c1e9690-7b58-499a-a4bb-d54454474b8f:
curl -v -X DELETE -H 'Accept:application/json' -H 'Content-Type:application/json' 'http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-groups/consumer-group-001/consumers/5c1e9690-7b58-499a-a4bb-d54454474b8f'

Sample response:
{
"name" : "axgroup-001",
"properties" : {
},
"scopes" : [ ],
"uuids" : {
"qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f",
"7794c428-e553-4ed2-843d-69f93bbec8a3" ],
"postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "7794c428-e553-4ed2-843d-69f93bbec8a3" ],
"datastores" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
"properties" : {
}
} ],
"data-processors" : {
}
* Connection #0 to host localhost left intact
* Closing connection #0
}

3. Re-run the same API to delete the other consumer, whose UUID is 7794c428-e553-4ed2-843d-69f93bbec8a3 in the present example.

Step 2: Remove the consumer groups

1. Use the following management API to remove the consumer groups from the particular axgroup:
curl -v -u admin@email.com:password -X DELETE 'http://{mgmt-server-host}:8080/v1/analytics/groups/ax/{axgroup-name}/consumer-groups/{consumer-group-name}'

2. Example: The following API deletes the consumer group named consumer-group-001:
curl -v -X DELETE 'http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-groups/consumer-group-001'

Sample response:
{
"name" : "axgroup-001",
"properties" : {
},
"scopes" : [ ],
"uuids" : {
"qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f",
"7794c428-e553-4ed2-843d-69f93bbec8a3" ],
"postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ ],
"datastores" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
"properties" : {
}
} ],
"data-processors" : {
}
* Connection #0 to host localhost left intact
* Closing connection #0
}

Step 3: Delete the Qpid servers from the axgroup

1. Use the following management API to delete the Qpid servers from the particular axgroup:
curl -X DELETE -u admin@email.com "http://{mgmt-server-host}:8080/v1/analytics/groups/ax/{axgroup-name}/servers?uuid={qpid-server-uuid}&type=qpid-server" -H "Content-type: application/json"

Re-run the same API call if there are multiple Qpid servers.
2. Example: Use the following API to delete the Qpid server with UUID 7794c428-e553-4ed2-843d-69f93bbec8a3 in the present example:
curl -X DELETE "http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?uuid=7794c428-e553-4ed2-843d-69f93bbec8a3&type=qpid-server" -H "Content-type: application/json"

Sample response:
{
"name" : "axgroup-001",
"properties" : {
},
"scopes" : [ ],
"uuids" : {
"qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f" ],
"postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ ],
"datastores" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
"properties" : {
}
} ],
"data-processors" : {
}
}

Step 4: Delete the Postgres servers from the axgroup

1. Use the following management API to delete the Postgres server, if there is a single Postgres server:
curl -v -X DELETE -H 'Accept:application/json' "http://{mgmt-server-host}:8080/v1/analytics/groups/ax/{axgroup-name}/servers?uuid={postgres-server-uuid}&type=postgres-server&force=true"

2. Use the following management API to delete the Postgres servers if you have a Postgres master and slave setup:
curl -v -X DELETE -H 'Accept:application/json' "http://{mgmt-server-host}:8080/v1/analytics/groups/ax/{axgroup-name}/servers?uuid={postgres-master-uuid},{postgres-slave-uuid}&type=postgres-server&force=true"

3. Example: In the example shown above, there are master and slave Postgres servers, so you can use the following API to delete them:
curl -v -X DELETE -H 'Accept:application/json' "http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?uuid=3b28b790-ec4e-45c5-a8d0-6d6f2088da65,750cd8ba-1799-4dfb-8c74-548010e95e5e&type=postgres-server&force=true"

Sample response:
{
"name" : "axgroup-001",
"properties" : {
},
"scopes" : [ ],
"uuids" : {
"qpid-server" : [ ],
"postgres-server" : [ ]
},
"consumer-groups" : [ ],
"data-processors" : {
}
* Connection #0 to host localhost left intact
* Closing connection #0
}

Step 5: Remove the analytics group
1. Use the following management API to remove the analytics group:
curl -v -X DELETE "http://{mgmt-server-host}:8080/v1/analytics/groups/ax/{axgroup-name}"

2. Example:
curl -v -X DELETE "http://localhost:8080/v1/analytics/groups/ax/axgroup-001"

Sample response:
{
"properties" : {
},
"scopes" : [ ],
"uuids" : {
},
"consumer-groups" : [ ],
"data-processors" : {
}
* Connection #0 to host localhost left intact
* Closing connection #0
}
Step 6: Check if the group is completely removed

1. Use the following management API to check if the particular analytics group has been completely removed:
curl -v -u admin@email.com -X GET "http://{mgmt-server-host}:8080/v1/analytics/groups/ax"

2. Example:
curl localhost:8080/v1/analytics/groups/ax
[ {
"name" : "axgroup001",
"properties" : {
"consumer-type" : "ax"
},
"scopes" : [ "017pdspoint~dev", "010test~dev", "019charmo~dev",
"009gcisearch1~dev", "000fj~trial-fjwan", "009dekura~dev",
"008pisa~dev", "004fjadrms~dev", "018k5billing~dev",
"004study14~dev", "001teama~dev", "005specdb~dev", "test~dev",
"000fj~prod-fjwan", "012pjweb~dev", "020workflow~dev",
"007ikou~prod-fjwan", "003asano~dev", "013mims~dev",
"006studyhas~dev", "006efocus~dev", "002wfproto~dev",
"016mediaapi~dev", "015skillnet~dev", "014aclmanager~dev",
"010fjppei~dev", "000fj~trial", "003esupport~dev", "000fj~prod",
"005ooi~dev", "test~elb1", "007fjauth~dev", "011osp~dev",
"002study~dev" ],
"uuids" : {
"qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f",
"7794c428-e553-4ed2-843d-69f93bbec8a3" ],
"aries-datastore" : [ ],
"postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
"dw-server" : [ ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f",
"7794c428-e553-4ed2-843d-69f93bbec8a3" ],
"datastores" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
"properties" : {
}
} ],
"data-processors" : {
}
} ]

3. Notice that there is no information pertaining to the analytics group axgroup-001 in the above output. This confirms that axgroup-001 has been completely removed.
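For repeated cleanups, Steps 1 through 5 above can be strung together. The following sketch is a dry run built from the worked example: it only prints each curl it would issue (drop the echo to execute them for real). The host, group, and UUID values are the example's, not yours, and this script is an illustration rather than official Apigee tooling.

```shell
# Dry-run sketch of Steps 1-5: prints each DELETE call in order.
# All values below come from the worked example; substitute your own.
MS_HOST=localhost
AXGROUP=axgroup-001
CG=consumer-group-001
CONSUMERS="5c1e9690-7b58-499a-a4bb-d54454474b8f 7794c428-e553-4ed2-843d-69f93bbec8a3"
QPID_SERVERS="$CONSUMERS"    # in this example the consumers are the Qpid servers
PG_SERVERS="3b28b790-ec4e-45c5-a8d0-6d6f2088da65,750cd8ba-1799-4dfb-8c74-548010e95e5e"
BASE="http://$MS_HOST:8080/v1/analytics/groups/ax/$AXGROUP"

for uuid in $CONSUMERS; do                         # Step 1: delete each consumer
  echo curl -X DELETE "$BASE/consumer-groups/$CG/consumers/$uuid"
done
echo curl -X DELETE "$BASE/consumer-groups/$CG"    # Step 2: delete the consumer group
for uuid in $QPID_SERVERS; do                      # Step 3: delete each Qpid server
  echo curl -X DELETE "$BASE/servers?uuid=$uuid&type=qpid-server"
done
# Step 4: delete the Postgres master and slave together
echo curl -X DELETE "$BASE/servers?uuid=$PG_SERVERS&type=postgres-server&force=true"
echo curl -X DELETE "$BASE"                        # Step 5: delete the group itself
```

Each printed command still needs the -u credentials shown in the steps above, and the result should be confirmed with the GET call from Step 6.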

Step 7: Restart processes

Restart the following processes on the Qpid and Postgres machines:

1. apigee-qpidd
2. edge-qpid-server
3. edge-postgres-server
4. apigee-postgresql

Step 8: Verify

Verify that the data shows up in the analytics dashboards.

If the problem still persists, go to Must Gather Diagnostic Information.
Must Gather Diagnostic Information

If the problem persists even after following the above instructions, please gather the following diagnostic information, then contact Apigee Edge Support and share it:

1. Architecture setup of your Private Cloud installation (how many hosts are set up, and the number of each of the components).
2. The output of the following commands:
Analytics groups:
curl -u sysadminEmail:sysadminPwd http://{mgmt-server-host}:8080/v1/analytics/groups/ax
Qpid queue stats on each of the Qpid machines:
qpid-stat -q
Analytics status:
curl -u sysadminEmail:sysadminPwd http://{mgmt-server-host}:8080/v1/organizations/{org-name}/environments/{environment-name}/provisioning/axstatus

Enabling NGINX debug logs on the Routers
Note: The instructions in this document are applicable for Edge Private Cloud users only.

On Apigee, the Routers are configured by default to log only error messages in the error log files. However, there are many situations where you may need to gather more information to determine why a specific error occurred. One way to do this is to configure the Router to operate in debug mode so that it emits debug logs, which can help you gain more information about the error and resolve it faster.

This document explains how to enable debug logs on the Apigee Edge Router for the requests on a specific virtual host. Debug logging can be enabled to capture more information when there are issues, such as a malformed request or a 400 Bad Request - SSL Certificate Error, on the northbound connection (between the client application and the Router).

Note: It is recommended to keep the default NGINX configuration, which logs only error messages, to avoid any performance implications. However, if you need to gather additional information to debug a specific issue, enable the NGINX debug log on the Router for only a short period of time. Once you have gathered the required information, immediately revert back to the normal mode.

Before you begin


● If you aren’t familiar with the NGINX error logs and levels of logging, please
refer to the NGINX error log documentation.
● Gather the organization, environment and virtual host names of the API
requests for which you need to collect debug information.

Enabling NGINX debug logs on the Routers


This section explains how to enable debug logs on the Edge Routers.

Required permissions: To perform this task, you must have the following permissions:

● Access to the specific Routers, where you would like to enable debug logs.
● Access to edit the Router component virtual host conf files in the
/opt/nginx/conf.d/ folder.

Identifying relevant virtual host configurations file

The following steps describe how to locate the relevant virtual host configuration
file on the Router:

1. If you know the organization name, environment name, and virtual host for the specific API request that you would like to debug, then determine the virtual host conf file as follows:
a. Navigate to the /opt/nginx/conf.d/ directory.
b. Search for the ORG_NAME_ENV_NAME_VIRTUALHOST_NAME.conf file in the conf.d directory using the following command:
ls -ltrh | grep "ORG_NAME_ENV_NAME_VIRTUALHOST_NAME"
2. If you don't know the organization name, you can identify the virtual host configuration file by using the host alias name that is used in the API request. Navigate to the /opt/nginx/conf.d/ directory and search for the host alias with which the request was made using the following command:
ls -ltrh | grep -r 'HOST_ALIAS_NAME'
For example, if the host alias name is opdk.cert-test.com, running the command above lists the matching configuration files.
Note: Multiple configuration files will be shown if the same host alias is used on different ports. Identify the port number on which API requests have been made to determine the corresponding virtual host configuration file.

Enabling debug logging for a specific virtual host on the Router

The following steps describe how to enable debug logs on the Apigee Routers for a
specific virtual host.

1. Open the following file on the Router machine in an editor: /opt/nginx/conf.d/ORG_NAME_ENV_NAME_VIRTUALHOST_NAME.conf. For example:
vi /opt/nginx/conf.d/ORG_NAME_ENV_NAME_VIRTUALHOST_NAME.conf
2. Change the following line:
error_log /opt/apigee/var/log/edge-router/nginx/ORG_NAME~ENV_NAME.PORT_error_log error;
to:
error_log /opt/apigee/var/log/edge-router/nginx/ORG_NAME~ENV_NAME.PORT_error_log info;
3. Save your changes.
4. Run the NGINX reload command. For example:
sudo /opt/nginx/scripts/apigee-nginx reload
5. The following file will now capture debug logs:
/opt/apigee/var/log/edge-router/nginx/ORG_NAME~ENV_NAME.PORT_error_log
6. If you want to capture debug logs on more than one Router, repeat these steps on each of the Routers.
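If you prefer not to hand-edit the conf file, the error_log level can be flipped with sed. This is a sketch under the naming convention described above; a stub file stands in for the real conf so the substitution can be tried safely, and the real path and reload step appear only in comments.

```shell
# Sketch: toggle the NGINX error_log level with sed instead of editing by hand.
# A stub conf file stands in for the real
# /opt/nginx/conf.d/ORG_NAME_ENV_NAME_VIRTUALHOST_NAME.conf here.
CONF=/tmp/vhost-example.conf
cat > "$CONF" <<'EOF'
error_log /opt/apigee/var/log/edge-router/nginx/myorg~test.443_error_log error;
EOF

# Enable debug logging (error -> info):
sed -i 's/_error_log error;/_error_log info;/' "$CONF"
grep error_log "$CONF"    # the directive now ends with "info;"

# After gathering the logs, revert (info -> error):
sed -i 's/_error_log info;/_error_log error;/' "$CONF"

# In both cases, reload NGINX afterwards, for example:
#   sudo /opt/nginx/scripts/apigee-nginx reload
```

Remember that the change takes effect only after the reload, and the revert should follow as soon as the debug information has been collected.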

Verifying debug information is logged in NGINX error log file
1. Once clients make API Requests on the host alias and port associated with
the virtual host configuration, debug logs will be captured in the following
file:
/opt/apigee/var/log/edge-router/nginx/ORG_NAME~ENV_NAME.P
ORT_error_log
2. Verify that you are seeing debug information for the API requests, as shown in the following example.
Sample debug information:
2021/01/27 02:48:40 [warn] 27624#27624: *3777 a client request body is buffered to a temporary file /opt/apigee/var/log/edge-router/nginx/client_temp/0000000001, client: XX.XX.XX.XX, server: XX.XX.XX.XX, request: "POST /post-no-target HTTP/1.1", host: "XX.XX.XX.XX:443"
The information shown above is captured when the client sends a POST request with a large payload. This log line is shown only when debug logging is enabled.
3. If you don't see additional debug information, verify that you have followed all the steps outlined in Enabling debug logging for a specific virtual host on the Router correctly. If you have missed any step, repeat the steps.
4. If you are still not able to get the debug information, please contact Apigee Edge Support.

Disabling debug logs for a specific virtual host on the Router

This section explains how to disable the debug logs on the Router for a specific virtual host.

1. Open the following file on the Router machine in an editor: /opt/nginx/conf.d/ORG_NAME_ENV_NAME_VIRTUALHOST_NAME.conf. For example:
vi /opt/nginx/conf.d/ORG_NAME_ENV_NAME_VIRTUALHOST_NAME.conf
2. Change the following line:
error_log /opt/apigee/var/log/edge-router/nginx/ORG_NAME~ENV_NAME.PORT_error_log info;
back to:
error_log /opt/apigee/var/log/edge-router/nginx/ORG_NAME~ENV_NAME.PORT_error_log error;
3. Save your changes.
4. Run the NGINX reload command. For example:
/opt/nginx/scripts/apigee-nginx reload
5. The following file will now capture only error logs:
/opt/apigee/var/log/edge-router/nginx/ORG_NAME~ENV_NAME.PORT_error_log
6. If you want to stop debug logs on more than one Router, repeat these steps on each of the Routers.

Verifying only error information is logged in NGINX error log file
1. Make some API requests on the host alias and port associated with the
specific virtual host configuration or wait for the clients to make the
requests.
2. Check the following file:
/opt/apigee/var/log/edge-router/nginx/ORG_NAME~ENV_NAME.P
ORT_error_log
3. Verify that you are seeing only error information and that debug information is no longer logged for the requests.
4. If you still see additional debug information being logged, verify that you have followed all the steps outlined in Disabling debug logs for a specific virtual host on the Router correctly. If you have missed any step, repeat the steps.
5. If debug information is still being logged, please contact Apigee Edge Support.


How to make direct API requests to routers or message processors
Note: The techniques described in this content are available in Apigee Edge for Private Cloud
only.

Introduction
When troubleshooting, you may want to execute APIs directly against Apigee
components such as a router or message processor. For example, you might want
to do this in order to:

● Debug intermittent issues with certain API requests that point to an issue
with a particular Apigee component (router/message processor).
● Gather more diagnostic information by enabling debug mode on a specific
instance of an Apigee component.
● Rule out that the issue is caused by a specific Apigee component.
● Discover whether operations such as spawning up a new instance or
restarting the instance have the desired impact.

Prerequisites
● Direct access to the router or message processor components against which
API requests need to be run.
● The cURL tool needs to be installed on the particular instance of the
component.
● The API request you want to test in cURL format.
For example, here's a curl command that you could use to make a request
to an API proxy from your local machine:
● curl https://myorg-test.mycompany.com/v1/customers -H
'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2'
● Note: You can identify issues such as missing deployments with even a partial curl. As
long as traffic is sent to the correct proxy for which only the correct host and basepath
are necessary, you can run useful tests without having headers that would be necessary
to get a success response, such as an Authorization header. In this case, the above
example without the header would be a valid test case:
● curl https://myorg-test.mycompany.com/v1/customers

How to run API requests directly against Apigee routers
Scenario 1: API requests to host alias pointing to routers

If the DNS entry of the host alias is configured to point to the Apigee Edge routers
(in other words, there is no Elastic Load Balancer (ELB)), then you could use the
following curl commands to make the API requests directly to a router:

● Virtual host configured for non-secure communication via port 80:
curl -v --resolve HOST_ALIAS:80:127.0.0.1 http://HOST_ALIAS/PROXY_BASE_PATH/ -H 'HEADER: VALUE'
For example:
curl -v --resolve myorg-test.mycompany.com:80:127.0.0.1 http://myorg-test.mycompany.com/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2'
● Virtual host configured to terminate SSL on port 443 on the Router:
curl -v --resolve HOST_ALIAS:443:127.0.0.1 https://HOST_ALIAS/PROXY_BASE_PATH/ -H 'HEADER: VALUE'
For example:
curl -v --resolve myorg-test.mycompany.com:443:127.0.0.1 https://myorg-test.mycompany.com/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2'
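Because --resolve pins the host alias to an explicit IP, the same technique also lets you probe each Router in turn from a single machine, instead of running curl locally on every Router with 127.0.0.1. The Router IPs and host alias below are placeholders, and the loop is a dry run that only prints the commands it would issue.

```shell
# Sketch: probe each Router directly by pinning the host alias to its IP.
# ROUTERS and HOST_ALIAS are placeholder values; drop the echo to actually
# send the requests (add any headers your proxy needs, as described above).
HOST_ALIAS=myorg-test.mycompany.com
ROUTERS="10.0.0.11 10.0.0.12"
for ip in $ROUTERS; do
  echo curl -v --resolve "$HOST_ALIAS:443:$ip" "https://$HOST_ALIAS/v1/customers"
done
```

Comparing the responses from each Router is a quick way to isolate a single misbehaving instance, provided the probing machine has network access to the Router IPs.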

Scenario 2: API requests to host alias pointing to ELBs

If the DNS entry of the host alias is configured to point to an Elastic Load Balancer
(ELB), then you could use the following curl commands to make the API requests
directly to a router:

● Virtual host configured for non-secure communication via port 80:
curl -v --resolve HOST_ALIAS:80:127.0.0.1 http://HOST_ALIAS/PROXY_BASE_PATH/ -H 'HEADER: VALUE'
For example:
curl -v --resolve myorg-test.mycompany.com:80:127.0.0.1 http://myorg-test.mycompany.com/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2'
● Virtual host configured for a high port, with SSL terminating on a load balancer in front of the Apigee Router:
curl -v --resolve HOST_ALIAS:PORT_NUMBER:127.0.0.1 http://HOST_ALIAS:PORT_NUMBER/PROXY_BASE_PATH/ -H 'HEADER: VALUE'
For example:
curl -v --resolve myorg-test.mycompany.com:19001:127.0.0.1 http://myorg-test.mycompany.com:19001/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2'
● Virtual host configured for a high port, with SSL terminating on the Apigee Router (in other words, the load balancer is configured to use TCP pass-through to the Apigee Router):
curl -v --resolve HOST_ALIAS:PORT_NUMBER:127.0.0.1 https://HOST_ALIAS:PORT_NUMBER/PROXY_BASE_PATH/ -H 'HEADER: VALUE'
For example:
curl -v --resolve myorg-test.mycompany.com:19001:127.0.0.1 https://myorg-test.mycompany.com:19001/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2'
How to run requests directly against Apigee message processors
Scenario 1: API request to message processor via default port 8998

The default port on which the message processor listens for traffic from the Apigee Router is 8998. Therefore, for all cases where this port was not changed, traffic needs to be sent directly to this port on a specific message processor instance. The curl request has to be sent to the URL http://INTERNAL_IP_OF_MP:8998 along with the header X-Apigee.Host set to the host alias, including the port, used in the virtual host, as shown in the following three examples:

● Virtual host is configured for SSL termination on the router:

● curl -v http://INTERNAL_IP_OF_MP:8998/PROXY_BASE_PATH/ -H 'HEADER: VALUE' -H 'X-Apigee.Host: HOST_ALIAS:443'

For example:

● curl -v http://10.10.53.115:8998/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2' -H 'X-Apigee.Host: myorg-test.mycompany.com:443'
● Virtual host is configured for a "high port" and SSL termination happens on a load balancer or on an Apigee router:

● curl -v http://INTERNAL_IP_OF_MP:8998/PROXY_BASE_PATH/ -H 'HEADER: VALUE' -H 'X-Apigee.Host: HOST_ALIAS:PORT_NUMBER'

For example:

● curl -v http://10.10.53.115:8998/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2' -H 'X-Apigee.Host: myorg-test.mycompany.com:19001'
● Virtual host is configured for the default HTTP port 80:

● curl -v http://INTERNAL_IP_OF_MP:8998/PROXY_BASE_PATH/ -H 'HEADER: VALUE' -H 'X-Apigee.Host: HOST_ALIAS:80'

For example:

● curl -v http://10.10.53.115:8998/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2' -H 'X-Apigee.Host: myorg-test.mycompany.com:80'
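In every variant above, only the X-Apigee.Host value changes: the message processor uses it to select the virtual host, while the transport to the message processor itself is always plain HTTP on port 8998. A small sketch (function name and arguments are illustrative only) that builds the command:

```shell
#!/bin/sh
# Sketch: print a curl command that targets a message processor directly on
# the default router->MP port 8998, selecting the virtual host via the
# X-Apigee.Host header.
mp_curl() {
  mp_ip="$1"        # internal IP of the message processor
  host_alias="$2"   # host alias of the virtual host
  vhost_port="$3"   # port the virtual host uses (443, 80, or a high port)
  path="$4"         # proxy base path
  echo "curl -v http://${mp_ip}:8998${path} -H 'X-Apigee.Host: ${host_alias}:${vhost_port}'"
}

# Example: virtual host terminates SSL on the router (port 443)
mp_curl 10.10.53.115 myorg-test.mycompany.com 443 /v1/customers
```

Swap the port argument (80, 443, 19001, and so on) to match whichever virtual-host configuration you are testing.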

Scenario 2: API request to message processor via SSL port 8443

It is possible to configure SSL communication between the router and the message processor. The following examples use port 8443, the port suggested by the Apigee documentation.

● Virtual host is configured for SSL termination on the router:

● curl -v -k https://INTERNAL_IP_OF_MP:8443/PROXY_BASE_PATH/ -H 'HEADER: VALUE' -H 'X-Apigee.Host: HOST_ALIAS:443'

For example:

● curl -v -k https://10.10.53.115:8443/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2' -H 'X-Apigee.Host: myorg-test.mycompany.com:443'
● 
● Virtual host is configured for SSL termination on a load balancer, and traffic is forwarded on a high port to the router:

● curl -v https://INTERNAL_IP_OF_MP:8443/PROXY_BASE_PATH/ -H 'HEADER: VALUE' -H 'X-Apigee.Host: HOST_ALIAS:PORT_NUMBER'

For example:

● curl -v https://10.10.53.115:8443/v1/customers -H 'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2' -H 'X-Apigee.Host: myorg-test.mycompany.com:19001'

Note: The same curl command also works for requests to virtual hosts that use mutual SSL, which would otherwise not be testable without a client certificate.

Note: If your actual request has multiple headers, pass them with repeated -H flags when making requests directly to the router or message processors, just as you would for a normal request.

More errors and their solutions at

https://docs.apigee.com/api-platform/troubleshoot/deployment/error-fetching-children-path
